Media Stream AI

ESG & Sustainability

Our sovereign GPU & inference cloud is engineered for efficiency and responsibility: canal-cooled racks, heat-to-homes and vertical farming via ORC turbines, solar/PV augmentation, and RDU-powered inference to reduce everyday GPU load.

Canal-Cooled Lenovo Neptune Racks

Our 2 MW sovereign data centre design uses closed-loop water circuits that exchange heat with the canal system to stabilise rack temperatures and cut cooling energy. The recovered waste heat is then fed into an ORC (Organic Rankine Cycle) micro-turbine to produce low-voltage power, with residual heat reused for domestic hot-water networks and year-round vertical farming.

  • Closed-loop water, corrosion-inhibited; no contact with canal water.
  • Designed for high-density racks (H200/B200) with stable inlet temps.
  • Heat cascade: ORC power → district/domestic hot water → farming.
  • Telemetry on loops: flow, ΔT, pump health, and leak sensors.
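The loop telemetry above (flow and ΔT) is what determines how much heat is actually available to the ORC and hot-water cascade. A minimal sketch of that calculation, using the standard relation Q = ṁ·c_p·ΔT (the flow and temperature figures below are illustrative, not measured site values):

```python
# Estimate recoverable thermal power from closed-loop telemetry.
# Q = m_dot * c_p * delta_T  (mass flow x specific heat x temperature rise)

C_P_WATER = 4186.0  # J/(kg*K), specific heat of water


def recoverable_heat_kw(flow_kg_s: float, inlet_c: float, outlet_c: float) -> float:
    """Thermal power carried by the closed loop, in kW."""
    delta_t = outlet_c - inlet_c
    return flow_kg_s * C_P_WATER * delta_t / 1000.0


# Illustrative reading: 20 kg/s flow, 18 degC inlet, 45 degC outlet
print(round(recoverable_heat_kw(20.0, 18.0, 45.0)))  # -> 2260 (kW)
```

At those example figures a single loop carries roughly 2.3 MW of thermal power, which is why the heat cascade (ORC → hot water → farming) is worth instrumenting per loop.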
[Figure: Cooling and ORC infrastructure system]
[Figure: Solar and PV power system]

Solar & PV Augmentation

Roof-mounted PV arrays and integrated solar-thermal panels support the cooling and domestic hot-water loops. Smart inverters smooth output and coordinate with battery buffers for peak shaving and grid-friendly operation.

  • PV + solar-thermal hybrids to maximise energy capture per m².
  • Battery buffer for inverter ramp control and outage bridging.
  • Priority routing: pumps & control → IT essential → house loads.
  • Export when surplus; import under peak compute demand.
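The priority-routing and export/import rules above can be sketched as a simple allocation pass (the load names and kW figures are illustrative, not a real site controller):

```python
# Sketch: serve loads in priority order from on-site generation,
# then export any surplus or import any shortfall from the grid.

PRIORITY = ["pumps_control", "it_essential", "house"]


def route_power(available_kw: float, loads_kw: dict) -> dict:
    """Return kW served per load; 'grid' is positive for export, negative for import."""
    served = {}
    remaining = available_kw
    for name in PRIORITY:
        demand = loads_kw.get(name, 0.0)
        served[name] = min(demand, max(remaining, 0.0))
        remaining -= demand  # any shortfall is made up from the grid
    served["grid"] = remaining
    return served


plan = route_power(300.0, {"pumps_control": 40.0, "it_essential": 220.0, "house": 80.0})
print(plan["grid"])  # -> -40.0 (40 kW imported under this illustrative load mix)
```

Under surplus PV the same pass yields a positive `grid` value, matching the "export when surplus; import under peak compute demand" rule.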

RDU-Powered Inference (SambaNova SN40L)

For day-to-day inference, we offload workloads to RDU (Reconfigurable Dataflow Unit) nodes. This lowers total energy use and frees GPUs for heavy training and fine-tune cycles. RDUs excel at transformer dataflow with high utilisation and stable latency—ideal for production inference, RAG, and low-precision fine-tuning.
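The offload policy above reduces to a routing decision per workload class. A minimal sketch, assuming a simple class-based rule (the workload names and node labels are illustrative, not a real scheduler API):

```python
# Sketch: route day-to-day inference, RAG and low-precision fine-tunes to
# RDU nodes; keep heavy training and large fine-tune cycles on GPU clusters.

RDU_KINDS = {"inference", "rag", "low_precision_finetune"}


def place_workload(kind: str) -> str:
    """Return the target pool for a workload class."""
    return "rdu_sn40l" if kind in RDU_KINDS else "gpu_cluster"


print(place_workload("rag"))       # -> rdu_sn40l
print(place_workload("training"))  # -> gpu_cluster
```

In practice the decision would also weigh model size, precision and latency SLAs, per the notes under the comparison table.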

| Metric | 8× H200 Node | 8× B200 Node | SambaNova SN40L (RDU) |
| --- | --- | --- | --- |
| Primary Role | Training / fine-tuning / high-throughput inference | Frontier-scale training / super-dense inference | Production inference / efficient fine-tune |
| Interconnect | NVLink 4 + 400 Gb/s fabric | NVLink 5 / NVSwitch + 400 Gb/s | RDU mesh + high-bandwidth host links |
| Cooling Fit | Optimised for canal-cooled racks | Optimised for canal-cooled racks | Lower steady-state thermal load per token |
| When to Choose | LLM training / mixed workloads | Very large models & peak training | Daily inference, SLA, cost/latency focus |
Notes: Comparative characteristics are indicative and depend on model, precision, batch size, routing and runtime configuration. We right-size per workload.

Sovereign Fabric & Heat Re-Use Topology

Clusters interlink over a 400 Gb/s sovereign backbone with local heat re-use partners. The diagram shows a simplified layout for one 2 MW site; multiple sites mesh to balance training, inference and heat grids.

[Diagram: sites — Manchester · Düsseldorf · Kingston]