ESG & Sustainability
Our sovereign GPU and inference cloud is engineered for efficiency and responsibility: canal-cooled racks, heat reuse for homes and vertical farming via an ORC heat cascade, solar and PV augmentation, and RDU-powered inference to cut everyday GPU load.
Canal-Cooled Lenovo Neptune Racks
Our 2 MW sovereign data centre design uses closed-loop water circuits, heat-exchanged against the canal system, to stabilise rack inlet temperatures and cut cooling energy. The captured waste heat is then fed into an ORC (Organic Rankine Cycle) micro-turbine to generate supplementary power, with residual heat reused for domestic hot-water networks and year-round vertical farming.
- Closed-loop water, corrosion-inhibited; no contact with canal water.
- Designed for high-density racks (H200/B200) with stable inlet temps.
- Heat cascade: ORC power → district/domestic hot water → farming.
- Telemetry on loops: flow, ΔT, pump health, and leak sensors.
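The loop telemetry above (flow, ΔT, pump health, leaks) also tells you how much heat is available to the ORC stage, since thermal power is simply q = ṁ·cp·ΔT. A minimal sketch, assuming illustrative sensor values and alert thresholds (not the production telemetry schema):

```python
# Sketch: estimating recoverable heat from loop telemetry.
# Function names, thresholds, and sample readings are illustrative
# assumptions, not the site's actual monitoring stack.

CP_WATER = 4186  # specific heat of water, J/(kg*K)

def recovered_heat_kw(flow_kg_s: float, delta_t_k: float) -> float:
    """Thermal power carried by the loop: q = m_dot * cp * dT, in kW."""
    return flow_kg_s * CP_WATER * delta_t_k / 1000

def check_loop(flow_kg_s: float, delta_t_k: float,
               min_flow: float = 2.0, max_dt: float = 15.0) -> list:
    """Simple alerts acting as pump-health / leak proxies."""
    alerts = []
    if flow_kg_s < min_flow:
        alerts.append("low flow: possible pump fault or leak")
    if delta_t_k > max_dt:
        alerts.append("high dT: insufficient flow for rack load")
    return alerts

# Example: 8 kg/s at a 12 K rise across the racks
q = recovered_heat_kw(8.0, 12.0)  # ~402 kW of heat available to the ORC stage
```

At a typical ORC conversion efficiency of a few percent to ~10%, that heat stream yields tens of kilowatts of electrical power, with the remainder cascading to hot water and farming.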


Solar & PV Augmentation
Roof-mounted PV arrays and integrated solar-thermal panels support the cooling and domestic hot-water loops. Smart inverters smooth output and coordinate with battery buffers for peak-shave and grid-friendly operation.
- PV + solar-thermal hybrids to maximise energy capture per m².
- Battery buffer for inverter ramp control and outage bridging.
- Priority routing: pumps & control → IT essential → house loads.
- Export when surplus; import under peak compute demand.
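The priority-routing and export/import rules above can be sketched as a greedy dispatch policy. Load names, capacities, and the policy itself are assumptions for illustration; a real energy-management system would use the site's actual controller:

```python
# Sketch: priority-ordered power routing for surplus/deficit handling.
# Tier names and the greedy allocation are illustrative assumptions.

PRIORITY = ["pumps_and_control", "it_essential", "house_loads"]

def dispatch(available_kw: float, demands: dict) -> dict:
    """Allocate available power tier by tier; leftover power is
    exported, unmet demand on lower tiers becomes grid import."""
    plan = {}
    remaining = available_kw
    for load in PRIORITY:
        need = demands.get(load, 0.0)
        plan[load] = min(need, remaining)
        remaining -= plan[load]
    plan["export_kw"] = max(remaining, 0.0)
    plan["import_kw"] = sum(demands.values()) - sum(plan[t] for t in PRIORITY)
    return plan

# Surplus case: 500 kW of PV against 420 kW of demand -> 80 kW exported
plan = dispatch(500.0, {"pumps_and_control": 120.0,
                        "it_essential": 250.0,
                        "house_loads": 50.0})
```

Under peak compute demand the same policy runs in deficit: pumps and control are served first, and the shortfall on lower tiers shows up as `import_kw`.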
RDU-Powered Inference (SambaNova SN40L)
For day-to-day inference, we offload workloads to RDU (Reconfigurable Dataflow Unit) nodes. This lowers total energy use and frees GPUs for heavy training and fine-tuning cycles. RDUs excel at transformer dataflow with high utilisation and stable latency, making them ideal for production inference, RAG, and low-precision fine-tuning.
| Metric | 8× H200 Node | 8× B200 Node | SambaNova SN40L (RDU) |
|---|---|---|---|
| Primary Role | Training / Fine-tuning / High-throughput inference | Frontier-scale training / Super-dense inference | Production inference / Efficient fine-tune |
| Interconnect | NVLink 4 + 400 Gb/s fabric | NVLink 5 / NVSwitch + 400 Gb/s | RDU mesh + high-bandwidth host links |
| Cooling Fit | Optimised for canal-cooled racks | Optimised for canal-cooled racks | Lower steady-state thermal load per token |
| When to Choose | LLM training / mixed workloads | Very large models & peak training | Daily inference, SLA, cost/latency focus |
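The "When to Choose" column above amounts to a simple routing rule. A minimal sketch, assuming hypothetical job fields (`kind`, `frontier_scale`, `low_precision`) and node labels; this is not the scheduler's real interface:

```python
# Sketch: routing a job to a node class per the comparison table.
# Job schema and node labels are illustrative assumptions.

def route(job: dict) -> str:
    """Pick a node class from coarse job characteristics."""
    if job.get("kind") == "training":
        # Frontier-scale training prefers the densest interconnect.
        return "B200" if job.get("frontier_scale") else "H200"
    if job.get("kind") == "inference":
        # Day-to-day inference and RAG go to RDUs for cost/latency.
        return "SN40L"
    if job.get("kind") == "fine_tune":
        # Low-precision fine-tunes fit RDUs; heavy runs stay on GPUs.
        return "SN40L" if job.get("low_precision") else "H200"
    return "H200"  # default to the mixed-workload GPU pool
```

In practice a policy like this keeps GPUs reserved for the work only they can do, which is the energy argument made above.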
Sovereign Fabric & Heat Re-Use Topology
Clusters interlink over a 400 Gb/s sovereign backbone with local heat re-use partners. The diagram shows a simplified layout for one 2 MW site; multiple sites mesh to balance training, inference, and heat-grid loads.