Beyond the Grid: The Race to Put AI Data Centers Underwater, Offshore, and in Orbit

As AI's electricity demands overwhelm aging power grids and spark ratepayer revolts, startups are racing to deploy computing infrastructure where land-based constraints don't apply

“Before we go off-world, we should go offshore.”

That’s Sam Kanner, CEO of Aikido Technologies, explaining why his company is deploying AI data centers beneath floating wind turbines rather than launching them into orbit. This week, Aikido announced plans to submerge a 100-kilowatt demonstration data center off Norway later this year, with a commercial-scale UK deployment targeted for 2028.

It’s one of several unconventional solutions racing to address an increasingly urgent problem: the U.S. electrical grid cannot support AI’s appetite for power, and the companies building AI don’t want to wait decades for it to catch up.

The Grid Problem

A single AI-focused data center can demand a sustained 50 to 100 megawatts of power - comparable to the load of a small city. According to Morgan Stanley Research, U.S. data center demand could reach 74 gigawatts by 2028, with a projected shortfall of about 49 gigawatts in available power access.

PJM Interconnection, the largest U.S. grid operator serving over 65 million people across 13 states, projects it will be six gigawatts short of its reliability requirements by 2027. Most of the grid was built between the 1950s and 1970s, and approximately 70% is approaching the end of its life cycle.

The costs are already hitting consumers. Residential electricity prices are forecast to rise another 4% nationwide in 2026 after increasing about 5% in 2025. Goldman Sachs estimates $23 billion in capacity costs are attributable to data centers - costs ultimately passed to ratepayers.

This is why seven tech giants gathered at the White House this week to sign a “ratepayer protection pledge.” The timing wasn’t coincidental. The companies need political cover as much as they need power.

Option 1: Float It (Aikido’s Approach)

Aikido’s AO60DC platform hosts 10-12 megawatts of AI-grade compute alongside a 15-18 megawatt wind turbine and integrated battery storage. The data center sits in the submerged pods of the floating offshore wind installation, using the ocean as a natural heat sink.

The numbers are striking. Aikido claims a Power Usage Effectiveness (PUE) below 1.08. PUE is total facility power divided by the power delivered to IT equipment, so a value below 1.08 means cooling and other overhead consume less than 8% of what the computing itself draws. Traditional data centers typically run between 1.2 and 1.5.
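The metric is simple enough to sketch. The figures below are illustrative only - a round 10 MW IT load, roughly one AO60DC platform, not Aikido's actual load data:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt goes to computing; anything
    above 1.0 is cooling, power conversion, and other overhead.
    """
    return total_facility_kw / it_load_kw

# Illustrative figures only - not measured data.
it_load_kw = 10_000  # a 10 MW IT load

# Overhead implied by a given PUE: it_load * (PUE - 1)
overhead_at_1_08 = it_load_kw * (1.08 - 1.0)  # ~800 kW (seawater cooling)
overhead_at_1_40 = it_load_kw * (1.40 - 1.0)  # ~4,000 kW (typical land site)
```

At the claimed efficiency, the same 10 MW of compute sheds roughly 3.2 MW of cooling and conversion losses versus a conventional mid-range facility - power that can instead serve paying inference workloads.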

The cooling system is entirely passive. Heat transfers through the steel hull into surrounding seawater, with thermal impact limited to a few meters from the structure. The onboard wind turbine and batteries power the compute load for the majority of operating hours, with grid connection used primarily during summer months.

Aikido says the platform can be deployed within 200 miles of major compute centers in sovereign waters, with farms ranging from 30 megawatts to over 1 gigawatt of IT load capacity. The company is a member of NVIDIA’s Inception program and claims early interest from AI inference customers.

Option 2: Sink It (China’s Approach)

While Microsoft abandoned its underwater Project Natick in 2024 - despite achieving lower failure rates than land-based servers - China has accelerated commercial deployment.

Hailanyun (HiCloud) is building an undersea data center off Shanghai, approximately six miles from the coast. Construction began in June 2025, following a pilot project off Hainan Island in December 2022. According to Scientific American, the company claims its undersea centers use at least 30% less electricity than land-based facilities, thanks to natural seawater cooling.

The Shanghai project’s first phase will contain 198 server racks housing 396 to 792 AI-capable servers - designed to complete the equivalent of training GPT-3.5 in a single day. It will be powered by an adjacent offshore wind farm supplying approximately 97% of its energy needs.

Microsoft’s explanation for abandoning the approach is telling: underwater technology “is not able to be updated or upgraded as easily as on land,” making it “just probably not the easiest way to be flexible in a very fast-changing world.” But China appears willing to accept that trade-off in exchange for energy efficiency and sovereignty.

Option 3: Launch It (SpaceX’s Approach)

Then there’s the maximalist vision. SpaceX has filed plans with the FCC for up to one million “orbital data center” satellites, claiming the constellation would “operate with unprecedented computing capacity to power advanced artificial intelligence models.”

The first two orbital data center nodes successfully launched to low-Earth orbit on January 11, 2026. Google has announced plans to test orbital AI data centers with prototype satellites by early 2027. Even Chennai-based Agnikul Cosmos plans to launch a prototype AI data center into orbit by the end of this year.

Elon Musk predicted orbital data centers will be more cost-effective than earth-bound ones “within two to three years.” Deutsche Bank is more skeptical, estimating it will be well into the 2030s before orbital data centers come close to cost parity with terrestrial facilities.

The pitch is compelling: near-constant solar power, no grid constraints, no NIMBY opposition. The reality is more complicated. Vacuum is actually a poor coolant - with no air or water to carry heat away, waste heat must be radiated from large panels. Latency matters for many AI workloads. Physical maintenance is effectively impossible. And the regulatory framework for space-based computing infrastructure doesn’t exist yet.
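The latency question has a hard physical floor worth estimating. A back-of-envelope sketch, assuming a Starlink-class ~550 km orbit (an assumption - SpaceX's filing for the constellation is not tied to that altitude here):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum

def min_round_trip_ms(altitude_km: float) -> float:
    """Best-case round-trip propagation delay to a satellite directly
    overhead - a physical floor that ignores slant range, routing,
    and ground-segment processing, all of which add real delay."""
    return 2 * altitude_km / C_KM_PER_S * 1000

leo_rtt = min_round_trip_ms(550)     # Starlink-class LEO: ~3.7 ms
geo_rtt = min_round_trip_ms(35_786)  # geostationary, for contrast: ~239 ms
```

For low-Earth orbit the distance penalty alone is only a few milliseconds - the practical latency cost comes from ground-station hops and routing, and for chatty, multi-round-trip workloads those hops compound.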

What’s Actually Viable?

Each approach makes different trade-offs:

Offshore floating (Aikido) maintains grid connection for backup, allows for maintenance and upgrades, and can be deployed relatively quickly with existing maritime infrastructure. The downside: it still requires proximity to shore and faces maritime regulatory complexity.

Underwater (Hailanyun) achieves the best cooling efficiency and can operate largely off-grid with adjacent wind farms. The downside: you can’t easily swap out hardware, a real cost when AI accelerators turn over roughly every 18 months.

Orbital (SpaceX) theoretically solves every constraint simultaneously. The downside: it doesn’t actually work at scale yet, and the economics remain speculative.

The most likely near-term winner is the boring middle option: offshore floating platforms close enough to shore to plug into existing infrastructure when needed, far enough out to avoid ratepayer politics, and accessible enough to swap hardware as the AI arms race continues.

The Sovereignty Factor

There’s a geopolitical dimension to all of this. Aikido explicitly markets its platform for “sovereign AI infrastructure,” deployable in countries’ exclusive economic zones. China is building domestic capability that doesn’t depend on importing hardware through Western supply chains. SpaceX’s orbital vision would put computing capacity beyond any single nation’s jurisdiction.

As AI becomes critical infrastructure, where the compute physically sits matters as much as who controls the models running on it. The race to build data centers beyond traditional grid constraints is also a race to build data centers beyond traditional regulatory reach.

The Bottom Line

The AI industry’s electricity problem is real and getting worse. The solutions being proposed range from pragmatic (bolt servers under wind turbines) to speculative (launch them into space). What they share is a recognition that waiting for the grid to catch up isn’t an option - and neither is continuing to pass the costs to everyone else’s electricity bills.

Aikido’s proof-of-concept deployment later this year will be the first real test of whether “go offshore” can work at scale. If it does, the future of AI infrastructure might look less like rows of humming servers in Virginia and more like floating platforms on the open ocean.