Thermodynamic Computing Could Cut AI Energy Use by Orders of Magnitude

Berkeley Lab researchers demonstrate neural networks that are powered by thermal noise rather than hindered by it, potentially slashing AI energy consumption.

[Image: Server room with rows of illuminated equipment in blue light]

Researchers at Lawrence Berkeley National Laboratory have demonstrated a new approach to neural network computing that harnesses thermal noise as a power source rather than fighting against it. Their work, published in Nature Communications on March 6, shows that thermodynamic computers can perform the nonlinear calculations required for machine learning while using a fraction of the energy consumed by conventional digital systems.

From Noise to Signal

Traditional computers treat thermal fluctuations as an enemy to be suppressed. Every transistor wastes energy maintaining stable states against the constant jostling of heat. Thermodynamic computing flips this relationship entirely.

“Thermodynamic computing is noise-powered,” said Stephen Whitelam, a staff scientist at Berkeley Lab’s Molecular Foundry. “Classical and quantum computing fight noise; thermodynamic computing is powered by it.”

The team designed nonlinear circuit components that behave like artificial neurons, responding to random thermal fluctuations in predictable ways. When connected into networks, these thermodynamic neurons can perform the same pattern recognition and generation tasks as conventional neural networks.
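
For intuition, here is a minimal sketch of how such a unit can be simulated: a single noisy, nonlinear degree of freedom evolving under overdamped Langevin dynamics. The double-well potential, the parameter values, and the function names are illustrative assumptions, not the circuit design from the paper.

```python
# Minimal sketch of a "thermodynamic neuron" (illustrative, not the
# paper's circuit): one degree of freedom x obeying overdamped Langevin
# dynamics, dx = -U'(x) dt + sqrt(2 kT) dW. The nonlinear potential U
# plays the role of the activation function; the noise term is what
# drives the dynamics.
import numpy as np

rng = np.random.default_rng(0)

def simulate_neuron(u_input, kT=1.0, dt=0.005, steps=2000):
    """Evolve one noisy nonlinear unit; u_input tilts the potential."""
    x = 0.0
    for _ in range(steps):
        # Hypothetical tilted double-well: U(x) = x^4/4 - x^2/2 - u_input*x,
        # so the drift is -U'(x) = -(x**3 - x - u_input).
        drift = -(x**3 - x - u_input)
        x += drift * dt + np.sqrt(2 * kT * dt) * rng.normal()
    return x  # the (possibly still-settling) state is the output

# The noise is never filtered out; averaged over repeated runs it yields
# a smooth, nonlinear input-output response, like an activation function.
for u in (-1.0, 0.0, 1.0):
    print(u, np.mean([simulate_neuron(u) for _ in range(20)]))
```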

The Energy Problem AI Needs to Solve

The timing matters. Current AI systems are voracious energy consumers. Training a single large language model can use as much electricity as several hundred American homes consume in a year. And inference, the act of running these models to generate responses, adds up across billions of queries to a substantial portion of global data center energy consumption.

Some estimates suggest thermodynamic computing could reduce AI image generation energy use by a factor of ten billion. Even if real-world implementations achieve a fraction of that theoretical efficiency, the implications for AI’s environmental footprint would be substantial.

Breaking the Equilibrium Barrier

Previous thermodynamic computing research hit a wall: systems could only perform useful calculations after reaching thermal equilibrium, a slow process that negated much of the energy savings. The Berkeley team solved this by designing circuits that compute correctly even while still settling.
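
In textbook terms (this is standard statistical mechanics, not notation from the paper), a noisy circuit with energy function U relaxes toward the Boltzmann distribution only in the long-time limit:

```latex
p_{\mathrm{eq}}(x) \;\propto\; e^{-U(x)/k_{B}T},
\qquad \text{reached only as } t \to \infty .
```

Earlier schemes had to wait for samples from that equilibrium distribution; the Berkeley approach instead trains the circuit parameters so that the state at a short readout time, long before full relaxation, already encodes the answer.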

“It’s a very different way of optimizing a neural network,” said Corneel Casert, a researcher at the National Energy Research Scientific Computing Center. “Once trained and built as physical hardware, we can perform inference for very low energy cost.”

To train their thermodynamic neural networks, the researchers ran a genetic algorithm on 96 GPUs of the Perlmutter supercomputer, simulating over one trillion thermodynamic computer runs in parallel. The algorithm evolved network parameters the same way nature evolves organisms: testing variations, keeping the most fit, and iterating.
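
The loop below is a toy version of that evolutionary search (a sketch, not the team’s Perlmutter code). It evolves two parameters of a crude noisy-circuit surrogate so that its input-output map matches a target function; the surrogate, population size, mutation scale, and target are all illustrative assumptions.

```python
# Toy genetic algorithm in the spirit described above: evaluate a
# population of candidate parameters, keep the fittest, mutate, repeat.
import numpy as np

rng = np.random.default_rng(1)

def loss(params):
    """Hypothetical fitness: squared error of a noisy surrogate circuit
    (output = a*tanh(b*u) + noise) against a target response."""
    a, b = params
    inputs = np.linspace(-1, 1, 8)
    target = np.tanh(3 * inputs)  # stand-in task
    outputs = a * np.tanh(b * inputs) + 0.05 * rng.normal(size=inputs.size)
    return np.mean((outputs - target) ** 2)

population = rng.normal(size=(64, 2))  # 64 candidate parameter sets
for generation in range(200):
    fitness = np.array([loss(p) for p in population])
    elite = population[np.argsort(fitness)[:8]]  # keep the fittest
    # refill the population with mutated copies of the elite
    mutants = elite[rng.integers(0, 8, size=56)] + 0.1 * rng.normal(size=(56, 2))
    population = np.vstack([elite, mutants])

best = population[np.argmin([loss(p) for p in population])]
print("evolved parameters:", best)  # should approach a = 1, b = 3
```

In the study itself, each fitness evaluation meant simulating the stochastic dynamics of an entire thermodynamic network, which is why the search ran across 96 GPUs.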

The Fine Print

This is still early-stage research. The thermodynamic computers demonstrated in the study exist as simulations, not physical hardware. Building actual circuits that perform as well as the digital neural networks we use today remains an engineering challenge.

“We don’t yet know how to design a thermodynamic computer that would be as good at image generation as, say, DALL-E,” the researchers acknowledged.

The path from laboratory proof-of-concept to practical AI accelerator is long and uncertain. Quantum computing has traveled that road for decades without yet delivering on its early promise for general computation.

But the fundamental physics works. Thermal fluctuations can power useful computation. If the engineering catches up, data centers might one day run on something closer to room-temperature noise than on the megawatts they consume today. That alone makes this research worth watching as AI’s energy demands continue to climb.