The official numbers say one thing. The math says another.
A Guardian analysis found that Apple, Google, Meta, and Microsoft’s data center emissions are 662% higher than officially reported - roughly 7.62 times the figures in their sustainability reports.
This isn’t fraud. It’s accounting.
How the Numbers Work
The discrepancy comes down to two different ways of counting emissions.
Market-based Scope 2 accounting (what companies report) allows the use of Renewable Energy Certificates (RECs). A company can run a data center on fossil-fuel-heavy grid power in Virginia, buy RECs from a wind farm in Texas, and report near-zero emissions. The electrons never touch, but the accounting balances.
Location-based emissions measure what actually happens - the carbon intensity of the electricity grid where the data center operates. This number is substantially higher when facilities run in regions with carbon-heavy grids.
The difference is stark. Meta reported 273 metric tons of Scope 2 emissions in 2022. Location-based accounting puts the figure at 3.8 million metric tons - roughly a 14,000-fold gap. Microsoft showed 280,782 metric tons officially, versus 6.1 million metric tons by location - a 22x gap.
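The gap is a straightforward ratio. A minimal sketch using only the figures above (the dictionary layout and rounding are mine):

```python
# Illustrative arithmetic only, using the 2022 figures cited above.
# Values are metric tons of CO2e: "market" = REC-adjusted Scope 2 as
# reported, "location" = grid-based accounting.
FIGURES = {
    "Meta": {"market": 273, "location": 3_800_000},
    "Microsoft": {"market": 280_782, "location": 6_100_000},
}

for company, f in FIGURES.items():
    multiplier = f["location"] / f["market"]
    missing_pct = (f["location"] - f["market"]) / f["location"] * 100
    print(f"{company}: location-based is {multiplier:,.0f}x the reported "
          f"figure ({missing_pct:.1f}% of grid emissions absent from the report)")
```

The same two numbers, divided, are the whole story: the smaller the market-based figure a company buys its way down to, the larger the multiplier.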
Jay Dietrich from the Uptime Institute stated it plainly: “Location-based gives an accurate picture of the emissions associated with the energy that’s actually being consumed to run the data centre.”
Why It Matters Now
AI is driving explosive growth in data center energy demand. The International Energy Agency projects data center electricity consumption could double by 2026. U.S. data centers already account for 4.4% of national electricity - up from 1.9% in 2018.
Google’s emissions rose 48% since 2019. Microsoft’s climbed 29% since 2020. Combined indirect emissions from Amazon, Microsoft, Alphabet, and Meta increased 150% from 2020 to 2023.
Google acknowledges the challenge: “As we further integrate AI into our products, reducing emissions may be challenging.”
The Water Nobody Talks About
While emissions get the accounting treatment, water consumption operates with less scrutiny.
A UC Riverside study published in March projects that U.S. data centers may require 697 million to 1.45 billion gallons of additional peak water capacity daily by 2030. The upper range exceeds New York City’s entire daily water supply of roughly 1 billion gallons.
The infrastructure upgrades needed could cost up to $58 billion.
Each 100-word AI prompt uses an estimated 519 milliliters of water - about one bottle. Scale that to ChatGPT’s 1 billion daily queries and the numbers get uncomfortable fast.
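How uncomfortable? A back-of-envelope sketch, assuming only the two figures above (519 mL per prompt, 1 billion queries a day):

```python
# Back-of-envelope scaling of the per-prompt water estimate cited above.
ML_PER_PROMPT = 519                  # estimated mL per 100-word prompt
QUERIES_PER_DAY = 1_000_000_000      # ChatGPT daily queries, per the text
LITERS_PER_GALLON = 3.785

liters_per_day = ML_PER_PROMPT * QUERIES_PER_DAY / 1000  # mL -> L
gallons_per_day = liters_per_day / LITERS_PER_GALLON

print(f"{liters_per_day / 1e6:.0f} million liters/day")
print(f"{gallons_per_day / 1e6:.0f} million gallons/day")
```

By that arithmetic, one service's prompts alone would draw on the order of 137 million gallons a day - a meaningful fraction of the additional capacity the UC Riverside study projects.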
Big Tech’s Nuclear Bet
The response has been dramatic. In January, Meta announced nuclear deals totaling 6.6 gigawatts - enough to power 5 million homes.
The deals span three companies:
- Vistra: 2.1+ GW from existing Ohio and Pennsylvania plants, starting late 2026
- Oklo: Up to 1.2 GW from planned small modular reactors, targeting 2030
- TerraPower: 690 MW from two sodium-cooled reactors, expected 2032
Microsoft signed a 20-year power purchase agreement to restart Three Mile Island (835 MW) by 2028. Google signed with Kairos Power for 500 MW of small modular reactor capacity. Amazon invested over $20 billion converting Susquehanna into an AI campus.
The nuclear push is real. But there’s a timing problem: the data centers are being built now, and these power sources won’t come online for years.
The Efficiency Question
Not all AI queries carry equal weight.
According to Carbon Credits analysis, GPT-4o uses about 0.30 watt-hours per request with 0.13 grams of CO2. Claude 3 Opus runs at 4.05 watt-hours per request - roughly 13.5 times more energy - producing 1.80 grams of CO2.
But the most capable model isn’t always the least efficient. Recent analysis found Claude 3.7 Sonnet scored highest in eco-efficiency when running on AWS infrastructure, while GPT-4.5 ranked among the least efficient despite being newer.
The lesson: newer doesn’t automatically mean greener, and the model choice matters.
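The per-request figures above reduce to simple ratios. A minimal sketch (the layout is mine; the numbers are from the Carbon Credits analysis cited above):

```python
# Per-request energy and CO2 figures as cited above.
# Tuples are (watt-hours per request, grams CO2 per request).
MODELS = {
    "GPT-4o": (0.30, 0.13),
    "Claude 3 Opus": (4.05, 1.80),
}

wh_ratio = MODELS["Claude 3 Opus"][0] / MODELS["GPT-4o"][0]
print(f"Energy ratio: {wh_ratio:.1f}x")  # 4.05 / 0.30 = 13.5

# What the spread means at scale: kWh and kg of CO2 per million requests.
for name, (wh, g_co2) in MODELS.items():
    print(f"{name}: {wh * 1e6 / 1000:.0f} kWh, "
          f"{g_co2 * 1e6 / 1000:.0f} kg CO2 per million requests")
```

At a million requests, a 13.5x per-query gap becomes the difference between 300 kWh and roughly 4,000 kWh - which is why model choice, not just model vintage, drives the footprint.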
The Transparency Gap
Here’s what we don’t know: exact figures for training the latest models.
Training GPT-4 required an estimated 50-62 gigawatt-hours. What did GPT-5 cost? What about Claude 4? Anthropic has not reported Scope 1, 2, or 3 emissions in any public filing.
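To put the GPT-4 estimate in context, a rough sketch - the household figure is my assumption (roughly 10,500 kWh per year for an average U.S. home, an EIA-style estimate), not from the reporting above:

```python
# Rough context for the training-energy estimate cited above.
# Assumption (not from the article): an average U.S. household uses
# about 10,500 kWh of electricity per year.
KWH_PER_HOUSEHOLD_YEAR = 10_500

for gwh in (50, 62):  # estimated range for training GPT-4
    kwh = gwh * 1_000_000
    households = kwh / KWH_PER_HOUSEHOLD_YEAR
    print(f"{gwh} GWh ~= one year of electricity for {households:,.0f} homes")
```

Under those assumptions, a single training run sits in the range of several thousand household-years of electricity - and that is the one figure we have; later models are a blank.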
Environmental reporting for AI models remains voluntary. Companies disclose what they choose to disclose, using methodologies that can obscure more than they reveal.
What Would Actually Help
Nature Sustainability research identifies a roadmap that could reduce AI infrastructure’s carbon footprint by 73% and water use by 86%:
Smart siting (52% water reduction): Build in Midwest and “windbelt” states with better grid mixes and adequate water. Avoid drought-stricken regions like Arizona and Texas.
Grid decarbonization (15% carbon reduction): Accelerate renewable deployment wherever AI facilities expand.
Operational efficiency (7% carbon, 29% water reduction): Deploy liquid cooling and improve server utilization.
The catch: these decisions are being made right now. Infrastructure choices lock in for decades.
What You Can Do
Use smaller models. Claude Haiku uses roughly one-eighteenth the energy of Claude Opus. For simple tasks, lighter models work fine.
Run local when practical. A model on your laptop uses your local grid’s energy mix, not a data center in a drought region.
Push for location-based reporting. RECs let companies claim green credentials while running on carbon-heavy grids. Location-based accounting shows what’s actually happening.
Question the marketing. A “carbon neutral” claim backed by RECs is not the same as actually reducing emissions at the source.
The Bottom Line
Big Tech’s sustainability reports use accounting methods that understate AI infrastructure’s emissions by a factor of roughly 7.6. The gap isn’t illegal - it’s standard practice under current voluntary reporting frameworks.
But as AI energy demand doubles and water consumption approaches the scale of major cities, “technically accurate” isn’t the same as “actually sustainable.”
The nuclear investments are promising. The efficiency gains are real. But until location-based reporting becomes standard, we’re measuring AI’s environmental impact with a ruler that systematically undercounts.