MatX just secured $500 million in Series B funding to build AI chips that its founders claim will outperform Nvidia by a factor of ten. The startup was founded in 2023 by two former Google engineers who helped design the TPU, Google’s custom AI accelerator that proved you don’t need Nvidia’s GPUs to train frontier models.
The big question: can they actually deliver, or is this another well-funded bet on a future that never arrives?
The Founders and Their Credentials
Reiner Pope and Mike Gunter aren’t random founders with a slide deck. Pope was the efficiency lead for Google PaLM, where he designed what the company called “the world’s fastest LLM inference software.” He also helped conceive the TPU v5e and optimize it specifically for large language models.
Gunter brings 28 years of hardware architecture experience, including 12 years focused on machine learning. He was a lead designer of Google’s TPU hardware itself.
These are people who helped build one of the few products that seriously competes with Nvidia's chips. Now they're trying to build something better.
The Investor List
The Series B was led by Jane Street, the quantitative trading firm that has become an increasingly active AI investor, and Situational Awareness, the fund formed by former OpenAI researcher Leopold Aschenbrenner, who wrote extensively about AI acceleration and left the company amid controversy.
Additional investors include Marvell Technology (a major semiconductor player), Spark Capital, and Stripe co-founders Patrick and John Collison. When Marvell backs a would-be Nvidia rival, it suggests the semiconductor industry itself sees room for alternatives to Nvidia's dominance.
The 10x Claim
MatX’s pitch is aggressive: chips that deliver ten times better performance for training large language models compared to Nvidia’s current GPUs.
That’s a bold claim. Nvidia has been iterating on GPU architecture for AI workloads for over a decade, and its CUDA ecosystem represents billions of dollars in software investment. Its Blackwell chips are already deployed at scale, and Vera Rubin is on the horizon with its own 10x efficiency promises.
But Google’s TPU proved the concept. You can design silicon specifically for matrix operations and get dramatically better efficiency than general-purpose GPUs. The question is execution: whether MatX can actually ship chips that deliver on the benchmarks.
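To see why matrix-focused silicon is such a compelling target, it helps to look at where the compute in a transformer actually goes. The sketch below uses entirely hypothetical model dimensions (not MatX's, Google's, or Nvidia's numbers) to show that matrix multiplications dominate the per-layer FLOP count, which is exactly the workload a specialized chip would accelerate:

```python
# Rough per-token FLOP breakdown for one transformer decoder layer.
# All dimensions are illustrative placeholders, not any vendor's specs.

d_model = 8192          # hidden size (hypothetical)
d_ff = 4 * d_model      # feed-forward width (common convention)
seq_len = 4096          # context length attended over (hypothetical)

# Matrix-multiply work: Q/K/V/output projections plus the two MLP matmuls
# (counting 2 FLOPs per multiply-accumulate).
projections = 4 * (2 * d_model * d_model)
mlp = 2 * (2 * d_model * d_ff)
# Attention score and value matmuls grow with context length.
attention = 2 * (2 * d_model * seq_len)
matmul_flops = projections + mlp + attention

# Non-matmul work (softmax, layer norms, residual adds) scales roughly
# linearly with d_model and seq_len; the constants here are a crude guess.
other_flops = 20 * d_model + 10 * seq_len

share = matmul_flops / (matmul_flops + other_flops)
print(f"matmuls: {matmul_flops:,} FLOPs, everything else: {other_flops:,}")
print(f"matmul share of the layer: {share:.3%}")
```

On these assumptions, matmuls account for well over 99% of the layer's work, which is why TPU-style designs center the chip on a dense matrix-multiply unit rather than general-purpose cores.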
Timeline and Manufacturing
MatX plans to finalize chip design in 2026 and begin shipping products in 2027. They’ll manufacture through TSMC, the same foundry that builds Nvidia’s chips and Apple’s processors.
The $500 million will fund completing the design, scaling manufacturing capacity, and securing critical components. In the current chip market, just getting TSMC capacity is a significant challenge.
The Competitive Context
MatX enters a crowded field of Nvidia challengers:
- AMD just landed a $100 billion deal with Meta
- Google’s TPU continues to power Gemini training
- Amazon’s Trainium chips are deployed across AWS
- Intel is attempting a comeback with Gaudi
- Cerebras offers wafer-scale chips for training
- Etched is building transformer-specific ASICs
None of these have meaningfully dented Nvidia’s market share in the broader industry. But the combined pressure is real, and the AI labs are motivated to find alternatives. OpenAI, Anthropic, and others have all invested in reducing Nvidia dependence.
What Success Looks Like
MatX doesn’t need to beat Nvidia across every workload. They need to be demonstrably better for the specific use case that matters most right now: training and running large language models.
If they can deliver on the 10x claim for LLM inference specifically, they’ll find buyers. Every major AI company is spending billions on inference compute. A 10x cost reduction changes the economics of the entire industry.
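As a purely hypothetical illustration of what that shift would mean, assume a lab spending $2 billion a year on inference (an invented figure, not any company's actual budget) and take the 10x performance-per-dollar claim at face value:

```python
# Hypothetical inference-economics sketch. Every number here is an
# assumption for illustration, not data from MatX, Nvidia, or any AI lab.

annual_inference_spend = 2_000_000_000   # assumed $2B/year on inference compute
claimed_speedup = 10                     # MatX's claimed 10x advantage, at face value

# Option 1: serve the same traffic for a tenth of the cost.
new_spend = annual_inference_spend / claimed_speedup
savings = annual_inference_spend - new_spend
print(f"Same workload, new cost: ${new_spend:,.0f} (saving ${savings:,.0f}/yr)")

# Option 2: keep the budget flat and serve ~10x more tokens,
# which is typically what a compute-constrained lab would choose.
print(f"Same budget, ~{claimed_speedup}x the served tokens")
```

Either outcome, cheaper tokens or far more of them for the same budget, is what "changes the economics" means in practice; the real gain would depend on software maturity, utilization, and pricing, none of which is known yet.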
The catch: Nvidia has an 18-month head start with Vera Rubin, which promises similar efficiency gains. MatX is racing against a moving target that also happens to control the dominant software ecosystem.
Who Wins, Who Loses
Winners if MatX succeeds:
- AI companies desperate for cheaper inference
- Cloud providers wanting to reduce Nvidia dependence
- The broader ecosystem that benefits from chip competition
Losers:
- Nvidia’s margin protection strategy
- Other Nvidia challengers who lose the race
- Anyone who bet big on current-generation hardware
The AI chip market is projected to hit $500 billion in 2026. There’s room for more than one winner. But MatX has to actually ship chips that work before we’ll know if this $500 million bet pays off.