Nvidia and AMD, the two companies competing most aggressively for AI chip dominance, just agreed on something: the copper wires connecting AI chips are becoming a problem. Both companies participated in Ayar Labs’ $500 million Series E, valuing the photonics startup at $3.8 billion.
The deal signals that silicon photonics, long relegated to telecommunications and high-end networking, is now essential infrastructure for the AI buildout.
The Copper Problem
Modern AI clusters run into a fundamental bottleneck. Data moves between chips as electrical signals through copper interconnects. As AI models grow larger and training clusters expand to tens of thousands of GPUs, those copper connections become the limiting factor.
The physics is straightforward. Electrical signals degrade over distance, and higher data rates demand more power and generate more heat. Past a certain point, you can't push more data through copper without spending more watts on moving the bits than on computing with them.
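A rough sketch of the power math makes the bottleneck concrete. The energy-per-bit figure, cluster size, and per-GPU bandwidth below are illustrative round numbers assumed for the sketch, not figures from Ayar Labs or this article:

```python
# Back-of-envelope estimate of interconnect power in a copper-linked
# AI cluster. All inputs are illustrative assumptions.

COPPER_PJ_PER_BIT = 5.0    # assumed energy cost of a long-reach copper link
GPUS = 10_000              # hypothetical training-cluster size
BW_PER_GPU_TBPS = 1.0      # assumed sustained off-chip bandwidth per GPU

bits_per_second = GPUS * BW_PER_GPU_TBPS * 1e12
interconnect_watts = bits_per_second * COPPER_PJ_PER_BIT * 1e-12

print(f"Interconnect power: {interconnect_watts / 1e3:.0f} kW")
# Prints: Interconnect power: 50 kW
```

Doubling the data rate (or the energy per bit) doubles that figure, which is why copper's rising energy cost at higher speeds eventually competes with the computation itself.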
“We’re solving one of the biggest hardware issues that’s causing bottlenecks in AI,” said Mark Wade, Ayar Labs CEO.
Ayar’s answer: replace copper with light.
What Ayar Labs Built
The company developed two complementary products.
SuperNova is a light source roughly the size of a dime that generates laser beams for data transmission between chips.
TeraPHY is an optical I/O chiplet containing millions of transistors alongside miniaturized optical devices, including microring resonators that encode data onto the laser light by exploiting resonance and interference. The chip handles up to 8 terabits per second of traffic.
The claimed performance gains are substantial: 4 to 20 times more computing throughput per watt compared to copper connections. That's not an incremental improvement. If accurate, it changes the economics of large-scale AI infrastructure.
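To put the claimed range in concrete terms, the arithmetic below translates 4–20x throughput per watt into implied energy per bit. The copper baseline is an assumed figure for illustration, not one from the article:

```python
# Translate the claimed 4-20x throughput-per-watt gain into implied
# energy per bit. The copper baseline below is an assumption.

COPPER_PJ_PER_BIT = 5.0  # assumed baseline for a long-reach copper link

for gain in (4, 20):  # low and high ends of the claimed range
    optical_pj = COPPER_PJ_PER_BIT / gain
    print(f"{gain:>2}x gain -> ~{optical_pj:.2f} pJ/bit on the optical link")
```

Under those assumptions, the optical link would need to land somewhere between roughly 0.25 and 1.25 pJ/bit, which is the scale of improvement that changes power budgets across an entire cluster.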
There's also a latency advantage. TeraPHY eliminates the need for forward error correction, a step that typically adds about 100 nanoseconds of delay to each link. For AI inference workloads requiring real-time responses, those nanoseconds matter.
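Those per-link delays compound across a multi-hop path. A minimal sketch, using the article's ~100 ns figure and hypothetical hop counts:

```python
# How forward-error-correction delay accumulates across switch hops.
# The 100 ns per-link figure is from the article; the hop counts are
# hypothetical examples.

FEC_NS_PER_LINK = 100  # typical per-link FEC delay, per the article

for hops in (1, 3, 5):  # hypothetical GPU-to-GPU path lengths
    added_ns = hops * FEC_NS_PER_LINK
    print(f"{hops} hop(s): +{added_ns} ns of FEC delay on the path")
```

On a five-hop path through a large cluster, that is half a microsecond of added delay on every exchange, which is significant for tightly synchronized inference traffic.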
Why Nvidia and AMD Both Invested
The investor list explains why this matters now. Neuberger Berman led the round, but Nvidia, AMD, MediaTek, Alchip Technologies, ARK Invest, Insight Partners, Sequoia Capital, 1789 Capital, and Qatar Investment Authority all participated. Total outside funding has reached $870 million.
When Nvidia and AMD back the same company, they’re both acknowledging a shared constraint. Their GPUs are increasingly limited not by compute capability but by the speed at which data can move between chips and between nodes.
In May 2025, Ayar Labs unveiled a version of its chiplet compatible with the UCIe (Universal Chiplet Interconnect Express) standard, allowing it to integrate directly into processor packages. That compatibility is reportedly what drew Nvidia's investment. If photonic interconnects can plug directly into GPU architectures, the upgrade path becomes much simpler.
The 2028 Timeline
Volume production isn’t happening tomorrow. Ayar Labs is targeting 2028 AI systems, with product selection, validation, and qualification completing by late 2027.
That timeline reflects both manufacturing reality and the design cycles of major AI chip programs. Nvidia’s next-generation GPU architectures after Blackwell will need to lock in their interconnect strategies within the next 18 months. AMD faces similar windows for its MI400 series and beyond.
Wade noted that volume intercepts “start to change the kind of optimization that you have to aim at” when targeting AI accelerators, with “faster and steeper” production ramps than the optics industry has experienced before.
The company plans to manufacture tens of thousands to hundreds of thousands of SuperNova light sources annually within the next few years.
The Competitive Landscape
Ayar Labs isn’t alone in pursuing optical interconnects. Marvell recently completed its $5.5 billion acquisition of Celestial AI, another photonics startup. Intel, Broadcom, and smaller players like Lightmatter are also chasing the opportunity.
But Ayar has an advantage in timing. The company has been developing co-packaged optics since 2015 and spun out of research at MIT, UC Berkeley, and the University of Colorado Boulder. That head start matters when the window for integration into major chip programs is measured in months.
What the Money Buys
The $500 million will fund three priorities:
- Scaling manufacturing capacity for volume production
- Enhancing product testing workflows
- International expansion, including a new office in Hsinchu, Taiwan, the heart of semiconductor manufacturing
The Taiwan office matters. If Ayar’s chips are going into Nvidia and AMD processors, those processors are being manufactured at TSMC. Proximity to the fabrication process accelerates development.
The Stakes for AI Infrastructure
The AI buildout is straining every part of the supply chain. GPUs are allocation-constrained. Memory is sold out. Power and cooling capacity limit data center expansion. The interconnect bottleneck is less visible but equally real.
Hyperscalers are spending more than $600 billion on data center infrastructure in 2026. A meaningful portion of that investment assumes continued improvements in chip-to-chip communication. If copper hits its limits before photonics reaches volume production, AI infrastructure buildouts slow down.
Ayar Labs is betting it can deliver optical interconnects at scale before that happens. Nvidia and AMD are betting Ayar is right. Given how rarely those two companies agree on anything, that convergence is informative.