Top Stories
The People Who Build AI Are Sounding the Alarm - On Their Way Out
Something unusual is happening at the three companies leading the AI race: the safety people are leaving, and they’re not staying quiet about it.
At xAI, half the founding team has now departed. Co-founders Jimmy Ba and Tony Wu both resigned within 48 hours of each other last week, joining Igor Babuschkin, Kyle Kosic, and Christian Szegedy on the exit list. Elon Musk framed it as a routine reorganization - “the structure must evolve just like any living organism” - but the Financial Times reported the departures stem from disputes within the technical team over model performance. Ba, an associate professor at the University of Toronto whose research directly influenced Grok 4, publicly warned that systems capable of “redesigning and improving themselves without human input” could emerge “within a year.”
At Anthropic, Mrinank Sharma, who led the Safeguards Research team since it was formed, published a resignation letter on X that racked up over a million views. “The world is in peril,” he wrote, citing not just AI risks but “a whole series of interconnected crises.” He said employees “constantly face pressures to set aside what matters most” and that he’d struggled to ensure “values govern our actions.” He’s planning to study poetry.
At OpenAI, the company fired VP of product policy Ryan Beiermeister, citing sex discrimination against a male colleague - a claim Beiermeister denies. The timing raised questions: Beiermeister had opposed the rollout of ChatGPT’s planned “adult mode,” telling colleagues she didn’t believe OpenAI had built adequate safeguards against child-exploitation content or teen access. OpenAI says her departure “was not related to any issue she raised while working at the company.” Meanwhile, researcher Zoë Hitzig resigned and published a critical op-ed warning that OpenAI is “building an economic engine that creates strong incentives to override its own rules.”
The pattern matters more than any individual departure. These aren’t outside critics or regulators raising alarms. These are the engineers and researchers who built the guardrails, saying the guardrails aren’t holding.
Sources: TechCrunch (xAI exits), CNBC, TechCrunch (OpenAI firing), Decrypt
White House Plans Chip Tariff Carve-Out for AI Hyperscalers
The Trump administration is preparing to shield Amazon, Google, and Microsoft from the next round of semiconductor tariffs through a framework tied to TSMC’s U.S. manufacturing investments. Under the proposed plan, Taiwanese chipmakers building factories in the U.S. could import up to 2.5 times the planned capacity of their new facilities tariff-free during the build-out phase. Companies with existing U.S. plants would get a 1.5x multiplier.
The practical effect: the companies spending the most on AI infrastructure get the cheapest access to the chips that power it. TSMC has pledged $165 billion to expand its U.S. footprint, and the exemptions are specifically designed to support the hyperscalers racing to build data centers. It’s a calculated move - Washington wants domestic chip fabrication but also needs Big Tech to keep building AI infrastructure at scale.
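To make the multipliers concrete, here is a minimal sketch of the allowance math. Only the 2.5x and 1.5x multipliers come from the reporting; the function name and the capacity figure in the example are illustrative assumptions, not details of the actual proposal.

```python
# Illustrative sketch of the proposed tariff-free import allowances.
# Only the multipliers (2.5x during build-out of a new U.S. fab,
# 1.5x for companies with existing U.S. plants) come from the
# reporting; the capacity figure below is a made-up example.

def tariff_free_allowance(planned_capacity_wafers: float,
                          has_existing_us_fab: bool) -> float:
    """Tariff-free import allowance implied by the proposed framework."""
    multiplier = 1.5 if has_existing_us_fab else 2.5
    return planned_capacity_wafers * multiplier

# A chipmaker building a new fab planned for 100,000 wafers/month could
# import up to 250,000 wafers/month tariff-free during the build-out:
print(tariff_free_allowance(100_000, has_existing_us_fab=False))  # 250000.0
print(tariff_free_allowance(100_000, has_existing_us_fab=True))   # 150000.0
```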
Nothing has been signed yet. An administration official told reporters that final decisions could still change, and the plans remain fluid. But the direction is clear: AI infrastructure is being treated as a strategic asset worth protecting from the administration’s own trade policy.
Sources: Tom’s Hardware, Benzinga
Cisco Enters the AI Chip Race with 102.4Tbps Networking Silicon
Cisco unveiled the Silicon One G300, a 102.4-terabit-per-second Ethernet switching chip designed to move data between GPUs in massive AI training clusters. Built on TSMC’s 3nm process, the chip includes what Cisco calls “Intelligent Collective Networking” - a combination of shared packet buffers, path-based load balancing, and telemetry designed to handle the bursty traffic patterns that AI workloads generate.
The numbers Cisco is claiming: 33% improvement in network utilization and 28% faster job completion times versus non-optimized alternatives. The chip will power both the Nexus 9000 and Cisco 8000 switching platforms, targeting gigawatt-scale AI clusters for training, inference, and agentic workloads.
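For a sense of scale, 102.4 Tbps is a fixed bandwidth budget that a switching ASIC in this class exposes as some number of high-speed Ethernet ports. The arithmetic below uses standard Ethernet rates to show what that budget buys; the specific port configurations are illustrative, not Cisco’s published SKUs.

```python
# Back-of-the-envelope port math for a 102.4 Tbps switching ASIC.
# Port speeds are standard Ethernet rates; the configurations are
# illustrative arithmetic, not Cisco's published G300 configurations.

TOTAL_TBPS = 102.4

def port_count(port_speed_gbps: float) -> int:
    """How many ports of a given speed the aggregate bandwidth supports."""
    return int(TOTAL_TBPS * 1000 // port_speed_gbps)

for speed in (400, 800, 1600):
    print(f"{speed}G ports: {port_count(speed)}")
# 400G ports: 256
# 800G ports: 128
# 1600G ports: 64
```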
Cisco is positioning the G300 against Nvidia, which bundles networking with its GPU accelerators, and Broadcom, whose Tomahawk chips dominate data center switching. The bet is that as AI clusters expand beyond hyperscale cloud providers and into enterprise data centers, customers will want networking silicon from a company that already runs their networks. The chip is expected to ship in the second half of 2026.
Sources: SiliconANGLE, Cisco Newsroom
Quick Hits
- Microsoft explores superconducting cables for AI data centers: High-temperature superconductor (HTS) cables have zero electrical resistance, meaning no power losses and no heat generation during transmission. Microsoft is testing the technology with VEIR, a Massachusetts startup whose cables are more than 10x smaller and lighter than traditional copper. Still early-stage - this is about understanding where the tech makes sense, not deploying it tomorrow - but it points to how seriously hyperscalers are taking the power bottleneck. Tom’s Hardware, Windows Central
- Runway raises $315M at $5.3B valuation: The AI video-generation company closed a Series E led by General Atlantic, with participation from Adobe Ventures, AMD Ventures, Nvidia, and Fidelity. Runway is pivoting from pure video generation to “world models” - AI systems that can understand and simulate complex environments. The company plans to expand its 140-person team across research, engineering, and go-to-market. TechCrunch
- Legal AI startup Harvey eyes $11B valuation: Harvey is in talks to raise $200M led by Sequoia and Singapore’s GIC, which would be its fourth fundraise in 14 months. The company hit $190M in annual recurring revenue by end of 2025. For context, Harvey was valued at $3B in February 2025 - a year later, it’s nearly 4x that. TechCrunch
- NVIDIA and Eli Lilly commit $1B to AI drug discovery lab: The five-year co-innovation lab in the San Francisco Bay Area will combine Lilly’s biological expertise with NVIDIA’s AI infrastructure, running on the BioNeMo platform and Vera Rubin architecture. The lab aims to create a continuous learning system connecting AI-assisted “wet labs” and computational “dry labs” for 24/7 experimentation. NVIDIA Newsroom
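The superconducting-cable item above rests on a basic piece of physics: resistive transmission loss scales as current squared times resistance (P = I²R), so zero resistance means zero loss and zero waste heat. A minimal sketch, with illustrative figures that are not VEIR or Microsoft specifications:

```python
# Why zero-resistance cables matter for data-center power delivery:
# resistive loss in a conductor is P = I^2 * R. The current and
# resistance values below are illustrative assumptions, not VEIR
# or Microsoft specifications.

def resistive_loss_kw(current_amps: float, resistance_ohms: float) -> float:
    """Power dissipated as heat in a conductor, in kilowatts."""
    return current_amps ** 2 * resistance_ohms / 1000

# Example: 5,000 A through a copper run with 1 milliohm of resistance
# dissipates 25 kW as heat; an HTS cable with R = 0 dissipates nothing.
print(resistive_loss_kw(5_000, 0.001))  # 25.0
print(resistive_loss_kw(5_000, 0.0))    # 0.0
```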
Worth Watching
The safety researcher exodus deserves more attention than it’s getting. When an individual leaves a company with concerns, it’s an anecdote. When multiple people leave multiple companies in the same week, all citing variations of the same concern - that commercial pressures are overriding safety work - it starts to look like a trend. The fact that xAI lost half its founding team, Anthropic lost its safeguards lead, and OpenAI fired an executive who raised child-safety concerns about its planned adult-content feature, all within days of each other, is the kind of convergence that regulators and investors should take seriously.
The chip tariff exemption story is also worth following. The administration is essentially building a two-tier system where AI hyperscalers get preferential access to semiconductors. That’s good for the companies building frontier models and bad for everyone else - smaller cloud providers, startups, and any company that needs GPUs but doesn’t have the leverage to negotiate a carve-out from U.S. trade policy.