Top Stories
Meta Debuts Muse Spark, Its First Model Under Alexandr Wang
Meta shipped its first new AI model since restructuring its entire AI division around Scale AI co-founder Alexandr Wang nine months ago. Muse Spark — code-named Avocado — is the opening product from Meta Superintelligence Labs, and it’s rolling out across WhatsApp, Instagram, Facebook, Messenger, and Meta’s Ray-Ban AI glasses.
The model handles voice, text, and image inputs but produces text-only output for now, with a “Contemplating” mode for complex reasoning expected to follow. On the Artificial Analysis Intelligence Index, Muse Spark scores 52, trailing Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6 — a respectable debut but not a leapfrog. Meta says rebuilding its AI stack from scratch allowed it to produce a smaller model with Llama 4-equivalent capabilities at “an order of magnitude less compute.”
The interesting shift is what Meta isn’t doing: Muse Spark is not being released as open weights. After years of positioning Llama as the open-source alternative to proprietary models from OpenAI and Google, Meta is keeping this one in-house — at least for now. The company says open-weight versions may come later, but the fact that its flagship model launches closed suggests the economics of open-source AI are changing, even for the company that made it a competitive strategy.
Sources: TechCrunch, Axios, CNBC
Meta Signs Another $21 Billion Deal with CoreWeave, Bringing Total to $35 Billion
The same week Meta launched a new model, it locked in another $21 billion in AI infrastructure spending with CoreWeave, running through 2032. That’s on top of an existing $14.2 billion arrangement, giving CoreWeave $35 billion in contracts from a single customer.
The dedicated capacity will include early deployments of Nvidia’s Vera Rubin platform, the next-generation GPU architecture. Meta’s capex guidance for 2026 alone stands at $115–135 billion — nearly double last year — with the bulk directed at AI training and inference infrastructure.
The math here is hard to ignore. Meta is simultaneously launching a model it says was built in nine months on a fraction of its previous compute, while signing deals that commit it to GPU spending larger than most countries' entire research budgets. Either the company sees usage patterns that demand this scale, or it's buying insurance against being outbuilt by Microsoft and Google. Either way, CoreWeave — which only went public last year — now has a single customer responsible for roughly a third of its contracted business.
Amazon’s AI Revenue Hits $15 Billion Run Rate
Amazon CEO Andy Jassy disclosed in his annual shareholder letter that AWS’s AI revenue run rate exceeded $15 billion in Q1 2026 — the first time the company has broken out the number directly. That’s roughly 10% of AWS’s $142 billion overall run rate, and Jassy said the figure is “ascending rapidly” but would be growing faster if not for capacity constraints.
Meanwhile, Amazon’s custom chips business (Graviton and Trainium processors) crossed $20 billion in annual run rate, doubling from $10 billion. The company is spending about $200 billion in capex this year, with most going to AWS infrastructure that Jassy says will be monetized through 2027 and 2028.
The $15 billion figure matters because it's one of the first concrete revenue disclosures from a hyperscaler that directly ties AI spending to top-line growth. Investors have been questioning whether the hundreds of billions pouring into AI infrastructure will generate returns, and Amazon just provided a partial answer. Still, $15 billion in annual revenue against $200 billion in capex means the payback period stretches years into the future: even if every dollar of that run rate were profit, covering a single year of spending would take more than 13 years at current rates. Growth, not the current number, is what has to carry the investment case.
Sources: Yahoo Finance, TradingPedia
Quick Hits
- Anthropic launches Managed Agents: A new public beta service that handles the infrastructure behind deploying autonomous AI agents — sandboxed execution, checkpointing, credential management, scoped permissions, and tracing. It costs $0.08 per session hour on top of standard Claude API pricing. Notion, Rakuten, and Asana are among early users. Anthropic is positioning it as reducing agent deployment time from months to days. SiliconANGLE, The New Stack
- GLM-5.1 tops SWE-Bench Pro as open-source closes the gap: Z.ai (formerly Zhipu AI) released GLM-5.1, a 754-billion-parameter mixture-of-experts model under the MIT license that activates only 40 billion parameters at a time and offers a 200K context window. It scored 58.4 on SWE-Bench Pro — above GPT-5.4 (57.7) and Claude Opus 4.6 (57.3) — and can sustain autonomous coding tasks for up to eight hours. The open-source frontier is now beating proprietary models on at least one major coding benchmark. VentureBeat, MarkTechPost
- OpenAI Foundation commits $100M+ to Alzheimer's research: Grants are going to six research institutions for disease pathway mapping, biomarker detection, and treatment personalization — the first tranche of a larger $25 billion commitment to health research and AI resilience. Whether AI-driven drug discovery delivers on its promises remains to be seen, but this is real money going to real researchers. OpenAI Foundation, Gizmodo
- White House AI framework pushes federal preemption of state laws: The National Policy Framework for AI, released March 20 but still generating policy analysis this week, recommends a nationally unified approach that would override the 600+ state AI bills introduced in 2026 sessions. The DOJ's AI Litigation Task Force, created in January, is already empowered to challenge state AI laws. This sets up a major fight between state-level regulation (Colorado, California, New York) and federal preemption efforts. Consumer Finance Monitor
Worth Watching
The Muse Spark launch and the CoreWeave deal together tell you everything about where Meta's head is at. On one hand, the company proved it could build a competitive model in nine months by rebuilding its infrastructure from scratch — an argument for efficiency. On the other hand, it immediately signed $21 billion in new GPU contracts — an argument that efficiency doesn't reduce the appetite for compute, it increases it. This is Jevons paradox playing out in real time: as each unit of model capability gets cheaper to produce, total demand for compute rises rather than falls, and the entire AI industry is living that dynamic at once.
The open-source picture is also shifting. GLM-5.1 topping SWE-Bench Pro under an MIT license is a milestone, but Meta — the company that made open-source AI a competitive strategy — shipping its flagship model closed is a signal. When the economics get serious enough, even the open-source champions start hedging. Watch whether Meta follows through on the open-weight release it has floated, or whether Muse Spark stays behind the wall.