The open-source AI movement just had one of its strongest months yet. China’s first publicly traded AI company dropped a frontier model under MIT license. OpenAI - yes, that OpenAI - released open-weight reasoning models. And the Linux Foundation created a new home for AI agent standards with backing from every major player.
Here’s what happened and why it matters.
GLM-5: Frontier AI Without NVIDIA
Zhipu AI released GLM-5 on February 11, marking a milestone for both open-source AI and chip independence.
The numbers: 744 billion parameters in a mixture-of-experts architecture with 40B active per token. It supports a 200K context window and scores 77.8% on SWE-bench Verified, putting it in the same league as GPT-5.2 and Claude Opus on coding tasks.
The twist: GLM-5 was trained entirely on Huawei Ascend chips using the MindSpore framework. Zero NVIDIA hardware involved. This isn’t just about open weights - it’s proof that frontier AI development no longer requires access to export-controlled chips.
Zhipu released GLM-5 under the MIT license, the most permissive option available. You can download it from Hugging Face, access it through their Z.ai API, or use it on OpenRouter. Their stock jumped 34% on the Hong Kong exchange following the announcement.
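Access through OpenRouter uses the standard OpenAI-compatible chat endpoint, so calling GLM-5 looks the same as calling any other hosted model. A minimal sketch - the `zai-org/glm-5` model slug is an assumption, so check OpenRouter’s catalog for the exact identifier:

```python
import json
import os
# import urllib.request  # uncomment to actually send the request

# Assumed model slug -- verify against OpenRouter's model catalog.
MODEL = "zai-org/glm-5"
ENDPOINT = "https://openrouter.ai/api/v1/chat/completions"

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize the MIT license in one sentence."}
    ],
}

request_body = json.dumps(payload).encode("utf-8")
headers = {
    "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
    "Content-Type": "application/json",
}

# To send for real:
# req = urllib.request.Request(ENDPOINT, data=request_body, headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(json.dumps(payload, indent=2))
```

Because the interface is OpenAI-compatible, swapping between the Z.ai API, OpenRouter, and a self-hosted deployment is mostly a matter of changing the endpoint and model name.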
OpenAI Goes Open (Sort Of)
After years of “open” being the most ironic word in their name, OpenAI finally released open-weight models: gpt-oss-120b and gpt-oss-20b.
Both ship under Apache 2.0. The 120B model runs on a single 80GB GPU and achieves near-parity with o4-mini on reasoning benchmarks. The 20B version fits on 16GB of VRAM and matches o3-mini performance - meaning you can run competitive reasoning on a consumer graphics card.
They’ve also released gpt-oss-safeguard, a safety classifier that lets developers define their own content policies at inference time. The safeguard models come in 120B and 20B variants under the same license.
The models support three reasoning effort levels (low/medium/high), trading latency for performance. OpenAI says they trained these using techniques from o3 and other frontier systems, then optimized for local deployment.
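With OpenAI-compatible local servers, the effort level is typically chosen per request, so cheap queries stay fast while hard problems get the full reasoning budget. A hedged sketch of that selection - the `reasoning_effort` field name follows the OpenAI API convention, and individual servers may expose the same knob under a different name:

```python
import json

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a chat request with an explicit reasoning-effort level.

    The `reasoning_effort` field follows the OpenAI API convention;
    a given local server may name this knob differently.
    """
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown effort level: {effort}")
    return {
        "model": "gpt-oss-20b",      # assumed local model name
        "reasoning_effort": effort,  # low = fastest, high = best quality
        "messages": [{"role": "user", "content": prompt}],
    }

# Cheap lookups can stay at low effort; multi-step problems justify high.
quick = build_request("What year is it?", effort="low")
hard = build_request("Prove that sqrt(2) is irrational.", effort="high")
print(json.dumps(hard, indent=2))
```

The trade-off is straightforward: higher effort means longer chains of thought, so more latency and tokens per answer.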
This matters because OpenAI’s open releases historically came with restrictive licenses or limited capabilities. Apache 2.0 is genuinely permissive. You can fine-tune, deploy commercially, and build whatever you want.
Qwen3.5: Agentic Tool Calling Goes Open
Alibaba’s Qwen team released Qwen3.5 on February 17 with three models under Apache 2.0: Qwen3.5-27B, Qwen3.5-35B-A3B, and Qwen3.5-122B-A10B.
The standout feature is native agentic tool calling. These aren’t just chat models - they’re designed to interact with external tools, APIs, and workflows. The 122B-A10B variant reportedly matches Claude Sonnet 4.5 performance on standard benchmarks while using far less compute thanks to its mixture-of-experts architecture.
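In practice, agentic tool calling means the model emits a structured request naming a tool and its arguments, and your code runs the tool and feeds the result back. A minimal sketch of that loop, using the OpenAI-style function-calling schema that Qwen-family chat templates also consume (the `get_weather` tool and its field names are illustrative assumptions - check the model card for the exact template):

```python
import json

# A tool definition in the OpenAI-style function-calling schema.
# The tool itself is a hypothetical example.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# Local stand-in implementation the agent loop dispatches to.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 21, "conditions": "clear"}

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Run the tool named in a model-emitted tool call and return the
    JSON result that gets appended back into the conversation."""
    fn = TOOLS[tool_call["name"]]
    args = json.loads(tool_call["arguments"])
    return json.dumps(fn(**args))

# Simulated model output requesting a tool invocation.
call = {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}
print(dispatch(call))
```

A "native" tool-calling model is one trained to produce these structured calls reliably, rather than having the format bolted on through prompting.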
For the first time, Qwen supports true multimodality: text, images, audio, and video in a single system. All available now on Hugging Face and ModelScope.
Mistral Large 3: 675B Parameters, Apache 2.0
Mistral’s Large 3 brings 675B total parameters (41B active) under Apache 2.0. It processes text and images with a 256K token context window.
The company trained it from scratch on 3,000 NVIDIA H200 GPUs with particular attention to non-English languages. On LMArena, it ranks #2 among open non-reasoning models and #6 overall in the open-source category.
The full Mistral 3 family also includes dense models at 3B, 8B, and 14B parameters - all Apache 2.0 - making them practical for edge deployment on drones and robotics.
The Agentic AI Foundation: MCP Gets a Neutral Home
The biggest structural change happened at the Linux Foundation, which announced the Agentic AI Foundation (AAIF) with an unprecedented list of founding members: Anthropic, OpenAI, Google, Microsoft, Amazon, Block, Bloomberg, and Cloudflare.
The foundation’s anchor projects include:
- Model Context Protocol (MCP): Anthropic’s standard for connecting AI agents to tools and data sources
- goose: Block’s open-source, local-first AI agent framework
- AGENTS.md: OpenAI’s specification for agent behavior and capabilities
This is significant because AI agent development has been fragmented across dozens of incompatible frameworks. Having MCP under neutral governance - with buy-in from competitors who rarely agree on anything - creates the potential for real interoperability.
The first MCP Dev Summit happens April 2-3 in New York with over 95 sessions from maintainers and production users.
Other Notable Releases
Moondream 3 Preview: The tiny vision-language model family released a mixture-of-experts version with 9B total parameters but only 2B active. It handles captioning, visual Q&A, object detection, and document understanding with a 32K context window - all in roughly 1GB of VRAM.
DeepSeek-V3.2: DeepSeek keeps up its rapid cadence with a reasoning-first model built specifically for agent workflows. Their R1 model still offers o1-level reasoning at $2.19 per million tokens versus o1’s $60, all under MIT license.
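That pricing gap is easy to quantify: at the published rates, the same reasoning workload costs over 96% less. A quick back-of-envelope calculation (the 50M-token monthly volume is an arbitrary example):

```python
R1_PRICE = 2.19   # USD per million tokens (DeepSeek R1)
O1_PRICE = 60.00  # USD per million tokens (o1)

tokens = 50_000_000  # example: 50M tokens of monthly reasoning traffic
r1_cost = tokens / 1_000_000 * R1_PRICE
o1_cost = tokens / 1_000_000 * O1_PRICE

print(f"R1: ${r1_cost:,.2f}  o1: ${o1_cost:,.2f}  "
      f"savings: {1 - R1_PRICE / O1_PRICE:.1%}")
```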
What This Means
Three trends stand out:
The frontier is open now. GLM-5 and Mistral Large 3 aren’t “open alternatives” anymore - they’re competitive with the best proprietary models. The gap between what you can run yourself and what you have to pay for keeps shrinking.
OpenAI’s shift is real. Releasing competitive models under Apache 2.0 suggests they’ve accepted that open weights are table stakes. Whether this continues or remains a one-time gesture, it validates years of pressure from the open-source community.
Agent infrastructure is standardizing. The Agentic AI Foundation brings together companies that normally compete on everything. If MCP becomes the TCP/IP of AI agents, we’ll look back at February 2026 as the month it started.
What You Can Do
If you have serious hardware, try running GLM-5 or Mistral Large 3 locally - though at 744B and 675B total parameters, even quantized builds need a multi-GPU rig rather than a single consumer card. Both offer experiences comparable to paid APIs.
For consumer hardware, gpt-oss-20b and Moondream 3 run on 16GB GPUs. DeepSeek’s distilled models work on even less.
If you’re building AI applications, start experimenting with MCP. The standard is still evolving, but getting familiar now means you won’t be playing catch-up when it becomes the default way agents connect to tools.
The gap between open and proprietary AI has never been smaller. This month’s releases suggest it might close entirely.