Five Chinese AI Labs Are Launching Major Models Before Lunar New Year

GLM-5, Kimi K2.5, Qwen 3.5, Doubao 2.0, and MiniMax M2.2 arrive in the most concentrated wave of Chinese AI releases ever. Some are open-source. Here's what matters.

Five Chinese AI laboratories are releasing major models before Lunar New Year on February 17, creating what may be the most concentrated wave of Chinese AI releases in history. Three of the five are open-source. One already claims to beat every US model on certain benchmarks. And the billions of yuan being spent to acquire users during the holiday make last year’s DeepSeek moment look like a soft launch.

The Five Models

Kimi K2.5 from Alibaba-backed Moonshot AI launched January 27 and is already making noise. The 1-trillion-parameter mixture-of-experts model scored 50.2% on Humanity’s Last Exam with tools enabled, beating GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro. It ranked fifth on Artificial Analysis’ Intelligence Index - the only open-source and Chinese model in the top five. Its “Agent Swarm” feature coordinates up to 100 AI sub-agents working in parallel, and it’s available as open weights on Hugging Face.

Qwen 3.5 from Alibaba landed in early February with support for 119 languages and a specialized Qwen3-Coder variant covering 370 programming languages with a 1-million-token context window. The base model was trained on 36 trillion tokens. It ships under Apache 2.0, meaning anyone can deploy it commercially without restriction.

GLM-5 from Zhipu AI (marketed internationally as Z.ai) is targeting a mid-February release with a focus on creative writing, coding, and reasoning. The notable detail: it was trained entirely on a 100,000-chip Huawei Ascend cluster using the MindSpore framework. No NVIDIA hardware involved. That matters in the context of ongoing US export controls on AI chips.

MiniMax M2.2 is a developer-focused refresh with 230 billion total parameters (10 billion active via MoE). It ships under an MIT license at $0.30 per million input tokens - roughly 8% of Claude Opus pricing. It targets Rust, Java, Go, C++, and TypeScript workflows and can run on four H100 GPUs.
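That pricing gap is easy to quantify. A back-of-the-envelope sketch, using only the rates quoted above (the Claude Opus per-token rate below is back-calculated from the 8% figure, not an official price):

```python
# Rough cost comparison using the figures quoted in this article.
# MiniMax M2.2: $0.30 per million input tokens, said to be ~8% of
# Claude Opus pricing. The Opus rate is implied by that ratio.
M2_2_RATE = 0.30               # USD per million input tokens
OPUS_RATE = M2_2_RATE / 0.08   # ~$3.75 per million, implied by the 8% figure

def input_cost(millions_of_tokens: float, rate: float) -> float:
    """Cost in USD for a given volume of input tokens."""
    return millions_of_tokens * rate

volume = 100  # e.g., 100 million input tokens in a month
print(f"MiniMax M2.2: ${input_cost(volume, M2_2_RATE):,.2f}")   # $30.00
print(f"Implied Opus: ${input_cost(volume, OPUS_RATE):,.2f}")   # $375.00
```

At that spread, the monthly bill for a token-heavy agentic workload is an order of magnitude apart, which is the economic argument startups are weighing.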

Doubao 2.0 from ByteDance is the outlier: a closed, API-only multimodal suite bundling an LLM, image generator (Seedream 5.0), and video generator (SeedDance 2.0). ByteDance reports over 50 trillion daily tokens processed across its model infrastructure. The suite has an exclusive partnership with CCTV’s Spring Festival Gala, putting it in front of hundreds of millions of viewers.

The User Acquisition War

The model launches are just the technical layer. Underneath, Chinese tech giants are spending billions to convert holiday attention into AI users.

Tencent launched a 1-billion-yuan ($144 million) giveaway through its DeepSeek-powered Yuanbao app, with individual prizes reaching 10,000 yuan. Alibaba followed with a 3-billion-yuan ($432 million) campaign subsidizing services through its Qwen product. Baidu and ByteDance are running their own gala-tied promotions.

The playbook echoes the digital payment wars of a decade ago, when WeChat Pay and Alipay fought over the same holiday window. This time the prize is becoming the default AI assistant for China’s 720 million mobile AI users.

Why This Matters Outside China

Three of these five models are genuinely open - Kimi K2.5, Qwen 3.5, and MiniMax M2.2 ship under permissive licenses that allow commercial deployment anywhere. For developers and companies concerned about vendor lock-in with US providers, these models offer real alternatives that can be self-hosted.

The benchmark results demand attention. Kimi K2.5 isn’t just “catching up” - it leads on agentic tasks like web navigation (75.0% success rate vs. lower scores from GPT-5.2 and Gemini 3 Pro) and costs 76% less than Claude Opus 4.5 to run. MiniMax M2.2’s MIT-licensed model at 8% of frontier pricing changes the economics for startups building on top of these systems.

GLM-5’s all-Huawei training pipeline is strategically significant. US export controls aimed to slow Chinese AI development by restricting access to NVIDIA’s top chips. Zhipu AI just demonstrated that domestic hardware can produce frontier models. Whether GLM-5 actually matches US models remains to be seen, but the hardware independence is now proven.

The Privacy Question

This wave highlights a real tension for users outside China. The open-source models (Kimi K2.5, Qwen 3.5, MiniMax M2.2) can be self-hosted, keeping data under your control. That’s the privacy-positive path - you get competitive AI without routing your data through a third party’s servers.

ByteDance’s Doubao 2.0, however, is API-only, and data is processed on servers in China. For organizations subject to GDPR, HIPAA, or other data residency requirements, that’s a non-starter. New York State has already banned DeepSeek from government devices over national security and data privacy concerns.

The open-weight releases don’t carry the same risk. Running Qwen 3.5 on your own infrastructure is functionally identical from a privacy standpoint to running Llama or Mistral. The model weights don’t phone home. But this distinction often gets lost when policy discussions lump all “Chinese AI” into one category.
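For readers weighing that self-hosting path, the workflow is the same as for Llama or Mistral: pull the published weights and serve them on your own hardware. A minimal sketch using vLLM’s OpenAI-compatible server in Docker Compose - note the model repo id is a placeholder, as the actual Hugging Face release names were not confirmed at press time:

```yaml
# docker-compose.yml sketch for self-hosting an open-weight model
# with vLLM's OpenAI-compatible server (listens on port 8000).
services:
  llm:
    image: vllm/vllm-openai:latest
    # Placeholder repo id -- substitute the lab's actual Hugging Face release.
    command: ["--model", "Qwen/Qwen3.5-Placeholder"]
    ports:
      - "8000:8000"
    volumes:
      # Cache downloaded weights locally; after the first pull,
      # inference needs no outbound connection at all.
      - ~/.cache/huggingface:/root/.cache/huggingface
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Once the weights are cached, the container can run with networking disabled entirely, which is the strongest form of the privacy argument above.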

What to Watch

DeepSeek, last year’s Lunar New Year disruptor, is notably quiet this cycle. The company is reportedly wrestling with training challenges for its trillion-parameter foundation model and plans only a minor V3 update. Whether that silence means a setback or a bigger surprise later is anyone’s guess.

The real test for these models won’t be benchmarks - it’ll be whether developers outside China actually adopt them. DeepSeek proved the demand exists. This wave of releases will show whether that was a one-off moment or the start of sustained competition.