Let’s be honest. This week felt like a sprint for China’s AI scene.
Just before the Lunar New Year, Alibaba Cloud quietly dropped a big update: Qwen-3.5, its latest open AI model family. And it didn’t arrive alone. Nearly every major Chinese AI lab released something new in the same week, making it clear this wasn’t random timing. This was momentum.
A new model, smaller — yet sharper
Here’s the thing that caught attention.
One of the new models, Qwen-3.5-Open-Source, has 397 billion parameters. That’s big, sure. But what’s interesting is that it reportedly performs better than Alibaba’s older flagship, Qwen-3-Max-Thinking, which crossed the one-trillion-parameter mark.
Bigger isn’t always better anymore. Smarter architecture is starting to beat raw parameter count.
According to Alibaba’s own benchmarks, Qwen-3.5 lands close to leading models from OpenAI, Anthropic, and Google DeepMind. Not their newest releases, sure, but still serious company to keep.
The closed model with a massive memory
There’s also a closed-source sibling: Qwen-3.5-Plus.
This one’s aimed at top-tier performance. Alibaba says it’s on par with “state-of-the-art” models and comes with a 1-million-token context window. That’s huge. Think of it as an AI that can read and remember entire books, documents, or long conversations without losing the thread.
Not many models can do that yet.
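What does that look like in practice? Roughly: paste an entire book into a single prompt. Here’s a minimal sketch using the OpenAI-compatible Python client — note that the endpoint URL, API key, model name, and file are all placeholders, since Alibaba’s actual API details aren’t covered here.

```python
# Rough sketch of what a 1-million-token context enables: one request,
# one entire book. Endpoint, key, model name, and file are placeholders --
# check Alibaba Cloud's documentation for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-endpoint/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                  # placeholder key
)

# A full novel is a few hundred thousand tokens -- well inside a 1M window.
with open("novel.txt", encoding="utf-8") as f:
    book = f.read()

response = client.chat.completions.create(
    model="qwen3.5-plus",  # hypothetical model name
    messages=[{
        "role": "user",
        "content": f"{book}\n\nWhich character changes most, and why?",
    }],
)
print(response.choices[0].message.content)
```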
Multimodal, finally — and that matters
For the first time, Qwen is natively multimodal.
Text, images, audio, video — all handled inside one system. No patchwork. No duct tape. That’s a big deal because this is where real-world AI use is heading. Not just chat. Actual understanding across formats.
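In API terms, “natively multimodal” usually means a single chat message can mix content types. Here’s a hedged sketch using the common OpenAI-style “content parts” format; whether Qwen-3.5’s own API follows this exact shape is an assumption, and the endpoint, key, and model name are again placeholders.

```python
# Sketch of a mixed text-plus-image request in the widely used OpenAI-style
# content-parts format. Whether Qwen-3.5's API accepts this exact shape is
# an assumption; endpoint, key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example-endpoint/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen3.5-plus",  # hypothetical model name
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What product is shown in this photo?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```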
Under the hood, Qwen-3.5 also uses a new architecture, previewed last year as Qwen3-Next, designed to cut computing costs while improving output. Alibaba even claims it sets a new benchmark for “capability per unit of inference cost.”
Translation? More brain, less electricity.
Open weights, global reach
Here’s where the story gets geopolitical.
Alibaba released the model weights publicly on Hugging Face and its own ModelScope platform. Developers with the right hardware can download and run Qwen-3.5 locally.
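In practice, “the right hardware” is the operative phrase: a model this size needs a multi-GPU server. For those who have one, loading open weights follows the standard Hugging Face transformers pattern. The repo ID below is a guess at the naming, so check the official Qwen organization page on Hugging Face for the real one.

```python
# Standard Hugging Face pattern for running open weights locally.
# The repo ID is a guess -- check huggingface.co/Qwen for the actual
# Qwen-3.5 model names. A ~397B-parameter model needs multiple GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # use whatever precision the checkpoint ships in
    device_map="auto",   # shard layers across all available GPUs
)

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```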
That’s not just generosity. It’s strategy.
China has been leaning hard into open-source AI, while many Silicon Valley players keep their most powerful models locked down. And it’s working.
Download data from Hugging Face shows Chinese open models overtook US ones last year, with DeepSeek and Qwen driving most of that growth.
Language support that quietly changes the game
Another underrated move: language coverage.
Qwen-3.5 now supports 201 languages and dialects, adding 82 new ones. Even niche languages like Hawaiian and Fijian made the cut.
That breadth matters. It’s one reason Qwen is starting to feel like an open-model default. Open-model researcher Nathan Lambert even noted that Qwen downloads in December topped all other major open models combined.
That’s not a fluke.
Downloads aren’t everything — deployment is
Still, not everyone’s convinced downloads tell the full story.
AI analyst Lennart Heim points out the obvious question: are these models actually being used in production? Or are people just experimenting?
There’s also the national-security angle. Open Chinese models are spreading globally, with even US start-ups and universities deploying them locally. The long-term implications? Still unclear.
It’s worth noting that Alibaba isn’t fully open, either. Its biggest models — the Max series — remain closed and tightly tied to its consumer Qwen app.
Open models as a business, not just ideology
Alibaba’s real play may be commercial.
Open models sit at the center of its cloud strategy. Offer the weights. Then sell the infrastructure. Hosting, inference, tooling. The whole stack.
And if a hint dropped by Qwen technical lead Lin Junyang on X is anything to go by, more open-weight releases are coming soon.
So yes, the AI race between China and the US just got another jolt.
And honestly? It’s starting to look less like a straight line and more like a constant tug-of-war.