
Qwen3 Coder Next to rival heavyweight coders

Figure: Qwen3 Coder Next benchmark chart showing high performance versus parameter count on SWE-bench Pro.

Alibaba has released Qwen3 Coder Next, a focused open-source coding agent built as an 80‑billion‑parameter mixture-of-experts model, with only about 3 billion parameters active per token. It is trained on roughly 800,000 verifiable coding tasks, emphasizing long-horizon reasoning, tool use, and robust error recovery.
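The efficiency claim follows from mixture-of-experts sparsity: each token is routed through only a few experts, so per-token compute is closer to a 3-billion-parameter dense model than an 80-billion one. A minimal sketch of that ratio, using only the 80B/3B figures stated above (the rest is illustrative arithmetic):

```python
# Sparsity of a mixture-of-experts model: total parameters
# stored vs. parameters actually active for each token.
TOTAL_PARAMS = 80e9   # 80B total (from the announcement)
ACTIVE_PARAMS = 3e9   # ~3B active per token (from the announcement)

active_fraction = ACTIVE_PARAMS / TOTAL_PARAMS
print(f"Active per token: {active_fraction:.1%}")  # ~3.8% of the weights
```

In other words, the model keeps the capacity of an 80B network on disk while paying roughly the inference cost of a 3B one.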

On benchmarks like SWE-bench Verified, SWE-bench Multilingual, and SWE-bench Pro, Qwen3 Coder Next matches or beats considerably larger open models such as DeepSeek and GLM 4.7. Plotted against SWE-bench Pro scores, it sits in the “upper-left” region of the chart, delivering high performance at a lower parameter count, on par with Claude 4.5.

Demos show it building chat UIs that simulate AI responses, multi-stage interactive games, and even cleaning up a desktop via CLI commands, similar to Anthropic’s Claude Code. The model can also plug into OpenClaw and other orchestration frameworks.

It can browse Amazon, compare prices on Sony headphones, and decide whether a deal is good, all autonomously. Alibaba has already published weights and a GitHub repo with setup instructions, though the main instruct checkpoint is still over 150GB, and the FP8 Hugging Face variant sits around 80GB.
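Those checkpoint sizes line up with simple back-of-the-envelope math: a parameter stored in bf16 takes 2 bytes and in FP8 takes 1 byte, so an 80-billion-parameter model lands near 160 GB and 80 GB respectively. A rough sketch (the byte widths are standard; the estimate ignores embeddings and file metadata):

```python
def checkpoint_size_gb(params_billion: float, bytes_per_param: int) -> float:
    """Approximate on-disk size: 1 billion params x 1 byte is about 1 GB."""
    return params_billion * bytes_per_param

# bf16 (2 bytes/param): ~160 GB, consistent with the >150 GB instruct checkpoint
print(checkpoint_size_gb(80, 2))
# FP8 (1 byte/param): ~80 GB, matching the Hugging Face variant
print(checkpoint_size_gb(80, 1))
```

The same arithmetic explains why quantized releases are popular: halving the bytes per parameter halves the download and the memory footprint.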

Communication graduate, closet cynic, and kid at heart. Duane is a rare person to find, quite literally. He often keeps to himself but has proven his mettle in tech media with his quick wit. The portfolio of scriptwriting, web content, and public relations helps too, we suppose. As a homebody, he often spends his time on the streaming platform Twitch or ‘farming’ gaming clips with friends. He is also an avid fan of round glasses and anything related to blueberries.
