Moonshot open-sourced a trillion-parameter Chinese frontier model Monday that tops GPT-5.4 and Claude Opus 4.6 on the hardest agentic benchmark, straight into Anthropic's rate-limit week.
MarkTechPost and Cloudflare covered the release technically; Western mainstream media has not yet picked up the story as an AI-state-power data point.
X reads the 54.0 HLE-Full score as the day the Chinese open-weights frontier priced itself above the US closed frontier on the one benchmark agent-developers actually test.
Moonshot AI open-sourced Kimi K2.6 late Monday Beijing time — a trillion-parameter native-multimodal Mixture-of-Experts model with 32 billion parameters activated per token, released under a Modified MIT license on Hugging Face. [1] The release benchmarks at 54.0 on Humanity's Last Exam Full with tools, leading GPT-5.4 (52.1), Claude Opus 4.6 (53.0), and Gemini 3.1 Pro (51.4) on the test widely considered the hardest agentic-capability benchmark in the field. [1]
The timing is deliberate. This paper opened the week with its brief on Anthropic's rate-limit concession: Boris Cherny conceded that Opus 4.7 burns more thinking tokens and raised rate limits for Claude Code subscribers, a partial fix that GitHub issue trackers continue to flag. Moonshot's release is the counter-move: an open-weights Chinese frontier model that runs up to 300 parallel sub-agents per run, sustains 4,000 coordinated reasoning steps, and can operate autonomously for more than twelve continuous hours. [2]
The commercial layer matters. Cloudflare added the model to its Workers AI catalog on release day at $0.95 per million input tokens and $4.00 per million output tokens, well below closed-frontier pricing. [3] The paper reads the move through the AI-state-power thread: a Chinese lab benchmarking above the US closed frontier on agentic coding and shipping open weights at Anthropic-week timing is the cleanest cross-border AI-competition data point of April. Whether Western regulators react, or Western enterprises quietly adopt, is the Tuesday read.
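For readers pricing out a migration, the quoted per-million-token rates reduce to simple arithmetic. A minimal sketch, using the Cloudflare rates cited above; the job sizes below are hypothetical illustrations, not figures from the release:

```python
# Back-of-envelope cost math for the Workers AI rates quoted above.
# Rates come from the article [3]; the token counts are made-up examples.
INPUT_RATE = 0.95   # USD per million input tokens
OUTPUT_RATE = 4.00  # USD per million output tokens

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one run at the quoted per-million-token rates."""
    return (input_tokens / 1e6) * INPUT_RATE + (output_tokens / 1e6) * OUTPUT_RATE

# Example: a long agentic session consuming 50M input and 5M output tokens.
print(f"${run_cost(50_000_000, 5_000_000):.2f}")  # $67.50
```

At these rates a token-heavy multi-hour agentic run stays in the tens of dollars, which is the economic pressure the pricing paragraph is pointing at.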
-- DAVID CHEN, Beijing