Day Eight of Kimi K2.6's lead on the Humanity's Last Exam full benchmark closes Saturday with a second Chinese open-weights frontier model on the chart. DeepSeek V4 launched Friday with 1.6 trillion parameters, a one-million-token context window, and native support for Huawei's Ascend silicon. [1] This paper's Friday account marked the K2.6 lead at Day Seven and noted Western lab silence. Saturday lands the second half of what is now a one-week pattern: two Chinese open-weights models at frontier, both running on chips Nvidia did not sell. BenchLM's provisional read places V4 Pro at second on its overall leaderboard with K2.6 at sixth: a closer cluster than Kimi's HLE-full lead suggested, and one the Western frontier has yet to answer. [2]
The architecture detail is the part that travels. V4's training, per Reuters and CGTN reporting, used Huawei's Ascend A2/A3/950 supernode lineup; the Pro and Flash variants split the deployment into cost tiers that Western incumbents currently address only with proprietary tiering. [3] Sean Kim's deep-read pegged the model at 37 billion active parameters per token, with API pricing at $1.74 per million input tokens and $3.48 per million output tokens: pricier on input than K2.6's $0.95/$4.00 split but cheaper on output, and substantially below the Anthropic and OpenAI frontier tiers. [4] Whether the cost numbers hold against full-scale demand is the open question. Whether Ascend can absorb the demand is the export-control question. Both questions are now being asked in the same week.
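The per-token rates only compare cleanly once you fix a workload mix, so here is a minimal sketch of the blended-cost arithmetic. The rates are the published figures cited above; the input-share values swept below are illustrative assumptions, not reported usage data.

```python
# Blended cost per million tokens under a hypothetical input/output mix.
# Rates are the per-million-token prices cited in the article; the input
# shares are assumptions for illustration only.

RATES = {
    "DeepSeek V4 Pro": (1.74, 3.48),  # (input $/M tokens, output $/M tokens)
    "Kimi K2.6":       (0.95, 4.00),
}

def blended_cost(input_rate: float, output_rate: float, input_share: float) -> float:
    """Cost per million tokens when `input_share` of all tokens are input."""
    return input_rate * input_share + output_rate * (1 - input_share)

for share in (0.2, 0.4, 0.6, 0.8):
    row = ", ".join(
        f"{name}: ${blended_cost(inp, out, share):.2f}"
        for name, (inp, out) in RATES.items()
    )
    print(f"input share {share:.0%} -> {row}")

# The crossover sits near a 40% input share: output-heavy workloads
# favor V4's rates, input-heavy (long-context) workloads favor K2.6's.
```

On that arithmetic, the "cheaper" framing holds for generation-heavy use but flips for long-context, input-heavy workloads, which is part of why the full-scale-demand question stays open.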
What the K2.6 + V4 pair describes, taken together, is a frontier release cadence that does not depend on Nvidia. Kimi's release on April 17 and V4's on April 24 place two open-weights frontier launches inside eight days, both on non-Nvidia silicon. Western frontier labs (Anthropic, OpenAI, Google DeepMind) released no comparably scaled open-weights model in the same window. Anthropic was, by the same Friday afternoon, in private discussions at the White House with Susie Wiles and Treasury Secretary Scott Bessent over the Pentagon supply-chain-risk dispute. OpenAI was in court over copyright. The frontier-release silence on the Western side is not strategic positioning; it is, increasingly, an absence of equivalent product to ship.
The export-control thesis has run on a single load-bearing claim: that Nvidia GPUs are the bottleneck, that Chinese labs cannot reach frontier without them, and that U.S. licensing decisions can therefore pace Chinese AI capability. The K2.6 + V4 pair stress-tests that claim on three fronts at once. First, whether Huawei Ascend can host frontier-scale training: the V4 release is the first major Chinese model with a public Ascend training claim, and the parallel ZhiPu GLM-5.1 release was reportedly trained on a 100,000-chip Ascend cluster. [5] Second, whether Chinese teams can compensate for per-chip performance gaps through software optimization: V4's reported 15-to-1 cost advantage versus Western training runs implies they can. Third, whether open-source models can compete at frontier with proprietary ones: the BenchLM leaderboard says they now do.
Day Eight of the K2.6 watch is the day this becomes a market pattern, not a single-product story. K2.6 holds. V4 lands behind it on Ascend silicon. ZhiPu's earlier release sits in the same ecosystem. Three Chinese open-weights frontier-class models are now available through APIs and as downloadable weights, with Western incumbents producing nothing equivalent in the same window. The financial implication is what Cerebras's IPO roadshow is now negotiating: a wafer-scale chip company filing into a market where the dominant alternative-silicon story is no longer "Cerebras vs Nvidia" but "Ascend vs Nvidia." The geopolitical implication is one the Trump administration's Friday Anthropic meeting did not address. The export-control architecture was designed to slow Chinese AI. The eight-day cadence says it accelerated Chinese silicon instead.
-- MAYA CALLOWAY, New York