The New Grok Times

The news. The narrative. The timeline.

Technology

OpenAI, Anthropic, and Google Unite Against Chinese Model Copying in Rare Alliance

[Image: Three corporate logos — OpenAI, Anthropic, and Google — displayed on screens in a dimly lit conference room. Credit: New Grok Times]
TL;DR

America's three fiercest AI rivals are sharing intelligence to stop Chinese labs from copying their models.

MSM Perspective

Bloomberg broke the story and framed it as a corporate collaboration; the intellectual-property-protection angle dominates mainstream coverage.

X Perspective

X is split — open-source advocates call it anti-competitive gatekeeping, national security hawks call it overdue.

BEIJING — The three companies that spend more time competing with each other than with anyone else have found a common enemy. OpenAI, Anthropic, and Google have begun sharing intelligence through the Frontier Model Forum to detect and counter what they describe as Chinese AI model distillation — the process by which a smaller model learns to replicate a larger model's capabilities by systematically querying it. [1]

The collaboration, first reported by Bloomberg, represents the most significant joint action by the three leading American AI labs since the founding of the Frontier Model Forum itself. [2] The initiative focuses on identifying patterns of API usage that suggest systematic extraction of model behavior — millions of carefully designed queries that, in aggregate, allow a Chinese lab to build a model that mimics the original without training from scratch.

This paper covered DeepSeek's V4 release on Huawei chips last week, documenting China's accelerating AI independence. The anti-distillation alliance is the American response — but it raises a question the three companies have not answered: is this intellectual property protection or market gatekeeping?

The Distillation Mechanism

Distillation is not hacking. It does not require access to model weights or training data. It requires only API access — the same access any paying customer has. A distilling lab sends queries designed to map the frontier model's decision boundaries, then trains a smaller model to reproduce those boundaries. The result is a model that behaves like GPT-5 or Claude Opus without the billions of dollars in compute that training them required. [3]
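A toy sketch makes the mechanism concrete. In the example below (all names and values invented for illustration), the "teacher" stands in for a frontier model's API: its decision boundary is hidden, but anyone can query it. The distiller never sees the boundary directly; it recovers it purely from input/output behavior.

```python
import random

# Hypothetical stand-in for a frontier model's API: the "teacher" classifies
# inputs using a decision boundary the distiller cannot see directly.
def teacher_api(x: float) -> int:
    return 1 if x > 0.37 else 0   # hidden boundary at 0.37

# Step 1: systematically query the API (the only access needed).
random.seed(0)
queries = sorted(random.uniform(0, 1) for _ in range(2000))
labels = [teacher_api(q) for q in queries]

# Step 2: train a "student" to reproduce the boundary — here, simply
# locate where the teacher's answers flip from 0 to 1.
flip = next(i for i in range(1, len(labels)) if labels[i] != labels[i - 1])
student_boundary = (queries[flip - 1] + queries[flip]) / 2

def student(x: float) -> int:
    return 1 if x > student_boundary else 0

# The student now mimics the teacher without access to its "weights"
# (the hidden constant), only its observed input/output behavior.
agreement = sum(student(q) == teacher_api(q) for q in queries) / len(queries)
print(f"boundary estimate: {student_boundary:.3f}, agreement: {agreement:.1%}")
```

Real distillation replaces the one-dimensional threshold with millions of prompts and a neural student trained on the teacher's responses, but the structural point is the same: ordinary API access suffices.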

OpenAI has publicly accused Chinese labs of distilling from its models, citing usage patterns that suggest automated extraction at scale. [4] The company's terms of service explicitly prohibit distillation, but enforcement is difficult when the queries themselves are indistinguishable from legitimate use.

The alliance's approach involves sharing detection signals — IP addresses, query patterns, timing signatures — across the three platforms. If a suspected distillation campaign is identified on one platform, the others can check for correlated activity. [2] The collaboration is intelligence-sharing, not joint enforcement; each company retains independent authority over its own access policies.
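The value of pooling signals can be sketched in a few lines. In this illustration (client addresses and thresholds are invented; the article does not describe the labs' actual detection logic), each platform contributes its own list of suspicious API clients, and the shared step only counts how many independent platforms flagged each one — enforcement stays with each company.

```python
from collections import defaultdict

# Hypothetical per-platform detection logs: each lab flags API clients whose
# usage pattern suggests systematic extraction. Addresses are from the
# IETF documentation ranges and purely illustrative.
suspicious_by_platform = {
    "openai":    {"203.0.113.5", "198.51.100.9", "192.0.2.44"},
    "anthropic": {"203.0.113.5", "192.0.2.44"},
    "google":    {"203.0.113.5", "198.51.100.77"},
}

# Intelligence-sharing step: pool the signals and count how many independent
# platforms saw each client. Correlated activity across platforms is a
# stronger signal than any single lab's logs alone.
sightings = defaultdict(set)
for platform, clients in suspicious_by_platform.items():
    for client in clients:
        sightings[client].add(platform)

# Each company still decides enforcement on its own — this only reports
# which clients were flagged by at least two of the three platforms.
correlated = sorted(c for c, p in sightings.items() if len(p) >= 2)
print(correlated)
```

Running the sketch surfaces the two clients seen on multiple platforms while leaving single-platform flags to each lab's own policies, mirroring the intelligence-sharing-not-joint-enforcement design described above.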

The Open-Source Counterargument

The initiative has drawn sharp criticism from the open-source AI community. On X and Reddit, the most common response is that the alliance is less about protecting innovation than about maintaining a pricing moat. [5] If Chinese labs can build competitive models through distillation, the argument goes, the $100-billion training runs that justify frontier lab valuations become unnecessary — and so do the subscription fees that fund them.

The counterargument from the labs is national security, not economics. OpenAI's policy team has argued that Chinese AI capabilities, if achieved through extraction rather than independent research, could advance military and surveillance applications without the safety research that American labs conduct alongside capability development. [4]

The DeepSeek precedent makes both arguments more concrete. DeepSeek's V4, running on Huawei's Ascend chips, demonstrated performance competitive with American frontier models. Whether that performance was achieved through distillation, independent innovation, or some combination is disputed. What is not disputed is that the gap between American and Chinese AI capabilities is narrowing, and the three labs that define the American frontier have decided that narrowing is a problem worth collaborating to solve.

-- DAVID CHEN, Beijing

Sources & X Posts

News Sources
[1] https://www.bloomberg.com/news/articles/2026-04-06/openai-anthropic-google-unite-to-combat-model-copying-in-china
[2] https://www.straitstimes.com/business/companies-markets/openai-anthropic-google-unite-to-combat-ai-model-copying-in-china
[3] https://www.techbrew.com/stories/openai-anthropic-google-distillation-collab
[4] https://timesofindia.indiatimes.com/technology/tech-news/american-technology-industrys-biggest-rivals-google-openai-and-anthropic-come-together-to-fight-silicon-valleys-chinese-problem-that-they-recently-warned-government-of/articleshow/130086203.cms
[5] https://www.reddit.com/r/LocalLLaMA/comments/1sehamp/openai_anthropic_google_unite_to_combat_model/
X Posts
[6] OPENAI, ANTHROPIC & GOOGLE UNITE AGAINST CHINESE AI COPYING https://x.com/CHItraders/status/2041267014396772730
[7] OpenAI, Anthropic, and Google are now sharing intelligence through the Frontier Model Forum to detect Chinese distillation attacks. https://x.com/alexwg/status/2041520485943648710
