The first widely publicized cloud-platform breach to propagate out of an AI productivity tool, routed through an Allow-All OAuth grant and offered on a criminal forum at a two-million-dollar asking price.
The Register and BleepingComputer have the OAuth detail and CEO Rauch's AI-acceleration quote; the mainstream tech press is still running it as a generic data-leak story.
Security X is treating the Context.ai-to-Vercel cascade as the proof of concept for AI supply-chain risk; the indicators of compromise were circulating by Tuesday morning.
Vercel, the San Francisco cloud platform behind the Next.js JavaScript framework, disclosed late Sunday that attackers reached its internal systems through a compromised OAuth token issued to Context.ai, a third-party AI productivity tool a Vercel employee had connected to the company's Google Workspace. [1] Tuesday is Day Three. ShinyHunters, the same forum persona that claimed the Snowflake customer breaches in 2024, is advertising 580 employee records, Vercel source code, API keys, and environment variables at a two-million-dollar asking price. [2] This is the first widely reported case of an AI productivity tool's compromise cascading into a major cloud-infrastructure vendor. It is the blueprint the industry has been warning about.
The path runs backwards through February. Context.ai's own security bulletin, published alongside Vercel's, describes an infection at Context in February by the Lumma Stealer malware, which CrowdStrike was engaged to investigate. [3] CrowdStrike's review, Context concedes, missed the extent of what walked out. Among the stolen material were the OAuth tokens Context held for customers' Google Workspace tenants, including at least one Vercel enterprise account whose owner had clicked "Allow All" on scope requests covering mail, Drive, and the admin directory. When ShinyHunters moved against Vercel, they moved not by phishing a Vercel employee but by replaying an OAuth grant they already held. From there the attacker reached a Vercel employee's Google Workspace, then the internal Vercel systems that trusted that account. [1] The attack surface was a consent screen an engineer had clicked through two months earlier.
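The replay mechanic is ordinary OAuth plumbing, which is what makes it dangerous: a stolen refresh token can be exchanged for a fresh access token at Google's standard token endpoint with no password, no MFA prompt, and no new consent screen. A minimal sketch of that exchange follows; the endpoint and parameter names are Google's standard OAuth 2.0 ones, while every credential value is a hypothetical placeholder.

```python
# Sketch of the refresh-token exchange that makes a stolen OAuth grant
# replayable. All credential values are hypothetical placeholders; the
# endpoint and form fields are Google's standard OAuth 2.0 ones.

GOOGLE_TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id: str, client_secret: str,
                          refresh_token: str) -> tuple[str, dict]:
    """Return the (url, form-body) pair for a refresh-token exchange.

    POSTing this body yields a new short-lived access token. There is
    no user interaction anywhere in the flow: whoever holds the refresh
    token plus the app's client credentials holds the account.
    """
    payload = {
        "grant_type": "refresh_token",   # replay, not re-authentication
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }
    return GOOGLE_TOKEN_ENDPOINT, payload

# Hypothetical values only -- nothing here is a real credential.
url, body = build_refresh_request(
    "ctx-app-placeholder.apps.googleusercontent.com",
    "HYPOTHETICAL-SECRET",
    "HYPOTHETICAL-STOLEN-TOKEN",
)
```

The design point the sketch makes is the one in the disclosure: once the infostealer had the tokens, "breaking in" required no exploit at all, only a well-formed POST.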
Vercel CEO Guillermo Rauch said in the disclosure that the intrusion was "significantly accelerated by AI," a phrase security teams read as machine-augmented enumeration of pivot paths inside the compromised Workspace tenant: the attacker using language-model assistance to prioritize which files, mailboxes, and repositories to exfiltrate first. [4] The three indicators of compromise Vercel published are OAuth client IDs tied to Context.ai's application registrations. [2] The guidance to other customers is explicit: identify users who have connected Context.ai, revoke the associated tokens, audit for unusual logins since February, and treat the Context integration as a compromise path until proven otherwise.
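That guidance can be mechanized: Google's Workspace Admin SDK Directory API exposes a per-user inventory of third-party OAuth grants (`tokens.list`), and each flagged grant can be revoked (`tokens.delete`). A hedged triage sketch follows, operating on the grant shape that API returns; the IoC client IDs below are placeholders, since Vercel's actual indicators live in the advisory, and the broad-scope list simply mirrors the mail, Drive, and admin-directory scopes the disclosure describes.

```python
# Triage sketch for Vercel's guidance: given a user's OAuth token grants
# (the shape returned by the Workspace Admin SDK's tokens.list), flag
# any grant matching a published IoC client ID, plus any grant holding
# the broad "Allow All"-style scopes described in the disclosure.
# The client IDs here are placeholders, not Vercel's actual indicators.

IOC_CLIENT_IDS = {
    "context-ai-placeholder.apps.googleusercontent.com",
}

BROAD_SCOPES = {
    "https://mail.google.com/",                              # full mail
    "https://www.googleapis.com/auth/drive",                 # full Drive
    "https://www.googleapis.com/auth/admin.directory.user",  # directory
}

def flag_grants(grants: list[dict]) -> list[dict]:
    """Return the grants to revoke (tokens.delete) and audit."""
    flagged = []
    for g in grants:
        scopes = set(g.get("scopes", []))
        if g.get("clientId") in IOC_CLIENT_IDS:
            flagged.append({**g, "reason": "matches IoC client ID"})
        elif scopes & BROAD_SCOPES:
            flagged.append({**g, "reason": "Allow-All-style scope grant"})
    return flagged
```

Running this across every user in the tenant answers the first two items of Vercel's checklist; the login audit still requires the Workspace activity logs.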
The broader read is the one the Drift OAuth breach rehearsed: OAuth consent flows between AI productivity tools and enterprise Workspace tenants have become a federated identity layer that no one on the security side has a map of. Vercel's perimeter was not breached. Its vendor's was. The cascade is the story. Grip Security and Gartner have both projected, in the last six months, that half of enterprise breaches by year-end will involve a SaaS-to-SaaS identity pivot. Vercel is that projection arriving at a platform whose downstream customers include a large share of the Node.js ecosystem. [3]
Rauch's statement names Next.js, Turbopack, and Vercel's other open-source projects as safe. The source-code and API-key exposure is enterprise-scoped. The supply-chain question — whether repositories Vercel employees maintain on GitHub were touched through the same access — remains open Tuesday. ShinyHunters' two-million-dollar asking price is the forum-market price of a foothold that looked, from the inside, like a single engineer clicking "Allow All" on an AI tool two months ago.
-- THEO KAPLAN, San Francisco