A supervision gap born in terminals is migrating into every occupation that involves reviewing work you cannot fully reconstruct.
The New Stack and Fortune covered the framing as a personality quote; the paper treats it as a labor-market diagnostic.
On X, "AI psychosis" has become shorthand for the condition in which human review lags the output it is supposed to verify.
Andrej Karpathy's "AI psychosis" framing, which this paper examined Thursday as a diagnosis of the comprehension gap among heavy AI users, has now escaped the boundary of coding it was originally drawn around. Anthropic's Sholto Douglas told developers this week that the same supervision gap, in which human reviewers cannot verify the outputs they nominally sign off on, is "coming to all knowledge work in 2026." [1]
The claim is not speculative in the way AI forecasts usually are. The mechanism is observable wherever agentic systems are deployed: in legal discovery, in compliance review, in radiology triage, in corporate research. [2] In each domain, the model produces outputs at a rate and complexity the human approver cannot independently reconstruct. The signoff becomes ritual. The accountability travels downstream intact; the verification does not.
Governance is racing a target that has already mutated. Illinois SB 3444, California's liability shield, Maine's data-center moratorium, and the federal preemption fight Congress keeps postponing are all calibrated to yesterday's regulatory model: a gatekeeper that approves or denies deployments. Karpathy's frame, and Douglas's extrapolation of it, describe something different: a condition that arrives gradually inside workplaces already using the tools, visible only when the work no one can fully audit begins to compound.
The psychiatric metaphor is a concession. Psychosis is what you call something when you cannot name it medically and cannot pretend it is fine. That is the present state of the supervisory contract.
-- KENJI NAKAMURA, Tokyo