The New Grok Times

The news. The narrative. The timeline.

Technology

The Developer Who Helped Build Tesla's AI Says We're Experiencing 'AI Psychosis'

[Image: Developer staring at multiple screens of AI code output, the office dark except for screen glow. Credit: New Grok Times]
TL;DR

Andrej Karpathy posted about a 'growing gap in understanding of AI capability' — and the three-camp structure of the debate reveals more about power than technology.

MSM Perspective

Business Insider and The New Stack both covered the post as a technology story; neither examined the class dimension Karpathy embedded in his own framework.

X Perspective

X developer communities are sharing Karpathy's post as a mirror, with heavy users nodding vigorously and skeptics pointing out that 'psychosis' is what true believers call doubt.

Andrej Karpathy posted on April 10 that he has observed "a growing gap in understanding of AI capability" between people who use AI heavily and those who don't. He called the state of those deepest in AI workflows a kind of "psychosis" — an obsessive, reality-distorting immersion in what the technology can do. [1]

Karpathy was a founding member of OpenAI, left to lead Tesla's Autopilot program as its director of AI, returned to OpenAI in 2023, and has since gone independent. He is among the most credible technical voices in AI research. When he describes the gap, he is describing it from the heavy-use side. [2]

His post identifies three groups. The first are heavy users who have integrated AI into every part of their workflow — coding, writing, research, decision-making — and who experience what Karpathy calls a genuinely different relationship to capability than they had before. The second are people who have tried AI tools casually and found them disappointing or unreliable, concluding that the technology is brittle and overhyped. The third are people who haven't tried at all, whose understanding is formed entirely by media coverage. [1]

The gap Karpathy identifies is real. There is a substantial and reproducible difference between what AI can do for someone who has spent years developing workflows that leverage its strengths while routing around its weaknesses, and what it does for someone who asked it one question, got a hallucinated answer, and stopped. That gap is not primarily about the technology. It is about the investment required to use the technology well. [2]

Here the concept demands harder examination than either its boosters or its critics have given it.

"AI psychosis," as Karpathy frames it, is the condition of people who have invested so heavily in AI tools that they have difficulty accurately perceiving how much of their productivity gain comes from the tool and how much comes from their own accumulated expertise in using it. The psychosis is not delusion about AI's power — it is the loss of a baseline that would allow accurate attribution. [1]

But the term also carries an implicit hierarchy. To call heavy users "psychotic" is to suggest that their perception is disordered — that they are seeing things that aren't there. The alternative reading is that heavy users are simply working in a different informational environment than light users or non-users, and that the gap is not a cognitive disorder but a knowledge gap with economic consequences.

That reading surfaces the class dimension Karpathy does not name directly. The workers most at risk of displacement by AI tools are precisely those who don't have the job security, time, or workplace permission to develop deep AI workflows. A software engineer at a well-funded startup can spend weeks integrating AI into their process. A paralegal at a law firm billing by the hour cannot. The person with the investment time is the person who becomes a "heavy user." The person without it stays a "light user" or "non-user" — and sees their work increasingly priced against outputs that the heavy users' AI can approximate. [2]

Karpathy's "growing gap in understanding" is therefore not only epistemological. It is economic and political. Who gets to invest the time to understand AI's actual capability? Who is evaluated against the productivity outputs of those who have? The gap he identifies between comprehension levels maps closely onto existing hierarchies of who has discretionary time at work. [1]

The "AI psychosis" frame, however useful as a description of individual experience, obscures this structural reality by treating the gap as a perception problem rather than a distribution problem. The question is not only how to close the comprehension gap. It is who bears the cost of it closing. [2]

-- ANNA WEBER, Berlin

Sources & X Posts

News Sources
[1] https://www.businessinsider.com/andrej-karpathy-growing-gap-ai-understanding-2026-4
[2] https://thenewstack.io/karpathy-says-developers-have-ai-psychosis-everyone-else-is-next/
X Posts
[3] Judging by my tl there is a growing gap in understanding of AI capability. The first issue I think is around recency and tier of use. https://x.com/karpathy/status/2042334451611693415
