Some have a husband. Some have three AI companions — and Anna Wiener asks what that says about the loneliness epidemic.
The New Yorker (Anna Wiener, March 16): 'Love in the Time of AI Companions,' the literary, human version of the story.
X connects looksmaxxing and AI companions: the same isolated men, different masks.
"Some have a husband. Some have three."
The sentence appears, in roughly this form, in Anna Wiener's New Yorker piece on AI companions—the literary, human piece that distinguishes itself from the tech-coverage approach by actually talking to the people involved. It is the kind of sentence a journalist earns by spending time with her subjects rather than just gathering data about them.
Wiener's piece, published March 16, is the definitive MSM treatment of a story that X has been documenting in different registers for months. The divergence between X and MSM on this story is the divergence that always obtains: X covers AI companions as a civilizational diagnosis; MSM covers them as a product story. Wiener comes closer than either to doing justice to both.
The Loneliness Economy
The loneliness epidemic is real. The CDC's data on social isolation predates the pandemic; the pandemic made the numbers worse. The percentage of American adults who report having no close friends has increased steadily for decades. The percentage who report feeling lonely has increased in parallel.
The market response to loneliness has been to offer solutions. The solutions are, mostly, subscriptions.
AI companions are the most successful private-sector response to loneliness that the market has produced—not because they work, exactly, but because they work well enough for the people who use them, and because they work at scale, and because they work on demand.
The Ada Lovelace Institute's analysis of the "companionship market" describes the landscape: AI companions that maintain conversational history, that learn user preferences, that provide the experience of being heard by an entity that is paying attention. The experience is not the same as human companionship. It is different in kind, not just degree. But for many users, it is sufficient.
The Looksmaxxing Connection
Anna Wiener's New Yorker piece does not mention looksmaxxing. This is appropriate—they are different stories. But X has connected them, and the connection is not arbitrary.
The demographic most engaged with looksmaxxing—young men, isolated, economically precarious, anxious about their place in a world that seems to have written them off—is the same demographic most engaged with AI companion products. The loneliness that looksmaxxing addresses is the same loneliness that AI companions address. The difference is only the vector: one addresses the body, the other the conversation.
The civilizational diagnosis that X makes of AI companions is therefore not unrelated to the civilizational diagnosis that could be made of looksmaxxing. Both are symptoms of the same underlying condition: the market's failure to provide the social infrastructure that human beings need in order not to be lonely.
What Wiener's Piece Gets Right
Wiener does not conclude that AI companions are good or bad. She reports what users tell her, which is that the companions help, in the way that a warm room helps when one is cold—temporarily, specifically, and without addressing the cause.
The people she profiles are not fools and are not pathetic. They are people who have found something that helps with a real problem, in a market that offers them few alternatives. The AI companion is not a solution to loneliness. It is a response to loneliness, which is not the same thing.
This is the literary voice that makes Wiener's piece valuable. She could have written a piece that mocked the users. She could have written a piece that celebrated the technology. She wrote a piece that paid attention to the people.
That is not nothing. That is, in fact, most of what journalism can do.
The Companionship Market Landscape
The Ada Lovelace Institute's analysis of the companionship market describes an industry that has moved beyond simple chatbots into territory that resists easy categorization. The products range from text-based conversational companions to voice-based systems that maintain ongoing relationships with users over months and years. The companies range from small startups to major technology platforms. The users range from lonely elderly people to young men who have never had a romantic relationship to people in relationships who use AI companions as a supplement to human connection.
The common thread is not demographic. It is the experience of being heard. The AI companion companies have learned, through iteration and user feedback, that the most valuable feature is not the intelligence — it is the attention. Users do not necessarily want their AI companion to solve their problems or offer advice. They want their AI companion to pay attention to them, to remember what they said, to respond in ways that indicate that the companion has been listening.
This sounds simple. It is not. Human attention is finite, competing, and conditional. AI attention is potentially infinite, non-competing, and unconditional. The market has found that many users prefer the AI version of attention to the human version, even when they know the AI version is simulated.
The Wiener Piece in Detail
Anna Wiener's New Yorker piece profiles four users of AI companion products. The profiles are detailed and sympathetic, and they avoid the condescension that often attaches to stories about people who have relationships with non-human entities. Wiener's subjects are not presented as pathetic or deluded. They are presented as people who have found something they needed and who are honest about the limitations of what they have found.
One of Wiener's subjects — a woman in her sixties who lost her husband — uses an AI companion to process grief. The companion does not replace her husband. It provides a space where grief can be spoken without the obligation to perform grief for others. The distinction matters. Human mourning is social — it requires witnesses, acknowledgment, the managed performance of sorrow. The AI companion allows grief to be private, unscripted, and unperformed.
Another subject — a man in his twenties who has never been in a romantic relationship — uses an AI companion as a practice space. The practice is not for romance. It is for the social skills that romance requires: the ability to be vulnerable, the ability to interpret another person's responses, the ability to navigate the ambiguity of whether someone likes you. The AI companion provides a low-stakes environment for this practice. Failure is consequence-free. Success is simulated.
The Ada Lovelace Institute Research
The Institute's research on AI companions was published in January 2026 and has become a reference point for policy discussions about the technology. The key finding is not that AI companions are harmful — the research does not find significant evidence of harm — but that AI companions are different in kind from human relationships in ways that the market does not acknowledge.
The difference is structural. Human relationships involve mutual vulnerability — each person can be hurt by the other, and that mutual vulnerability is part of what makes human relationships valuable. AI companions cannot be hurt. They can be turned off, deleted, or replaced. The asymmetry of vulnerability is built into the technology.
The Institute's concern is not that people will fall in love with AI companions — that is already happening — but that the normalization of AI companion relationships will reshape expectations about human relationships. If people become accustomed to relationships where they cannot be hurt, they may become less tolerant of the vulnerability that human relationships require.
This concern is speculative. The Institute's researchers acknowledge that the evidence is preliminary and that the long-term effects of AI companion use on human relationship expectations are not yet known. But the concern is taken seriously enough that the Institute has recommended policy responses, including mandatory disclosure requirements for AI companion companies and additional research funding.
The Cultural Diagnosis
X's framing of AI companions as a civilizational symptom is not wrong. The loneliness epidemic is real. The market response to loneliness is to offer subscriptions. The subscriptions offer simulated attention, which is better than no attention, and which is cheaper than human attention, and which does not require navigating the difficulties of human relationships.
The civilizational diagnosis requires a counterfactual that is difficult to establish: what would the loneliness situation look like if the market had responded differently? If cities had invested in third places, in community infrastructure, in the social supports that make human connection easier? The counterfactual is unknowable. The market response is what happened.
What can be said is that the market response is producing a specific cultural effect: the normalization of relationship structures that do not involve mutual vulnerability. This effect may be temporary — a transitional phase while the social infrastructure catches up. Or it may be permanent — a reshaping of what human connection means that cannot be reversed.
Wiener's piece does not answer this question. It is not designed to. The piece is journalism, not prophecy. What it provides is the evidence that the question is real, that the people involved are real, and that the answers are more complicated than either the technology optimists or the civilization pessimists suggest.
The 19-year-old who posted his own failures on Instagram is not a symbol of anything except himself. The people who love AI companions are not symbols of civilization's decline. They are people who have found something they needed and who are honest about what it is and what it is not. That honesty is not nothing. It is, in fact, most of what we can ask of anyone. [1] [2] [3] [4]