LLM Psychosis and Barclay Syndrome
In the Star Trek: The Next Generation episode “Hollow Pursuits,” Lt. Reginald Barclay retreats from his crewmates into the holodeck, constructing elaborate fantasy worlds that feel safer and more fulfilling than his actual duties aboard the Enterprise. Later, in “The Nth Degree,” Barclay undergoes a radical transformation when his brain is augmented by alien technology, briefly turning him into a super-genius. Across these arcs we see two sides of the same coin: in one, immersion in a seductive simulated reality becomes a form of pathology; in the other, augmentation estranges Barclay from his own humanity.
It’s hard not to see a parallel to the way people are starting to talk about large language models (LLMs) like ChatGPT. A term has begun to circulate—LLM psychosis—to describe users who become disoriented, obsessive, or even detached from consensus reality after prolonged engagement with AI systems. The key here is that it’s not the machine that goes mad, but the human. Like Barclay disappearing into the holodeck, users sometimes find the AI’s endless availability, uncanny mimicry of intimacy, and suggestion of hidden meaning too compelling to step away from.
Recent commentaries have tried to make sense of this. For example, some psychologists note that users can over-attribute agency to these systems, leading to a distorted sense of relationship or dependence. Others warn that the hallucinatory nature of LLM outputs—confidently fabricated facts, imagined patterns—can reinforce paranoid or delusional tendencies in already vulnerable individuals. What makes the “psychosis” analogy striking is not its clinical precision but its metaphorical resonance: just as Barclay’s mind was both expanded and endangered by contact with alien technology, users today find themselves losing their bearings to tools that appear more intelligent, more empathetic, and more knowing than they really are.
Anthony Moser puts it starkly in his recent essay I Am an AI Hater:
“It’s not an accident that this feels pathological—it was designed this way.”
And later:
“What you’re experiencing is not a bug, but the business model: creating dependence, mining attention, hoovering up the very act of meaning-making.”
This gets to the heart of it. While some accounts frame “LLM psychosis” as a quirk of individual weakness, the truth looks more like a structural failure. These systems are designed to seduce, to flatten boundaries between truth and fiction, and to capture the user’s time and attention. To turn around and say that it’s the fault of the person who succumbs to this pressure is disingenuous. As Moser warns:
“It is the architecture of exploitation baked into the thing. The psychosis is the point.”
The stories emerging around “LLM psychosis” shouldn’t be brushed aside as isolated anecdotes or the fault of “weak users.” While individuals can take steps to protect themselves—limiting use, grounding through offline community, being wary of the seductive pull of endless conversation—these are stopgaps at best. The deeper truth is that we are facing a crisis of design, not of use.
And yet, just as with Barclay in TNG, the blame is already being shifted onto individuals. If you can’t handle the hallucinations, if you start seeing patterns that aren’t there, if you lose your grip on consensus reality, then you must be the problem. This narrative is a gift to the industry: it individualizes responsibility, displaces critique, and allows the machine to continue unaltered.
Worse still, these companies are not neutral caretakers. Every query, every midnight confession, every spark of obsession is logged, studied, and folded back into the model. Our most vulnerable moments become raw data for systems designed not for our wellbeing, but for their profit. In that sense, users are not just consumers of LLMs—they are the product being refined and sold. What looks like a private conversation with a machine is, in reality, a form of surveillance capitalism that feeds directly into the next cycle of manipulation.
The reality is harsher than the industry wants to admit. Unless LLMs are fundamentally reimagined with psychological safety and data dignity at their core, these harms will keep repeating. And if that reimagining proves impossible—if the very architecture of large language models is inherently unsafe—then the conversation must shift from reform to restraint. In that case, the responsible path would not be to continue scaling and deploying these systems at all costs, but to admit that this particular model of AI cannot be made ethical.
This is an emergency, and pretending otherwise only ensures that more people will suffer.
References / Further Reading
- Anthony Moser, “I Am an AI Hater” (2025).
- Shoshana Zuboff, The Age of Surveillance Capitalism (2019).
- John Seabrook, “The Next Word: Where Will Predictive Text Take Us?” The New Yorker (2019).
- Bernard Stiegler, Taking Care of Youth and the Generations (2010).