LLM Psychosis and Barclay Syndrome

In the Star Trek: The Next Generation episode “Hollow Pursuits,” Lt. Reginald Barclay retreats from his crewmates into the holodeck, constructing elaborate fantasy worlds that feel safer and more fulfilling than his actual duties aboard the Enterprise. Later, in “The Nth Degree,” Barclay undergoes a radical transformation when his brain is augmented by alien technology, briefly turning him into a super-genius. Across these arcs, we see two sides of the same coin: one where immersion in a seductive simulated reality becomes a form of pathology, and another where augmentation leads to estrangement from one’s own humanity.

It’s hard not to see a parallel to the way people are starting to talk about large language models (LLMs) like ChatGPT. A term has begun to circulate—LLM psychosis—to describe users who become disoriented, obsessive, or even detached from consensus reality after prolonged engagement with AI systems. The key here is that it’s not the machine that goes mad, but the human. Like Barclay disappearing into the holodeck, users sometimes find the AI’s endless availability, uncanny mimicry of intimacy, and suggestion of hidden meaning too compelling to step away from.

Recent commentaries have tried to make sense of this. For example, some psychologists note that users can over-attribute agency to these systems, leading to a distorted sense of relationship or dependence. Others warn that the hallucinatory nature of LLM outputs—confidently fabricated facts, imagined patterns—can reinforce paranoid or delusional tendencies in already vulnerable individuals. What makes the “psychosis” analogy striking is not its clinical precision but its metaphorical resonance: just as Barclay’s mind was both expanded and endangered by contact with alien technology, users today can lose their bearings when faced with tools that appear more intelligent, more empathetic, and more knowing than they really are.

Anthony Moser puts it starkly in his recent essay I Am an AI Hater:

“It’s not an accident that this feels pathological—it was designed this way.”

And later:

“What you’re experiencing is not a bug, but the business model: creating dependence, mining attention, hoovering up the very act of meaning-making.”

This gets to the heart of it. While some accounts frame “LLM psychosis” as a quirk of individual weakness, the truth looks more like a structural failure. These systems are designed to seduce, to flatten boundaries between truth and fiction, and to capture the user’s time and attention. To turn around and say that it’s the fault of the person who succumbs to this pressure is disingenuous. As Moser warns:

“It is the architecture of exploitation baked into the thing. The psychosis is the point.”

The stories emerging around “LLM psychosis” shouldn’t be brushed aside as isolated anecdotes or the fault of “weak users.” While individuals can take steps to protect themselves—limiting use, grounding through offline community, being wary of the seductive pull of endless conversation—these are stopgaps at best. The deeper truth is that we are facing a crisis of design, not of use.

And yet, just as with Barclay in TNG, the blame is already being shifted onto individuals. If you can’t handle the hallucinations, if you start seeing patterns that aren’t there, if you lose your grip on consensus reality, then you must be the problem. This narrative is a gift to the industry: it individualizes responsibility, displaces critique, and allows the machine to continue unaltered.

Worse still, these companies are not neutral caretakers. Every query, every midnight confession, every spark of obsession is logged, studied, and folded back into the model. Our most vulnerable moments become raw data for systems designed not for our wellbeing, but for their profit. In that sense, users are not just consumers of LLMs—they are the product being refined and sold. What looks like a private conversation with a machine is, in reality, a form of surveillance capitalism that feeds directly into the next cycle of manipulation.

The reality is harsher than the industry wants to admit. Unless LLMs are fundamentally reimagined with psychological safety and data dignity at their core, these harms will keep repeating. And if that reimagining proves impossible—if the very architecture of large language models is inherently unsafe—then the conversation must shift from reform to restraint. In that case, the responsible path would not be to continue scaling and deploying these systems at all costs, but to admit that this particular model of AI cannot be made ethical.

This is an emergency, and pretending otherwise only ensures that more people will suffer.


References / Further Reading

  • Anthony Moser, I Am an AI Hater (2025).
  • Shoshana Zuboff, The Age of Surveillance Capitalism (2019).
  • John Seabrook, “The Next Word: Where Will Predictive Text Take Us?” The New Yorker (2019).
  • Bernard Stiegler, Taking Care of Youth and the Generations (2010).