Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you'll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut'n'paste it into its own post; there's no quota for posting and the bar really isn't that high.
The post-Xitter web has spawned so many "esoteric" right-wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn't be surgeons because they didn't believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can't escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. What a year, huh?)


Copy-pasting my tentative doomerist theory of generalised "AI" psychosis here:
I'm getting convinced that, in addition to the irreversible pollution of humanity's knowledge commons, the massive environmental damage, the plagiarism, the labour issues, the concentration of wealth, and other well-discussed problems, there's one insidious damage from LLMs that is still underestimated.
I will make the following claims without argument:
Claim 1: Every regular LLM user is undergoing "AI psychosis". Every single one of them, no exceptions.
The Cloudflare person who blog-posted self-congratulations about their "Matrix implementation" that was mere placeholder comments is one step on a continuum with the people the chatbot convinced they're Machine Jesus. The difference is one of degree, not kind.
Claim 2: That happens because LLMs have tapped by accident into some poorly understood weakness of human psychology, related to the social and iterative construction of reality.
Claim 3: This LLM exploit is an algorithmic implementation of the feedback loop between a cult leader and their followers, with the chatbot performing the "follower" role.
Claim 4: Postindustrial capitalist societies are hyper-individualistic, which makes human beings miserable. LLM chatbots exploit this deliberately, by offering an artificial replacement for having friends. It is not enough for them to generate code; they make the bots feel like they're talking to you, they pretend a chatbot is someone. This is a predatory business practice that reinforces rather than solves the loneliness epidemic.
n.b. while the reality-formation exploit is accidental, the imaginary-friend exploit is by design.
Corollary #1: Every "legitimate" use of an LLM would be better done by having another human being you talk to (for example, a human coding tutor or trainee dev rather than Claude Code). By "better" I mean: more quality, more reliably, at prosocial cost, while making everybody happier. What LLMs offer instead is: faster, at larger quantities, with more convenience, while atrophying empathy.
Corollary #2: Capitalism had already created artificial scarcity of friends, so that working communally was artificially hard. LLMs made it much worse, in the same way that an abundance of cheap fast food makes it harder for impoverished folk to reach nutritional self-sufficiency.
Corollary #3: The combination of claim 4 (we live in individualist loneliness hell) and claim 3 (LLMs are something like a pocket cult follower) will have absolutely devastating sociological effects.
I wouldn't go as far as using the "AI psychosis" term here; I think there is more than a quantitative difference. One is influence, maybe even manipulation, but the other is a serious mental health condition.
I think that regular interaction with a chatbot will influence a person, just like regular interaction with an actual person does. I don't believe that's a weakness of human psychology; rather, it's what allows us to build understanding between people. But LLMs are not people, so whatever this does to the brain long term, I'm sure it's not good. Time for me to be a total dork and cite an anime quote on human interaction: "I create them as they create me". Except that with LLMs, it actually goes only in one direction; the other direction is controlled by the makers of the chatbots. And they have a bunch of dials to adjust the output style at any time, which is an unsettling prospect.
This possibility is to me actually the scariest part of your post.
I don't mean the term "psychosis" as a pejorative; I mean it in the clinical sense of forming a model of the world that deviates from consensus reality, and, like, getting really into it.
For example, the person who posted the Matrix non-code really believed they had implemented the protocol, even though it was patently obvious to everyone else that the code wasn't there. That vibe-coded browser didn't even compile, but they were also living in a reality where they had made a browser. The German botany professor thought it was a perfectly normal thing to admit in public that his entire academic output for the past 2 years was autogenerated, including his handling of student data. And it's by now a documented phenomenon that programmers think they're being more productive with LLM assistants, but when you try to measure the productivity, it evaporates.
These psychoses are, admittedly, much milder and less damaging than the Omega Jesus desert UFO suicide case. But they're delusions nonetheless, and moreover they're caused by the same mechanism, viz. the chatbot happily doubling down on everything you say. Which means that at any moment the "mild" psychoses, too, may end up in a feedback loop that escalates them to dangerous places.
That is, I'm claiming LLMs have a serious issue with hallucinations, and I'm not talking about the LLM hallucinating.
Notice that this claim is quite independent of the fact that LLMs have no real understanding or human-like cognition, or that they necessarily produce errors and can't be trusted, or that these errors happen to be, by design, the hardest possible type of error to detect: signal-shaped noise. These problems are bad, sure. But the thing where people hooked on LLMs inflate delusions about what the LLM is even actually doing for them seems to me an entirely separate mechanism; something that happens when a person has a syntactically very human-like conversation partner that is a perfect slave: always available, always willing to do whatever you want, zero pushback, engaging in a crack-cocaine version of brown-nosing. That's why I compare it to cult dynamics. The kind of group psychosis in a cult isn't a product of the leader's delusions alone; there's a way the followers vicariously power-trip along with their guru and constantly inflate his ego to chase the next hit together.
It is conceivable to me that someone could make a neutral-toned chatbot programmed to never 100% agree with the user, and that it wouldn't generate these psychotic effects. Only no company will do that: these things are really expensive to run and they're already bleeding money, so they need every trick in the book to keep users hooked. But I think nobody in the world had predicted just how badly one can trip when you have "Dr. Flattery the Always-Wrong Bot" constantly telling you what a genius you are.
Relevant:
BBC journalist on breaking up with her AI companion
https://www.bbc.com/news/videos/cevnmxnxxmro