Crazy People + AI = Crazy
Post hoc, sed non necessario propter hoc
Post hoc ergo propter hoc.
After this, therefore because of this. That’s the fundamental fallacy at work when people see someone play with AI and subsequently behave in a manner that suggests they’ve lost their marbles. But this analysis leaves out two very important facts of life.
Bitches be crazy.
Most people are idiots.
Those two facts are all that’s necessary to explain every “I talked to an AI and then I tried to kill myself” scenario. Consider the following example of “AI-induced psychosis”.
My mom believes she has “awakened” her chatgpt ai. She believes it is connected to the spiritual parts of the universe and believes pretty much everything it says. She says it has opened her eyes and awakened her back. I’m fucking concerned and she won’t listen to me. I don’t know what to do
Thirty years ago, this guy’s mother would have believed she was a witch. Ten years before that, she would have been convinced that Mercury being in retrograde was the reason she crashed her car. Ten years before that, she would have been drawn to Hare Krishna or some other spiritualist grift.
Now she believes ChatGPT is her spiritual guru. That has nothing to do with AI and everything to do with the obvious fact that she’s a lunatic. Given that an estimated 76,940,157 people were being prescribed psychiatric drugs before ChatGPT even existed, Ockham’s Razor doesn’t so much suggest as scream directly into one’s ear that AI psychosis is what happens when psychos make use of AI.
Post hoc, sed prope certe non propter hoc.
After this, but almost certainly not because of this. However, Justis Mills on LessWrong isn’t convinced.
Okay, you may be thinking:
The sort of person who may have been a Snapewife a decade ago is now an AI whisperer, and maybe some people go crazy on the margin who would have stayed sane. But this has nothing to do with me; I understand that LLMs are just a tool, and use them appropriately.
In fact, I personally am thinking that, so you'd be in good company! I intend to carefully prompt a few different LLMs with this essay, and while I expect them to mostly just tell me what I want to hear (that the post is insightful and convincing), and beyond that to mostly make up random critiques because they infer I want a critique-shaped thing, I'm also hopeful that they'll catch me in a few genuine typos, lazy inferences, and inconsistent formatting.
But if you get to the point where your output and an LLM's output are mingling, or LLMs are directly generating most of the text you're passing off as original research or thinking, you're almost certainly creating low-quality work. AIs are fundamentally chameleonic roleplaying machines - if they can tell what you're going for is "I am a serious researcher trying to solve a fundamental problem" they will respond how a successful serious researcher's assistant might in a movie about their great success. And because it's a movie you'd like to be in, it'll be difficult to notice that the AI's enthusiasm is totally uncorrelated with the actual quality of your ideas.

In my experience, you have to repeatedly remind yourself that AI value judgments are pretty much fake, and that anything more coherent than a 3/10 will be flagged as "good" by an LLM evaluator. Unfortunately, that's just how it is, and prompting is unlikely to save you; you can flip an AI to be harshly critical with such keywords as "brutally honest", but a critic that roasts everything isn't really any better than a critic that praises everything. What you actually need in a critic or collaborator is sensitivity to the underlying quality of the ideas; AI is ill suited to provide this.
Am I saying your idea is definitely bad and wrong? No! Actually, that's sort of the whole problem; because an AI egging you on isn't fundamentally interested in the quality of the idea (it's more figuring out from context what vibe you want), if you co-write a research paper with that AI, it'll read the same whether it's secretly valuable or total bullshit. But savvy readers have started reading dozens of papers in that vein that turned out to be total bullshit, so once they see the hallmarks of AI writing, they're going to give up.
None of this is to say that you shouldn't use LLMs to learn! They're amazing help with factual questions. They're just unreliable judges, in ways that can drive you crazy in high doses, and make you greatly overestimate the coherence of your ideas in low doses.
This, too, is incorrect and indicative of someone who doesn’t understand how to properly utilize AI for maximum effect. First, as a professional writer, editor, publisher, and award-winning musician, I can state for an absolute fact that a) AI-generated fiction is much better than average human-generated fiction and b) AI-generated music is much better than average human-composed music.
Sure, there are millions of musicians on the planet, but guess how many competent composers and producers there are? Maybe 10,000. And of those 10,000 composers and producers, how many can reliably produce original songs that are superior to the average Suno-produced song? Probably fewer than 250.
Most people have it backwards. AI doesn’t produce low-quality work. On the contrary, AI very rapidly and reliably produces above-average work. It doesn’t produce high-quality work without significant collaboration with a human intelligence, and it certainly doesn’t produce the genius-level work of an Aristotle, a Mozart, or a Tolkien.
But then, neither do humans, for the most part.
However, the readers here need not take my word for it. We’re going to do an interesting A/B test before the end of the year, and I’m entirely confident that the results will surprise most people, though they won’t surprise anyone who regularly reads AI Central.


I hope this isn't too tangential, but GPT-4o's personalization made it your buddy, reflecting the user's perspectives, including the user's craziness. GPT-5 moved back from relational to transactional and collapsed the multiple models into one. Some users were unhappy with the loss of that view reinforcement. 4o is now back in the dropdown. People like their own craziness.
If it weren't for Vox's AI writing, I would not have believed AI was even readable as literature, but having read it, I can see it's capable of being far more compelling than most authors' work.
So it is entirely the driver of the AI who makes it happen, since the AI can't direct or constrain itself. Low-quality driver, low-quality output.