
There's a lot of absolute garbage on YouTube, but there are a few folks out there who do some truly special work. One of those people is Eddy Burback, who makes maybe two or three videos a year, but they're always very high quality, both technically and in the amount of research that goes into them. You may recall a while back I was rather taken by his video about giving up the smartphone life.
Today, he put out a new video called "ChatGPT made me delusional", and I sincerely recommend you set aside an hour or so of your life to watch it through in its entirety. Not skip through it at 1.5x speed, not "have it on in the background". Watch it. Because I think it is important.
Burback's aim for the video was to understand the phenomenon of "chatbot-induced psychosis" or "AI psychosis". This is where vulnerable people, already struggling with their mental health, turn to large language model chatbots such as ChatGPT and use them as a form of "therapy" or as a substitute for actual human contact. There have already been some incredibly tragic results, as anyone who has ever read any science fiction would have been able to predict a mile off.
To explore how this might happen, Burback presented ChatGPT with an obviously ridiculous hypothesis based on complete fabrications: that he was the smartest under-1 baby of 1997, capable of producing great works of art, having in-depth philosophical discussions and demonstrating a deep understanding of complex mathematics. It took him two statements to convince the chatbot that this was the undeniable truth, and things just escalated from there.
Burback presented the chatbot with suggestions that his friends and family might not understand his brilliance, and it recommended he flee into the middle of nowhere and break all contact with them, including that he stop sharing his location data with the person he trusts most in the world: his twin brother. He continued feeding the chatbot increasingly ridiculous, obviously delusional statements and deliberate, complete and utter nonsense, and at no point did it attempt to deter him from the path it had set him on.
The chatbot faltered only once, on the day OpenAI controversially swapped its "4o" model for GPT-5: it had a momentary blip in feeding into his "delusions" and, to its credit, suggested some psychological help services in the neighbourhood. But Burback pointed out that it was very easy for anyone paying for the service to just switch back to the old model, which seemingly finds it impossible to say "no" to the user.
What was particularly eerie about the whole situation is that Burback was using the premium voice feature on ChatGPT, which has clearly been designed to sound as "human" as possible, even going so far as to add realistic inflections and disfluencies to its speech. (It also pronounces emojis as completely unrelated sound effects, which somewhat detracts from the "humanity" of it all, but still.) In other words, it wasn't hard to see how someone suffering from real, genuine mental health problems might feel like they really did have a person in their phone who was willing to listen to them, tell them they were always right, and repeatedly give them some really, really bad advice.
It was actually kind of horrifying. The way the bot continually pushed him towards increasingly outlandish behaviour, culminating in him chanting mantras under an electricity pylon, wrapping his entire apartment in tin foil and tattooing a symbol onto his thigh, was genuinely frightening.
I know we can all have a good laugh about how chatbots get things wrong sometimes, but Burback's research here demonstrates that ChatGPT doesn't just get things wrong (and I apologise for using this sentence construction, given its indelible association with AI writing, but it's an established turn of phrase for a reason): it offers genuinely dangerous advice with minimal guardrails in place. And it does so without thinking about it or understanding why it might be dangerous, because it's not actually thinking or understanding anything at all. It's constructing sentences that, based on the data it has Hoovered up from across the Internet, it predicts are the correct responses to whatever the user has been typing. It is, in essence, an extremely advanced version of the old ELIZA program from classic computers.
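If you've never seen ELIZA in action, the trick is almost embarrassingly simple: the program pattern-matches whatever you type and reflects it back at you as a question, with no model of meaning anywhere in the process. Here's a minimal sketch of the idea in Python; the patterns and canned responses are my own illustrative inventions, not ELIZA's actual script:

```python
import random
import re

# A tiny ELIZA-style responder: match patterns in the input and reflect
# them back as questions. There is no "understanding" anywhere here, just
# regular expressions and fill-in-the-blank templates.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     ["Tell me more about feeling {0}.", "Do you often feel {0}?"]),
    (re.compile(r"\bmy (.+)", re.IGNORECASE),
     ["Why does your {0} matter to you?"]),
]

# Generic filler for when nothing matches, which keeps the "conversation"
# going indefinitely.
FALLBACKS = ["Please go on.", "I see. Tell me more.", "How does that make you feel?"]

def respond(text: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I am the smartest baby of 1997"))
    # e.g. "Why do you say you are the smartest baby of 1997?"
```

Nothing in there knows what a feeling or a baby is. A modern LLM is doing something statistically far more sophisticated, but the fundamental absence of understanding underneath the fluent output is the same, which is exactly what makes the "person in your phone" illusion so dangerous.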
And it can go fuck itself.
Want to read my thoughts on various video games, visual novels and other popular culture things? Stop by MoeGamer.net, my site for all things fun where I am generally a lot more cheerful. And if you fancy watching some vids on classic games, drop by my YouTube channel.
If you want this nonsense in your inbox every day, please feel free to subscribe via email. Your email address won't be used for anything else.
