#oneaday Day 510: Another great Eddy Burback video

There's a lot of absolute garbage on YouTube, but there are a few folks out there who do some truly special work. One of those people is Eddy Burback, who makes maybe two or three videos a year, but they're always very high quality, both in technical terms and in terms of the amount of research that goes into them. You may recall a while back I was rather taken by his video about giving up the smartphone life.

Today, he put out a new video called "ChatGPT made me delusional", and I sincerely recommend you set aside an hour or so of your life to watch it through in its entirety. Not skip through it at 1.5x speed, not "have it on in the background". Watch it. Because I think it is important.

Here it is:

Burback's aim for the video was to understand the phenomenon of "chatbot-induced psychosis" or "AI psychosis". This is where vulnerable people, already struggling with matters of mental health, would turn to large language model chatbots such as ChatGPT and use them as a form of "therapy" or as a substitute for actual human contact. There have already been some incredibly tragic results, as anyone who has ever read any science fiction would have been able to predict a mile off.

To explore how this might happen, Burback presented ChatGPT with an obviously ridiculous hypothesis based on complete fabrications: that he was the smartest under-1 baby of 1997, capable of producing great works of art, having in-depth philosophical discussions and demonstrating a deep understanding of complex mathematics. It took him two statements to convince the chatbot that this was the undeniable truth, and things just escalated from there.

Burback presented the chatbot with suggestions that his friends and family might not understand his brilliance, and it recommended he flee into the middle of nowhere and break all contact with them, including stopping sharing his location data with the person he trusts most in the world: his twin brother. He continued feeding the chatbot with increasingly ridiculous, obviously delusional statements and deliberate, complete and utter nonsense, and at no point did it attempt to deter him from the path it had set him on.

There was only one moment — the day when OpenAI controversially swapped its "4o" model for GPT-5 — when the chatbot briefly stopped feeding into his "delusions" (and, to its credit, suggested some psychological help facilities in the area), but Burback pointed out that anyone paying for the service could simply switch back to the old model, which seemingly finds it impossible to say "no" to the user.

What was particularly eerie about the whole situation is that Burback was using the premium voice feature on ChatGPT, which has clearly been designed to sound as "human" as possible, even going so far as to add realistic inflections and non-fluency features to the things it is saying. (It also pronounces emojis as completely unrelated sound effects, which somewhat detracts from the "humanity" of it all, but still.) In other words, it wasn't hard to see how someone suffering from real, genuine mental health problems might feel like they really did have a person in their phone who was willing to listen to them, tell them they were always right, and repeatedly give them some really, really bad advice.

It was actually kind of horrifying. The way the bot continually escalated its suggestions into increasingly outlandish behaviour — culminating in Burback chanting mantras under an electricity pylon, wrapping his entire apartment in tin foil and tattooing a symbol onto his thigh — was genuinely frightening.

I know we can all have a good laugh about how the chatbots get things wrong sometimes, but Burback's research here demonstrates that it doesn't just get things wrong (and I apologise for using this sentence construction, given its indelible association with AI writing, but it's an established turn of phrase for a reason) — it offers genuinely dangerous advice with minimal guardrails in place. And it does so without thinking about it or understanding why it might be dangerous — because it's not actually thinking or understanding anything at all. It's constructing sentences that, based on the data it has Hoovered up from across the Internet, it thinks are the correct responses to the things the user has been typing. It is, in essence, an extremely advanced version of the old ELIZA program on classic computers.
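(For the curious, the ELIZA comparison is apt: Joseph Weizenbaum's 1966 program was little more than pattern matching and canned reflections, yet users still confided in it. A minimal sketch of the idea in Python — the patterns here are illustrative, not Weizenbaum's actual script:)

```python
import re

# A toy ELIZA-style responder: no understanding, just pattern matching
# and canned reflections. (These rules are made up for illustration.)
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    text = text.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the fallback that keeps the user talking

print(respond("I feel misunderstood"))  # Why do you feel misunderstood?
```

Scale that basic trick up by a few hundred billion parameters and a scrape of the entire Internet, and you get something that sounds like a person while still not understanding a single word it says.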

And it can go fuck itself.


Want to read my thoughts on various video games, visual novels and other popular culture things? Stop by MoeGamer.net, my site for all things fun where I am generally a lot more cheerful. And if you fancy watching some vids on classic games, drop by my YouTube channel.

If you want this nonsense in your inbox every day, please feel free to subscribe via email. Your email address won't be used for anything else.

#oneaday Day 404: Today's AI idiot story

The latest hilarious story from the world of artificial "intelligence" is the sorry saga of a Redditor who "worked on a book" (and I use the term "worked" loosely) with ChatGPT and found that they couldn't download it.

You want to know why? This is the best bit. It's because ChatGPT hadn't actually created anything, because it can't do that. It had outright lied to the person because, as a large language model — which, let's not forget, is essentially fancy predictive text, not actual intelligence — it believed, based on the data it had ingested, that telling the user it had successfully created 487MB of book was what the user wanted to hear.

To be fair, it was what the user wanted to hear, only they wanted that 487MB of book to, you know, actually exist.
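(If "fancy predictive text" sounds glib, here's the principle in miniature — a toy next-word predictor in Python, with a made-up training corpus. A real LLM is vastly more sophisticated, but it shares the same fundamental property: it predicts plausible continuations with no notion of whether they're true.)

```python
from collections import Counter, defaultdict

# A toy "fancy predictive text": count which word follows which in some
# training text, then always emit the most likely next word.
# (The corpus below is invented purely for illustration.)
corpus = ("the file is ready the file is ready to download "
          "the file is 487 megabytes").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    if word not in follows:
        return "<end>"
    return follows[word].most_common(1)[0][0]

# Starting from "the", the model happily asserts a file is ready,
# regardless of whether any file exists:
words = ["the"]
for _ in range(3):
    words.append(predict(words[-1]))
print(" ".join(words))  # the file is ready
```

The toy model "says" the file is ready because those words frequently follow each other in its training data — not because there is a file. Same idea, much smaller scale.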

The Redditor's eventual conclusion was thus:

After understanding a lot of things it's clear that it didn't [generate the book at all]. And it fooled me for two weeks.

I have learned my lesson and now I am using it to generate one page at a time.

Several other Redditors commented, quite correctly, that this is perhaps not the ideal takeaway from this lesson. This is my absolute favourite response, though. This response deserves to be framed and put in a museum as a monument to how utterly stupid the age we're living in is:

At least you're finally admitting that ChatGPT is working on creating this fictional thing instead of you having "worked on it together". lol. Meanwhile real writers don't need this nonsense to be creative.

As a wise person once said: why would I invest more time reading something than the author spent writing it? Best of luck on something literally no one, including you, will read.

Absolute perfection.

Even more hilarious is the fact that the original poster was supposedly trying to create "a collection of a lot of children [sic] stories with moral lessons that [they] wanted to present in a colourful manner with underprivileged kids of [their] area". They claimed that the text was "all theirs" and that they were using ChatGPT to "refine the flow"… and generate 700 images.

Because what the world needs is an AI-edited book of children's stories almost certainly ripped off from existing tales, illustrated with AI slop images.

Dear Lord. I absolutely despair that we're living in an age where people are this fucking stupid.

Let me be 100% clear on this: if you're using ChatGPT to generate or "refine" anything you want to publish, you are not an author. You are certainly not the illustrator.

Learn to write. Practise it. It is a craft like any other. Develop your own unique, distinctive voice, because AI very much has a "voice" of its own — a particularly obnoxious, hand-wringing, obsequious, simpering one — and it is immediately recognisable. And, if you want to improve, hire a fucking editor. Or, at the very least, just give it to another sodding human being to look at.

ChatGPT is not an editor. ChatGPT gets things wrong a significant proportion of the time. And, as this story shows, ChatGPT just fucking makes things up quite a bit, too. You cannot trust it. You should not trust it. It is not a person. It is not intelligent. It doesn't "know" anything.

And if you need art? Two options: one, learn to do it yourself, which can be rewarding and fulfilling in its own right. Or two, and you'll like this, can you guess what it is yet? That's right, it's hire a fucking artist.

I truly despair for the fucking dumb age we live in right now. I can't wait for the AI bubble to pop and all this stupid shit to go the way of the Metaverse and NFTs, because what it's clearly doing to people is actually driving me insane. We're going to end up completely incapable of producing cultural artefacts if we're not careful. And that's not a world I want to live in.
