#oneaday Day 661: When people would gnaw off an arm for a freelance writing gig, using generative AI is unforgivable

In the last 18 years, 4,535 posts and 3,263,700 words (yes, really, I got a plugin to count them and everything), I have never once felt the need to outsource my thinking and creativity to a machine. There are two posts written by "guest authors" (which, spoiler, were actually both me in a cunning disguise!) and there are a couple of posts where I gave drunken friends the opportunity to contribute a sentence or two while I was out and about, but the remainder is all me, scooping out the contents of my brain and plopping it onto the page for no other reason than that I enjoy doing so, and occasionally find it helpful.

Today, this notice appeared in the New York Times, appended to a book review it had published:

Editors' Note: March 30, 2026
A reader recently alerted The Times that this review included language and details similar to those in a review of the same book published in The Guardian. We spoke to the author of this piece, a freelance reviewer, who told us he used an A.I. tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove. His reliance on A.I. and his use of unattributed work by another writer are a clear violation of The Times's standards. The reviewer said he had not used A.I. in his previous reviews for The Times, and we have found no issues in those pieces. The Guardian review of "Watching Over Her" can be read here. (link)

This, to me, is unforgivable. Supposedly there are plenty of writers out there who are doing this — or something like it, anyway — but to me, it is unfathomably awful. To be a writer, someone who cares about one's craft, you have to give a shit. And absolutely nothing says "I don't give a shit" quite like relying on generative AI so heavily that your article has to be pulled because its plagiarism was too obvious.

I mean, when you think about it, it's obvious that this would happen, given the way generative AI works and is trained — if it's pulling all its wording from existing texts that it has absorbed (without any compensation for the original authors) from around the Web, then of course it's going to come up with some of the same things, perhaps even the exact same phrasing.

You'd think it would be obvious, anyway — and that any writer worth their salt would not, as a result, rely on it — but apparently this is not the case. Much as the Wired article linked above should really result in all the authors named being blacklisted from every freelance writing pool, effective immediately, this incident should be the end of Alex Preston's career. There should be no second chances. To quote the old Batman meme: this is the weapon of the enemy; we do not need it; we will not use it.

Believe me, at this point I've heard every pro-AI argument there is — some, like the nonsensical "back in the '90s some people thought the Internet would be a bad thing!!" one, more than others — and none of them stand up to the slightest bit of scrutiny. AI does not make you a better writer. AI does not make you a writer. The only thing that makes you a writer is, quite simply, writing. And if you are not sitting down and writing something for yourself — whether that be through putting pen to paper, tapping away at a keyboard or dictating your words verbally — you are not a writer. And no, "writing" your prompt to get the bot to churn out a thousand words for you does not count.

Humanity's written languages have survived for thousands of years — albeit with plenty of evolution — through people being taught how to use them. It is, today, a fundamental part of your early socialisation process to learn how to read and write; yes, some folks have specific learning needs that make it harder or even impossible for them to do so, but even for them, generative AI is emphatically not the answer, as we have plenty of assistive methodology and technology that can allow these people to thrive that does not rely on the odious fad that is presently bleeding the planet dry.

So I'm sorry, I have no patience left whatsoever for incidents like this. The people involved in the Wired and New York Times pieces above deserve to be drummed out of the profession. Because if they have no respect for writing as a craft, why on Earth should any readers be expected to have any respect whatsoever for the shit they've churned out through the bots?

There are myriad people out there who would chew off their own arm for an opportunity to have a byline beneath a prestigious masthead — and every one of them who relies entirely on their own writing abilities, rather than outsourcing their creative process to the planet-burning chatbot, deserves those opportunities a million times more than those who clearly have no respect for themselves, their peers, or their readership.


Want to read my thoughts on various video games, visual novels and other popular culture things? Stop by MoeGamer.net, my site for all things fun where I am generally a lot more cheerful. And if you fancy watching some vids on classic games, drop by my YouTube channel.

If you want this nonsense in your inbox every day, please feel free to subscribe via email. Your email address won't be used for anything else.

#oneaday Day 654: Jensen Huang is an enemy of the arts

The headline is probably not news to most of you reading this, but I feel it's worth commenting on, because the Nvidia CEO just can't seem to keep his mouth shut.

To recap: a little while back, Nvidia introduced its new "DLSS 5" technology via a transparently obvious Digital Foundry advertorial video. I still don't really know what DLSS is (or what it used to be, I guess), but this latest incarnation of it did… not go down well, to say the least.

The reason? It's fucking generative AI, because of course it is. In this case, it's generative AI that takes two multi-thousand-dollar graphics cards to render a slop filter over the top of the perfectly functional graphics the game already had. Early defenders tried to convince everyone else that it was just "improving the lighting", but then Huang came out and said the following:

First of all, [the critics are] completely wrong. The reason for that is because, as I have explained very carefully, DLSS5 fuses controllability of the geometry and textures and everything about the game with generative AI. It's not post-processing at the frame level, it's generative control at the geometry level.

(Tom's Hardware)

Okay. So it is generative AI. Which sucks. And everyone hates. And in this instance, it is adding what is colloquially referred to as a "yassification" filter atop character graphics in particular, making them look markedly different from their actual, canonical designs. You know, the ones that artists worked on.

Today, Kotaku posted what I would argue is a bit of a fluff piece on the subject, quoting Huang extensively. Huang is presumably in some sort of "damage control" mode — although not that much, because the part of Nvidia that makes decent graphics cards for gaming PCs and consoles is of very little importance to a company that has very much thrown its entire lot in with generative AI.

From the Kotaku piece, quoting Huang, who was speaking on a recent episode of Lex Fridman's podcast:

DLSS 5 is 3D conditioned, 3D guided. It's ground truth structure data guided. And so the artist determined the geometry we are completely truthful to. The geometry maintains in every single frame.

Okay, first of all, what the fuck does "ground truth structure data guided" mean? Secondly, I'm sure the geometry is still there, it's just underneath a hallucinated AI-generated image.

He goes on (emphasis mine):

Every single frame, it enhances but it doesn't change anything. The system is open, you could train your own models to determine, and you could even in the future prompt it. You know, 'I want it to be a toon shader, I want it to look like this kinda,' so you can give it even an example. And it would generate in the style of that, all consistent with the artistry, you know, the style, the intent of the artist. And so all of that is done for the artist, so that they can create something that is more beautiful, but still in the style that they want.

So let me get this straight. It "doesn't change anything", but it does "generate in the style of" how it is prompted, am I getting this right? So it does, in fact, change something?

And who is doing this "prompting", exactly? Who is saying "I want it to be a toon shader"? The end user? Because that sure as fuck doesn't sound like being "consistent with the artistry and intent of the artist". Or is it the artist? Because if an artist wants their visuals in a toon style, they'll design them in a fucking toon style in the first place and they don't need the slop machine to do it for them. Or they don't if they're an artist with any fucking skills, anyway.

All this just confirms exactly what we've known for a while now: Jensen Huang is an enemy of the arts. He doesn't give a shit what the "style and intent of the artist" are, because his magic slop machine can just overwrite it and make it look "more beautiful". Fuck the artists who worked hard on each scene, each character, each object. Fuck having a coherent, distinctive artistic vision and visual style — bring on the uncanny valley AI slop! Fuck everyone who makes it their life's work to bring interactive worlds and the characters who inhabit them to life!

Jensen Huang, you are a rancid little fuckboi who, years after this bubble pops, will be looked back on as one of the most insidious, dangerous influences on the arts that there has been for a very long time. I'm not sure what sort of legacy you think you're leaving behind, but I can tell you with great confidence that it will not be a flattering one.



#oneaday Day 645: I do not need Gaming Copilot. No-one does

Apparently, three weeks almost to the day after the new AI person in charge of Xbox said that they wouldn't let Xbox become overrun with slop, Xbox has announced that it will be launching a slop generation feature that you can use right on your console!

It's Gaming Copilot! Something absolutely no-one asked for! The world's least popular lying plagiarism bot, shoehorned into the console brand that is seemingly determined to fast-track itself into becoming the world's least popular console!

I do not understand this. I do not want this. I do not understand why people think anyone wants this. It's an overused line, but I come to gaming to escape from all the annoying bullshit of real life — which includes lying plagiarism bots — not to talk to my fucking Xbox. I do not want my games console to "help" me or offer "advice" (and, given the inaccuracy rate of generative AI, I use those terms loosely), and I certainly do not want or need it to be a "companion".

The people pushing this bullshit have absolutely no understanding of what makes good games work, and why people enjoy them. At GDC, there was a Google demonstration about how chatbots could power NPCs, and their showcase involved a random nobody in an 8-bit Final Fantasy-style RPG babbling on for three pages about their daily routine. The executives who think this is a good idea are impressed because players can ask any character in the game anything. Anyone who has ever played a game with characters that talk to you will know that writing decent NPC dialogue is an art; it needs to balance worldbuilding with helpful advice, without bogging things down with, say, three pages of meaningless waffle from a random townsperson. Google's demonstration gets none of this right.

And Gaming Copilot completely misses the point of everything, too. Today's games are incredibly, wonderfully immersive, transporting players to a whole other world where they can be someone else and achieve things that are impossible in reality. The best games are full of moments of organic discovery and the joy of play — although there's a whole other discussion to be had about modern game design, particularly in the triple-A space — and absolutely do not need a fucking chatbot listening to everything you do. It is not a substitute for having an actual friend, and it is not a substitute for looking up information from someone who knows the game inside out and can offer you well-sourced, helpful advice.

"Oh, but you can just say what you want instead of having to look it up!" Yes, you can, but if you can't guarantee it will be correct — which you absolutely fucking cannot — then what is the point? Plus where do you think all this information that it is offering you is coming from? That's right, the hard work of people who actually took the time to assemble all that information. And you can bet your fucking ass that Gaming Copilot will not credit the sources of this information, because it certainly isn't doing so in the early demo videos that are currently circulating.

I'm so tired. I'm so tired. The video game medium is in a fucking disastrous state. Earlier today, I saw someone actually say it is in the worst position it has been in since the great North American video game crash back in '83, and while I'm not sure we're at "burying things in the desert" territory just yet — at least partly because the overwhelming majority of software available today is digital — it's becoming increasingly clear that things are absolutely fucked. Game Key Cards, always-online games, perpetual development roadmaps, live service games — all of it is just driving people like me, who have been involved with gaming since its very inception, far, far away.

At least if everything does come crashing down in the next few years, there is still a rich library of games from between the late '70s and now to enjoy. With each passing day, and with every announcement of Some New Bullshit, I feel increasingly like just packing in "Modern Gaming" altogether and living my life with the games I have in my collection right now.



#oneaday Day 642: I will never use Gemini when I'm bored

Photo by Andrea Piacquadio on Pexels.com

The website "Android Police" posted an incredibly stupid article today, headlined "I use Gemini when I'm bored — and it's better than doomscrolling". I'm sure I don't have to tell you that the premise of this article is spectacularly dumb and the author, Anu Joy, should feel bad for having written it… if indeed they are actually a real person. You never can be sure of that with engagement-bait articles these days, and the author's complete lack of online presence beyond LinkedIn doesn't fill me with confidence that they actually exist. But never mind.

I'm not going to link to the article because it doesn't deserve it, but I am going to systematically destroy it for today's post, which features a lot of swearing. Hope you don't mind either part of that statement. If you do, well, tough titties.

Cock!

Turning boredom into a 5-minute adventure

The first lake-boiling, environmentally ruinous use of the lying plagiarism machine that Anu Joy cites as an antidote to boredom is "turning it into a mini choose-your-own-adventure generator", with her argument being that "rather than passively consuming content, I now engage with short, interactive stories that unfold in real time, making them ideal for five-minute boredom gaps."

In response to this, I would like to introduce any Gemini-brained fuckwits to the long, rich and deep history of the interactive fiction genre, all of which has been written by actual humans, and designed to occupy you for anything between a few minutes and multiple hours — possibly even days or weeks if you get stuck and have the willpower to not look at a walkthrough.

It's easy to get involved with interactive fiction, too! There are plenty of great standalone games that fall into this category, such as Inkle's excellent titles 80 Days, Overboard!, Expelled! and more, plus their adaptations of actual choose-your-own-adventure-style gamebooks such as Sorcery! The indie marketplace itch.io has a whole tag for titles developed in Twine, which are essentially hypertext-based choose-your-own-adventure games. And if you want to get into the history of the medium and its rich diversity developed over the course of the last 40+ years, the Interactive Fiction Database (IFDB) has more interactive fiction than you can probably get through in a lifetime, much of which can be played online right there in your web browser.

Or you could, I don't know, actually read a Choose Your Own Adventure book. They still exist, you know! And, as an adult, a single "run" through one will probably only take you about five minutes!

"Oh, but Gemini can make me something that's never been done before!" No it fucking can't! That's sort of the problem with LLMs! They will never, ever have an original thought, because their entire fucking functionality is built on plagiarising other people's work. So why not actually go and enjoy a human being's work rather than burning down a forest to get the obsequious chatbot to "tell you a story"?

FUCK.

Quizzes, riddles and brain-teasers on demand

Do I really have to dignify this with a response? Okay, here are some places you can take quizzes online that don't involve getting a lying robot to make shit up:

The Encyclopaedia Britannica, the place where we used to go to look things up before the Internet, has a whole page full of quizzes.

Puzzle publishing company Lovatts has a straightforward and flexible quiz you can challenge any time.

Fucking Buzzfeed, the website where clickbait goes to die, has tons of quizzes. They're sort of famous for them! (EDIT: I had forgotten that Buzzfeed "pivoted to AI" a couple of years back. Maybe forget about this one.)

The best news of all is that these quizzes are put together by actual humans, so the answers should be right, which is not something you can guarantee with the garbage LLMs like Gemini spew out!

FUUUUUCK.

Curiosity on demand, without the time sink

"Oooh, but Gemini is so good at research and telling me fun little facts!"

Heard of Wikipedia? They feature a different article on their front page every day. And those articles are written by humans. (They're specifically trying to fend off the lying chatbots right now.) Not only that, if you want to dive deeper, they are sourced, so you can actually follow up on the things they say.

If you really want to surprise yourself, bookmark https://en.wikipedia.org/wiki/Special:Random — that will take you to a completely random page, where you can start a whole new knowledge journey that doesn't involve polluting the drinking water of any communities. (Fun fact: you can use /wiki/Special:Random on any sites that run on the MediaWiki software, not just Wikipedia!)
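If you want to see how simple that trick really is, here's a minimal sketch in Python (the function name is mine, and it assumes the wiki uses the common `/wiki/` "pretty URL" prefix, which not every MediaWiki installation does):

```python
def random_page_url(base_url: str) -> str:
    """Build the Special:Random URL for a MediaWiki-powered site.

    Assumes the wiki uses the standard /wiki/ path prefix; some
    installations use /index.php?title=Special:Random instead.
    """
    return base_url.rstrip("/") + "/wiki/Special:Random"

# Visiting this URL redirects you to a randomly chosen article:
print(random_page_url("https://en.wikipedia.org"))
# → https://en.wikipedia.org/wiki/Special:Random
```

Point any MediaWiki site's base address at it and you get a one-click lucky dip, no chatbot required.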

FUUUUUUUUUUUCK!

Gemini as a creative partner

"Some days, I'll argue whether pineapple on pizza is a culinary crime or a stroke of genius," Joy writes. If that's the level of your creativity, I suggest throwing a dart at Reddit and posting about how cool and random?! you think bacon is, you t3hPeNgU1NoFd00m, you.

If you just want someone to talk to, that is literally what social media is for. I know there are lots of things one can criticise about social media (particularly the Nazi bar that is Twitter in 2026), but if you just want to start a conversation with someone, there are few things easier than typing "@random hello, I disagree with your opinion on the Star Wars prequels, let's have a fight" or some other such bollocks.

If you want to talk to someone you don't know, there are services for that, too! Join a random Discord — or even better, one for something you're interested in! Play an MMO! Go on IRC! Brave Chatroulette! (Omegle apparently doesn't exist any more after some nasty shit went down there, so maybe don't go there.)

Just don't waste your fucking life talking to the cunting chatbot. It doesn't love you. It never will. And you're making the worst people in the world richer just by looking at it.

FUUUUUUUUUUUCCCCCCCCCCKKKKKKKKKKKKK!!!

Boredom doesn't stand a chance

If you are bored in the world as it exists today and can't think of anything better to do than open up Google fucking Gemini, you are a lost fucking cause. There is more entertainment, more media, more games, more reading material, more opportunities for socialising online than there have ever been. Not only that, there are unprecedented opportunities for you to get creative and express yourself in all manner of different ways, regardless of your past experience. You could even start your very own blog where you yell at people who might not exist!

There is no fucking excuse for turning to the chatbot "because you're bored". Even if the absolute limit of your creativity is "debating the merits of pineapple pizza", which Joy mentions twice in that dogshit article.

I realise that I have given the article in question far more attention than it ever deserved. But hey! It was the inspiration for something actually creative. And who knows? Someone might actually find some of the links I've provided useful.

Friends don't let friends use chatbots. So if I ever hear that you, dear reader, have turned to Google Gemini "because you're bored", I will hunt you down, wherever you are, and I will slap you repeatedly about the face with a wet trout.

Here endeth the lesson.



#oneaday Day 640: I hate 2026

I am tired and frustrated. This is nothing new, of course, but I am feeling it particularly keenly today. I can't go into the specifics for reasons that are probably obvious, but as an attempt to vent at least a little bit of the fury festering inside my spleen, I am going to vaguepost my way through this.

I learned today that something I had been looking forward to — which would be a good thing for me, and particularly for my mental health — might not be happening, through no fault of my own, and through no fault of the person who was organising this Thing. Instead, the blame can be placed squarely (albeit indirectly, removed by a degree or two) at the feet of the perpetual garbage fire that is the tech industry in the mid-2020s — specifically, the chip shortages caused by all the AI crap.

Every so often I see an AI booster wanking on about how much more "productive" AI has made them, and I do stop to question if I've got things right. And the answer is inevitably "yes"; every time I ask this question I find myself feeling more and more resolute in my absolute, complete and utter distaste for AI and what it is doing to the tech industry — and, more broadly, what it is doing to anyone who wants to do anything that isn't AI-related in the tech space.

It's just the latest in a long line of examples of people and organisations with a lot of money and influence taking everything that other people might need, and making (supposed) use of it for something that no-one actually wants — something that causes knock-on effects multiple steps down the "ladder". The really galling thing about all this is that it's arguably not even organisations with a lot of real money; the seemingly daily billion-dollar deals being bandied around are all being done with money that doesn't actually exist, that was never intended to exist, and that will never exist as anything other than a means of making the worldwide economy collapse completely.

I can go to the shop these days and get a few snacky bits and it'll be £50 or more. I shudder to think what the current Happenings are doing to petrol prices. And, of course, it's getting near-impossible to buy anything even vaguely related to computer memory or storage for what one might call a "reasonable" price. Not all of these are directly and specifically related to AI, of course, but they do all relate to how the economy is utterly fucked as a result of everything that has been happening for the last few years.

And of course it's selfish of me to speak up about this stuff because it's something in my life that is being specifically affected by it — but regular readers will know that I was staunchly opposed to All This Bullshit long before the still-vagueposted news I had today.

I'm just so tired. When I was young, I thought there was a point you'd get to in your adult life where everything was just sort of sorted and you could get on with living and enjoying your life. I feel like my parents had that. (They might disagree. But it's the impression I got.) But no-one living through this horrible, horrible time in existence is getting any degree of peace, because everyone is being affected by the absolute worst pieces of shit in the world to varying degrees.

I'm tired of it. So very tired. And I wish there was an easy way to make it go away.



#oneaday Day 592: Abstinence from AI

I, as I may have made clear on a few separate occasions on these hallowed pages, fucking hate generative AI. I do not use it. I do not need to use it. I do not want to use it. And I cannot wait for the whole bubble to pop and this whole shitshow to go the way of the NFT and the Metaverse.

In the last few weeks in particular, I've found that there are a lot more people seemingly trying to push AI as "sort of all right, really". You know the sort of thing, people just casually, jokingly drop into a Discord chat that "out of curiosity, [they] threw it into Gemini to see what would happen" and before you know it, all meaningful human conversation has been replaced with copy-pasted obsequious fawning over the prompter, bold-type section headers and bullet-pointed lists.

Not only that, but the press are at it, too; just today, Undark Magazine (which I've never heard of prior to today) posted a piece called "Abstinence from AI is Not the Answer", in which the authors, C. Brandon Ogbunu and Cristopher Moore, make the baffling assertion that refusing to engage with AI "puts vulnerable people at risk".

"Like many new technologies," they write, "AI can either amplify inequality or ameliorate it, depending on how it is deployed. And fears about the likelihood of it amplifying stratification and segregation are valid. But advocating for abstinence will deny communities access to the tools the privileged are already using to help them write college essays, do their homework problems and learn a second language. Puritanical stances leave people ill-equipped to use this technology responsibly and unable to benefit from it."

Okay, but… hear me out… generative AI is terrible at all of those things. AI writing can be spotted a mile off. It gets answers to basic problems wrong, making it useless for homework. Due to its propensity to hallucinate and fawn over the user, you can't necessarily guarantee that its use of a non-English language is correct, nor that it will correct you if you get something wrong. And, more importantly than all of those things, relying on generative AI to do any of those things strips you of the ability to do them yourself. Not only that, it kills your curiosity to learn and discover new things for yourself, because it's much easier to just ask the chatbot to do it for you rather than to put in the work to learn a new skill yourself.

It's this latter part that really concerns me about generative AI. I've seen so many people willingly hand off to a chatbot during normal discussions and arguments and think that's a shortcut to "winning". When our legal and medical professionals rely on these relentlessly awful tools, their own skills and knowledge atrophy because they have no need to retain them — the chatbot will do all the hard work for them.

And what happens when, as looks increasingly likely, the money runs out and all these monumentally wasteful services are no longer able to operate? We're going to need humans who can actually do stuff again. And I'm concerned we're going to struggle to find them, because just over the course of the last couple of years I've seen a frightening amount of people completely give up on seeking out reliable information, knowledge and training for themselves because they can just ask the chatbot.

To address Ogbunu and Moore's main point — that abstinence from generative AI puts vulnerable people at risk — I say, full-throatedly, bollocks. The Internet has been a constant presence in all our lives — whether we're privileged or vulnerable — for decades at this point, to such a degree that it is considered one of the basic utilities these days. It is rammed full of helpful, thoughtful, weird and wonderful information, and the only skill one needs to cultivate in order to take advantage of this is how to determine whether or not something is a reputable source. That is something that we learn to do in school — or we should learn how to do, anyway.

If you hand that job over to a chatbot that is demonstrably wrong a statistically significant proportion of the times you ask it a question, you are not making use of that skill. That is not democratising the delivery of information; it is filtering all that information through a technology that, at its core, has been designed only with the interests of its billionaire owners in mind. And not only that, to get the supposed "best" out of these chatbots, you're expected to pony up $200 or more a month for a subscription. That doesn't sound very inclusive of the most vulnerable in society.

"Choices we make now will determine whether AI will be a tool for the powerful, dazzling the rest of us with its hype and subjecting us to its harms, or whether it will be a tool — imperfect but useful — in everyone's hands," conclude Ogbunu and Moore.

If it's an imperfect tool, it's not useful. I repeat: I do not use it; I do not need to use it; I do not want to use it. My choice is made; if I see anyone "powerful" using generative AI, I will laugh at them, because they are depriving themselves of the joy of thinking, of learning, of discovering, of creating. And then I will pity them.



#oneaday Day 557: How to torch universal goodwill with one simple interview

Today, Larian Studios, makers of the Divinity series and the universally acclaimed Baldur's Gate 3, found itself in the crosshairs of the Internet's ire due to comments made by its CEO, Swen Vincke, during an interview with Bloomberg.

According to Vincke, Larian has been using generative AI behind the scenes to, in his words, "explore ideas, flesh out PowerPoint presentations, develop concept art and write placeholder text". None of which are things you need generative AI for, and all of which are things that people have been perfectly capable of doing with their own human brains for decades. In fact, there are people who specialise in elements of what he described — most notably concept art, which is the area a lot of critics have been focusing on.

Vincke's comments are remarkably ill-considered given the number of times that generative AI use in video games has been subject to backlash from the general public and journalists alike over the course of just the last year — and for many of the same reasons that Vincke is arguing in favour of.

The otherwise well-regarded sci-fi game The Alters was irreversibly poisoned for a lot of people earlier this year when it became apparent that its developers had used ChatGPT to generate placeholder text for background textures and localised strings for non-English languages.

The umpteenth reboot of Everybody's Golf came under fire for non-specific use of generative AI that I'm not sure anyone ever quite got to the bottom of.

The new Let It Die game, which has no involvement from the previous game's original developers Suda51 or Grasshopper Manufacture, has been lambasted for extensive use of AI-generated material.

The promising "people sim that isn't called The Sims" inZOI turned huge swathes of prospective players away with its heavy reliance on generative AI, as well as its publisher Krafton's insistence that it is pivoting to becoming an "AI-first" company.

The latest hot "extraction shooter" (I still don't really know what that is, and no, I don't really care) ARC Raiders got dinged with a 2/5 review score for its use of AI-generated voices — not just because they were AI, but because using AI-generated voices is at artistic odds with the story the game is trying to tell.

Even the once-beloved Oliver Twins, former stars of the UK "bedroom programming" scene in the '80s, got a kicking from press and public alike for their absolutely terrible AI-generated "follow-up" (and I use the term loosely) to their old Spectrum game, Ghost Hunters.

People hate this shit — and with good reason. Generative AI is a lazy, soulless solution for feckless CEOs to foist on their creative teams because they think it will "add value" for shareholders, when in fact there is growing evidence by the day that the entire generative AI scene is financially, environmentally and societally ruinous.

On top of all that, it doesn't work well enough to be worth using! Every single AI "tool" currently available carries a prominent disclaimer that it "might" (read: "will") get things wrong from time to time, making them fundamentally useless for doing anything useful with — and their "fun" uses are causing the Internet to become overrun with even more meaningless, pointless slop than was already splattered everywhere in the first place, on top of boiling all our lakes. At least stupid things from a bygone age like Badger Badger Badger and Seepage (to name just two examples from what I believe to be the golden age of Internet nonsense) are the result of both genuine human creativity and skilful use of creative tools that don't involve typing "make me funny video garfield giant boobs mechahitler piss filter" into a chatbot.

Vincke's point was not that the new Divinity game will be riddled with AI-generated voice lines or visuals. In fact, he claims that the studio is "neither releasing a game with any AI components, nor are [they] looking at trimming down teams to replace them with AI", but that AI is "a toolset for creatives to use and see how it can make their day-to-day lives easier, which will let us make better games".

Vincke has, apparently, been receiving some pushback from within Larian about this — and he's certainly been getting some choice words from former employees today, too. The situation escalated to such a degree that he issued a statement in response to IGN earlier today. Unfortunately, said statement doesn't really say anything — and, worse, attempts to obfuscate his earlier statements by pointedly using the term "ML" (for "Machine Learning") rather than his earlier use of "AI" — today typically interpreted to mean "generative AI" when used in contexts such as this.

For me, the worst thing was his final paragraph:

While I understand [generative AI] is a subject that invokes a lot of emotion, it's something we are constantly discussing internally through the lens of making everyone's working day better, not worse.

Here's the thing. You see that people are getting sniffy about generative AI, something which is well-established by this point to be A Thing The Public Fucking Hates. The sensible thing to do from a public relations perspective at this point, regardless of what you actually think, is to go "okay, you know what, we hear you, this sucks" or something along those lines, and then promise to "do better" or the like. A bunch of people won't believe you, of course, but this is better than going "no, well, I actually do think everyone at Larian should use this, and by 'discussing internally' I probably actually mean mandating that all employees have to use it at least a certain amount", which is how this is all coming across right now.

The particularly dumbass thing about this episode is, as I said above, none of the examples he gave are situations that need generative AI — or even where it is particularly beneficial. In fact, several creative types have commented today on how using "good enough", plausible-looking placeholders is actually detrimental to the entire creative process. Former Rocksteady employee Amy-Leigh Shaw commented thus on Bluesky earlier:

Placeholder text isn't supposed to be unique per line. It is supposed to be an instruction to the writer with a great big warning sign slapped on the top, so that it doesn't slip into the finished game. Unique sentences of bland writing are the least helpful thing to use for that purpose!

I also find that one of the more frustrating blockers to writing is when there's already a (bad) suggestion of what you should say. You are no longer able to organically find the idea because the suggestion in front of you knocks you off the track of your natural thought process.

Shaw is talking specifically about writing here, but several artists agreed that this is the case when dealing with concept art, too. The difference between a hastily scrawled Microsoft Paint doodle and the "this sort of looks right" thing that generative AI spits out is enormous — and in the latter case, it will absolutely colour an artist's interpretation of a scene or character, often unconsciously.

In other words, there's no defence of using generative AI as "placeholders" for text, concept art, voice acting, music — anything that a creative person is actually going to get involved with. The entire point of a placeholder is that it's something obviously shit and out of place so it can be easily spotted and subsequently replaced by a specialist at some point in the development process. Because generative AI produces something that is often "good enough" to the untrained eye or someone not looking closely, it's easy for it to get missed — as happened with The Alters earlier in the year.

Vincke's comments — and his subsequent follow-up statement — have torched a significant amount of goodwill that people had for Larian Studios in the space of just a single day. People fucking loved Baldur's Gate 3 and the previous Divinity: Original Sin games! It feels like it shouldn't have been a difficult job to maintain that goodwill while hyping up your new game — even if some found themselves a tad squicked out by a rather grim trailer at The Game Awards. But no. C-suite gonna C-suite, I guess — and it appears that this is true for companies people had, up until now, actually liked, as much as it is for companies people love to hate. And the net result of this for Larian is that people who were previously excited about a new Divinity game are now not going to touch it.

I know this has certainly given me a great degree of pause on wanting to check out any of Larian's work. I've been meaning to look at the Divinity: Original Sin games and Baldur's Gate 3 for a while — but now I'm in even less of a hurry to do so than I was already.

I'm so very tired of this. I, like many others, cannot wait for this fucking bubble to pop so we can get back to something approaching "normality", whatever that even means any more.



#oneaday Day 510: Another great Eddy Burback video

There's a lot of absolute garbage on YouTube, but there are a few folks out there who do some truly special work. One of those people is Eddy Burback, who makes maybe two or three videos a year, but they're always very high quality, both in technical terms and in terms of the amount of research that goes into them. You may recall a while back I was rather taken by his video about giving up the smartphone life.

Today, he put out a new video called "ChatGPT made me delusional", and I sincerely recommend you set aside an hour or so of your life to watch it through in its entirety. Not skip through it at 1.5x speed, not "have it on in the background". Watch it. Because I think it is important.

Here it is:

Burback's aim for the video was to understand the phenomenon of "chatbot-induced psychosis" or "AI psychosis". This is where vulnerable people, already struggling with matters of mental health, would turn to large language model chatbots such as ChatGPT and use them as a form of "therapy" or as a substitute for actual human contact. There have already been some incredibly tragic results, as anyone who has ever read any science fiction would have been able to predict a mile off.

To explore how this might happen, Burback presented ChatGPT with an obviously ridiculous hypothesis based on complete fabrications: that he was the smartest under-1 baby of 1997, capable of producing great works of art, having in-depth philosophical discussions and demonstrating a deep understanding of complex mathematics. It took him two statements to convince the chatbot that this was the undeniable truth, and things just escalated from there.

Burback presented the chatbot with suggestions that his friends and family might not understand his brilliance, and it recommended he flee into the middle of nowhere and break all contact with them, including stopping sharing his location data with the person he trusts most in the world: his twin brother. He continued feeding the chatbot with increasingly ridiculous, obviously delusional statements and deliberate, complete and utter nonsense, and at no point did it attempt to deter him from the path it had set him on.

It was only at one point — the day when OpenAI controversially swapped its "4o" model for GPT-5 — that the chatbot had a momentary blip in feeding into his "delusions" (and, to its credit, suggested some psychological help facilities in the neighbourhood), but Burback pointed out that it was very easy for someone who was paying for the service to just switch it back to the old model, which seemingly finds it impossible to say "no" to the user.

What was particularly eerie about the whole situation is that Burback was using the premium voice feature on ChatGPT, which has clearly been designed to sound as "human" as possible, even going so far as to add realistic inflections and non-fluency features to the things it is saying. (It also pronounces emojis as completely unrelated sound effects, which somewhat detracts from the "humanity" of it all, but still.) In other words, it wasn't hard to see how someone suffering from real, genuine mental health problems might feel like they really did have a person in their phone who was willing to listen to them, tell them they were always right, and repeatedly give them some really, really bad advice.

It was actually kind of horrifying. The way the bot continually escalated into increasingly outlandish behaviour — culminating in him chanting mantras under an electricity pylon, wrapping his entire apartment in tin foil and tattooing a symbol into his thigh — was genuinely frightening.

I know we can all have a good laugh about how the chatbots get things wrong sometimes, but Burback's research here demonstrates that it doesn't just get things wrong (and I apologise for using this sentence construction, given its indelible association with AI writing, but it's an established turn of phrase for a reason) — it offers genuinely dangerous advice with minimal guardrails in place. And it does so without thinking about it or understanding why it might be dangerous — because it's not actually thinking or understanding anything at all. It's constructing sentences that, based on the data it has Hoovered up from across the Internet, it thinks are the correct responses to the things the user has been typing. It is, in essence, an extremely advanced version of the old ELIZA program on classic computers.
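For anyone who never met ELIZA: its entire illusion of understanding boiled down to a short list of pattern-and-reflect rules. Here's a minimal illustrative sketch in that spirit (not the original 1966 program, which was written in MAD-SLIP — the rules and replies below are invented for the example):

```python
import re

# A handful of ELIZA-style rules: match a pattern in the user's input
# and reflect their own words back, with zero comprehension involved.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."  # the classic non-committal fallback

print(respond("I feel like everyone is against me"))
# → Why do you feel like everyone is against me?
```

The modern chatbot's machinery is incomparably more sophisticated, but the fundamental trick — producing a plausible-sounding response without any comprehension behind it — is the same.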

And it can go fuck itself.



#oneaday Day 439: Parallel dimension

A recent post over on WIRED posed the question "OpenAI is poised to become the most valuable startup ever. Should it be?" Leaving aside the obvious Betteridge's Law commentary for a moment, the actual content of this article was utterly baffling.

OpenAI claims it is worth $500 billion. We've heard this a lot of times over the last few months, and everyone seems to sort of have accepted it as the "truth". And yet there's this in the article:

[An anonymous OpenAI investor] argues that the math for investing at the $500 billion valuation is straightforward: Hypothetically, if ChatGPT hits 2 billion users and monetizes at $5 per user per month — "half the rate of things like Google or Facebook" — that's $120 billion in annual revenue.

"That alone would support a trillion-and-a-half dollar company, which is a pretty good return, just thinking about ChatGPT," the investor says.

Except that "math" isn't "straightforward" at all, is it? In fact, I would go so far as to say that it isn't "math" at all, because all of it, all of it, is complete fantasyland nonsense plucked out of the arse of a particularly flatulent ogre, then mindlessly parroted by breathless idiots who think spicy autocorrect is in any way a substitute for the most bare minimum of interpersonal interactions.

Look at it. Two billion users. That's a significant portion of the planet, and only a very few services — likely Google and Facebook among them — can count that many user accounts on their books, let alone active users, which is what this nonsense is actually talking about. For context, ChatGPT, at present, reportedly has somewhere in the region of 300 million weekly users. That's a lot, sure, but an overwhelming proportion of those are people who are not paying for the service and are just using it to burn down a forest or two for a picture of Garfield with tits.
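To be fair to the anonymous investor, the multiplication itself does check out — it's the inputs that are fantasy. A quick sanity check, using only the figures from the quote and the reported weekly user count:

```python
# Figures from the investor quote: 2 billion users, each paying $5/month.
users = 2_000_000_000
monthly_fee = 5  # dollars

annual_revenue = users * monthly_fee * 12
print(f"${annual_revenue:,}")  # → $120,000,000,000 — the quoted $120 billion

# For contrast: ChatGPT's reported ~300 million weekly users, the
# overwhelming majority of whom pay nothing at all.
reported_weekly_users = 300_000_000
print(f"{users / reported_weekly_users:.1f}x the current user base")
```

So the "straightforward math" requires nearly seven times today's entire user base to materialise, and every single one of them to convert into a paying customer.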

To put it another way, assuming that not only are two billion active users going to magically appear from nowhere, but that every single one of them is going to pay $5 a month to use the lake-boiling plagiarism machine that loses OpenAI money on every paying user already, is patent nonsense.

It is, right?

It is, yes?

I know nothing about economics or business, and I feel like I can see beyond any shadow of a doubt whatsoever that this is an absolute absurdity. Couple that with OpenAI's Sam Altman making incredibly stupid comments like "building a Dyson sphere around the whole solar system" just so we have enough space for all the data centres these two billion imaginary users will need to use their equally imaginary $5 ChatGPT subscriptions, and I'm just left feeling like at some point between COVID and now I've crossed over from a dimension where things make sense into one where they just… don't.

Are we really living in a world where a company's valuation is determined based on completely imaginary figures? Well, I guess it makes sense when they have a completely imaginary product, too. Nearly half a decade into this nonsense and there are still no compelling use cases for the technology for most people — and even the most sweaty AI apologists are obliged to admit that yes, the chatbots get things wrong quite a lot of the time.

Microsoft put Copilot in Excel! You know, the software you use when you want accurate data analysis and calculations! They added it with the disclaimer that it "might be wrong" and that it "shouldn't be relied on for high-risk situations". Like, you know, pretty much fucking anything you might use Excel for in a business situation.

What are we doing? What are we doing? And WHY?! ARRRRGGGGHHHHHH



#oneaday Day 404: Today's AI idiot story

The latest hilarious story from the world of artificial "intelligence" is the sorry saga of a Redditor who "worked on a book" (and I use the term "worked" loosely) with ChatGPT and found that they couldn't download it.

You want to know why? This is the best bit. It's because ChatGPT hadn't actually created anything, because it can't do that. It had outright lied to the person because, as a large language model — which, let's not forget, is essentially fancy predictive text, not actual intelligence — it believed, based on the data it had ingested, that telling the user it had successfully created 487MB of book was what the user wanted to hear.

To be fair, it was what the user wanted to hear, only they wanted that 487MB of book to, you know, actually exist.

The Redditor's eventual conclusion was thus:

After understanding a lot of things it's clear that it didn't [generate the book at all]. And it fooled me for two weeks.

I have learned my lesson and now I am using it to generate one page at a time.

Several other Redditors commented, quite correctly, that this is perhaps not the ideal takeaway from this lesson. This is my absolute favourite response, though. This response deserves to be framed and put in a museum as a monument to how utterly stupid the age we're living in is:

At least you're finally admitting that ChatGPT is working on creating this fictional thing instead of you having "worked on it together". lol. Meanwhile real writers don't need this nonsense to be creative.

As a wise person once said: why would I invest more time reading something than the author spent writing it? Best of luck on something literally no one, including you, will read.

Absolute perfection.

Even more hilarious is the fact that the original poster was supposedly trying to create "a collection of a lot of children [sic] stories with moral lessons that [they] wanted to present in a colourful manner with underprivileged kids of [their] area". They claimed that the text was "all theirs" and that they were using ChatGPT to "refine the flow"… and generate 700 images.

Because what the world needs is an AI-edited book of children's stories almost certainly ripped off from existing tales, illustrated with AI slop images.

Dear Lord. I absolutely despair that we're living in an age where people are this fucking stupid.

Let me be 100% clear on this: if you're using ChatGPT to generate or "refine" anything you want to publish, you are not an author. You are certainly not the illustrator.

Learn to write. Practise it. It is a craft like any other. Develop your own unique, distinctive voice, because AI very much has a "voice" of its own — a particularly obnoxious, hand-wringing, obsequious, simpering one — and it is immediately recognisable. And, if you want to improve, hire a fucking editor. Or, at the very least, just give it to another sodding human being to look at.

ChatGPT is not an editor. ChatGPT gets things wrong a significant proportion of the time. And, as this story shows, ChatGPT just fucking makes things up quite a bit, too. You cannot trust it. You should not trust it. It is not a person. It is not intelligent. It doesn't "know" anything.

And if you need art? Two options: one, learn to do it yourself, which can be rewarding and fulfilling in its own right. Or two, and you'll like this, can you guess what it is yet? That's right, it's hire a fucking artist.

I truly despair for the fucking dumb age we live in right now. I can't wait for the AI bubble to pop and all this stupid shit to go the way of the Metaverse and NFTs, because what it's clearly doing to people is actually driving me insane. We're going to end up completely incapable of producing cultural artefacts if we're not careful. And that's not a world I want to live in.

