#oneaday Day 661: When people would gnaw off an arm for a freelance writing gig, using generative AI is unforgivable

In the last 18 years, 4,535 posts and 3,263,700 words (yes, really, I got a plugin to count them and everything), I have never once felt the need to outsource my thinking and creativity to a machine. There are two posts written by "guest authors" (which, spoiler, were actually both me in a cunning disguise!) and there are a couple of posts where I gave drunken friends the opportunity to contribute a sentence or two to a post I was writing while out and about, but the remainder is all me, scooping out the contents of my brain and plopping them onto the page for no other reason than the fact that I enjoy doing so, and occasionally find it helpful.

Today, this notice appeared on a book review The New York Times had published:

Editors' Note: March 30, 2026
A reader recently alerted The Times that this review included language and details similar to those in a review of the same book published in The Guardian. We spoke to the author of this piece, a freelance reviewer, who told us he used an A.I. tool that incorporated material from the Guardian review into his draft, which he failed to identify and remove. His reliance on A.I. and his use of unattributed work by another writer are a clear violation of The Times's standards. The reviewer said he had not used A.I. in his previous reviews for The Times, and we have found no issues in those pieces. The Guardian review of "Watching Over Her" can be read here.

This, to me, is unforgivable. Supposedly there are plenty of writers out there who are doing this — or something like it, anyway — but to me, it is unfathomably awful. To be a writer, someone who cares about one's craft, you have to give a shit. And absolutely nothing says "I don't give a shit" quite like relying on generative AI so heavily that your article has to be pulled because its plagiarism was too obvious.

I mean, when you think about it, it's obvious that this would happen, given the way generative AI works and is trained — if it's pulling all its wording from existing texts that it has absorbed (without any compensation for the original authors) from around the Web, then of course it's going to come up with some of the same things, perhaps even the exact same phrasing.

You'd think it would be obvious, anyway — and that any writer worth their salt would not, as a result, rely on it — but apparently this is not the case. Much as the above-linked Wired article should really result in all the authors named being blacklisted from every freelance writing pool, effective immediately, this incident should be the end of Alex Preston's career. There should be no second chances. To quote the old Batman meme, this is the weapon of the enemy; we do not need it; we will not use it.

Believe me, at this point I've heard every pro-AI argument there is — some, like the nonsensical "back in the '90s some people thought the Internet would be a bad thing!!" one, more than others — and none of them stand up to the slightest bit of scrutiny. AI does not make you a better writer. AI does not make you a writer. The only thing that makes you a writer is, quite simply, writing. And if you are not sitting down and writing something for yourself — whether that be through putting pen to paper, tapping away at a keyboard or dictating your words aloud — you are not a writer. And no, "writing" your prompt to get the bot to churn out a thousand words for you does not count.

Humanity's written languages have survived for thousands of years — albeit with plenty of evolution — through people being taught how to use them. It is, today, a fundamental part of your early socialisation process to learn how to read and write; yes, some folks have specific learning needs that make it harder or even impossible for them to do so, but even for them, generative AI is emphatically not the answer, as we have plenty of assistive methodology and technology that can allow these people to thrive that does not rely on the odious fad that is presently bleeding the planet dry.

So I'm sorry, I have no patience left whatsoever for any incidents like this. The people involved in the Wired and New York Times articles above deserve to be kicked out of their careers. Because if they have no respect for writing as a craft, why on Earth should any readers be expected to have any respect whatsoever for the shit they've churned out through the bots?

There are myriad people out there who would chew off their own arm for an opportunity to have a byline beneath a prestigious masthead — and every one of them who relies entirely on their own writing abilities, rather than outsourcing their creative process to the planet-burning chatbot, deserves those opportunities a million times more than those who clearly have no respect for themselves, their peers, or their readership.


Want to read my thoughts on various video games, visual novels and other popular culture things? Stop by MoeGamer.net, my site for all things fun where I am generally a lot more cheerful. And if you fancy watching some vids on classic games, drop by my YouTube channel.

If you want this nonsense in your inbox every day, please feel free to subscribe via email. Your email address won't be used for anything else.

#oneaday Day 439: Parallel dimension

A recent post over on WIRED posed the question "OpenAI is poised to become the most valuable startup ever. Should it be?" Leaving aside the obvious Betteridge's Law commentary for a moment, the actual content of this article was utterly baffling.

OpenAI claims it is worth $500 billion. We've heard this a lot of times over the last few months, and everyone seems to have more or less accepted it as the "truth". And yet there's this in the article:

[An anonymous OpenAI investor] argues that the math for investing at the $500 billion valuation is straightforward: Hypothetically, if ChatGPT hits 2 billion users and monetizes at $5 per user per month — "half the rate of things like Google or Facebook" — that's $120 billion in annual revenue.

"That alone would support a trillion-and-a-half dollar company, which is a pretty good return, just thinking about ChatGPT," the investor says.

Except that "math" isn't "straightforward" at all, is it? In fact, I would go so far as to say that it isn't "math" at all, because all of it, all of it, is complete fantasyland nonsense plucked out of the arse of a particularly flatulent ogre, then mindlessly parroted by breathless idiots who think spicy autocorrect is in any way a substitute for the most bare minimum of interpersonal interactions.

Look at it. Two billion users. That's a significant portion of the planet, and only a very few services — likely Google and Facebook among them — can count that many user accounts on their books, let alone active users, which is what this nonsense is actually talking about. For context, ChatGPT currently reports somewhere in the region of 300 million weekly users. That's a lot, sure, but an overwhelming proportion of those are people who are not paying for the service and just using it to burn down a forest or two for a picture of Garfield with tits.

To put it another way: the assumption that two billion active users will magically appear from nowhere, and that every single one of them will pay $5 a month to use the lake-boiling plagiarism machine that already loses OpenAI money on every paying user, is patent nonsense.
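Just to spell out how flimsy that "straightforward" math actually is, here's the whole sum written out. The first half simply restates the investor's own figures; the second half takes the roughly 300 million weekly users mentioned above and bolts on a 5% paying share that is purely my own guess for the sake of illustration; nobody has published that figure.

```python
# The investor's hypothetical, using only the figures from the quote above.
hypothetical_users = 2_000_000_000   # "if ChatGPT hits 2 billion users"
price_per_month = 5                  # dollars, "half the rate of things like Google or Facebook"

hypothetical_annual_revenue = hypothetical_users * price_per_month * 12
print(f"${hypothetical_annual_revenue:,}")   # $120,000,000,000, i.e. the quoted $120 billion

# The "trillion-and-a-half dollar company" is that same revenue figure
# multiplied by 12.5, a revenue multiple conjured from nowhere.
print(1_500_000_000_000 / hypothetical_annual_revenue)   # 12.5

# For contrast: the ~300 million weekly users ChatGPT currently reports,
# with an assumed 5% of them paying the same $5 a month. Again, that 5% is
# an illustrative guess on my part, not a published number.
reported_weekly_users = 300_000_000
assumed_paying_share = 0.05
back_of_envelope_revenue = reported_weekly_users * assumed_paying_share * price_per_month * 12
print(f"${back_of_envelope_revenue:,.0f}")   # $900,000,000, over a hundred times short of the fantasy figure
```

Even if you triple every one of those charitable guesses, you still land nowhere near the number the valuation is supposedly built on.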

It is, right?

It is, yes?

I know nothing about economics or business, and yet even I can see, beyond any shadow of a doubt, that this is an absolute absurdity. Couple that with OpenAI's Sam Altman making incredibly stupid comments like "building a Dyson sphere around the whole solar system" just so we have enough space for all the data centres these two billion imaginary users will need to use their equally imaginary $5 ChatGPT subscriptions, and I'm just left feeling like at some point between COVID and now I've crossed over from a dimension where things make sense into one where they just… don't.

Are we really living in a world where a company's valuation is determined by completely imaginary figures? Well, I guess it makes sense when they have a completely imaginary product, too. Nearly half a decade into this nonsense and there are still no compelling use cases for the technology for most people — and even the sweatiest AI apologists are obliged to admit that yes, the chatbots get things wrong quite a lot of the time.

Microsoft put Copilot in Excel! You know, the software you use when you want accurate data analysis and calculations! They added it with the disclaimer that it "might be wrong" and that it "shouldn't be relied on for high-risk situations". Like, you know, pretty much fucking anything you might use Excel for in a business situation.

What are we doing? What are we doing? And WHY?! ARRRRGGGGHHHHHH


Want to read my thoughts on various video games, visual novels and other popular culture things? Stop by MoeGamer.net, my site for all things fun where I am generally a lot more cheerful. And if you fancy watching some vids on classic games, drop by my YouTube channel.

If you want this nonsense in your inbox every day, please feel free to subscribe via email. Your email address won't be used for anything else.

1109: Killachine

Another day, another article declaring the console will be "dead" before we know it. Lots of people — mostly analysts and business-savvy people who work in the mobile and social sectors — have been saying things like this recently, so it must be true, right?

Nah. 'Tis bollocks, as usual. While it's impossible to deny the huge impact that mobile devices have had on bringing the concept of playing games to the masses — the actually-not-all-that-good Temple Run 2 recently surpassed a whopping 50 million downloads — to say that they are going to "kill" consoles and/or dedicated gaming handhelds is, frankly, ridiculous.

Why? Because they cater to completely different markets and tastes. Mobile and social games are, for the most part, designed for players to while away a few minutes while something else is going on — perhaps a lengthy dump, a wait for a bus or a particularly boring meeting with a conveniently-placed table to hide what you're up to — while computer and console games are, for the most part, designed for players to sit down in front of for a more protracted period of time and immerse themselves in the experience. There are exceptions in both cases, of course — hence the "for the most part" disclaimers — but, on the whole, that's where we stand. And there's nothing wrong with either aspect of gaming — they both exist, and they will both more than likely continue to exist.

The word "games" isn't all that useful any more, in fact, because the medium it describes is now too diverse to be covered by a single word. I can say "I like playing games" and that will mean something completely different to what someone else means when they say it. When I say it, I mean that I like relaxing on my couch with a controller in my hand, staring at the TV and immersing myself in a game with depth, an interesting story, or both. When someone else says it, they might mean that they have three-starred all the levels on Angry Birds, or that they fire up FarmVille during quiet periods in the office, or that they have fifteen Words With Friends games on the go at any one time. These are obviously completely different experiences, though there can be a degree of crossover between the two extremes — there's nothing to stop someone who, say, is big into competitive League of Legends play also enjoying playing Scramble With Friends against their less gaming-savvy friends and family.

Where we start to get problems is when developers and/or publishers from one group start to try and step across the invisible line into the other group. More often than not, this is seen in the form of mobile and social developers promising a mobile or social experience that will appeal to "core gamers" — in other words, the group that, like me, enjoys immersing themselves in an experience for hours at a time rather than as a throwaway diversion. It is, sadly, abundantly clear that a huge number of developers who try and take this route have absolutely no clue whatsoever how to design a game that will appeal to these players. The article I linked above is from the CEO of a company called Kabam, who specialise in developing a variety of almost-identical "strategy" (and I use the term loosely) games that supposedly appeal to "core" players. All of their games are the same (literally — I tested three side-by-side as an experiment once, and the quests the player was expected to follow were completely identical, right down to the wording) albeit with a slightly different visual aesthetic, and all of them are as dull as ditchwater.

The bewildering thing is that someone, somewhere, is playing these games — and, more to the point, spending money on them — enough for them to be considered a "success". So more and more of them start appearing, each inevitably following the exact same template, making all the same mistakes and pissing off the same people while somehow convincing others that reaching for their credit card is a really, really good idea.

Note that I'm not saying here that mobile, social and/or free-to-play games are inherently bad in and of themselves; it's that in many of these cases — particularly those that are supposed to be designed to appeal to "core" gamers — they are designed by people with an astonishingly strong sense of business savvy, and a complete lack of understanding of what makes a game actually fun or interesting to play. In other words, the fact that something is financially successful should not be the only criterion for it being considered "good" — you just have to look at Mobage/Cygames' shockingly awful Rage of Bahamut, one of the top-grossing mobile games in the world, to see how this is the case.

No, the problem that we have is that everything new always has to "kill" something else. This flawed logic has been seen with numerous other technologies in the past; laptops would kill desktops, tablets would kill laptops, TV and video would kill the cinema… the list goes on. In very few cases is it actually true. Okay, DVD killed VHS, but that was a simple case of a superior format doing the same thing rather than two vaguely related — but not identical — things battling it out for supremacy. People still use desktops as well as laptops because big screens are nice and more practical in many circumstances. People still use laptops as well as tablets because typing on a touchscreen is still a horrid experience. People still go to the cinema as well as watching TV or DVD/videos because it's nice to see something on a huge screen with room-shaking sound.

Why does everything have to be reduced to binaries? Why does something new always have to "kill" something else, even if it clearly isn't performing the same function? Can't these people just accept that certain parts of the populace are happy with one thing, and others are happy with another?

Ahh, if only.