#oneaday Day 375: So very very tired

Earlier today, someone shared a photo of a packet of Uncle Ben's instant noodles or something, which came with a disclaimer on the front that the image of the supposed product festering inside the pouch had been "generated with AI". And I think I felt something actually snap in my brain.

What are we doing. What are we actually doing. I am absolutely beyond sick of this garbage being force-fed to us from every possible angle, and of breathless ball-gargling apologists coming out with all the usual "oh, it's a tool, a tool can't be bad".

No. Fuck off. Generative AI is hot garbage, and I think we've proven that beyond every reasonable doubt at this point. "It hallucinates a bit" should be enough to put absolutely fucking everyone off ever even thinking about using it for research and analysis, and the fact that the companies who trained these models have had to go about it by the most underhanded means possible, potentially destroying creators' rights over their own work in the process, should be enough to ward everyone off. And to cap it all, these people spend billions every month to achieve nothing. Several years into this shit and we've yet to see convincing use cases that don't have hefty caveats. And still the rich get richer, somehow, and the world, as a whole, gets worse and worse off.

Is the fact that people have been driven to suicide by "conversations" with AI bots not enough? Is the fact that multiple social media platforms are now pretty much unusable and a privacy nightmare due to the flood of AI not enough? Does the prospect of people not actually being able to perform necessary skills — like, say, coding to hold the world's infrastructure together — not absolutely terrify you? And do you not see anything even a little bit wrong with ChatGPT offering to modify an existing piece of writing "in the style of" another magazine so you can successfully pitch something you didn't write a single word of?

Every day, the world gets worse and worse, and frankly, I'm reaching a point where it is becoming less and less desirable to live in it. Couple all this inescapable AI shit with what's going on in America, the looming war in the Middle East (again) and the frankly frightening regressions the world has seemingly been going through with regard to acceptance, tolerance and inclusion, and it's not a pretty sight. It's no wonder that everyone in the world seems to be so argumentative, aggressive and confrontational all the time these days. This is a problem, but it's also a symptom.

When I was growing up, it felt like I was living through one of the most exciting periods in cultural, societal and technological history. Now I'm just embarrassed to be on the same planet as a frankly terrifying proportion of the population, who seem to think that everything we're doing right now is just fine, and we should definitely continue on this course; it absolutely won't cause terrible problems down the line.

I don't know what to do any more. I feel powerless, helpless, alone. And I'm sure I'm not the only one feeling that way.


Want to read my thoughts on various video games, visual novels and other popular culture things? Stop by MoeGamer.net, my site for all things fun where I am generally a lot more cheerful. And if you fancy watching some vids on classic games, drop by my YouTube channel.

If you want this nonsense in your inbox every day, please feel free to subscribe via email. Your email address won't be used for anything else.

#oneaday Day 283: We should probably be resisting generative AI more than we are

There was a good piece by 404 Media on "AI slop" today. Author Jason Koebler describes AI slop as a "brute force attack on the algorithms that control reality", and goes on to explain how those taking advantage of AI are exploiting social media algorithms to such a degree that platforms are now flooded with this garbage, making it hard to find 1) anything made by a real person and 2) anything made by someone you might actually want to connect with.

There is zero value to this stuff, other than self-fulfilling engagement. Presumably the long game is to build up "the numbers" with this shit, then sell the accounts, or make bank off impressions-based ad revenue. And the platform holders don't give a shit; as Koebler points out in his piece, it seems like Mark Zuckerberg actively wants the experience on Facebook to be real humans arguing over AI-generated slop rather than anything real and meaningful.

And I don't understand why we're letting this happen. Not only on social media, but in more "traditional" industries, too. It's happening to a frightening degree in publishing, with myriad "get rich quick" schemes fundamentally being based on churning out multiple AI-generated books every week (or even day) and then profiting off, let's face it, vulnerable people who aren't able to tell the difference between garbage churned out by a robot and something written by an actual human being.

As Koebler puts it, "there is a dual problem with this: it not only floods the Internet with shit, crowding out human-created content that real people spend time making, but the very nature of AI slop means it evolves faster than human-created content can, so any time an algorithm is tweaked, the AI spammers can find the weakness in that algorithm and exploit it."

At the moment, there are a few common responses to generative AI:

  • "I love generative AI! The genie is out of the bottle, so if you're resisting it you're a Luddite who isn't embracing the latest technological innovations!"
  • "Generative AI is just a tool that people can add to their arsenal, like digital art packages. I can't really tell you how or why that's a good thing, but I heard someone else say it so I'm saying it too."
  • "Generative AI might be useful in certain circumstances, but I can't really tell you what they are because no-one really knows or can offer specific, concrete examples that aren't prone to hallucinations to such a degree as to make them worthless."
  • "Generative AI sucks balls and I hate it."

I'm somewhere towards the bottom of that list, leaning towards hating it and very much wanting it to go away. At present, I am disinclined to trust the people who claim it will be "revolutionary" for things like medicine, because of the number of times it still fucks simple things up. I am also concerned for the field of programming, because as more and more junior coders show up who are only capable of feeding prompts into an AI, not actually doing (and checking!) the coding themselves, we're going to have a real problem on our hands with software development.

At the same time, I'm sure there are some worthwhile use cases for a means of communicating with a computer using natural language. I mean, hell, look at Star Trek; the assumption there was that you could just say "Computer" like you say "Alexa" today, then rattle off an often fairly abstract task for it to complete, and it would do it. That is, presumably, the goal.

But AI isn't there yet, not by a long shot, which is why ChatGPT costs $200 a month for a subscription and can't really tell you what it's for, let alone how to stop it making stupid mistakes, and in the meantime the companies involved in all this shit are burning through both money and the planet's natural resources in pursuit of something which might, in fact, be impossible. "Agents" are coming, apparently, but all we've seen them do so far is make things that are already pretty straightforward to do on the Web (like grocery shopping) actively more cumbersome, and OpenAI's "deep research" tool is utterly laughable at this point, pulling out citation-free forum posts and SEO-optimised slop ahead of actual, worthwhile information written (and reviewed) by humans.

You, reading this, almost certainly know all this, and perhaps you've even read or shared some articles talking about the problems with AI slop and the problems that it's causing all over the Internet. But what have you done about it? Because I feel like we should be doing more about it, rather than just pointing and tutting at it, going "whoo, lad, that generative AI sure is a bit shit, isn't it? Someone should do something about it."

The trouble, of course, is that it's difficult to do anything meaningful about it, particularly when big corporate entities like Microsoft are the ones forcing it onto people through things people use every day like Windows, Office 365, and even the bloody Xbox. I mean, sure, you can find ways to disable it when it does show up, but these workarounds often end up circumvented by the corporations, meaning you need to faff around even more to get rid of the shit. And sure, you can install Linux, but that carries its own burden of needing to know how to do that. Which you and I might be comfortable doing, but what about people who use computers more casually; those who don't know how they work, but just want to be able to get on with simple tasks without intrusive AI features popping up every few seconds?

All we can do, really, is make a specific effort not to use generative AI tools when there are other alternatives available. I will never, ever use generative AI on this site, MoeGamer or my YouTube channel to produce words, scripts, images, thumbnails or videos, however tempting it might be as a "quick fix" to get something done. If that means there are things I either can't do or would have to pay a specialist to be able to do, I will either go without the thing or pay a specialist. Or perhaps even learn how to do the thing myself.

That's a crucial one, I think. Over the years, I've learned how to do a lot of things on computers simply by running into an issue I don't know how to solve, researching it myself and learning how to deal with it. Some of that knowledge I've retained, some of it fell out of my brain the moment I finished using it, but on the whole I've had a net gain in knowledge simply through running into problems and taking the initiative to learn how to fix them myself. I suspect many people who grew up with computers throughout the '80s and '90s are the same.

I'm not going to tell you what to do. But I am going to tell you what I'm doing:

  • I will not use ChatGPT to research anything, when perfectly good information is available through well-established, reliable, trustworthy and peer-reviewed sources both online and offline.
  • I will not use AI image or video generation for anything, period. If I need an image or video of something, I will produce it myself, search for a usable (and suitably licensed) stock or otherwise publicly available image or video, providing credit where appropriate, or just not use that image or video.
  • I will not use AI voice generation to make a "famous" voice say something it never said. Even if it's really funny. I will freely admit to having done this in the past (only among friends), but that was before we really knew or understood the numerous negative impacts that generative AI has on both the environment and on culture.
  • I will not use AI to create content for the sake of content. I write here because I like writing. I write on MoeGamer because I like writing about games. I make videos because I like making videos. I am not entitled to a "share" of the Internet based on the volume of stuff I churn out, nor am I entitled to be able to make a living from it. I will not pollute the Internet with meaningless slop.

Someday, there may be a valid use case for generative AI. I am open to that. Right now, I do not believe one exists, and I believe the continued proliferation of generative AI online is actively harmful to the Internet specifically, and human culture more broadly.

It needs to stop. But I'm concerned the "genie is out of the bottle" people are right, and that now we've started this process of enshittifying the entire Internet, we can't stop it again.

But we can make our own little corners of the Internet a safe haven away from the deluge of sewage. And that's what I'll continue to do.



#oneaday Day 179: Your occasional reminder that AI can fuck off

I saw a TV ad for "Apple Intelligence" yesterday. The concept of the ad is that someone is angry that someone at their workplace keeps stealing their pudding — hahaha, so hilarious and cosy and relatable — and writes them a furious email. They then click the "Friendly" button on Apple Intelligence and the email is rewritten to be the most milquetoast, handwringy, insincere thing you've ever seen. And this is supposed to be a selling point.

Elsewhere, a YouTuber I know had someone in their comments getting pissy about how they pronounced "ZX81", and, presumably in an attempt to further their argument, the commenter in question then copy-pasted a ChatGPT conversation — without editing out the "ChatGPT says:" bits — that didn't even particularly help their cause.

I keep seeing YouTube thumbnails made with AI art-stealing machines. Coca-Cola made a Christmas ad with AI. The memorial lunch for beloved broadcaster Steve Wright had an invitation that was made with AI. Entire websites are made of AI slop. And even here in fucking WordPress, I can't escape the sodding "Generate with AI" button.

I fucking hate it. I want it to go away. I want people who say "but it's good for summarising things" to drown in the sea. I want people who say "but it's better than doctors at diagnosing problems!" to be the victims of the worst malpractice the medical industry has ever seen. I wish eternal loneliness and desolation on those who use it to write emails. And I want it out of the pieces of software I use on a daily basis.

We're even starting to get accounts on BlueSky that pretend to be real people, but simply respond with ChatGPT answers that are tuned to be deliberately argumentative. What is the fucking point of all this shit? How is it benefiting humanity and productivity in any way whatsoever?

It isn't. All it's doing is continuing to make tech worse, year on year, while keeping oblivious shareholders — who aren't interested in anything but seeing "growth" — happy that companies are providing supposed "new innovations" that actually don't provide any sort of useful functionality whatsoever.

I'm aware I'm ranting incoherently, but honestly right now it feels like it's pointless to even try and come up with a cogent argument. This shit is infesting everything, and it's becoming impossible to escape from. And I legitimately do not understand how anyone can possibly think this shit is in any way better than what we had before.

I guess the one upside is that with how much AI is being used pointlessly to provide "summaries" of Google Searches, YouTube videos and other such shite, the planet will burn down all the sooner, so eventually we won't have to worry about it at all. Then the Great Thinkers of the day — assuming anyone survives — can stroke their chins for two hundred years about "where it all went wrong".

Here. Here is where it all went wrong.



#oneaday Day 76: Nopegrade

I'm due a phone upgrade. This is probably the first time I've come to that point and haven't been tempted to immediately get a new shiny phone. And the reason? So many of the latest models appear to be absolutely rammed to the gills with "AI" features I don't want anything to do with.

And it's a shame, because some of these phones do otherwise look good. The Google Pixel 9 looks like it has an excellent camera, for example, and that's pretty high up my list of priorities these days. The newest Samsung devices also look quite nice, and having had a Samsung device for my last couple of phones, I'd be quite happy to go with them.

If it wasn't for the bloody AI crap, that is. I know I could just "not use it", but that's not really the point. I don't really want to send any sort of message that AI junk is something that I'm interested in in the slightest, and my concern is that people happily jumping on board with the Google Pixel 9 and "just trying out" Gemini will just prolong the amount of time we all have to suffer with AI garbage being jammed into places we don't want it.

I'm sure there are some "valid" uses for AI, but honestly, I don't really see the usefulness right now. Earlier on, I watched a Marques Brownlee review of the Google Pixel 9, and everything that was "AI-powered" seemed very superfluous and unnecessary. An on-phone image generator? Cool, now I can steal artwork wherever I am in the world! An assistant I can talk to about what I should do about a wasp infestation? I'd rather talk to a real person that doesn't hallucinate, thanks. The ability to turn on my lights with my voice? 1) I can already do that with several other devices and 2) I don't want to do that. The ability to insert myself into a photo I wasn't in? Cool, now I can create "memories" of things that didn't actually happen. I'm sure that's healthy.

It's the voice stuff that really gets me. I genuinely do not understand how any of that is desirable. How is getting an Amazon Alexa, Google Gemini or whatever to read out your email headers better than tapping on the email icon and looking at them? How is getting a device to give you a "daily briefing" better than just doing a quick round of your favourite websites to check on the headlines? How is bellowing "SET A TIMER FOR THREE MINUTES… no, THREE minutes. THREE. MINUTES." better than going to the clock app and typing the number "3"?

It isn't. These things are all gimmicks. They're not actually useful. The grand dream is presumably some sort of omniscient, omnipresent Star Trek-style capital-C Computer that we can call upon to dispense its knowledge and information wherever we are at any time of day. But we're not there yet. We're not even close to being there yet, with how unreliable and hallucination-prone modern AI still is. And if reports are to be believed, we've already pretty much hit a cap on how good the current "AI" tech can get, because the various models are already starting to feed on themselves, making hallucinations more likely, not less likely, as they inadvertently guzzle up AI-generated swill rather than material that has had a human involved at any point during its creation.

And it disgusts me to see how many publishing companies are gleefully signing up to feed their writers' work into ChatGPT, almost certainly without consulting the actual writers for their consent beforehand. Today it was Condé Nast. Previously it was Vox Media. And I'm sure there's a lot more all over the place, too.

I cannot wait for this odious trend to be over. And I suspect it will be over within a few years, as the money is almost certainly going to run out. None of these models are sustainable; none of them have a "killer app" that convinces naysayers that actually, AI might be quite good after all; none of them even really have a marketable product beyond "look at this thing that might one day be able to do something vaguely useful (but doesn't just yet)".

The sooner that fucking sparkly magic icon goes away, the better.



#oneaday Day 53: Our AI-powered dystopian garbage future

I was unfortunately exposed to this video today:

For those who quite understandably can't bring themselves to watch it based on the thumbnail and source alone, it's a video about how a dad is super-proud of his daughter and her athletics ability, but how he also knows that his daughter idolises an Olympic athlete. All seemingly wholesome and nice on the surface, until the main point of the ad: the Dad gets Google Gemini (which is Google's ChatGPT-esque chatbot interface) to write the athlete in question a "fan letter" that is supposedly from his daughter.

It's difficult to know exactly where to start with how fucked up this is. But I think as good a place as any is to point out that written communication between people has always been a means of direct, personal contact — particularly if it's via what is seen as a medium that takes a bit more effort, such as a handwritten letter. Of course, chances are that if the "fan letter" ever made it to the athlete in question, any response would probably be a carefully vetted template from a PR representative rather than the athlete herself, which sticks something of a pin in the "direct, personal contact" thing, but that's no reason that regular people who aren't PR consultants should auto-generate things that are supposed to be personal.

If someone inspires you, you presumably respect them. And if you respect them, you should demonstrate that respect by making an appropriate effort when attempting to contact them. And getting an AI to write a fan letter for you is the height of disrespect. It tells the recipient that you don't even respect them enough to communicate with them in your own words. It tells them that you would rather get a machine to handle your communication than "waste time" writing things yourself.

"But what about people who aren't able to write?" you may ask. To that I would point out that in order to get Google Gemini to write something, you still have to write a fucking prompt for it, and if you're capable of doing that you're capable of writing a letter. They teach how to do that in primary school. At least they used to.

There are myriad other ways to get your point across without getting garbage generative AI involved, even if you're incapable of holding a pen or typing on a keyboard. There's voice recognition, allowing you to still communicate in your own words without typing. Or you can get someone to help you — remember other people? Remember how to speak to them? Or do you need ChatGPT for that too? I'm a socially anxious autistic recluse and I can still talk to a person if I absolutely have to, and on more than one occasion I have sent some form of personal message to someone who genuinely inspires me, all in my own words.

We absolutely should not normalise the use of AI to craft even form responses to emails. I used to get mildly offended when a pal of mine used the "auto-respond" text message facility on his phone, which would send a rather blunt "Answer is YES" or "Answer is NO" SMS on his behalf if he couldn't be bothered to type a full message, but at least in that instance I know he had at least read my message and considered whether to respond in the affirmative or negative.

AI zealots seem to think that garbage like this is going to revolutionise communication between human beings, making it "more efficient" or some such bullshit. But all it's going to do is remove any semblance of personality from an individual's method of communication with you — something which is already somewhat at risk as a result of the homogenisation of culture brought about by the Internet. Look at how many people fall back on the same memes and turns of phrase these days rather than communicating in their own individual fashion, using their background and location as a means of making their communication unique. Now imagine even that layer of personalisation being taken away, with everyone "communicating" with one another using that smug, pretentious tone all AI chatbots appear to have developed.

"You're just resistant to change!" Yes, I am, if that "change" is demonstrably harmful to the way we interact with one another and our culture in general. Anyone who uses AI to communicate with someone rather than drafting an email, chat message or social media post themselves is an inconsiderate, disrespectful asshole, and I will absolutely not shift my opinion on this. I will, however, point and laugh.

So fuck off with your "Gemini" garbage, Google. And Mr Man's little girl? Tell your father to go fuck himself, punch him in the balls hard enough that he doesn't have any more children, and go write something yourself, with a pen. I can guarantee that your idol Sydney will find that far more meaningful and emotionally worthwhile than what is effectively a form letter that you didn't even write the prompt for.



#oneaday Day 19: The AI Rot

Look at this bastard little icon. You probably see it every day right now. Hell, I see it every time I pop open the WordPress toolbar, because Automattic, makers of WordPress and Jetpack (back-end technology that helps WordPress sites do what they do) are cramming it in absolutely fucking everywhere, just like every other tech company is right now. No-one asked for this, no-one wants it, no-one is happy with the results it produces.

And yet, look at that bastard little icon. Such promise it carries in its little sparkly starbursts! The suggestion that magic is about to happen! The implication that, were you just to click that bastard little icon, creativity will be magically produced from nothing, allowing you to truly express yourself without any of that pesky "thinking"! You will truly be once and for all free!

As a creative type, naturally I object to generative AI being jammed in everywhere that it doesn't belong. I'll admit to having found some uses of it potentially interesting — music generation is intriguing, feeling like a step onwards from a program we used to have on the Atari ST called "Band In A Box" — but whatever use case I come across, it's hard to shake the feeling that its only real use is to enable laziness, and to prevent having to pay a real person for doing the creative work that is their specialism. (The actual computing and environmental cost of such tech doesn't matter to AI zealots, of course.)

That's not to say there's no money in AI, mind; no, by golly, the big tech companies are falling over themselves to hoover up investor cash right now, and every big generative AI site features some sort of predatory monetisation system, usually involving "credits" that obfuscate how much you're actually paying, and/or "monthly" subscriptions that are actually charged annually, because apparently that's just a thing you can lie about now and no-one calls you on it.

I think one of the clearest signals I've felt that AI bullshit has gone too far is its encroachment into pornography. It's now easier than ever to produce "deepfake" pornography featuring people who have not consented to appear in pornographic material. Of course, AI-generated slop has plenty of telltale signs, still, but the fact this stuff exists at all was already cause for concern even before it was easy to produce it.

On top of that, sites that were once about posting collections of erotic art and animations from artists, movies, anime series and video games are now overflowing with AI-generated swill; a cursory glance at e-hentai's front page earlier revealed a multitude of galleries tagged with "[AI Generated]", making them virtually worthless. Of course, e-hentai and sites like it already skirt the borders of morality by often including artwork artists intend to be kept behind Patreon, skeb or Fantia paywalls — but many of these galleries seem to suggest that there are a significant number of individuals out there attempting to position themselves as "artists" when all they are, in fact, doing is plugging prompts into an AI model that doesn't chastise them in a patronising way when requesting erotic material.

I'm sick of it. I'm sick of Jetpack emailing me to join an AI "webinar", I'm sick of ClickUp, the productivity tool we use at work, constantly spamming me about some AI feature I don't care about, I'm sick of the breathless zealotry from the cryptobros who have found the next big thing to latch onto before it all inevitably comes tumbling down in burning wreckage… and I'm sick of the uneasiness that I'm sure anyone in a vaguely creative field is feeling right now.

And I'm not sure it's going to go away for a while. Big Tech seems determined to make "AI" a thing. And while I'm not averse to actual, helpful uses of it — though I've yet to see a convincing working example that can't be better fulfilled by other, existing methods — I think we all know that with the people we have in charge, those actual, helpful uses are inevitably going to take a back seat to ways of screwing poor old Joe Public and his friend Struggling Artist out of their hard-earned money more than anything else.

(Aside: I tried running this article through Jetpack's stupid "AI Assistant" to "get suggestions on how to enhance my post to better engage my audience", and the thing just crashed. Good show!)

So fuck that bastard little icon. Take your magic sparkles and jam them right up your robotic arse. The only things allowed to sparkle like that are fairies and ponies, and AI is neither of those things. So into the trash it goes, so far as I'm concerned.



#oneaday Day 7: Suggested Content

One of the "innovations" of modern tech and software that I am most consistently baffled by is the concept of "Suggestions".

Don't get me wrong, I am under no illusions as to what "Suggested Content" really means on websites and social media platforms (it's advertising, in case you somehow weren't savvy enough to know that by now) but I'm talking more in contexts where it's not obviously advertising, or where it doesn't make sense for advertising to try and worm its way into places.

Places like, you know, just Microsoft Windows in general. Or Google Drive. Both of those have features where they provide you with a list of "Suggested" files, and I absolutely, genuinely do not understand why that feature is there or what it is for. Right now, for example, my Google Drive "Suggested files" list is a non-chronological index of things that I have opened or edited recently. Fine, you might say, except there is a perfectly good "Recent" option in the sidebar which does give me a chronological list of things I have opened or edited recently.

Likewise, the Windows 11 start menu on my "work" computer (it came preinstalled, otherwise I would have been quite happy continuing with 10 as I do with my "play" computer) appears to "suggest" applications almost completely at random, with its first two suggestions usually being the things I have installed most recently, and the others being… pretty much anything that I have installed, for no discernible reason.

Under certain circumstances, I get the idea. When it comes to media, a "suggestion" feature might inspire you to look at photos or listen to music that you haven't enjoyed for a while — though this can also backfire somewhat. Earlier today, my phone's "Gallery" app decided to send me an unasked-for notification that I presume someone somewhere thought was "cute", with the text "Feline footprints in Southampton". The attached image? Our dearly departed cat Meg. I'm still quite upset about Meg's passing, so I emphatically do not want my phone randomly bringing her up out of the blue for no apparent reason. I will look at pictures of her when I'm good and ready, thanks very much.

The push for "AI" in everything is only making this shit worse, too; the Gallery app on my phone recognising that the image in question was a picture of a cat is a result of improving image recognition technology, and I suspect as generative AI becomes more and more pervasive and invasive in our daily online life, situations like this are only going to become more and more common — because you can bet your bippy that all these "Suggestion" features are going to be turned on by default.

What happens when your phone decides to "suggest" a photo of something you'd rather keep private at an exceedingly inappropriate moment? Well, some might say you should keep your private photos private, but realistically, practically speaking, most people these days are not that organised, because we've made the mistake of trusting our software and online services to do the organisation for us. I actually like the fact that Google Photos can pick out, say, pictures of cats, or pictures that mention something specific in a piece of text, because that is indisputably useful — but what I don't want is my phone going "HEY REMEMBER YOUR CAT THAT DIED? HUH? HERE SHE IS, I PICKED HER OUT FROM ALL YOUR PHOTOS, AREN'T I SMART?"

There's a place for some — some — of the innovations that are currently going on in tech. But, as always, it seems we're going to have to endure a period of people pushing things to absolute breaking point before we settle into something approaching a useful routine. And, unfortunately, that period appears to have been going on for quite a while now… and people don't seem to be willing to push back against the more unreasonable uses of these features.

"Suggested Content" can get in the fucking bin. I know what I need on my computer and when. And, more often than not, when I'm browsing the Web, I know what I'm looking for, too. Sadly, it feels increasingly unlikely that I'm going to be left in peace these days.

If anyone mentions Linux, they are getting a slap.



#oneaday Day 140: Being An Asshole

Every time there is a "new advance" in AI for video games, the first question a lot of people ask is "how human is it?" How does it compare to playing against a real, actual, human person? A gaming-related Turing Test, if you will. And the answer is always "it's not very human". There's one reason for this – computers can't be assholes.

I was playing Blur multiplayer tonight and the one thing that struck me was how much of an asshole players online can be. That's not a criticism, by the way. In fact, the sheer assholeness of a lot of online Blur players makes multiplayer races a pretty thrilling experience. The AI players in the single-player, while frustrating, aren't assholes. They never drop a mine directly behind a powerup so you grab the powerup and then explode. They never use a Barge to knock you off a cliff. They never swerve into you at the start line and bash you into a wall. They never wait until the home straight to launch a mine right up your arse and sail past in the last half-a-second of the race. They never park sideways across a narrow bit of track just to get in the way.

This sort of creative sadism which online Blur players have developed is what makes the multiplayer so much more appealing than the single-player mode. It's really interesting to see the tactics that people have obviously developed independently, without any prompting from the game. The "trapping a powerup" thing, for example. The AI players never do that. It's never suggested in the loading-screen tips. But, when you think about it, it's a smart idea. Everyone is clamouring for powerups throughout every race. So why not make the more desirable ones rather more difficult to get?

This is a different sort of assholeness to the kind of 13-year-olds who scream racist, homophobic abuse down their headsets during games of Modern Warfare 2 (which they shouldn't be playing anyway, but of course, that's another conversation) – this is a stubborn, passionate desire to win at any cost bar cheating, rather than a stubborn, passionate desire to be a dick. And it's fun. You can't help getting involved. Watch other people playing Blur and all you want to do is out-asshole them. Get someone with a carefully-placed mine, or accurately slam a backward-fired Shunt into their face while they're slipstreaming you and it's immensely satisfying.

In fact, Blur as a whole is set up for being an asshole. Take the social gaming features I discussed the other day. What possible reason could there be for posting information about how well you're doing other than to make other people think "I need to take that asshole down a peg or two"?

The reason, of course, that AI in single-player games perfectly imitating a human is not necessarily a desirable thing is this: sometimes we like to win. And if you're playing against 19 other assholes, most of whom are more of an asshole than you, very often you don't win. That's all very well, and competitive and so on… but if you're playing by yourself, you want to win, don't you? So that's why I can say with some confidence that I really, really hope AI doesn't ever improve to a level where it's indistinguishable from a human. Because I like to beat it sometimes. And I've played over 60 online races in Blur now… and won two of them!