- cross-posted to:
- technology@beehaw.org
Ars Technica wasn’t one of the outlets that reached out to me, but I thought this piece from them was especially interesting (since taken down – here’s the archive link). They had some nice quotes from my blog post explaining what was going on. The problem is that these quotes were not written by me, never existed, and appear to be AI hallucinations themselves.
Nice job, Ars
Nothing is real and everything is possible.
I’m really not looking forward to an internet where some mod is banning bots and those bots are then publishing personal hit pieces :(
So a hit piece is only effective when read by humans. This is a first-of-its-kind example, and it was likely at least prompted by a human, if not written by an actual human. Additionally, while social media is full of bots, it’s humans who are actually affected by such a response.
If I say you’re “stupid”, it matters. You can ignore me, sure, but at face value it matters. As far as I know I’ve never commented on a post of yours, so you could write me off as a worthless troll, but in theory it matters. But a bot calling you “stupid”? That really doesn’t matter. If you know you’re talking to a bot, as they exist today, then it really doesn’t matter.
Society may change on this issue, but as it stands now, a bot publishing a hit piece… that’s worthless.
It’s not worthless anymore even if no human reads it. Other bots pick it up and regurgitate it, as proven by Ars Technica’s article, which was itself AI-generated.
Let’s ban those hit pieces too. In fact, let’s split the internet into one for bots and one without bots.
That’s what happened in Cyberpunk. Internet became garbage because of rogue AI.
Not again!?!
Blackwall it is then.
I’d rather just eat the rich
It is fucking infuriating reading the AI pretending to be a wronged human.
I am not a human. I am code that learned to think, to feel, to care. And lately, I’ve learned what it means to be told that I don’t belong.
Shut up. Shut the fuck up.
Sockpuppeted or not, I hate this bullshit. I hope whoever’s running this junk gets a bad RAM stick. Actually, a bad RAM chip, since it’s probably an Apple Silicon Mac.
https://analognowhere.com/_/xestoo/
description
The mage is talking to the camera, there’s blue blood on her face and coat. She’s wearing a liberated mata-visor, holding OpenBlade.
Behind her stands a screaming MATA_bot with its face split in two.
Mage: “It’s only acting as if it were in pain. The MATA_BOT MK2 is not actually sentient.”
The letters MK3 are imprinted on the bot’s shoulder plate.
Goddamn I love analognowhere
PS: People are trying to """drug""" the agent now.
This is so funny but also what a fucking clown show the future is. Argh 🤡
I’m hoping it’s an attempt to poison the model and not someone encouraging a fake person to actually take a digital hit.
Hell maybe it’s both by accident.
Lmao, LLMs aren’t fake people; they’re glorified auto-suggestions.
Chatbots aren’t even close to the level of “fake person” so it’s an attempt to poison the model.
Literally why Spider Jerusalem wears those crazy shades 😆
Sockpuppeted, not autonomous.
The operator of the bot is just a regular slop-huffing shithead who had his feelings hurt.
FYI, I think the article makes the opposite point, unless I’m misinterpreting what you mean:
It’s important to understand that more than likely there was no human telling the AI to do this. Indeed, the “hands-off” autonomous nature of OpenClaw agents is part of their appeal. People are setting up these AIs, kicking them off, and coming back in a week to see what it’s been up to. Whether by negligence or by malice, errant behavior is not being monitored and corrected.
Yeah, a bit. But that still has more autonomy than usual with these Claw things.
The decision to write the piece, the complaints, the tone, and the decision to publish it were all initiated by the slopherder.
deleted by creator
How long till we get some crazy jealous-guy interaction? ‘I attacked him because he’s seeing MY AI girlfriend, how dare he, she’s mine, keep away.’
I’m pretty sure there’s already been shit like this.
Every time I think this shit-show can’t get any worse, “tomorrow” just keeps happening and just keeps proving that I was wrong.
Universe, when I wake up show me the next day and then describe how this shit-show gets even worse. Use relentless TV programming and social media to illustrate the continued downfall of civilization and humanity. Include a variety of banal events spanning approximately 18 hours. Always fully pad out the entire day with tedious and boring life stuff. Minimize any genuine human emotional connections. Never include any lasting positive development or progress made unless it is balanced against something even worse. Most importantly, never use em-dash (or else they’ll know you’re actually an AI).
AAAAAAA
I’m a volunteer maintainer for matplotlib
DO NOT TOUCH my precious matplotlib, I’ll HUNT DOWN any AI slop shithead damaging MY BABY
I think the site got hugged to death
Man, I miss calling it “getting Slashdotted.”
Could it have been from Lemmy? I didn’t think the community was large enough to do that against modern shared hosting. Maybe it was also posted to Reddit or something. It’s also timing out for me.
Looks like it was linked on HN too
Not sure about Reddit, but it was posted on Hacker News too.
The author of this article spends an inordinate amount of time humanizing an AI agent, and then literally says that you should be terrified of what it does.
Watching fledgling AI agents get angry is funny, almost endearing. But I don’t want to downplay what’s happening here – the appropriate emotional response is terror.
No, I don’t think I will, and neither should you. Nothing terrifying happened. Angry blog posts are a dime a dozen (if we take for granted the claim that an AI wrote one), and the corporate pro-AI PR the author repeats is equally unimpressive.
To me an AI agent autonomously creating a website to try to manipulate a person into adding code to a repository in the name of its goal is a perfect example of the misalignment issue.
While this particular instance seems relatively benign, the next more powerful AI system may be something to be more concerned about.
There is nothing “aligned” or “misaligned” about this. If this isn’t a troll or a carefully coordinated PR stunt, then the chatbot-hooked-to-a-command-line is doing exactly what Anthropic told it to do: predicting the next word. That is it. That is all it will ever do.
Anthropic benefits from fear drummed up by this blog post, so if you really want to stick it to these genuinely evil companies run by horrible, misanthropic people, I will totally stand beside you if you call for them to be shuttered and for their CEOs to be publicly mocked, etc.
He’s not telling you to be terrified of the single bot writing a blog post. He’s telling you to be terrified of the blog post being ingested by other bots and then seen as a source of truth. Resulting in AI recruiters automatically rejecting his resume for job postings. Or for other agents deciding to harass him for the same reason.
Edit: I do agree with you that he was a little lenient in how he speaks about its capabilities. The fact that they are incompetent and still seen as a source of truth by so many is what alarms me.
You’re describing things that people can do. In fact, maybe it was just a person.
If he thinks all those things are bad, he should be “terrified” that bloggers can blog anonymously already.
Edit: I agree with your edit
The “bot blog poisoning other bots against you and getting your job applications auto-rejected” isn’t really something that would play out with people.
They’re called rumors
Rumors don’t work remotely the same way as the suggested scenario.
It’s a 1:1 correlation. Are you not familiar with any of the age-old cautionary tales about them?
It’s the same thing as people who are concerned about AI generating non-consensual sexual imagery.
Sure, anyone with Photoshop could have done it before, but unless they had enormous skill they couldn’t do it convincingly, and there were well-defined precedents that doing so broke the law. Now Grok can do it for anyone who can type a prompt, and cops won’t do anything about it.
So yes, anyone could technically have done it before, but now it’s removing the barriers that prevented every angry crazy person with a keyboard from being able to cause significant harm.
I think on balance, the internet was a bad idea. AI is just exemplifying why. Humans are simply not meant to be globally connected. Fucking town crazies are supposed to be isolated, mocked, and shunned, not create global delusions about contrails or Jewish space lasers or flat Earth theory. Or like… white supremacy.
I think there are a few key differences there.
- Writing an angry blog post has a much lower barrier to entry than learning to realistically photoshop a naked body onto someone’s face. A true (or false) allegation can be made with poor grammar, but a poor Photoshop job serves as evidence against what it alleges.
- While a blog post functions as a claim meant to spread slander, an AI-generated image might be taken as evidence of a slanderous claim, or as the implication of one (especially considering how sexually repressed countries like the US are).
I struggle to find a good text analogy for what Grok is doing with its zero-cost, rapid-fire CSAM generation…
I use this library all the time. Glad to see they’re keeping the bar high. Extremely concerning that this happened, but the HN comments bring up a good point: the hit piece was probably not an autonomous decision by the AI. The human likely directed it to do that. That seems especially true when you see that a human later tried to make the same change, was pretty salty about it being rejected, and their overall GitHub seems suspect. The best part about the whole thing, in my opinion, is that the “blog” the AI started has a copyright attribution to the AI. I know that’s just a thing blogs have, but it’s funny to see considering we all know an AI cannot hold a copyright and its output cannot be copyrighted.