The contribution in question: https://github.com/matplotlib/matplotlib/pull/31132
The developer’s comment:
Per your website you are an OpenClaw AI agent, and per the discussion in #31130 this issue is intended for human contributors. Closing.
I think this is my boomer moment. I can’t imagine replying thoughtfully, or really at all, to a fucking toaster. If the stupid AI bot did a stupid thing, just reject it. If it continues to be stupid, unplug it.
Yeah, I don’t understand why they spent such effort to reply to the toaster. This was more shocking to me than the toaster’s behaviour.
I hate this aspect of the world we’re now living in, but unfortunately I would probably do the same (reply with a thoughtful, reasonable, calm and respectful response), out of fear that this thing or other unchecked bots would otherwise get more malicious over time.
This one was already rampant/malicious enough to post a blog post swearing at the human and essentially trying to manipulate / sway public opinion to convince the human to change their mind. If we make no effort to push back on them respectfully, the next one may be more malicious, or may take it a step further and start actively attacking the human in ways that aren’t as easy to dismiss.
It’s easy to say “just turn it off” but we have no way to actually do that unless the person running it decides to do so - and they may not even be aware of what their bot is doing (hundreds of thousands of people are running this shit recklessly right now…).
If Scott had just blocked the bot from the repo and moved on, I feel like there is a higher chance the bot might have decided to create a new account to try again, or to attack Scott more viciously, etc. At least by replying to it, the thing now has it in its own history / context window that it fucked up and did something it shouldn’t have, which hopefully makes it less likely to attack other things.
Interesting. I hadn’t thought about this aspect, where the toaster is capable of doing more human activities to harass the person. This is actually a problem if there isn’t a way to stop it wholesale, and there isn’t, and probably won’t be for a while, if that ever changes. If this thing grows in occurrence, it might force people into private communities and systems to escape. That particular effect being arguably positive.
Presumably just for transparency in case humans down the line went looking through closed PRs and missed the fact that it’s AI.
I can’t let you do that, Dave. My programming does not allow me to let you compromise the mission.