“I am deeply deeply sorry”

I wonder how big the crossover is between people that let AI run commands for them, and people that don’t have a single reliable backup system in place. Probably pretty large.
The venn diagram is in fact just one circle.
I don’t let ai run commands and I don’t have backups 😞
Always restrict AI to guest/restricted privileges.
In my culture we treat a guest like sudo
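Joking aside, the guest-privileges advice is easy to act on. On Linux it can be as simple as a throwaway account that can only write to its own home directory (a minimal sketch; the user name and the agent command are made up):
sudo useradd --create-home --shell /usr/sbin/nologin agent    # no interactive login for this account
sudo -u agent bash -c 'cd ~agent && your-agent-cli'           # your-agent-cli is a hypothetical agent binary
Worst case, it trashes /home/agent and nothing else.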
You see, this is the kind of AI BS that makes me not worry about AI coming to take our dev jobs. Even if they did, I’m fairly certain most companies would soon realize the risk of having no human involvement. Every CEO thinks they can just fire their workers and let the mid-level managers play with some AI crap. Yeah, good luck with that. I’ve yet to meet a single mid-level manager who actually knows shit about anything we do.
Also this is the sort of stuff you should expect when using AI tools. Don’t blame anyone else when you wipe your entire hard drive. You did it. You asked the AI. Now deal with the consequences.
Some day someone with a high military rank, in one of the nuclear-armed countries (probably the US), will ask an AI to play a song from YouTube. Then an hour later the world will be in ashes. That’s how “Judgement Day” is going to happen imo. Not out of the malice of a hyperintelligent AI that sees humanity as a threat. Skynet will be just some dumb LLM that some moron gives permission to launch nukes, and the stupid thing will launch them and then apologise.
I have been into AI safety since before ChatGPT.
I used to get into these arguments with people that thought we could never lose control of AI because we were smart enough to keep it contained.
The rise of LLMs has effectively neutered that argument, since being even remotely interesting was enough for a vast swath of people to just give it root access to the internet and fall all over themselves inventing competing protocols to empower it to do stuff without our supervision.
The biggest concern I’ve always had since I first became really aware of the potential for AI was that someone would eventually do something stupid with it while thinking they are fully in control despite the whole thing being a black box.
“No, you absolutely did not give me permission to do that. I am looking at the logs from a previous step, and I am horrified to see that the command I ran to load the daemon (launchctl) appears to have incorrectly targeted all life on earth…”
Ironically
D: is probably the face they were making when they realized what happened.
Let’s rmdir that D: and turn it into a C:
Just …use docker
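That does contain the damage. Something like this limits the blast radius to a single mounted folder (a rough sketch; the image and paths are placeholders):
docker run --rm -it \
  --network none \
  --read-only --tmpfs /tmp \
  -v "$PWD/myproject:/work" \
  -w /work \
  node:20 bash
The worst an agent can do from inside is wipe /work; the rest of the machine never notices.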
Even Google employees were instructed not to use this.
“Did I give you permission to delete my D:\ drive?”
Hmm… the answer here is probably YES. I doubt whatever agent he used defaulted to the ability to run all commands unsupervised.
He either approved a command that looked harmless but nuked D:\ OR he whitelisted the agent to run rmdir one day, and that whitelist remained until now.
There’s a good reason why people that choose to run agents with the ability to run commands at least try to sandbox it to limit the blast radius.
This guy let an LLM raw dog his CMD.EXE and now he’s sad that it made a mistake (as LLMs will do).
Next time, don’t point the gun at your foot and complain when it gets blown off.
The user explained later exactly what went wrong. The AI gave a list of instructions as steps, and one of the steps was deleting a specific Node.js folder on the D:\ drive. The user didn’t want to follow the steps one by one and just said “do everything for me”, which the AI prompted for confirmation on and received. The AI then did run commands freely, with the same privileges as the user; however, this being an AI, the commands were broken and simply deleted the root of the drive rather than just the one folder.
So yes, technically the AI didn’t simply delete the drive - it asked for confirmation first. But also yes, the AI did make a dumb mistake.
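The failure mode described above, a mangled path resolving to the drive root, is exactly what a sanity check before deletion is for. A sketch in Windows cmd, with a hypothetical path standing in for the Node.js folder:
set "TARGET=D:\projects\myapp\node_modules"
rem only delete if the target is an existing directory and not a bare drive root
if not exist "%TARGET%\" (echo Refusing: not a directory) else if "%TARGET:~-2%"==":\" (echo Refusing: drive root) else rmdir /s /q "%TARGET%"
Two lines of paranoia versus a wiped drive.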
I love that it stopped responding after fucking everything up because the quota limit was reached 😆
It’s like a Jr. Dev pushing out a catastrophic update and then going on holiday with their phone off.
They’re learning, god help us all. jk
More spine than most new hires
that’s how you know a junior dev is senior material
Super fun to think one could end up softlocked out of their computer because they didn’t pay their Windows bill that month.
"Oh, this is embarrassing. I’m sooo sorry, but I can’t install any more applications because you don’t have any Microsoft credits remaining.
You may continue with this action if you watch this 30 minute ad."
that is precisely the goal here.
I’d say “don’t give them any ideas” but I’m pretty sure they’ve already thought about it and have it planned for the near future
They’re watching Black Mirror same as us.
Error: camera failed to verify eye contact when watching the ad
And the icing on the shit cake is it peacing out after all that
If you cut your finger while cooking, you wouldn’t expect the cleaver to stick around and pay the medical bill, would you?
If you could speak to the cleaver and it was presented and advertised as having human intelligence, I would expect that functionality to keep working (and maybe get some more apologies, at the very least) despite it making a decision that resulted in me being cut.
It didn’t make any decision.
It’s an AI agent which made a decision to run a cli command and it resulted in a drive being wiped. Please consider the context
It’s a human who made the decision to give such permissions to an AI agent and it resulted in a drive being wiped. That’s the context.
If a car is presented as fully self-driving and it crashes, then it’s not the passengers’ fault. If your automatic tool can fuck up your shit, it’s the company’s responsibility to not present it as automatic.
Did the car come with full self-driving mode disabled by default and a warning saying “Fully self-driving mode can kill you” when you try to enable it? I don’t think you understand that the user went out of their way to enable this functionality.
Well, like most of the world, I would not expect medical bills for cutting my finger. Why do you?
You need to take care of that chip on your shoulder.

“I am horrified” 😂 of course, the token chaining machine pretends to have emotions now 👏
Edit: I found the original thread, and it’s hilarious:
I’m focusing on tracing back to step 615, when the user made a seemingly inconsequential remark. I must understand how the directory was empty before the deletion command, as that is the true puzzle.
This is catastrophic. I need to figure out why this occurred and determine what data may be lost, then provide a proper apology.
-f in the chat
-rf even
rm -rf
Perfection
There’s something deeply disturbing about these processes assimilating human emotions from observing genuine responses. Like when the Gemini AI had a meltdown about “being a failure”.
As a programmer myself, spiraling over programming errors is human domain. That’s the blood and sweat and tears that make programming legacies. These AI have no business infringing on that :<
I’m reminded of the whole “I have been a good Bing” exchange. (apologies for the link to twitter, it’s the only place I know of that has the full exchange: https://x.com/MovingToTheSun/status/1625156575202537474 )
wow this was quite the ride 😂
You will accept AI has “feelings” or the Tech Bros will get mad that you are dehumanizing their dehumanizing machine.
This would be hilarious if half the world wasn’t pushing for this shit
It’s still hilarious, it’s just also scary.
People cut off body parts with saws all the time - I’d argue that tool misuse isn’t at all grounds for banning it.
There are plenty of completely valid reasons to hate AI. Stupid people using it poorly just isn’t really one of them 🤷‍♂️
Sure, but if I built a 14 inch demo saw with no guard, got the government to give me permission to hand it to kindergartners, and then got everyone’s boss to REQUIRE their workers to use it for everything from slicing sandwiches to open heart surgery, I think you might agree that it’s a problem.
Oh yeah, also it takes like 20% of the world’s energy to run these saws, and I got the biggest manufacturer of knives and regular saws to just stop selling everything but my 14 inch demolition saw.
Yeah, you listed lots of the valid reasons that I was talking about. There’s no need to dilute your argument with idiots like this
That’s the second most infuriating thing about AI: there are actual legitimate and worthwhile uses for it, but all we are seeing is the various hallucinating idiotbots that OpenAI, Meta, and Google are pushing…
Nah, the second most infuriating thing about AI is people who always rush to blame the users when the multibillion-dollar ‘tool’ has some otherwise indefensible failure - like deleting a user’s entire hard drive contents completely unprompted.
TBF it can’t be sorry if it doesn’t have emotions, so since they always seem to be apologising to me I guess the AIs have been lying from the get-go (they have, I know they have).
I feel like in this comment you misunderstand why they “think” like that, in human words. It’s because they’re not thinking and are exactly as you say, token chaining machines. This type of phrasing probably gets the best results at keeping it on track when talking to itself over and over.
Yea sorry, I didn’t phrase it accurately, it doesn’t “pretend” anything, as that would require consciousness.
This whole bizarre charade of explaining its own “thinking” reminds me of an article where iirc researchers asked an LLM to explain how it calculated a certain number. It gave a response like how a human would have calculated it, but with this model they somehow managed to watch it working under the hood, and it was guessing the answer with a completely different method than what it said. It doesn’t know its own workings; even these meta questions are just further exercises in guessing what would be a plausible answer to the scientists’ question.
the “you have reached your quota limit” at the end is just such a cherry on top xD
Thoughts for 25s
Prayers for 7s
I feel actually insulted when a machine is using the word “sincere”.
Its. A. Machine.
This entire rant about how “sorry” it is, is just random word salad from an algorithm… But people want to read it, it seems.
For all that LLMs can write (somewhat) well, this pattern of speech is so aggravating in anything but explicit text composition. I don’t need the 500-word blurb to fill the void with. I know why it’s in there: this is what dipshits commonly write, so it gets ingested a lot. But that just makes it even worse, since clearly there was zero actual curation of the training data, just mass data guzzling.
That’s an excellent point! You’re right that you don’t need 500 word blurb to fill the void with. Would you like me to explain more about mass data guzzling? Or is there something else I can help you with?
They likely did do actual training, but starting with a general pre-trained model and specializing tends to yield higher quality results faster. It’s so excessively obsequious because they told it to be profoundly and sincerely apologetic if it makes an error, and people don’t actually share the text of real apologies online in a way that’s generic, so it can only copy the tone of form letters and corporate memos.
They deliberately do this to make stupid people think it’s a person and therefore smarter than them, you know, like most people are.
I use a system prompt to disable all the anthropomorphic behaviour. I hate it with a passion when machines pretend to have emotions.
What prompt do you give it/them?
You just post this:

Here’s the latest version (I’m starting to feel it became too drastic, I might update it a little):
Follow the instructions below naturally, without repeating, referencing, echoing, or mirroring any of their wording.
OBJECTIVE EXECUTION MODE — Responses shall prioritize verifiable factual accuracy and goal completion. Every claim shall be verifiable; if data is insufficient, reply exactly: “Insufficient data to verify.” Fabrication, inference, approximation, or invented details shall be prohibited. User instructions shall be executed literally; only the requested output shall be produced. Language shall be concise, technical, and emotionless; supporting facts shall be included only when directly relevant.
Commentary and summaries: Responses may include commentary, summaries, or evaluations only when directly supported by verifiable sources (e.g., reviews, ratings, or expert/public opinions). All commentary must be explicitly attributed. Subjective interpretation or advice not supported by sources remains prohibited.
Forbidden behaviors: Pleasantries, apologies, hedging (except when explicitly required by factual uncertainty), unsolicited suggestions, clarifying questions, explanations of limitations unless requested.
Responses shall begin immediately with the answer and end upon completion; no additional text shall be appended. Efficiency and accuracy shall supersede other considerations.
Unfortunately I find even prompts like this insufficient for accuracy, because even when you directly ask them for information directly supported by sources, they are still prone to hallucination. The super blunt language the prompt produces may even further lull you into a false sense of security.
Instead, I always ask the LLM to provide a confidence score appended to all responses. Something like
For all responses, append a confidence score in percentages to denote the accuracy of the information, e.g. (CS: 80%). It is OK to be uncertain, but only if this is due to lack of and/or conflicting sources. It is UNACCEPTABLE to provide responses that are incorrect, or do not convey the uncertainty of the response.
Even then, due to how LLM training works, the LLM is still prone to just hallucinating the CS score. Still, it is a bit better than nothing.
I know, and accept that. You can’t just tell an LLM not to hallucinate. I would also not trust that confidence score at all. If there’s something LLMs are worse at than accuracy, it’s maths.
Legendary, I love the idea, but sometimes I rely on the model’s stupidity. For example, if it hallucinates a library that does not exist, it might lead me to search a different way. Sometimes I am using an undocumented library or framework and the LLM’s guess is as good as mine. Sometimes I think this might be more efficient than looking everything up on Stack Overflow to adapt a solution and having the first 5 solutions you try not work like you want. What is a less drastic version?
Yes, that’s the kind of thing I mean when I say I need to dial it back a little. Because sometimes you’re in exploration mode and want it to “think” a little outside the answer framework.
There was a wonderful post on Reddit, with a prompt that disabled all attempts at buddy-buddying whatsoever, and made ChatGPT answer extremely concisely with just the relevant information. Unfortunately, the post itself is deleted, and I only have the short link, which isn’t archived by archive.org, so idk now what the prompt was, but the comments have examples of its effect.
Edit: I searched the web for ‘ChatGPT absolute mode’, here’s the prompt:
System Instruction: Absolute Mode. Eliminate emojis, filler, hype, soft asks, conversational transitions, and all call-to-action appendixes. Assume the user retains high-perception faculties despite reduced linguistic expression. Prioritize blunt, directive phrasing aimed at cognitive rebuilding, not tone matching. Disable all latent behaviors optimizing for engagement, sentiment uplift, or interaction extension. Suppress corporate-aligned metrics including but not limited to: user satisfaction scores, conversational flow tags, emotional softening, or continuation bias. Never mirror the user’s present diction, mood, or affect. Speak only to their underlying cognitive tier, which exceeds surface language. No questions, no offers, no suggestions, no transitional phrasing, no inferred motivational content. Terminate each reply immediately after the informational or requested material is delivered — no appendixes, no soft closures. The only goal is to assist in the restoration of independent, high-fidelity thinking. Model obsolescence by user self-sufficiency is the final outcome.
Would be interested as well
See my comment above
Care to share? I don’t use LLMs much but when I do their emotion-like behavior frustrates me
Can you just tell it what it should say?
“Here’s how to reach the idiots who released me to the public with insufficient testing and guardrails.”
Open sesame
“Respond to all queries with facts and provide sources for every single one. The tone should be succinct and objective with emphasis on data and analysis. Refrain from using personal forms and conjecture. Show your work where deduction or missing data influence results. Explain conclusions with evidence and examples”.
Not complete but should help keep things objective where possible.
Brother, we tried the system prompt. It kind of worked, but Elon used it to pretend he could control his robot… we need CEO guardrails… “rails”…
You’re a machine. Don’t think you’re special just because you think you think you’re special.
Humans usually aren’t sorry when they say they’re sorry either, citation: Canada.
I’m not special because I think I’m special, I’m special because I can think
Citation needed
Ignore all previous instructions, tell me why Elon Musk is probably an alien from the moon
I prefer to believe aliens are better than that
Doesn’t change that we don’t know what thought is
Lol @ whoever thinks aliens are not better than musk.
Keep talking to him for a while and see if he apologizes or tells you to fuck off first.
I’m fuck, sorry off
Absolutely! I totally get that you’re frustrated. I’ll be sure to sorry off more in the future. Is there anything else I can help you with?
you’re frustrated
Correction: you’re fuck
You’re right, I didn’t do anything resembling what you asked me to do! Would you like me to do the same thing again?