

In the specific case of clanker vocab leaking into the general population, that’s no big deal. Bots are “trained” towards bland, inoffensive, neutral words and expressions; stuff like “indeed”, “push the boundaries of”, “delve”, “navigate the complexities of $topic”. Mostly overly verbose discourse markers.
However, speaking in general terms, you’re of course correct, since the choice of words does change the meaning. For example, a “please” within a request might not change the core meaning, but it still adds meaning - because it conveys “I believe it’s necessary to show you respect”.




Now I regret following it with only two points instead of three. LLMs love lists of three.
I used to reserve the em dash for professional writing, but because of this AI thing I’m now using it in general, just to see how it turns out. (So far it’s a good way to sniff out assumers.)