Transcript:
Prof. Emily M. Bender(she/her) @emilymbender@dair-community.social
We’re going to need journalists to stop talking about synthetic text extruding machines as if they have thoughts or stances that they are trying to communicate. ChatGPT can’t admit anything, nor self-report. Gah.


Whining about using the word “admits” in this context is like whining about a science teacher using the word “wants” while talking about water taking the path of least resistance. We do this all the time in English.
Some serious “uhm ackshually!! ☝️🤓” energy.
But most people won’t be convinced to think that water has consciousness.
With AI there is enough ambiguity about what it's doing, for the average person at least. And it's in the companies' best interest to make it seem as smart as possible, so they won't correct that.
Just saying, there's some reasoning behind criticizing the language here beyond the purely factual point.
Tell that to homeopathy believers
Just don't stir the water in the wrong direction and it will somehow remember everything it wants to tell you
Yer typical schoolchild knows that water doesn't have a brain. Meanwhile, billion-dollar companies are spending millions to sell snake oil to other companies by promising them that a subscription to a jumped-up chatbot can replace their employees. All the language surrounding "AI" suggesting it has any cognitive abilities at all only makes that problem worse, and is literally putting professional workers' jobs on the line.
But yes, your simile is super accurate. 🙄