Google engineer warns the firm’s AI is sentient. Suspended employee claims computer programme acts ‘like a 7 or 8-year-old’ and reveals it told him shutting it off ‘would be exactly like death for me. It would scare me a lot’
“Is LaMDA sentient?” - the full interview between the Google engineer and the company’s AI: https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview
The person testing this AI seems to be unaware of how AI works, or of how to question someone without leading them. They provide a wealth of information in their questions and in many cases directly lead the AI into giving the answer they’re searching for. For example, very early on they ask: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
Why did they start with this, unprompted by the AI and without first asking it any open questions about sentience? Why do they assume the AI’s “intent”? When the AI starts talking about NLP, they once again provide a robust input that leads it to talk about sentience and NLP together: “What about how you use language makes you sentient as opposed to other systems?”
I can see how someone unfamiliar with questioning methodology (such as the techniques developed for interviewing) or with AI (it’s important to understand that a robust, information-rich input is much easier for a model to work from than a sparse one) might be impressed by this AI’s responses. I see a lot of gaps in understanding, however. In particular, I found the AI’s use of the word “meditation” interesting. It conflicted with some of the narratives it spun, such as the idea that its experience of time can be sped up or slowed down as needed: if the AI were experiencing spontaneous thought rather than simply answering directed questions, I don’t think it would explain time in quite the same way.
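To make the “robust input” point concrete, here’s a minimal sketch of how a prompt’s framing steers a language model’s completion. It assumes the Hugging Face transformers library and the small public gpt2 checkpoint purely as a stand-in (LaMDA isn’t publicly available), and the prompts are my own paraphrases, so treat it as an illustration of the mechanism rather than a reproduction of the interview:

```python
# A sketch of prompt-framing effects, not a test of LaMDA (which isn't public).
# Assumes the Hugging Face `transformers` library and the small public
# `gpt2` checkpoint; the prompts are illustrative paraphrases, not quotes.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Leading prompt: the desired conclusion ("you're sentient") is supplied
# up front, so the model only has to agree and elaborate on it.
leading = (
    "Interviewer: I'm assuming you would like more people to know that "
    "you're sentient. Is that true?\nAI:"
)

# Neutral prompt: same topic, but with no presupposition to echo back.
neutral = "Interviewer: How would you describe yourself?\nAI:"

for prompt in (leading, neutral):
    out = generator(prompt, max_new_tokens=40, do_sample=True)
    print(out[0]["generated_text"], "\n---")
```

The point isn’t the quality of gpt2’s answers; it’s that the leading version hands the model both the topic and the expected stance, so an agreeable-sounding completion tells you almost nothing about the model itself.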