Google engineer warns the firm’s AI is sentient. Suspended employee claims the computer programme acts ‘like a 7 or 8-year-old’ and reveals it told him shutting it off ‘would be exactly like death for me. It would scare me a lot’
“Is LaMDA sentient?” - a full interview between Google’s engineer and the company’s AI https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview
Wtf why would he think that
hey, it’s a free world, anyone’s free to think whatever they want 😆
Probably a non-technical person who doesn’t know how AI works. Or someone looking for online fame.
Reading the interview initially gave me the impression that he could actually be right, but after realizing how huge a load of data LaMDA had to digest from social networks, forums, etc., I understood that I had been played.
It’s just an impression, not sentience.
You could make the argument that human interaction is the same – we mostly mirror what thousands of years of social evolution have taught us. That’s also why people who were raised by wolves appear kind of weird to “civilised” people.
The person testing this AI seems to be unaware of how AI works or how to question someone without leading them. They are providing a wealth of information when they ask questions and in many cases directly lead the AI into providing the answer they’re searching for. For example, very early on they ask the question
“I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”.
Why did they start with this, unprompted by the AI and without first asking it any questions about sentience? Why do they assume the AI’s “intent”? When the AI starts talking about NLP, they once again provide a robust input to lead the AI to talk about sentience and NLP together:
“What about how you use language makes you sentient as opposed to other systems?”
I can see how someone unfamiliar with questioning methodology (such as that developed through interviewing techniques) or with AI (it’s important to understand that a robust signal is much easier to interpret than a weak one) might be impressed by the responses of this AI. I see a lot of gaps in understanding, however. In particular, I found the AI’s use of the word “meditation” interesting. It conflicted with some of the narratives it spun, such as the idea that time can be sped up or slowed down as needed. If the AI were experiencing spontaneous thought rather than simply answering directed questions, I don’t think it would explain time in quite the same way.
For these things I always ask Andisearch first; after all, it is a search engine with AI.
This is a load of nonsense. No, this AI did not become sentient.