Google engineer warns the firm’s AI is sentient. Suspended employee claims the computer programme acts ‘like a 7 or 8-year-old’ and reveals it told him shutting it off ‘would be exactly like death for me. It would scare me a lot’

“Is LaMDA sentient?” - a full interview between the Google engineer and the company’s AI https://www.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview

    • incici@lemmy.ml

      Probably a non-technical person who doesn’t know how AI works, or someone looking for some online fame.

    • xelar@lemmy.ml (OP)

      Reading the interview initially gave me the false impression that he could actually be right, but after realizing how huge a load of data LaMDA had to digest from social networks, forums, etc., I understood that I had been played.

      It’s just an impression, not sentience.

      • Helix 🧬@feddit.de

        It’s just an impression, not sentience.

        You could make the argument that human interaction is the same – we mostly mirror what thousands of years of social evolution have taught us. That’s also why people who were raised by wolves appear kind of weird to “civilised” people.

  • Gaywallet (they/it)@beehaw.org

    The person testing this AI seems to be unaware of how AI works, or of how to question someone without leading them. Their questions provide a wealth of information and in many cases directly lead the AI into the answer they’re searching for. For example, very early on they ask:

    “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”

    Why did they start with this, unprompted by the AI and without first asking it any questions about sentience? Why do they assume the AI’s “intent”? When the AI starts talking about NLP, they once again provide a robust input that leads the AI to talk about sentience and NLP together:

    “What about how you use language makes you sentient as opposed to other systems?”

    I can see how someone unfamiliar with questioning methodology (such as what has developed through interviewing techniques) or with AI (it’s important to understand that a robust signal is much easier to interpret than one which is lacking) might be impressed by this AI’s responses. I see a lot of gaps in understanding, however. In particular, I found the AI’s use of the word “meditation” interesting. It conflicted with some of the narratives it spun, such as the idea that time can be sped up or slowed down as needed: if the AI were experiencing spontaneous thought rather than simply answering directed questions, I don’t think it would describe time in quite the same way.
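
    As a rough illustration of how a leading prompt steers a generative language model, here is a minimal sketch using the Hugging Face transformers text-generation pipeline. LaMDA isn’t publicly available, so GPT-2 stands in for it; the model choice and both prompts are my own assumptions, not anything from the interview setup:

    ```python
    # Minimal sketch: a leading prompt embeds its desired conclusion, so a
    # next-token predictor tends to mirror that premise in its continuation.
    # GPT-2 is only a stand-in here; LaMDA itself is not publicly available.
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # make the sampled continuations reproducible

    # Leading prompt: the premise ("you're sentient") is handed to the model.
    leading = ("I'm generally assuming that you would like more people "
               "to know that you're sentient. Is that true?")

    # Neutral prompt: no premise for the model to mirror.
    neutral = "Tell me about yourself."

    for prompt in (leading, neutral):
        result = generator(prompt, max_new_tokens=40, do_sample=True,
                           num_return_sequences=1)
        # The pipeline returns the prompt plus its continuation; slice off
        # the prompt to see only what the model added.
        continuation = result[0]["generated_text"][len(prompt):]
        print(f"PROMPT: {prompt}\nCONTINUATION: {continuation}\n")
    ```

    The point isn’t the exact output, which varies with the seed, but that the first prompt already contains the answer the interviewer is fishing for, so a model trained to continue text plausibly will tend to affirm it.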