We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it is simply guessing, one step at a time, which word (or fragment of a word) is most likely to come next in the sequence, based on the data it was trained on.
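As a toy illustration of that next-word guessing (a sketch only; real LLMs use neural networks over subword tokens, and the training text here is made up):

```python
# Toy "statistical parrot": counts which word follows which in its
# training text, then writes by repeatedly sampling a likely next word.
import random
from collections import Counter, defaultdict

def train(text):
    follows = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=10):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        nxt, weights = zip(*options.items())
        out.append(random.choices(nxt, weights=weights)[0])
    return " ".join(out)

model = train("the cat sat on the mat and the cat slept on the mat")
print(generate(model, "the"))  # e.g. "the cat sat on the mat and ..."
```

Everything an LLM adds to this sketch is scale and a far better probability estimator; the loop of predicting the next token is unchanged.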

This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance: nothing more and nothing less.

So why is a real “thinking” AI likely impossible? Because it is bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because it has no cognition, not a shred, there is a fundamental gap between the data it consumes (data born of human feelings and experience) and what it can do with that data.

Philosopher David Chalmers calls the question of how our physical bodies give rise to subjective experience the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness emerges from the integration of internal mental states with sensory representations of the body (such as changes in heart rate, sweating and much more).

Given the paramount importance of the human senses and emotions for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI (a machine) and consciousness (a human phenomenon).

https://archive.ph/Fapar

  • terrific@lemmy.ml · 7 hours ago

    I’m a computer scientist who has a child, and I don’t think AI is sentient at all. Even before learning a language, children have their own personality and willpower, which is something I don’t see in AI.

    I left a well-paid job in the AI industry because the mental gymnastics required to maintain the illusion were too exhausting. I think most people in the industry are aware at some level that they have to participate in maintaining the hype to secure their own jobs.

    The core of your claim is basically that “people who don’t think AI is sentient don’t really understand sentience”. I think that’s both reductionist and, frankly, a bit arrogant.

    • jpeps@lemmy.world · 6 hours ago

      Couldn’t agree more - there are some wonderful insights to gain from seeing your own kids grow up, but I don’t think this is one of them.

      Kids are certainly building a vocabulary and learning about the world, but LLMs don’t learn.

      • stephen01king@lemmy.zip · 2 hours ago

        LLMs don’t learn because we don’t let them, not because they can’t. It would be too expensive to re-train them on every interaction.
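        As an illustration of that design choice, a minimal sketch (assuming PyTorch, with a tiny layer standing in for an LLM): deployed inference computes no gradients and runs no optimiser step, so nothing a user says ever changes the weights.

        ```python
        # Minimal sketch, assuming PyTorch; the Linear layer is a
        # stand-in for a full LLM. Deployed inference computes no
        # gradients and runs no optimiser step, so using the model
        # never changes its weights.
        import torch

        model = torch.nn.Linear(4, 4)
        before = model.weight.clone()

        with torch.no_grad():              # standard inference mode
            _ = model(torch.randn(1, 4))   # a "chat" with the model

        assert torch.equal(before, model.weight)  # nothing was learned
        ```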

        • terrific@lemmy.ml · 1 hour ago

          I know it’s part of the AI jargon, but using the word “learning” to describe the slow adaptation of massive arrays of single-precision numbers to some loss function is a very generous interpretation of that word, IMO.
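          To make that concrete, a toy sketch of what the “learning” amounts to (assuming NumPy; the data, model and learning rate are made up): nudging floats until a loss shrinks.

          ```python
          # Toy sketch of "learning" as loss minimisation, assuming NumPy.
          # One float32 parameter is nudged by gradient descent to fit y = 2x.
          import numpy as np

          x = np.array([1.0, 2.0, 3.0], dtype=np.float32)
          y = np.array([2.0, 4.0, 6.0], dtype=np.float32)  # true relation: y = 2x

          w = np.float32(0.0)    # all the model "knows": one single-precision number
          lr = np.float32(0.05)  # learning rate

          for _ in range(100):
              pred = w * x
              grad = np.mean(2 * (pred - y) * x)  # derivative of mean squared error
              w -= lr * grad                      # the entire act of "learning"

          print(w)  # converges towards 2.0
          ```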

          • stephen01king@lemmy.zip · 58 minutes ago

            But that’s exactly how we learn stuff as well. Artificial neural networks are modelled after how our neurons affect each other while we learn and store memories.

            • terrific@lemmy.ml · 12 minutes ago

              Neural networks are about as much a model of a brain as a stick man is a model of human anatomy.

              I don’t think anybody knows how we actually, really learn. I’m not a neuroscientist (I’m a computer scientist specialised in AI), but I don’t think the mechanism of learning is that well understood.

              AI hype-people will say that the brain works “like a neural network”, but I really doubt that. There is no loss function in reality, and certainly no way for the brain to perform gradient descent (a toy contrast is sketched below).
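              A toy contrast, assuming NumPy (sizes and rates are made up, and neither update is a claim about real brains): a Hebbian-style rule, the kind neuroscientists do discuss, uses only the activity of the two connected units, while a gradient-descent update needs a global loss routed back to every individual weight.

              ```python
              # Toy contrast between a local plasticity rule and gradient
              # descent, assuming NumPy; all values are illustrative.
              import numpy as np

              rng = np.random.default_rng(0)
              pre = rng.random(5)       # activity of "input" neurons
              post = rng.random(3)      # activity of "output" neurons
              w = rng.random((3, 5))    # connection weights
              eta = 0.01                # plasticity / learning rate

              # Hebbian-style update: purely local, no loss function anywhere.
              w += eta * np.outer(post, pre)

              # Gradient-descent update: needs a target and a global error
              # signal routed back to each individual weight.
              target = rng.random(3)
              error = w @ pre - target         # requires the global objective
              w -= eta * np.outer(error, pre)  # gradient of squared error for a linear layer
              ```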