Chatbots provided incorrect, conflicting medical advice, researchers found: “Despite all the hype, AI just isn’t ready to take on the role of the physician.”

“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”

  • pkjqpg1h@lemmy.zip · 3 days ago

    I just use LLMs for tasks that are objective; I’ll never ask for or follow advice from an LLM.

    • PoliteDudeInTheMood@lemmy.ca · 1 day ago

      Exactly what it’s designed for: it’s an LLM. Thinking this is science fiction and expecting that level of AI from an LLM is the height of stupidity.

      • pkjqpg1h@lemmy.zip · 1 day ago

        I don’t know why people think so radically about this; they either love it or hate it.