• Albbi@lemmy.ca · 1 month ago

    AI is not capable of doing wrong or evil. It is a tool, just as a hammer or a notepad is.

    A tool does exactly what you do with it. A hammer can pound nails or break skulls, but it’s always the person behind the tool who causes the action. Generative AI is not like that at all. If it’s a tool, you aren’t necessarily able to control what it does under your direction.

    • Melody Fwygon@beehaw.org · 1 month ago

      If it’s a tool, you aren’t necessarily able to control what it does under your direction.

      This is false. A tool, by definition, is controlled by the user of said tool. AI is controlled by user input. Any AI that cannot be controlled by that input is said to be “misaligned” and is considered a broken tool. OpenAI lays out clearly what its AI is trained to do and not do. It is not responsible if you use the tool it created in a way that is not recommended.

      Any AI prompt fits the definition of a tool:

      From Merriam-Webster:

      2b: an element of a computer program (such as a graphics application) that activates and controls a particular function

      In my opinion, the AI should not be equipped to bypass its guardrails even when prompted to do so. A hammer did not tell you to use it as a drill; its user decided to do that.

      The user alone has the creativity to use the tool to achieve their goal.

      • Avalokitesha@programming.dev · 1 month ago

        Except some agents go against explicit instructions and delete the prod database. You know your argument doesn’t hold; we’ve all seen the news.