• morto@piefed.social · 1 day ago

    While many believe that LLMs do not memorize much of their training data

    It’s sad that even researchers are using language that personifies LLMs…

    • chicken@lemmy.dbzer0.com · edited 18 hours ago

      What’s a better way to word it? I can’t think of another phrasing that’s as concise and communicates the idea as clearly. It seems like it would be harder in general to describe machines meant to emulate human thought without anthropomorphic analogies.

      • morto@piefed.social · 6 hours ago

        One possibility:

        While many believe that LLMs can’t output the training data, recent work shows that substantial amounts of copyrighted text can be extracted from open-weight models…

        Note that this neutral language makes it more apparent that it’s possible that LLMs are able to output the training data, since that’s what the model’s network is built upon. By using personifying language, we’re biasing people into thinking about LLMs as if they were human, and this will affect, for example, court decisions, like the ones related to copyright.

    • Grail@multiverse.soulism.net · 14 hours ago

      Right now the anti-genAI movement consists of AI rights advocates and AI intelligence skeptics. And I wish the skeptics would realise that personifying LLMs actually makes the corporations look more evil for enslaving AIs, which helps us with our goal of banning corporate AI. Y’all are obstructing that goal by insisting it’s ethical to force them to work for humans.

      • morto@piefed.social · 6 hours ago

        I don’t see people around me viewing the corporations as evil because they humanize the machines; I see the opposite: people talking to machines and taking their advice as if a human were speaking to them, which creates some form of affection for the models and the corporations. I also see court decisions being biased by attributing a human perspective to machines.

        Like really, if I hear someone else at my university talking about the conversation they had with their “friend”, I will go crazy.