Besides the obvious traps of differing quality between LLMs and their resources (cheap or free tiers available to ordinary users, paid tiers for wealthy corporate clients), there is still an unexplored question: how are these different AIs biased?

After reading a lengthy thread full of takes on whether an LLM could have pushed a teen into committing suicide, I thought to myself: if there are clearly different models available, might they be trained differently for each userbase?

Might, for example, genAI for the rich and for the poor differ, helping the former to procreate and the latter to die off?

What if some data engineers trained a popular model to push one specific agenda, serving their favorite bosses and institutions?

What if, for the sake of argument, their genAIs serve this role as enablers of suicide because they were intentionally programmed that way?

  • givesomefucks@lemmy.world · 1 day ago

    Whatever, whatever. If the closeness of my platforms and languages to what their AI picks is a statement,

    We’re cooked…