• @The_Monocle_Debacle@lemmygrad.ml · 8 · 2 years ago

    Most of these so-called “AI” implementations are just self-optimizing algorithms trained on incomplete or biased data for one very specific problem. A lot of them can’t even correctly handle inputs from the same problem space that weren’t part of their training data.
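    That failure mode can be sketched with a toy example (hypothetical data, not any real “AI” system): a model fit on a narrow slice of a problem space can look fine in-distribution yet be way off on inputs from the same space that lie outside its training data.

    ```python
    # Toy sketch: "training data" is y = x^2, but sampled only on [0, 1].
    train_x = [i / 100 for i in range(101)]
    train_y = [x * x for x in train_x]

    # Fit a straight line by ordinary least squares (a deliberately simple model).
    n = len(train_x)
    mean_x = sum(train_x) / n
    mean_y = sum(train_y) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(train_x, train_y))
             / sum((x - mean_x) ** 2 for x in train_x))
    intercept = mean_y - slope * mean_x

    def predict(x):
        return slope * x + intercept

    # In-distribution input: the error is small.
    print(abs(predict(0.5) - 0.25))
    # Same problem space (still y = x^2), but outside the training range:
    # the prediction is wildly wrong.
    print(abs(predict(10.0) - 100.0))
    ```

    The point isn’t the line fit itself; it’s that nothing in the training procedure warns you the model is only valid on the slice of the space it actually saw.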

    • Amicese · 3 · 2 years ago

      Oh yeah, I see what you mean. I struggle with discerning them, though.

      I worry that the training data for deepfakes is suspiciously normative: there seem to be no neurodiverse, queer, or physically disabled people in those training sets.

      • @southerntofu@lemmy.ml · 3 · 2 years ago

        Well, first, deepfakes need to die. It’s a dangerous tech that should not exist at all and does not need any more research.

        To be fair, I haven’t dug into deepfake models, but I assume you would train them on the specific person you’re trying to deepfake. For basic video stuff a pre-trained model may be OK, but for audio there’s no way you can get away with it ;)

        • poVoq · 0 · 2 years ago

          There are also ML models built specifically for audio that are pretty convincing at replicating a specific person’s voice.