I am worried about the criminal malice of deepfakes. I’m not worried about deepfakes being difficult to identify (though that is a possibility), since context is important; rather, I’m worried about how governments will respond to malicious uses of deepfakes.

Will governments even attempt to reduce the destructive potential of deepfakes? I’m doubtful, given political corruption.


Deepfakes could be useful tools for people who have difficulty with neurotypical social communication, or could simply increase acceptance of non-neurotypical communication.

  • @AgreeableLandscape@lemmy.ml

    In general, fakes are getting better as the tech evolves, but countermeasures to spot fakes evolve alongside them; it’s a natural arms race.

    Deepfakes often use a type of AI called a Generative Adversarial Network (GAN). Oversimplified: when you want to make a deepfake, you train two networks against each other, a generator that creates fakes and a discriminator that tries to detect them. Each one’s failures are used to improve the other, so both the generation and the detection get better the longer the system runs, and the deepfake becomes more convincing. Usually, though, the networks are only tuned to that one specific instance: if you set out to create a deepfake that merges hypothetical people Jack and Jill, they can only make deepfakes of Jack and Jill, and not Romeo and Juliet. For Romeo and Juliet, you would have to start the process all over again. Keep in mind, though, that all the retraining is automated, so if you have the right hardware, you can churn out deepfakes of tons of people with very little human intervention. A minimal sketch of the training loop follows below.
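
    To make the adversarial setup concrete, here is a minimal toy sketch in PyTorch (my own illustration with made-up toy data, not an actual deepfake pipeline): a generator learns to mimic samples from a simple 1-D distribution, while a discriminator learns to tell real samples from generated ones, and each network’s loss drives the other to improve.

    ```python
    # Toy GAN sketch (hypothetical example): the generator mimics
    # samples from N(3, 1), which stand in for "real" images.
    import torch
    import torch.nn as nn

    latent_dim = 8

    # Generator: maps random noise to a fake "sample".
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    # Discriminator: outputs a logit scoring how "real" a sample looks.
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 1) + 3.0          # "real" data
        fake = G(torch.randn(64, latent_dim))    # generated data

        # Train the discriminator: label real as 1, fake as 0.
        d_loss = (loss_fn(D(real), torch.ones(64, 1))
                  + loss_fn(D(fake.detach()), torch.zeros(64, 1)))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # Train the generator: try to make the discriminator say 1.
        g_loss = loss_fn(D(fake), torch.ones(64, 1))
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()
    ```

    The two optimizers are the “pitted against each other” part: the discriminator’s loss falls when it catches fakes, and the generator’s loss falls when its fakes slip through, so improvement on either side pressures the other.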

    However, this also raises the question of how you develop a deepfake detector if you’re external to the process that created the fakes.