And this will also affect non-AI imitations of voices?

  • Scubus@sh.itjust.works
    13 hours ago

    Copyrighting voices to defeat AI would achieve nothing. Modern AIs can be overtrained to the point where they strongly resemble their training data, but that is a problem that will be fixed within the next 5 years, which is way earlier than we will see any legislation regarding AI if our geriatric government is anything to go by. After that, AI will generate its “own” content that will be legally protected as its intellectual property.

    I base this on the fact that all human-made media is inspired by previous media. Fundamentally, once it stops directly plagiarising, there is no legal distinction between what a human is doing and what an AI is doing, unless we want to come up with a legal classification of “human” that explicitly excludes AI.

    That opens up a whole new can of worms, though. If you define human as having human DNA, does that mean the three babies that have been genetically altered are not human? Or are they, because they contain at least some human DNA? Does that mean I can give my AI a vestigial organ and now it's legally protected? Does it have to run on a brain? What is a brain? If I duplicate the neural connections in a brain with MOSFETs, down to every single connection, that is indisputably a human intelligence running on possibly non-human hardware, depending on the letter of the law. Or is it a human intelligence? It would react exactly the same way as its organic counterpart, down to having the same memories and emotions. Does its non-biological hardware preclude it from being human? Does a pacemaker? Or Neuralink?

    There's a lot to be worked out here, but it seems to me much less problematic to target the people who want to misuse AI rather than targeting the tool itself.

    • JustARaccoon@lemmy.world
      12 hours ago

      AI models should ship with a database of all the data they were trained on, where it came from, and under what license permission to train on it was obtained. That way you don't get into the issue of comparing outputs to real-life work, because the issue isn't with the output; it's with the use of the data in the first place without permission to train the model. And no, it's not similar to humans learning: the scale and accuracy of reproduction from software is so much higher than from a human that it's not even a comparison, before we even get to what it means to let humans grow as people by enriching themselves vs. what it means for a corporate-owned churner.
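
      A minimal sketch of what one entry in such a provenance database could look like, purely as an illustration of the idea in the comment above; the field names (`source_url`, `license_id`, `content_hash`) and the small license allow-list are assumptions, not any existing standard or library API:

      ```python
      from dataclasses import dataclass
      from datetime import date
      from hashlib import sha256

      # Hypothetical allow-list: licenses this sketch treats as granting training permission.
      PERMITTED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "explicit-consent"}

      @dataclass(frozen=True)
      class ProvenanceRecord:
          """One training item: where it came from and under what terms it was collected."""
          source_url: str
          license_id: str
          collected_on: date
          content_hash: str  # hash of the raw item, so the entry can be audited later

          def training_permitted(self) -> bool:
              return self.license_id in PERMITTED_LICENSES

      def record_for(raw_bytes: bytes, source_url: str, license_id: str) -> ProvenanceRecord:
          """Build a provenance entry for a single piece of training data."""
          return ProvenanceRecord(
              source_url=source_url,
              license_id=license_id,
              collected_on=date.today(),
              content_hash=sha256(raw_bytes).hexdigest(),
          )

      # Example: only items whose license is on the allow-list go into the training set.
      dataset = [
          record_for(b"narrated audio clip", "https://example.org/clip1", "CC-BY-4.0"),
          record_for(b"scraped voice sample", "https://example.org/clip2", "all-rights-reserved"),
      ]
      trainable = [r for r in dataset if r.training_permitted()]
      print(f"{len(trainable)} of {len(dataset)} items cleared for training")
      ```

      The point of the sketch is that the permission check happens per item at data-collection time, before training, which is exactly where the comment locates the problem rather than in the model's outputs.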