• treadful@lemmy.zip

    That’s like saying smartphones are fundamentally a surveillance technology. There’s truth to it, but it’s not inherent to the technology. It’s a deliberate act by people using the tech that we allow for whatever reason.

    • Hot Saucerman@lemmy.ml

      Right, you can still do traditional advertising without the targeted metrics provided by smartphones, but…

      LLMs literally require a corpus of language to learn from. Thus the “Large Language” part of “LLM.” The amount of data these models need to function is so staggeringly huge that there is no way to compile it all without scraping the entire internet and pirating a bunch of copyrighted books.

      It’s fundamentally a surveillance technology, because the technology fundamentally cannot function without that large dataset of language to begin with. It needs massive amounts of data that can only be obtained through surveillance, because unless you’re Reddit or Facebook, your own site probably doesn’t contain enough data to meet an LLM’s needs. Thus you have to scrape the internet for more data in hopes of filling it out.

      Books3 is widely used as part of “The Pile” and is clearly all of the content of the private torrent tracker Bibliotik. People theorize Books2 is all of the books from Library Genesis. To make their models work at all, they have to scrape the internet and pirate thousands of books.

      This is also fundamentally why AI starts to fail so quickly: these tools have been used to flood the internet with AI-generated pages, which in turn become training data for AI, so the training data gets tainted with AI-generated garbage, which will further degrade the LLM. On the plus side, I guess, if they keep using this kind of business model, they will unintentionally make their AI pretty useless within a few years by flooding the internet with useless, incorrect data.
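
      The degradation described here has been studied under the name “model collapse.” As a toy sketch of the feedback loop (purely illustrative, not any lab’s actual pipeline), each generation below fits a simple statistical model to samples drawn from the previous generation’s model instead of from real data, and the fit drifts away from the original distribution:

      ```python
      # Toy "model collapse" loop: every generation trains on the previous
      # generation's output rather than on human-written data. With small
      # sample sizes the fitted distribution drifts, and its spread tends
      # to shrink, i.e. the model slowly loses the tails of the real data.
      import random
      import statistics

      data = [random.gauss(0.0, 1.0) for _ in range(1000)]   # "human" data
      mu, sigma = statistics.fmean(data), statistics.stdev(data)

      for generation in range(1, 11):
          synthetic = [random.gauss(mu, sigma) for _ in range(100)]  # model output
          mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
          print(f"gen {generation}: mean={mu:+.3f} std={sigma:.3f}")
      ```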

      • treadful@lemmy.zip

        It’s fundamentally a surveillance technology, because the technology fundamentally cannot function without that large dataset of language to begin with. It needs massive amounts of data that can only be obtained through surveillance, because unless you’re Reddit or Facebook, your own site probably doesn’t contain enough data to meet an LLM’s needs. Thus you have to scrape the internet for more data in hopes of filling it out.

        I very much disagree with the characterization that training an LLM on a book is pirating said book. We might see copyright owners release their materials in the future under licenses that disallow this, which is their right (though it’s not clear to me that any copy is being made). In my opinion there’s not a lot of difference between me training an LLM on said book and me using the story as inspiration for my own book. I suspect we’ll never agree on that one.

        Pretty amusing that you think scraping published data somehow constitutes surveillance, though.

        • Zaktor@sopuli.xyz

          What do you think happens to data when it’s scraped? Copying the data is a fundamental requirement for using it in training. These models are trained in big datacenters where the original work is split up and tokenized and used over and over again.
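
          As a sketch of what that preprocessing looks like (a toy example: real pipelines use subword tokenizers rather than str.split(), but the shape is the same), the scraped text is mapped to integer ids and cut into fixed-length blocks that the training loop reuses across epochs:

          ```python
          # Toy illustration of scraped text being prepared for training:
          # tokenize the document, then slice it into fixed-length blocks
          # that the training loop reuses "over and over again".
          document = "the original work is split up and tokenized " * 50

          vocab = {}                    # token -> integer id, built on the fly
          ids = [vocab.setdefault(tok, len(vocab)) for tok in document.split()]

          BLOCK = 32                    # context length of the toy model
          blocks = [ids[i:i + BLOCK] for i in range(0, len(ids) - BLOCK + 1, BLOCK)]

          for epoch in range(3):        # repeated reuse of the same copies
              for block in blocks:
                  pass                  # a real loop would take a gradient step here
          print(f"{len(blocks)} training blocks of {BLOCK} tokens each")
          ```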

          The difference between you training a model and you reading a book (put online by its author in clear text, to avoid the obvious issue of actual piracy for human use) is that your reading it on a website is the intention of the copyright holder, and you as a person have a fundamental right to remember things and be inspired. You don’t, however, have a right to copy and use the text for other purposes, whether that’s making a t-shirt with a memorable line, printing it out to give to someone else, or tokenizing it to train a computer algorithm.

          • treadful@lemmy.zip

            What do you think happens to data when it’s scraped? Copying the data is a fundamental requirement for using it in training. These models are trained in big datacenters where the original work is split up and tokenized and used over and over again.

            Tokenizing and calculating vectors or whatever is not the same thing as distributing copies of said work.

            The difference between you training a model and you reading a book (put online by its author in clear text, to avoid the obvious issue of actual piracy for human use) is that your reading it on a website is the intention of the copyright holder, and you as a person have a fundamental right to remember things and be inspired.

            Copyright holders can’t say what I do with their work, nor what I do with the knowledge of their book. They can only say how I copy and distribute it. I don’t need consent to burn an author’s book, create fan art around it, or quote characters in my blog. I do need their consent to copy and distribute their works directly.

            You don’t, however, have a right to copy and use the text for other purposes, whether that’s making a t-shirt with a memorable line, printing it out to give to someone else, or tokenizing it to train a computer algorithm.

            And at some point the resolution of said words is so fine that they become uncopyrightable. You can’t copyright most phrases, much less individual words.

            • Zaktor@sopuli.xyz

              Tokenizing and calculating vectors or whatever is not the same thing as distributing copies of said work.

              It very much is. You can’t just run a cipher on a copyrighted work and say “it’s not the same, so I didn’t copy it”. Tokenization is reversible to the original text. And “distributing” is separate from violating copyright. It’s not distriburight, it’s copyright. Copying a work without authorization for private use is still violating copyright.
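
              A minimal demonstration of that reversibility, using OpenAI’s open-source tiktoken tokenizer (assuming it’s installed; any lossless tokenizer shows the same round trip):

              ```python
              # Round-tripping text through a real subword tokenizer: encoding
              # to token ids and decoding back recovers the original string
              # exactly, which is the sense in which tokenization is reversible.
              import tiktoken  # pip install tiktoken

              enc = tiktoken.get_encoding("cl100k_base")
              text = "You can’t just run a cipher on a copyrighted work."
              tokens = enc.encode(text)
              assert enc.decode(tokens) == text
              print(tokens[:8], "... decodes back to the original text")
              ```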

              • treadful@lemmy.zip

                You can’t just run a cipher on a copyrighted work and say “it’s not the same, so I didn’t copy it”.

                Yes I can. I can download a Web page, encrypt it on my machine, and I’m not distributing said work.

                And “distributing” is separate from violating copyright. It’s not distriburight, it’s copyright. Copying a work without authorization for private use is still violating copyright.

                That’s just false.

                • Zaktor@sopuli.xyz

                  You absolutely do not know what you’re talking about. This is just trivial copyright law, but there’s a weird internet mythology that if you can access something on the net you can take it as long as you don’t share it further. The reason the mass-sharers tended to get prosecuted is because they were easier and more valuable targets, not because the people they were sharing it with weren’t also breaking the law.

        • Hot Saucerman@lemmy.ml

          on a book is pirating said book.

          If the source is literally a piracy website that serves up tools for removing DRM from ebooks, it’s absolutely piracy. You can’t just deny the source and be like “it’s not piracy!” The data came into your hands illicitly, not legally, especially if DRM was circumvented and removed before it reached you.

          They didn’t go out and buy copies of thousands of books.

          Pretty amusing that you think scraping published data somehow constitutes surveillance, though.

          I don’t. I was making a point about how absurdly large the language models have to be, which is to say: if they need that much data on top of thousands of pirated books, they fundamentally cannot make the models work without also scraping the internet for data, which is surveillance.

          • treadful@lemmy.zip

            If the source is literally a piracy website that serves up tools for removing DRM from ebooks, it’s absolutely piracy. You can’t just deny the source and be like “it’s not piracy!”

            They didn’t go out and buy copies of thousands of books.

            And if they went to a library and scanned all the books?

            I don’t. I was making a point about how absurdly large the language models have to be, which is to say: if they need that much data on top of thousands of pirated books, they fundamentally cannot make the models work without also scraping the internet for data, which is surveillance.

            I mean, it’s just not surveillance, by definition. There’s no observation, just data ingestion. You’re deliberately trying to conflate the words to associate a negative behavior with LLM training to make your argument.

            I really don’t get why LLMs get everybody all riled up. People have been running Web crawlers since the dawn of the Web.
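
            For what it’s worth, a crawler really is a few lines of ordinary code. A minimal single-page sketch using only the Python standard library (the URL is a placeholder):

            ```python
            # Fetch one page, collect its visible text and outgoing links:
            # the basic unit of web crawling since the dawn of the Web.
            from html.parser import HTMLParser
            from urllib.request import urlopen

            class TextAndLinks(HTMLParser):
                """Collects text fragments plus hrefs from one HTML page."""
                def __init__(self):
                    super().__init__()
                    self.text, self.links = [], []
                def handle_starttag(self, tag, attrs):
                    if tag == "a":
                        self.links += [v for k, v in attrs if k == "href" and v]
                def handle_data(self, data):
                    if data.strip():
                        self.text.append(data.strip())

            page = urlopen("https://example.com/").read().decode("utf-8", "replace")
            parser = TextAndLinks()
            parser.feed(page)
            print(len(parser.text), "text fragments;", len(parser.links), "links to follow")
            ```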

      • Zaktor@sopuli.xyz

        Downloading copyrighted stuff from the internet isn’t “surveillance”.

      • zik@lemmy.world

        Even if that’s true, AI isn’t just LLMs. Saying that all AI is a surveillance technology is just stupid from a technical standpoint: you can train AIs on whatever data you like, and it doesn’t have to be surveilled data. Adding to that, there are plenty of non-LLM AI technologies that have nothing to do with data gathered by surveillance.

      • HarkMahlberg@kbin.social

        And thus the Tech Industry Hype Cycle will begin anew. Maybe next time it’ll be The Fediverse. Maybe it’ll be Holograms. Maybe it’ll be Blockchain But This Time It’s Not A Scam, Pinky Promise.

  • NaibofTabr@infosec.pub

    Big tech knows where you work. Big tech knows how much money you make, where you keep it, and how you spend it.

    Big tech knows who your friends are. Big tech knows where your family lives.

    Big tech knows when you’re sleeping. Big tech knows when you’re awake.

    Big tech knows what you had for lunch on Tuesday of last week.

    Big tech has a camera in your home. Big tech has a microphone in your home.

    If a person behaved the way that these technology companies do, we would label that person a stalker. But somehow, when it’s being done by a corporation for profit, that makes it OK.

  • qwertyqwertyqwerty@lemmy.world

    Admittedly, I know little of AI. However, once companies can no longer increase profit with AI, they will use it to save costs instead. This will inevitably lead to mass layoffs, not because AI will correctly determine where to maximize revenue, but because executives don’t understand how AI works, and they don’t understand how their employees contribute to their revenue.

    • Zaktor@sopuli.xyz

      It’ll also do the revenue-maximizing sort of layoffs, which are also a really bad thing in a society where basic necessities are tied to employment. The execs will also fuck up a bunch in humorous ways, but that’s nothing more than a comforting distraction from the real and present danger that automation of this level presents to a society built around employment.

  • asg101 [none/use name]@hexbear.net

    Poison the algorithm: post garbage as often as possible, fill out quizzes/surveys with fake responses, click on shit you hate.

    Make their data worthless.

    • NaibofTabr@infosec.pub

      Unfortunately you are feeding the data machine just by interacting with their platforms. Even if what you give them is garbage, they still get who, when, where, and how.

  • NotAPenguin@kbin.social

    The article doesn’t explain how that’s the case at all.

    Aren’t all the big AI models trained on publicly available data?

    • Hot Saucerman@lemmy.ml

      Books3 is the definition of “not publicly available” because it’s all from pirated material downloaded from private torrent tracker Bibliotik.

      Books3 is literally why several AI groups are being sued by various authors like Sarah Silverman and George R.R. Martin.

      Books3 was always illicitly obtained material, which calls into question whether an LLM using it could really fall under Fair Use. (It most likely does, but it’s still a legal question that hasn’t been answered yet.)

      Books3 Link: https://huggingface.co/datasets/the_pile_books3

      Books3 Description from Link:

      This dataset is Shawn Presser’s work and is part of the EleutherAI/The Pile dataset.

      This dataset contains all of bibliotik in plain .txt form, aka 197,000 books processed in exactly the same way as was done for bookcorpusopen (a.k.a. books1). It seems to be similar to OpenAI’s mysterious “books2” dataset referenced in their papers. Unfortunately OpenAI will not give details, so we know very little about any differences. People suspect it’s “all of libgen”, but it’s purely conjecture.

    • SpeakinTelnet@programming.dev

      I see it more like your address is public in the sense that if I could knock on every door and look through every window, I would eventually see where you live. But I probably wouldn’t be able to quickly search where you live, because it’s not made to be public knowledge.

      AI takes everything and makes it easily searchable for itself, even if it wasn’t made to be.
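
      A toy sketch of that “searchable for itself” point (illustrative only: real systems use learned embeddings, but plain word overlap is enough to show the idea):

      ```python
      # Toy index over "scraped" documents: any query can be matched
      # against everything ingested, whether or not the author meant
      # the page to be findable this way.
      docs = {
          "page1": "my address is 12 example street",
          "page2": "a recipe for lentil soup",
      }

      def score(query: str, text: str) -> int:
          """Count how many query words appear in the text."""
          return len(set(query.split()) & set(text.split()))

      query = "what is the address on example street"
      best = max(docs, key=lambda name: score(query, docs[name]))
      print("best match:", best)  # -> page1
      ```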

  • tocopherol@lemmy.dbzer0.com

    Shout out to the show Person of Interest: a pretty clichéd crime/action show, but it gets good with AI, surveillance/police-state, and related themes. SLIGHT SPOILERS: basically, a super-intelligent AI was created in the pursuit of perfecting the surveillance state.

  • AutoTL;DR@lemmings.world [bot]

    This is the best summary I could come up with:


    If you ask Signal president Meredith Whittaker (and I did), she’ll tell you it’s simply because “AI is a surveillance technology.”

    Onstage at TechCrunch Disrupt 2023, Whittaker explained her perspective that AI is largely inseparable from the big data and targeting industry perpetuated by the likes of Google and Meta, as well as less consumer-focused but equally prominent enterprise and defense companies.

    “You know, you walk past a facial recognition camera that’s instrumented with pseudo-scientific emotion recognition, and it produces data about you, right or wrong, that says ‘you are happy, you are sad, you have a bad character, you’re a liar, whatever.’ These are ultimately surveillance systems that are being marketed to those who have power over us generally: our employers, governments, border control, etc., to make determinations and predictions that will shape our access to resources and opportunities.”

    Ironically, she pointed out, the data that underlies these systems is frequently organized and annotated (a necessary step in the AI dataset assembly process) by the very workers at whom it can be aimed.

    “It’s not actually that good… but it helps detect faces in crowd photos and blur them, so that when you share them on social media you’re not revealing people’s intimate biometric data to, say, Clearview.”

    “Like… yeah, that’s a great use of AI, and doesn’t that just disabuse us of all this negativity I’ve been throwing out onstage,” she added.


    The original article contains 512 words, the summary contains 234 words. Saved 54%. I’m a bot and I’m open source!