Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.

Her take is very interesting: what if we could actually use AI against that?

Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.

Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?

How could this be achieved?
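To make the question concrete, here is a minimal sketch of what generating that decoy activity could look like. Everything in it is invented for illustration – the personas, the example domains, and the event format are assumptions, not a description of any real tool:

```python
import random

# Invented example personas, each tied to plausible-looking interest sites.
PERSONAS = {
    "gardener": ["seeds.example", "compost.example", "greenhouse.example"],
    "gamer": ["gpu-deals.example", "esports.example", "speedruns.example"],
    "baker": ["flour.example", "sourdough.example", "ovens.example"],
}

def decoy_events(n, rng=None):
    """Return n fake (persona, site) visit events, mixed across personas."""
    rng = rng or random.Random()
    events = []
    for _ in range(n):
        persona = rng.choice(sorted(PERSONAS))
        site = rng.choice(PERSONAS[persona])
        events.append((persona, site))
    return events
```

A real system would also have to drive actual browser sessions with human-like timing; this only produces the schedule of fake visits.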

  • relic4322@lemmy.ml · ↑1 · 44 minutes ago

    Ok, got another one for ya, based on some comments below. You have all the usual add-ons to block ads and such, but you create a sock-puppet identity and use AI to “click” ads in the background (stolen from a comment) that align with that identity. You don’t see the ads, but the traffic pattern supports the identity you are wearing.

    So rather than random, it’s aligned with a fake identity.
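    To sketch what “aligned with a fake identity” could look like in code – the persona, categories, and weights below are entirely made up:

```python
import random

# Invented persona: an "outdoor enthusiast" sock puppet. The categories and
# weights are assumptions for illustration, not from any real add-on.
IDENTITY_WEIGHTS = {
    "camping gear": 0.5,
    "hiking boots": 0.3,
    "kayaks": 0.15,
    "office chairs": 0.05,  # rare off-profile click, for realism
}

def pick_ad_to_click(visible_ads, rng=None):
    """Choose one ad from those on the page, biased toward the persona."""
    rng = rng or random.Random()
    weights = [IDENTITY_WEIGHTS.get(ad, 0.01) for ad in visible_ads]
    return rng.choices(visible_ads, weights=weights, k=1)[0]
```

    The small off-profile weight is there because a persona that only ever clicks on-brand ads would itself be a detectable pattern.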

  • stupid_asshole69 [none/use name]@hexbear.net · ↑4 · 4 hours ago

    This isn’t a very smart idea.

    People trying to obfuscate their actions would suddenly have massive associated datasets of actions to sift through and it would be trivial to distinguish between the browsing behaviors of a person and a bot.

    Someone else said this is like chaff or flare anti-missile defense, and that’s a good analogy. Anti-missile defenses like that are deployed when the target recognizes a danger and sees an opportunity to confuse that danger temporarily. They’re used in conjunction with maneuvering and other flight techniques to maximize the chance of avoiding certain death, not constantly from the moment the operator comes in contact with an opponent.

    On a more philosophical tip, the master’s tools cannot be turned against him.
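    The “trivial to distinguish” claim can be illustrated with a toy heuristic: scripted sessions often show suspiciously regular timing between events, while human ones do not. The 0.1 threshold and the sample gaps below are invented; real detectors use far richer signals (mouse paths, fingerprints, navigation graphs):

```python
import statistics

def looks_scripted(gaps_seconds):
    """Flag a session whose inter-event time gaps are suspiciously uniform.

    Toy heuristic: a coefficient of variation under 0.1 is treated as
    bot-like. This is only an illustration of the idea.
    """
    if len(gaps_seconds) < 5:
        return False  # not enough evidence either way
    cv = statistics.stdev(gaps_seconds) / statistics.mean(gaps_seconds)
    return cv < 0.1

human_gaps = [1.2, 4.7, 0.8, 12.3, 2.1, 6.6]  # irregular, human-like
bot_gaps = [2.0, 2.1, 1.9, 2.0, 2.1, 2.0]     # metronome-like
```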

      • stupid_asshole69 [none/use name]@hexbear.net · ↑1 · 29 minutes ago

        :spray-bottle:

        No, you can’t.

        You are not the hero, effortlessly weaving down the highway between minivans on your 1300cc motorcycle, katana strapped across your back, using dual handlebar mounted twiddler boards to hack the multiverse.

        If AI-driven agentic systems were used to obfuscate a person’s interactions online, then the fact that they were using those systems would become incredibly obvious and would provide a trove of information that could easily be used to locate and document what that person was doing.

        But let’s assume what the op did worked, and no one could tell the difference.

        That would be worse! Suddenly there’s hundreds of thousands of data points that could be linked to you and all that’s needed for a warrant are two or three that could be interpreted as probable cause of a crime!

        You thought you were helping yourself out by turning the fuzzer on before reading Trot pamphlets hosted on marxists.org, but now they have an expressed interest in drain cleaner and glitter bombs, and best case scenario you gotta adopt a new pit mix from the humane society.

  • Ardens@lemmy.ml · ↑4 · 4 hours ago

    So, she is talking about an AI war? Where those who don’t want us to be private control the weapons? Anyone else see a problem with that logic?

    Thousands of “you” browsing different sites will use an obscene amount of power and bandwidth. Imagine a million people doing that, let alone a billion… That’s just stupid in all kinds of ways.

  • moseschrute@lemmy.ml · ↑87 ↓2 · edited · 8 hours ago

    I feel like I woke up in the stupidest timeline: climate change is about to kill us, we stupidly decide to 10x our power needs by shoving LLMs down everyone’s throats, and the only way to stay private is to 10x our personal LLM usage by generating tons of noise about ourselves. So now we’re 100x-ing everyone’s power usage and we’re going to die even sooner.

    I think your idea is interesting – I was thinking the same thing a while back – but how tf did we get here.

    • octobob@lemmy.ml · ↑3 ↓2 · edited · 5 hours ago

      Yeah agreed. What’s going on in my state of Pennsylvania is they’re reopening the Three Mile Island nuclear plant out near Harrisburg for the sole reason of powering Microsoft’s AI data centers. This will be Unit 1 which was closed in 2019. Unit 2 was the one that was permanently closed after the meltdown in 1979.

      I’m all for nuclear power. I think it’s our best option for an alternative energy source. But the only reason they’re opening the plant again is that our grid can’t keep up with AI. I believe the data centers are the only thing the nuke plant will power.

      I’ve also seen the scale of things in my work in terms of power demands. I’m an industrial electrical technician, and part of our business is the control panels for cooling the server racks in Amazon data centers. They just keep buying more and more of them, projected until at least 2035 right now. All these big tech companies are totally revamping everything for AI. Where before a typical rack section might have drawn, say, 1,000 watts, now it’s more like 10,000 watts. Again, just for AI.

      • moseschrute@lemmy.ml · ↑1 ↓2 · 3 hours ago

        Totally agree nuclear is a great tool, but it’s totally being used for the wrong purpose here. Use those power plants to solve our existing energy crisis before you create an even bigger one.

    • blargh513@sh.itjust.works · ↑2 · 5 hours ago

      There are AIs that can detect the use of AI. This is a losing strategy as we burn resources playing cat and mouse.

      As with all things, greed is at the root of this problem. Until privacy has any legislative teeth, it will continue to be a notion for the few, and an elusive one at that.

  • fubbernuckin@lemmy.dbzer0.com · ↑8 · edited · 5 hours ago

    I don’t know if there’s a clean way to do this right now, but I’d love to see a software project dedicated to doing this. Once a data set is poisoned it becomes very difficult to un-poison. The companies would probably implement some semi-effective but heavy-handed means of defending against it if it actually affected them, but I’m all for making them pay for that arms race.

  • Ulrich@feddit.org · ↑5 · 5 hours ago

    I have been a longtime advocate of data poisoning, especially in the case of surveillance pricing. Unfortunately there don’t seem to be many tools for this outside of AdNauseam.

  • SendMePhotos@lemmy.world · ↑15 ↓1 · 7 hours ago

    Obscuration is what you’re thinking of, and it works with things like AdNauseam (a Firefox add-on that will click all ads in the background to obscure preference data). It’s a nice way to smear the data, and probably better to do sooner (while the data collection is in its infancy) rather than later (when the companies may be able to filter obscuration attempts).

    I like it. I am really not a fan of being profiled, collected, and categorized. I agree with others: I hate this timeline. It’s so uncanny.
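    A toy way to see the “smear” effect: once random decoy clicks are mixed in, the genuine interest’s share of the profile drops sharply. The labels and counts are invented:

```python
import random

def label_share(signals, label):
    """Fraction of profile events that carry a given interest label."""
    return signals.count(label) / len(signals)

# Invented example: 10 genuine clicks diluted by 40 random decoys.
true_clicks = ["privacy tools"] * 10
rng = random.Random(1)
decoys = [rng.choice(["golf", "crypto", "knitting", "boats"]) for _ in range(40)]

clean_share = label_share(true_clicks, "privacy tools")             # 1.0
smeared_share = label_share(true_clicks + decoys, "privacy tools")  # 0.2
```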

    • HelloRoot@lemy.lol · ↑2 · 6 hours ago

      I still don’t really understand AdNauseam. What is the difference in privacy compared to clicking on none of the ads?

      • SendMePhotos@lemmy.world · ↑1 · 6 hours ago

        Whatever data profile they already have on you can be obscured to make it useless, versus them steadily trickling in accurate data.

        Think of it like, um…

        Having a picture of you with a moderate amount of accurate notes, versus having a picture of you with so much irrelevant/inaccurate data that you can’t be certain of anything.

        • HelloRoot@lemy.lol · ↑4 · edited · 6 hours ago

          But the picture of me they have is: doesn’t click ads like all the other adblocker people (which is accurate)

          Why would I want to change it to: clicks ALL the ads like all the other AdNauseam people (which is also accurate)

          • JustinTheGM@ttrpg.network · ↑1 · 6 hours ago

            They build this picture from many other sources besides ad clicks, so the point is to obscure that. Problem is, if you’re only obscuring your ad click behavior, it should be relatively easy to filter out of the model.

            • HelloRoot@lemy.lol · ↑2 ↓1 · edited · 5 hours ago

              You are just moving the problem one step further, but that doesn’t change anything (if I am wrong please correct me).

              You say it is ad behaviour + other data points.

              So the picture of me they have is: [other data] + doesn’t click ads like all the other adblocker people (which is accurate)

              Why would I want to change it to: [other data] + clicks ALL the ads like all the other AdNauseam people (which is also accurate)

              How does using AdNauseam or not matter? I genuinely don’t get it. It’s the same [other data] in both cases. Whether you click on none of the ads or all of the ads can be detected.

              As a bonus, if AdNauseam clicked just a couple of random ads, they would have a wrong assumption of my ad-clicking behaviour.

              But if I click none of the ads, they have no accurate assumption of my ad-clicking behaviour either.

              Judging by incidents like the Cambridge Analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests (collected via your other browsing behaviour) from your ad-clicking behaviour when the two contradict each other, or when one of them seems random.

              • Ulrich@feddit.org · ↑3 · 5 hours ago

                [other data] + clicks ALL the ads like all the other adnauseum people

                AdNauseam does not click all the ads; it just clicks some of them, like normal people do. Only those ads are not relevant to your interests – they’re just random – so it obscures your online profile by filling it with a bunch of random information.

                Judging by incidents like the cambridge analytica scandal, the algorithms that analyze the data are sophisticated enough to differentiate your true interests

                Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

                • HelloRoot@lemy.lol · ↑1 · edited · 3 hours ago

                  AdNauseam (a Firefox add-on that will click all ads in the background to obscure preference data)

                  is what the top-level comment said, so I went off that info. Thanks for explaining.

                  Huh? No one in the Cambridge Analytica scandal was poisoning their data with irrelevant information.

                  I didn’t mean it like that.

                  I meant it in an illustrative manner – the results of their mass tracking and psychological profiling were so dystopian that filtering out random false data seems trivial in comparison. I feel like a bachelor’s or master’s thesis would be enough to come up with a sufficiently precise method.

                  In comparison, it seems extremely complicated to algorithmically figure out exactly what customized lie you have to tell each individual to manipulate them into behaving a certain way. That probably needed a large team of smart people working together for many years.

                  But ofc I may be wrong. Cheers

  • edel@lemmy.ml · ↑4 · 5 hours ago

    First, Naomi and her team are doing fantastic work in security for the masses – easily top 5 worldwide!

    AI is capable, but we are still failing at programming it properly. Gosh, even well-funded companies are still doing a poor job of it (just look at the misplaced and ineffective ads we still get).

    What I want, and it is easy to do TODAY, is AI checking our FOSS. We use so much of it, and only a tiny, tiny minority of it gets any scrutiny. We need AI going through FOSS code looking for maliciousness now.

  • Ænima@feddit.online · ↑6 · 6 hours ago

    I did this with period trackers. I’m male and my wife and I would always chuckle when my period was about to start.

  • relic4322@lemmy.ml · ↑8 · 7 hours ago

    This is like chaff, and I think it would work. But you would have to deal with the fact that whatever patterns it generated, as far as anyone watching could tell, “you” would be doing them.

    I think there are other ways that AI can be used for privacy.

    For example, did you know that you can be identified by how you type/speak online? What if you filtered everything you said through an LLM first, normalizing it? That takes away a fingerprinting option. You could use a pretty small local LLM model that would run on a modest desktop…
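    A rule-based stand-in can show the shape of that idea. A real setup would pipe text through a small local model; the “quirk” list here is invented for illustration:

```python
import re

# Hypothetical stylometric "tells" to flatten before posting.
QUIRKS = [
    (r"\bgonna\b", "going to"),
    (r"\bkinda\b", "somewhat"),
    (r"!{2,}", "!"),     # collapse exclamation runs
    (r"\.{4,}", "..."),  # normalize long ellipsis runs
]

def normalize(text):
    """Rewrite idiosyncratic phrasing/punctuation into a neutral style."""
    for pattern, repl in QUIRKS:
        text = re.sub(pattern, repl, text)
    return text
```

    An LLM would catch far subtler fingerprints (word choice, sentence rhythm) than any fixed rule list can.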

  • wise_pancake@lemmy.ca · ↑9 ↓1 · 7 hours ago

    In a different direction: now is a good time to start looking at how local AI can liberate us from big tech.

    • dodgeflailimpose@lemmy.zip (OP) · ↑1 · 6 hours ago

      Local AI requires investment in local compute power, which sadly is not affordable for private users. We would need some entity that we can trust to host it. I am happy to pay for that.

  • a14o@feddit.org · ↑8 · 8 hours ago

    It’s a good idea in theory, but it’s a challenging concept to have to explain to immigration officials at the airport.

  • slackness@lemmy.ml · ↑3 · 6 hours ago

    You would be able to do this for a short while, but unless you can make an agent that’s indistinguishable from you, or you already have very bot-like traffic, they’d catch up pretty quickly. They aren’t going to just let a trillion-dollar industry die out because some bots are generating traffic.

  • DominusOfMegadeus@sh.itjust.works · ↑6 ↓3 · 8 hours ago

    It’s an interesting concept, but I’m not sure the payoff justifies the effort.

    Even with AI-generated noise, you’re still being tracked through logins, device fingerprints, and other signals. And in the process, you would probably end up degrading your own experience: getting irrelevant ads, broken recommendations, or tripping security systems.

    There’s also the environmental cost to consider. If enough people ran decoy traffic 24/7, the energy use could become significant. All for a strategy that platforms would likely adapt to pretty quickly.

    I get the appeal, but I wonder if the practical downsides outweigh the potential privacy gains.

    • fubbernuckin@lemmy.dbzer0.com · ↑2 · 4 hours ago

      Okay, but irrelevant ads are the dream. I’d prefer not to get recommendations at all, either. I’ll hear by word of mouth what’s worthwhile to watch, or I’ll look for it myself. Recommendations consistently muddy things up; they make all modern social media useless. I have no idea how people can put up with it.

      • edel@lemmy.ml · ↑2 · 5 hours ago

        My entire family has been ad-free for years… with the exception of podcasts. I am tempted to block those too (is there a way now?), but they’re still not too intrusive… and it is a way for me to keep connected to the ad world anyway. Now, the moment they abuse them here too… I’ll find a way to block those as well.

      • DominusOfMegadeus@sh.itjust.works · ↑2 · 6 hours ago

        I’m not, but OP would be if they started opening up their IP and fingerprints to anyone who wants them in order to inundate those parties with garbage data. Admittedly, I might be missing some clever part of their plan.