Just listened to Naomi Brockwell talk about how AI is basically the perfect surveillance tool now.
Her take is very interesting: what if we could actually use AI against that?
Like instead of trying to stay hidden (which honestly feels impossible these days), what if AI could generate tons of fake, realistic data about us? Flood the system with so much artificial nonsense that our real profiles basically disappear in the noise.
Imagine thousands of AI versions of me browsing random sites, faking interests, triggering ads, making fake patterns. Wouldn’t that mess with the profiling systems?
How could this be achieved?
This is like chaff, and I think it would work. But you'd have to accept that whatever patterns the decoys generate get attributed to you — as far as the profilers are concerned, "you would be doing" all of it.
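A minimal sketch of what the chaff idea could look like: a generator that schedules fake page visits with plausible gaps and dwell times. The topic and site lists here are made-up placeholders, and the part that would actually fetch pages (a headless browser, etc.) is left out — tools like TrackMeNot and AdNauseam do the real version of this.

```python
import random
import time

# Placeholder decoy pools -- a real tool would pull these from live
# search trends or ad categories so the noise looks current.
TOPICS = ["gardening", "crypto", "marathon training", "vintage synths",
          "bread baking", "astronomy", "fantasy football"]
SITES = ["https://example-blog.net", "https://example-shop.com",
         "https://example-news.org"]

def make_decoy_session(n_visits=10, seed=None):
    """Build a schedule of fake visits with human-ish timing.

    Returns a list of dicts; actually performing the visits
    (headless browser, random user agents, etc.) is out of scope.
    """
    rng = random.Random(seed)
    now = time.time()
    session = []
    for _ in range(n_visits):
        now += rng.uniform(20, 300)  # gap between visits, in seconds
        session.append({
            "ts": now,
            "url": rng.choice(SITES),
            "query": rng.choice(TOPICS),
            "dwell_s": round(rng.uniform(5, 120), 1),  # fake time-on-page
        })
    return session

for visit in make_decoy_session(n_visits=5, seed=42):
    print(visit["query"], "->", visit["url"])
```

The randomized gaps matter: requests fired at perfectly regular intervals are trivially filtered out as bot traffic, which defeats the whole point.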
I think there are other ways that AI can be used for privacy.
For example, did you know that you can be identified by how you type/speak online? What if you filtered everything you said through an LLM first, normalizing it? That takes away a fingerprinting option. A pretty small local model could do it, running on a modest desktop…
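Here's a rough sketch of that filter, assuming a local Ollama-style server on `localhost:11434` (the endpoint, model name, and prompt wording are all my assumptions — swap in whatever small model you actually run). Everything stays on your machine.

```python
import json
import urllib.request

# Prompt wording is illustrative -- tune it so meaning survives
# while phrasing quirks get flattened out.
NORMALIZE_PROMPT = (
    "Rewrite the following message in plain, neutral English. "
    "Keep the meaning exactly; remove slang, idiosyncratic "
    "punctuation, and distinctive phrasing:\n\n{text}"
)

def build_request(text, model="llama3.2"):
    """Build the JSON payload for a local /api/generate call.

    Model name is an assumption; "stream": False asks for a single
    complete response instead of a token stream.
    """
    return {
        "model": model,
        "prompt": NORMALIZE_PROMPT.format(text=text),
        "stream": False,
    }

def normalize(text, endpoint="http://localhost:11434/api/generate"):
    """Send text through the local model and return the rewrite.

    Requires a local LLM server to be running at `endpoint`.
    """
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(build_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

One caveat: the normalized output has its own recognizable "LLM voice," so this trades an individual fingerprint for membership in a much larger crowd — which is exactly the goal.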
I really like this idea