I read that as including human interaction as part of the pain point. They already offer bounties, so they’re doing some money management as it is, but the human element becomes very different when you want up-front money from EVERYONE. When an actual human’s report is rejected, that human will resent getting ‘robbed’. It is much easier to get people to goof around for free than to charge THEM to do work for YOU. You might offer a refund on the charge later, but you’ll lose a ton of testers as soon as they have to pay.
That said, the blog’s link to sample AI slop bugs immediately showed how much time humans are being forced to waste on bad reports. I’d burn out fast if I had to examine and reply about all those bogus reports.
These attacks do not have to be reliable to be successful. They only need to work often enough to be cost-effective, and the cost of LLM text generation is cheap and falling. Their sophistication will rise. Link-spam will be augmented by personal posts, images, video, and more subtle, influencer-style recommendations—“Oh my god, you guys, this new electro plug is incredible.” Networks of bots will positively interact with one another, throwing up chaff for moderators. I would not be at all surprised if LLM spambots began contesting moderation decisions via email.
I don’t know how to run a community forum in this future. I do not have the time or emotional energy to screen out regular attacks by Large Language Models, with the knowledge that making the wrong decision costs a real human being their connection to a niche community.
Ouch. I’d never want to tell someone ‘Denied. I think you’re a bot.’ – but I really hate the number of bots already out there. I was fine with the occasional bots that would provide a wiki-link, and even the ones who would reply to movie quotes with their own quotes. Those were obvious, and you could easily opt to ignore/hide their accounts. As the article states, the particular bot here was also easy to spot once it got in the door, but the initial contact could easily have passed for human, and we can expect bots to seem ever more human as AI improves.
Bots are already driving policy decisions in government by promoting/demoting particular posts and writing their own comments that can redirect conversations. They make it look like there is broad consensus for the views they’re paid to promote, and at least some people will take that as a sign that the view is a valid option (ad populum).
Sometimes it feels like the internet is a crowd of bots all shouting at one another and stifling the humans trying to get a word in. The tricky part is that I WANT actual unpaid humans to tell me what they actually like, hate, do, and avoid. I WANT to hear actual stories from real humans. I don’t want to find out the ‘Am I the A-hole?’ story getting everyone so worked up was an ‘AI-hole’ experiment in manipulating emotions.
I wish I could offer some means of reliably distinguishing human from generated content, but the only solutions I’ve come up with require revealing real-world identities to sites, and that feels as awful as having bots. Otherwise, I imagine that identifying bots will be an ever-escalating war akin to the Search Engine Optimization wars.
The bits that hit me most:
It wasn’t just author profiles that the magazine repeatedly replaced. Each time an author was switched out, the posts they supposedly penned would be reattributed to the new persona, with no editor’s note explaining the change in byline.
authors at TheStreet with highly specific biographies detailing seemingly flesh-and-blood humans with specific areas of expertise — but … these fake writers are periodically wiped from existence and their articles reattributed to new names, with no disclosure about the use of AI.
We caught CNET and Bankrate, both owned by Red Ventures, publishing barely-disclosed AI content that was filled with factual mistakes and even plagiarism;
Infocom.
Zork, Hitchhiker’s Guide, Leather Goddesses of Phobos.
You are standing in an open field west of a white house, with a boarded front door.
There is a small mailbox here.
>
Wanna be the bigwig on your block? Have I got a product for YOU! Solar Panels! Make your house shine with newfangled tech that’ll be the envy of all your neighbors! Go solar, baby! Stick it to the electric company and make THEM pay for a change. Solar! You’ll be beaming.
ok, I suck at faking ai chat
“Godfather of AI” Geoff Hinton, in recent public talks, explains that one of the greatest risks is not that chatbots will become super-intelligent, but that they will generate text that is super-persuasive without being intelligent, in the manner of Donald Trump or Boris Johnson. In a world where evidence and logic are not respected in public debate, Hinton imagines that systems operating without evidence or logic could become our overlords by becoming superhumanly persuasive, imitating and supplanting the worst kinds of political leader.
Why is “superhumanly persuasive” always being done for stupid stuff and not, I don’t know, getting people to drive fuel-efficient cars instead of giant pickups and SUVs?
After decades of sci-fi/fantasy entertainment to prime us, the primal part of the human brain that reacts to in-group and out-group members suddenly changes in every person, and we start reflexively and unintentionally classifying all Earth life as friends and space/environmental threats as enemies.
Humanity immediately gets serious about climate change, CO2 reduction, and the like, but we also get way too zealous about deploying space lasers.
Kudos! I no longer have to deal with any of that, but I appreciate it’s been a problem and am glad you took action. Thank you!