The fact that some of you are putting the blame on instance owners/moderators is just showing that you have about the same amount of brain rot as the people actually posting this vile trash
Right. This is a community effort, and it’s important we support our instances and figure out how to best keep them safe.
Honestly, my first thoughts were that reddit had probably funded some blackhats to sabotage shit because they’re still salty. Then, they could have it reported.
Honestly dude if you believe this is true you should speak with a therapist.
deleted by creator
Ignore these people telling you that you’re being too paranoid. I assumed the same about the series of DDoS attacks that lemmy.world experienced in the last few months. Reddit admins trying to undercut lemmy’s growing popularity “by any means necessary” is perfectly logical. DDoS followed by content attacks even follows Reddit’s own struggles over the years.
rise of uselessserver093? confirmed
I’m a bit confused, how does locking down a single community help?
Are the spammers really just focusing on one community instead of switching to the next after it gets banned?
I do hope there is an IP ban option, so someone can’t just use the same IP again to create an account on another instance and post CSAM from there. Obviously I do know about VPNs, but it makes it a tiny bit more difficult to spam in large amounts.
Most people don’t have static IP addresses, so banning their IP will only stop them temporarily. Then whoever gets that dynamic IP address next will be banned too. Then there’s CGNAT, where one IP address can have up to 128 people using it at once and the address changes even more frequently.
We’re talking about temporary bans here, which do work against spam. Private users do have dynamic IPs, but at home I think I’ve had the same IP for years. They don’t wildly switch them around.
On second thought, the IP probably isn’t federated, so unless there’s a common IP block list that instances subscribe to, it won’t work across instances.
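Just to illustrate what I mean by a temporary ban, something like this rough sketch (nothing Lemmy actually has today; the store, duration, and function names are all made up):

```python
import time

# Hypothetical in-memory store of temporary IP bans with expiry times.
# The idea: short-lived bans that age out before a dynamic IP is likely
# to be reassigned to an innocent user.
temp_bans: dict[str, float] = {}

BAN_DURATION = 24 * 60 * 60  # 24 hours, tune as needed

def ban_ip(ip: str) -> None:
    """Record a temporary ban for this IP."""
    temp_bans[ip] = time.time() + BAN_DURATION

def is_banned(ip: str) -> bool:
    """Check whether an IP is currently banned, dropping expired entries."""
    expiry = temp_bans.get(ip)
    if expiry is None:
        return False
    if time.time() > expiry:
        del temp_bans[ip]  # ban expired, so the next holder of this IP isn't punished
        return False
    return True
```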
Looks like some CSAM fuzzy hashing would go a long way to catch someone trying to submit that kind of content if each uploaded image is scanned.
https://blog.cloudflare.com/the-csam-scanning-tool/
Not saying to go with Cloudflare (just showing how the detection works overall), but Lemmy could have some kind of built-in detection system that grabs an updated hash table periodically and scans every upload against it.
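Roughly what I’m picturing, as a very rough sketch, using the open-source `imagehash` library as a stand-in for a PhotoDNA-style hash service (the blocklist file, threshold, and function names here are made up):

```python
# Rough sketch of hash-based scanning at upload time.
# Requires: pip install imagehash pillow
from PIL import Image
import imagehash

def load_blocklist(path: str) -> list[imagehash.ImageHash]:
    """Load known-bad perceptual hashes, one hex string per line."""
    with open(path) as f:
        return [imagehash.hex_to_hash(line.strip()) for line in f if line.strip()]

def is_flagged(image_path: str, blocklist: list[imagehash.ImageHash],
               max_distance: int = 5) -> bool:
    """True if the uploaded image is within `max_distance` bits (Hamming
    distance) of any hash on the blocklist."""
    h = imagehash.phash(Image.open(image_path))
    return any(h - bad <= max_distance for bad in blocklist)

# At upload time (pseudocode-ish):
# if is_flagged(tmp_upload_path, blocklist):
#     reject_upload_and_report()
```

A real deployment would use the actual industry hash sets rather than a homegrown list, but the matching step looks about like this.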
Not a bad idea. I was once working on a project that would support user-uploaded images and looked into PhotoDNA, but it was an incredible pain in the ass to get access to. I’m surprised that no one has realized this should just be free and available. Kind of gross that it’s put behind an application/paywall, imo. They’re just hashes and a library to generate the hashes; why shouldn’t that be open source and available through the NCMEC?
deleted by creator
They could tweak their images regardless. Security through obscurity is never a good solution.
I can understand the reporting requirement.
These comments so far stink, yall are something else.
OK, I’m going to take a minute away from the shit stirring and, speaking as an admin who’s had the misfortune of dealing with this, potentially provide some insight, so I can maybe shift this comment section into an actually meaningful discussion.
You can have your own opinions and feelings about lemmy.world, but this?
The only thing that could have prevented this is better moderation tools. And while a lot of the instance admins have been asking for this, it doesn’t seem to be on the developers’ roadmap for the time being. There are just two full-time developers on this project and they seem to have other priorities. No offense to them, but it doesn’t inspire much faith for the future of Lemmy.
This is correct, and most Lemmy admins likely agree. I don’t speak for anyone but myself, but I think it would be hard to find someone who disagreed. What happened today is the result of a catastrophic failure on Lemmy’s end, with issues that should have been addressed over a month ago being completely ignored. The Lemmy devs shared a roadmap during their AMA, and they were essentially more concerned with making shit go faster… that’s about it.
Okay, honest question: what mod tools are lacking? If there’s something needed, what is that thing or things?
I went over to Lemmy’s feature request page and couldn’t find anything massive in terms of moderation tool requests that would have been a surefire way to stop this particular event.
That said, there are over 400 open feature requests on Lemmy’s GitHub alone. I obviously couldn’t go through every single one. But coming from the kbin side, I’m just curious about our Lemmy brothers and sisters. It sounds dire, and I’m woefully underinformed on how bad it is.
There aren’t enough roles. There’s admin, moderator, and user, but it would be best to have tiers of user in between. When you file a report, it goes to four categories of user. Report a comment for violating a fun rule your community decided to implement (say, all post titles must contain “Jon Bois Rules!”)? That report goes to: the community moderators (good), the admins of the community’s host instance (bad), your own instance’s admins (bad), and the admins of the instance of the user who posted the “offending” post (bad).
Only admins can permanently remove illegal content. If a mod “removes” it, it still sits visible to everyone in the modlog, and for CSAM specifically that counts as distribution, which is prosecuted as a worse crime than possession. Federation with other instances is effectively binary: you either federate or you don’t; you can’t set traffic as unidirectional like you can on most other fediverse platforms. And the modlogs make it hard to parse who the moderator performing an action is acting on behalf of. Was it a community mod? An admin? Your admin?
There’s more but my phone is getting low on battery
Agreed. I don’t know what AutoMod did on Reddit, but if what mods need is a rule-configurable post remover, then I’d be happy to cobble together something in Python.
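Something like this is roughly what I have in mind, purely as a sketch. The endpoint names are from the v3 HTTP API as I remember it (0.18.x, where the JWT was still passed as an `auth` field; newer versions want an Authorization header instead, so double-check against your instance), and the instance, community, rules, and credentials are all made-up placeholders:

```python
# Very rough sketch of an AutoMod-style bot against the Lemmy HTTP API.
import re
import time
import requests

INSTANCE = "https://example-instance.tld"   # placeholder
COMMUNITY = "somecommunity"                 # placeholder
RULES = [re.compile(r"free crypto", re.I)]  # made-up title rules

def login(user: str, password: str) -> str:
    """Log in and return a JWT."""
    r = requests.post(f"{INSTANCE}/api/v3/user/login",
                      json={"username_or_email": user, "password": password})
    r.raise_for_status()
    return r.json()["jwt"]

def check_new_posts(jwt: str) -> None:
    """Fetch the newest posts in the community and remove any matching a rule."""
    r = requests.get(f"{INSTANCE}/api/v3/post/list",
                     params={"community_name": COMMUNITY, "sort": "New",
                             "limit": 20, "auth": jwt})
    r.raise_for_status()
    for item in r.json()["posts"]:
        post = item["post"]
        if any(rule.search(post["name"]) for rule in RULES):  # "name" = title
            requests.post(f"{INSTANCE}/api/v3/post/remove",
                          json={"post_id": post["id"], "removed": True,
                                "reason": "automod rule match", "auth": jwt})

if __name__ == "__main__":
    token = login("automod_bot", "hunter2")  # bot account needs mod rights
    while True:
        check_new_posts(token)
        time.sleep(60)  # poll every minute
```

The bot account would need to actually be a mod of the community for the remove call to succeed, and removal still has the modlog-visibility problem described above, so this only covers the spam case, not the CSAM case.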
There’s this bot that is used in a couple of communities on feddit.de:
Here are some of the things Beehaw admins have been asking for on the moderation front since June: https://beehaw.org/comment/397674
Got a link to this AMA? Couldn’t find it.
I agree with @Cube6392@beehaw.org: if mod tools (one of the reasons for the Reddit API protests in the first place) aren’t being prioritized, a hard fork of Lemmy will be inevitable. I know the Lemmy devs are known for being strangely hardheaded about certain issues.
They have shifted gears recently and been pretty receptive to this major critique. Things are going in a much better direction now that 2 months have passed. If I can find the AMA I will link you.
As an admin, how do kbin moderation tools compare?
Also does lemmy.world have the spare cash to offer cash for features?
Kbin moderation tools are worse. And potentially, yes. I guess a bug bounty could be started up.
I don’t know this for sure, but I have a feeling that a hard fork is in Lemmy’s future. I don’t want to get super into it, but programming is a form of communication. The features you bake into a platform reflect the messages you want to propagate on that platform. The Lemmy devs’ vision for what the platform should be might not reflect what most of us think it should be. The moderation tools might not be a focus for a while, even if most of us view them as the greatest need.