“Having manual ways to remove csam” means almost nothing. All of Lemmy has a “manual way to remove csam”. “Closely working with NCMEC” can mean they just use the Cloudflare mechanism, which is just hash matching. The point is, it’s very easy for a malicious actor to upload CSAM and then report the instance to Patreon for it, without ever reporting it to the instance admins.
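For illustration, here’s a rough sketch of what pure hash matching amounts to, assuming a simple exact-hash blocklist (the names `known_hashes` and `is_known_csam` are made up; real systems use fuzzy/perceptual hashes, but the limitation is the same): it can only flag images whose hashes are already in the shared database, so novel material sails straight through.

```python
import hashlib

# Hypothetical blocklist of digests of already-known images (in practice
# supplied by NCMEC/industry partners, not maintained by the instance).
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_csam(image_bytes: bytes) -> bool:
    """Exact-hash check: only matches images already in the database.

    A brand-new image (or, for exact hashes, even a re-encoded or
    cropped copy) produces a different digest and is not flagged at all.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in known_hashes
```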
Can’t you always attempt uploads until they bypass arbitrary filters and then report-snipe on that?
How would a content-based filter prevent this if the malicious actor simply needs to upload correspondingly more images?
I think the sad reality is that the only escape here is scale. Once you’ve been hit by this attack and been cleared by the third parties, you have precedent for when it happens again and should hopefully be placed in a special bin for better treatment.
Scale means you’ll be fire-tested, and are more likely to receive sane treatment instead of the AI-support special.
There could be a warning when someone racks up multiple failed upload attempts, so admins are alerted before a report-snipe lands (rough sketch below).
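A minimal sketch of what that warning could look like, assuming a per-uploader counter and an arbitrary threshold (the `notify_admins` hook and the threshold of 3 are hypothetical, purely for illustration):

```python
from collections import defaultdict

BLOCKED_UPLOAD_THRESHOLD = 3          # arbitrary; tune for your instance
blocked_counts = defaultdict(int)     # uploader id -> rejected uploads seen

def notify_admins(message: str) -> None:
    # Placeholder: in practice this might write to the mod log or ping a chat room.
    print(f"[ADMIN ALERT] {message}")

def record_blocked_upload(uploader_id: str) -> None:
    """Count uploads rejected by the filter and warn admins on repeat offenders."""
    blocked_counts[uploader_id] += 1
    if blocked_counts[uploader_id] == BLOCKED_UPLOAD_THRESHOLD:
        notify_admins(
            f"user {uploader_id} has had {BLOCKED_UPLOAD_THRESHOLD} uploads "
            "rejected by the filter; possible probing / report-snipe attempt"
        )
```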