That just means that folk from vulnerable minorities each individually have to downvote every new troll account targeting them, until the person just moves on to a new troll account.
Which in turn is how you end up with communities full of nothing but white, straight, middle-class, Western cis men who think that trolling each other is a national sport.
The cracking-resistance of this system lies in voters who are smart enough to vote as they like (flatworms can do it, so can we) and in the depth and complexity of an organic voter/votee history, which would be hard to fake or to synthesize quickly.
Of course, yes, the proof requires pudding. A Lemmy fork? Ugh, it’s a lot of work. Maybe a friendly high-school teacher can make it a class project.
You miss the point. Your approach requires the targeted minority to experience the hate first, and then react to it, and gives them no way to proactively avoid content from new sources. It also ensures that every member of the minority in the community in question has a chance to see it, and has to remove it individually.
That suits bigots fine, and unsurprisingly, isn’t sustainable for many targets of bigotry.
> Your approach requires the targeted minority to experience the hate first
That isn’t so. There is vote propagation among peers.
If a trusted (upvoted) peer or peers downvotes a bigot (by downvoting the bigot’s posts) then you will see that bigot downvoted in your own perspective as well.
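A minimal sketch of how that propagation could work, assuming a simple linear weighting (the names and numbers here are my own illustration, not part of the proposal):

```python
# My personal trust scores, built up from my own voting history on each account.
my_trust = {"alice": 0.8, "bob": 0.5}

def propagated_score(peer_votes, my_trust):
    """Score a post by an author I have no history with.

    peer_votes: list of (peer, vote) pairs, where vote is +1 or -1.
    Each peer's vote counts in proportion to how much I trust that peer;
    peers I have no history with count for nothing.
    """
    return sum(my_trust.get(peer, 0.0) * vote for peer, vote in peer_votes)

# Two trusted peers downvote a brand-new bigot account: in my view its
# post starts out negative before I ever have to vote on it myself.
print(propagated_score([("alice", -1), ("bob", -1)], my_trust))  # -1.3
```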
You still see it though, especially if it’s a direct reply. And it is still a reactive system: it lets bigots just come back with a new account and spew hate until it gets downvoted into silence, then come back with yet another account.
Whilst the latter problem still exists even with moderators, at least a moderator can reduce the number of people exposed to hate.
I’ve lived this. I have zero desire to use the system you describe, because I know it leads to toxicity that I don’t need.
Because the GIFT (Greater Internet Fuckwad Theory) corrupts even more, and faster.
Also, the mods are subject to GIFT too. In all probability even more so.
Hello, yes, I think that I would be a great moral authority. I am just the person to tell people what they can and cannot say. That’s me to a T.
You don’t want that guy in charge in a million years.
Then do it with bots. Bots are uncorruptible, or at least perfectly auditable.
Alright, we’ll write a bot that can accurately moderate arbitrary internet content with an acceptably low rate of false negatives and false positives.
You first.
Here’s an idea:
When you read a post, you vote on it.
This vote also sticks to the person who wrote it.
Whenever he posts, his post automatically gets a (weighted) rating based on the history of your votes on his posts.
Also, any post he votes on automatically gets a (weighted) rating, for you, on his recommendation, scaled by his own rating in your eyes.
This rating propagates, and of course it works for both positive and negative voting.
Then you filter however you like.
Everybody starts at 0. Which is also informative, of course.
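A runnable sketch of that scheme, under assumptions of my own: the exponentially decaying vote history and the decay constant are not part of the idea as stated, just one way to make “weighted” concrete.

```python
from collections import defaultdict

class Perspective:
    """One reader's personal view of every other account.

    Nothing here is global: each user keeps their own trust table,
    so the same post can score high for you and low for me.
    """

    def __init__(self, decay=0.9):
        self.decay = decay                 # how quickly old votes fade
        self.trust = defaultdict(float)    # author -> my weighted vote history

    def vote(self, author, value):
        """Record my vote (+1 or -1) on one of author's posts.

        The vote sticks to the author, as an exponentially weighted
        average over my whole voting history on his posts.
        """
        self.trust[author] = self.decay * self.trust[author] + (1 - self.decay) * value

    def auto_rating(self, author, peer_votes=()):
        """Automatic rating of a new post, before I vote on it myself.

        Starts from my history with the author (0 if we have none), then
        adds each voting peer's recommendation, weighted by my trust in
        that peer. This is the propagation step.
        """
        rating = self.trust[author]
        for peer, vote in peer_votes:
            rating += self.trust[peer] * vote
        return rating

# Everybody starts at 0:
me = Perspective()
me.vote("alice", +1)                # I have upvoted alice before
print(me.auto_rating("stranger"))   # 0.0 -- no history, no recommendations

# alice downvotes a post by the stranger; her recommendation
# propagates into my perspective:
print(me.auto_rating("stranger", peer_votes=[("alice", -1)]))  # -0.1
```

The important property is that no score is global: every rating is relative to the reader’s own voting history, which is what would make it hard to game at scale.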
Okay, and if a new account posts CSAM, how does that get removed ASAP?
Bots are only “uncorruptible” insofar as they are built “corrupt” at their very conception.
But you can look at the code. That inhibits shenanigans.