Recently there have been some discussions about the political stances of the Lemmy developers and site admins. To clear up some misconceptions: Lemmy is run by a team of people with different ideologies, including anti-capitalist, communist, anarchist, and others. While @dessalines and I are communists, we take decisions collectively, and don’t demand that anyone adopt our views or convert to our ideologies. We wouldn’t devote so much time to building a federated site otherwise.
What’s important to us is that you follow the site rules and Code of Conduct. That primarily means no bigotry and being respectful towards others. As long as that is the case, we can get along perfectly fine.
In general we are open to constructive feedback, so please contact any member of the admin team if you have an idea for how to improve Lemmy.
Slur Filter
We also noticed a consistent criticism of the built-in slur filter in Lemmy. Not so much on lemmy.ml itself, but whenever Lemmy is recommended elsewhere, a few usual suspects keep bringing it up. To these people we say the following: we are using the slur filter as a tool to keep a friendly atmosphere and prevent racists, sexists and other bigots from using Lemmy. Its existence alone has led many of them to not make an account, or run an instance: a clear net positive.
You can see for yourself the words which are blocked (content warning, link here). Note that it doesn’t include any simple swear words, but only slurs which are used to insult and attack other people. If you want to use any of these words, then please stay on one of the many platforms that permit them. Lemmy is not for you, and we don’t want you here.
We are fully aware that the slur filter is not perfect. It is made for American English and can give false positives in other languages or dialects. We are entirely willing to fix such problems on a case-by-case basis; simply open an issue in our repo with a description of the problem.
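For anyone curious how a filter like this works in practice, here is a minimal Rust sketch, assuming a single case-insensitive regex built from a hardcoded word list (using the common `regex` crate). The function names and placeholder words are invented for illustration; this is not Lemmy’s actual code or word list.

```rust
use regex::RegexBuilder;

/// Illustrative word-list filter (not Lemmy's real code): build one
/// case-insensitive regex from a hardcoded list and flag any text
/// that matches it.
fn build_filter(words: &[&str]) -> regex::Regex {
    // Escape each word and join them into a single alternation: (w1|w2|...)
    let escaped: Vec<String> = words.iter().map(|w| regex::escape(w)).collect();
    let pattern = format!("({})", escaped.join("|"));
    RegexBuilder::new(&pattern)
        .case_insensitive(true)
        .build()
        .expect("filter pattern should be valid")
}

fn contains_slur(filter: &regex::Regex, text: &str) -> bool {
    filter.is_match(text)
}

fn main() {
    // Harmless placeholders stand in for the real word list.
    let filter = build_filter(&["badword", "otherslur"]);
    assert!(contains_slur(&filter, "you are a BADWORD"));
    assert!(!contains_slur(&filter, "have a nice day"));
    // Plain pattern matching also flags words from other languages or
    // dialects that merely contain a listed string -- the kind of
    // false positive we fix case by case.
    assert!(contains_slur(&filter, "unbadwordly")); // false positive
}
```

The upside of this kind of approach is that it costs essentially nothing at posting time; the downside is exactly the language and dialect false positives mentioned above.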
That’s also the case for me, in case that was not clear :)
I don’t think it’s that easy, because of context. Should all usage of the n***** word by black people be prevented? Should all usage of the w****/b**** words by queer/femme folks in a sex-positive context be prevented? etc… I agree with you that using these words is inappropriate most of the time and that we can find better words, but white male technologists have a long history of dictating how software can be used (and who it’s for), and I believe there’s something wrong in that power dynamic in and of itself. It’s not uncommon for measures of control introduced “to protect the oppressed” to turn into serious popular repression.
Still, like I said, I like this filter in practice, and it’s part of the reason I’m here (no-fascism policy). As a militant antifascist AFK, I need to reflect on this and ponder whether automatic censorship is OK in the name of antifascism: it seems pretty effective so far, if only as a psychological barrier. And I strongly believe we should moderate speech and explain openly why we consider certain words/concepts off-limits, but I’m really bothered on an ethical level by dismissing content without human interaction. Isn’t that precisely what we critique in YouTube/Facebook/etc.? I’m not exactly placing those examples on the same level as a slur filter, though ;)
As is often the case in a good debate, I think in the end we mostly agree. I especially agree with your point that reclaiming a word is a valid way of using a slur, and that it should not be up to a privileged group to decide when a word is OK or not. On this point I have to point out that this is still the case with manual moderation, if most moderators are privileged. So I agree that diversity should be pushed for in all places of power, and that all decisions are better made (and more legitimate) when the group making them is diverse.
But on the automated part, I really think the psychological aspect is strong and should be questioned. You talk about “human interaction”, but that is hard not only to define, but also to defend as an effective way of reaching your goals. I am quite sure that when the devs made their filter there was quite a lot of human interaction and debate around it, and the simple fact that they added one shows that they interacted with other people around them. And is “manual” moderation really a human interaction when you don’t see or know the person, and don’t know their culture, the context, their tone, etc.? Moderation will never be perfect; it will always involve bad decisions and errors. When errors are made “directly” by humans, compassion and empathy help us try to understand before judging (though we still judge in the end, don’t get me wrong). Why is it so different when it’s an automated system (created by an imperfect human)? Why is an automated error worse than a human one if the consequences are the same?
Long story short, I don’t like reasoning from grand principles like “automated moderation is dangerous”; I’d rather analyze the situation and ask: would this place be better without this automated moderation? I agree that what counts as “better” is of course a wide and difficult debate, but the focus should always be the same: how to make things better.
Thank you so much for your answer. I’m not used to debating online because I never felt at ease anywhere else before, but I love it, and it is thanks to people like you and all the other interesting answers I get that I can enjoy it and think about it so much! Thank you, thank you <3!!
(edit: typo)
Sure, but given a /c/blackfolks community, a white admin would probably think twice before getting involved in internal matters over there, which an algorithm will have no clue about.
The latter is true, but I believe the former isn’t. Having some kind of filter shows great concern for people experiencing harassment/bullying online, but a word-based filter has been a known anti-pattern since about the end of the ’90s. I remember a library I used to go to where you couldn’t access the library’s own website, because the library’s name contained a French slur as a substring (though the name as a whole was not a slur) and the library-wide MITM proxy used a slur list like the one Lemmy implemented. That’s how clueless such systems are.
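To make that failure mode concrete, here is a small sketch (again in Rust with the `regex` crate, and with a harmless placeholder word rather than anything from Lemmy’s actual list): naive substring matching flags an innocent name that merely contains the listed string, while a word-boundary pattern avoids that particular class of false positive without solving the deeper context problem.

```rust
use regex::RegexBuilder;

fn main() {
    // "rude" is a harmless stand-in for a real slur.
    // Naive substring matching flags any text that merely contains it...
    let naive = RegexBuilder::new("rude")
        .case_insensitive(true)
        .build()
        .unwrap();
    // ...so an innocent proper name gets blocked, just like the
    // library website in the anecdote above.
    assert!(naive.is_match("Gertrude Memorial Library"));

    // Word-boundary anchors (\b) remove this class of false positive,
    // but the filter still cannot see context, tone, or reclamation.
    let bounded = RegexBuilder::new(r"\brude\b")
        .case_insensitive(true)
        .build()
        .unwrap();
    assert!(!bounded.is_match("Gertrude Memorial Library"));
    assert!(bounded.is_match("that was rude"));
}
```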
For the reason you mentioned: lack of context and empathy.
Certainly not. I’m not advocating for removing the slur filter on this specific instance. I’m arguing that having it hardcoded into the source is a strong political posture, and that we don’t really grasp the variety of consequences it may have on the ecosystem as a whole.
Thanks to you too <3! I strongly appreciate online debate in settings like this. Are you by any chance too young to remember when (before Facebook) forums/BBSes were all the rage? We really lost something (on a human/political level) when everyone moved to those centralized platforms, where interactions became uniform and bland, and real-name policies have led to real-life crises (bullying, suicides…).
Thanks for your comment, I’m really happy to read something like this. I’m glad that people can really get along here :)