Perhaps this isn’t new, as I’ve only been on Lemmy for around 3 months, but up until this point I hadn’t noticed spam, advertising, scams, etc. on Lemmy at all. However, within the last 2 days I’ve seen at least 3 examples of obvious spam posts, made by accounts clearly dedicated to that purpose. Has anyone else noticed this? And are there steps we could take to counter it (perhaps a report button)?
Every platform that becomes popular eventually ends up being spammed.
My suggestion would be some kind of filter that tracks several metrics about the linked domain name, e.g. how often the domain appears in reports, how recently it was registered, etc. If the link seems untrustworthy, the submission or comment would be filtered and require manual approval by the community mod(s) before it shows up for everyone else.
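A minimal sketch of that kind of heuristic. All names and thresholds here are made up; a real implementation would pull report counts from the instance database and registration dates from WHOIS lookups:

```python
from datetime import date

# Hypothetical thresholds; real values would need tuning on instance data.
MAX_REPORT_COUNT = 3       # reports naming this domain before posts get held
MIN_DOMAIN_AGE_DAYS = 30   # domains younger than this look risky

def needs_mod_approval(domain, report_counts, registration_dates, today):
    """Return True if a post linking `domain` should be held for mod review."""
    if report_counts.get(domain, 0) >= MAX_REPORT_COUNT:
        return True
    registered = registration_dates.get(domain)
    if registered is None:
        # Unknown registration date (e.g. WHOIS lookup failed): be cautious.
        return True
    return (today - registered).days < MIN_DOMAIN_AGE_DAYS

# Example: a heavily reported, freshly registered domain gets held for review.
reports = {"spam.example": 5}
registered = {"spam.example": date(2021, 6, 1), "old.example": date(2010, 1, 1)}
today = date(2021, 6, 10)
print(needs_mod_approval("spam.example", reports, registered, today))  # True
print(needs_mod_approval("old.example", reports, registered, today))   # False
```

The point of the manual-approval fallback is that false positives only delay a legitimate post rather than removing it.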
And personally I’d auto-block any URL shortener services; they don’t serve a valid purpose here and can be used to hide the destination URL.
How do you detect URL shorteners? Simply by checking for a redirect using curl, or do you check against a list of URLs? Domain review would be a lot of work to implement; I hope we can avoid that.
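For the redirect-based approach: a single HEAD request answered with a 3xx pointing at a *different* host is the typical signature of a shortener. A sketch of just the decision logic (the actual HEAD fetch, e.g. via curl -I or an HTTP client, is left out, and this heuristic is an assumption, not anything Lemmy implements):

```python
from urllib.parse import urlparse

def looks_like_shortener(status, original_host, location):
    """Heuristic: a 3xx redirect to a different host suggests a shortener.

    status:        HTTP status code from a HEAD request to the submitted URL
    original_host: hostname of the submitted URL
    location:      value of the Location response header, or None
    """
    if status not in (301, 302, 303, 307, 308) or not location:
        return False
    target_host = urlparse(location).hostname or ""
    return target_host != original_host

# e.g. bit.ly answers HEAD with a 301 and a Location on another host:
print(looks_like_shortener(301, "bit.ly", "https://example.com/article"))  # True
print(looks_like_shortener(200, "example.com", None))                      # False
```

The downside of redirect probing is that it needs one outgoing request per submitted link, which is why a static list is cheaper.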
I currently just use a list of known URL shortener domain names, and it reduced the spam a bit on the subreddit I moderate.
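The list-based check is straightforward. A sketch, assuming the list is loaded into a set (the few domains shown here are just examples; a real deployment would load a full maintained list from a file):

```python
from urllib.parse import urlparse

# Tiny sample; a real deployment would load a full maintained list.
SHORTENER_DOMAINS = {"bit.ly", "tinyurl.com", "t.co", "goo.gl", "is.gd"}

def is_blocked_shortener(url):
    """True if the URL's host is a listed shortener or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in SHORTENER_DOMAINS)

print(is_blocked_shortener("https://bit.ly/3abcdef"))       # True
print(is_blocked_shortener("https://example.com/article"))  # False
```

Matching subdomains as well as the bare domain prevents trivial evasion like `www.bit.ly`.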
Problem is that someone needs to maintain that list.
Every list needs a maintainer ;)
EDIT: Here’s a preliminary list
https://github.com/m-p-3/domain-lists/blob/main/url-shorteners
And that maintainer needs to be trusted. But if we manage without a list, there is no need for extra trust nor maintenance work.
Fair point.
I found it kind of funny at first. This site is a smaller forum filled with people who are interested in privacy and security and are generally tech-literate enough to spot a scam. I’m not sure what they hope to gain over doing this on a bigger website, but it’s interesting that we’re on these people’s radar.
Spam knows no economic or physical boundaries. They just spam indiscriminately.
I think the Lemmy devs should really consider implementing privacy pass for this problem: https://docs.rs/challenge-bypass-ristretto/0.1.0-pre.2/challenge_bypass_ristretto/#challenge-bypass-ristretto---build-status
Outside of the other solutions ppl proposed below, we just need more active admins across different timezones. The report queue has really helped, but there aren’t enough of us looking at them.
Cleaning things up only takes a few seconds with the ban + remove content action.
Also a lot of these spam posts do seem automated, which means our captcha here isn’t doing as good a job as it should be 😞
I think it’s just that captchas are so cheap and easy to bypass. I’m sure you know about the farms of people solving captchas for bots and other spam services.
Captcha is more of a user annoyance at this point.
I’m able to help!
Would some sort of Bayesian filter help? At least from what I’ve seen on PeerTube, WriteFreely, and the history of email is that certain patterns crop up in the posts.
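A toy version of what a Bayesian filter does, for anyone unfamiliar: count word frequencies in known spam and known ham, then score new posts by the log-odds of their words. This is a from-scratch sketch with made-up training data, not anything Lemmy ships; real deployments use mature implementations like SpamAssassin's Bayes module:

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam). Returns per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Naive-Bayes log-odds that `text` is spam, with add-one smoothing."""
    vocab = set(counts[True]) | set(counts[False])
    score = math.log((totals[True] + 1) / (totals[False] + 1))  # class prior
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (sum(counts[True].values()) + len(vocab) + 1)
        p_ham = (counts[False][word] + 1) / (sum(counts[False].values()) + len(vocab) + 1)
        score += math.log(p_spam / p_ham)
    return score  # > 0 means "more likely spam than not"

training = [
    ("buy cheap pills now", True),
    ("limited offer click now", True),
    ("federation update for lemmy devs", False),
    ("privacy discussion thread", False),
]
counts, totals = train(training)
print(spam_score("cheap pills offer", counts, totals) > 0)     # True
print(spam_score("lemmy privacy thread", counts, totals) > 0)  # False
```

The patterns you mention (repeated phrasings across PeerTube, WriteFreely, email) are exactly what this kind of word-frequency model picks up on.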
It might, but either signup applications like we’re getting ready for the next release, or some kind of minimal activity restrictions would probably work best.
Maybe, in addition to admins, there could be demi-mods whose reports hide the reported content? Or some other democratic approach; I remember League of Legends had a “tribunal” where users voted on whether something was appropriate. Maybe something like that could distribute the admin load without giving anyone unilateral ban power.
Spam moves faster than democracy.
It moves faster than fascism too =P
Nah fascism is just better at disguising their spam as an attack on your personal liberties by the communists.
I’m saying the current moderation strategy is pure fascism
Yes, I’ve noticed. It’s just been occasional so far tho. And there’s actually a report button: just click the three dots under a post, then click the flag to fill out the report form, and click report when you’re finished.
Does the report go to the community mods or the lemmy mods?
the former
It’s both; both admins and community mods see reports now.
Alright, that’s good to know. Thanks!
Some people just have way too much time on their hands.🤷
Most of it seems to be done by companies, so there must be a way they profit from it.
Maybe nofollow and ugc attributes on outgoing links could help?
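For context: rel="nofollow ugc" tells search engines not to pass ranking credit to user-submitted links, which removes the SEO incentive behind a lot of link spam. A naive sketch of tagging rendered links this way (a regex over HTML is fragile; a real server would add the attribute inside its markdown renderer instead):

```python
import re

def add_link_attrs(html):
    """Add rel="nofollow ugc" to every <a> tag lacking a rel attribute.

    Naive regex sketch for illustration only; production code should set
    the attribute while rendering markdown, not by post-processing HTML.
    """
    def fix(match):
        tag = match.group(0)
        if "rel=" in tag:
            return tag  # leave existing rel attributes alone
        return tag[:-1] + ' rel="nofollow ugc">'
    return re.sub(r"<a\b[^>]*>", fix, html)

print(add_link_attrs('<a href="https://spam.example">deal!</a>'))
# <a href="https://spam.example" rel="nofollow ugc">deal!</a>
```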
Could be Reddit’s attempt to make Lemmy less appealing by adding spam to it
lol
doubt it
deleted by creator
I could be wrong, but I was under the impression that Lemmy doesn’t keep track of user karma. Perhaps it does internally but doesn’t display it in the UI? Otherwise, this sounds like a good suggestion.
Karma is tracked on the back end, but the Lemmy UI doesn’t show it. It’s available in the API tho.
deleted by creator
The code for total user karma is in the backend, but just isn’t utilized in this instance. You absolutely can access it though.
prly bc i was away 😎
seriously though, there’s been a ton more spam the last few weeks, i guess that’s the price you pay for more users 🤷♀️
There are even seemingly dedicated spam communities, like !wetshaving
I take care of 2 giant sublemmys and remove spam within 12–24 hours. I hope we can get more good moderators on board to put in the required community work.
Giant? What number of users/posts qualifies a sub for the giant label?
c/privacy and c/technology are the main big sublemmys here.
I’m able to help!
There’s a report button already, but I believe it only reports to the instance admin and not the whole federation. From what I’ve seen, the spam is mostly coming from instances with not much activity otherwise.
This is one of the reasons Voat died: they didn’t want to pay anyone to add anti-spam measures, since the entire thing was just a school project that blew up.
Bans aren’t federated yet, so an admin on instance A might ban a user and remove their posts, but other instances won’t know about it. This will need a bit more time to implement, and should improve the spam situation a lot.
Just a quick reminder to users: if you see spam, simply commenting “spam” or downvoting it doesn’t alert the mods. It’s best to hit the report button and/or comment directly mentioning the mods of the community or the admins of the instance where the post originates from.
If you report the offending post to the admins of another instance (Lemmy.ml sees this a lot, since we’re one of the biggest instances and almost everyone knows us), they can only remove and ban on their own instance, whereas the home instance admins can ban the account from posting at all, and if they remove it, that should propagate through to the federated instances.
How can someone put themselves forward as an admin?