cross-posted from: https://sh.itjust.works/post/1823812
This is an update to my previous post about suspicious inactive accounts on a handful of instances (https://sh.itjust.works/post/998307).
I ended up messaging the admins at the 16 instances shown in the attached image. I pointed out their wild user numbers, and referenced the lemmy.ninja post detailing how that instance scrubbed suspicious accounts from their user database.
6 admins responded. They had all noticed the odd accounts and either thought the numbers were wrong or weren't sure how to purge the suspicious accounts without nuking their databases. In the end they managed to delete a combined total of about 338k dormant accounts from their instances. (One of those instances seems to have gone down since then.)
I never received a reply from the other 10 instance admins, though 8 of those 10 instances appear to be down (as of 27 July 2023). The remaining 2 instances are still up and unchanged.
Between the actively removed accounts and the downed instances, this represents a loss of 930,004 inactive Lemmy accounts!
You can see the drop in the graphs on The Federation. The total number of Lemmy accounts has been cut in half over the past 3 weeks, from a peak of 2.18M to today’s 1.09M. The change is mostly from these 16 instances.
I have to admit, I did not expect such a large change when I started this! Hopefully this bodes well for Lemmy’s future as a place where actual humans interact, rather than a cesspool of automated comments and upvote/downvote brigading.
That’s all I have for now. Keep your stick on the ice; we’re all in this together.
Well done. I for one appreciate the effort you’re putting into making this a better place by keeping the bots out. Any thoughts on what can be done to keep bots from signing up to begin with, or is the plan to continuously purge inactive accounts? I know from experience that a lot of these bad actors are going to pivot and redouble their efforts. This is unfortunately a cat-and-mouse game that will continually need to be addressed. But, again, thank you for your work on this!
Instances should enable verification to create accounts (email or captcha). I think everyone learned that pretty quickly last month. Other than that, it’s up to users to diligently flag content and moderators to be responsive. Maybe there are good automod tools coming to Lemmy someday, but those are an arms race, too.
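For anyone curious, here’s a minimal, hypothetical sketch of what that email-verification step can look like. This is not Lemmy’s actual code; the in-memory store, the `send_email` helper, and the verification URL are all made up for illustration. The idea is just: issue a one-time token, email it, and only activate the account if the token comes back before it expires.

```python
import secrets
import time

# Hypothetical sketch of an email-verification flow at signup.
# Nothing here is Lemmy's real implementation.

PENDING_VERIFICATIONS = {}       # token -> (username, email, issued_at)
TOKEN_TTL_SECONDS = 60 * 60      # verification links expire after 1 hour


def send_email(address: str, body: str) -> None:
    """Stand-in for a real mail-sending call (e.g. via SMTP)."""
    print(f"To: {address}\n{body}")


def start_signup(username: str, email: str) -> None:
    """Record a pending account and email a one-time verification link."""
    token = secrets.token_urlsafe(32)
    PENDING_VERIFICATIONS[token] = (username, email, time.time())
    send_email(email, f"Confirm your account: https://example.instance/verify/{token}")


def verify(token: str) -> bool:
    """Activate the account only if the token is known and not expired."""
    record = PENDING_VERIFICATIONS.pop(token, None)
    if record is None:
        return False
    _username, _email, issued_at = record
    return (time.time() - issued_at) <= TOKEN_TTL_SECONDS
```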
How does email handle it?
Are you referring to email verification on sign-up? If so, it’s unfortunately easy for bad actors to overcome. Depending on how the platform handles it, one email address can be reused over and over to verify accounts, or there are plenty of services out there that provide an endless supply of quick, disposable addresses. The automation of both has already been solved, too.

For the first scenario, limiting how many times a single email address can be used for account verification helps. For the second scenario, the cat-and-mouse game really begins. You can block sign-ups from known disposable/spam email domains, and there are public lists that can help. If someone is really persistent, they may have a trove of legitimate email addresses they can use. Then you have to start looking at where the sign-ups are coming from: the IP and its reputation, the behavior, and ideally fingerprints from the device. You could serve a captcha, but most are trivial to bypass with code straight from GitHub or with captcha-solving services.

Overall, this is not an easy problem to solve. I know a lot of conversation is happening on Lemmy regarding this topic. It’s going to take all of us together to help solve the problem.
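To make the first two checks concrete, here’s a rough sketch of what they could look like in code. The domain list and the per-address limit are placeholder values, not a recommendation; real deployments typically pull the disposable-domain list from one of the public lists mentioned above.

```python
from collections import Counter

# Illustrative sketch of two anti-abuse checks: a per-address signup limit
# and a disposable-email domain blocklist. Values below are placeholders.

DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com", "10minutemail.com"}
MAX_ACCOUNTS_PER_EMAIL = 1

signups_per_email = Counter()    # email address -> number of accounts verified


def allow_signup(email: str) -> bool:
    """Return True if this address passes the basic anti-abuse checks."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        return False                              # domain is on the disposable list
    if signups_per_email[email] >= MAX_ACCOUNTS_PER_EMAIL:
        return False                              # address already used too often
    signups_per_email[email] += 1
    return True
```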
Email is federated very similarly to ActivityPub. How does Email handle filtering for bad instances?
I know they have sophisticated systems built up over decades that now seem to work quite well, but I don’t really know the details.
I do believe that if I stood up my own email server right now, I could still send email to people without being blocked, but I’m not positive.
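One concrete piece of how email handles this: most mail servers check the sending IP against DNS blocklists (DNSBLs) such as Spamhaus before accepting a message. Here’s a minimal sketch of just that lookup convention, not a full spam filter; the function and its defaults are my own illustration, though zen.spamhaus.org is a real, widely used blocklist and 127.0.0.2 is the standard test address most DNSBLs list.

```python
import socket

# Sketch of a DNSBL lookup: reverse the IPv4 octets, query them under the
# blocklist's zone, and treat any answer as "listed".


def is_listed(ip: str, blocklist: str = "zen.spamhaus.org") -> bool:
    """Return True if the IPv4 address appears on the given DNS blocklist."""
    reversed_ip = ".".join(reversed(ip.split(".")))
    query = f"{reversed_ip}.{blocklist}"
    try:
        socket.gethostbyname(query)   # any answer means the IP is listed
        return True
    except socket.gaierror:
        return False                  # no record: the IP is not listed


# Example: 127.0.0.2 is the standard test entry that most DNSBLs return as listed.
print(is_listed("127.0.0.2"))
```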