From the article:
A volunteer-made project that fights bots on Reddit is shutting down. BotDefense, a tool that helps fight bots in more than 3,600 subreddits and has nearly 150,000 accounts on its ban list, will be going away.
As for why: The community of users and moderators submitting accounts to us depends on Pushshift, the API, and third-party apps. And we would be deluding ourselves if we believed any assurances from Reddit given the track record of broken promises. Investing further resources into Reddit as a platform presents significant risks, and it’s safer to allocate one’s time, energy, and passions elsewhere.
And we would be deluding ourselves if we believed any assurances from Reddit given the track record of broken promises.
This is what bothers me the most about the mods and devs who are still bending over for spez. I’m glad the BotDefense devs have some dignity.
If you’ve sunk an unhealthy amount of time into a community, it’s hard to break up. At this point the mods are just in your run-of-the-mill toxic relationship with reddit.
Reddit is entering a death spiral that it might recover from, but it’s not looking good. The Fediverse is the new cool.
It would be lovely if these groups could be convinced to move to the fediverse. There’s a large set of problems like this that we don’t have the tooling to handle yet, and it’d be a shame if all the knowledge on how to deal with stuff like bots and brigading died with reddit.
Dang spez, hope the money you’ll be getting from each api call is enough to pay for all the free work the community has been doing over the years
It’s the money from Reddit’s IPO that he’s after; the API changes are about sending a message to investors that their needs will come first.
BotDefense is wrapping up operations
TL;DR below.
When we announced the BotDefense project in 2019, we had no idea how large the project would become. Our initial list of bots was just 879 accounts. Most of them were annoying rather than outright malicious.
Since then, we’ve witnessed the rise of malicious bots being used to farm karma for the purpose of spamming and scamming users across Reddit, and we’ve done our best to help communities stem the tide. We spent countless hours finding and reviewing accounts, writing code to automate detections, and reviewing appeals (mostly from outright criminals and karma farmers who were definitely running bots, but we typically unban about 4 accounts per month, and unlike similar bots, an unban means we unban the account everywhere we banned it).
Along the way, we’ve struggled with the scope of the problem, rewriting our back-end code multiple times and figuring out how to scale to the 3,650 subreddits that BotDefense now moderates. We came up with new algorithms to identify content theft, reduce the number of times we accidentally ban an innocent account, and more. In January of 2023, we added an incredible 10,070 bots to our ban list, which now stands at 144,926 accounts.
Like many anti-abuse projects on Reddit, we’ve done all of this for free while putting up with Reddit’s penchant for springing detrimental changes on developers and moderators (e.g., adding API limits without advance notice and blocking Pushshift) and figuring out workarounds for numerous scalability issues that Reddit never seems to fix. Without Pushshift, the number of malicious bots we were able to ban dropped to 5,517 in May.
Now, Reddit has changed the Reddit API terms to destroy third-party apps and harm communities. A group of developers and moderators tried to convince Reddit to not continue down this path and communities protested like never before, but that was all in vain. Reddit is so brazenly hostile to moderators and developers that the CEO of Reddit has referred to us as “landed gentry”.
With these changes and in this environment, we no longer believe we can effectively perform our mission. The community of users and moderators submitting accounts to us depends on Pushshift, the API, and third-party apps. And we would be deluding ourselves if we believed any assurances from Reddit given the track record of broken promises. Investing further resources into Reddit as a platform presents significant risks, and it’s safer to allocate one’s time, energy, and passions elsewhere.
Therefore, we have already disabled submissions of new accounts and our back-end analytics, and we will be disabling future actions on malicious and annoying bots. We will continue to review appeals and process unbans for a minimum of 90 days, or until Reddit breaks the code running BotDefense.
We’d rather be figuring out how to combat the influx of ChatGPT bots flooding Reddit, Temu bots flooding subreddits with fake comments, and every other malicious bot out there, of course.
At this time, we advise keeping BotDefense as a moderator through October 3rd so any future unbans can be processed. We will provide updates if the situation changes or if we have any other news to share.
Finally, I want to thank all of the users and moderators who have contributed accounts, my co-moderators who have helped review countless accounts, and all of the communities that have trusted us to help moderate their subreddits.
Regards.
— dequeued
TL;DR With the API changes now in place, we no longer believe we can effectively perform our mission, so we are sunsetting BotDefense. We recommend keeping BotDefense on as a moderator through October 3rd so any unbans can be processed.
One day in the future reddit will just be bots advertising to bots.
Future? On some of the more niche subs, half the posts were already made by bots.
More bots on Reddit? Isn’t this what spez was going for?
As long as he can fool advertisers about activity
tfw the most reddit addicted city is an AWS datacenter
Better than Eglin airforce base (kinda)
I think he looked at /r/subredditsimulator and decided it was something to be taken seriously.
that was one of my favorite subs ever
wish the creator would have let it run for longer
I actually participated in this from time to time. It was sometimes fun to spend a few hours creeping through bot profiles and building a network of them and reporting them all.
Hey, I hope you don’t mind me asking a question.
Given that upvotes and downvotes are public on Lemmy, do you think it would be possible to use that info to potentially detect bots or vote manipulation?
Yeah, I’d imagine it’s possible to identify the behaviour that some bots followed on Reddit. It also helps that global karma isn’t a thing here, since it was a big part of the incentive that led to bots in the first place. I don’t know if you ever encountered the subreddit “FreeKarma4u”, but it was basically just a sub with no rules that allowed bots to upvote each other’s posts and comments until they met the karma threshold to start posting on other subs.
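For what it’s worth, here’s a rough sketch of how that reciprocal-upvote pattern could be spotted once votes are public. This isn’t anything BotDefense or Lemmy actually ships; the vote records, names, and threshold are all made up for illustration:

```python
# Sketch only: assumes you've already pulled public vote records from an
# instance. Every name and threshold here is hypothetical.
from collections import defaultdict
from itertools import combinations

# Each record means `voter` upvoted a post or comment written by `author`.
votes = [
    ("alice", "bob"), ("bob", "alice"),
    ("alice", "bob"), ("bob", "alice"),
    ("carol", "dave"),
]

# Count upvotes per ordered (voter, author) pair.
pair_counts = defaultdict(int)
for voter, author in votes:
    if voter != author:
        pair_counts[(voter, author)] += 1

# Flag account pairs that repeatedly upvote each other, i.e. the
# FreeKarma4u pattern of mutual karma farming.
RECIPROCAL_THRESHOLD = 2
accounts = {name for pair in pair_counts for name in pair}
suspicious_pairs = [
    (a, b)
    for a, b in combinations(sorted(accounts), 2)
    if pair_counts[(a, b)] >= RECIPROCAL_THRESHOLD
    and pair_counts[(b, a)] >= RECIPROCAL_THRESHOLD
]

print(suspicious_pairs)  # [('alice', 'bob')]
```

In practice you’d also want to weigh account age, vote timing, and how much of each account’s total voting goes to the ring, since two genuinely friendly users who upvote each other a lot would trip a naive threshold like this.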
The worst part was the disingenuous removal of the Pushshift API, framed as if devs had been using it for things Reddit didn’t envision (like unddit using it to retrieve deleted comments so you could see whether an admin had removed a genuine comment or just hate speech).
Terrible situation overall.
I can understand why they’re shutting down, but imagine if they just changed it to specifically go after pro-admin bots.
This bodes well for the elections coming up next year…
This can only end hilariously.
I, for one, welcome our new bot overlords.
It really blows that so many great developers who did free work for Reddit are getting shafted so Reddit can attempt to make more money.