But when anyone can run an instance, you can’t control it. Someone runs an instance that allows them to make as many posts as they want, and then all that content is federated to connected servers.
Really though? You can implement the same limits for federated posts and just drop the ones exceeding the rate limit. Who knows, it might be frustrating for normal users who genuinely exceed the limits, because their stuff won’t be seen by everyone and they get no notice, but if the limits are sane the impact should be minimal.
A notice could still be implemented though. I don’t know exactly how federation works, but when a federated post is sent or retrieved, the receiving server could also signal back that it has been rejected. The user’s local server could then inform them that their content was rejected by other servers (something like the sketch below).
There are solutions for a lot of things; it just takes time to think about & implement them, and that time is incredibly limited.
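To make the “same limits, just drop the rest” idea a bit more concrete, here is a minimal sketch of what a receiving server could do, assuming a simple sliding window per remote actor. ActivityStreams does define a `Reject` activity type (usually seen for follow requests), so a rejection notice could in principle be expressed with it, though whether the origin server relays it to the author is up to that server. All names below (`inbox_handler`, `store_locally`, the instance actor URL, the thresholds) are made up for illustration.

```python
# Minimal sketch: per-actor rate limiting of inbound federated activities.
# Everything here is illustrative, not taken from any real ActivityPub server.
import time
from collections import defaultdict, deque

RATE_LIMIT = 3        # max activities per remote actor...
WINDOW_SECONDS = 60   # ...within this sliding window

recent = defaultdict(deque)  # actor URL -> timestamps of recent activities


def over_limit(actor, now):
    """Sliding-window check: has this actor already hit the limit?"""
    window = recent[actor]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= RATE_LIMIT:
        return True
    window.append(now)
    return False


def store_locally(activity):
    """Hypothetical persistence step; a real server would write to its DB."""
    pass


def inbox_handler(activity):
    """Accept an inbound activity, or drop it and return a Reject to send back."""
    if over_limit(activity["actor"], time.time()):
        # The post never reaches local timelines. The Reject only helps the
        # author if their home server chooses to relay it to them.
        return {
            "@context": "https://www.w3.org/ns/activitystreams",
            "type": "Reject",
            "actor": "https://example.social/actor",  # hypothetical local instance actor
            "object": activity.get("id"),
            "summary": "Rate limit exceeded; post not accepted here.",
        }
    store_locally(activity)
    return None
```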
Even a “normal” user needs to chill out a bit when they start reliably hitting a (for example) 3-post-a-minute threshold.
Not to suggest it isn’t a problem that needs to be solved. But from my understanding of the ActivityPub protocol, there isn’t a way to control content federation on a per-message basis, only by allowing or blocking instances as a whole.
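For reference, “allow/block instances as a whole” boils down to checking the sending actor’s domain against a server-wide list before anything else is processed. A rough sketch, with a hypothetical admin-maintained blocklist:

```python
# Sketch of instance-level federation control: the only granularity is
# the sending server's domain, not individual messages.
from urllib.parse import urlparse

BLOCKED_INSTANCES = {"spam.example"}  # hypothetical admin-maintained blocklist


def accept_from(actor_url: str) -> bool:
    """Return True if the actor's home instance is not blocked."""
    domain = urlparse(actor_url).netloc
    return domain not in BLOCKED_INSTANCES


print(accept_from("https://spam.example/users/bot123"))    # False
print(accept_from("https://friendly.example/users/alice")) # True
```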
It’s an interesting problem to be sure. It feels like it should be possible for servers to automagically detect spam on incoming federated feeds and decline to accept spam posts.
Maybe an _actually_ useful application of LLMs.
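Even without an LLM or a paid service, one plausible heuristic is to flag bursts of near-identical content arriving in a short window, which is what most naive spam waves look like. A toy sketch, with made-up thresholds:

```python
# Toy heuristic: treat many near-identical posts in a short span as spam.
import hashlib
import time
from collections import defaultdict

DUPLICATE_THRESHOLD = 5   # identical bodies seen...
WINDOW_SECONDS = 300      # ...within five minutes

seen = defaultdict(list)  # content hash -> timestamps of recent sightings


def looks_like_spam(content: str, now: float = None) -> bool:
    """Flag content once the same (normalized) body shows up too often."""
    now = now or time.time()
    digest = hashlib.sha256(content.strip().lower().encode()).hexdigest()
    timestamps = [t for t in seen[digest] if now - t < WINDOW_SECONDS]
    timestamps.append(now)
    seen[digest] = timestamps
    return len(timestamps) > DUPLICATE_THRESHOLD
```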
There are already plenty of tools that do this automatically; sadly they’re very often proprietary, paid-for services. You just have to have a way to appeal false positives, because there will always be some, and, depending on how aggressive the filtering is, sometimes a lot.
I look forward to an automated mechanism, like with image checking…
That said, the existing tools aren’t all that terrible, even if it’s after the fact.
‘Purge content’ does a pretty good job of dumping data from known bad actors, and then there’s being able to block users/instances.
If everything was rate limited to some degree, we would manually catch these earlier and block before the rest of the content made its way over… maybe.
Perhaps there’s a case to be made for a federated minimum config. If servers don’t adhere to a minimum viable contract, say failing to meet rate-limiting requirements, not requiring 2FA, or other config-level things… they get defederated.
A way of enforcing adherence to an agreed-upon minimum standard of behaviour, of sorts.
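Fediverse servers already publish some self-description through the NodeInfo endpoint (`/.well-known/nodeinfo`), so a minimum-contract check could in principle piggyback on that. NodeInfo itself is real, but the `policy`, `rateLimitPerMinute`, and `requires2fa` fields below are invented purely for illustration, and as the next reply points out, they would be self-reported either way:

```python
# Sketch: check a remote instance's self-reported config before federating.
# NodeInfo discovery is real; the "policy" metadata fields are hypothetical.
import json
from urllib.request import urlopen


def fetch_nodeinfo(domain: str) -> dict:
    """Follow the well-known NodeInfo pointer to the actual document."""
    with urlopen(f"https://{domain}/.well-known/nodeinfo") as resp:
        links = json.load(resp)["links"]
    with urlopen(links[-1]["href"]) as resp:  # pick the last (typically newest) schema
        return json.load(resp)


def meets_minimum_contract(domain: str) -> bool:
    """True if the instance claims to enforce the agreed-upon minimums."""
    policy = fetch_nodeinfo(domain).get("metadata", {}).get("policy", {})
    return (
        policy.get("rateLimitPerMinute", 0) > 0   # hypothetical field
        and policy.get("requires2fa", False)      # hypothetical field
    )
```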
It would be very easy to spoof those values in a handshake though, unless you’re proposing that in the initial data exchange a remote server gets a dump of every post and computationally verifies compliance.
Federated trust is an unsolved problem in computer science because of how complex of a problem it is.
Spoofing that handshake would be a bad-faith action, one that would not go unnoticed longer term. Instances that rack up a bunch of bad-faith actions will make the case for defederating from them.
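One mechanical way “a bunch of bad-faith actions” could turn into defederation: keep a per-instance strike count of verified violations (posts that blew past the limits the instance claimed to enforce, for example) and cut the connection once a threshold is crossed. A sketch with made-up names and numbers:

```python
# Sketch: strike-count reputation per instance, with automatic defederation.
# VIOLATION_LIMIT and the functions here are illustrative, not from any real server.
from collections import Counter

VIOLATION_LIMIT = 10
strikes = Counter()        # domain -> count of verified bad-faith actions
defederated = set()        # domains we no longer accept activities from


def record_violation(domain: str, reason: str) -> None:
    """Log a verified bad-faith action and defederate past the threshold."""
    strikes[domain] += 1
    print(f"violation from {domain}: {reason} ({strikes[domain]} total)")
    if strikes[domain] >= VIOLATION_LIMIT and domain not in defederated:
        defederated.add(domain)  # stop accepting activities from this instance
```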
It just has to go unnoticed long enough to spam for a few days, get defederated, delete itself, and start over.