I’ll give you my opinion even though you haven’t asked for it:
Some right-wingers (mostly libertarians) don’t want to ban books; they actually want books to be reliably available, and one centralized Internet Archive storing all of them is not reliable.
(Or, by the same logic, they want humanity to be knowledgeable and resistant to propaganda, and treating the availability of sources as a given is harmful to that goal: naive people can believe wrong things.)
See the Babylon 5 example of kicking the ant hive again and again toward some well-meaning goal, of the evolutionary kind.
Mind you, I don’t think these people have such an intent.
It’s just that in my childhood someone gaslighted me into trying to be optimistic in such cases. Like: “if someone is digging a grave for you, just wait till they’re done — you’ll get a nice pond.” The same goes for a precedent created with one intent and one interpretation: it works for all possible intents and interpretations, because it’s a real-world event.
So, gaslighting aside, real effects are real — including positive ones, like all of us right now realizing that a centralized IA is unacceptable. We need something like “IA@home”, with a degree of forkability that doesn’t duplicate the data, so that someone who somehow hijacked the private key (or whatever identifies the new IA’s authority) couldn’t harm existing versions, and those versions wouldn’t require much more storage.
Shit, I can’t stop thinking about that “common network, identities, and metadata exchange, but data storage shared per community one joins, Freenet-like” idea, but I don’t even remotely know where to start developing it and doubt I ever will.
Four years ago (the best number I can find, considering IA’s blog pages are down), IA used about 50 petabytes, on servers that each have 250 terabytes of storage and a 2 Gbps network link.
From this, we can conclude that 1 TB of storage requires about 8 Mbps of network bandwidth.
Let’s just say that the average residential broadband connection has 8 Mbps of spare symmetrical bandwidth.
We would need 50,000 volunteers, at 1 TB each, to cover the absolute minimum.
Probably 100k to 200k to have any sort of reliability, considering it’s all residential networking and commodity hardware.
In the last 4 years, I imagine IA has increased their storage requirements significantly.
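The back-of-the-envelope math above can be checked in a few lines. The 50 PB total and the 250 TB / 2 Gbps per-server figures are the assumptions taken from the post:

```python
# Back-of-the-envelope check of the numbers above.
# Assumptions from the post: ~50 PB total, servers with 250 TB of
# storage and a 2 Gbps network link each.
total_storage_tb = 50_000          # 50 PB expressed in TB
server_storage_tb = 250
server_link_mbps = 2_000           # 2 Gbps expressed in Mbps

# Bandwidth needed per TB stored, if the server ratio holds:
mbps_per_tb = server_link_mbps / server_storage_tb
print(mbps_per_tb)                 # 8.0

# Bare minimum volunteer count at 1 TB (and 8 Mbps) donated each,
# with zero redundancy:
volunteers_minimum = total_storage_tb // 1
print(volunteers_minimum)          # 50000
```

The minimum has no replication at all; multiplying by a replication factor of 2–4 lands in the 100k–200k range mentioned below.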
And all of that would need to be coordinated, so that some shards don’t get over-replicated while others are left with too few copies.
This seems to confirm my critique of the “manual” solutions (torrents and such) offered in other comments, which is what led to the idea briefly described in the comment you were answering.
Yes, this would require a lot of people, but some would contribute more and some less, just like with other public P2P solutions.
From my POV, the biggest problem is synchronizing the indexes of such a storage (something like a filesystem superblock, maybe) and balancing replication based on them, in a decentralized way — because those indexes would not be small themselves.
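One hypothetical way to decide replica placement without a central coordinator is rendezvous (highest-random-weight) hashing: every peer that knows the same node list computes the same shard-to-node assignment from the index alone, so replication stays balanced with no negotiation. A minimal sketch (the shard and node IDs are made up for illustration):

```python
import hashlib

def rendezvous_owners(shard_id: str, nodes: list[str], replicas: int = 3) -> list[str]:
    """Deterministically pick `replicas` nodes to hold a shard.

    Every peer with the same node list computes the same answer,
    so no central coordinator is needed to avoid over-replication.
    """
    def score(node: str) -> int:
        # Hash the (shard, node) pair; highest scores win the shard.
        digest = hashlib.sha256(f"{shard_id}:{node}".encode()).hexdigest()
        return int(digest, 16)

    return sorted(nodes, key=score, reverse=True)[:replicas]

nodes = [f"node-{i}" for i in range(10)]
print(rendezvous_owners("shard-0042", nodes))
```

A nice property for churn-heavy volunteer networks: when a node joins or leaves, only the shards it scored highest on move, rather than the whole assignment reshuffling.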
There would also be all the usual work of verifying data integrity.
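The standard approach here is content addressing: a chunk’s ID is the hash of its bytes, so any peer can verify what it received without trusting the sender. A minimal sketch:

```python
import hashlib

def chunk_id(data: bytes) -> str:
    # A chunk is addressed by the SHA-256 of its contents.
    return hashlib.sha256(data).hexdigest()

def verify(expected_id: str, data: bytes) -> bool:
    # A peer re-hashes the received bytes and compares against the
    # ID it asked for; a corrupt or malicious chunk fails the check.
    return chunk_id(data) == expected_id

blob = b"some archived page"
cid = chunk_id(blob)
print(verify(cid, blob))              # True
print(verify(cid, b"tampered data"))  # False
```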
I think it’s realistic to attract many volunteers if the thing in question is also the user client, socially similar to Freenet and torrents, and if contributing more storage lets them fetch frequently accessed content faster, as a cache. But balancing that against storing necessary but unpopular parts of the space is an open question.
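That cache-vs-unpopular-data tension could be handled by splitting each volunteer’s budget: a fixed slice is reserved for shards the network assigns (however unpopular), and the rest is an LRU cache of whatever the user actually accesses. A toy sketch — the 30% pinned fraction and shard names are arbitrary assumptions, not a worked-out policy:

```python
from collections import OrderedDict

class NodeStorage:
    def __init__(self, budget_gb: int, pinned_fraction: float = 0.3):
        # Reserve part of the budget for network-assigned shards that
        # must survive even if nobody here reads them; the rest is a
        # popularity cache for the local user.
        self.pinned_budget = int(budget_gb * pinned_fraction)
        self.cache_budget = budget_gb - self.pinned_budget
        self.pinned: dict[str, int] = {}            # shard -> size in GB
        self.cache: OrderedDict[str, int] = OrderedDict()

    def pin(self, shard: str, size_gb: int) -> bool:
        # Accept an assigned shard only if the pinned slice has room.
        if sum(self.pinned.values()) + size_gb > self.pinned_budget:
            return False
        self.pinned[shard] = size_gb
        return True

    def access(self, shard: str, size_gb: int) -> None:
        # LRU cache: a hit moves the shard to the "recent" end; a miss
        # evicts least-recently-used shards until the new one fits.
        if shard in self.cache:
            self.cache.move_to_end(shard)
            return
        while self.cache and sum(self.cache.values()) + size_gb > self.cache_budget:
            self.cache.popitem(last=False)
        self.cache[shard] = size_gb

node = NodeStorage(budget_gb=1000)          # 300 GB pinned, 700 GB cache
print(node.pin("rare-shard-17", 200))       # True: fits the pinned slice
node.access("popular-shard-1", 500)
node.access("popular-shard-2", 300)         # 500+300 > 700, shard-1 evicted
```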
There are really good, incentivized versions of decentralized storage networks. Unfortunately, discussions about them are stigmatized under the “crypto” umbrella, so the mere mention typically gets you buried.
I think I need to read up.
If you have an open mind, check them out!