All the posts about Reddit blocking everyone except Google and Brave got me thinking: what if SearXNG were federated? I.e., when data is retrieved via a provider's API, that data is then federated to all other instances.

It would spread the API load out amongst instances, removing the API bottlenecks that come from search providers.

It would allow for more anonymous search, since users could cycle between instances and get the same results.

Geographic bias would be a thing of the past, since results wouldn't depend on which region an instance happens to query from.

Other than ActivityPub overhead and storage, which could be reduced by federating text-only content, I fail to see any downside.
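
To make the idea concrete, here's a minimal sketch of what the "push results to peers" half could look like. This isn't anything SearXNG actually does; the peer list, endpoint path, and payload shape are all invented for illustration:

```python
# Hypothetical sketch only: SearXNG has no federation feature today, and the
# peer list, endpoint, and payload shape below are made up for illustration.
import json
import urllib.request

PEERS = ["https://searx.example.org", "https://searx.example.net"]  # hypothetical peers

def federate_results(query: str, results: list[dict]) -> None:
    """Push text-only results for a query to every known peer instance."""
    payload = json.dumps({
        "query": query,
        # Strip everything but text fields to keep federation traffic small.
        "results": [
            {"url": r["url"], "title": r["title"], "content": r.get("content", "")}
            for r in results
        ],
    }).encode()
    for peer in PEERS:
        req = urllib.request.Request(
            f"{peer}/federation/results",  # made-up endpoint
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            pass  # a peer being down shouldn't break the local search
```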

Thoughts?

    • kbal@fedia.io · 3 months ago

      Ah, I wondered if something like that had been tried before. Looks like it is maybe still running: https://yacy.net/

      The demo isn’t giving me useful search results.

      • Buelldozer@lemmy.today · 3 months ago

        There have only been about 700 YaCy peers online in the last 30 days, which is pretty low for a “crowd sourced” search engine, especially since many of those are, I think, temporary peers that come and go. It looks like there are only maybe 200 “master” servers, which wouldn’t be nearly enough to keep up with the Internet these days.

        The good news is that if there are websites/URLs you care about, you can point your own YaCy instance at them and schedule recurring crawls to keep up with content changes.

        I remember reading about YaCy some years ago, and now that I’ve bumped into it again it’s sparked my interest. I may stand up a Docker instance and play with it for a while. If nothing else it could make a very useful “arrrrr” search engine.
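
        For anyone else tempted to try it, here’s a rough sketch of scripting a crawl against a local instance. The servlet and parameter names are from memory of YaCy’s crawl-start API and may not match the current release, so treat them as assumptions and check the admin UI / API docs first:

        ```python
        # Rough sketch, assuming a local YaCy instance on its default port (8090).
        # The Crawler_p.html servlet and its parameter names are recalled from memory
        # and may differ in current releases -- verify against your own instance.
        # Note: the crawler servlet normally requires admin authentication.
        import urllib.parse
        import urllib.request

        YACY = "http://localhost:8090"  # default YaCy web interface
        SITES = ["https://example.org/", "https://example.net/docs/"]  # sites you care about

        def start_crawl(url: str, depth: int = 2) -> None:
            """Ask the local YaCy peer to crawl `url` down to `depth` link levels."""
            params = urllib.parse.urlencode({
                "crawlingstart": "",   # presumed "start crawl" switch
                "crawlingMode": "url",
                "crawlingURL": url,
                "crawlingDepth": depth,
            })
            urllib.request.urlopen(f"{YACY}/Crawler_p.html?{params}", timeout=10)

        for site in SITES:
            start_crawl(site)
        ```

        Wrapping that in a cron job would cover the scheduling part, though I believe YaCy also has its own recrawl scheduler in the admin UI.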

      • Wxnzxn@lemmy.ml · 3 months ago

        I ran an instance for a while out of curiosity a few years back. Building the database seemed to work fine and felt like a good idea; I had a lot of fun watching the connections with other servers and my crawler filling in holes of unknown space. But I think the search algorithm itself was (and most likely still is) not sophisticated enough: it just didn’t give relevant results often enough, and it was extremely vulnerable to very simple SEO tactics that push trash to the top.