This might be a real stupid question but why is discover updating to a lower version? Is there any place I can read up why this is the case?
P.S.: Yes, I could absolutely have googled this, but lemmy is about more than just shitposts and memes imo. Asking some rather noobish questions will make us appear on google, btw.
Sounds like https://bugs.kde.org/show_bug.cgi?id=465864
Aaaand solved! Thanks for this precise reply! Pretty sure it's exactly that.
It is called “downgrading”, and it is not uncommon for some packages to be downgraded when updating/upgrading a system, for several reasons.
No. This is just a thing Discover does. Unless nearly every update I’ve done for every Flatpak I have installed on my Steam Deck has actually been a downgrade.
As someone else pointed out, it’s a bug, mentioned on the KDE bug tracker.
No, it’s just a (long fixed!) bug. In the case of the Deck, the next version of SteamOS comes with the fix soon… in the case of Debian, they don’t ship our bugfix releases, so it’ll be stuck with this until Debian 13 :/
Under what circumstances? I don’t think I’ve ever seen a package downgraded during an upgrade.
I somehow missed that this was about a Flatpak via Discover. Granted, while this may not be usual in distros with a traditional update model, package downgrades can happen in rolling distros, in distros with overlapping minor versions, or when 3rd-party repos provide packages that conflict with those of the distro.
I offer my system as an example:
```
The following product is going to be upgraded:
  openSUSE Tumbleweed  20240211-0 -> 20240313-0

The following 14 packages are going to be downgraded:
  ghc-binary ghc-containers ghc-deepseq ghc-directory ghc-exceptions ghc-mtl ghc-parsec
  ghc-pretty ghc-process ghc-stm ghc-template-haskell ghc-text ghc-time ghc-transformers
```
Pretty sure this is a bug in either Discover or flatpak. My guess is flatpak has the two versions it feeds Discover swapped, so the versions appear swapped, but in reality it will be fine.
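For what it’s worth, bogus “downgrade” warnings in updaters are sometimes just a version-comparison bug rather than a real downgrade. I don’t know what the actual cause in the KDE bug was; this is only a generic illustration (the function names are mine, not from Discover or flatpak) of how comparing version strings as plain text goes wrong:

```python
# Hypothetical illustration: a naive updater that compares version
# strings lexicographically will think "1.10.0" is *older* than
# "1.9.0", and report a perfectly normal upgrade as a downgrade.

def naive_is_downgrade(installed: str, candidate: str) -> bool:
    # Buggy: plain string comparison, character by character.
    return candidate < installed

def parse_version(v: str) -> tuple:
    # Good enough for dotted numeric versions: compare numerically.
    return tuple(int(part) for part in v.split("."))

def is_downgrade(installed: str, candidate: str) -> bool:
    return parse_version(candidate) < parse_version(installed)

print(naive_is_downgrade("1.9.0", "1.10.0"))  # True  -> falsely flagged as a downgrade
print(is_downgrade("1.9.0", "1.10.0"))        # False -> it's actually an upgrade
```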
Okay! Thanks for the reply. :)
edit: turns out I’m wrong. Google does index lemmy pages, it’s just not easy to find them through Google.
I don’t think it’s possible to find any lemmy posts through Google. Honestly a shame. R*ddit is full of helpful information, as is Lemmy, but the latter is not indexed.
I just tried searching “element lemmy” and got the article Lemmy: Fans call for periodic table element to be named after Motörhead frontman
Whereas “element reddit” gives /r/elementchat/
Lemmy is indexed on Google, as using the `site:` operator will show, e.g. “rust site:programming.dev” gives sensible results, but there’s no way to search across Lemmy as a whole. Well, not with Google anyway (Kagi has a Fediverse lens that works fairly well).

I couldn’t find it in my comment history, but I saw a thread months ago where someone was lamenting migrating from reddit, where they used to just google “episode ### discussion” for the show they were watching and would find a corresponding reddit thread, but the same thing wasn’t working for them with Lemmy. Someone else pointed out that it might be because Google personalises some of the search results now, so I tried their example query and the top link was to the post I was commenting on. It had already indexed the most relevant result about an hour after the original post.
Nah, we already are on google, but it depends on a lot of factors. A lemmy frontend is just another webpage, so google will crawl it if you allow it. So if an instance disallows it specifically, or has an incorrect/unfamiliar sitemap, it might not work.
hmm, maybe that’s the case for lemmy.world. I wasn’t able to find anything from my comment or post history, even though I copied it exactly into the search.
edit: just had the idea to put quotation marks around the thing I copied to search for that exact string, and now I found it. Yeah, it really just seems like lemmy pages are not very popular results so google pushes them all the way to the bottom.
The major difference between lemmy and reddit is that there are many instances for search engines to crawl, compared to a single reddit.com. They likely treat each instance separately, which leads to a lot of duplicate content, and most of lemmy isn’t search engine optimized.
Sadly I don’t see a better way to do it than for search engines to be optimized for this kind of federated platform. It’s not obvious from the outside which instance is the preferred one to show to a user.
I’ve had some luck finding content on lemmy by forcing a specific instance using `site:lemmy.instance.domain`, but it depends on the search engine whether it’s respected.

yeah, good point about multiple domains
Exactly. The reason is that google favors pages that play the algorithm over actual content. That, and popularity.