My biggest gripe right now is how often everything goes down. About 6 times out of 10 when I go to load anything on lemmy it is down, confirmed on https://lemmy-world.statuspage.io/
lemmy.world got too big, IMO, so I ended up switching to a different instance. If users distributed themselves evenly across many instances, instead of piling onto the same 1-3, I think the unavailability, lag, and whatnot would happen much less often.
I had the most issues on my lemmy.world account. I’ve had fewer issues on my lemmy.ca account; maybe try making an account on a different instance? World has been getting attacked a lot lately.
I’m actually trying to solve this issue in my own Lemmy app. It automatically switches instances when the requested one is down. It only works in the feed right now and, of course, accounts are still instance-bound, but I will fix that soon.
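For anyone curious what that kind of failover can look like, here is a minimal sketch of the idea (not the actual app’s code; the instance list, timeout, and feed endpoint below are placeholder assumptions): try a list of instances in order and return the first healthy response.

```typescript
// Minimal sketch of client-side instance failover (not the app's actual code).
// The instance list, timeout, and feed endpoint are placeholder assumptions.
const INSTANCES = ["lemmy.world", "lemmy.ca", "lemmy.ml"];
const TIMEOUT_MS = 5000;

// Try each instance in order and return the first successful feed response.
async function fetchFeed(path = "/api/v3/post/list?sort=Hot"): Promise<unknown> {
  let lastError: unknown;
  for (const host of INSTANCES) {
    try {
      const res = await fetch(`https://${host}${path}`, {
        // Give up on an instance that is down or very slow.
        signal: AbortSignal.timeout(TIMEOUT_MS),
      });
      if (!res.ok) throw new Error(`${host} responded ${res.status}`);
      return await res.json();
    } catch (err) {
      lastError = err; // remember the failure, fall through to the next instance
    }
  }
  throw new Error(`All instances failed, last error: ${String(lastError)}`);
}
```

As noted above, this only helps for browsing public content; anything tied to an account (voting, posting, subscriptions) still has to go through that account’s home instance.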
It’s not just you; the stats bear it out. LW has the worst uptime of any major instance: 92% so far this month according to their own monitoring (over a 30-day month, that’s roughly two and a half days of downtime), and worse on bad days. They see a lot of load and have not scaled up enough to handle it, nor have they restricted user signups at all to spread new users across the lemmyverse. It seems like a very growth-oriented mindset, which I don’t love to see spreading to the fediverse.
I personally wouldn’t use one of the larger instances, as they’re usually going to have the most issues (more complex deployments, longer maintenance downtimes, etc.).
I am most comfortable with lemmy.world because of their old-Reddit-style formatting. It made the transition easy. Call it lazy or low-brain, but maybe if other instances also offered the old format, they might get more users shifting over too?
Still trying to learn here, but so far I find myself jumping back to old.lemmy.world simply because of familiarity.
considering hopping onto a different instance because of this. Lemmy.world in particular always seems to be experiencing issues.
After I went to my local instance of Lemmy, sdf.org, I’ve had literally zero issues.
This is why we didn’t commit to migrating /r/android over until lemdro.id was set up for us ([email protected]).