This issue is already quite widely publicized and quite frankly “we’re handling it and removing this” is a much more harmful response than I would hope to see. Especially as the admins of that instance have not yet upgraded the frontend version to apply the urgent fix.
It’s not like this was a confidential bug fix, this is a zero day being actively exploited. Please be more cooperative and open regarding these issues in your own administration if you’re hosting an instance. 🙏
Exactly
Yes, the vulnerability is out there. Maybe the root cause actually introduced a LOT of vulnerabilities. The fix is being pushed at a frantic pace. To expect the devs to take time out of the mad rush of notifying those impacted in order to do a proper writeup is just insanity.
The way I see it? This (hopefully) got fixed pretty much instantly and there is active work to get the fix applied by the people who need to apply it. That is what should be done. Give it a week or two to see how they handle the public disclosure side of things.
I strongly disagree with some of your points.
It’s not insanity. It’s called incident management and it’s something the development team needs to build a proper procedure around, given the expanded scope of this project. I agree that the devs working on identifying, mitigating, and fixing the vulnerability should not be expected to also handle the communication. They need to designate someone for that role.
A 0-day was actively being exploited in the wild. There was confusion, misinformation, and a general lack of information.
You need to:
And how do you know this since it’s not been communicated? Most of the information I (as a person running a lemmy server) have been able to glean is from random threads spread across random communities.
A couple of weeks for a postmortem? Sure. A couple of weeks to officially communicate that an active, in-the-wild 0-day exists and how to mitigate/patch it? Absolutely not. I still don’t see a security alert on the GitHub telling me I should be updating to <insert version> to patch an active exploit, and it’s been how many hours now?
And if this were a large company? Yes.
This is an open source project with fewer than 200 contributors, and the VAST majority of the work comes from two of them.
Part of this is very much the learning curve, and it’s why you should think twice about using open source passion projects in “production”. This is the kind of stress testing that comes from lemmy/mastodon/The Fediverse actually having users.
But also?
So you are saying that you were told there is an issue. And you can do exactly what I did while writing this message: Check the github page.
Do I think the lemmy devs are doing everything by the book? Hell no.
Do I think, given the resources available and the timeframe of the attack, that they are doing it correctly? Yes. They identified the vulnerability, (hopefully) implemented a mitigation, and pushed that all within 24 hours. Popular docker containers have already been updated, users are spreading The Good Word, and so forth. And I would much rather they use their limited resources to focus on actual fixes than on proper writeups, just so long as the fixes are getting propagated.
Optimally? I want those proper reports filed within the next day or two. Given that this is likely NOT a full time job and all the chaos of the past 24 hours or so? I’ll give them a week.
And if your complaint is that they aren’t behaving the same way large corporations and massive projects (that often became corporations) do? Maybe Lemmy is not for you. And I don’t mean that in an insulting manner. If I were tasked with finding a message board solution or whatever for my company, there is absolutely zero chance I would recommend Lemmy. It is not production quality.
But for shitposting and actively not providing PII or anything useful? Let’s see how things get hardened from here on out.
Is the project small? Yes.
Did it explode in popularity leaving the devs overwhelmed? Certainly.
Do I expect them to strictly follow established ITIL incident management? No.
Do I expect them to communicate in a consistent way when an incident happens? Yes.
I agree the primary developers should be left to fixing the problems, but there are enough active members of that project that someone could have handled communication in a more concise and official way. I don’t consider random posts in asklemmy or selfhosting by random users just guessing to be a substitute for that.
If the project is going to persist and grow it needs to get better at that. Pointing it out isn’t shitposting.
Again, how many “active members” are likely to understand the issue well enough to make that report? Or are they going to need to use up the time of those core developers to understand it well enough to write it up?
I’ve been through similar situations a decent number of times on the corporate side. Something has gone very wrong. People want answers. A good manager assesses the situation and responds: “Look, we know what is going on and all hands are on deck to fix it. Making a powerpoint is not fixing it. We’ll do a proper write up next week, but we can either have So and So fix it or report on it.”
Obviously that stops being an option once you begin impacting investors. But that is when it becomes a trade-off of “Okay, Jen barely understands what Roy and Moss are doing. But she can say something that hopefully won’t be too wrong, then apologize and give a correction tomorrow.”
But people very much don’t seem to understand how small this project is. Spend time with passion projects and “open source” projects that AREN’T on the scale of a small-medium sized company and you understand that standards are going to be lower because people have day jobs and so forth.
I mean, there is a reason Reddit hired so many people over the years. And if you are going to jump down the throats of people who prioritize fixing an issue, and who count on “active members” to notify users rather than writing up reports that many of those users won’t even look at? Then you want a production quality piece of software. That means Reddit or Threads or Bluesky.
Why are you getting so defensive? The only throat getting jumped down is mine, by you. I’m expressing my opinion of gaps in the communication of the project and how I think it can be improved. In a conversation thread on selfhosted no less. I’m not out in [email protected] bitching them out, submitting issues, or otherwise harassing the devs. Pointing out a gap and suggesting solutions is neither shitposting nor jumping down someone’s throat.
I think you’re the one confusing this with a large corporate project. Not me. There are no managers here, no powerpoints, and at no point have I asked for a detailed write-up. I asked for someone on the project who isn’t actively working on identifying and coding the fix to be the “point man”. Post a simple sticky at the top of [email protected], cross-posted to [email protected], that says there’s a problem, they’re aware of it, and a fix is being worked on. Once mitigations are identified or fixes are published, update the post with that. Ideally, a GitHub security advisory would also be published with the same info so people not watching Lemmy at the moment can be notified via that channel.
I get it. I have pretty low standards. I’m just saying that a consistent communication strategy going forward for this project would be beneficial.
I’m with you. I figured out through various comments that I should update my UI to `0.18.2-rc.1`, and also run an update statement on my database to fix the modlog. Only after that did I find the Matrix channel. Eventually I also found [email protected], which is great, but the only thread there on this issue doesn’t even mention updating the UI. I think if we can get to the point where critical information that admins need to know is consistently posted in one place, it’ll make everybody’s life easier. I don’t think that’s too much to ask.

Your typical dev is not a technical writer, and shouldn’t be doing the proper write-up.
If you feel (and it seems you do) that this skill is missing from the Lemmy team, perhaps you should volunteer some time.
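For anyone doing the same upgrade: here is a minimal sketch of the UI bump, assuming a Docker Compose deployment built around the upstream `dessalines/lemmy-ui` image. The service name, environment values, and compose layout are assumptions about a typical setup, and the modlog database statement is deliberately not reproduced here, since the exact SQL depends on the advisory.

```yaml
# docker-compose.yml (excerpt) -- pin the frontend to the fixed release.
# Service name, image tag, and environment values are assumptions; adjust
# them to match your own deployment before applying.
services:
  lemmy-ui:
    image: dessalines/lemmy-ui:0.18.2-rc.1
    restart: always
    environment:
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=example.com
    depends_on:
      - lemmy
```

After editing, something like `docker compose pull lemmy-ui && docker compose up -d lemmy-ui` should fetch the new image and restart only the frontend container.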