This same story was posted yesterday, so I’ll repeat what I wrote back then:
Most of this report is patently ridiculous. HRW asked followers of its social media accounts to send in perceived instances of censorship they’d seen on social media related to the Palestinian conflict, received about a thousand examples from a self-selecting population, and then published a big exposé about it.
There’s no comparative analysis (either quantitative or qualitative) of whether similar censorship happened for other topics, other viewpoints, or at other times in the past. They allege, for example, that pro-Palestinian posters didn’t have an option to request a review of the takedown. The obvious next step is to contextualize such a claim: is that standard policy? Does it happen with other topics? Is it a bug? How often does it happen? But they don’t seem to want to look into it further; they just allude to some nebulous sense of wrongdoing and move on to the next assertion. Rinse and repeat.
The one part of the report actually grounded in reality (and a discussion worth having) is how to handle content that runs afoul of standards against positive or neutral portrayal of terrorist organizations, especially those with political wings like Hamas. It’s a genuinely interesting challenge to decide where to draw the line on what to allow, but blindly presenting a thousand taken-down posts as if they were concrete evidence of a global conspiracy does nothing productive for that discussion.
I’ve blocked a lot of people on social media because of the things they were sharing.
Many of them claimed they were being censored by Meta or whatever… but I think it was just people like me muting their stories, reporting their posts, or blocking them.
I kept most of them around (and still follow some people sharing pro-Palestine content), but I blocked the ones who were sharing images or videos of people dying, people with graphic injuries, or other disturbing imagery.
Not everyone wants to see that, and social media companies have the right to enforce their rules, which often forbid sharing images or videos of graphic violence.