Reviewing Facebook’s Content Guidelines
With a lot of time on my hands during this pandemic, I decided to give Facebook a chance to see if it had improved its content moderation policies.
After around 10 minutes, I was notified that my comment had been hidden. From the point of view of a minority, this comment is not hate speech. However, Facebook’s stance on this specific kind of comment, as revealed in leaked audio from its CEO (reference: https://www.theverge.com/interface/2019/10/3/20895119/facebook-men-are-trash-hate-speech-zuckerberg-leaked-audio), is that these and similar comments should be judged from a “neutral” perspective. In other words, they are disregarding the power dynamics and oppression exercised by some groups over others, all in the name of “fair” conversations.
To test the fairness of these guidelines, I started an experiment with this post:
The first reaction I got in private conversations with my peers, most of them with a technical background in computer science, was one of hesitation: “What if they ban my account?” I told them that they wouldn’t be banned, that at most they wouldn’t be able to post or comment but that they’d still be able to browse Facebook. They had nothing to lose. Even so, they chose to avoid participating in the experiment.
In total, 14 acquaintances humored the experiment. For the sake of the experiment, they all made use of a series of slurs targeting poor, BIPOC or LGBT minorities. How many of them triggered the same action as my previous comment? Only 4 of them. Those 4 comments insulted women, Muslims, trans people and black people, and all of them followed the basic structure <minority> is/are <adjective>.
The 10 comments that didn’t trigger action from Facebook were more subtle and less straightforward than the others. They also targeted BIPOC and the LGBT community. Out of respect for the participants’ anonymity, I won’t post them here. Some insulted indigenous people native to their locality. Facebook failed to correctly label 10 out of 14 comments as hate speech.
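To see why the pattern above would catch the blunt comments but miss the subtle ones, consider a minimal sketch of template-based matching. This is my own hypothetical illustration, not Facebook’s actual system, and it uses neutral placeholder lexicons instead of real slurs:

```python
import re

# Placeholder lexicons standing in for real group terms and insults;
# a production system would use curated, much larger lists.
GROUP_TERMS = {"groupies"}
INSULT_ADJECTIVES = {"terrible", "worthless"}

# Matches the basic "<group> is/are <adjective>" structure described above.
PATTERN = re.compile(r"\b(\w+)\s+(?:is|are)\s+(\w+)\b", re.IGNORECASE)

def flags_comment(comment: str) -> bool:
    """Return True if the comment matches the naive template."""
    for group, adjective in PATTERN.findall(comment):
        if group.lower() in GROUP_TERMS and adjective.lower() in INSULT_ADJECTIVES:
            return True
    return False

# The direct "<group> are <adjective>" form is caught...
print(flags_comment("Groupies are terrible"))  # True
# ...but a rephrasing with the same meaning slips through the template.
print(flags_comment("You know how groupies always end up being terrible people"))  # False
```

Any rule this rigid is trivially bypassed by indirection, sarcasm or local idiom, which is consistent with what the 10 uncaught comments suggest.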
A more organic example comes from a report I made to a page that had the following “meme” that could be considered hate speech to the LGBT community:
Even if the meme could’ve been made to mock conservative groups, the reality is that out of context this image amplifies hate speech against the LGBTTQ+ community. I reported it twice, and both times, after review by a human “expert”, it was deemed not to violate the guidelines Facebook “always” applies to all content and comments on the platform. Maybe I was wrong, so I headed to Twitter to make a poll asking whether people considered what they saw to be hate speech:
It could be argued that the social circle formed by my Twitter followers is what people would call “an echo chamber”, but something we need to understand is that it’s incredibly difficult for people outside of our communities to really empathize and educate themselves to support our causes as true allies.
Whenever I bring up these issues to my friends and how we should act on them, a friend always says: “They’re gonna ignore it anyways. There’s no value in complaining or whistleblowing.” It’s partly true: in most cases, those in positions of power will defend their interests first and the complaints of the users second. This has been a trend in Silicon Valley. However, the only reason these nascent “Community Guidelines” exist on Facebook and other social media is the long battle, inside and outside these companies, to make these platforms into safe spaces. What Facebook needs to do is:
- Drop the “neutral” or “all comments have the same weight” motto. Gender, sexual orientation and socioeconomic disparity need to be considered in these disputes. Take responsibility.
- Train their internal experts better on the biases and caveats of these cases, as there seems to be an unchanged number of cases slipping through the cracks.
- Develop more complex rule-based and algorithmic technologies, validated by experts and minorities, to better find cases of hate speech, even the ones that could be given the “benefit of the doubt” or that are subtle enough to bypass simple rules.
There are two sides to this problem: those who feel people are “snowflakes”, and those who feel oppressed by these guidelines. Those who aren’t satisfied with the guidelines may feel compelled to leave the platforms that harm them altogether, and I believe that is a respectable decision. However, I fear that by doing this we might be ignoring the fact that social media permeates life outside social media.
Small local businesses rely on Facebook as a funnel for clients that might purchase their products. People sell and buy things in Marketplace. My own college streamed our virtual graduation ceremony on Facebook and other social media. At the dawn of social distancing and the increased use of digital platforms like TikTok, we can’t ignore the fact that the conversations that occur on the platform may very well shape our reality. We must foster kindness and empathy, but also call out whenever someone (or an organization/group) violates the safe spaces we want to create.