In an attempt to counter the disproportionate impact such speech has on minority communities, Facebook has told reporters it will now treat anti-Black hate speech on its platform as a higher priority than hate speech aimed at white people, men, and Americans.
As a consequence, Facebook’s automated moderation systems for detecting and acting on hate speech will now scan the platform for such racist material more proactively. Meanwhile, less harmful forms of hate speech, such as posts targeting white people or men in general, will be deprioritized and left alone until a user reports them. Facebook has internally dubbed this approach “WoW,” or “worst of the worst,” to describe the kinds of content it now wants to focus its resources on.
The initiative is part of a new Facebook hate speech project, first reported by The Washington Post earlier today, that seeks to address years of inaction over racism on the platform. Activists, civil rights advocates, and platform analysts have long accused Facebook of fostering hate speech and of running a moderation system that ignores the real-world consequences of prejudice and the disproportionate harm racism inflicts on minorities.
Now, instead of treating the issue as one that affects everyone in equal measure, the company says it is taking steps to moderate its platform in a way that protects the most vulnerable targets of hate speech and violence. The moderation changes are aimed not only at rooting out anti-Black hate speech, but also at hate speech directed toward Muslims, Jews, and members of the LGBTQ+ community.
“We know that hate speech targeted at underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” says Sally Aldous, a spokesperson for Facebook, in a statement released earlier today.
“Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial,” Aldous adds. “Thanks to significant investments in our technology, we proactively detect 95% of the content we remove, and we continue to improve how we enforce our rules as hate speech evolves over time.”