Facebook, Twitter and Google Finally Got Some Good Feedback on Their Anti-Hate Speech Efforts
The EU’s campaign against hate speech is a long-running effort that has included threats of new regulation, as has already happened in Germany, should the big social media firms fail to do more to tackle the problem. As things stand, the companies have signed up to a voluntary code of conduct.
On Monday the European Commission—the bloc’s executive body—said its clean-up drive is bearing fruit. According to its statistics, 89% of suspect content is being evaluated within a day of someone flagging it up, and 72% of the content that is found to be illegal is removed.
For comparison’s sake, those figures were 40% and 28% respectively, back in 2016. The stats come from civil society organizations that monitor take-downs across various EU countries.
“Illegal hate speech online is not only a crime, it represents a threat to free speech and democratic engagement,” said Justice Commissioner Věra Jourová in a statement. “In May 2016, I initiated the Code of conduct on online hate speech, because we urgently needed to do something about this phenomenon. Today, after two and a half years, we can say that we found the right approach and established a standard throughout Europe on how to tackle this serious issue, while fully protecting freedom of speech.”
The European Commission claimed “there is no sign of over-removal,” noting that content using racial slurs and degrading images in reference to certain groups is removed in only 58.5% of cases—in other words, the companies appear to be evaluating flagged content rather than deleting it automatically.
Facebook is apparently being especially cooperative, assessing 92.6% of hate-speech notifications within 24 hours. This is notable, as CEO Mark Zuckerberg recently pledged to make Facebook—whose AI isn’t yet fully up to the task—better at properly adjudicating what qualifies as hate speech and what doesn’t. The company’s nature as a conduit for hate speech was highlighted last year by its reported role in the Myanmar genocide.
The Commission’s one gripe on Monday was the lack of transparency and feedback to users when content gets flagged up and removed—the levels of feedback actually fell over the last year, from 68.9% to 65.4%. Again, Facebook did well here, while YouTube failed quite badly, offering feedback less than a quarter of the time.