Tech giants’ slowing progress on hate speech removals underscores need for law, says EC


Tech giants have slowed down on removing illegal hate speech from their platforms under a voluntary arrangement in the European Union, according to the latest assessment by the Commission.


The sixth evaluation report on the EU’s code of conduct on countering illegal hate speech online found what the bloc’s executive calls a “mixed picture”: platforms reviewed 81% of notifications within 24 hours and removed an average of 62.5% of flagged content.

These results are below the averages recorded in both 2019 and 2020, the Commission notes.


The self-regulatory initiative began back in 2016, when Facebook, Microsoft, Twitter and YouTube agreed to remove hate speech that violates their Community Guidelines in less than 24 hours.

Since then Instagram, Google+, Snapchat, Dailymotion, TikTok and LinkedIn have also signed up to the code.


While the headline pledges were bold, the platforms’ actual performance has often fallen short of what was promised. And while there had been a trend of improvement, progress has now stalled, or even reversed, according to the Commission, with Facebook and YouTube performing worse than in earlier monitoring rounds.

Screengrab of a chart showing the varying rates of removal per company from a European Commission fact sheet on hate speech removal codes (image credit: The European Commission)

A key driver for the EU in establishing the code five years ago was concern about the spread of terrorist content online, as lawmakers sought ways to pressure platforms into expediting the removal of hate material.

But the bloc now has a regulation for it: in April the EU adopted a law on removing terrorist material that sets one hour as the default time for removal.

EU lawmakers have also proposed a comprehensive update to digital regulations that would expand requirements on platforms and digital services in areas such as the handling of illegal content and/or goods.

The Digital Services Act (DSA) hasn’t passed yet, so the self-regulatory code is still in place — for now.

The Commission said today that it would like to discuss the development of the code with the signatories, in light of the “obligations and collaborative framework” forthcoming in the proposed Digital Services Act. So whether the code is retired entirely, or extended to complement the coming legal framework, remains to be seen.

In a parallel development, the European Union also operates a voluntary code pressing the tech industry to combat the spread of harmful disinformation. There the Commission has said it will strengthen the measures and add a compliance layer, linking the so-far voluntary commitments, at least for the largest platforms, to the legally binding DSA.

The stalled improvement in platforms’ removal of hate speech under the voluntary code suggests the approach may have run its course. Or that the platforms are taking their foot off the gas while they wait to see what specific legal requirements they will face.

The Commission noted that some companies’ results had “markedly worsened” over the monitoring period, while others’ “improved”. But such uneven results are arguably the main limitation of a non-binding code.

EU lawmakers also pointed out that “inadequate feedback” to users (via notifications) remains a “main weakness” of the code, as in previous monitoring rounds. So, again, legal force seems necessary, and the DSA proposes standardized rules for elements such as reporting procedures.

Commenting on the latest report on the hate speech code in a statement, the Commission’s VP for values and transparency, Věra Jourová, looking ahead to the incoming regulation, said: “Our unique code has brought good results, but the platforms cannot let their guard down, and the shortcomings need to be addressed. A gentlemen’s agreement will not suffice here. The Digital Services Act will provide strong regulatory tools to fight against illegal hate speech online.”

“The results show that IT companies cannot be complacent: just because the results were good in previous years, they cannot take their work less seriously,” Justice Commissioner Didier Reynders said in a supporting statement. “They have to address any downward trend without delay. It is a matter of protecting a democratic space and the fundamental rights of all users. I am confident that the rapid adoption of the Digital Services Act will also help address some of the existing gaps, such as insufficient transparency and feedback for users.”

Other findings of the monitoring exercise on the removal of illegal hate speech include:

  • Removal rates varied depending on the severity of the hateful content: 69% of content calling for murder or violence against specific groups was removed, while 55% of content using abusive words or images targeting certain groups was removed. In 2020, the respective results were 83.5% and 57.8%.
  • IT companies gave feedback on 60.3% of the notifications received, which is less than in the last monitoring exercise (67.1%).
  • In this monitoring exercise, sexual orientation was the most commonly reported ground of hate speech (18.2%), followed by xenophobia (18%) and anti-gypsyism (12.5%).

The Commission also said that for the first time the signatories reported “detailed information” about measures taken to combat hate speech outside the monitoring exercise, including actions to automatically detect and remove content.
