Social networks struggle to shut down racist abuse after England's Euro 2020 final loss


England’s Bukayo Saka was consoled by head coach Gareth Southgate.

Sunday night was a moment of national mourning for England’s football fans, as the country came within a penalty shootout of winning its first major international tournament in more than half a century before losing to Italy. It also marked another ugly outburst of racism on social media, with some supporters directing all their anger and frustration at the three players who missed their penalty kicks – all of whom are Black.

While the national team and manager Gareth Southgate made clear that the loss was something the entire team would shoulder together, some disgruntled supporters took to Twitter and Instagram to specifically target Marcus Rashford, Jadon Sancho and Bukayo Saka.

The vitriol posed a direct challenge to the social networks: an event-specific spike in hate speech that required them to redouble their moderation efforts to stem the harm. It is only the latest test for platforms that must stay vigilant during highly charged political or cultural events. While these companies have a routine process that deploys machine-automated tools and human moderators to remove content, this latest incident is yet another source of frustration for those who believe the networks’ responses are not sufficient.

To bridge the gap, the companies rely on users to report content that violates their guidelines. After Sunday’s match, many users shared tips and guides on how best to report content to both the platforms and the police. Those same users were then dismayed to find that the companies’ moderation technology saw nothing wrong with the racist abuse they flagged.

This left many users feeling that, while Facebook, for example, is a billion-dollar company, it was unprepared to deal with an easily anticipated influx of racist content, leaving unpaid Good Samaritan users to report it instead.

There is no gray area when it comes to racism

For social media companies, moderation can fall into a gray area between protecting freedom of speech and protecting users from hate speech. In those cases, they must determine whether user content violates their own platform policies. But this was not one of those gray areas.

Racist abuse is classified as a hate crime in the UK, and London’s Met Police said in a statement that it would investigate the offensive posts made online after the match. In a follow-up email, a spokesperson for the Met said incidents of abuse were being tracked by the Home Office and then passed on to local police forces to deal with.

Twitter “rapidly” removed more than 1,000 tweets through a combination of machine-based automation and human review, a spokesperson said in a statement. It also permanently suspended “a number” of accounts, “the vast majority” of which it detected proactively. “The hateful racist abuse directed at England players last night has no place on Twitter,” the spokesperson said.

Meanwhile, frustration mounted among Instagram users, who were identifying and reporting abusive content, including the strings of monkey emoji (a common racist trope) being posted on the accounts of Black players.

Under Instagram’s hate speech policies, using emoji to attack people based on protected characteristics, including race, is against the rules. Human moderators working for the company take context into account when reviewing emoji use.

But in several cases reported by Instagram users, the platform failed to remove the monkey emoji, suggesting the reports were never seen by human moderators. Instead, they were handled by the company’s automated software, which told users that “our technology has detected that this comment may not be against our Community Guidelines.”

An Instagram spokesperson said in a statement that “no one should ever experience racist abuse anywhere, and we don’t want it on Instagram.”

“We immediately removed comments and accounts directing abuse at England’s footballers last night, and we will continue to take action against those who break our rules,” the spokesperson added. “In addition to our work to remove this content, we encourage all players to turn on Hidden Words, a tool that means no one has to see abuse in their comments or DMs. No one thing will fix the challenge overnight, but we are committed to keeping our community safe from abuse.”

Football’s racism problem meets technology’s moderation problem

Social media companies shouldn’t have been surprised by the response.

Football professionals have been feeling the strain of the racial abuse they face online – and not just after this one match in England. In April, England’s Football Association joined a social media boycott “in response to the ongoing and continuing discriminatory abuse received online by players and many others associated with football.”

English football’s racism problem is nothing new. In 1993, it forced the Football Association, the Premier League and the Professional Footballers’ Association to launch Kick It Out, a program to fight racism that became a full-fledged organization in 1997. Led by Southgate, the current iteration of England’s squad has embraced anti-racism more vocally than ever, taking a knee in support of the Black Lives Matter movement before matches. Still, racism remains in play – online and off.

On Monday, the Football Association strongly condemned the online abuse that followed Sunday’s match, saying it was “horrified” at the racism aimed at players. “We could not be clearer that anyone behind this kind of disgusting behavior is not welcome to follow the team,” it said. “We will do everything we can to support the players affected, while urging the harshest penalties for anyone responsible.”

Social media users, politicians and rights organizations are calling for internet-specific tools to combat online abuse – and for perpetrators of racist abuse to be prosecuted just as they would be offline. The “No Yellow Card” campaign from the Center for Countering Digital Hate is calling on platforms to ban users who post racist abuse for life.

In the UK, the government is trying to introduce regulation, in the form of the Online Safety Bill, that would force tech companies to take tough action against harmful content, including racist abuse. But it has been criticized for moving too slowly to get the law enacted.

Tony Burnett, CEO of Kick It Out (which both Facebook and Twitter publicly support), said in a statement Monday that both social media companies and the government need to take steps to stop racist abuse online. His words were echoed by Julian Knight, Member of Parliament and chair of the Digital, Culture, Media and Sport Committee.

Knight said in a statement that the government needs to legislate to hold the tech giants to account, arguing that after too much foot-dragging, “all those who have suffered at the hands of racists, not just England players, now deserve better protection.”

As pressure mounts on them to act, the social networks are stepping up their own moderation efforts and building new tools – with varying success. The companies track and measure their own progress; Facebook has even appointed an independent oversight board to assess its performance.

But critics of the social networks also point out that the way their business models are set up gives them little incentive to discourage racism. Any and all engagement increases ad revenue, they argue, even if that engagement comes from people liking and commenting on racist posts.

“Facebook made content moderation difficult by creating and ignoring its own vague rules, and by allowing harassment and hate to boost its stock price,” former Reddit CEO Ellen Pao said on Twitter on Monday. “Negative PR is forcing them to address the racism that has been on its platform…”
