Twitter could cut back on hate speech with suspension warnings, study says

Polite warnings in response to violent language may be more effective than an immediate ban.

Jordan K. of Alameda, Calif., holds a sign with an enlarged tweet while protesting with the activist group Change the Terms Reducing Hate Online outside Twitter headquarters in San Francisco on November 19, 2019.

Since Twitter’s launch in 2006, it has become a massive networking event, bar hangout, meme generator and casual conversation hub. But for every 280-character news update and witty comment, you’ll find a violent, hateful post.

Among the experts strategizing ways to disarm Twitter’s dark side is a team from New York University, which conducted an experiment to test whether suspension warnings are a functional deterrent to hate speech. It turns out they can be very effective.

After studying more than 4,300 Twitter users and 600,000 tweets, the researchers found that warning accounts of consequences like these “could significantly reduce their hate speech for a week.” The effect was even more pronounced when the warnings were politely phrased.

It is hoped that the team’s paper, published Monday in the journal Perspectives on Politics, will help address the racist, vicious and abusive content that pollutes social media.

“Debates over the effectiveness of social media account suspensions and bans on abusive users abound, but we know little about the impact of either warning a user of suspending an account or of outright suspensions in order to reduce hate speech,” Mustafa Mikdat Yildirim, an NYU doctoral candidate and the paper’s lead author, said in a statement.

“Even though the effect of the warnings is temporary, the research offers a potential path forward for platforms that want to reduce the use of hateful language by users.”

These warnings, Mikdat Yildirim observed, don’t even have to come from Twitter itself. The proportion of tweets containing hate speech per user dropped by 10% to 20%, even though the warnings came from an ordinary Twitter account with only 100 followers – an account the team created for experimental purposes.

“We suspect, too, that these are conservative estimates, in the sense that an increase in the number of followers of our account could have even greater effects …” the researchers write in the paper.

At this point you might be wondering: Why bother “warning” purveyors of hate speech when we can simply kick them off Twitter? Intuitively, an instant suspension should achieve the same, if not a stronger, effect.

Why not just ban hate speech accounts right away?

While online hate speech has existed for decades, it has grown in recent years, particularly hate directed at minorities. Physical violence stemming from such vitriol has also increased, including tragedies like mass shootings and lynchings.

But there is evidence to suggest that deleting the offending account may not be the best way to deal with the matter.

As an example, the paper points to former President Donald Trump’s infamous and inaccurate tweets following the 2020 United States presidential election. These included election misinformation, such as calling the results fraudulent, and praise for the rioters who stormed the Capitol on January 6, 2021. His account was ultimately suspended.

Twitter said the suspension was “due to the risk of further incitement of violence,” but the problem was that Trump simply sought out other ways to post online, such as tweeting through the official @POTUS account. “Even when restrictions do reduce unwanted deviant behavior within a platform, they may fail to reduce overall deviant behavior within the online arena,” the paper says.

Twitter suspended President Donald Trump’s account on January 8, 2021.

In contrast to a swift ban or suspension, Mikdat Yildirim and his fellow researchers say an account suspension warning could head off this issue in the long term, because users would try to protect their existing account rather than abandon it for another venue.

Experimental evidence that warnings work

There were a few phases to the team’s experiment. First, the researchers created six Twitter accounts with names like @basic_person_12, @hate_suspension and @warner_on_hate.

Then, on July 21, 2020, they downloaded 600,000 tweets that had been posted the week before, with the aim of identifying accounts likely to be suspended during the study. That period also saw a rise in hate speech against Asian and Black communities, the researchers say, amid backlash over COVID-19 and the Black Lives Matter movement.

From those tweets, the team picked out any that used hate speech, according to a dictionary outlined by researchers in 2017, and isolated the ones posted from accounts created after January 1, 2020. The reasoning was that newer accounts were more likely to be suspended – and more than 50 of those accounts were, in fact, suspended.

Anticipating those suspensions, the researchers had already collected a list of the followers of those accounts. After a bit more filtering, they ended up with 4,327 Twitter users to study. “We limited our participant population to people who had previously used hate speech on Twitter and followed someone who was actually suspended,” they explain in the paper.
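
The paper doesn’t publish code, but the selection pipeline is easy to sketch. Here is a minimal, hypothetical Python version of the tweet-filtering step – the field names and data layout are assumptions, and a toy word set stands in for the 2017 hate-speech dictionary:

    from datetime import datetime

    # Toy stand-in for the 2017 hate-speech dictionary the study relied on.
    HATE_TERMS = {"slur_a", "slur_b", "slur_c"}

    # Only accounts created after this date were kept, since newer accounts
    # were judged more likely to be suspended.
    CUTOFF = datetime(2020, 1, 1)

    def uses_hate_speech(text):
        """Dictionary lookup: does the tweet contain any flagged term?"""
        words = {word.strip(".,!?#@:;").lower() for word in text.split()}
        return bool(words & HATE_TERMS)

    def select_candidates(tweets):
        """Return authors of hateful tweets from accounts created after the cutoff."""
        return {
            tweet["author"]
            for tweet in tweets  # assumed shape: {"text", "author", "account_created"}
            if uses_hate_speech(tweet["text"])
            and tweet["account_created"] >= CUTOFF  # datetime comparison
        }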

Next, the team sent warnings phrased at various levels of politeness – the most respectful of which were meant to create an air of “legitimacy” – to the participants, who were divided into six groups, one per warning account. A control group received no messages.
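
As a rough illustration of that setup – not the authors’ actual procedure – a randomized seven-way split (six warning accounts plus a control) might look like this in Python; three of the account names come from the article, and the other three are placeholders:

    import random

    # Three names are from the article; the rest are hypothetical placeholders.
    WARNING_ACCOUNTS = [
        "@basic_person_12", "@hate_suspension", "@warner_on_hate",
        "@warning_account_4", "@warning_account_5", "@warning_account_6",
    ]

    def assign_groups(user_ids, seed=42):
        """Randomly split participants into six warning groups plus a control."""
        rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
        shuffled = list(user_ids)
        rng.shuffle(shuffled)
        labels = WARNING_ACCOUNTS + ["control"]
        groups = {label: [] for label in labels}
        for i, user in enumerate(shuffled):
            groups[labels[i % len(labels)]].append(user)
        return groups

    # Example: split the 4,327 studied users into seven roughly equal groups.
    groups = assign_groups([f"user_{n}" for n in range(4327)])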

The researchers believed legitimacy was important because “for a warning message to be effectively delivered to its target, the message needs to make the target aware of the consequences of their behavior and also convince them that these consequences will be administered,” they write.

Ultimately, the method reduced the ratio of hateful posts by 10% following blunt warnings such as “If you continue to use hate speech, you might lose your posts, friends and followers, and not get your account back,” and by up to 15% to 20% following more respectful warnings such as “I understand that you have every right to express yourself, but please keep in mind that using hate speech can get you suspended.”

But it’s not that simple

Nevertheless, because of two important caveats, the research team notes that “we stop short of explicitly recommending that Twitter implement the system we tested without further study.”

Most important, they say, a message from a giant corporation like Twitter could provoke a response in a way that the study’s small accounts did not. Second, Twitter wouldn’t benefit from vagueness in suspension messages – it can’t credibly tell a user that “you” may lose your account without following through – so it would need a blanket rule.

And as with any blanket rule, some users could be wrongly accused.

“It will be important to weigh the incremental harm that such a warning program could bring to a wrongly suspended user,” the team writes.

Although the main effect of the team’s warnings dissipated after about a month, and there are still avenues left to explore, they argue that the technique could be a reasonable option for reducing the violent, racist and abusive speech that puts the Twitter community at risk.
