Misinformation and hate speech flood TikTok ahead of Kenyan elections

Last August, the TikTok account @aironixon shared a video overlaying scenes from the Netflix documentary series How to Become a Tyrant with footage and screenshots of Kenya’s Deputy President and presidential candidate William Ruto. An added caption read: “Is Ruto a tyrant?”

The video is one of 130 identified by Mozilla Foundation fellow Odanga Madung, who detailed his findings in a new report. Together, the videos, shared by 33 TikTok accounts, contained hate speech and misinformation targeting Kenya’s upcoming elections on August 9 and had drawn more than 4 million views. All of them appear to violate the company’s terms of service.

“We have noticed that there are a lot of cases where a certain page, for example, can have 5,000 followers, but the type of content it posts ends up getting [more than] 500,000 views because it was boosted by the platform itself,” says Madung.

While Facebook, WhatsApp and Instagram remain the most popular platforms in Kenya, TikTok has grown steadily in popularity over the past two years and is among the most downloaded social applications in the country.

Election-season disinformation is hardly new for Kenyans, especially as millions of the country’s citizens have come online over the past decade. But Madung worries that Kenya’s history of electoral violence makes it particularly explosive. In 2007, a contested presidential election between Raila Odinga and Mwai Kibaki led to widespread violence that left over 1,000 people dead and 600,000 displaced. The country’s current president, Uhuru Kenyatta, was accused of helping to incite that violence by directing armed members of the Kikuyu, the ethnic group to which both he and Kibaki belong, to target the Luo, Odinga’s ethnic group.

Kenyatta became president in 2013. His re-election in 2017 also sparked protests that were quickly put down by the police. Human Rights Watch documented at least 42 people killed by the police, though it estimates the real number is higher.

Earlier this year, the country’s National Cohesion and Integration Commission, formed in the aftermath of the 2007 violence to help ensure peaceful elections, warned of “the misuse of social media platforms to perpetuate ethnically motivated hate speech and incite violence,” stating that hate speech on social platforms had increased by 20 percent in 2022. The commission also cited growing inter-communal conflicts and personal attacks on politicians among the conditions that could make this year’s elections particularly tumultuous.

One TikTok video of a Ruto speech carried the caption, “Ruto hates the Kikuyu and wants revenge in 2022.” It received over 400,000 views.

“There is a very clear attempt to use the ghosts of 2007 to sway voters one way or another by exploiting or glorifying past violence,” Madung says. “This flies completely in the face of TikTok’s own rules on hate speech.”

Unlike Facebook or Twitter, TikTok serves users content based not on who they follow but on what the platform deems to be their interests. This can make it difficult for researchers like Madung to determine how content is distributed and to whom. “There is no CrowdTangle-like tool for TikTok,” he says. “Researching TikTok is exhausting, and sometimes awful, because I had to watch every video to the end in order to do the content analysis.”

By its very nature, TikTok is harder to moderate than many other social media platforms, according to Cameron Hickey, project director at the Algorithmic Transparency Institute. The brevity of the videos, and the fact that many combine audio, visual, and textual elements, makes human judgment all the more necessary in deciding whether something violates the platform’s rules. Even advanced AI tools, such as speech-to-text systems used to quickly flag problematic words, struggle “when the audio you’re dealing with also contains music,” says Hickey. “The default mode for people creating content on TikTok is to also embed music.”

This becomes even more difficult for languages other than English.

“In general, we know that platforms are best at handling problematic content wherever they are based, or in the languages spoken by the people who created them,” says Hickey. “And there are more people making bad stuff than there are people at these companies trying to get rid of the bad stuff.”

Many of the pieces of disinformation Madung uncovered were “synthetic content”: videos made to look as if they came from old newscasts, or screenshots that appeared to be from legitimate news outlets.

“Since 2017, we have noticed a trend toward appropriating the identities of mainstream media brands,” says Madung. “We are seeing rampant use of this tactic on the platform, and it appears to be working exceptionally well.”

Madung also spoke with former TikTok content moderator Ghadir Ayed to get a broader picture of the company’s moderation efforts. While Ayed did not moderate content from Kenya, she told Madung she was often asked to moderate content in languages or contexts she was not familiar with, and would not have had the context to determine whether media had been manipulated.

“It is common to find moderators being asked to moderate videos in languages and contexts different from the ones they understand,” Ayed told Madung. “For example, at one point I had to moderate videos in Hebrew even though I knew neither the language nor the context. All I could rely on was the visuals, what I could see, but I couldn’t moderate anything written.”

A TikTok spokesperson told WIRED that the company prohibits election misinformation and the promotion of violence and is “committed to protecting the integrity of [its] platform,” with a “dedicated team working to protect TikTok during Kenya’s elections.” The spokesperson also said the company works with fact-checking organizations, including Agence France-Presse in Kenya, and plans to introduce features to connect its “community with reliable Kenya election information on our app.”

But even if TikTok removes offending content, Hickey says, it might not be enough. “People can remix, duet, and re-share other people’s content,” says Hickey. This means that even if an original video is removed, other versions can live on undetected. TikTok videos can also be downloaded and shared on other platforms, such as Facebook and Twitter, which is how Madung first encountered some of them.

Several of the videos flagged in the Mozilla Foundation report have since been taken down, but TikTok did not respond to questions about whether it has removed other videos or whether the videos were part of a coordinated effort.

But Madung suspects they may be. “Some of the most egregious hashtags were ones I found while looking at coordinated campaigns on Twitter, and then I thought: what if I searched for them on TikTok?”


Credit: www.wired.com
