TikTok will roll out content filters and maturity ratings to make the app safer

Earlier this year, TikTok said it was developing a new system that would restrict teen users from viewing certain types of mature content. Today the company is introducing the first version of that system, called “Content Levels,” which should launch within the next few weeks. It’s also preparing to release a new tool that will let users filter out videos with certain words or hashtags so they don’t show up in their feeds.

Together, these features are designed to give users more control over their TikTok experience and make the app safer, especially for younger users. That’s an area where TikTok faces increased scrutiny, not only from regulators and lawmakers looking to rein in social media platforms in general, but also from people seeking redress for the harms of social media.

For example, a group of parents recently sued TikTok after their children died attempting dangerous challenges they allegedly saw on the app. Meanwhile, former content moderators have sued the company over its alleged failure to support their mental health despite the distressing nature of their work.

With the new tools, TikTok aims to give users and content creators more control over moderation.

The upcoming content tier system is intended to provide a means of classifying content within the app, similar to the age ratings used for movies, TV shows, and video games.

Image Credits: TikTok (Content Levels)

While outright adult content is prohibited, TikTok says some content on its app may feature “adult or complex topics that may reflect personal experiences or real events and are intended for an older audience.” Its Content Levels system will classify that content and assign it a maturity score.

In the coming weeks, TikTok will introduce an early version of the system designed to prevent content with overtly mature themes from reaching users aged 13 to 17. Videos with mature themes, such as fictional scenes that may be too frightening or intense for younger audiences, will be assigned a maturity score so they aren’t shown to TikTok users under 18. Over time, the system will expand to offer filtering options for the entire community, not just teens.

We’re told that trust and safety moderators will assign a maturity score to videos that are gaining popularity or that have been reported in the app.
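To illustrate the idea, here is a minimal sketch of how an age-gated feed check based on such scores could work. This is not TikTok’s actual code; the score values, field names, and threshold are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical maturity levels; TikTok has not published its actual scale.
GENERAL = 0   # suitable for all audiences
MATURE = 18   # overtly mature themes, restricted to adult viewers

@dataclass
class Video:
    video_id: str
    maturity_score: int  # assigned by trust and safety moderators

def filter_feed(videos: list[Video], viewer_age: int) -> list[Video]:
    """Drop videos whose maturity score exceeds what the viewer's age allows."""
    allowed_max = GENERAL if viewer_age < 18 else MATURE
    return [v for v in videos if v.maturity_score <= allowed_max]

# A 15-year-old's feed would exclude videos scored as MATURE.
feed = [Video("a", GENERAL), Video("b", MATURE)]
print([v.video_id for v in filter_feed(feed, viewer_age=15)])  # ['a']
```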

TikTok previously said that content creators may also be asked to tag their own content, but that aspect hasn’t been detailed yet. A spokesperson said it’s a separate effort from what was announced today.

Image Credits: TikTok

In addition, TikTok will soon launch another tool for filtering content out of your For You and Following feeds.

The feature will let users manually block videos containing specific words or hashtags from their feeds. It doesn’t have to be used only to screen out potentially problematic or triggering content; it can also be used to keep the algorithm from showing you topics you simply don’t want to see, or are tired of seeing. TikTok suggests using it to block dairy or meat recipes if you’re going vegan, for example, or to stop seeing tutorials once you’ve finished that home improvement project.

Image Credits: TikTok (content filters)
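As a rough illustration of how such keyword and hashtag muting might behave, here is a small sketch. It is not TikTok’s implementation; the matching rules (case-insensitive, whole-word, hashtags treated like words) are assumptions for the example.

```python
import re

def build_filter(blocked_terms: list[str]):
    """Return a predicate that hides videos whose caption or hashtags
    contain any of the user's muted words or hashtags (case-insensitive)."""
    patterns = [
        re.compile(rf"\b{re.escape(term.lstrip('#'))}\b", re.IGNORECASE)
        for term in blocked_terms
    ]

    def allowed(caption: str, hashtags: list[str]) -> bool:
        text = caption + " " + " ".join(tag.lstrip("#") for tag in hashtags)
        return not any(p.search(text) for p in patterns)

    return allowed

# A user going vegan mutes recipe-related terms.
allowed = build_filter(["#dairyrecipes", "meat"])
print(allowed("5-minute vegan pasta", ["#vegan"]))        # True: kept in feed
print(allowed("Best meat marinade ever", ["#grilling"]))  # False: hidden
```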

With these new features, the company said it is building on an existing system it has been testing, which works to diversify recommendations so users aren’t repeatedly shown potentially problematic content, such as videos about extreme dieting or fitness, sadness, or breakups.

That test launched last year, in the wake of a 2021 US congressional investigation into how the algorithmic recommendation systems of social apps like TikTok and others can promote harmful content about eating disorders to younger users.

TikTok acknowledges the system still needs work because of the nuances involved. For example, it can be difficult to separate recovery-focused content from eating disorder content, as both can carry sad and hopeful themes. The company says it is currently training the system to support more languages ahead of its expansion into new markets.
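One common way to approach that kind of diversification is to re-rank a feed so videos on sensitive topics don’t appear back to back. The following is a generic sketch of that technique, not a description of TikTok’s recommender; the topic labels and the one-in-a-row limit are assumptions.

```python
def diversify(ranked_videos, max_consecutive=1,
              sensitive_topics=frozenset({"extreme dieting", "sadness"})):
    """Re-rank a feed so videos tagged with sensitive topics are not shown back to back."""
    result, deferred = [], []
    streak = 0  # consecutive sensitive videos already placed
    for video in ranked_videos:
        is_sensitive = bool(video["topics"] & sensitive_topics)
        if is_sensitive and streak >= max_consecutive:
            deferred.append(video)  # push it further down the feed
            continue
        result.append(video)
        streak = streak + 1 if is_sensitive else 0
    return result + deferred

feed = [
    {"id": 1, "topics": {"extreme dieting"}},
    {"id": 2, "topics": {"extreme dieting"}},
    {"id": 3, "topics": {"cooking"}},
]
print([v["id"] for v in diversify(feed)])  # [1, 3, 2]
```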

On paper, this trio of tools could make for a healthier way to interact with the app, but in practice, automated systems like these often fall short.

So far, TikTok has not always managed to suppress problematic content, whether that’s kids destroying toilets in public schools, shooting each other with machine guns, or jumping off milk crates, among other dangerous challenges and viral stunts. It has also at times let through hateful content involving misogyny, white supremacy, or transphobia, along with disinformation.

To what extent TikTok’s new tools will actually affect who sees what content remains to be seen.

“As we continue to build and improve these systems, we are excited to be able to contribute to a long-standing industry-wide challenge in terms of building cross-audience and recommender systems,” TikTok’s head of trust and safety, Cormac Keenan, wrote in a blog post. “We also recognize that what we are aiming for is complex and we may make some mistakes,” he added.


Credit: techcrunch.com
