TikTok to further limit potential exposure to harmful content with automated deletions

TikTok is further strengthening its automated detection tools for policy violations, with a new process that will see content detected as violating its policies deleted at the point of upload, ensuring that no one ever sees it.

As TikTok explains, currently, as part of the upload process, all TikTok videos pass through its automated analysis system, which helps to identify potential policy violations for further review by a member of the safety team. A member of the safety team then notifies the user if a breach has been detected – but, as TikTok notes, this leaves some room for error and exposure before a review is completed.

Now TikTok is working to improve on that, or at least to ensure that potentially violating content never reaches viewers.

As explained by TikTok:

“Over the next few weeks, we will begin using technology to automatically remove certain types of violating content identified during upload, in addition to removals confirmed by our safety team. Automation will be reserved for content categories where our technology has the highest degree of accuracy, starting with violations of our policies on child safety, adult nudity and sexual activity, violent and graphic content, and illegal activities and regulated goods.”

So rather than letting potential violations through, TikTok’s system will now block them at upload, which could help to limit harmful exposure in the app.

This will, of course, produce some false positives, which will cause some angst among creators – but TikTok notes that its detection systems have proven to be very accurate.

“We have found that the false positive rate for automated removals is 5%, and requests to appeal video removals have remained consistent. We hope to continue to improve our accuracy over time.”

To be fair, 5% of billions of uploads per day can still be a significant number in raw terms. Still, the risks of exposure are significant, and it makes sense for TikTok to lean more heavily on automated detection at this error rate.
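A quick back-of-the-envelope calculation illustrates the scale involved. The upload volume and the share of uploads that get auto-removed below are purely illustrative assumptions – TikTok has not published those figures – while the 5% false positive rate is the one TikTok cites:

```python
# Hypothetical estimate of daily wrongful removals.
# The first two figures are illustrative assumptions, NOT TikTok data.
daily_uploads = 1_000_000_000    # assumed uploads per day
auto_removed_share = 0.01        # assumed share of uploads auto-removed
false_positive_rate = 0.05       # TikTok's stated 5% false positive rate

auto_removed = daily_uploads * auto_removed_share
wrongly_removed = auto_removed * false_positive_rate
print(f"{wrongly_removed:,.0f} videos wrongly removed per day")
# → 500,000 videos wrongly removed per day
```

Even under conservative assumptions, a 5% error rate translates into a large absolute number of creators affected – which is why the appeals process TikTok mentions matters.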

And there is also another important advantage:

“In addition to improving the overall experience on TikTok, we hope this update will also support resiliency within our safety team by reducing the volume of distressing videos viewed by moderators, and enabling them to spend more time in highly contextual and nuanced areas, such as bullying and harassment, misinformation, and hateful behavior.”

The toll that content moderation can take on staff is significant, as has been documented in several reports, and whatever steps can be taken to reduce it are probably worth taking.

On top of that, TikTok is also rolling out a new display for account violations and reports, in order to improve transparency – and, ideally, to deter users from pushing the limits.

As you can see here, the new system will display the violations accumulated by each user, while new warnings will also be displayed in different areas of the app as reminders of the same.

Penalties range from these initial warnings through to outright bans, based on repeated issues, while for more serious violations, such as child sexual abuse material, TikTok will automatically delete accounts, and can also block a device to prevent the creation of future accounts.

These are important measures, especially given TikTok’s young user base. Internal data published by The New York Times last year showed that about a third of TikTok’s users are 14 or younger, which means there is a significant risk of exposure for young people – as creators or viewers – within the app.

TikTok has faced various investigations on this front before, including temporary bans in some regions due to its content. Last year, TikTok came under scrutiny in Italy after a ten-year-old girl died while trying to replicate a viral trend from the app.

Cases like this underscore the need for TikTok, in particular, to implement more measures to protect users from dangerous exposure, and these new tools should help to combat violations and ensure that harmful content is never seen.

TikTok also notes that 60% of people who received a first warning for violating its guidelines did not go on to commit a second violation, which is another vote of confidence in the process.

And while there will be some false positives, the reduction in risk far outweighs the potential downsides in this regard.

You can read more about the new security updates from TikTok here.
