TikTok working to remove suicide video clips
8 September 2020, 16:04
The video-sharing app has said it will ban users who repeatedly try to share the footage on its platform.
TikTok says it is working to remove videos of a man taking his own life and is banning users who try to share the clip on the platform.
The footage was originally live-streamed on Facebook but has since circulated on other social media platforms.
The video-sharing app said it was using human reviewers as well as automated systems to detect and block the clip from being shared.
“On Sunday night, clips of a suicide that had originally been live-streamed on Facebook circulated on other platforms, including TikTok,” a TikTok spokesperson said.
“Our systems, together with our moderation teams, have been detecting and blocking these clips for violating our policies against content that displays, praises, glorifies, or promotes suicide.
“We are banning accounts that repeatedly try to upload clips, and we appreciate our community members who’ve reported content and warned others against watching, engaging, or sharing such videos on any platform out of respect for the person and their family.
“If anyone in our community is struggling with thoughts of suicide or concerned about someone who is, we encourage them to seek support, and we provide access to hotlines directly from our app and in our Safety Centre.”
Facebook confirmed it was aware of the footage and said it had been blocking further attempts to share it.
“We removed the original video from Facebook last month on the day it was streamed and have used automation technology to remove copies and uploads since that time,” a Facebook spokesperson said.
The incident comes amid ongoing concerns about how social media platforms handle content linked to suicide and self-harm, and in particular about the damage such content can cause to younger users.
Fears about the impact of social media on vulnerable people have been heightened by cases such as that of 14-year-old schoolgirl Molly Russell, who took her own life in 2017 and was later found to have viewed harmful content online.
Molly’s father Ian, who now campaigns for online safety, has previously said the “pushy algorithms” of social media “helped kill my daughter”.
The Government is currently preparing its Online Harms Bill, which will introduce stricter regulation for internet companies and social media platforms, with large fines and other penalties for those who fail to protect their users.
Andy Burrows, head of child safety online policy at the NSPCC, said: “It is an important challenge to win this cat and mouse game, as tech firms try to take down this horrific content while malicious actors continue to spread it.
“After the Christchurch terror attack was livestreamed and spread widely, this is a test of whether industry is working across platforms, and has rapid response arrangements in place to take down live and recorded video as consistently as they do with still images.
“Situations like this underline that platforms have a duty of care to act on harmful content, and that Government must make progress on the Online Harms Bill this autumn to hold companies and bosses to account.”