Suicide and self-harm content recommended ‘at industrial scale’ by TikTok and Instagram, research claims
Suicide and self-harm content is being recommended ‘at industrial scale’ by TikTok and Instagram, a new report by the Molly Rose Foundation has claimed.
Molly Russell died by suicide aged just 14, prompting her family and friends to set up a charitable foundation in her memory.
An inquest into her death found that Molly saw 2,100 suicide, self-harm and depression posts on Instagram alone in the six months before she died and concluded that social media contributed to her death.
The research found that TikTok and Instagram were recommending industrial levels of harmful suicide and self-harm content to teens just weeks before the Online Safety Act came into effect, actively putting young lives at risk.
Almost eight years on from her death, new research has found algorithmically driven depression, suicide and self-harm content being recommended at a vast scale to accounts opened as a 15-year-old girl.
On teenage accounts which had engaged with suicide, self-harm and depression posts, the research shows algorithms continue to bombard young people with a tsunami of harmful content on Instagram Reels and TikTok’s For You page.
The report found:
- Almost all of the recommended videos watched on Instagram Reels (97%) and TikTok (96%) were found to be harmful: bombarding teens with harmful content in a similar way to what happened to Molly.
- Over half (55%) of recommended harmful posts on TikTok’s For You Page actively contained references to suicide and self-harm ideation and 16% referenced suicide methods: recommended videos included posts that promoted and glorified suicide, referenced suicide methods and normalised intense feelings of misery and despair.
- Harmful content is achieving deeply disturbing levels of reach: one in ten harmful videos on TikTok’s For You Page had been liked at least a million times, and on Instagram Reels one in five harmful recommended videos had been liked more than 250,000 times.
- New high-risk features on TikTok’s For You Page make it even more likely teenagers can discover rabbit holes in a single click: for example, new AI generated search prompts shown alongside recommended content even introduced researchers to new suicide methods.
Conducted in the weeks leading up to the implementation of the Online Safety Act, the research found that both platforms were gaming Ofcom’s new rules.
While both platforms had enabled teenagers to offer negative feedback on content being recommended to them, as required by Ofcom, they had also provided an option to be recommended more harmful content – including suicide and intense depression posts.
The report, Pervasive-by-design, was produced in partnership with The Bright Initiative by Bright Data and found that the material recommended by both TikTok and Instagram would have the same harmful impact as content which Molly Russell saw before her death in 2017.
Previous research in 2023 found similar problems, and today’s report suggests the situation has either stagnated or deteriorated, particularly on TikTok, according to the foundation.
The report coincides with Ofcom’s rollout of children’s safety codes under the Online Safety Act, which the regulator has said will help to 'tame toxic algorithms.'
But the Molly Rose Foundation warned that the measures fall far short of what is needed to address the scale of harm revealed in the findings.
The charity said companies have been advised they need spend as little as £80,000 to amend the algorithms linked to Molly’s death.
It is now urging the government to strengthen the Online Safety Act so that tech firms are required to identify and mitigate all risks faced by young people on their platforms.
Today, Ian Russell, Molly's father and Chair of the Molly Rose Foundation, warns that Ofcom’s timid implementation of the Online Safety Act is a sticking plaster in the face of this disturbing level of preventable harm, and that the Prime Minister must strengthen the legislation to prevent further young lives being lost.
'Staggering'
Ian Russell, Chair of Molly Rose Foundation, said: “It is staggering that eight years after Molly’s death incredibly harmful suicide, self-harm and depression content like she saw is still pervasive across social media.
“Ofcom’s recent child safety codes do not match the sheer scale of harm being suggested to vulnerable users and ultimately do little to prevent more deaths like Molly’s.
“For over a year, this entirely preventable harm has been happening on the Prime Minister’s watch and where Ofcom have been timid it is time for him to be strong and bring forward strengthened, life-saving legislation without delay.”
'Shocking'
Andy Burrows, Chief Executive of Molly Rose Foundation, said: “Harmful algorithms continue to bombard teenagers with shocking levels of harmful content, and on the most popular platforms for young people this can happen at an industrial scale.
“It is shocking that in the two years since we last conducted this research the scale of harm has still not been properly addressed, and on TikTok the risks have actively got worse.
“The measures set out by Ofcom to tackle algorithmic harm are at best a sticking plaster and will not be enough to address preventable harm. It is crucial that the Government and regulator act decisively to bring in much stronger measures that platforms cannot game or ignore.”
'Damning'
Gregor Poynton, Labour MP for Livingston and Chair of the All-Party Parliamentary Group on Children's Online Safety, said: "This damning report highlights how social media companies are still unforgivably pushing the most devastating harmful content to children as the Online Safety Act comes into force.
"It is crucial that Ofcom's response lives up to the scale of the harm children are facing online but the regulator is currently falling short.
"We urgently need to tackle these issues head on and parents will rightly judge us by whether we do everything possible to protect their children from this pervasive and preventable harm."
A Meta spokesperson said: “We disagree with the assertions of this report and the limited methodology behind it.
"Tens of millions of teens are now in Instagram Teen Accounts, which offer built-in protections that limit who can contact them, the content they see, and the time they spend on Instagram.
"We continue to use automated technology to remove content encouraging suicide and self-injury, with 99% proactively actioned before being reported to us.
"We developed Teen Accounts to help protect teens online and continue to work tirelessly to do just that.”
A TikTok spokesperson said: "Teen accounts on TikTok have 50+ features and settings designed to help them safely express themselves, discover and learn, and parents can further customise 20+ content and privacy settings through Family Pairing.
"With over 99% of violative content proactively removed by TikTok, the findings don't reflect the real experience of people on our platform which the report admits."
Those feeling distressed or suicidal can call Samaritans for help on 116 123, email jo@samaritans.org or visit www.samaritans.org for more information.