Instagram brings enhanced self-harm content detection tools to the UK

11 November 2020, 06:54

Social media stock. Picture: PA

The social media platform is introducing new technology to the UK and Europe which is better able to detect self-harm content.

Instagram is introducing new technology to its app in the UK and Europe that can better identify suicide and self-harm content which breaks the app’s rules.

The new moderation tools proactively spot self-harm content and automatically make it less visible in the app; in some cases they remove it entirely after 24 hours, if the machine learning system is confident the post breaks the site’s rules.

The feature is already used on Facebook and Instagram outside the EU, where it includes additional layers: flagged posts are referred to human reviewers, who can then take further action such as connecting the poster to local help organisations or, in the most severe cases, calling emergency services.

However, Instagram confirmed these referral aspects are not yet ready to be introduced to the UK and Europe because of data privacy considerations linked to the General Data Protection Regulation (GDPR).

Ian Russell, father of Molly Russell, who took her own life in 2017 after viewing disturbing material online (Helen William/PA)

The social media giant said it hoped it would be able to introduce the full set of tools in the future.

Instagram’s public policy director in Europe, Tara Hopkins, said: “In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community.”

She said that because in a small number of cases an assessment would be made by a human reviewer on whether to send additional resources to a user, this could be considered by regulators to be a “mental health assessment” and therefore a part of special category data, which receives greater protection under GDPR.

Ms Hopkins said the company was in discussions with the Irish Data Protection Commission (IDPC) – Facebook’s lead regulator in the EU – and others over the tools and a potential introduction in the future.

“There are ongoing conversations that have been very constructive and there’s a huge amount of sympathy for what we’re trying to achieve and that balancing act of privacy and the safety of our users,” she said.

In a blog post announcing the update, Instagram boss Adam Mosseri said it was an “important step” but that the company wants to do “a lot more”.

He said not having the full capabilities in place in the EU meant it was “harder for us to remove more harmful content, and connect people to local organisations and emergency services”.

He added that the firm was in discussions with regulators and governments about “how best to bring this technology to the EU, while recognising their privacy considerations”.

Facebook and Instagram are among the social media platforms to come under scrutiny for their handling of suicide and self-harm material, particularly its impact on vulnerable users, especially young people.

Fears about the impact of social media on vulnerable people have also increased amid cases such as that of 14-year-old schoolgirl Molly Russell, who took her own life in 2017 and was found to have viewed harmful content online.

Molly’s father, Ian, who now campaigns for online safety, has previously said the “pushy algorithms” of social media “helped kill my daughter”.

In September, Facebook and its family of apps were among the companies to agree to guidelines published by Samaritans in an effort to set industry standards on how to handle the issue.

Ms Hopkins said Instagram was trying to balance its policies on self-harm content by also “allowing space for admission” by people who have considered self-harm.

“It’s okay to admit that and we want there to be a space on Instagram and Facebook for that admission,” she said.

“We’re told by experts that can help to destigmatise issues around suicide. It’s a balancing act and we’re trying to get to the right spot where we’re able to provide that kind of platform in that space, while also keeping people safe from seeing this kind of content if they’re vulnerable.”

By Press Association
