New rules unveiled to protect young children on social media under the Online Safety Act

9 November 2023, 14:32

The first codes focus on illegal material online, including child sexual abuse material, grooming content and fraud. Picture: Alamy

By Jasmine Moody

Social media platforms will be required to protect children from dangerous users under the Online Safety Act, Ofcom has said.


Ofcom, the new online safety regulator, has published its first draft codes of practice under the Online Safety Act, which was signed into law last week.

Major social media platforms will be required to protect children online by ensuring the privacy of their location data, preventing them from being featured on suggested friend lists, and restricting who can send them direct messages.

The first codes focus on illegal material online, including child sexual abuse material, grooming content and fraud.

Figures from Ofcom reveal that more than one in ten 11- to 18-year-olds have been sent naked or semi-naked images.

The codes also encourage platforms to use hash-matching technology to identify known illegal images of abuse, as well as tools to detect websites that host such material.
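To illustrate the general idea, here is a minimal hash-matching sketch in Python. It is not Ofcom's specified approach: production systems typically use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, whereas a cryptographic hash like SHA-256 only catches byte-for-byte identical copies. The hash value below is a placeholder.

```python
import hashlib

# Placeholder set standing in for a database of hashes of
# known illegal images, maintained by bodies such as the IWF.
KNOWN_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def is_known_image(image_bytes: bytes) -> bool:
    """Return True if the uploaded file matches a known hash."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# A platform would run this check at upload time and block
# or report any match before the file is published.
if is_known_image(b"...uploaded file bytes..."):
    print("match: block upload and report")
```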

The regulator will publish more rules in the coming months covering wider online safety duties, including the promotion of suicide and self-harm material.

Read more: Will the Molly Russell inquest change the Online Safety Bill?

Read more: UK to be ‘safest place in the world to be online’ as Online Safety Bill to become law - but what does it mean?

Ofcom hopes the codes will be in force by the end of 2024.

Each new code will require parliamentary approval before it takes effect.

Technology secretary Michelle Donelan said the publication of the first codes would help with "cleaning up the Wild West of social media and making the UK the safest place in the world to be online."


Mike Tunks, from the Internet Watch Foundation (IWF), welcomed the Act, which took six years to become law.

Mr Tunks told LBC: "It's really good news and it's been quite a long time coming.

"It's great news Ofcom has acted so quickly in bringing these proposals forward.

"Since the pandemic, we've seen a 1,000 per cent increase in the material we're removing from the internet.

"[In] the first nine months of this year, we've removed 200,000 webpages from the internet and 95 per cent of that has involved self-generated indecent images of children.

"This is where children have been groomed, or coerced, or tricked into generating images of themselves that are some of the worst child sexual abuse you could imagine."

Since the pandemic, there has been a 1,000 per cent increase in child sexual abuse material. Picture: Alamy

Mr Tunks has described the Act as the "raising of the bar" for tech firms, as they will now be required to keep their users safe from child sexual abuse material.

If social media platforms do not comply with these rules, Ofcom can fine them up to £18 million or 10 per cent of their global annual revenue, whichever is greater.
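As a worked illustration of how that cap operates (the revenue figure is hypothetical):

```python
def max_fine(global_annual_revenue_gbp: float) -> float:
    """Online Safety Act cap: £18m or 10 per cent of global
    annual revenue, whichever is greater."""
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)

# For a firm with £500m global revenue, 10 per cent (£50m)
# exceeds the £18m floor, so the higher figure applies.
print(max_fine(500_000_000))  # 50000000.0
```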

Tech executives may also face criminal charges if their platforms persistently fail to protect children.

The IWF are "delighted to support those businesses with keeping their platforms free of child sexual abuse material", Mr Tunks added.

AI-generated images

Mr Tunks acknowledged that the Act will not solve the problem entirely, pointing to the challenge posed by AI-generated images of child sexual abuse.

Around 3,000 AI-generated images of child abuse have been shared on the dark web since September, according to data from the IWF.

The Online Safety Act does provide the "framework" needed to combat such images, said Mr Tunks.

Child sexual abuse material is already illegal under the Protection of Children Act 1978, which criminalises the taking, distribution and possession of indecent photographs or pseudo-photographs of children.

Private messaging

The hash-matching technology will not apply to private or encrypted messages, and Ofcom has stressed that it will not make proposals to break encryption.

Powers in the Act that could force private messaging apps to scan for child sexual abuse material, if certain conditions are met, have proved controversial.

Apps such as iMessage and WhatsApp use end-to-end encryption, meaning that even the platform operator cannot read users' messages.
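As a rough illustration of why, here is a minimal end-to-end encryption sketch using the PyNaCl library. It is not the protocol WhatsApp or iMessage actually use (WhatsApp is built on the Signal protocol; iMessage uses Apple's own design), but it shows the basic property: only the holders of the private keys can decrypt.

```python
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device;
# private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message for Bob using her private key
# and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"hello from Alice")

# The platform relays only ciphertext. Without either private
# key it cannot decrypt, which is why server-side scanning of
# message content is impossible by design.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"hello from Alice"
```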

Some major apps have said they will not comply, arguing that scanning would undermine the privacy and security protections that all users, including children, rely on.