
Natasha Devon 6pm - 9pm
12 March 2025, 07:47
Harmful online content can damage reputations and destroy lives, fuel social unrest, encourage criminal activity, shift the share prices of companies, influence voters and facilitate sexual abuse.
It causes real harm to real people, and not just those in the public eye whom we read about in the media.
The increased use of social media, and changing behaviours, only serve to increase the risk and harm. Ofcom's 2023 report on news consumption found that 83% of 16-24 year olds consume their news online.
Mobile phones and tablets are part of everyday life. Screen time among children and young people has increased, and more screen time increases the risk of coming across harmful content. Deepfakes and false information are now commonplace.
Children and young people are particularly at risk of online abuse. Some of the content is illegal, such as content that involves sexual abuse or terrorism. However, other content falls into the bracket of "lawful but awful": content that is still capable of causing significant harm to young people but is not strictly illegal.
There are different ways in which children and young people can be exposed to harmful content online. The use of algorithms by online platforms, where content is suggested based on what is popular or what the user has previously interacted with, can lead to inappropriate content being suggested and accessed. Children might be sent harmful content by others or accidentally stumble across it. The risk of contact from strangers also cannot be ignored.
Harmful content can have a negative impact on wellbeing. Children and young people can feel scared for their safety, worried or upset, and drawn into a dark world where they feel that they have to view the content or share it with others.
Given the harms, what is being done to keep people safe in the online world? The answer lies in the Online Safety Act. Tech companies are now required to take steps to protect children online. They must undertake risk assessments to identify the potential risk of harm and then set out how they will mitigate those risks. Age verification checks should be employed to restrict access to content that is pornographic, or that encourages or promotes suicide, self-harm and eating disorders.
________________
Mark Jones is a Partner at Payne Hicks Beach.
LBC Opinion provides a platform for diverse opinions on current affairs and matters of public interest.
The views expressed are those of the authors and do not necessarily reflect the official LBC position.
To contact us email views@lbc.co.uk