Key questions: Social media moderation and inciting violence online

9 August 2024, 15:14

A police officer watches a burning car during the disorder in Southport (PA)

The role of social media in spreading misinformation and inciting violence has become a key issue during the recent unrest on Britain’s streets, with the moderation and regulation of platforms coming under scrutiny.

Misinformation spreading online helped spark the riots, and people are now being arrested and charged for inciting hatred or violence through social media platforms.

Here is a closer look at how social media content moderation currently works, how posting hateful material can be a crime and how regulation of the sector could change moderation going forward.

– How do social media sites moderate content currently?

All major social media platforms have community rules that they require their users to follow, but how they enforce these rules can vary depending on how their content moderation teams are set up and how they carry out that process.

Most of the biggest sites have several thousand human moderators looking at content that has been flagged to them or has been found proactively by human staff or software and AI-powered tools designed to spot harmful material.

– What are the limitations as it stands?

There are several key issues with content moderation in general: the sheer volume of material posted to social media makes it hard to find and remove everything harmful; moderators – both human and automated – can struggle to spot nuance or local context and therefore sometimes mistake the harmful for the innocent; and moderation relies heavily on users reporting content – something that does not always happen inside online echo chambers.

Furthermore, the use of encrypted messaging on some sites means not all content is publicly visible, so it cannot be spotted and reported by other users. Instead, platforms rely on those inside encrypted groups to report potentially harmful content.


Crucially, many tech giants have recently cut their content moderation teams, often because of financial pressures, which has affected their ability to respond.

At X, formerly Twitter, Elon Musk drastically cut back the site’s moderation staff after taking over the company, as part of his cost-saving measures and as he repositioned the site as a platform that would allow more “free speech”, substantially loosening its policies around prohibited content.

The result is that harmful material can spread on even the biggest platforms. It is also why there have long been calls for tougher regulation to force sites to do more.

– How can posting to social media become an offence?

Offences around incitement, provoking violence and harassment under UK law predate the social media age and are covered by the Public Order Act 1986, which applies to conduct online as well as offline.

Most social media sites also explicitly forbid such content under their rules, meaning they, as well as the police, can take action based on any such posts.

– So how realistic is it to expect all harmful content to be removed by the platforms?

Under the current set-up, not very.

In many instances, social media platforms are taking action against posts inciting or encouraging the disorder.

However, the speed at which this harmful or misleading content spreads can make it difficult for platforms to get every post taken down or have its visibility restricted before it is seen by many other users.

New regulation of social media platforms – the Online Safety Act – became law in the UK last year but has not yet fully come into effect.

Once in place, it will require platforms to take “robust action” against illegal content and activity, including around offences such as inciting violence.

The act will also introduce criminal offences covering the sending of threatening communications online, and sharing false information intended to cause non-trivial harm.

Ofcom published an open letter on Wednesday (Yui Mok/PA)

– So how will the Online Safety Act help?

The new laws will, for the first time, make firms legally responsible for keeping users, and in particular children, safe when they use their services.

Overseen by Ofcom, the new laws will not specifically focus on the regulator removing pieces of content itself, but it will require platforms to put in place clear and proportionate safety measures to prevent illegal and other harmful content from appearing and spreading on their sites.

Crucially, clear penalties will be in place for those who do not comply with the rules.

Ofcom will have the power to fine companies up to £18 million or 10% of their global revenue, whichever is greater, meaning potentially billions of pounds for the largest platforms.

In more severe cases, Ofcom will be able to seek a court order imposing business disruption measures, which could include forcing internet service providers to limit access to the platform in question.

And most strikingly, senior managers can be held criminally liable for failing to comply with Ofcom in some instances, a set of penalties intended to compel platforms to take greater action on harmful content.

In an open letter published on Wednesday, Ofcom urged social media companies to do more to deal with content stirring up hatred or provoking violence on Britain’s streets.

The watchdog said: “In a few months, new safety duties under the Online Safety Act will be in place, but you can act now – there is no need to wait to make your sites and apps safer for users.”

The letter, signed by Ofcom director for online safety Gill Whitehead, said it would publish guidance “later this year” setting out what social media companies are required to do to tackle “content involving hatred, disorder, provoking violence or certain instances of disinformation”.

It added: “We expect continued engagement with companies over this period to understand the specific issues they face and we welcome the proactive approaches that have been deployed by some services in relation to these acts of violence across the UK.”

By Press Association
