Instagram explains move to block Olympic champion over copyright breach

4 August 2021, 19:04

Double gold medallist Elaine Thompson-Herah was banned from Instagram for posting videos of her victories (Martin Rickett/PA). Picture: PA

The social media platform said spotting copyrighted material was much simpler than identifying hate speech and blocking it.

Instagram has explained its moderation policies after being criticised over its response to posts from Olympic champion Elaine Thompson-Herah.

The Jamaican sprinter said her Instagram account had been briefly blocked because she had shared video footage of her victories in the 100 and 200 metres at the Tokyo Olympics to which she did not own the rights.

Instagram confirmed the block was the result of an infringement of International Olympic Committee (IOC) copyright, but said restricting the account had been a mistake and it had quickly been restored.

The incident led to criticism of the platform over the speed of its response to apparent breaches of copyright and intellectual property rules, in contrast to its handling of recent incidents of racist abuse, including the abuse directed at several England footballers in the wake of their Euro 2020 final defeat.

But in response, the social media giant argued it was misleading to compare hate speech to intellectual property violations.

Instagram said detecting images or videos which violate intellectual property (IP) rules was a much simpler process, because infringing content often looks or sounds identical to the original.

But in contrast, it said hate speech often required context, citing examples where it might be initially unclear if certain words were being used as abuse or as reclaimed speech.

The firm said such content was therefore hard to detect and remove instantly without risking over-enforcement against people who were using the platform to campaign against hate.

The company added that it can sometimes take time to analyse hate speech content and acknowledged that it sometimes makes mistakes, but reiterated that it would remove any content found to be abusive.

Social media consultant and industry commentator Matt Navarra said copyright takedowns and online abuse were “very different things” when it came to how content moderation technology is set up.

And he agreed that there was a “far greater complexity” in correctly detecting abusive content.

“Many of the world’s biggest rights holders such as recording artists, their record label, or the Olympics, use a range of automated rights management tools which have a far simpler task – by comparison to moderating online abuse – of matching the use of a piece of music or video, with its online usage rights database, to then serve takedown requests,” Mr Navarra told PA.

“Much of the process is highly automated and requires far less human intervention versus the far more complex and nuanced requirements for the moderation of hate speech, for example.

“The technology to accurately auto-moderate hate speech or online abuse online is just not sophisticated enough, yet.

“Human moderation and case reviews or appeals will invariably take up more time and slow down the process.

“Enforcing the rights of humans online understandably demands more time to make the most appropriate takedown decision. Enforcing the content rights of the Olympics, rightly or wrongly, is a much simpler business activity.”
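The automated matching Mr Navarra describes can be sketched in miniature. The example below is illustrative only: real rights-management tools use perceptual fingerprints that tolerate re-encoding and clipping, whereas this sketch assumes a plain cryptographic hash and a hypothetical in-memory rights database so it stays self-contained.

```python
import hashlib

# Hypothetical rights database: fingerprint -> rights holder.
# A real system would store perceptual fingerprints, not exact hashes.
RIGHTS_DB = {}

def register_clip(data: bytes, rights_holder: str) -> None:
    """Rights holder registers a clip's fingerprint ahead of time."""
    RIGHTS_DB[hashlib.sha256(data).hexdigest()] = rights_holder

def check_upload(data: bytes):
    """Return the rights holder if an upload matches a registered clip,
    or None if no match is found (no takedown needed)."""
    return RIGHTS_DB.get(hashlib.sha256(data).hexdigest())

# Example: the IOC registers broadcast footage; an identical upload
# matches and triggers a takedown, while original user video does not.
register_clip(b"olympics-100m-final-footage", "IOC")
print(check_upload(b"olympics-100m-final-footage"))  # prints IOC
print(check_upload(b"original-user-video"))          # prints None
```

The point of the sketch is the asymmetry in the article: this lookup needs no context at all, whereas deciding whether a sentence is abuse or reclaimed speech cannot be reduced to a dictionary match.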

By Press Association
