Instagram users who report racist posts told they ‘do not qualify for ban’
14 July 2021, 10:32 | Updated: 14 July 2021, 12:55
Instagram has been criticised for failing to remove racist comments targeting England's football stars, after people who complained about posts said they were told they do not break the platform's rules.
One LBC caller said she had reported hundreds of comments after Sunday night's Euro final, but that many had not been removed – either because they were not deemed to go against the guidelines or, in some cases, because the review team was not even "able to view the report" due to the high number of complaints.
Sophie from Sandwich, Kent, told LBC’s Nick Ferrari: “It [the racist content] was worse than just monkey emojis – there was the N-word, references to slaves.
“I got a response from Instagram a bit later in the night saying ‘We do not have enough time to review all these comments and we have deemed that they do not go against Instagram’s guidelines’.”
She then pointed out that Instagram has a feature allowing users to mute certain words or phrases from their timeline – so “the technology must be there” for racist terms to be automatically deleted.
Other Instagram users have taken to Twitter to share their anger that racist comments have remained on the site.
One user tweeted: “I’ve spent an hour today reporting racist comments, but according to @instagram monkey emojis and the N word are ok.”
The tweet was accompanied by a screenshot of a response from Instagram saying it had not suspended the reported account.
The response read: “We’ve found that this account likely doesn’t go against our Community Guidelines. If you think we’ve made a mistake, please report it again.
“Because Instagram is a global community, we understand that people may express themselves differently. We’ll use your feedback to make this experience better for everyone.”
It then advised the user to block the account in order to avoid seeing posts.
In response to the accusation that the 'n-word' and use of emojis were 'ok', Instagram said that the assumption they were not a violation of its guidelines was "absolutely untrue", and that it does not allow attacks on individuals based on protected characteristics - which includes race.
Other users were told reported accounts were not removed because of “the high volume of reports” the platform receives.
A response to a user who reported an account read: “Due to the high volume of reports that we receive, our review team hasn’t been able to view your report."
The response then advised that the comment "probably" did not go against Instagram's guidelines.
It added: “We understand this may be frustrating."
The platform then again told the user to block the offensive account.
In response, the user tweeted: "Perhaps the reason for the high volume of reports that you cannot review yourselves is because everyone is reporting disgusting racist comments – maybe you need to review your 'technology' to identify that this is a hate crime and this is hate speech."
In a similar vein to queries over the automatic filtering out of words, others have questioned why Instagram is able to automatically detect posts about coronavirus, but the software cannot be applied to racist posts and comments.
“So [you're] telling me that Instagram is able to create software to detect when someone posts or comments about coronavirus but not when someone is being racist on their platform,” tweeted one user.
“@instagram make it make sense.”
In response to the variety of accusations, a Facebook company spokesperson said: “Using emojis, like monkey or banana emojis, to racially abuse someone is not OK and completely against our rules.
"We use technology to help us review and remove harmful content, but we know these systems aren't perfect, and we're constantly working to improve.
"Since Sunday’s final, we've been removing comments - including those that contain emojis - and disabling accounts that repeatedly break our rules, and we'll continue to do so.”
The company also said it had removed more than 33 million pieces of hate speech across Facebook and Instagram between January and March – over 90 per cent of which was taken down before anyone reported it.
England footballers Marcus Rashford, Jadon Sancho and Bukayo Saka were all subjected to racist abuse on social media after missing their penalties at the Euro final on Sunday.
Following the abuse, labelled “disgusting” by the Football Association (FA), social media companies vowed to do more to tackle online abuse.
On Monday Twitter said it had deleted over 1,000 racist tweets, calling them “abhorrent” and “unacceptable”.
Instagram's owner Facebook previously said it was "quickly" removing racist comments and the accounts they came from, and that it was "committed" to protecting its community from abuse.
"No one should have to experience racist abuse anywhere, and we don’t want it on Instagram,” a Facebook spokesperson said.
“We quickly removed comments and accounts directing abuse at England’s footballers last night and we’ll continue to take action against those that break our rules.
“In addition to our work to remove this content, we encourage all players to turn on Hidden Words, a tool which means no one has to see abuse in their comments or DMs.
“No one thing will fix this challenge overnight, but we’re committed to keeping our community safe from abuse.”