Exclusive

'ISIS has young people exactly where it wants them': Inside the terror networks operating on Instagram

An LBC investigation into the networks of Islamist terror accounts on Instagram has discovered at least 125 accounts, with a combined follower count of more than 136,000, posting content sympathetic to proscribed groups


Teenagers can access Islamic State propaganda and execution content on Instagram within 30 seconds of opening the app. Picture: LBC

By Rebecca Henrys

Islamic State propaganda and execution content can be accessed by teenagers on Instagram within 30 seconds of opening the app.



Adam Hadley, founder of Tech Against Terrorism, told a Home Affairs select committee in October that Islamic State networks can be accessed within 30 seconds if a user knows which keywords to use.

We decided to put this to the test.

Using an account set up for a teenager, with access to content aged 13+, LBC uncovered the official media branches of at least two Islamic State sub-groups, as well as countless other accounts sharing content that blatantly supports the organisation.

Read more: Facebook hosted sickening Islamist execution videos, with some remaining online for more than a decade

Read more: Ministers consider social media curfew in nationwide online safety discussion

Meta's teen accounts work on the "strictest" setting for sensitive content, which means they are less likely to be recommended anything flagged as sensitive.

Throughout the entire investigation, we only encountered one instance of a post being blocked due to age restrictions.

Meta has since deleted the content we uncovered in response to LBC's investigation.

The Meta and Instagram logos are seen on screens. Picture: Ezra Acayan/Getty Images

The accounts vary wildly in content, from the innocuous to the absurd, and at face value may not appear to support the terrorist group at all.

But, with our teen account, we were able to view images showing a man holding two severed heads, multiple dead bodies, beheadings, executions, suicide bombings and other violent content.

Once we began travelling down the rabbit hole of Islamist content on Instagram, the platform started pushing other accounts linked with those we identified as being part of the terror network.

Meta says the Islamic State is banned from its platforms, and that it does "not allow the group or its supporters to maintain a presence". When it is made aware of content supporting, glorifying or representing these groups, it works to remove both the content and the accounts sharing it.

Researcher Harold Chambers said that the algorithms serve as a "positive feedback loop" that will continue to push you in the direction you're going.

He said: "If you're looking at a little bit more radical content, it will serve you a little bit more radical content and so on and so forth until eventually, you're in legitimate Islamic State terrorist content.

"But then once you're there, you're there, and the algorithm will just keep on feeding it to you.

"The way that social media is set up at a very fundamental level facilitates radicalisation, so eventually you can't just deal with them on the basis of regulating the content.

"You're also going to have to, at some point, have the much more difficult conversation of changes to the way that the social media companies are actually set up to behave."

A photograph taken from Suruc district of Sanliurfa, Turkey, shows smoke rising from the Syrian border town of Kobani (Ayn al-Arab) following the US-led coalition airstrikes against the Islamic State of Iraq and the Levant (ISIL) on October 15, 2014. Picture: Omer Urer/Anadolu Agency/Getty Images

We found one account that aimed to appeal to Gen Z users and featured a 'fan-cam' of 'Jihadi John', the alias of Mohammed Emwazi, as well as an edit showing three terrorists with a popular Japanese anime singer edited into the graphic.

Jihadi John became well-known in 2014 and 2015 for featuring in several Islamic State videos where he is shown beheading captives. He was killed in a drone strike in Raqqa in late 2015.

Fan cams are traditionally used by K-Pop fans to encourage support for their favourite group member by editing together a selection of popular clips into one video, which is then posted on social media.

We also saw a photo of Islamic State supporters standing in a row with the lyrics to the chorus of What Makes You Beautiful by One Direction shown on screen.

Dr Charlotte Littlewood, a former Prevent officer and expert in counter-extremism, told LBC that "we should be increasingly concerned" that children can access this kind of content in the "current climate".

She says that young people in Western democracies don't have as much resilience in their own identity as they did a decade ago.

Countries have struggled to articulate a civic identity that binds communities together and builds confidence in institutions.

This has left a vacuum: people searching for belonging or moral certainty are vulnerable to alternative ideologies, which Islamist movements exploit.

Dr Littlewood said: "In particular, ISIS, Al-Qaeda and various other Muslim Brotherhood-linked groups are putting forward their narrative of what the world should look like and what someone should be and what is just and what is good."

She added that there is now a fight over what is good and what is evil, with malicious actors stepping in to fill the gaps left by liberal democratic societies and manipulating real grievances to suit their own narratives.

Islamists have adopted the language of human rights and anti-discrimination to undermine trust in norms.

"Unfortunately, I don't think the West is putting its strongest foot forward on this one," Dr Littlewood said.

"We are now at a point where young people are almost inverting what is good and what is bad and are starting to really hold the West with disdain and, in the post-October 7 climate, that Islamist narrative is gaining real strength.

"Built on the back of real Palestinian grievances, manipulated for the strength of their narrative, and that's having real success.

"Groups such as ISIS have young people exactly where they want them right now."

The propaganda is mixed in with more generic content featuring nasheeds - vocal pieces focused on religious themes - or sermons from clerics.

Digging deeper into the accounts - looking at their highlighted stories, or simply scrolling further through their feeds - turns up more extreme content.

An infamous video from 2014 of former IS leader Abu Bakr al-Baghdadi giving a speech in Mosul is frequently used by these accounts. In this speech, he declared himself caliph (ruler) of the Islamic State and called on Muslims to support him.

Whether the video is used in full, or merely as snapshots of him stepping up to the podium, it is a clear indicator that a user supports the organisation.

Other accounts were more explicit in their support for proscribed organisations, with symbols of the Islamic State's media arm clearly visible, as well as a blurred-out Islamic State flag in the corner.

"It's very much a cat and mouse game of they'll try to post content in one way, Meta will crack down, they'll then try modifying, and it just keeps on going," said Mr Chambers, a researcher focusing on the Islamic State.

"In terms of 'are Meta doing enough', it's always difficult to say. But there are a lot of very simple, well-known techniques that Islamic State supporters use that are not being addressed.

"There are lots of different things that they use, and this is all in English. When you start working on other languages is actually when you start to see an even larger breakdown.

"There are some very simple straightforward things to me that are mind-blowing, that this is not moderated."

A fighter of the Syrian Democratic Forces (SDF) stands guard on a rooftop in Raqa on October 20, 2017, after retaking the city from Islamic State (IS) group fighters. Picture: BULENT KILIC/AFP via Getty Images

At the height of the Islamic State in 2014 and 2015, Meta was quick to crack down on content from the terrorist group, but some of the content and accounts we discovered had been left online for years.

Extremist organisations have found very simple ways to get around moderation, and their evasion techniques are constantly evolving.

They replace letters with symbols and spaces, use nonsense captions on their posts, blur elements of photos and videos, separate a video with black bars, or hide symbols in landscapes using AI.

Meta acknowledges that the techniques used by these groups to evade practices are constantly evolving, so they are regularly "evaluating and updating" their approach.

Mr Chambers added: "There are a lot of different ways that they try to get around, but frankly, it's not actually that hard to detect it if you know what you're looking for.

"I think that probably one of the biggest issues is that you don't have dedicated terrorism subject matter experts who aren't necessarily trying to just go straight off of the content, but are actually working through the networks and looking at the users and user connections."

He said that this could be monitored better by working with people with expert knowledge who can look at the content and know exactly where a video screenshot comes from, the affiliations of a preacher, and whether someone shown is a key member of a terror group.

Tech companies could also expand their content moderation to cover Arabic and other languages used by extremists.

Meta says that it "routinely commissions independent research from think-tanks, academics and NGOs" on this topic to help the industry make progress on issues.

A 14-year-old boy holds an iPhone displaying various social media and messaging apps. Picture: Anna Barclay/Getty Images

Mr Chambers said: "The fact that terrorism content is just not being moderated clearly speaks to not caring, because they had the ability to do it previously, but all of a sudden now they're not.

"I would find it hard to believe that it would be due to such a mass proliferation of content online that would be a cause of, oh, we just can't keep up.

"So clearly, it's that the current methods of detection and moderation do not work."

Meta works with hundreds of people with expertise across a wide range of sectors who are dedicated to tackling this content on their platforms.

Moderation involves using AI to detect video, audio, text, and graphics like logos and depictions of violence.

It has also made a "Hasher Matcher Actioner" tool available to companies across the industry that can be used for free to identify and remove content.

Former Prevent officer Dr Littlewood believes that online terror networks represent a "multifaceted problem", but one that tech companies are attempting to tackle by developing better AI moderation to recognise and review this type of content on their platforms.

She adds that more needs to be done to develop "resilience to extremist narratives".

She said: "We're at a point now in the UK where our Prevent agenda is essentially in flux. We've had review after review with the recommendations unimplemented, and the Prevent agenda is really unsure of itself and its future.

"We haven't had a counter extremism strategy and counter extremism coordinators now for years, so that pre-violence space is completely unmanaged from the governmental side of things.

"We don't have a strategy for tackling soft, nonviolent extremism, which is linked to the same ideas through which the violence is then put upon."

Dr Littlewood adds that some of the "softer" accounts may be kept open for the purpose of monitoring by the intelligence agencies and tech companies.

She said: "These are very dangerous and violent accounts, but quite often the softer accounts where we know someone is of concern, they are kept open for the purpose of monitoring and understanding their narrative and watching their network.

"This is obviously a double-edged sword, that is the ability through which you're able to understand and monitor, but the extent by which there will be radicalisation as an effect of that account staying up."

A 13-year-old boy looks at an iPhone. Picture: Anna Barclay/Getty Images

Mr Chambers thinks governments need to be "more strictly regulating" tech companies and the types of content they are allowing onto their platforms.

He added that there are going to have to be "very long and difficult conversations" about the algorithms and how they support the proliferation of extremist content.

"I think at some point there will have to be a reckoning of you are actually somewhat responsible for what goes on your sites and I think that's the one that everybody tries to avoid," Mr Chambers said.

"But it's that sort of threat of your platform is actively facilitating terrorism, or your site is actively facilitating organised crime, which was one of the big things with Telegram.

"Actually holding them responsible and for facilitating because they're not moderating and if you're not going to moderate it, then you're facilitating - that's what it is."

A Government spokesperson said: “Material that promotes or glorifies terrorism is both against the law and completely unacceptable in our society. It has no place anywhere in Britain, online or offline.

“Social media platforms have a legal duty to take action to prevent illegal content on their sites, including terrorist and violent material. As the independent regulator, Ofcom also has strong enforcement powers it can use where providers fail to comply and has our full backing to take action where necessary.”

A spokesperson for Meta said: “There is no place on our platforms for people or content that promotes violence or terrorism, and we remove all representation, glorification and support when we become aware of it.”

An Ofcom spokesperson said: “Under the Online Safety Act, social media companies must take appropriate steps to protect their UK users from illegal terrorist content or activity. If a post is reported to a platform, it must decide whether the content is illegal under UK law, and take it down swiftly if it is. Our job is to make sure sites and apps have appropriate measures in place to comply with their duties, and we’ve shown we’ll take action if evidence suggests companies are falling short.

“We’ve worked with a range of organisations to gather evidence that suggests terrorist content and illegal hate speech is persisting on some of the largest social media sites.

“We have opened a compliance programme to determine whether the biggest social media companies have adequate systems and processes for assessing and swiftly removing illegal hate and terror material that has been reported to them. We will provide an update on this in due course.”