Generative AI ‘helping criminals create more sophisticated cyber attacks’

30 November 2023, 00:04

The UK’s National Cyber Security Centre has also highlighted the use of AI to create and spread disinformation as a key threat.

The rise of generative AI tools such as ChatGPT is helping cybercriminals create more convincing and sophisticated scams, cybersecurity experts have warned.

As ChatGPT marks the first anniversary of its launch to the public, a number of industry experts have said the technology is being leveraged by bad actors online.

They warn that generative AI tools for text and image creation are making it easier for criminals to create convincing scams, but note that AI is also being used to boost cyber defences by helping identify evolving threats as they appear.

At the UK’s AI Safety Summit earlier this month, the threat of more sophisticated cyber attacks powered by AI was highlighted as a key risk going forward, with world leaders agreeing to work together on the issue.

The UK’s National Cyber Security Centre (NCSC) has also highlighted the use of AI to create and spread disinformation as a key threat in years to come, especially around elections.

James McQuiggan, security awareness advocate at cyber security firm KnowBe4, said the impact of generative AI tools, and the large language models (LLMs) that power them, was already being felt.

“ChatGPT has revolutionised the threat landscape, open source investigations, and cybersecurity in general,” he told the PA news agency.

“Cybercriminals leverage LLMs to generate well-written documents with proper grammar and no spelling mistakes to level up their attacks and circumvent one of the biggest red flags taught in security awareness programmes – the notion that poor grammar and spelling mistakes are indicative of social engineering email or phishing attacks.

“Unsurprisingly, there have been increased sophistication and volume of phishing attacks in various styles, creating challenges for businesses and consumers alike.

“With generative AI also lowering the technical barrier to creating convincing profile pictures, impeccable text and even malware, AI and LLMs like ChatGPT are increasingly being used to create more convincing phishing messages at scale.”

The next generation of generative AI models is expected to start appearing in 2024, with experts predicting they will be significantly more capable than current models.

Looking ahead to potential future uses of generative AI by bad actors, Borja Rodriguez, manager of threat intelligence operations at cyber security firm Outpost24, said hackers could develop AI tools to write malicious code for them.

“Currently, tools like Copilot from GitHub help developers generate code automatically,” he said.

“Not far from that, someone could create a similar tool specifically to assist in creating malicious code, scripts, backdoors and more, aiding script kiddies (novice hackers) with low levels of technical knowledge to achieve things they weren’t capable of in the past.

“These tools will assist underground communities in executing complex attacks without much expertise, lowering the skill requirements for those executing them.”

The rate of advancement of generative AI, and the general unknown potential of the technology for the years to come, has created an uncertainty around it, the experts say.

Many governments and world leaders have begun discussions on how to regulate AI, but without knowing more about the technology's capabilities, effective regulation will be difficult to devise.

Etay Maor, senior director of security strategy at Cato Networks, said the issue of trust remained key for LLMs: how they are trained on large amounts of text data, and how they are programmed.

“As the excitement surrounding LLMs settles into a more balanced perspective, it becomes imperative to acknowledge both their strengths and limitations,” he said.

“Users must verify critical information from reliable sources, recognising that, despite their prowess, LLMs are not immune to errors.

“LLMs such as ChatGPT and Bard have already reshaped the landscape.

“However, a lingering uncertainty persists as the industry grapples with understanding where these tools source their information and whether they can be fully trusted.”

By Press Association
