Generative AI ‘helping criminals create more sophisticated cyber attacks’

30 November 2023, 00:04


The UK’s National Cyber Security Centre has also highlighted the use of AI to create and spread disinformation as a key threat.

The rise of generative AI tools such as ChatGPT is helping cybercriminals create more convincing, sophisticated scams, cybersecurity experts have warned.

As ChatGPT marks the first anniversary of its launch to the public, a number of industry experts have said the technology is being leveraged by bad actors online.

They warn that generative AI tools for text and image creation are making it easier for criminals to create convincing scams, but also that AI is being used to help boost cyber defences by helping identify evolving threats as they appear.

At the UK’s AI Safety Summit earlier this month, the threat of more sophisticated cyber attacks powered by AI was highlighted as a key risk going forward, with world leaders agreeing to work together on the issue.

The UK’s National Cyber Security Centre (NCSC) has also highlighted the use of AI to create and spread disinformation as a key threat in years to come, especially around elections.

James McQuiggan, security awareness advocate at cyber security firm KnowBe4, said the impact of generative AI, and the large language models (LLMs) which power them, was already being felt.

“ChatGPT has revolutionised the threat landscape, open source investigations, and cybersecurity in general,” he told the PA news agency.

“Cybercriminals leverage LLMs to generate well-written documents with proper grammar and no spelling mistakes to level up their attacks and circumvent one of the biggest red flags taught in security awareness programmes – the notion that poor grammar and spelling mistakes are indicative of social engineering email or phishing attacks.

“Unsurprisingly, there have been increased sophistication and volume of phishing attacks in various styles, creating challenges for businesses and consumers alike.

“With generative AI also lowering the technical barrier to creating convincing profile pictures, impeccable text and even malware, AI and LLMs like ChatGPT are increasingly being used to create more convincing phishing messages at scale.”

The next generation of generative AI models is expected to start appearing in 2024, with experts predicting they will be significantly more capable than current-generation models.

Looking ahead to potential future uses of generative AI by bad actors, Borja Rodriguez, manager of threat intelligence operations at cyber security firm Outpost24, said hackers could develop AI tools to write malicious code for them.

“Currently, tools like Copilot from GitHub help developers generate code automatically,” he said.

“Not far from that, someone could create a similar tool specifically to assist in creating malicious code, scripts, backdoors and more, aiding script kiddies (novice hackers) with low levels of technical knowledge to achieve things they weren’t capable of in the past.

“These tools will assist underground communities in executing complex attacks without much expertise, lowering the skill requirements for those executing them.”

The rapid pace of advancement in generative AI, and the largely unknown potential of the technology in the years to come, has created uncertainty around it, the experts say.

Many governments and world leaders have begun discussions on how to regulate AI, but without knowing more about the technology's possibilities, piecing together successful regulation will be difficult.

Etay Maor, senior director of security strategy at Cato Networks, said the issue of trust remained key in regard to LLMs, which are trained on large amounts of text data, and how they are programmed.

“As the excitement surrounding LLMs settles into a more balanced perspective, it becomes imperative to acknowledge both their strengths and limitations,” he said.

“Users must verify critical information from reliable sources, recognising that, despite their prowess, LLMs are not immune to errors.

“LLMs such as ChatGPT and Bard have already reshaped the landscape.

“However, a lingering uncertainty persists as the industry grapples with understanding where these tools source their information and whether they can be fully trusted.”

By Press Association
