OpenAI and Microsoft disrupt state-backed hackers using ChatGPT

15 February 2024, 10:14

The ChatGPT website. Picture: PA

The AI firm said accounts used by hacker groups backed by China, Iran, North Korea and Russia had been identified and removed from the chatbot.

ChatGPT maker OpenAI says it has disrupted multiple networks of state-backed hackers trying to use the AI chatbot to further their “malicious” activities.

The company said that alongside Microsoft, it had disrupted five groups from China, Iran, North Korea and Russia.

OpenAI said the different state-linked groups had used its AI services to carry out research, translate material, find coding errors and run basic coding tasks in support of their activities.

The ChatGPT maker said the identified accounts associated with the various groups had been terminated.

“We build AI tools that improve lives and help solve complex challenges, but we know that malicious actors will sometimes try to abuse our tools to harm others, including in furtherance of cyber operations,” OpenAI said.

“Among those malicious actors, state-affiliated groups – which may have access to advanced technology, large financial resources, and skilled personnel – can pose unique risks to the digital ecosystem and human welfare.

“In partnership with Microsoft Threat Intelligence, we have disrupted five state-affiliated actors that sought to use AI services in support of malicious cyber activities.

“The activities of these actors are consistent with previous red team assessments we conducted in partnership with external cybersecurity experts, which found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI powered tools.”

During the UK-hosted AI Safety Summit last year, world leaders and tech companies discussed the potential dangers of bad actors using generative AI applications to assist them in carrying out cyber attacks and other nefarious activities.

The UK’s National Cyber Security Centre (NCSC) has also warned that amateur and low-skilled hackers are already using generative AI tools to upskill and to create more effective and convincing spam and phishing attacks.

OpenAI noted that several of the groups it has disrupted have been using its services to draft and create content “that could be used in phishing campaigns”.

By Press Association
