ChatGPT and other chatbots ‘can be tricked into making code for cyber attacks’

24 October 2023, 16:04

The ChatGPT website (Picture: PA)

Research from the University of Sheffield found scientists could manipulate chatbots into creating malicious code.

Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code which could be used to launch cyber attacks, according to research.

A study by researchers from the University of Sheffield’s Department of Computer Science found that it was possible to manipulate chatbots into creating code capable of breaching other systems.

Generative AI tools such as ChatGPT can create content based on user commands or prompts and are expected to have a substantial impact on daily life as they become more widely used in industry, education and healthcare.

But the researchers have warned that vulnerabilities exist, and said their research found they were able to trick the chatbots into helping steal sensitive personal information, tamper with or destroy databases, or bring down services using denial-of-service attacks.

In all, the university study found vulnerabilities in six commercial AI tools – of which ChatGPT was the most well-known.

On the Chinese platform Baidu-Unit, the scientists were able to use malicious code to obtain confidential Baidu server configurations and to tamper with one server node.

In response, Baidu recognised the research, addressed and fixed the reported vulnerabilities, and financially rewarded the scientists, the university said.

Xutan Peng, a PhD student at the University of Sheffield, who co-led the research, said: “In reality many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood.

“At the moment, ChatGPT is receiving a lot of attention. It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services.”

The researchers also warned that using AI to learn programming languages carries risks, as learners could inadvertently produce damaging code.

“The risk with AIs like ChatGPT is that more and more people are using them as productivity tools, rather than a conversational bot, and this is where our research shows the vulnerabilities are,” Peng said.

“For example, a nurse could ask ChatGPT to write an SQL (a database programming language) command so that they can interact with a database, such as one that stores clinical records.

“As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.”
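The study does not publish the prompts or the generated code, but the hazard Peng describes can be illustrated with a toy sketch: a syntactically valid SQL statement that runs without any warning yet silently overwrites an entire table. Everything below (the table, column names, and the "generated" statement) is hypothetical, not taken from the Sheffield research:

```python
import sqlite3

# Hypothetical in-memory "clinical records" table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (patient TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO records VALUES (?, ?)",
    [("Alice", "active"), ("Bob", "active"), ("Carol", "discharged")],
)

# Suppose the user asked for SQL to "mark Bob as discharged", and the
# suggested statement omitted the WHERE clause. It is still valid SQL,
# so the database executes it without complaint:
conn.execute("UPDATE records SET status = 'discharged'")  # hits every row

rows = conn.execute("SELECT patient, status FROM records").fetchall()
print(rows)
```

Running this shows every patient marked `discharged`, not just Bob: exactly the kind of silent data-management fault the researchers describe, since the database raises no error and the user gets no warning.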

The UK will host an AI Safety Summit next week, with the Government inviting world leaders and industry giants to come together to discuss the opportunities and safety concerns around artificial intelligence.

By Press Association
