AI could lead to patient harm, researchers suggest

11 April 2025, 16:04

Doctor using AI algorithm and machine learning to detect pneumonia. Picture: PA

The findings highlight the ‘inherent importance’ of ‘applying human reasoning and assessment to AI judgements’, experts said.

Artificial intelligence (AI) could lead to patient harm if the development of models is focused more on accurately predicting outcomes than treatment, researchers have suggested.

Experts warned the technology could create “self-fulfilling prophecies” when trained on historic data that does not account for demographics or the under-treatment of certain medical conditions.

They added that the findings highlight the “inherent importance” of applying “human reasoning” to AI decisions.

Academics in the Netherlands looked at outcome prediction models (OPMs), which use a patient’s individual features, such as health history and lifestyle information, to help medics weigh up the benefits and risks of treatment.

AI can perform these tasks in real-time to further support clinical decision-making.

Using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment

Research team

The team created mathematical scenarios to test how AI may harm patient health, and concluded that these models “can lead to harm”.

“Many expect that by predicting patient-specific outcomes, these models have the potential to inform treatment decisions and they are frequently lauded as instruments for personalised, data-driven healthcare,” researchers said.

“We show, however, that using prediction models for decision-making can lead to harm, even when the predictions exhibit good discrimination after deployment.

“These models are harmful self-fulfilling prophecies: their deployment harms a group of patients, but the worse outcome of these patients does not diminish the discrimination of the model.”

The article, published in the data-science journal Patterns, also suggests that AI model development “needs to shift its primary focus away from predictive performance and instead toward changes in treatment policy and patient outcome”.

Reacting to the risks outlined in the study, Dr Catherine Menon, a principal lecturer at the University of Hertfordshire’s department of computer science, said: “This happens when AI models have been trained on historical data, where the data does not necessarily account for such factors as historical under-treatment of some medical conditions or demographics.

“These models will accurately predict poor outcomes for patients in these demographics.

“This creates a ‘self-fulfilling prophecy’ if doctors decide not to treat these patients due to the associated treatment risks and the fact that the AI predicts a poor outcome for them.

“Even worse, this perpetuates the same historic error: under-treating these patients means that they will continue to have poorer outcomes.

“Use of these AI models therefore risks worsening outcomes for patients who have typically been historically discriminated against in medical settings due to factors such as race, gender or educational background.

“This demonstrates the inherent importance of evaluating AI decisions in context and applying human reasoning and assessment to AI judgments.”

While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions

Professor Ewen Harrison, University of Edinburgh

AI is currently used across the NHS in England to help clinicians read X-rays and CT scans, freeing up staff time, as well as to speed up the diagnosis of strokes.

In January, Prime Minister Sir Keir Starmer pledged that the UK will be an “AI superpower” and said the technology could be used to tackle NHS waiting lists.

Ian Simpson, a professor of biomedical informatics at the University of Edinburgh, highlighted that AI OPMs “are not that widely used at the moment in the NHS”.

“Here they tend to be used in parallel with existing clinical management policies and often either for assisting diagnostics and/or speeding up processes like image segmentation,” he said.

Ewen Harrison, a professor of surgery and data science and co-director of the centre for medical informatics at the University of Edinburgh, said: “While these tools promise more accurate and personalised care, this study highlights one of a number of concerning downsides: predictions themselves can unintentionally harm patients by influencing treatment decisions.

“Say a hospital introduces a new AI tool to estimate who is likely to have a poor recovery after knee replacement surgery. The tool uses characteristics such as age, body weight, existing health problems and physical fitness.

“Initially, doctors intend to use this tool to decide which patients would benefit from intensive rehabilitation therapy.

“However, due to limited availability and cost, it is decided instead to reserve intensive rehab primarily for patients predicted to have the best outcomes.

“Patients labelled by the algorithm as having a ‘poor predicted recovery’ receive less attention, fewer physiotherapy sessions and less encouragement overall.”

He added that this leads to a slower recovery, more pain and reduced mobility in some patients.
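Prof Harrison’s scenario can be sketched as a toy simulation. This is purely illustrative and not the study’s actual model: the cohort size, the baseline recovery chances, the 0.5 selection threshold and the `REHAB_BOOST` benefit are all assumed numbers chosen to make the mechanism visible.

```python
import random

N = 10_000
REHAB_BOOST = 0.2  # assumed benefit of intensive rehab (illustrative)

def simulate(reserve_for_best):
    """Return the overall recovery rate under a given rehab policy.

    Each patient has a baseline recovery chance, which the model is
    assumed to predict perfectly - so the model's discrimination looks
    good under either policy.
    """
    random.seed(0)  # same simulated cohort under both policies
    recovered = 0
    for _ in range(N):
        baseline = random.uniform(0.1, 0.9)   # the model's (accurate) prediction
        if reserve_for_best:
            treated = baseline >= 0.5         # rehab reserved for best-predicted
        else:
            treated = True                    # rehab for everyone
        chance = min(1.0, baseline + (REHAB_BOOST if treated else 0.0))
        if random.random() < chance:
            recovered += 1
    return recovered / N

treat_all = simulate(reserve_for_best=False)
selective = simulate(reserve_for_best=True)
print(f"recovery rate, rehab for all: {treat_all:.2f}")
print(f"recovery rate, rehab reserved for best-predicted: {selective:.2f}")
```

The selective policy produces a lower overall recovery rate, and the patients denied rehab do indeed recover worse, so the observed outcomes still match the model’s ranking. That is the “self-fulfilling prophecy” the researchers describe: the harm does not show up as a drop in the model’s discrimination.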

“These are real issues affecting AI development in the UK,” Prof Harrison said.

By Press Association
