Google ‘working to fix’ AI picture bot after inaccuracy row

22 February 2024, 13:14


The tech giant apologised after it was suggested its AI image generation tool was over-correcting against the risk of being racist.

Google has said it is working to fix its new AI-powered image generation tool, after users claimed it was creating historically inaccurate images to over-correct long-standing racial bias problems within the technology.

Users of the Gemini generative AI chatbot have claimed that the app generated images showing a range of ethnicities and genders, even when doing so was historically inaccurate.

Several examples have been posted to social media, including where prompts to generate images of certain historical figures – such as the US founding fathers – returned images depicting women and people of colour.

Google has acknowledged the issue, saying in a statement that Gemini’s AI image generation purposely generates a wide range of people because the tool is used by people around the world and that should be reflected, but admitting that the tool was “missing the mark here”.

“We’re working to improve these kinds of depictions immediately,” the company’s statement, posted to X, said.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Jack Krawczyk, senior director for Gemini experiences at Google, said in a post on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.

“As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously.

“We will continue to do this for open ended prompts (images of a person walking a dog are universal!).

“Historical contexts have more nuance to them and we will further tune to accommodate that.”

He added that it was part of the “alignment process” of rolling out AI technology, and thanked users for their feedback.

Some critics have labelled the tool “woke” in response to the incident, while others have suggested Google has over-corrected in an effort to avoid repeating previous incidents involving artificial intelligence, racial bias and diversity.

There have been several examples in recent years involving technology and bias, including facial recognition software struggling to recognise, or mislabelling, black faces, and voice recognition services failing to understand accented English.

The incident comes as debate around the safety and influence of AI continues, with industry experts and safety groups warning AI-generated disinformation campaigns will likely be deployed to disrupt elections throughout 2024, as well as to sow division between people online.

By Press Association
