AI could lead to 'extinction', warn experts including heads of OpenAI and Google DeepMind

30 May 2023, 15:23

OpenAI head Sam Altman recently testified before Congress calling for better regulation of AI tech. Picture: Alamy

By Asher McShane

World leaders are being called on to address the risk of 'extinction' at the hands of artificial intelligence.

Business and academic leaders, including the heads of OpenAI and Google DeepMind, said the risks from AI should be treated with the same urgency as pandemics or nuclear war.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," they said.

The statement was organised by the Centre for AI Safety, a San Francisco-based non-profit which aims "to reduce societal-scale risks from AI".


It said the use of AI in warfare could be "extremely harmful" as it could be used to develop new chemical weapons and enhance aerial combat.

The letter was signed by some of the biggest names in the field, including Geoffrey Hinton, who is sometimes nicknamed the "Godfather of AI".

The signatories also include Sam Altman and Ilya Sutskever, respectively the chief executive and co-founder of ChatGPT developer OpenAI.

The list also included dozens of academics, senior bosses at companies like Google DeepMind, the co-founder of Skype, and the founders of AI company Anthropic.

AI has entered the global consciousness after several firms released new tools that let users generate text, images and even computer code simply by asking for what they want.

Experts say the technology could take over jobs from humans - but this statement warns of an even deeper concern.