Google drops pledge not to use AI for weapons

5 February 2025, 14:04

Google Stock. Picture: PA

The tech giant has unveiled an update to its AI principles, which removes references to not pursuing work on weapons and surveillance.

Google has removed a pledge from its artificial intelligence (AI) principles that had said the company would not use the technology to develop weapons.

The technology giant has rewritten the principles that guide its development and use of AI – which are published online – but a section pledging not to develop tech “that cause or are likely to cause harm” has now been removed.

That section had said the firm would not pursue applications in the areas of weapons or “that gather or use information for surveillance violating internationally accepted norms”.

Instead, the newly streamlined principles now feature a section on “responsible development and deployment” which says the tech giant will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”

In a blog post, Google senior vice president James Manyika and Sir Demis Hassabis, who leads the firm’s AI lab, Google DeepMind, said the company needed to update its AI principles as they were first published in 2018 and the technology has “evolved rapidly” since then.

Sir Demis Hassabis leads Google’s DeepMind (Toby Melville/PA)

“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications,” they said.

“It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”

They said this shift had brought increased international collaboration on common principles, a development Google was “encouraged” by, according to the blog post.

But Mr Manyika and Sir Demis said “global competition” for AI leadership was taking place within an “increasingly complex geopolitical landscape”.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” they said.

“And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”

There is an ongoing debate among AI experts, governments, regulators, tech firms and academics about how the development of the powerful emerging technology should be monitored or regulated.

Previous international summits have seen countries and tech firms sign non-binding agreements to develop AI “responsibly”, but no binding international law on the issue is yet in place.

In the past, Google’s contracts to provide technology, such as cloud services, to the US and Israeli military have sparked internal protests from employees.

James Fisher, chief strategy officer at AI firm Qlik, said Google’s decision was concerning, and highlighted the need for countries such as the UK to push for more international governance.

“Changing or removing responsible AI policies raises concerns about how accountable organisations are for their technology, and around the ethical boundaries of AI deployment,” he told the PA news agency.

“AI governance will of course need to flex and evolve as the technology develops, but adherence to certain standards should be a non-negotiable.

“For businesses, this decision shows we are likely to face a complex AI landscape going forwards, where ethical considerations are weighed up against industry competition and geopolitics.

“For the UK, which has attempted to position itself as a leader in AI safety and regulation, this decision only makes it more important to put robust, enforceable AI governance frameworks in place.

“The UK’s ability to balance innovation with ethical safeguards could set a global precedent, but it will require collaboration between government, industry and international partners to ensure AI remains a force for good.”

By Press Association
