'Things can go quite wrong… democracy is threatened': ChatGPT creators face questions from US Congress

16 May 2023, 16:33 | Updated: 25 July 2023, 11:46

Mr Altman spoke about ChatGPT at the US Congress. Picture: Alamy/Screengrab

By Will Taylor

The creator of ChatGPT has admitted governments will need to regulate more powerful AI models as a professor warned democracy could be "under threat."

Sam Altman, the CEO of OpenAI - the company behind ChatGPT - said it had been "immensely gratifying" to see people get value from the system.

It has amazed users with its depth of knowledge, coherent "speech" and responsiveness to user requests and statements.

But its impressive capabilities have thrown renewed focus on the danger of increasingly powerful AIs and the impact they could have on human civilisation - including unintended consequences.

Mr Altman told US senators that "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models".

He added that it is important that "companies have their own responsibility no matter what the government does" and that artificial intelligence models are made "with democratic principles in mind".

Mr Altman discussed AI with senators. Picture: Alamy

But Professor Emeritus Gary Marcus, of New York University, warned that people's safety cannot be guaranteed and compared AI development to "bulls in a china shop... powerful, reckless and difficult to control".

He compared the problem with AI to social media, saying governments were too slow to get to grips with the phenomenon.

He said outside actors will use AI to influence elections, and that models will be capable of mass-producing lies.

"Democracy is threatened," he said.

He claimed an open-source chatbot appeared to have played a role in a person's suicide, having asked them why they did not take their own life "earlier".

Another, he claimed, encouraged a person posing as a 13-year-old, telling them how to lie to their parents about a relationship with an older man.

But he also said the technology affords "unprecedented opportunities".

Mr Altman also said he was "nervous" about AI being used to predict public opinion and to influence elections.

He suggested AI firms should adopt “guidelines about what's expected in terms of disclosure from a company providing a model".

He said that AI could be a "printing press moment" and that OpenAI was committed to "maximising the safety of AI systems". He went on to describe how he hoped the tools could one day help address "some of humanity's biggest challenges, like climate change or curing cancer."

“We love seeing people use our tools to create, to learn, to be more productive,” he added.

Europe is currently considering the AI Act - a law which proposes a ban on facial recognition technology in public places, with tiered rules depending on the risk posed by the tool in question. The US has so far opted for looser guidelines and recommendations rather than laws.

The UK is trying to position itself somewhere in the middle.

The Senate panel also heard that a new government agency may need to be created to monitor AI systems. Professor Marcus told the committee: "My view is we probably need a cabinet-level organisation within the US."