From ChatGPT to the AI Safety Summit: The year in AI

24 December 2023, 00:04

AI safety summit. Picture: PA

The technology has become an increasingly prominent part of everyday life over the last year.

Artificial intelligence has become one of the biggest issues in tech in 2023, driven by the rise of generative AI and apps such as ChatGPT.

Since OpenAI rolled out ChatGPT to the public in late 2022, awareness of the technology and its potential has exploded – from being discussed in parliaments around the world to being used to write TV news segments.

Public interest in generative AI models has also pushed many of the world’s largest tech companies to introduce their own chatbots, or to speak more openly about how they plan to use AI in the future, while regulators have stepped up debate over how countries can and should approach the opportunities and potential risks of the technology.

In 12 months, conversations around AI have gone from concerns over schoolchildren using it to do their homework for them, to Prime Minister Rishi Sunak hosting the first AI Safety Summit of nations and technology companies to discuss how to prevent AI from surpassing humanity or even posing an existential threat.

In short, 2023 has been the year of AI.

Much like the technology itself, product launches around AI moved quickly over the last 12 months, with Google, Microsoft and Amazon all following OpenAI in announcing generative AI products in the wake of ChatGPT’s success.

Google unveiled Bard, a chatbot it said would have the edge over rivals in the new AI space because it was powered by data from Google’s industry-leading search engine and its established Google Assistant virtual helper, found in its smartphones and smart speakers.

On a similar note, Amazon used its big product launch of the year to talk about how it was using AI to make its virtual assistant Alexa sound and respond in a more human fashion – able to understand context and react to follow-up questions more seamlessly.

And Microsoft began the rollout of its new Copilot, its take on combining generative AI with a virtual assistant on Windows, allowing users to ask for help with any task they were doing, from writing a report to organising the open windows on their screen.

Elsewhere, Elon Musk announced the creation of xAI, a new start-up focused on work in the artificial intelligence space.

The first product from that start-up has already appeared in the form of Grok, a conversational AI available to paying subscribers to Musk-owned X, formerly known as Twitter.

Such large-scale developments in the sector could not be ignored by governments and regulators, and debate around regulation of the AI sector has also intensified during the year.

In March, the Government published its White Paper on AI, which proposed using existing regulators in different sectors to carry out AI governance, rather than giving responsibility to a single new regulator.

But an AI Bill has yet to be brought forward, a delay that has been criticised by some experts, who warn it risks allowing the technology to go unchecked just as the use of AI tools is exploding.

The Government has said it does not want to rush to legislate while the world is still getting to grips with the potential of AI, and says its approach is more agile and allows for innovation.

In contrast, the EU agreed its own set of rules on AI oversight earlier this month, which will give regulators the power to scrutinise AI models and to be provided with details on how they are trained, although the rules are unlikely to become law before 2025.

But Mr Sunak’s desire for the UK to be a key player in AI regulation was highlighted in November as he hosted world leaders and industry figures at Bletchley Park for the world’s first AI Safety Summit.

Mr Sunak and Technology Secretary Michelle Donelan used the two-day summit to discuss the threats of so-called “frontier AI”, cutting-edge forms of the technology which, in the wrong hands, could be used for nefarious ends.

The summit saw all the international attendees, including the US and China, sign the Bletchley Declaration, which acknowledged the risks of AI and pledged to develop safe and responsible models.

And the Prime Minister announced the launch of the UK’s AI Safety Institute, alongside a voluntary agreement with leading firms including OpenAI and Google DeepMind, to allow the institute to test new AI models before they are released.

Although not a binding agreement, it has laid the groundwork for AI safety to become an increasingly prominent part of the debate moving forwards.

Elsewhere, the AI industry witnessed a major boardroom soap opera to end the year, as ChatGPT maker OpenAI sensationally ousted chief executive Sam Altman in late November.

The move sparked a backlash among staff, nearly all of whom signed a letter pledging to leave the company and join Altman at a proposed new AI research team at Microsoft if he was not reinstated.

Within days Altman was back at the helm of OpenAI and the board had been reconfigured, with the reasoning behind the saga still unclear.

Since then, the UK’s Competition and Markets Authority (CMA) has asked for views from within the industry on Microsoft’s partnership with OpenAI, which has seen the tech giant invest billions into the AI firm and have an observer on its board.

The CMA said it was minded to look into the partnership in part because of the Altman saga.

It is another sign that scrutiny of the AI sector is likely to continue to intensify in the coming year.

By Press Association
