Rishi Sunak has said the UK’s AI safety summit will “tip the balance in favour of humanity” after governments reached a “landmark agreement” with tech companies to test their models before their release.

The prime minister said while the event at Bletchley Park was “only the beginning of the conversation”, it showed there was a “will and capability to control the technology”.

Powerful AI models like OpenAI’s ChatGPT and Google’s Bard are trained on huge amounts of data to respond to prompts and make predictions.

One of the concerns is a lack of transparency around the data they are trained on, and Mr Sunak has claimed government regulation won’t be possible without more openness about how they work.

“In order to regulate this technology, to make sure it is safe, we have to have the capability to understand what these models are capable of,” he told Sky’s science and technology editor Tom Clarke.

The agreement struck with AI companies to collaborate on safety testing before new models are released is a “necessary” step, he added.

The UK and US governments will set up their own AI safety institutes to carry out such testing and share findings.

But not everyone at the summit seemed convinced by the arrangement, with Elon Musk appearing to mock the politicians who brokered the deal just hours before his talks with the prime minister in Downing Street.

“Sigh,” he posted, alongside a cartoon casting doubt on governments’ willingness to collaborate.

PM to hold one-on-one with Musk

Billionaire Musk was one of the star guests at the two-day summit at Bletchley Park in Milton Keynes, the home of Britain’s Second World War codebreakers.

On day one, Musk told Sky News AI is a “risk” to humanity.

His post on X came just as the prime minister began a news conference on Thursday afternoon.

Musk is due to visit Number 10 for talks later tonight, streamed on the SpaceX and Tesla owner’s X site.

Video: Elon Musk: ‘AI is a risk’

PM: AI can ‘transform our lives’

The outspoken tycoon was one of more than 100 politicians, tech bosses, and academics at the UK’s summit to discuss challenges posed by artificial intelligence.

It resulted in the Bletchley Declaration, which saw 28 nations including the US and China agree to collaborate to research safety concerns around the world’s most capable AI models.

Mr Sunak said while the technology had the potential to “transform our lives”, impacting sectors from education to health care, it could present dangers “on a scale like pandemics and nuclear war”.

Video: OpenAI CEO Sam Altman at summit

The Bletchley Declaration says any threats are “best addressed through international cooperation”, and sets out plans for more global summits next year.

But there was little sign of a concrete approach to regulation or any suggestions of a pause in AI’s development, which experts including Musk called for earlier this year.

It also did little to satisfy critics who warned Mr Sunak ahead of the summit that he was too focused on hypothetical future threats, rather than present dangers like job losses and misinformation.

Video: What is the AI Safety Summit?

US VP warns not to forget ‘everyday threats’

Mr Sunak had previously announced leading AI companies had agreed to share their models with the UK, with a government safety institute launched to research them and flag any concerns.

The White House detailed similar plans this week as part of a wide set of safeguards, including a requirement for AI-generated content to be watermarked to combat deepfakes.

US vice president Kamala Harris, who attended the UK summit on Thursday, has said “everyday threats” can’t be ignored despite fears about more distant dangers.

Mr Sunak has been more cautious than the US about AI safety legislation, arguing it would risk stifling innovation.

Instead, the government has tasked existing regulators like the Competition and Markets Authority, Ofcom, and the Health and Safety Executive with applying key principles around safety, transparency, and accountability to AI.
