On Wednesday and Thursday, delegates from 27 governments around the world, as well as the heads of top artificial intelligence companies, gathered for the world’s first AI Safety Summit at this former stately home near London, now a museum. Among the attendees: representatives of the U.S. and Chinese governments, Elon Musk, and OpenAI CEO Sam Altman.
Transcript
00:00 So my hope for the summit is that there's a consensus, an international consensus on,
00:09 initially, insight into advanced AI.
00:13 We start with insight.
00:14 I think there's a lot of concern among people in the AI field that the government will sort
00:19 of jump the gun on rules before knowing what to do.
00:24 And I think that's unlikely to happen.
00:26 I think what we're really aiming for here is to establish a framework for insight so
00:32 that there's at least a third party referee, an independent referee, that can observe what
00:36 leading AI companies are doing and at least sound the alarm if they have concerns.
00:47 We acknowledge as well, of course, the benefits, but we do acknowledge that there are risks.
00:52 And part of the role of the United States in these meetings has been to require that
00:57 there be some understanding and appreciation for the full spectrum of risks.
01:02 We're going to do everything we can.
01:04 And what I can tell you coming out of these conversations and conversations I've been
01:09 having with heads of state is that one of the biggest concerns most leaders
01:15 have is the prevalence and the ubiquity of misinformation and what that can do to erode
01:25 confidence in democracies.
01:28 I truly believe there is nothing in our foreseeable future that will be more transformative for
01:33 our economies, our societies, and all our lives than the development of technologies
01:38 like artificial intelligence.
01:41 But as with every wave of new technology, it also brings new fears and dangers.
01:46 So no matter how difficult it may be, it is the right and responsible long-term decision
01:51 for leaders to address them.
01:53 That is why I called this summit.
01:55 For the first time ever, we have brought together CEOs of world-leading AI companies with countries
02:02 most advanced in using it and representatives from across academia and civil society.
02:08 And while this was only the beginning of the conversation, I believe the achievements of
02:12 this summit will tip the balance in favor of humanity.
02:16 Until this week, the world did not even have a shared understanding of the risks.
02:22 So our first step was to have open and inclusive conversation to seek that shared understanding.
02:28 And yesterday, we agreed and published the first ever international statement about the
02:33 nature of all those risks.
02:36 It was signed by every single nation represented at this summit, covering all continents across
02:41 the globe and including the United States and China.
02:46 Some said we shouldn't even invite China.
02:49 Others said that we could never get an agreement with them.
02:52 Both were wrong.
02:53 A serious strategy for AI safety has to begin with engaging all the world's leading AI
02:59 powers.
03:00 And all of them have signed the Bletchley Park Communique.
03:04 Until now, the only people testing the safety of new AI models have been the very companies
03:09 developing them.
03:11 That must change.
03:13 So building on the G7 Hiroshima Process and the Global Partnership on AI, like-minded
03:18 governments and AI companies have today reached a landmark agreement.
03:23 We will work together on testing the safety of new AI models before they are released.