Global AI Safety Summit: can AI be effectively regulated?

Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, talks to CGTN Europe about the Global AI Safety Summit.
Transcript
00:00 Neil Lawrence, Professor of Machine Learning at Cambridge University. Neil, talking shop
00:05 or useful summit, which is it?
00:10 I think it's incredibly significant to have the United States and China signing the same
00:15 document, and I don't think the significance of that can be overstated. It may not
00:21 be that there's an enormous amount said in that document, but in order to get both
00:27 those great nations to sign such a document, that was always going to be the case.
00:31 So we'll look back on this moment, but we'll be looking back on it from whatever it leads
00:37 to and I think it will be seen as a significant turn where we get these two great countries
00:41 talking again.
00:42 I can't do artificial intelligence, but any intelligent observer would suggest that we
00:47 can't even regulate the internet. I mean, you can't regulate AI, can you?
00:52 I think your point is really well made, Jamie. I think one of the things that is unfortunate
00:57 about the summit is that it's not addressing the real issues that we're facing in our different
01:01 societies, with our different approaches to regulating them. Artificial intelligence is
01:06 just an extension of the internet. It's just an extension of this rapid form of communication,
01:11 which is massively disruptive to the way we run our societies. There are benefits
01:16 accruing to certain people, and the institutional reactions our societies take to
01:23 that take a long time, so it's incredibly difficult to regulate. But that's not
01:26 just AI, that's the whole of cyber, because of the speed at which these new capabilities are deployed.
01:31 As you say, some big players are there: China's there, the United States is there, the UK
01:35 is there, the EU is there. But, I mean, there are lots of people who are not there. Can
01:39 this gathering and the ones that follow make any progress if there isn't complete buy-in
01:46 from everyone around the world?
01:49 I think it's an incredibly good point. I think that there's a real danger for the developed
01:54 world to be navel-gazing here. The issues that we might feel in professionalized societies,
02:00 where we have large numbers of lawyers and accountants who feel their roles are threatened
02:03 by these technologies, are extremely different to the issues that developing economies are
02:08 facing. And I think many of them are looking on at this quite bemused, given the nature
02:13 of the conversation we're having, when these technologies present enormous opportunities,
02:17 but also enormous dangers, for their societies. The potential for misinformation to disrupt
02:22 delicate societies is absolutely tremendous. And many people have already died because
02:27 of that. And I think that it would be worthwhile having more of that conversation as well as
02:32 the conversation on these frontier risks.
02:35 Where and how is AI already being used safely, if I can put it like that?
02:41 Well, it's quite pervasively used. I mean, in terms of recommendations on the internet,
02:48 it's filtering a lot of what we see. Now, whether it's being used safely in
02:52 that context is the question, because although those are quite inconsequential decisions,
02:56 what adverts we see, what social media posts we see, they have a downstream effect:
03:00 they create divisions in society where certain political groups will only look at their own
03:06 posts, whereas in the past they may have been presented with more information from a wider
03:10 spectrum of media.
03:11 I think one really interesting domain where AI is progressing slowly but has enormous
03:16 potential for benefit is in medicine, where, of course, we already have an enormous amount
03:19 of regulation about what a medical device consists of and what it should do before
03:24 it can be safely deployed. So there are lots of domains where we've got good regulation,
03:29 and that's already being applied in collaboration with the people deploying these technologies.
03:33 But naturally, those things move a bit more slowly, because conforming to the regulation requires
03:37 a lot of work.
03:38 Neil, I'm sure we'll talk again in coming weeks and months. For the moment, thank you
03:41 very much. Neil Lawrence, Professor of Machine Learning at Cambridge University.
