Emran Mian, Director General, Digital Technologies and Telecoms, Department for Science, Innovation and Technology (DSIT); Robyn Scott, Co-founder and CEO, Apolitical; Guy Williams, Head, UK Defence, Palantir Technologies; Moderator: Alan Murray, Chief Executive Officer, FORTUNE

Category: Tech
Transcript
00:00 (upbeat music)
00:01 - Thank you.
00:02 We have a lot to talk about
00:04 and a very short time to talk about it.
00:06 Emran, I'm eager to talk to you about
00:08 what can government actually do
00:10 to mitigate the downside risks of AI.
00:13 Robyn, I'm eager to talk to you about
00:14 how can government use AI to just be better at what it does.
00:19 But Guy, I think I have to start with you
00:21 because you're in the defense business.
00:24 You work for Palantir,
00:25 but work closely with the UK government.
00:27 In the last 48 hours,
00:30 we've all seen on television a pretty extraordinary event
00:34 that Iran sent more than 300 drones,
00:42 ballistic missiles, cruise missiles,
00:45 aimed straight at Israel.
00:47 And we're told that 99% of them
00:51 were shot out of the sky before they hit the ground,
00:54 which is, if true, extraordinary.
00:57 What does this event that we've all witnessed
00:59 in the last 48 hours tell us about
01:02 the changing face of national defense?
01:06 - I think the thing it's really highlighted for me
01:07 is if you go back in time,
01:10 not even 10 years,
01:11 but say four or five, maybe even three,
01:14 military operations are all about
01:17 the speed of decision-making,
01:19 or decisions at the speed of relevance,
01:21 I think is the term often used.
01:23 And historically, we've seen things like,
01:26 if you go all the way to before something's launched,
01:29 it might be looking at satellite imagery
01:31 to understand what things have moved,
01:33 what changes have taken place.
01:35 Now, with the introduction of computer vision models,
01:37 you're able to take a two kilometer by five kilometer image
01:40 and within a second,
01:42 identify all of the things you might be interested in,
01:44 whether it's changed
01:45 or whether it is something that you recognize.
01:48 Historically, that might have taken an analyst hours
01:50 to get to that point on one single image.
01:53 And now you're able to take thousands of images.
01:55 - 300 of them in a pretty limited time.
01:58 - Yeah, exactly.
01:59 And so the idea of identifying
02:01 where these things might be coming from
02:03 and deciding, yeah, these are the ones of interest
02:05 and then maybe tasking things,
02:07 the collection assets that you might have,
02:09 which could be sensitive or what have you,
02:10 and then passing that structured data through a model
02:14 to then understand what are the interesting things
02:16 you care about,
02:17 or the things that might be good indicators and warnings.
02:19 - So it's a sea change.
02:21 - What, sorry?
02:22 - It is a sea change.
02:23 - Totally.
02:24 - The compression of decision-making times has been--
02:26 - You were directly involved,
02:28 my understanding is you were directly involved
02:30 in working with Ukraine against
02:34 very much the same weaponry, by the way,
02:36 both the offensive weaponry and the defensive weaponry
02:40 that Russia was using against Ukraine.
02:43 What did you learn in that experience?
02:45 - The Ukraine example is fascinating and unique
02:50 in that you have seen
02:52 a complete tech sector nationalized into the defense sector
02:56 and the speed of change that they've been able to deliver
02:59 in terms of transformation and the use of AI
03:03 and other technologies has been incredible.
03:04 - Well, they had some help.
03:06 - A little help, yeah.
03:07 But I mean, again, one of the earlier commentators
03:11 was talking about getting your data in a healthy place
03:14 to then run the models across it,
03:16 and that's a very important part.
03:17 - The thing is we're fortunate in both of those cases
03:20 that the defensive capabilities were so strong,
03:24 but these are gonna empower
03:25 offensive capabilities as well, right?
03:27 I mean, it's gonna be a race.
03:30 - Yeah, I think the idea of strategic competition
03:33 is definitely with us,
03:34 and I think governments
03:37 investing in future technologies and AI
03:40 is gonna be pretty game-changing and deterministic.
03:45 - Emran, I wanna back up a little bit
03:47 and get to the broader question
03:49 of what AI means for governments,
03:51 which is the question you wrestle with every day
03:54 in your position with the UK government.
03:57 I guess the thing,
03:58 and someone said this from the stage earlier today,
04:00 is this is moving so fast, so fast.
04:04 How can government hope to keep up with the pace of change
04:09 and mitigate the negative consequences in any way?
04:13 - Yeah, so a big part of our approach
04:14 has been to do things differently.
04:16 So we've been much more open to bringing in expertise
04:18 from outside of government
04:19 than perhaps sometimes we are as government.
04:21 So we've brought in lots of expertise from AI firms,
04:25 from academia, including Ian Hogarth,
04:28 who chairs our AI Safety Institute,
04:30 who I think was speaking earlier in the conference,
04:33 and we've been much more open to that expertise
04:34 than sometimes we are as government.
04:36 We've also moved really, really quickly,
04:38 and in order to move as quickly as that,
04:41 you really need political support
04:42 all the way up to the top of government,
04:44 and we've absolutely had that.
04:45 We've had a prime minister
04:46 who's chosen to prioritize this set of issues,
04:49 and in particular, he leaned into saying
04:51 that the UK would host the first AI Safety Summit,
04:54 which we ran in November last year.
04:57 And I think that's another important lesson for us
04:59 in terms of dealing with some of the risks
05:02 or some of the potential downsides
05:04 of artificial intelligence.
05:06 We've gotta do it with other governments.
05:08 We've gotta do this together.
05:09 We've gotta build international governance around this,
05:12 and that's why it's been so important to us
05:14 to kind of begin to lead that conversation.
05:17 So we hosted the first summit last year.
05:19 We're co-hosting the next summit with Korea next month.
05:23 We are also trying to collaborate really extensively
05:26 on AI safety testing with other countries.
05:29 So our AI Safety Institute
05:31 just signed a partnership agreement
05:36 with its counterpart institute in the US,
05:36 and we're talking to these capabilities
05:38 as they begin to be developed in other countries as well.
05:40 So kind of that collaborative approach,
05:42 I think, is really fundamental.
05:43 - And so what are sort of the top three things
05:45 that you think you can accomplish with this AI safety effort?
05:49 - So I think the first thing
05:50 is to bind this set of countries together,
05:53 to build that connective tissue between the countries
05:55 which have frontier AI firms
05:58 and which you would expect to be at the frontier
06:00 of AI being deployed across the wider economy.
06:04 I think the second thing I'd say is
06:05 there's still a lot to do in terms of understanding
06:08 the science of AI safety.
06:11 So there's a lot of discussion about
06:13 what's the right approach to regulating frontier AI.
06:17 Our view has been that the best thing to do first
06:20 is to figure out what does good AI safety testing look like
06:24 and set standards for that,
06:26 and to do that in collaboration with other people
06:29 who are at the frontier of doing that work.
06:31 And that's what the AI Safety Institute has been focused on.
06:34 We wanna understand
06:35 what really good safety testing looks like,
06:37 do that in collaboration with the firms,
06:39 and do that in collaboration with other countries.
06:41 - And as the end result,
06:42 you give a seal of approval to certain models
06:45 and not to others.
06:46 What does this look like five or 10 years down the road?
06:49 - I think it's hard at this point to say
06:50 what it looks like five or 10 years down the road.
06:53 I think our approach at the moment
06:54 is let's establish the basic science,
06:56 let's figure out what good safety testing looks like,
06:59 and alongside that,
07:00 let's use our existing regulatory system
07:03 for the harms that are manifesting potentially here and now.
07:06 And I think that's another important part of our approach.
07:09 The digital markets that we're talking about here
07:11 are markets that we already regulate in important ways.
07:14 We have just passed legislation
07:17 to allow us to better regulate for harms that occur online.
07:20 Many of those harms will now also be created with AI,
07:25 but we've just regulated for that.
07:27 So we should use that regulatory framework
07:30 to deal with AI-generated harms
07:32 in the same way as we were planning to use it
07:33 for other things.
07:35 - Yeah.
07:35 Robyn, you're working from a different angle
07:40 on these issues.
07:41 You're trying to figure out
07:42 how can AI make government better at what it does?
07:45 Can you talk a little bit about what you're doing
07:47 and where you're seeing results?
07:49 - Yeah, so the perspective I bring to this
07:52 is running Apolitical,
07:54 which is a global network
07:55 used by about a quarter of a million public servants.
07:59 We also do online training,
08:00 and as part of that,
08:01 we have a government AI campus,
08:03 and we've trained already about 10,000 public officials
08:06 around the world.
08:06 So that's our lens.
08:08 The headlines are the potential is immense.
08:11 The Turing Institute came out with a report recently,
08:15 which speaks to this in the UK context,
08:18 but it's generalizable.
08:20 There are about a billion transactions a year
08:23 between government and citizens.
08:26 About 140 million of those
08:31 are complex but repeatable tasks,
08:35 and of those, the Institute estimates
08:38 that about 84% can be automated with AI.
08:42 So incredible potential to cut into the work of government,
08:47 improve citizen services,
08:49 improve the return on taxpayer dollars.
08:52 There's also a secondary reason for doing this
08:55 and for adopting AI,
08:56 because better users of AI are better regulators of AI,
09:01 and they're better buyers of AI.
09:04 And I think the UK really has shown leadership
09:07 on the general adoption front in government as well.
09:10 - And is there a willingness to do this?
09:11 That gets back to the retraining issue
09:13 we were talking about a few minutes ago.
09:15 Is there a willingness in government
09:16 to really embrace the effort?
09:19 - Yeah, absolutely.
09:20 So it's very interesting to look at government's
09:22 current status, sort of position on AI.
09:25 First of all, somewhat surprisingly,
09:28 your average public servant,
09:29 we've done a lot of polling on this,
09:30 is more positive than negative on the potential of AI.
09:33 That doesn't hold necessarily for prior technologies,
09:37 and given all the talk about concerns,
09:40 we've been surprised how generally optimistic they are.
09:43 There is much more optimism in emerging markets
09:47 versus developed markets.
09:49 The conversation in emerging markets is all about
09:52 this is gonna solve personalized education,
09:55 this is gonna bring great healthcare.
09:57 The conversation in developed markets
09:58 tends to have much more of a bent
10:00 towards safety and harms.
10:03 And economic nationalism is also becoming
10:08 a more and more important force.
10:11 Sorry, AI nationalism rather,
10:13 and we've seen big plays recently in the UAE and India,
10:16 and expect to see many more around the world.
10:18 So all those are context for adoption,
10:22 but there is general willingness to use it.
10:25 The vast majority of public servants
10:27 have already played with generative AI,
10:29 sometimes, I think, under approved guidance,
10:32 probably sometimes somewhat off the books.
10:35 And that's only going to increase pretty dramatically.
10:40 - Yeah, I'm glad you mentioned economic nationalism
10:42 because one of the things that makes this conversation
10:46 about geopolitical effects interesting and challenging
10:50 is the existence of China,
10:52 not represented on this stage.
10:55 China is certainly an AI superpower.
10:59 There are some people out there who seem to think
11:01 it will become the leading AI superpower.
11:04 And it has a very different background set of values.
11:08 It could be on the other side of any actual warfare.
11:15 It takes a very different approach
11:17 to many of the questions.
11:19 To use a simple example,
11:20 the UK has said very clearly,
11:23 we're not gonna use facial recognition
11:25 or other technologies to pick people out
11:27 in a massive crowd crossing the street.
11:30 I believe China's done the exact opposite,
11:32 saying that we can make great gains in social cohesion
11:36 if we have the ability to do that.
11:39 So how does China's strength in AI affect the world?
11:44 How does AI affect these conversations?
11:46 Emran, I'll start with you.
11:48 - Yeah, so our approach has been to include China
11:52 in the international conversation about safety.
11:55 So China was represented
11:56 at the AI Safety Summit at Bletchley Park.
12:00 And we think that's important
12:01 because China produces a lot of fundamental research in AI.
12:05 And it has a very busy and a very active
12:08 and a very interesting AI ecosystem.
12:10 So having China as part of this conversation
12:12 we think is important.
12:14 But at the same time,
12:15 we've been really clear as a government
12:17 that where China is engaged in activities
12:19 that affect or harm UK interests,
12:22 we will say so clearly and plainly.
12:25 So we've done that in the past few weeks.
12:27 We have called out where China has been engaged
12:32 in offensive cyber operations in the UK.
12:37 And we think that those two things are entirely consistent.
12:40 - Well, there are a lot of people in the US these days
12:41 who think maybe they aren't consistent
12:43 and that should we be sharing all our research with China
12:47 if we're going to be in conflict
12:49 over whether it's cyber or actual warfare?
12:52 - Yeah, and that's why our approach will not be
12:56 to share all of our research with China.
12:59 We're very clear that there are areas of research.
13:01 - That you will protect.
13:02 - That we will protect and that we do protect.
13:05 - Robyn, how do you think about this?
13:08 - We're looking very much at the practice,
13:11 the applied side of AI in government.
13:13 And one of the interesting general omissions,
13:16 I think there's a general omission around thinking
13:18 about capability in governments.
13:21 There's a lot of talk about we must do this.
13:23 There's a lot of talk about adoption,
13:25 but there isn't the focus on building
13:26 the kind of diffusion capability
13:29 so you can actually roll it out across government.
13:31 So we look at that and then we look at how the exchange
13:34 of best practice happens because the field is moving so fast.
13:38 You are never gonna win if you just take
13:40 top-down guidance or directives.
13:43 You've got to talk to others who are in the field.
13:46 And I think one of the interesting dynamics
13:48 is actually the vast majority of countries
13:51 are quite good at collaborating and sharing best practice.
13:54 We see a lot of that on our platform.
13:56 China kind of doesn't take part, for the most part,
14:01 explicitly, at least in the same way,
14:03 in that sharing of best practice.
14:04 So the question I'm interested in is,
14:07 is there gonna be a siloing around how AI is rolled out
14:11 in the Chinese government versus elsewhere?
14:13 And should we be trying to get more permeability
14:17 so there can be mutual learning?
14:19 Around non-controversial things like
14:21 how do you get great services to citizens?
14:24 How do you bring down the carbon footprint
14:27 of these AI technologies?
14:28 We heard about that earlier.
14:30 - Yep, and Guy in defense, how do you think about China?
14:33 - Well, I mean, as you pointed out,
14:36 social recognition is reportedly fantastic,
14:39 but when it comes to things like large language models,
14:41 apparently there's a slight limitation
14:42 because they restrict elements of the internet
14:45 available to specific areas.
14:46 So there is a theory that they might be slightly behind,
14:50 but to pick up Robyn's point there,
14:52 I think what's really important is almost,
14:55 not ignoring it, but being more internally focused,
14:58 looking at an organization,
15:00 and in parallel to the regulatory and the safety side,
15:03 is how do you really push the use
15:05 of these technologies to win?
15:07 Like, what are you doing day in, day out
15:09 to make the most of this tech
15:11 to actually improve your work?
15:13 And there's often this concern about losing jobs,
15:16 but we like to view it as, in a Marvel context,
15:19 of Team Iron Man versus Team Ultron.
15:22 And it's how do you bring this technology to humans
15:24 so they can be 1,000 times better at their job,
15:28 and with a view to how you actually win in the long run.
15:30 - Was that a national defense answer?
15:34 - Yeah, I think so.
15:35 I think, whether you're an analyst working
15:37 in defense intelligence or an operational commander,
15:41 it's how do you bring in this technology?
15:43 - But you have to be thinking about,
15:45 do their capabilities match ours?
15:49 Are our capabilities superior?
15:51 That's the nature of defense.
15:52 - And I think that's the important thing
15:53 of balancing the innovation and driving innovation forward
15:57 in parallel to the safety side,
15:59 rather than letting the safety side restrict
16:01 what you do in terms of innovation.
16:02 - Okay, last question I'd like to ask all of you to answer.
16:06 If AI ran the government,
16:07 what would we be doing differently today?
16:12 What would we be doing that we aren't doing?
16:14 And Guy, I'm gonna make you go first.
16:16 - We'd be doing meaningful work, not busy work,
16:19 would be my blunt answer.
16:21 - Still too much busy work in government.
16:22 - I think it's very hard.
16:25 - And AI can help us get rid of it.
16:27 - I hope so, yeah.
16:28 - Robyn.
16:28 - So there's been some data suggesting,
16:31 this is an aside, that a majority of people,
16:35 I think it's younger people in Europe,
16:36 actually want AI politicians,
16:38 so they want AI to run the government.
16:39 - As opposed to the politicians they have.
16:40 - As opposed to the flesh and blood ones of today.
16:43 Coming to a government sooner than you may think.
16:47 But in terms of what that would look like,
16:48 assuming it's benevolent and aligned AI,
16:51 I think the potential, the dream,
16:54 is that you get much better, much more personalized services,
16:59 hyper-personalized services to citizens.
17:02 And you can invest a lot more in those
17:04 because you are saving money
17:07 through so many efficiencies that you're also finding.
17:11 - Emran, final word.
17:12 - Yeah, I'd add, I think government is a complex activity
17:15 and it's too often perceived as an elite activity.
17:19 And I think AI gives us some real opportunities
17:21 to deal with that.
17:23 I think the way in which AI can help to simplify
17:26 and explain things to citizens, I think,
17:27 has real power to it.
17:29 I think the way in which it can help
17:30 to simplify interactions with citizens,
17:34 I think, has real power to it.
17:36 So I think breaking down that elitism
17:39 that I think a lot of people experience
17:40 in their interactions with government,
17:41 I think AI can help us with that.
17:43 - Well, thank you all.
17:44 You've given us a lot to think about.
17:46 Thanks for being part of Brainstorm AI.
17:48 (audience applauding)
