A State Department official on Thursday urged China and Russia to declare that only humans, and not artificial intelligence, would make decisions on deploying nuclear weapons.

Paul Dean, an official in the State Department's Bureau of Arms Control, Deterrence, and Stability, said during a press briefing that the United States has already made "a very clear and strong commitment that in cases of nuclear weapons employment, that decision would only be made by a human being."

"We would never delegate a decision on the use of nuclear weapons to AI. We stand firmly by that statement, and we have made it publicly with our colleagues in the United Kingdom and France," he continued.

"We would welcome a similar statement from China and the Russian Federation," Dean added, stressing that "we think it is an extremely important norm of responsible behavior."

The State Department said Secretary of State Blinken and Chinese Foreign Minister Wang Yi discussed the "risks and safety of artificial intelligence" during a meeting in Beijing last Friday.

"I really believe there is a genuine opportunity right now, as countries increasingly turn to artificial intelligence, to set out what the rules for responsible and stabilizing behavior will look like. And I think that is a conversation that all major militaries and major economies, such as the United States and China, need to have," Dean said Thursday.

He noted that the United States and 54 partners, not including China and Russia, have endorsed a political declaration on the responsible military use of AI. It is meant to ensure that there is no accountability gap in the military use of artificial intelligence, and that applications are developed and used according to strict technical specifications, with safeguards designed in so the technology can be deployed responsibly.

"This technology is really going to revolutionize militaries in a whole host of applications," Dean also said.

"And I want to emphasize here that the problem is not limited to use in combat; militaries will employ these technologies across all areas of their operations, to improve efficiency, logistics, and decision-making. I think that holds great potential, and I think there are significant benefits in it, but of course, as with all new technologies, there are risks if the technology is not used responsibly."
Transcript
You're watching The Context. It is time for our regular weekly segment, AI Decoded.

Welcome to AI Decoded. It is that time of the week when we deep dive into the world of artificial intelligence. Last week we looked at the advance of AI chatbots and the role they can play in disseminating false, misleading or harmful information. Tonight we're looking at the benefits and risks posed by AI on the battlefield.

In defence, the leverage that the West will get from its advantage in technology, and specifically AI, over adversaries that may have greater mass is going to be the future of warfare.

Yeah, we'll bring you a fascinating interview tonight with one of the men building the Pentagon's most advanced systems, Palantir's European Vice President Louis Mosley. But first, this from Reuters: don't let AI control your nukes. That's according to a senior US official this week, who has urged China and Russia to match declarations made by the United States and others that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons. And what about the battlefield itself? The Atlantic profiles Boston Dynamics, one of the most advanced robotics companies in the world. It makes robots for the US military. We'll show you a clip of that later in the programme.

Also here tonight, Stephanie Hare is with us, a regular companion, AI contributor and technology author. Hello. We're going to talk about AI in the military and a system that Palantir has built. It's called Titan. Full disclosure: you once worked for Palantir. So help us understand what it does.
Right. So, full disclosure, excuse me, the last time I worked at Palantir was in 2017. So it's been a while, and I have not personally worked on this system nor personally seen it. It's just important to clarify that. That said, what I've read about it, what's publicly available, is really interesting. Palantir is very advanced in its use of software, and in particular it has gone very big on AI. It is integrating software with hardware to come up with what's almost a full technology system. So using input from sensors, that's an example of hardware, plus other data streams, it's all going to be integrated to get a 360-degree view of the battlefield.
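To make that kind of integration concrete: the sketch below is purely illustrative (Palantir has not published Titan's internals), showing timestamped readings from independent sensors merged into one chronologically ordered feed, the raw material for a single picture of the battlefield. All names and data here are hypothetical.

```python
from dataclasses import dataclass
from heapq import merge

@dataclass
class Reading:
    timestamp: float   # seconds since mission start
    source: str        # e.g. "radar", "drone_cam"
    payload: dict      # whatever the sensor reports

def fuse(*streams):
    """Merge independently timestamped sensor streams into one
    chronologically ordered feed."""
    return merge(*streams, key=lambda r: r.timestamp)

# Hypothetical example: two sensors observing the same area.
radar = [Reading(1.0, "radar", {"track": "A1", "range_km": 12.4}),
         Reading(3.0, "radar", {"track": "A1", "range_km": 11.9})]
drone = [Reading(2.0, "drone_cam", {"object": "vehicle", "grid": "38S-MB-1234"})]

for r in fuse(radar, drone):
    print(f"{r.timestamp:5.1f}s  {r.source:10s} {r.payload}")
```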
Okay. Well, we'll come back to that, because we sent our North America correspondent, Sumi Somaskanda, to the Hill and Valley Forum in DC. It's a meeting of some of the most important figures in national security, and there she met Palantir's European Vice President, Louis Mosley.

What is the tool that you think is right now going to fundamentally change warfare as we know it?

I think AI is the short answer to that. We make operating systems for AI, for both corporate and government organisations. But in defence, the leverage that the West will get from its advantage in technology, and specifically AI, over adversaries that may have greater mass is going to be the future of warfare.

It is in use right now in Ukraine. What promising opportunities have you seen there? Because we know that Ukraine has been something of an experiment as well for some of these technologies.

It is absolutely the laboratory. It is the frontline. It's where we're seeing the newest and most interesting technologies get tested. And in fact, the speed at which we are learning is extraordinary. Take technologies like drones, for example, which, by the way, are the essential technologies on the frontline right now. They are changing fundamentally every six to eight weeks. The design itself of the drone, its obsolescence, is measured in weeks and months. And I don't think there's been an innovation cycle like that since the Second World War.

What is the challenge that you face in deploying some of that technology on the battlefield in Ukraine?

The challenge is Russian countermeasures: jamming technologies. Never underestimate Russian capabilities, and they are learning also very, very fast. And that's what drives that cat-and-mouse game between the Ukrainians and the Russians.

If you look at the situation on the battlefield right now, despite the use of this technology, the battlefield has stalled, and that is for a lack of conventional weapons. Quite frankly, the Ukrainians have run out of bullets. Does that show us the limits that this type of technology has?

Obviously, there are limits. In the end, it will come down to ammunition stocks and numbers of people. But this is where technology can also add the advantage that we discussed before. Because if you are in the Ukrainian situation, where you're outnumbered, Russia has three and a half times the population and far bigger manufacturing capacity, you're going to have to make each soldier count a great deal more. Each of those bullets, each of those missiles, counts a lot more. And that is where, with the addition of AI and technology, you can ensure that they all go as far as they need to go.
Yeah, those efficiencies are really important. Thanks to Sumi for going to do that interview for us. I mean, what he describes there is an arms race; that line that drone technology is changing and improving every six to eight weeks is terrifying. How do you control that?

You don't control that, necessarily, because what we're talking about is an iteration cycle. So Palantir is developing its weaponry, all of the other defence companies are doing the same, but so is Russia, so is China, so is, fill in the blank, any country you want on earth. Until we have some sort of international arms regulation for this type of weaponry, it is an open playing field at the moment. What needs to be ironed out, obviously, are the unintended consequences. How do you disengage or deactivate a system that's behaving not as you expect? And ultimately, who is overseeing that development? Is it the Department of Defence, or is there independent oversight?

I mean, that is the key question. The US Department of Defence has AI ethics; believe it or not, it has AI ethics principles that anyone can read. The question is: is it the companies marking their own homework? Is it the US Department of Defence deciding whether something is ethical or not, or someone who's independent? At the moment, I would say it's not really anyone. Right now, it's all very much marketing for all parties concerned. Everyone can point to their policy, but there's no enforcement.
OK, we'll come back to that thought. After the break, we're going to explore some of it in the company of this man.

Humanity will no longer be the only, or even the most, intelligent species on the planet. And everything will change. And if we don't control them, then the future will belong to them, not to us.

That is Conor Leahy, CEO of Conjecture, an AI safety company, who is warning that the AI trajectory we're on could quickly overwhelm our ability to control it. And nowhere is that more concerning than in defence and robotics. We're joined now by Conor Leahy, who is the founder and CEO of Conjecture. Welcome to the programme. It's an AI startup that works on controlling AI systems and aligning them with human values. I've got that right, haven't I?

That's correct.
Yeah. OK. You're not a defeatist on this. You've spoken out quite a bit about the threats that are coming from AI. In fact, you went to Davos, I think, earlier this year, armed with an array of policy solutions to control the advances that we're talking about. What did you hear in that interview there with the chief executive, or the vice president, I should say, of Palantir? What alarms you about what he was talking about?

Well, it was definitely nothing I didn't expect, let me put it that way. What we're seeing right now, as you already said very accurately, is an arms race, both on the battlefield and off the battlefield. And in an arms race, there really is no winner, and there's only one loser, and that's humanity. What we're doing here is signing over more and more control, more and more speed, more and more of our systems and our decision-making to machines. And these are machines that we do not necessarily understand. This is a very important thing to understand about AI systems: they're not like normal software. Normal software is computer code; it's written by a human, who sits down and writes lines of code saying what the computer is supposed to do, and the computer does those things step by step. This is not how AI functions. AI is more like grown. You have huge piles of data, and then you use massive supercomputers to grow a program that fits your data. And this program does not look like code written by a human. It looks like a bunch of numbers, a huge pile of billions of numbers. And if you execute these numbers on your computer, it can do a bunch of amazing things. But fundamentally, to this day, even our top scientists don't really know what's inside of these systems. And is that the kind of system you want making battlefield decisions, or, for that matter, civilian decisions?
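Leahy's "grown, not written" distinction can be made concrete with a toy example. In this minimal sketch, using nothing but numpy, no one writes the rule; an optimiser nudges a couple of numbers until they fit the data, and the finished "program" is nothing but those numbers. Real systems differ in scale (billions of numbers), not in kind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the machine is never told the rule, only examples.
# (Here the hidden rule is y = 3x + 1, plus noise.)
x = rng.uniform(-1, 1, size=200)
y = 3 * x + 1 + rng.normal(0, 0.1, size=200)

# The "program" is just two numbers, grown by gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    err = (w * x + b) - y
    w -= 0.1 * np.mean(err * x)   # nudge each number to reduce error
    b -= 0.1 * np.mean(err)

# No if-statements, no rules a human wrote -- only learned numbers.
print(f"learned program: w={w:.2f}, b={b:.2f}")
```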
I mean, picking through it, the way you described that, I've never heard it described like that. In real-time battle scenarios, who is going to go through it and pick out the numbers that count?

I mean, the truth is, more and more, nobody. What's most likely going to happen, and this has already started to happen in various drone programs and the like, is that it starts out as a system that gives advice to the human. But then, more and more, the human starts becoming a rubber-stamp machine. And more and more, it turns out, well, the system seems to detect well enough: just click the button, just click the button, just click the button. The Future of Life Institute has a fun dramatization of this, well, "fun" in quotation marks, called Artificial Escalation, where they go through a hypothetical scenario of what could happen when two countries are unsure of a situation. You have some signal, but you maybe don't know what it is. It can come down to the second, it can come down to the minute; you might not have time to make a human decision. And whoever has more AI decision-making can act faster, and then not even know what's going on. In 1983, there was a man named Lieutenant Colonel Stanislav Petrov, a Russian soldier stationed in a bunker outside of Moscow. And one day, while he was watching for American nuclear first strikes, he saw a missile appear on his radar. And he had very clear orders to escalate this up the chain of command, in order to trigger a nuclear counter-strike and thereby end the world as we know it. But he decided that it was unlikely to see a single missile, that this must be a technical malfunction. And thank God, he then decided not to push the button. We could be in a very different state right now if Mr Petrov had not been there that day.

Come back to that thought, Stephanie.

So that's a great example of what they call human-in-the-loop decision-making. And you're right that we are often reassured that we can use AI on the battlefield because we'll always keep a human in the loop; there will always be a human that makes the final decision on whatever the AI recommends. But I'd be interested to hear what you think about this, based on not just the Palantir interview but everything that we've been hearing, not just in Ukraine but in Gaza and other theatres of war. Do you think human-in-the-loop should give us comfort?
Unfortunately, I just don't think that is a practical political reality. The truth is that it's a race to the bottom: whoever has the fewest humans in the loop can act the fastest. Humans are slow. They make mistakes. They're expensive. They sleep. They get tired. If you just have a machine... we already have simulated dogfights between fighter jets controlled by AI systems that completely destroy even our top gun pilots. Even the best pilots in the US military cannot keep up with how fast these systems can react. So sooner or later, whoever is the first to ditch humans entirely is going to be the one who gets the superiority. And this is a very tempting thing for people to do, so I expect people to do it. As you said earlier, I was at Davos, and I run a company, so I talk to many enterprises and also civilian actors about this. And the number one thing I disappointingly hear a lot when I talk to various large companies across the globe is, publicly, they'll say, oh, AI is to assist humans, it's to enable more creativity and all these kinds of things. But I tell you, at the negotiating table, at the sales table, the only thing they want to hear is: how many people can I lay off if I buy your product?
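The slide from oversight to rubber stamp that Leahy and Hare describe comes down to where the human veto sits in the control flow. A hypothetical sketch of the two designs (every function name here is invented for illustration, not taken from any real system):

```python
import time

def model_recommends(track):
    """Stand-in for an opaque learned model scoring a radar track."""
    return {"action": "engage", "confidence": 0.97, "track": track}

def human_in_the_loop(track):
    # Design A: nothing happens until a person actively approves.
    rec = model_recommends(track)
    answer = input(f"Model recommends {rec['action']} "
                   f"({rec['confidence']:.0%} confidence). Approve? [y/N] ")
    return rec["action"] if answer.strip().lower() == "y" else "hold"

def human_on_the_loop(track, veto_window_s=2.0):
    # Design B: the action proceeds unless a person objects in time.
    # Under time pressure this converges on "just click the button".
    rec = model_recommends(track)
    print(f"Executing {rec['action']} in {veto_window_s}s unless vetoed...")
    time.sleep(veto_window_s)  # no veto mechanism actually wired up here
    return rec["action"]
```

Design B is faster precisely because the human can be absent, which is why the race to the bottom Leahy describes favours it.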
Let's expand that thought into this story from Reuters today, that the White House is now pushing for a non-proliferation treaty on AI, specifically when it comes to nuclear weapons. Is that feasible?

I think many things are feasible and possible. It is a proliferation problem, and I think this is an important way to think about it: a proliferation problem similar to how nuclear proliferation with the Soviets was a very hard diplomatic challenge. But the truth is, at some point, we as humanity have to make a choice. The choice is: do we continue this race to the bottom, handing more and more to the machines, until one day we're just not in control anymore? Or do we get together somehow? And I'm not saying I have a solution for this.

But in the current climate, that's unlikely, isn't it? I mean, we can't even agree on Ukraine or matters in the Middle East. There just surely isn't the bandwidth for a conversation such as that on AI, when the suspicion is so great.

A while ago, I was in the House of Lords, and I spoke a bit about this topic. And a similar question was raised: surely the Chinese or the Russians would not be willing to talk; it's impossible. And a venerable lord, Lord Brown, stood up and he said, well, we did it with the Soviets during the Cold War. It was hard, yes, it was very, very hard, but we're still here. I'm not saying it's easy by any means. And we are lucky that, at least at the current moment, the West is very technologically advanced compared to its competitors in the AI sphere. AI is currently bottlenecked by a very tangible resource, which is computing power. To build powerful AI systems, you need very, very large supercomputers. These are very big, very expensive, and are made by only around three companies.
So far, we've talked about missiles. We've talked about fighter jets that can attack one another. What about the soldiers of the future? I want to show you something. This is from a company called Boston Dynamics. They are the most advanced robotics company in the world. They make robots for the US military. And this is their latest model.

Imagine that coming towards you with a weapon. Now, the one thing you've got to remember when it comes to human intelligence is that we are mortal. The intelligence dies with us. But as you were explaining, and as this article in Forbes pointed out today, that robot is going to soak up every one of your numbers, every piece of human intelligence. It's going to grow exponentially. Into what?

Well, the answer is, of course, that we don't know. And I don't think that particular robot, hopefully, will be very dangerous. But look at where things are heading: larger and larger supercomputers, more advanced algorithms, better and better intelligence, systems that can process data in ways humans can't possibly imagine. Computers can think on the order of thousands or even millions of times faster than humans. Imagine we managed to build an AI system that is as smart as a smart human or a scientist, and it runs at a thousand times the speed of a human. That means it can do two years of research, two years of thinking, two years of planning per day. We know this is possible, given not much more than we already have today. We already have systems that can read every book ever written in an afternoon. We already have systems that can generate thousands of photorealistic pictures just like that. We already have systems that can control fighter planes in dogfights beyond any human capability. This is already possible today. And it's only speeding up. It's an exponential.
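The arithmetic behind that claim is easy to check. If a system matches a human researcher but runs $1000\times$ faster, then each wall-clock day buys

$$1000~\text{subjective days} = \frac{1000}{365}~\text{years} \approx 2.7~\text{years}$$

of work, which is the right order of magnitude for the "two years per day" figure quoted above.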
Speeding up with the wars. And one of the things I'm thinking about while listening to you is that after the Second World War, the Allies had to come together and create a new type of war crime, called crimes against humanity, in order to bring to justice the Axis powers, which were then Germany and Japan, and to hold them accountable. I'm wondering if we're actually going to have to update some of our war crimes legislation to take account of what's going to happen when the first machine kills a human, either intentionally or accidentally. But who's responsible?

Well, that's what I was going to say. Who do you pin it on? The manufacturer? The person who programmed it? But you and I both know, I mean, this is recycled code, or many, many authors of code, the sources of the data, the government who bought it. Honestly, this is going to be an absolute nightmare for jurists.

Just on that issue, can we talk about oversight? Because the Wall Street Journal has a piece today saying that the CEOs of some of the biggest companies, so we're talking about the likes of Sam Altman at OpenAI, are going to join this new Artificial Intelligence Safety Board to advise on regulation. Should they be on it?

Well, it depends what the purpose of this institution truly is. If its purpose is to help these companies grade their own homework and get good government contracts, well, then it's serving its purpose properly, isn't it? But it's kind of like putting oil companies on your climate change committee, which we laugh at, but this is not uncommon; this is quite a common practice, in fact. Well, who would know more about climate than the oil execs? They have all the information, correct? And so it's a similar thing here. Who do you put on your AI safety panel? Well, the people who know about AI, right? But as we've learned, such as with putting tobacco companies on your cancer research committee, or oil companies on your climate change committee, there might be a conflict of interest here.
Okay. I think we've terrified everybody who's watching at home. I'm terrified just listening to it; I don't know about you. But shall we lighten the mood a little? Because it's not all doom and gloom; there are a lot of good things out there with AI. There's a new music app called Udio that was previewed by Rolling Stone magazine a few weeks ago. And I'm going to introduce you to a comedian and musician, Ashley Freeze, who does a lot of stand-up, and a large part of his set is given over to music. You can tell by the guitars on the back wall. He's also a computer nut. So how are you using this app? What do you use it for?

Well, it's amazing. It's much lower stakes than all this defence stuff, thankfully. You put in some silly lyrics, it sings you a silly song. It's a lot more fun than the rest of the stuff you've been talking about.

And you've made us a song dedicated to the BBC.

I thought it was on message, yeah.

Right, let's play it.

Oh, the BBC
The BBC is owned by you and me
Funded by the licence fee
If it went away, I'd get the blues
What about my BBC News?
What about all those programmes I adore?

Oh my god, it's like the Director-General meets Michael Bublé. How long did that take you?

It took me about 25 seconds to write a lyric, and then a few presses of the button and some tweaking of parameters, until I got something that sounded worth sharing with someone else.

Why would this be better than sitting down with one of those guitars to create something new?

It's not. It's absolutely a party trick and a gimmick. But it is the most astonishing thing, as a creator of music, it's the most astonishing thing I've ever played with. It's like opening a toy box and suddenly there's all this Harry Potter magic in there that can realise any silly idea into what feels like a produced song in a few seconds. So I got addicted to it, and it's just been so much fun making these songs.

Will we be hearing from the estate of Frank Sinatra after that wonderful tune? Because I feel that Mr Sinatra's family will be interested. Are you worried about the rights, Ashley, when you put this music together?

So I've had to make the assumption that this is ethically sourced, in that it has reviewed such a large data set that, rather than deliberately impersonating one thing, it's doing what we all do, which is mimicry of an archetype rather than the specific thing. So I don't believe that is a Frank Sinatra song rewritten; I think that's just the average of lots of swing music put together. And, you know, as somebody who makes pastiche songs, it's remarkable that the machine can do it better than I can. I could probably get rid of these guitars. It's better than me.

Ashley, it's been great talking to you. Keep up the good work. Thank you for the music. Will you come back and do one for AI Decoded?

Anytime you like.

I look forward to that. Ashley Freeze there. Actually, just before we end: Samir, who comes on sometimes, says to me, look, our jobs are all going to change. We're not going to be creators; we're going to be curators. And you can see why that would be the case when you see what is possible.

As a researcher and writer, I'm going to challenge that and say that there's still something going on in my brain, from the reading, thinking, writing and editing process, that I have yet to see any software system come even close to touching.

Well, I won't give up the music lessons just yet then. That's it from us this week. Thanks to Stephanie and also to Conor for coming in. Extraordinary stuff that we've learned this week. Next week, it's AI and energy: how are we going to power these immensely sophisticated systems with the limited resources we have? Join us for that. Same time next week.