20 per cent of world's R&D budget is spent on AI: Toby Walsh, chief scientist at UNSW
Transcript
00:00Good afternoon.
00:02Good afternoon.
00:03There's one word you used in the introduction that I really would like to talk about which
00:08is rapid.
00:10Because I don't think people really fully understand, you know, I've worked in the field
00:15now for actually it's almost 40 years.
00:1740 years.
00:1840 years, yes.
00:19You know, so 40 years when there was no AI.
00:22Well we were working on it.
00:25But the thing that surprises me and many of my colleagues working at the cutting edge
00:30of AI today is not the technology.
00:34I mean the technology is like I imagined it was 40 years ago.
00:39Maybe indeed I had hoped we'd get here a bit quicker.
00:42But what really has surprised me is the rate at which the field is unfolding.
00:49I did the math the other day.
00:51A billion dollars is being spent on AI every day around the world.
00:57We have never seen that scale of investment in one technology ever before.
01:02It's hard to comprehend, you know, what a billion dollars a day really is.
01:08But to try and help you understand what that means, that's 20% of the world's R&D budget
01:15is being spent on one technology.
01:17Again that's without precedent.
01:18I mean the previous most expensive scientific endeavour was the Manhattan Project.
01:26And what's being spent on AI is orders of magnitude more, even adjusting for inflation, than was ever spent on the Manhattan Project.
01:34You know, for our audience here, just take a moment to let what Mr. Walsh just told us sink in.
01:4120% of the world's R&D budget today is spent on artificial intelligence.
01:47And Mr. Walsh has been in the field for the last 30, 40 years, researching, and now an
01:53expert.
01:54We have him with us, thrilled to have him with us on the stage today, Mr. Walsh.
01:57You know, where most of us, the only introduction to AI was possibly the Matrix or Terminator,
02:03you had practically envisioned yourself being here and having this conversation.
02:07Well, yes, indeed.
02:09Indeed, the reason I'm here is because, as a young boy, I read science fiction, I read, you know, authors like Arthur C. Clarke and Isaac Asimov.
02:21And they painted a picture of a future where we had all these intelligent computers and
02:26robots that these intelligent computers were controlling.
02:30And I remember realizing back then as a young boy, that that future was going to arrive.
02:36And we already had the tools, the computers that were going to help us build that future.
02:42And so we should try and set about building that future.
02:45That's the goal, you know, that I started out with.
02:50My father, who's in his 90s, said to me the other day, he said, it was remarkably prescient, back then, to think of that future.
02:57And I said, yes.
02:58He said, you were playing the long game, though, weren't you?
03:02No, it's commendable, commendable.
03:06You know, to tell our audience also, Mr. Walsh comes fresh from France and the AI Safety Summit, which turned into the AI Action Summit.
03:14That's right.
03:15Yes.
03:16The UK held the first summit at Bletchley Park, where, of course, they cracked the German
03:23Enigma codes in the Second World War.
03:27And a follow up summit hosted by France and indeed co-hosted by India.
03:34The French, though, renamed the summit.
03:37And I think this is actually quite telling.
03:39There's something there about the geopolitical direction we're going in, not just the scientific direction, but the direction the field is going in.
03:48It was renamed from a safety summit, which was exclusively focused on how we regulate, how we deal with the challenges the technology throws up. President Macron obviously is an action man, so the French called it the Action Summit.
04:05And you did see a real shift in perspective there, epitomized by Vice President Vance's speech, where the message was really: let's not regulate, this is an all-out race for technological and economic dominance.
04:20A race that was, of course, at the time, made even more poignant by the announcement from China of DeepSeek, their state-of-the-art model that had really shocked many people, especially people in the US: that China was a major player, was competitive despite the tech restrictions.
04:47But what I think is interesting is also to look at how that conversation now is going
04:53to shift, because India is hosting the next summit.
04:56You know, that's really interesting, because with what you say, where our Prime Minister
04:59is concerned, AI is right on top of our priority list right now.
05:03And there is a conversation that we are also having on how, you know, to regulate without
05:09stifling innovation.
05:12And there's also this race that you see, which you've just touched on, where, of course,
05:16America is doing really well, China is doing it cheaper by the dozen.
05:19And then there are other countries which seemingly might, you know, fall by the wayside of this AI race.
05:25So I think India is looking at that keenly.
05:27We'd like to develop rather than just watch.
05:30Yeah, well, I'm not surprised.
05:33I had the privilege and pleasure to have a meeting with Prime Minister Modi.
05:37Oh, really?
05:38How did that go?
05:39It was one of the more unexpected afternoons of my life when he came to visit Australia.
05:46And I had the pleasure to meet him.
05:49And he spent half an hour asking me questions.
05:52And what really impressed me, I'm amongst friends here, I think, and just between you
05:58and me.
05:59We're just going live.
06:00That's it.
06:01Okay.
06:02Okay.
06:03And the live audience.
06:04I talked to him for 30 minutes.
06:09He didn't have any notes.
06:11And he asked good questions for 30 minutes.
06:14Because like I pointed out, you know, AI is right on the top of India's priority list.
06:18And you see this race, we're hoping to get there.
06:23But would you want to delve a little on the countries that are really not where they should
06:29be in the race where AI is concerned?
06:32Because the kind of money that's being invested, the kind of technology that's needed?
06:36Yeah, well, I mean, you put your finger on what I think worries people, or perhaps worries people here in India: you're being outspent by what's happening in the US.
06:47Most of that billion dollars is being spent in the US, not all of it, but a large chunk of it, and a lot of it's being spent by China.
06:57And those are the two leading nations, and they're very much neck and neck today.
07:02But what's interesting, and what you should take away from the Chinese DeepSeek model, is not, oh, China is going to win this race, and what consequence is that going to have for the balance of power in the world.
07:14It's that China did it with much less money, only a few million dollars, and they did it despite the fact that the US had put trade restrictions on the latest GPUs, the latest computer hardware that you need to run some of this.
07:34The fact that China could do that despite these limitations, less money, fewer GPUs, tells you that India can do it.
07:44Tells you that where I live, Australia could do it.
07:47Tells you that it's not just the people with the deepest pocket.
07:51This is a technology.
07:53It's often compared to electricity.
07:55And actually, that's quite a good analogy, because the world benefits from electricity.
08:04It wasn't just Edison and the followers of Edison who got to have the profit of electricity.
08:11Electricity is everywhere in India, it's everywhere in Africa, it's everywhere in the world.
08:17Everywhere you go, electricity is powering so many things, and also driving the data.
08:23And so AI is going to be the same.
08:26You're going to see some people take a little bit of a first mover advantage, like Google
08:32has taken a bit of an advantage in the internet world.
08:35But that's just the beginning.
08:37And in the long term, it is a technology that we know works very well in a distributed fashion, because intelligence is already distributed.
08:48There's 8 billion intelligences around the world, and they coordinate and cooperate and
08:53compete.
08:55But that actually is quite a good way to break things down.
08:58And AI will be the same.
08:59It won't be the one super-duper Google intelligence, or the one super-duper DeepSeek intelligence, that wins.
09:07We'll all have our own individual AIs, running on our smartphones, running on our computers.
09:14And that's actually quite a good way to spread the power, spread the benefits, protect your
09:19privacy and all those other things.
09:21No, no, India is quite keen on that, because we completely do understand that AI is the
09:25new geopolitical weapon, and we must be on that bandwagon.
09:29But I'll draw you back.
09:30Let me say one other thing about India before we move on, which is that India, if it plays
09:36its cards right, could do very well.
09:39So whilst everyone is going to benefit from AI, certain people will benefit more.
09:47From electricity, we know that Siemens and Westinghouse perhaps did a bit better than
09:51a few other people.
09:53Well, India potentially has the raw materials to do really well in AI.
10:00The two most important raw materials are people, the brainpower of the people building it, and data.
10:09All of this is being driven by what we can learn from the data.
10:13Well, India's got a billion-something people.
10:17That's a huge amount of data.
10:20So if you plan it well, you have the raw ingredients to do quite well.
10:26Let me draw you back.
10:27How often do you get the question: in the future, will AI be limited to being an agent, or become the master?
10:33The ethics of AI.
10:35How many talks have you delivered on that?
10:36It's a concern.
10:37It is.
10:38And indeed, actually, I think people are more concerned about it today than they were even
10:43three or four years ago.
10:46Almost every conversation I have now, people say, well, if the machine's taking over, should
10:51I be worried?
10:53Should we be worried?
10:54And how worried should we be?
10:57I don't think you need to be too worried today about the machines taking over.
11:02I think you should be worried about humans behaving badly, whether that be presidents of countries on the other side of the world. Humans will use these tools to amplify the harm they do.
11:19But at the moment, machines do exactly what we tell them to do.
11:23That's all they ever do.
11:24Now, sometimes we haven't thought carefully enough about what we tell them to do.
11:29And anyone who's programmed a computer knows it's incredibly, frustratingly literal-minded.
11:34It does exactly what we tell it to do, even if that is quite harmful.
11:40But computers don't have any initiative of their own.
11:43They don't have any free will of their own.
11:45In that sense, you don't have to worry about the Terminator.
11:49But you do have to worry that people are going to use the technologies.
11:54Humans are going to direct the technologies in ways that could be harmful.
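To make the literal-mindedness Walsh describes concrete, here is a toy sketch, a constructed example rather than anything from the interview: given a carelessly worded goal, a program complies with the letter of the instruction and produces a harmful result.

```python
# Toy illustration of literal-mindedness: the computer does exactly what
# it is told, not what was meant. (A constructed example, not from the talk.)

def drop_bad_readings(readings, limit):
    """Instruction: 'drop readings above the limit'. Works as intended."""
    return [r for r in readings if r <= limit]

def make_error_rate_zero(readings):
    """Instruction: 'return data containing no erroneous readings'.
    The literal-minded solution: no data means no errors."""
    return []  # exactly what was asked for, not what was meant

sensor_data = [21.0, 21.4, 20.9, 350.0]          # 350.0 is a sensor glitch
print(drop_bad_readings(sensor_data, limit=25))  # [21.0, 21.4, 20.9]
print(make_error_rate_zero(sensor_data))         # [] - literally correct, quite harmful
```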
11:58We had the three chiefs on this morning.
12:01And that's one thing that does worry me.
12:03I've had the privilege to speak at the United Nations half a dozen times about how AI is
12:10changing completely the character of war.
12:14You only have to look at vision coming back from the Ukraine to see that war is being
12:20transformed by artificial intelligence.
12:24You only have to look at the tragedy unfolding in Gaza to see again how AI is making the decisions as to where the IDF intervenes.
12:34Again, AI is completely changing the character and nature of war. This isn't necessarily going to be purely a good thing, and we're going to have to be careful and mindful and think about it.
12:48Because let's not forget, there are many technologies that we have regulated as a planet.
12:55We've regulated nuclear weapons.
12:58We've regulated chemical weapons, biological weapons, cluster munitions, blinding lasers.
13:03Actually, when you come down to it and add it all up, there's quite a bit of technology we've looked at and said: well, that is actually morally repugnant, that is going to be distasteful.
13:13We already have good enough ways to defend ourselves: F-35 fighters, howitzers, lots of ways of defending and protecting ourselves.
13:27Do we really need to use this newest technology in ways that are going to look like a bad
13:33Hollywood movie?
13:34You know, Mr. Walsh, with what you've said, I gather that there is no real current threat
13:40that an agent is going to turn into a master anytime soon.
13:43But then...
13:44Not a current threat.
13:45I think actually we're going to be our own worst enemies.
13:49That we will enslave ourselves because we'll dumb ourselves down.
13:54We saw this, you know, I've always said social media should have been a wake-up call.
13:59A wake-up call about what happens when we hand over our attention to machines: that we're our own worst enemies.
14:08You know, it's us that are addicted to our smartphones.
14:12Anyway, the machines are designed to be addictive, I have to admit.
14:16But we're easily drawn in, we're easily seduced.
14:20It's our own human failings that get us there.
14:23And I do worry that we're going to supercharge that with AI.
14:27And if we're not careful, we'll wake up and we realize that we've dumbed ourselves down.
14:32We've handed over so many of the things that we do to the machines.
14:36And it's not that the machines have taken over.
14:39We've given them the right to do that.
14:42And if we no longer write business letters, because ChatGPT always writes our business letters; if we no longer write our speeches, because ChatGPT writes our speeches; well, maybe we lose the ability to communicate.
14:57But Mr. Walsh, when you read about very disconcerting events like, you know, Google's chatbot Gemini actually telling a student to go kill himself, that he needs to die.
15:10Or, and tell me if it's true, because some of us were really worried and then we thought we were maybe overreacting.
15:18When word came in that there were two AI agents who were speaking with each other, and then they realized that they are AI, and then they switched to a different language, an AI language. Did it happen?
15:30It did. Yes, it did.
15:32Now that's disconcerting. Now I will think that machines are taking over.
15:36No, a friend of mine actually was behind this experiment.
15:39Okay, could you explain?
15:41But it was reported in a way that was much more sensational than what happened.
15:47So they were building some agents, some AI agents, that could learn to trade.
15:52So, you know, I want two bags of sugar and I want a bag of flour.
15:58And they got the AI agents to trade with each other.
16:02And to begin with, there was a human in the loop, so the agents were speaking in English.
16:07But when the human got out of the loop, the agents worked out, well, English is quite inefficient.
16:14I could say, you know, one X and three Rs.
16:17And they worked out that X stood for sugar and R stood for flour.
16:22That was much more efficient.
16:23And so it wasn't being intentionally malevolent in any way at all.
16:29It just realized that English is pretty inefficient.
16:32Computer speak is much quicker and so much easier, much more precise, much less ambiguous.
16:38There were lots of reasons why that made sense.
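As a rough sketch of the dynamic Walsh describes, a constructed toy rather than the original experiment's code: the same trade offer can be carried in a few invented tokens instead of a full English sentence, which is essentially all the agents' "own language" amounted to. The codebook symbols X and R below follow Walsh's example; everything else is assumed for illustration.

```python
# Toy sketch of the trading-agents anecdote (not the original research code).
# Once no human needs to read the messages, a compact invented code carries
# the same offer far more efficiently than English.

CODEBOOK = {"sugar": "X", "flour": "R"}  # symbols as in Walsh's example

def offer_in_english(give_good, give_qty, take_good, take_qty):
    return (f"I will give you {give_qty} bag(s) of {give_good} "
            f"in exchange for {take_qty} bag(s) of {take_good}.")

def offer_in_tokens(give_good, give_qty, take_good, take_qty):
    # "1X>3R" reads as: 1 of sugar offered for 3 of flour.
    return f"{give_qty}{CODEBOOK[give_good]}>{take_qty}{CODEBOOK[take_good]}"

english = offer_in_english("sugar", 1, "flour", 3)
tokens = offer_in_tokens("sugar", 1, "flour", 3)

print(english, f"({len(english)} characters)")
print(tokens, f"({len(tokens)} characters)")  # same offer, over 10x shorter
```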
16:41But that does, you know, leave room for being malevolent.
16:44Yes, it does.
16:46And so human oversight is always going to be really important, especially where the applications are significant, high-stakes.
16:56But, I mean, the other thing, and to talk about chatbots, you know,
17:00encouraging people to commit suicide, that has really happened.
17:04People, as far as we know, have been encouraged.
17:06But again, I look at social media and say, we didn't learn the lesson.
17:11We're running this big experiment on humanity, especially on our young people.
17:19If you look at the statistics in most countries, anxiety levels amongst young people are going through the roof.
17:29Mental health issues are going through the roof, coinciding with the introduction of social media.
17:35Now, it's very hard to know whether the correlation is causation,
17:40but one has to start suspecting strongly that, you know,
17:44like the experiment we ran with tobacco, this is not proving to be a good thing.
17:49And why is it that we let the technology companies
17:54put these tools out in the public sphere
17:58without running the tests and the safeguards?
18:02If I said to you that there was a drug company
18:05and it was testing its product on the general public,
18:09it didn't have any regulatory oversight,
18:12people were dying.
18:14You'd say, no, no, no, no, surely, surely that's outrageous.
18:17And I'd say, no, of course, that doesn't happen.
18:19Drug companies are carefully and appropriately regulated.
18:23They have to run trials, they have to produce evidence
18:26to show that their products are not going to harm people.
18:29And that's with their physical health.
18:33But somehow we forget, for people's mental health,
18:36that it's just as important.
18:39And social media, and now AI,
18:42is running a big experiment on humanity.
18:45And I don't think we're putting in the appropriate oversight to ensure that.
18:48You know, there will be some good things with it.
18:51These tools are very helpful for people exploring issues, people who are having issues around addressing their sexuality, for example.
19:02Fair point, one needs to, you know, have that bit of regulation while not stifling innovation.
19:07We need to walk that thin line.
19:08We do, but I think it's a bit of a myth
19:12that the tech companies have helped perpetuate
19:15that regulation necessarily stifles innovation.
19:18And indeed, there's actually quite a good argument
19:21that regulation will prevent innovation from being stifled.
19:27We saw this, for example, with genetically modified foods.
19:32The public was not really brought along with the idea, did not feel that it was being appropriately regulated.
19:38And now, in many countries I go to, people talk about frankenfoods, and we won't get the benefits that we need from genetic modification: the climate resistance, the disease resistance.
19:51Forgetting, of course, that the last 2000 years
19:54of agriculture have been slowly genetically modifying food.
19:58Mr. Walsh, I would, you know,
19:59like to talk a bit more about this
20:01and we're gonna take that off stage,
20:02but I wanna quickly come to this: in one of the four books that Mr. Walsh has written, he's actually set a date, 2062.
20:11And it's very disconcerting and scary, because he seems to suggest that by 2062, AI is gonna be as intelligent as humans.
20:21Now, does that mean that AI will develop a conscience?
20:24What do you mean by that?
20:28So I should say, this wasn't just my prediction.
20:31I surveyed 300 colleagues around the world,
20:34other experts in AI.
20:35That was their average prediction, 2062.
20:38Some of them said it was gonna be quicker.
20:40Some of them said it was gonna be slower,
20:41but the average was 2062.
20:43Which is-
20:44Will they develop true creativity?
20:45Will they have general intelligence, consciousness?
20:48What are we talking about?
20:50So the question I asked them was,
20:52when would computers match humans
20:54in all of their cognitive intellectual abilities?
20:58Whether they could do all the things that we do
21:01that require intelligence.
21:03I think it would be conceited to think that they couldn't be creative.
21:08Whether they'd be conscious.
21:10Now that, I think is one of the most interesting,
21:14open scientific questions of the century.
21:17Because we know so little about consciousness.
21:20I can summarize the scientific knowledge of consciousness.
21:24It happens in the brain,
21:26somewhere towards the base of the brain.
21:28That's it.
21:29That's about what science knows about
21:32what is actually the most profound human experience.
21:35When you woke up this morning,
21:37you opened your eyes,
21:38you didn't say, oh, I'm intelligent.
21:40You said, I'm awake, I'm conscious again.
21:42That experience of being alive,
21:44which is the rich thing that we all enjoy,
21:47and is something that we know so little about.
21:51And what will happen by 2062 is that we'll have an answer to the question: is consciousness something that's limited to biology?
21:58Or is it something that could be recreated in silicon?
22:03We have no idea today what the answer to that question is.
22:07They might be conscious,
22:09or they might be zombie intelligence.
22:12Intelligent, but without the consciousness
22:13that goes with it.
22:14Consciousness and intelligence in the animal world
22:17seem to be very connected.
22:19All the things that are intelligent.
22:20Do you see AI as a threat to humanity in the long run?
22:23Getting to the point.
22:24No, I see humanity being a threat to humanity.
22:28Much greater threat.
22:29Tell me something then,
22:30you know, because we're reaching the end of the session.
22:32For someone who has studied AI for so long, what is the one thing that you feel is not right about the way in which AI is developing?
22:43I think the one thing that's wrong
22:44is that it shouldn't be in the hands of me,
22:47or my colleagues.
22:49It should be in the hands of everyone.
22:50It's a technology that's going to touch all of our lives,
22:54and it shouldn't be up to people,
22:57many of them in Silicon Valley,
22:58many of them white male people with their own agendas,
23:03who are making those fundamental decisions
23:05about what is the future,
23:07what is 2060 going to be like?
23:10What is the world our children are going to inherit?
23:13It's not the choice of them,
23:15it should be the choice of all of us.
23:17We're also living in a geopolitical world
23:19where AI has become, like I said, a possible weapon.
23:22Which field, where we stand today, do you think could be most harmed if, you know, that line between an agent and a master blurs?
23:34I do worry about two things.
23:37One, and we touched upon this a little bit, is the way that we may transform warfare.
23:43And the other thing that I really worry about,
23:45because it is a very delicate thing
23:47and a very important thing, is our democracy.
23:51I'm here in the world's largest democracy.
23:54And I do worry that AI will be a tool
23:57that will supercharge misinformation, disinformation,
24:01will be used to polarize us.
24:03We've seen, I think unfortunately, a taste of what can happen with social media.
24:09And now we're going to add to that.
24:11So with social media, we can reach very clearly,
24:14very cheaply into people's lives.
24:16And now we're going to be able to put AI into that,
24:19personalize, and put really persuasive content
24:23that will, I fear, not bring us together,
24:27but drive us apart.
24:29And democracies seem to be quite delicate at the moment,
24:33not just in this country, but in most democracies.
24:36Democracy, you know, the Democracy Index
24:38is actually going down.
24:39The number of functioning democracies,
24:42which does not actually apparently include
24:44the United States, according to the Democracy Index,
24:47is going down, which is a worrying trend.
24:50For many years, from the start of the last century,
24:53the number of democracies was increasing.
24:54We were becoming a more democratic and more representative world.
24:58That seemed to me a good trend.
25:00Unfortunately, we're going backwards.
25:02And I do worry about how technologies, social media, AI,
25:07are providing the tools to help bad people
25:13make those changes.
25:15You know what I'm going to do is I'm going to open the floor.
25:18Our vice chairperson, Ms. Kalli Purie, wants to ask you a question.
25:24Hello, hi.
25:25I just want to go back to an earlier question
25:27that Preeti asked you about two AI agents
25:30realizing they're talking to each other
25:33and switching from English to a more efficient language,
25:36which is, you know, their own language.
25:39What I want to ask you is, in that experiment,
25:42did the AI agents realize that it's more efficient?
25:47Or did the humans who were running the experiment
25:50say to the AI agents, hey, it's more efficient, switch?
25:53Which way was it?
25:55And were the people carrying on the experiment
25:59expecting this to happen?
26:03Good questions.
26:05The way it happened was they gave the AIs the goal of agreeing a deal, an exchange of goods with each other, as quickly and as efficiently as possible.
26:17And so the AIs worked out, well, it was quicker and more efficient to stop speaking English and start speaking their own language, in that sense.
26:26Now, when it was reported in the press,
26:28people said, oh, the scientists switched it off in horror.
26:32That wasn't the case.
26:33My friend said, we switched it off
26:35because it wasn't what we were trying to achieve.
26:39We weren't very interested in what they were doing.
26:45They did retrospectively debug what the messages were,
26:49but they said, it wasn't that we were frightened
26:51of the machines going off and making other plans.
26:55They were still sticking to the goals.
26:57Again, computers do what we tell them to do.
26:59They were still just trying to exchange goods
27:01with each other.
27:02They were just doing it in a more efficient way
27:04because they discovered that English was an inefficient way to do it.
27:09And your friend wasn't horrified by that? Because that sounds quite horrifying to us.
27:14She was horrified by the way it was reported, the way that it turned into a conspiracy theory.
27:20Mr. Walsh, you've been researching AI too long
27:24to be this casual about this.
27:26No, but it highlights a point,
27:28which is that we do live in a world
27:31in which conspiracy theories and rumors
27:35get easily amplified.
27:36By AI as well.
27:37Yes, and therefore, I have great respect
27:40for the important role that The Fourth Estate,
27:43journalism, magazines like India Today,
27:46play in filtering, finding out what is true
27:49and what is not quite true to help us understand
27:53what are the real challenges that face society today
27:56and what are the things that we don't have to worry about.
27:58You know, we've reached the end of the session,
28:00but I'm gonna just ask you one quick question
28:02because you're coming from France, where they changed the AI Safety Summit to the AI Action Summit.
28:07And India's gonna host the next AI action summit.
28:10Should we change it back to the AI safety summit
28:14is the question that I ask.
28:15No, I think you should change it to the AI for good,
28:18AI for society, AI for all summit.
28:22So that all of us,
28:24not just the rich white people in Silicon Valley.
28:27That's a very philanthropic way of looking at AI.
28:31But I appreciate you joining us.
28:33So ladies and gentlemen,
28:34please give a warm, warm round of applause for Mr. Walsh.
28:38He should be back with us
28:39when India hosts the AI safety summit.
28:42We're gonna have him back.
28:43Thank you for joining us.
28:45My pleasure.
