Zoubin Ghahramani, Vice President, DeepMind, Google; Professor, Information Engineering, University of Cambridge Dr. Anne Phelan, Chief Scientific Officer, BenevolentAI Dr. Marc Warner, Co-Founder and CEO, Faculty Moderator: Jeremy Kahn, AI Editor, FORTUNE; Co-chair, Fortune Brainstorm AI London
00:00 Thank you all for being with me, and thank you all for joining us.
00:03 So, generative AI is creating a lot of buzz, a lot of excitement about what is coming in
00:10 the future from these models.
00:12 I want to start by talking a little bit with Zoubin, who's immediately to my right, about
00:17 Gemini.
00:18 This is a very powerful model that Google has developed and has just come out with a
00:23 couple of new versions of.
00:25 What is so special about Gemini, Zoubin, and how is this potentially transformative?
00:29 Let's start there.
00:30 Yeah, thanks, Jeremy.
00:32 It's a pleasure to be here.
00:33 So, first of all, Gemini is a whole family of models, going from on-device models to
00:39 our very largest models.
00:42 And what's really exciting about Gemini is that they're built multimodally from the ground
00:48 up, so they can process images, audio, video, et cetera, alongside text.
00:55 And our latest version of Gemini actually has what's called a one million token context
01:06 window, which means that it can process and keep in memory a tremendous amount of information,
01:11 much larger than any of the other models out there.
01:14 That's like a whole book's worth of material?
01:16 It's many books, a whole podcast, a video, an entire code base.
01:21 And if you think of that in terms of the short-term memory that the model has that you can query
01:29 and interact with, the capabilities of that, we've only really scratched the surface of
01:33 that.
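To make the long-context idea concrete, here is a minimal sketch assuming the google-generativeai Python SDK and a long-context Gemini model; the file name and question are purely illustrative, not anything discussed on the panel.

```python
import google.generativeai as genai

# Hypothetical illustration of a long-context query: the whole document sits
# in the prompt, acting as the "short-term memory" described above.
genai.configure(api_key="YOUR_API_KEY")          # assumed placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed long-context model name

with open("entire_codebase.txt") as f:           # e.g. a concatenated code base
    corpus = f.read()

response = model.generate_content(
    [corpus, "Summarise the main modules and how they depend on each other."]
)
print(response.text)
```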
01:34 Interesting.
01:35 Now, I know when Gemini came out, at least one version of the model, there was an issue
01:39 around the image generation capability of the model: in an effort to try to make
01:43 sure that the images represent a diverse set of people and overcome some of the historical
01:49 biases in the data that people complained about with these models, you created
01:55 a model that was actually very hard to get to generate images of white people,
01:59 even in historically appropriate contexts, and it was quite embarrassing for you.
02:03 How do you overcome issues like that?
02:05 Because I think that's one of the problems now, is that people want a model that you
02:09 can just say, "Okay, well, be diverse when that's appropriate, and when it's not appropriate,
02:16 be correct to the historical context."
02:18 But you can't just tell the model that apparently, so how do you overcome this problem?
02:21 First of all, thanks, Jeremy, for asking.
02:27 It really highlights the fact that these challenges are difficult.
02:31 First of all, we want the tools, the generative AI tools to be relevant to people all around
02:39 the world.
02:40 So when you ask a model to create images of, let's say, a nurse in a hospital or a
02:46 school teacher or a board member of a company, it has to be relevant to people all around
02:53 the world.
02:54 Google is a global company.
02:55 We have people using our tools on every continent.
03:00 But the guardrails that we put in, in that case, were clearly wrong.
03:04 We came out and we actually said that.
03:07 We then pulled the image generation for Gemini off so that we could improve it and put it back
03:14 out in a way that's more acceptable to more people.
03:19 This is really part, I think, of the iteration process where you create tools.
03:25 These are cutting edge tools.
03:26 You put them out there.
03:28 People find them really useful sometimes.
03:31 Sometimes they find them offensive.
03:33 I think this is a healthy thing that the whole AI community is really grappling with right
03:39 now.
03:40 Marc, I want to get you in here.
03:41 You, at Faculty, help a lot of different businesses try to figure out how to implement AI technology,
03:45 and help the government do this to some extent.
03:48 I think this all comes down to reliability.
03:50 You have these very powerful systems.
03:51 People are very excited about the headline capabilities, but then they start playing
03:55 around with it and they're like, "Oh, but it's not reliable."
03:58 I think a lot of business leaders are struggling with the idea of how do I implement something
04:01 that's not reliable.
04:02 What advice are you giving people around that?
04:04 We think that people need to think about these in a human-centric way.
04:08 They are tools to be used in combination with people.
04:12 If you're careful and you're thoughtful about how you design that in from the start, you
04:16 give yourself this ability to tune the workload between the person and the machine.
04:21 That ultimately, one, puts you in a much better place right now because the models
04:27 aren't perfect and will have small failure modes that can mess up a particular workflow.
04:34 Two, it also future-proofs you to an extent because over time, you can tune that balance
04:41 between the person and the machine, so that as these develop in capability, and the truth
04:46 is nobody knows how fast they're going to develop and how far they're going to get,
04:51 but as they do develop in capability, you will be able to tune that workflow as you
04:56 go.
04:57 You'll have this nice possibility of being relatively future-proof.
05:01 Anne, at BenevolentAI, you're using these models to help with drug discovery.
05:06 A lot of people are very excited about the potential transformational effect on science.
05:10 Maybe you can talk a little bit about where you see this going and how big an impact you
05:14 think these models will have on science.
05:16 Yes.
05:17 Actually, your comments resonated really clearly for me because we do think of it as human
05:20 in the loop.
05:21 We have the capacity to surface huge amounts of data, biomedical data.
05:26 The field is expanding at a colossal rate, omics data, patient-level data, and to be
05:31 able to carefully curate and amalgamate all that information to be able to pose biological
05:37 questions to our data foundations is something that we've given an awful lot of thought to.
05:42 Some elements are moving really quickly in terms of automation and some are very heavily
05:46 reliant on the human in the loop kind of process, but fundamentally, drug discovery is very,
05:51 very difficult.
05:52 It's expensive, it's time-consuming.
05:55 Something like 95% of drugs don't make it through clinical development to launch, and
05:59 it costs something in the region of $2.5 billion per drug.
06:03 Anything that we can do, even incrementally, to increase our probability of success at
06:08 each incremental step of the process and inevitably bring down costs is going to have a massive
06:14 societal impact.
06:15 That's the value of AI carefully applied to drug discovery.
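As a back-of-the-envelope illustration of that compounding effect (the five-stage split and the five-point per-stage improvement are assumptions made purely for the arithmetic; only the roughly 5% overall success figure comes from the discussion above):

```python
# If ~5% of programmes survive five sequential stages, each stage succeeds
# roughly 55% of the time. A 5-point gain at every stage compounds into a
# roughly 55% relative improvement in the overall success rate.
stages = 5
baseline_stage = 0.05 ** (1 / stages)   # ~0.549 per-stage success
improved_stage = baseline_stage + 0.05  # ~0.599 per stage

baseline_overall = baseline_stage ** stages  # ~0.050
improved_overall = improved_stage ** stages  # ~0.077

print(f"overall success: {baseline_overall:.3f} -> {improved_overall:.3f}")
```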
06:19 When you talk about carefully applying, can you talk a little bit about what's involved
06:22 there, and how do you prevent some of the potential failure modes, as Marc was saying,
06:27 of these models?
06:28 Yeah.
06:29 I think, incrementally, the first step of drug discovery is identifying the target that
06:33 you want to modulate.
06:34 We've spent a lot of time thinking about trying to integrate all biomedical data so that we
06:39 can understand the signaling pathways and mechanisms of health and disease so we can
06:43 choose the right target.
06:45 Fundamentally, if you have the wrong target, it doesn't matter how good you are at the
06:48 rest of it.
06:49 The whole thing's doomed, really.
06:51 Starting really carefully with the right target is a big first step.
06:55 Actually, as we use some of these more generative models, we're really thinking carefully about the
06:58 prompt and asking very careful questions of the technology so that you don't get these
07:03 kinds of hallucinatory answers ... It's not just one question,
07:08 one answer.
07:09 It's a series of very carefully crafted questions that get you a more and more refined answer
07:13 back that then we can use.
07:15 It's actual data that we can use for the next step of the process.
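As a rough sketch of that chained-questioning pattern (the ask_model wrapper, prompts, and example questions below are hypothetical illustrations, not BenevolentAI's actual pipeline):

```python
# Hypothetical sketch: each answer is folded back into the next, narrower
# question, so the output is progressively refined rather than taken from
# a single prompt.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("call whichever LLM you use here")

def refine(topic: str, questions: list[str]) -> str:
    transcript = f"Topic: {topic}"
    answer = ""
    for question in questions:
        answer = ask_model(f"{transcript}\n\nQuestion: {question}")
        transcript += f"\nQ: {question}\nA: {answer}"  # carry prior answers forward
    return answer

# Example usage (hypothetical questions):
# refine("target identification for fibrosis",
#        ["Which pathways are implicated?",
#         "Which of those have genetic evidence in humans?",
#         "Which targets in that list are druggable?"])
```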
07:18 Interesting.
07:19 Speaking of questions, we're going to go to questions from the audience in a minute, so
07:22 please think of your questions.
07:24 When you talk about using these tools for science, I know DeepMind's done a lot of work
07:28 around this.
07:29 AlphaFold, a huge success.
07:33 Recently though, there was a paper you guys had on material science that had been published
07:36 in Nature.
07:37 It came up with thousands of new materials previously unknown to science.
07:40 It sounded very exciting.
07:42 There's recently been some criticism, though, that a lot of those materials are not really
07:44 synthesizable, not easy to manufacture.
07:47 Maybe we're getting ahead of ourselves.
07:49 As someone who's been in this area for a long time looking at the research potential of
07:53 these models, how are you thinking about it?
07:55 Are we overhyping what these can do for science?
08:00 I don't think we're overhyping, because I think actually one of the most exciting things that
08:06 AI can do for humanity is advancing our ability to create other new technologies.
08:16 Let's talk about material science.
08:18 This is a very exciting area.
08:21 The paper in question actually had 2.2 million new stable crystals that were automatically
08:31 generated from the AI model.
08:35 If you think about what new materials can help you do, they can help you design better
08:41 batteries.
08:43 They can help you design better photovoltaics, solar panels, medical devices,
08:51 all sorts of things that help us address societal
08:58 challenges.
09:01 As for the process, we stand by the results of the paper.
09:05 It was peer reviewed in Nature, which is really the top journal academically.
09:11 It's part of a healthy scientific process where you put out some results and people
09:18 come out and question them.
09:20 You go back and forth and hopefully the truth comes out.
09:24 The paper is really exciting along with a lot of the other work that has been done in
09:30 AI for the sciences in many areas like biology with AlphaFold or fusion energy.
09:37 Marc, do you think the hype is getting ahead of us here in terms of what these models
09:41 can accomplish?
09:45 I think the AI for science that DeepMind does is unbelievable and totally brilliant.
09:49 I do think in other contexts, it's easy to over-claim for these models.
09:56 You look at when we were promised fully autonomous cars and it's taken a bit longer than we expected.
10:03 I think if people are thinking that these are just going to automate away jobs
10:08 one for one, that will be much further away than most people anticipate, hopefully much
10:14 further away as well.
10:15 I think it's both true and good that it would be further away.
10:20 None of that is to say anything other than the science these guys are doing is totally
10:23 amazing.
10:24 I want to go to questions from the audience.
10:26 So please, if you have a question, please raise your hand and please wait till the mic
10:29 gets to you, and I will call on you.
10:31 There's a question here, if we can get a mic to this lady here.
10:35 If you could please state your name and your affiliation when you stand up.
10:40 Hi, I'm Natalia Jaszczuk.
10:43 I lead product at an ed tech called LearnLight.
10:46 Actually, I just wanted to respond to the last comment you made around, well, it takes
10:51 us longer than we sometimes expect to progress with technologies.
10:56 Part of that is due to our acceptance or non-acceptance of risk.
11:01 Maybe we could have progressed faster with autonomous cars if we were ready for more accidents.
11:07 So it feels to me like it's the same with AI.
11:11 So I wonder, how do you think we can actually find that balance between accepting some risk?
11:17 Is there a lesser level of risk that we're ready to accept because this technology is not human error,
11:21 or do we actually need to?
11:22 That's a great question.
11:24 Marc, I want you to answer that first.
11:26 When you're advising people on that question, like how much risk should I accept?
11:30 How do you reach that balance?
11:32 Well, it's obviously context dependent.
11:34 So if somebody is saying, we want an algorithm that will put a red jumper or a blue jumper at the
11:39 top of our website, then it's basically use whatever you want.
11:43 It doesn't matter too much.
11:45 Then in something like a medical context, where the ultimate decision can, in principle, come
11:51 down to life or death, you care much, much more about that risk tolerance.
11:59 At the moment, I don't think there is anything better or more thorough than working with
12:05 the organization themselves.
12:06 Because it is important to remember that even though AI is a new technology, the thing you're
12:12 using it for is still the thing you were already doing in almost all cases.
12:16 So if you're a doctor, you are already making life-and-death decisions.
12:19 And now AI is coming into that.
12:21 So whether that was like a bureaucratic decision or a technological decision or some simpler
12:26 algorithm, in any case, you are used to wrestling with exactly the ethical dimensions of that
12:33 problem.
12:34 And so in the context of any given problem, there's almost always a large amount of preexisting
12:42 thought that you can tap into carefully.
12:44 And Anne, when you're dealing with a process that costs $2.5 billion and takes 10 years,
12:49 how do you view that sort of risk assessment?
12:51 And is it, oh, if we can shave any time and money off of that, it's worth it?
12:55 Or not always?
12:56 Yeah, I think, I mean, there is a risk associated with it.
12:59 But I think the risk can be quite a positive force for disruption.
13:03 So when you think about drug discovery as in designing the actual drug, we have very
13:07 experienced and very seasoned medicinal chemists who can do that design.
13:10 They're augmented now with all sorts of different predictive tools to enable them to make different
13:14 molecules.
13:15 And sometimes the tools will suggest something the chemists would never, ever have thought
13:18 of.
13:19 And they inherently think, well, that's not right.
13:21 That's too dangerous.
13:22 But actually, then you kind of think, well, actually, maybe components of that could be
13:26 useful.
13:27 So I think it's risk management.
13:28 You've got to be open to the notion of risk and what it can do in terms of disruption.
13:33 But there have to be guardrails and controls around how you view that kind of prediction.
13:39 So I think, yeah, there's scope for this as long as it's carefully monitored.
13:42 Great.
13:43 Other questions from folks in the room?
13:45 Please raise your hand.
13:46 There's a question right here.
13:48 Please stand up and say your name.
13:50 Helmut Ludwig, Southern Methodist University and the board of directors of Hitachi.
13:55 At lunch, we spoke about the importance of the triangle between, on one hand, IT backbone,
14:00 on the other hand, data analytics, data science, and domain knowledge.
14:04 You talked about the human in the loop to make sure we get the right quality.
14:08 But often the person with the domain knowledge is the hardest to get into the loop.
14:13 Can you give some practical advice?
14:15 How do we make sure that we actually get this human in the loop in the right way?
14:19 That's a good question.
14:20 Marc, do you want to take that one?
14:21 Yeah.
14:22 Well, we think of it as human in the loop and human over the loop.
14:24 So initially, when you start building a system, you probably do want a human actually in the
14:28 loop, as in nothing goes out without a human seeing it.
14:32 But over time, as you build confidence, you can have a human over the loop putting policies
14:37 to guide it.
14:38 Now, the problem is that you do actually have to build your systems in a slightly different
14:42 way.
14:43 If you want to make them properly human-centric, it requires a slightly different set of technologies,
14:49 a slightly different set of design principles.
14:52 So we've thought quite a lot about how you do that, but have not yet published it in an easily
14:57 accessible form.
14:58 So I promise I will write a blog, and I will send it around.
15:03 But in essence, the way you do it is you start by designing for human and machine to work
15:11 together, and then you structure the algorithms in a more modular fashion so that you basically
15:16 inherently build in more explainability and more causality into the system.
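A minimal sketch of that pattern (the names and the confidence threshold below are illustrative assumptions, not Faculty's implementation): start with every output reviewed by a person, then relax to a human-set policy that only escalates uncertain cases.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    confidence: float  # model's own estimate, 0.0-1.0

def human_review(decision: Decision) -> Decision:
    # Placeholder for a real review queue or UI.
    print(f"Review needed: {decision.output!r} (confidence {decision.confidence:.2f})")
    return decision

def route(decision: Decision, over_the_loop: bool, threshold: float = 0.9) -> Decision:
    """Human in the loop: every decision is reviewed before it goes out.
    Human over the loop: a policy (here, a confidence threshold) decides
    which decisions are escalated for review."""
    if not over_the_loop or decision.confidence < threshold:
        return human_review(decision)
    return decision
```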
15:20 Fascinating.
15:21 Other questions?
15:22 Over here, there's a gentleman here.
15:24 I see his hand up.
15:26 Mark Selman from The Times.
15:28 How do you address the issue of concentration of power in AI, given the resources needed
15:33 to build models, et cetera, and compute?
15:36 How do you address that?
15:37 I'll let Zoubin answer that one from Google.
15:40 Thanks, Jeremy.
15:41 A little tough question.
15:42 Yeah, no, I think it's a very good question.
15:47 When we look at what's happened in AI over the last few years, it's actually really exciting
15:54 to see how many startups have come out with state-of-the-art models.
16:00 It seems like every week we have a new entrant into the field.
16:05 The barriers to entry have been continuing to decrease with open models.
16:10 We've also contributed to that.
16:13 We've published a lot of our work, but we also have produced open models.
16:17 And then if you look at the cloud providers, the whole model is to make these tools available
16:27 very widely.
16:29 I think it's actually a pretty healthy ecosystem of competition right now.
16:36 Although it's expensive to train the very largest models, once you've trained one, you
16:41 can make it available, and then people can build on it and use it.
16:45 And we've seen an incredible thriving of the ecosystem.
16:50 We'll have to see what happens over the next few years, but I'm not concerned about that
16:56 myself.
16:57 That brings up an interesting question, though, which we were talking about earlier, about
17:00 do you build these models yourself?
17:02 Do you buy them?
17:03 Do you use open source models?
17:05 How is Benevolent looking at that?
17:06 And then also in the pharmaceutical industry, there's this issue about power.
17:10 All the big pharmaceutical companies have traditionally had a lot of data.
17:13 How have you guys as a startup sought to compete with that?
17:16 Yes.
17:17 In terms of the models, we have a hybrid model.
17:19 We have some proprietary capabilities.
17:21 We have open-access capabilities.
17:22 And we have open-access capabilities that we've customized and made more bespoke to our
17:27 needs.
17:28 So in terms of access to data, we've systematically brought in a whole range of different data
17:35 types from the literature, omics, compound structures, this kind of totality of biomedical
17:41 data, so that we can reason over it.
17:43 But we will always have smaller data foundations than the really big pharma who have got decades
17:50 of drug discovery data in-house.
17:52 And that's for them to build their models and reason over.
17:55 But for us, there's a surprising amount of literature publicly available, well, literature
17:59 and other data modalities, that if you build them up systematically, you can get a really decent picture of
18:04 human biology.
18:05 And Marc, when you're talking to customers and clients, how do you advise them on this
18:08 issue of build versus buy?
18:10 Do they use an open source model or do they use the proprietary model that seems to do
18:14 better on a benchmark?
18:15 What do you think?
18:16 Yeah.
18:17 I mean, we think of AI as just better software, right?
18:20 So it's the perennial build versus buy question.
18:23 And the only two answers we know are wrong is all build or all buy.
18:27 Anything in the middle is at least justifiable.
18:31 And then our view is that it depends on how it compares to your capabilities and your
18:35 competitive advantage.
18:36 So if it's close to your competitive advantage, you want to be leaning
18:40 more towards build.
18:42 If it's further from your competitive advantage, you want to be leaning more towards buy.
18:47 Fantastic.
18:48 That's all the time we have for today with this group of panelists.
18:51 But thank you very much for being here, and thank you all for listening, and we'll get
18:54 on to the next panel.
18:55 Thank you very much.
18:55 [ Applause ]