Sebastian Guth, Chief Operating Officer, Global Pharmaceuticals, and President, Bayer U.S.
Terra Terwilliger, Director of Strategic Initiatives, Google DeepMind
Moderator: Kristin Stoller, Editorial Director, Fortune
00:00I've already spoken with a lot of you over breakfast,
00:02and I think the number one topic that came up
00:05was AI, which is why I'm very excited to be here
00:07with both of our panelists.
00:08But the number one thing I heard was
00:10people asking me, what are the specific use cases?
00:14I need a case study for how other people use it
00:16so I can bring it back to my company.
00:17So Terra, Sebastian, I want to start by asking,
00:20how are you each using AI, both internally and externally?
00:23And Sebastian, let's start with you.
00:25Sure.
00:25You know, look, for us, AI is deployed very widely
00:29across the organization, and we really deploy it
00:31in three different buckets.
00:33One is driving workflow efficiencies,
00:35as probably many others in this room.
00:38And there's a couple examples that come to my mind.
00:42One I would highlight is medical coding.
00:45So in our industry, when we run very large scale
00:48clinical trials, we have to translate
00:51medical events that are often described in plain language
00:55into code so that we can ultimately analyze the data.
01:00And that's something we do quite extensively
01:03through the use of AI.
01:04We've estimated in a recent clinical study
01:08that that has saved us about 170,000 hours of work.
01:13So it's very significant and drives efficiencies.
01:17Another example in that bucket of workflow efficiencies
01:20is regulatory submissions. We're mandated
01:23by the FDA and many other regulatory bodies
01:27around the world to submit very extensive documentation
01:32as we seek the approval of new medicines.
01:36And we use generative AI to populate about 70% to 80%
01:40of those dossiers today.
01:41And if you translate that into efficiencies and savings
01:46and ultimately acceleration, it's significant.
01:49The second bucket in which we use AI
01:52is to advance patient care.
01:55So an example in that space is mammograms.
02:00Some of you may know that it's Breast Cancer Awareness Month.
02:04And some in the audience may have gotten
02:06or may regularly get mammograms.
02:09But what many people don't realize
02:12is that it's, in fact, not one image, but hundreds of images
02:15that are being taken.
02:17And we provide software to radiologists
02:19that helps select the best image so that they can
02:25appropriately diagnose the patient.
02:27And mammograms and breast cancer are one area.
02:31We do this in other spaces as well.
02:33What's most exciting to me personally
02:36as I think about AI, though, is the use of AI and technology
02:41in drug discovery.
02:42Because at the end, we're in the business
02:45of bringing medicines to patients as quickly as we can.
02:48And I'd say that's a pretty challenging task.
02:51We spend about one and a half to two billion dollars
02:56on each medicine that we bring to market.
02:58It takes us 12 to 15 years.
03:00And we fail most of the time.
03:03And technology can help us significantly
03:07accelerate that work.
03:10An example in that space for us is our partnership
03:14with a company called Recursion, a company in which we're
03:17an investor and a company that has arguably
03:22the most advanced AI-guided drug discovery
03:25platform in our industry.
03:29And in that partnership, they bring the technology,
03:32the deep expertise in technology and data science,
03:37and we bring our library of compounds to identify
03:44and develop novel cancer therapeutics.
03:48And that just gives me chills, because, hey,
03:51that's the use of AI to drive real impact
03:56and ultimately develop medicines that otherwise would not
04:00see the light of day.
04:01Absolutely.
04:01And it touches everyone here, too,
04:03just like Google DeepMind.
04:04Terra, tell us what you're doing over there.
04:06I think probably everyone in this room
04:07is using the tech that you develop.
04:09So tell us about what you're doing.
04:11So Google DeepMind is Google's artificial intelligence
04:14unit.
04:15We create Gemini, the foundational language model
04:19for Google, as well as a host of other technologies, which
04:23are, while not in the large language model category,
04:26are also being used for, for example,
04:29fundamental scientific discovery,
04:30just as you described, Sebastian.
04:32So we, too, are using models internally and externally
04:37in different ways.
04:39Internally, legal is an example of where we have
04:42found great use for Gemini.
04:45On our legal team, you can think of legal problems
04:48as falling into three different categories.
04:49There's retrieval of information,
04:51there's summarization, and then there is reasoning.
04:55And we are finding Gemini is valuable across all
04:58of those categories.
04:59Retrieval of information, of course,
05:01is finding the information that you know you have somewhere,
05:04but you can't put your hand on it right there,
05:06right at that moment.
05:07And that's really the basic, I would say,
05:11use case, simply retrieving and finding that information.
05:14But it saves a remarkable amount of time.
05:17We're estimating that 40% to 50% of the requests
05:19to our legal team could be handled simply
05:22by that kind of automation.
05:24On summarization, again, you're retrieving the information,
05:28but you're also pulling out what is most relevant.
05:30This is a more difficult problem,
05:32particularly in the case of a legal context,
05:35because you don't want to miss an item that might be important.
05:40So our team is increasingly using generative AI
05:43to review, for example, scientific papers
05:47and say, what does need legal review in this case,
05:49and what does not?
05:51And then third, reasoning is probably the most exciting category.
05:56Legal problems, as you know, are a good test case for gen AI,
05:59because they are often unstructured, ambiguous,
06:03and can have many different answers, depending on jurisdiction,
06:07depending on business choices.
06:09So reasoning, we think, is the next frontier here,
06:12and the initial results are promising.
06:14It would not be complete without also adding to your point, Sebastian,
06:17about scientific discovery.
06:20And this is an area, our mission at Google DeepMind
06:23is to build AI responsibly to benefit humanity.
06:27And we are particularly proud of our work in scientific fields,
06:31including AlphaFold, which is the transformational protein folding
06:36technology.
06:37We created it.
06:38We won the international protein-folding challenge (CASP)
06:40with this technology, and in fact, released all of the 220 million
06:45plus proteins in the database for free for researchers
06:48to use in their own applications, including drug discovery.
06:52Excellent.
06:52And I'm going to come to the audience for questions in just a second,
06:55but I have one for both of you.
06:57There's a lot of fear-mongering when it comes to AI,
07:00both internally and externally: AI is going to steal my job.
07:04I feel like people are very, very scared to use it,
07:07whether the concern is bias or, again,
07:09how it's going to affect them.
07:11What would each of you say is the biggest pitfall
07:14that you would advise CEOs against when implementing AI
07:18into their business strategies?
07:20Sebastian, I'll start with you.
07:21Sure.
07:21So first, in our industry, I actually
07:23don't think that AI is going to steal jobs.
07:25Because, hey, unpacking 3 billion years
07:28of evolution compressed in a cell is pretty damn difficult.
07:31That will continue to require human capacity
07:37and the art of science, which we're marrying with technology.
07:42Big pitfalls?
07:43I mean, in our industry, we cannot and will not
07:46compromise patient safety.
07:47So we're building a lot of redundancy into the way we work.
07:51I gave you the example of medical coding earlier.
07:55And in that example, we use a four-eyes principle.
07:58So we, in fact, use technology, but then we still
08:00have humans double-check that whatever data entry we
08:04put into the systems is accurate.
08:06And that's only appropriate given that patient safety
08:10cannot be compromised.
08:12One piece of advice I would have from my own personal experience
08:16is that, in large organizations, and maybe that also resonates
08:19with some in the audience, there's, at times, a risk
08:22to use technology for technology's sake.
08:24And in my mind, at the end of the day,
08:28it's a means to an end.
08:29And the end, in our case, is to bring new medicines
08:33to the world at a much faster pace
08:35than we would have otherwise been able to.
08:39And it's not about chasing shiny toys just
08:43for the sake of chasing them.
08:44Makes sense.
08:45Terra, what advice do you have?
08:46Yes, I would echo that in that AI
08:48is an unparalleled tool for fundamental scientific
08:51discovery.
08:52So how can we look beyond the immediate applications
08:56and think about how much there is yet
08:59to be discovered to solve fundamental human problems?
09:03Now, that said, certainly, we want
09:06to be mindful of how we're asking our people to use
09:11artificial intelligence internally.
09:13Because of those fears you just discussed,
09:16to me, the most important principle there is co-creation.
09:20So how can you, with your team, co-create
09:23how the AI will be truly helpful to them
09:26in doing the job that they are already doing?
09:27They are the experts in doing that role.
09:31And I believe there's plenty of low-hanging fruit.
09:34There is plenty of paperwork to be automated
09:37so that high-skilled people can go do more high-value tasks.
09:42But that co-creation piece is key.
09:45I'd say also making sure that you're
09:48valuing the time that people are putting in
09:50to learn how to use these systems
09:53and rewarding them for doing so.
09:55In tech, we have a wonderful culture of dog-fooding
09:57and using our own products internally.
10:01And I would simply say that it's important to value
10:03the time people invest in that.
10:05And then third, think forward about what
10:07this means for people's careers, both in reality
10:10and in perception.
10:11What new training might people need?
10:14How might this contribute to their training and learning
10:17in a way that's intrinsically valuable?
10:19I want to come back to training in a second.
10:21But does anyone in the audience have any questions here?
10:23And raise your hand, and we can run a mic around.
10:26I see one over there.
10:32Yeah, just in general, you talk about the growth of AI
10:36and the need for data centers all over the place.
10:38What do you guys think about power
10:40for the future and your needs?
10:43Power for the data centers?
10:45Power for the data centers.
10:47It's an important question.
10:48It's a question on many people's minds these days.
10:52Again, we have to think about the fundamental scientific
10:55advances that are going to be possible with this technology.
11:00We actually at Google DeepMind work on novel power
11:03applications as well.
11:05So we have work on optimal power flow,
11:08on how to increase efficiency in data centers
11:10to reduce the amount of power that is being consumed.
11:14We also do work on nuclear fusion,
11:18on how to optimize reactors
11:20and the fusion reaction inside of them,
11:23again, to get us closer to that place of renewable energy.
11:27Excellent.
11:28Thank you, Terra.
11:28Any other questions?
11:30I see one over here.
11:32Oh, and over here.
11:33Hi.
11:34Hi, so I work at an autonomous vehicle company,
11:36a Gen AI-based autonomous vehicle company.
11:38And I found it really interesting
11:40with the release of ChatGPT and the popularization of Gen AI,
11:45there's been kind of an increased fear of AI
11:48and the potential impact it could have in society.
11:50I was wondering if you guys saw something similar
11:52and what that shift looked like between the AI conversation
11:55and Gen AI conversation.
11:58Maybe I'll get us started.
11:59In our industry, there are very few individuals
12:03who are ready to put their health into the hands
12:05of machines, and rightfully so, in my mind.
12:09So we've had a lot of conversations over the years,
12:13more so in recent times, to your point,
12:15because it's suddenly in the public domain,
12:18on how we use AI responsibly, and how we ultimately
12:23augment the work physicians and clinicians do,
12:27but don't necessarily aim to replace it.
12:29And that's a philosophy that keeps driving us,
12:31because we continue to believe that humans will, for some time
12:36to come, play a very important role when
12:40it comes to advancing the health of those that we serve.
12:43I would add, I think all of you play a role in that, too,
12:46in how you deploy these technologies in the world,
12:50in your workplace.
12:52Makes sense.
12:54I saw one over here.
12:59Hi.
12:59Thank you for the reminder on purpose-driven AI.
13:03I think that was beautiful, so thank you so much.
13:06So my question to both of you is that,
13:09what is the importance in your mind of learning about AI
13:14from other industries?
13:16And if that is important, how do you set your organizations up
13:21for being open to receiving ideas from other industries
13:25around us?
13:28I believe this comes back to co-creation again.
13:31So we want to build artificial intelligence with people,
13:35not just for people.
13:38So that intelligence comes into us
13:40from a variety of different routes.
13:42I think customers are a wonderful source
13:44of information.
13:46We also work with outside organizations
13:49across a variety of constituencies,
13:54and industries to try and understand what
13:56will create value for people.
13:58And I'd say at GDM, we are especially
14:00focused on the scientific community.
14:02Because again, we want to understand
14:04what will be of most value to scientists.
14:08How can we help them create the next fundamental breakthrough
14:11that will transform our futures?
14:14And maybe just as a quick build, in our industry,
14:17we learn quite extensively from others.
14:19The promise of AI in the pharmaceuticals industry
14:23is massive.
14:25But we are, for very good reasons as an industry,
14:27probably progressing somewhat slower than others.
14:31I sometimes describe this jokingly
14:34as the race of the turtles.
14:37But that's a very conscious choice.
14:39Because as I said before, we're not
14:41going to compromise patient safety.
14:44And the work we do has such a big impact on those
14:47that we serve that we cannot take shortcuts.
14:50And we possibly can't
14:52experiment to the same extent other industries can.
14:57But that also gives us an opportunity, which is to learn.
15:00Excellent.
15:00Well, thank you, audience, for the incredible questions.
15:03Thank you, Sebastian and Terra, for being here.
15:05I appreciate it.