Dr. Sunita Mishra, Chief Medical Officer, Amazon Health Services Noor Shaker, Founder and CEO, SPATIALX Moderator: Eileen Burbidge, Executive Director, Fertifa; Founding Partner, Passion Capital; Co-chair, Fortune Brainstorm AI London
00:00 Thank you so much. It's great to be here. I'm genuinely excited about this conversation
00:05 because I've been listening, obviously, for the last day and a half, and trust is -- I
00:09 think I actually did a word search on the last two days and on the agenda, and it's
00:14 everywhere. But I think it's nowhere more relevant personally as well as professionally
00:19 as when you put it in a context of healthcare. So I'm really grateful to be joined by Dr.
00:24 Mishra and also Noor. I think we're going to have a great conversation. You're going
00:28 to have to keep me on track. I know I'm the moderator here. But there's so many things
00:32 I want to ask you. I might start with you, Dr. Mishra, if that's all right. Sunita, you
00:36 know, when we think about healthcare, we're obviously here at a conference that thinks
00:41 about commercial aspects, you know, professional environments, and what the opportunities are
00:47 from a business context. But the reason I think this is such an important topic is because
00:52 we are all at some point in our lives patients. So how do you think about AI, you know, really
00:58 zooming out before we go straight into data and trust and the like, how do you think
01:02 about AI and its benefits to potentially patient healthcare and patient outcomes?
01:07 Yeah. So I'm incredibly optimistic about what AI can do in healthcare. What we've heard
01:13 from our customers, and I think if I took a poll, probably every single one of you would
01:18 say that you've had a trying experience with healthcare where you felt it was really impersonal
01:22 and oftentimes inhuman. And so just as we've heard from other panelists yesterday, there's
01:28 an opportunity for us to let humans do what humans do best. And in healthcare, that trust
01:35 between a patient and provider, that therapeutic relationship is so important. And we've now
01:39 put a wall of a computer screen in between the patient and the provider. And so there's
01:44 an opportunity to really move that to the side and let the provider and the patient
01:48 connect on a more human level. So from a provider's perspective, I think that's really valuable
01:54 as well as automating a lot of the work that they do. In some clinics, you can spend over
01:58 50% of your day doing administrative tasks. And I think there's a real opportunity to
02:03 automate that, again, bringing the joy back to practice for the provider. And then for
02:07 the patient, there's a real opportunity to help navigate the system a lot better by sharing
02:14 options for care, making more transparency with the cost, and also helping patients with
02:21 that to-do list that you often end up with after you see the physician, of get this test
02:26 and get that consultation and this imaging, and you are responsible for scheduling that
02:33 all yourself. I think if we can use AI to help close the loop on some of those things,
02:38 there's a tremendous opportunity to improve outcomes.
02:40 I love that. So what you're emphasizing is the fact that you can spend more time on that
02:46 human interaction and the patient, primary care physician, or the doctor experience,
02:52 because AI might be leverageable in the back end or with the administrative tasks and the
02:57 like. That's right.
02:58 So then, Noor, thinking about that in the back end, your specialty amongst many others
03:03 is drug discovery. Do you find that AI can be an enabler for that, but does it not also
03:10 bring up risks for you when you think about traditional drug discovery and the process
03:14 of doing that?
03:16 So my specialty has been drug discovery for the past six years, and I've now been moving towards
03:20 diagnostics, cancer diagnostics, and the prognosis, so trying to bridge the gap between what happens
03:26 in the early stages of drug discovery, so figuring out which compound might work to
03:30 what actually could work in a human setting, and looking at the phenotyping of human and
03:36 how do you understand the morphology and the distribution of the human patient population
03:39 and linking that to drug discovery.
03:42 I come from a scientific background, so I think a lot about science and how can
03:47 we really make not just patient life easier, but also the physicians and the pathologists
03:53 and the clinicians all benefit from AI capabilities. I think it's very important to understand
03:59 that there's a lot happening just in those downstream areas. So in drug discovery,
04:05 there's a lot of potential for AI to bring in new medicines in a way that we humans haven't
04:09 thought possible even, just in terms of designing new chemical entities and bringing those to
04:14 patients. But there's also a lot happening in terms of understanding the patient population
04:18 and who are the patients that could benefit from a certain treatment versus the others for
04:23 whom the treatment might not be as beneficial. When it comes to AI, I think
04:28 we need to think about it in a way that
04:33 we don't necessarily project our human elements on it because it's a completely different
04:39 thing. It doesn't think in a way that we think and it doesn't operate in the same way. And
04:44 if we think about pathology where we now work, there's a lot that is within the pathology
04:50 slide, so the tissue slide that comes from a patient that we don't see with a human eye,
04:55 even at very expert trained human eye. And pathologists do need that help because of
05:00 the shortage of pathologists, for instance, because they don't have enough hours within
05:04 the day to treat each patient with the same care as they should be treated. So it's in
05:10 part accelerating and making their daily process more efficient, but it's also there's a huge
05:14 amount of information that we don't, it's not even accessible at the moment that AI
05:19 can make accessible. And that will bring a lot of new areas and a new dimension to how
05:25 we see patients and how we understand cancer and how we eventually treat those patients
05:29 as well. Okay. Interesting. So there's a consistent
05:32 theme between what you both said about accessibility, but bringing efficiencies to what I guess
05:39 healthcare and medicine is ultimately about, which is treating patients. Coming back to
05:44 you, Sunita, when you actually said you thought everyone in the room probably has experience,
05:49 I thought you were about to say with an Amazon product and I almost raised my hand. So I'm
05:53 going to now bring this back to trust because we've sort of established that the potential
05:59 of AI can obviously enhance the experience between practitioner and patient. How do you
06:05 then make sure to engender trust in that process because of what's happening
06:10 in the back end? And with Amazon, the company you're from, and I know it's not your specific
06:16 remit, but everything from the cloud to devices in home to e-commerce, everywhere, and now
06:21 healthcare, I would imagine you have an opportunity, but there's probably also pitfalls for you
06:27 as a company to try and emphasize the trust of what you do.
06:31 Yeah. We never take for granted a single day that we have our customers' trust. And I think
06:38 that that is even more so true in healthcare given the responsibility. We actually have
06:44 introduced a new leadership principle a couple of years ago, which says with size and scale
06:51 comes responsibility. And that's very true with the way we're designing our AI products.
06:57 Through the sort of complex process by which we build our models, we actually build tenets
07:04 into the way we develop our products. And we use those as ways to probe to make sure
07:10 that our products are built with safety in mind, privacy, fairness, and accuracy. And
07:17 we are actually just adopting those same tenets as we develop products and AI products for
07:24 healthcare as well. Okay. And Noor, do you have a similar, do
07:29 you set up principles, ways of working, guidelines when you are working on diagnostics and sort
07:36 of the, again, the efficiency that you're trying to bring?
07:39 I think as long as you're not touching patient data, there are fewer issues when it comes
07:43 to trust and transparency. So in drug discovery, for instance, it was not as, like we didn't
07:49 really talk about trust because it's usually just kind of chemical compounds and proteins.
07:54 So it's like nothing that's related to patient data. But now with pathology, for instance,
07:58 and trying to understand cancer, you need to relate what you see, for instance, in a
08:03 tissue sample or like a biopsy to what actually happens to the patient before and after. So
08:08 what's the journey of the patient? When was he first diagnosed? What's the age, the gender,
08:13 and the ethnicity and so on and so forth? And then the treatment journey. So what treatment
08:17 have been used? What was the outcome of it? And for how long did he survive or what have
08:22 you? So when it comes to these issues, I think
08:25 there's a lot of unknown still in terms of how we treat patient data, who accesses which
08:31 data, who owns the data. Usually we would assume that the patient, every patient should
08:36 own his own data. And if it is used for commercial, any commercial purposes, then the patient
08:40 ideally should know about that and get some of the benefits. Unfortunately, this is not
08:45 the case. Usually patients just give up their data in hospital or what have you. And then the
08:51 hospitals, actually we've seen instances where hospitals sell the data and the patients don't
08:56 even know about it. So there's still a lot to figure out, especially now in terms of
09:00 AI and people starting to understand that data is a gold mine. And the more data you
09:04 have, the more value basically you have. So people are starting to figure out what is
09:08 really the value of what we're building. Even within the hospitals, the hospital value is
09:13 perhaps now, part of it is providing the care and the services that they traditionally used
09:18 to provide. But I think it's becoming more and more the case where they realize that
09:21 there is also value in the data that they've been collecting. But there's also a lot of
09:26 questions around, yes, but like where that boundary sits in terms of do you really own
09:30 the data? Do you get to sell it to other parties? Do you get to decide to who to sell it to?
09:35 Or is it more kind of you should open it for innovation and let the world decide who are
09:40 the best innovators and who are the best people to cure cancer and let it be open because
09:44 nobody really owns it. It's the patient data by the end of the day. And for me, if you
09:49 think about the fundamentals, we are all here to provide the best care and the best care
09:56 possible for patients and to provide the best medication, the best treatment for cancer
10:00 in our case. So I think if we keep this in mind, I'm kind of very much with the open
10:09 system, data should be accessible to all and people should be able to innovate and provide
10:13 solutions that by the end of the day, the aim is just to cure cancer and just to provide
10:17 better care for patients. And it's so clear from what you're saying. And of course, throughout
10:21 your career, your aims have been really, really pure. And like you say, trying to get the
10:28 better patient outcomes. And of course, Sunita, you said the same thing. I think the challenge
10:32 is that and this came up in the last session as well, or I think somebody actually asked
10:38 a question from the audience. There are bad actors, right? The reality of the situation
10:42 is there are bad actors. So and as you've also mentioned, Noor, once there's this appreciation
10:48 for the fact that the data is actually intrinsically valuable from a commercial point of view,
10:54 that can also affect the bad behavior or go against what we're wanting for the positive
10:59 outcomes. What can be done? Again, do you have from your expertise, from the experience
11:05 that you've both had, what can be done to mitigate against these risks aside from the
11:10 tenants? What's a sort of a quality assurance or a QA or a checks and balances approach
11:17 to these?
11:18 I mean, what we're doing at Amazon is we're being very transparent with our customers
11:22 about when we would use their data, and the answer is that we wouldn't use it. And we have made
11:28 it very clear that there's a wall between sort of the consumer side and what we're doing in
11:33 healthcare because we don't want to lose that consumer trust.
11:38 I think for us, given that we're a startup, we try to be innovative in the way that we
11:42 collaborate or build customer base or like commercial traction. So you'd really try to
11:48 kind of try to come up with structure where it's a win-win strategy. So the hospital,
11:52 yes, you own the data. Yes, you basically own the data in a way because you have the
11:58 keys to kind of allow people to access it, but you can't really monetize it or capitalize
12:04 on it unless you have the machine learning expertise or like data science expertise to
12:08 kind of actually use that goldmine to extract information from it and then use it in a meaningful
12:13 way. So it's a way to kind of, we can collaborate together where we as a startup can gain something
12:19 from it, can move innovation forward and bring cures to patients and identify the right patient
12:25 population and what have you. And you as a hospital will be kind of leading on the innovation
12:29 side and be able, for instance, a lot of people are, especially in the hospital pathologists,
12:33 for instance, interested in publication, interested in being the leaders in AI pathology. So you
12:38 have to understand what are the motivations, what are people's incentives, what really
12:43 makes them happy, enjoying their life and providing the best value and the best care
12:47 possible. You have to align that with the commercial incentive for the hospital. And
12:51 then you have to, as a startup, to stay, to exist and to be able to innovate. So I think
12:56 a lot of it comes to bringing the right stakeholders on board, building the right structure for
13:01 the agreement that would allow everyone to benefit from it, including patients by the
13:06 end of the day. And sorry if this is putting it too fine a point on it. What does that
13:11 structure look like? It's really open to innovation. It's a lot of like really figuring out what
13:16 everyone wants by the end of the day and trying to come up with a structure that is, so basically
13:20 drafted from scratch. It's not like a commercial agreement that exists and you're trying to
13:24 adopt it. You're not taking it off the shelf. You actually have to develop it together.
13:29 Exactly. It's a field that is moving very quickly and everyone is just kind of trying
13:33 to get a sense about what is going on and how do we move quickly with it, but at the
13:37 same time benefit from the outcome. So yeah, it's exciting. And Noor, you mentioned a few
13:41 times being a startup. And so I should probably ask you the sort of opposite of the question
13:46 that I asked Sunita, which is as a startup, when you're working with hospitals, you're
13:50 working with larger organizations, you're working with patient groups, how do you engender
13:55 trust? How do you project the fact that you're trustworthy, you're going to be doing the
14:00 right things, you don't have bad actors or you've got structures in place to mitigate
14:04 against that? I think people tend to trust startups more
14:09 than bigger organizations, just generally speaking. Sure, not yours.
14:13 I'm so sorry. It's just the nature of innovators and new companies trying to kind of push the
14:21 boundaries and innovate and try to fight existing systems and existing untrusted systems, to
14:29 be frank. So I think for us, it's mostly working on the human level, on the personal level,
14:36 because again, because we're a startup, we can afford to talk to every person one-to-one
14:41 and to understand their needs, understand the human value of what they do and understand
14:46 what's the incentive, why they are here, why they're doing what they're doing, and build
14:50 that personal trust, which I think is mostly missing, to be honest, when it comes to as
14:55 you grow, bigger organizations become harder to build that personal relationship, which
15:00 we usually tend to forget about, but it's super important.
15:02 I think that's probably fair in that people will understand that startups can have a more
15:07 personal relationship with them because they're smaller, they're kind of starting out that
15:10 way. I think the risk is that startups have this sweeping generalization that they're
15:15 moving fast, breaking things, and they're trying to work so quickly. That's the part
15:19 where I think there's a concern that there might not be as much compliance or governance
15:23 in place as a large organization.
15:26 Just for both of you, but Noor's already alluded to how you work with larger organizations,
15:30 does Amazon, when you're looking at Amazon Health, Amazon Care, and you're looking at
15:35 expanding what you're doing for patient benefit, do you collaborate with startups and smaller
15:39 organizations as well?
15:42 We actually collaborate quite extensively with hospitals. Just for those who may not
15:48 be familiar with Amazon Health Services, our goal is to improve health outcomes by making
15:54 it easier for customers to get medications, get care. We do the care through a product
16:02 called Amazon Clinic, which is sort of low acuity care that you can get on Amazon.com,
16:06 as well as a primary care system that we've acquired. They actually have extensive relationships
16:12 with health systems.
16:14 One of the things that we find really important, again, to keep that customer trust is being
16:18 able to connect the dots. Going from a primary care experience to something that you may
16:23 need to be in the hospital for, it was important for us to have that ability to connect the
16:27 dots across the journey just because it is so difficult.
16:31 Then the other part of our business is getting medications. That's Amazon Pharmacy. Again,
16:37 we are able to get medications to customers low cost with lots of transparency around
16:45 the pricing where we're using large language models to be able to surface prices. For those
16:51 of you who live in the US, oftentimes you don't know how much something's going to cost
16:55 until you get to the counter. Then you get the surprise bill. We just didn't feel like
17:00 that was an experience that was acceptable. I would say that we've been partnering with
17:07 health systems more so than others.
17:08 Okay. Good start. I warned that there was going to be so many questions I want to ask.
17:13 There's so many things I want to go into, but maybe one more thing before I give people
17:18 in the room a chance to ask questions as well. We were talking about the frame of the session,
17:26 the title. I feel like we should maybe go back to that a little bit. It was about data
17:30 foundations and trust in data foundations. A lot of the conversations these two days
17:35 I think have been really, really interesting because they've forced us all to think about
17:39 bigger picture consequences. I also want to be able to have some practical tactics, again,
17:46 based on the expertise that you both have.
17:48 When you're establishing or trying to establish trust in data foundations, do you have governance
17:54 boards? Seven, eight years ago, I was a member of the DeepMind Health Independent Reviewers
17:59 Panel. Do you put in place layers of review? Do you put in place checks? How do you audit
18:07 the fact that the data is being preserved or protected in such a way, again, so that
18:11 you can engender trust? Are there any tactics or approaches that you've seen that work better
18:15 than others?
18:16 One of the things that we've been doing, because we do have providers now that work with us,
18:23 we have a board of our clinicians that actually works really closely with our teams to ensure
18:29 that there's accurate information. We're also really mindful of being aware of bias. As
18:39 many of you may be aware, there's bias in our healthcare system today. I like to quote
18:45 the statistic that, or the fact that in the US, we weren't mandated to have females as
18:51 part of our clinical trials until 1993. I think this is a real opportunity for us to
18:58 work to make sure that we're being intentional about not propagating that bias. As these
19:03 transformational tools are being created, we actually can have a different system in
19:11 the future. I think there's an opportunity there for us to partner with others, other
19:15 health systems, other people in the ecosystem, to be mindful of that.
19:19 Noor, do you have anything that you do that might be best in class, that people can take
19:24 away as a lesson?
19:25 I think when it comes to drug discovery and cancer diagnostics, and when you think about
19:29 it from a software point of view, there are usually ISO standards that the software
19:35 should comply with. Those are especially important in healthcare systems. We tend to collaborate
19:42 with people, but there's also a big demand for us to be just compliant and to be able
19:48 to implement those standards in our systems. I think if we take a broader view about what
19:57 it means in the healthcare settings, there's also a lot of issues that relates to data
20:01 biases. If you work with a specific hospital, for instance, you get access to data from
20:07 that geography. The population that we get access to is usually very limited to that
20:12 ethnicity of people and that group of people. We tend to be mindful of the fact that whatever
20:17 method that we develop and that we apply is usually accurate for that specific population.
20:22 It has to be tested again if we are moved to another geography, another country or another
20:27 subgroup. We tend to do studies on specific populations and then we tend to generalize
20:31 those studies and make sure that we're very clear about the intention of the study and
20:36 what are the populations that we address within that study and the fact that there might be
20:41 limitations when it's applied to others.
20:44 Thank you. I'm going to get told off because I was told if we get to two minutes, there's
20:47 no time for questions, but I'm sitting here now. I'm going to see if there's at least
20:51 one question because I do feel like it would be unfair not to let that happen. Can we wait
20:56 for the lady in front? I'm sorry, first one I saw because I'm time-pressed.
21:00 Hi. My name is Namrata. I'm a chief product officer. I've worked in healthcare a few times.
21:06 You mentioned about bias and I think that's a very important topic. I'm just curious,
21:11 this question for Sunita, do you see an adoption bias in your services in terms of communities
21:18 and people who adopt healthcare, especially with modern technologies? Because I have worked
21:22 in healthcare in the past and we do see an adoption bias quite clearly. That's the first
21:27 question. The second question is just scalability of your services. How do you scale it up so
21:31 that it's mass used?
21:33 I might have to limit Sunita to one answer.
21:35 Just one question.
21:36 Okay.
21:37 Thank you.
21:38 Either one.
21:39 I don't know which one to take. They're both such juicy questions. I'll take the scalability
21:45 one. I think that this is where we think artificial intelligence and generative models are going
21:50 to be really valuable because if we can remove some of those things that clinicians don't
21:54 need to do, they're going to be able to spend more time taking care of more patients as
21:58 well. I think that's the real opportunity to scale.
22:02 Thank you for that. I am going to get told off, but thanks for that question. I'm sorry,
22:05 I couldn't get to more. There were so many other things I wanted to ask about with transparency,
22:09 with bias, with the scalability. Just to recap, I'm really buoyed by the fact that I think
22:15 there was consistency about the fact that the real potential for AI in healthcare, and
22:20 whether it's based on those data foundations or other layers, is that we can enable clinicians
22:25 and physicians to work more closely with their patients and to improve patient outcomes.
22:30 What I heard is having checks in place, whether it's key tenets, whether it's advisory boards,
22:35 whether it's review boards or otherwise, to make sure that the data is being considered
22:40 sacrosanct and that the intentions are being followed through is what will hopefully keep
22:44 us on the right track. Sorry I talked so much and I ran out of time for more, but thank
22:49 you so much to Noor and to Sunita for joining me this morning.
22:52 Thank you.
22:52 [applause]
22:54 [end of transcript]