Brainstorm Health 2024: Defining AI Standards for the Next Era of Health

Aneesh Chopra, President, CareJourney
Lloyd Minor, Carl and Elizabeth Naumann Dean, Stanford University School of Medicine; Vice President, Medical Affairs, Stanford University
Moderator: Deena Shakir, Lux Capital
Transcript
00:00 Hello everyone. Delighted to have this conversation with two such wonderful
00:05 panelists. Aneesh and I go way back to the early days of the Obama
00:09 administration and Dr. Minor needs no introduction. So today we're here to talk
00:14 about the acronym on everyone's minds at whatever conference you're at these days
00:19 but in particular at the intersection of AI and health. I want to start with a
00:24 framing question, and so let's start with you, Aneesh. Yes. What is AI in the context
00:32 of health care? Well to me there's basically a policy construct for AI and
00:37 health, which is to realize the promise we made when we made the investments in
00:42 electronic health records that is we promised a productivity revolution that
00:46 would make the system better. We invested in the technology we've not yet gotten
00:50 the productivity gains and so AI is essentially the maybe the next chapter
00:55 to finish the story of what we initially invested in which is it's the engine
00:59 that will drive productivity and health care. That's sort of one aspect of it. The
01:04 more pragmatic aspect of it is it's a prediction of a next event and one can
01:10 look at that by looking at historical data and then sort of drawing a line
01:13 using math to get to the next event or with these new exciting large language
01:18 models to basically see what happens when you throw all of the information in
01:23 the internet into a system to help it figure out what goes next and it looks
01:27 like a reasoning engine, which is what has caught the excitement for all of us
01:30 today. You know if you ask any doctor and there are many of you here in the
01:34 audience about their experience with health care, it's often not a
01:38 positive one. I come from a family of doctors and I distinctly remember the
01:42 day my father had to switch his EHR and how frustrated he was and yet the
01:46 promise of technology has the potential to revolutionize productivity of the
01:51 health care workforce. When I think about AI and health care I remember the early
01:57 announcement of Doximity and how the very same sentence where Doximity was
02:01 announcing an integration with ChatGPT included the word fax machine and I
02:06 thought that is the most health care example that I have ever heard in terms
02:10 of technology and AI. Dr. Minor in the context of the work that you are leading
02:14 at Stanford and in particular the conference that you held last week, how
02:18 are you thinking about AI in health education specifically? Sure, thank you
02:23 and I think there's an enormous amount of green space here right in terms of
02:28 the impact that AI is going to have throughout the continuum of medical
02:32 education, care delivery, discovery based science and also importantly access to
02:38 care. We talked a lot about efficiency of health care, talked about improving
02:43 the quality of the health care delivered, but also, you know, AI has the
02:49 potential to dramatically increase the access to specialists, the access to the
02:55 type of specialty care that oftentimes is only given today
02:59 in centers. In the educational sphere you know we're educating physicians and
03:05 scientists for the next 30-40 years of their productive lifetime. Medical
03:11 education even scientific education, biomedical science used to be heavily
03:16 based upon memorization and understanding of a core basis of facts.
03:20 That's been diminished but I think it's going to be diminishing even more as
03:24 generative AI becomes multimodal generative AI and incorporates more than
03:30 language, incorporates images, incorporates data from wearables and
03:33 other information that can be assimilated in ways that today we depend
03:38 upon humans to do it. Whether or not those humans are physicians or
03:41 scientists or a mixture of the two. That's really an exciting space for us. I
03:46 think education has received overall less attention today than some of the
03:52 other applications we've talked about but the potential is just as enormous.
03:56 One area that I wanted to get both of your thoughts on in particular is this
04:00 question of bias in AI which is which is a topic I know that is top of mind
04:05 certainly at Stanford and for the work that you're doing, Aneesh. We talk about
04:09 garbage in and garbage out. If AI is mirroring human bias how can we mitigate
04:15 that impact as AI replicates that bias at scale? So we spend a lot of time on this
04:19 question because we deal with regulating inputs. That's one question how do you
04:24 regulate models but we also think about managing outputs and so acknowledging
04:30 biases, but then putting processes in place to address them, may actually be
04:34 the key. So if you look at the home page of the White House website on algorithmic
04:38 discrimination, it's the poster child of the Optum algorithm that allocated care
04:42 management resources based on spend, which, you know, incorporated the bias
04:47 of where and how people access health care. But if you focus on the outcome
04:51 which is who has the highest and best need and how do we allocate limited
04:54 resources to address it one can build outcomes programs to deal with this
04:59 issue even if the model itself may not have the right information. By the way
05:03 that is part and parcel of why over 40 health systems and health plans
05:07 committed at the White House in December that they would take on more of a
05:11 self-regulatory process to focus on the outcomes in the use of AI as their
05:16 obligation rather than just worrying about what's in the EHR or
05:20 what's in the software. I think outcomes focus is the key. With the focus on
05:24 outcomes, is value-based care a better use case for AI? Well, we're gonna get
05:28 to that in a minute, but absolutely. If you said what's the productivity
05:31 function of the health care system at the hospital it might be nurse
05:34 productivity to get staffing ratios down if it's a health plan it might be
05:38 medical loss ratio more broadly. In the case of like what do you want your mom
05:43 and dad to get for the best possible care you want them to live longer and
05:46 healthier and you want them to be on track with all their preventive services.
05:49 A value-based care organization's key performance indicators, productivity
05:54 improvement on those aligns best to societal goals, in my view. As a VC it's
06:00 my job to be an optimist but I think we need to have a conversation about what
06:04 could go wrong in this case and as we were chatting earlier you know I think
06:08 back to the early 2010s and the promise of social media and technology as a
06:12 democratizing force and a force for good. Years later we look back and realize
06:17 there were unintended consequences and it's not always going to go the way that
06:21 we anticipate. As we think about the rapid deployment of AI and the rapid
06:25 innovation in this space how can we mitigate future unintended consequences
06:28 Dr. Minor? Well I think one important topic is how do we think about the
06:32 responsible role of regulation and compliance and I've been very impressed
06:38 not surprised, but very impressed, about how federal agencies, the Office of the
06:41 National Coordinator, the FDA, are really leaning in and wanting to understand
06:46 more about how systems today are deploying AI how AI models are validated
06:52 and how that validation may need to be incorporated into a national
06:57 regulatory framework but to the analogy with social media and what went wrong in
07:03 a sense and what can be prevented I think with health care from the get-go
07:09 we will have more engagement appropriately from regulatory agencies
07:14 than we've had in social media because of some of the legal constructs
07:18 protecting companies from liability on the way the platforms are used. Certainly
07:24 in health care there'll be decisions made about how liability is apportioned
07:29 but there's no question that systems and doctors are responsible for the care
07:34 that's being delivered whether or not that care is being augmented and
07:37 hopefully improved by AI or not. So that that's one I think key difference is
07:44 we'll have more engagement appropriately from federal regulatory agencies to help
07:50 drive this dialogue and make sure that we're focusing on consequences that
07:55 could result. Models are only going to be as good in making predictions as
08:00 the data that they're fed. As you said, AI is in a very real sense a mirror as to
08:06 what it's learning and if what it's learning is from a biased data set then
08:10 its output is going to be biased. So understanding those processes and then
08:16 having appropriate regulatory oversight I think it's going to be really
08:19 important moving forward. Maybe if I could just build on the Dean's
08:23 thoughtful comments. I think the engagement with regulators is very
08:28 positive, but it's a takes-two-to-tango situation. We also need early adopters to
08:33 collaborate. Think of it less like two sides of the
08:38 negotiating table, where I'm representing the folks who are using it and you're
08:41 the regulator; we sort of have to be more at a table like we see in this room,
08:45 where we're all rolling up our sleeves to learn together. So Harvard Business
08:49 School and Boston Consulting Group did a study and they found that actually the
08:53 smartest analysts were dumbed down not enhanced by AI because they defaulted to
09:00 the answer and sometimes the answer wasn't as good as what the best
09:04 performers could do. So that was a head-scratcher. What does that mean? Well
09:08 the regulators are not going to get to that level of minutiae in what the
09:11 regulation should say. Be mindful of dumbing down. So you want stakeholders to
09:16 set norms. So University of California at San Diego said when we do the inbox
09:21 draft notes and I think Stanford does this too to send messages to patients
09:25 there's no button that says submit. So you get the auto-generated draft, but there's
09:30 no way to just send it as-is. The words are start from scratch or edit from where the AI
09:36 left off. That kind of default setting is a norm or a behavior that is designed to
09:41 mitigate against the learning that sometimes these things may have a
09:44 negative effect. So as we learn together and we share best practices
09:49 there may be more coming out of the normative behavior of what should be the
09:53 way we do this with the regulators saying thank you keep going relative to
09:57 say the idea that we're waiting for a rule that may take a year and a half to
10:01 get through the process to actually close whatever the perceived gaps are.
10:04 In just a moment I'll turn to the audience for questions so raise your
10:08 hand and somebody will come to you with a mic but in the meantime I want to hear
10:12 more specific examples about what you're excited about because AI can mean so
10:16 many things and I think to an individual sometimes AI in health care
10:20 means an AI chatbot that's taking care of you and we know that there's so much
10:24 more to it than that. So can you share an example Dr. Minor of an AI use case that
10:28 you're excited about? Sure and let me first follow up on a really important
10:31 point that Aneesh made, and that is there's a really important role for
10:35 convening right now and for sharing ideas. Last week we hosted at Stanford
10:39 four days of conferences related to AI's deployment in health care and
10:44 biomedicine, beginning with a conference related to our initiative RAISE Health,
10:48 Responsible AI for Safe and Equitable Health. So if you ask me what I'm excited
10:52 about I'm excited about the way people are coming together bringing their
10:57 problems their ideas and collectively reasoning how you know we can come
11:02 together to solve those problems. This isn't going to be you know one massive
11:07 deployment it's going to be a lot of incremental deployments it's going to be
11:11 a lot of incremental advances but sharing ideas what's working what's not
11:15 working having methods for validating AI models before they're deployed as the
11:21 Coalition for Health AI, or CHAI, does; we convened the first meeting of that group
11:26 last week at Stanford as well. That's going to be the way in which we
11:31 determine the best applications and make sure that AI is being deployed
11:35 responsibly and achieving the benefits that Aneesh and you and so many others
11:40 have talked about we should be attaining from AI. Absolutely. Question from the
11:45 audience. Yes back there. So we've heard a lot about AI in terms of productivity
11:52 and improving the patient-doctor relationship. My name is Rami Bailuni. I'm
11:58 the CEO of Inara, but also a doctor who's been using AI. My question is: what are
12:03 the chances that AI, by making certain administrative and third-party services
12:10 within health care so cheap, actually cements bad habits and stifles
12:16 innovation and to give you an example probably one of the largest growth areas
12:22 in AI and health care that's not been talked about is billing coding prior
12:27 authorizations prior authorization denials and I would say this is even
12:30 pre-ChatGPT, and I will tell you the number of prior authorization denials
12:35 that I've gotten that are AI generated and clearly missed the mark this year
12:40 has expanded exponentially and so when those administrative burdens become so
12:48 cheap does it then decrease the risk-taking of organizations to adopt a
12:55 totally different model to go away from a framework that clearly has sort of
13:00 stifled innovation in the past. If you don't mind I might just take a stab
13:05 because I think this gets to the nature of the question of the micro versus the
13:08 macro policy objective in the goal. So each institution is going to have some
13:14 use case in their administrative performance where they can use AI to be
13:19 a little bit better. So that chart review to find that, you know, denial-generating
13:25 reason, that's going to happen at scale. Right now it's capped because it's an
13:28 expensive thing for plans to pay for; that's going to happen everywhere.
13:31 Conversely you're going to appeal every denial so we can assume that in a
13:35 micro economy this is not the highest and best use of this incredible
13:39 technology. More broadly what's the number one preventable death in this
13:44 country? It's still hypertension it's still uncontrolled hypertension. We
13:49 launched the million hearts campaign in the beginning of Obama if you remember
13:52 that, and after all this IT investment, it's still bad. Okay, so what
13:57 do we do to help people manage their hypertension? It doesn't need, you
14:02 know, GLP-1s or anything. This is like a simple dollar-a-day aspirin sort of
14:06 thing. So if you look at the productivity function we want that's what the
14:10 Affordable Care Act was helping us to build towards it was to look at these
14:13 broader sort of lower quality outcomes that we're seeing in our system and then
14:18 trying to incentivize organizations to own that as their productivity
14:22 function. So let me take responsibility for a community to make sure that folks
14:26 that have uncontrolled hypertension are better managed and they avoid the
14:29 hospital. So I'm looking for what I refer to as a health information fiduciary
14:34 who's assigned with all the information necessary to identify where someone's
14:39 falling off the tracks and then either directly nudging the individual patient
14:43 or through a care manager or a physician practice brings them in to manage their
14:48 care over time. That's the excitement that I see that'll have the greatest
14:52 societal good. We need to have an economic model that rewards organizations for
14:56 putting that service into play, but this is the big moment: institutional, micro,
15:01 suboptimal administrative efficiencies for AI versus the patient-centered macro, what do
15:07 we want the world to be, fewer deaths due to uncontrolled hypertension and so
15:11 forth. And that to me is where I'm gonna bet; I'm gonna bet we're gonna figure
15:15 this out faster than just tweaking this. We've time for one more question and I
15:19 see one over here. Hi Eric J. Daza here. So I'm a statistician. I'm bullish on AI.
15:25 health. Right? Statistics is the engine of the AI automobile. But I'm also a doctor of public
15:30 health and I worry about things like equity in this case retraining and job
15:35 training. So you've talked about job training using these AI tools. Can you
15:39 talk a little bit about oh I have this new AI tool I don't want it to replace
15:44 me or like a lot of my skills how can I get retrained on that so I can still
15:48 make a living? Thank you. Dr. Minor? Sure, well, I think in every advance in
15:54 technology we've seen changes in the workforce. I think it's following a very
16:01 important point that Aneesh made. If deployed correctly, I think there will be
16:07 that type of retraining that goes on in the workforce and we'll be able to get
16:12 back, in the delivery of patient care, both on an individual basis and on a
16:16 larger public health basis. We'll be able to get back to the
16:21 type of interpersonal interactions that historically have been the bedrock
16:27 strength of our health care delivery system and that unfortunately both
16:32 through the way the system has evolved and also through the implications of
16:38 current technology we've been oftentimes separated from our patients. So will
16:44 there be changes in the workforce? I think undoubtedly. I think there's no way
16:49 to slow that down really other than to do it responsibly and appropriately but
16:55 if done appropriately I think the workforce can be more effectively far
16:59 more effectively deployed in a more humanistic way in a more rewarding way
17:04 for workers than exists today where there's so much work that's just devoted
17:09 to the administrative functions of the system not directly to improving the
17:13 lives of patients and populations. With just over a minute left I'm gonna do a
17:17 fire round. I'm gonna ask each of you to just say bearish or bullish on
17:22 this particular use case with AI. So let's let's keep it to one word bearish
17:26 or bullish. AI chatbots. Bullish. That's one word. Bullish. Prior auth. Bearish. Bearish.
17:37 Imaging. Bullish. Bullish definitely. Voice to script. Bullish. Bullish.
17:45 Documentation. Related bullish. Also bullish. EHR innovation. Needed. Needed and
17:56 bullish. Mental health. Bullish, but man, it's a new model.
18:03 Likewise. Bullish. Nursing. Has to be bullish. I hope. Absolutely. I mean, the
18:11 protesters are scaring me a little bit, so that's why I have a question mark;
18:14 the nurses are protesting the use of AI, so, anxiety, but I think we overcome
18:18 this, I hope. We'll learn. That's more than one word. Pediatrics. I suppose, but
18:26 bullish. I don't know, even that's use-case specific, it's hard. Yeah, that's true,
18:31 but bullish if it helps. You know, we've assumed all too commonly that kids
18:37 are just small adults and that's not true so if it helps us actually tailor
18:40 health care appropriately for children then definitely bullish. Excellent.
18:44 Capital B. Great great way to end. Thank you all so much. Fantastic panel.
