Babak Hodjat, Chief Technology Officer, AI, Cognizant Facilitator: Diane Brady, Executive Editorial Director, Fortune; Co-chair, Fortune COO Summit
Transcript
00:00Thank you for joining us.
00:02This is going to be a very interactive conversation.
00:05What I love about this conversation is yesterday,
00:08people were saying, okay, what are some of the use cases?
00:11How are people using AI?
00:13Who can walk me through some of the ways I might use AI?
00:16So this is going to be a case where you've got
00:18one of the most brilliant thinkers.
00:20Actually, Babak, you were the person
00:23who created the natural language model behind Siri
00:27for a company that shall not be named
00:29because now, you know,
00:32Babak Hodjat is the chief technology officer
00:35for AI at Cognizant.
00:37And so we're going to, come on up.
00:39Come on, you're the talent, not me.
00:41Nobody's here to listen to me.
00:43So thank you for joining us.
00:45And we're going to just get a bit of a sense,
00:48first of all, of what you do for a living.
00:51What does it mean to be the chief technology officer
00:54for AI in such a big company?
00:56Let's start with that and then hear a little bit about,
00:59you're getting into rooms we don't get to go
01:01and take us to the front lines,
01:02and then we're going to have the audience,
01:04you know, throw things at you.
01:06So nice.
01:07Well, let's talk about your role.
01:08Should we sit?
01:09Do you want to sit?
01:10Have you eaten?
01:11That's one of those,
01:12do we want a hangry guest speaker?
01:14I don't know.
01:15We'll find out.
01:17But tell us a bit, yeah, tell us a bit about your role
01:20and also a bit about your background
01:22because, you know, I do think of you
01:24as one of the thinkers in this industry.
01:27And I think to talk to you even about agents,
01:30you were talking about agents
01:31when I was trolling you back in 1999.
01:34So there you go.
01:35An early mover, so to speak.
01:37But what's your role now?
01:39I'm the CTO AI for Cognizant,
01:41which means I get to define that role
01:44and do whatever I like.
01:47He's a thinker, yeah.
01:49So I run our AI R&D team.
01:52I'm very privileged to be able to actually invest
01:57in Cognizant in doing research in AI,
02:03pushing the boundaries on the core technology itself.
02:07And the cohering principle around that technology
02:10is AI in the service of decision making,
02:12in the service of, you know,
02:14the KPI of our clients, basically.
02:18And my background is in AI.
02:20I have a PhD in AI.
02:22As you said, I had some involvement
02:25in Siri and pre-Siri days,
02:28natural language technology then.
02:30And, yes, I wrote my first paper
02:34on agent-oriented software engineering in 1999.
02:37Little did I know that it would become
02:40a huge thing now.
02:42And, you know, we were talking about Dreamforce
02:48and how it's rechristened itself as AgentForce now.
02:51So, yeah, you know.
02:53We have Salesforce here.
02:55There you go.
02:56Come on up.
02:57No, I'm kidding.
02:58No, you stay where you are.
02:59You've had your moment.
03:00But, well, I mean, okay, let's move to that.
03:03I'm curious, is agent just really a surrogate term now
03:08for where we are with AI?
03:09Like, give me a sense of what does it mean
03:11now that we're talking about agents?
03:13I mean, I obviously know what they're supposed to do.
03:16I was at Dreamforce, too.
03:17But how do you think about it?
03:20You know, when we talk about generative AI,
03:22that AI model is just that.
03:25It's generative.
03:26So you give it some input, and it generates some output.
03:29So if you tell it to write code, it will write code.
03:33And they're so powerful that that code,
03:35if it's not too complex, will run.
03:38The distinction between that and an agent
03:41is the agent will then be able to actually run that code.
03:45So the agent has some tools.
03:47And part of that tool chest might be, for example,
03:50a container within which it can run the code,
03:53observe the results.
03:54If there are bugs or issues, fix it until it gets it right,
03:59and then return it.
04:00So that's the distinction at a very, very simplistic level
04:03between a generative AI model that just generates,
04:07like has inputs and generates outputs,
04:09versus an agent that has the ability
04:12to actually carry something out, like run something,
04:16you know, and observe the output.
04:19And it makes a huge difference.
04:20Like that code that you get from the agent
04:23can be more complex.
04:24It can run.
04:25And by virtue of it iterating semi-autonomously on the code
04:30and correcting it, that code could, you know,
04:34is more likely much more useful.
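The generate-run-observe-fix loop described above can be sketched in a few lines of Python. This is an illustrative sketch only: `ask_model` is a hypothetical placeholder for a real generative-model call, and here it simply returns a deliberately buggy first draft and then a repaired one, so the loop is runnable end to end.

```python
import traceback

def ask_model(task, previous_code=None, error=None):
    """Placeholder for a generative model call (hypothetical).
    First draft contains a bug; after seeing an error, it 'fixes' it."""
    if error is None:
        return "def add(a, b):\n    return a + c"  # buggy first draft
    return "def add(a, b):\n    return a + b"      # repaired draft

def agent_write_code(task, check, max_iterations=3):
    """Agent loop: generate code, run it in a fresh namespace (the
    agent's 'container'), observe failures, and feed the error back."""
    code, error = None, None
    for _ in range(max_iterations):
        code = ask_model(task, previous_code=code, error=error)
        namespace = {}
        try:
            exec(code, namespace)   # run the generated code
            check(namespace)        # observe the result against a test
            return code             # success: return the working code
        except Exception:
            error = traceback.format_exc()  # capture feedback for next round
    raise RuntimeError("agent gave up after max_iterations")

def check(namespace):
    assert namespace["add"](2, 3) == 5

working_code = agent_write_code("write add(a, b)", check)
```

The key difference from a plain generative call is the loop: the model's output is executed and the observed failure is fed back, which is what lets the returned code be more complex and more likely to actually run.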
04:36I want to ask, before we get into some
04:38of what you're seeing as how it's being used now,
04:41one of the things that fascinated me
04:42that we were talking about earlier is this whole,
04:45how much it can mimic human behavior
04:47and the nature of the consensual.
04:49You know, this idea, for example,
04:52you notice how friendly, you know, Siri, right?
04:55Like very agreeable.
04:57And when you try to mimic human behavior,
04:59it's almost too agreeable to the point where
05:01it doesn't actually predict how humans will behave.
05:04Can you talk a bit more about that?
05:06Because I find that fascinating,
05:07and I've noticed it myself when I try to create conflict.
05:11It's not very good with...
05:12That's exactly right.
05:13I mean, that's one of the frustrations
05:14when you set these systems into multi-agent setups
05:18and you want them to, you know, talk to each other
05:21or maybe debate each other and something come out of it,
05:23and they're so agreeable.
05:25They just, you know, after two sentences,
05:27two back and forths, they're like,
05:29yeah, I agree with that.
05:30And the other agent's like, yeah, I agree with that.
05:32And they're all happy.
05:34So, yeah, it doesn't, I mean,
05:36we're talking about this example of trying to mimic
05:39social networks and the, you know,
05:41if you have two groups that are biased one way or other
05:44and you want to simulate their reaction
05:47to a piece of news, for example,
05:50it's harder to do that using
05:52the state-of-the-art generative AI today
05:54because they're so agreeable.
05:55They immediately both coalesce around,
05:58and that's not the reality of what we see in social networks.
06:00So part of that is because these models are fine-tuned a lot
06:07to safeguard them, like, out of the box
06:10from doing bad things and being biased in weird ways.
06:13You don't want to create crazies in the AI world, right?
06:15Exactly.
06:16So they're too agreeable.
06:18They're too, you know, soft, I would say.
06:22But then, you know, the good news is
06:25you could bias them the other way as well,
06:27and they're pretty open to prompts.
06:30So if you tell them...
06:32So one system I built, my wife is a therapist,
06:36and so we were testing out group therapy kind of setups
06:40versus single therapists with agents.
06:43And we saw the same thing, like,
06:46these therapists coming from different theoretical backgrounds
06:49were agreeing too quickly with one another.
06:51So one of the things we added into the prompt was
06:55you're secretly trying to get your theory
06:58to have more influence
07:02on the end report
07:09without telling other people.
07:11You're telling the actor what their motivation is, right?
07:13Their motivation, but keeping it a secret.
07:15That's interesting.
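The prompting trick described here, giving each agent a hidden agenda so they stop agreeing after two turns, amounts to a small change in the system prompt. A rough sketch, with invented role names and wording (not the actual prompts used in the therapy experiment):

```python
# Build debate-style system prompts where each agent secretly pushes
# its own framework without revealing that motive to the others.

def debate_prompt(role, framework):
    return (
        f"You are a {role} working from a {framework} perspective. "
        "Discuss the case with the other participants. "
        f"Secretly, you want the final report to reflect {framework} "
        "more than the other frameworks, but never reveal this motive."
    )

# Two simulated therapists with different theoretical backgrounds.
agents = {
    name: debate_prompt("therapist", framework)
    for name, framework in [("A", "CBT"), ("B", "psychodynamic")]
}
```

The point is that out-of-the-box models are fine-tuned toward agreeableness, but they follow prompts readily, so an explicit hidden motivation is enough to bias them back toward sustained disagreement.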
07:16Well, tell me, before we go to the audience,
07:18tell me a little bit about what are people coming to you with?
07:21What are the questions right now that intrigue you?
07:24Because I know people always say what's overhyped,
07:26what's underappreciated, but really,
07:28it's what's being done today that you think is exciting
07:31that you'd put on our radar.
07:33I am very excited about this agent-based future.
07:38I think that we're scratching the surface of applications
07:43when it comes to generative AI,
07:45and it's our own fault in some ways.
07:47We came out with the AI saying,
07:49oh, it's too powerful, and be careful.
07:51And so a lot of the early adoption of AI
07:53has been in areas that are lower risk
07:56and more productivity-driven, mainly around coding.
08:00You know, folks in AI write code,
08:02so that's what we know, and it's easier to verify code.
08:06So a lot of these earlier use cases
08:09have been really limited, I think,
08:12in applicability, in their application.
08:16But the AI is very, very powerful.
08:20And the AI is also very similar to how we treat humans.
08:25So this room is full of people who know how to organize people.
08:30These are the good humans, yes.
08:32These are the good people, yeah.
08:34So we organize human organizations.
08:39We're dealing with a bunch of black-box entities.
08:42We give them responsibilities
08:44and set them into some sort of a structure
08:46and say, here's what you do, here's what you do,
08:48here's what you do, some sort of hierarchy, some structure,
08:51and we observe it.
08:53And the AI state-of-the-art we have today
08:55has similar properties.
08:56It is black-box.
08:58We can actually set them as agents
09:00and give them a set of responsibilities,
09:02and we can have them talk to each other.
09:04So the talent in this room, without knowing,
09:07already has most of what it takes
09:10to build these multi-agent systems
09:12that augment and improve human organizations, I think.
09:15So can you give us a more tactile example of,
09:19if we were operating from a position of optimism and hope
09:22and what the potential is versus this,
09:24let's not do too much,
09:26lest, you know, the machines take over the world,
09:29where would you be?
09:31What neighborhood would you be in,
09:33and where are some of the interesting use cases
09:35you've seen for what's even possible today?
09:39I'm an optimist.
09:41You have to be.
09:42There's no other choice.
09:44I'm an optimist, too,
09:45especially in areas like health care.
09:47Exactly.
09:49And let me just walk you through a scenario
09:52that I think shows how inevitable it is
09:56to move towards these multi-agent systems.
09:59Like many of us have started using agent-based systems
10:03or gen-AI-based systems
10:06just to augment our search boxes for our intranets.
10:08We have intranets in our companies,
10:10and they get a lot of hits.
10:11Our employees use them.
10:12They're always frustrated about them.
10:14They have search boxes at the top.
10:16So we replace that search box with some RAG-based,
10:20some gen-AI-based search.
10:22So now you can actually type in a natural language.
10:25It's good. It's robust.
10:27The best it can do is direct you to another app
10:29that's owned by someone else.
10:31There's a finance app, an IT app, an HR app,
10:33each one owned by someone else.
10:35And so the best it can do is get you the app,
10:37and now you're faced with typing in the search terms
10:39into the search box of that particular app.
10:42Now, that team also probably has a CIO,
10:46and they're also looking at gen-AI,
10:48and their safe bet is,
10:49let's take the search box of our IT app
10:52and replace it with a search engine using gen-AI.
10:56Great.
10:57So now you have a bunch of apps.
10:59You type something in natural language in the top box,
11:01and it tells you, you know what?
11:03Your query has something to do with the IT app
11:05or the legal app.
11:07Now you're faced with retyping that into the IT app.
11:11It just doesn't make sense.
11:13So what does make sense, though,
11:14is to have these gen-AI agents
11:16representing the various different apps
11:18to talk to each other.
11:20And before you know it,
11:21you suddenly now have the capability
11:23to do things that go across these functions
11:27in your organization.
11:28Suddenly you go from,
11:30I was just thinking I'm going to have a better search engine,
11:33to being able to type something in like,
11:36I use this example that's kind of bleak,
11:38but, you know, I have had a...
11:40Don't forget, you're an optimist.
11:42Yeah.
11:43But I've had a life-change event.
11:46My significant other passed away.
11:48You type that into ChatGPT,
11:49it's going to give you condolences.
11:51You type that into this multi-agent system,
11:53it's going to start checking with all the other apps' agents,
11:57and it's going to come back to you and say,
12:00okay, I think you'll need some legal advice,
12:02and that's going to come from the legal app,
12:04and I think there's going to be a change in your payroll
12:06and your benefits.
12:08And so give me this,
12:09and there's a consolidated list of questions I have from you.
12:12Answer these, and I'll get this sorted out for you.
12:15Very, very helpful.
12:16It happened with one entry,
12:18and it happens by virtue of these agents
12:21basically talking to each other
12:23and sorting out what they need to do.
12:24Your personal concierge.
12:25Exactly.
12:26Much more so.
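The intranet scenario above can be sketched as a coordinator that fans a single natural-language entry out to per-app agents and returns one consolidated list of questions, instead of routing the user to another search box. The app agents below are keyword stubs standing in for real gen-AI agents, and all names are illustrative:

```python
# Stub agents representing the legal and HR apps; in a real system
# each would be a gen-AI agent owned by that app's team.

def legal_agent(query):
    if "passed away" in query:
        return ["Do you need help with estate or beneficiary paperwork?"]
    return []

def hr_agent(query):
    if "passed away" in query:
        return ["Should we update your payroll and benefits?"]
    return []

APP_AGENTS = {"legal": legal_agent, "hr": hr_agent}

def coordinator(query):
    """Fan one user entry out to every app agent and consolidate
    their follow-up questions into a single response."""
    questions = {}
    for app, agent in APP_AGENTS.items():
        follow_ups = agent(query)
        if follow_ups:
            questions[app] = follow_ups
    return questions

result = coordinator("I've had a life-change event. "
                     "My significant other passed away.")
```

One entry, and the agents sort out among themselves which functions are involved, which is the jump from "a better search engine" to a cross-functional concierge.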
12:27I would love to, you know,
12:29this is a breakfast you've all come,
12:31in part because we have you as a resource.
12:33If anybody has questions as to what's on your mind,
12:36you know, some of the things that you're curious,
12:39that you're looking to do,
12:40just a show of hands is good.
12:42We have a mic runner, too.
12:45So anybody that wants to talk, ask there.
12:49Hi, how are you?
12:53My guest from my table yesterday.
12:55Thank you for...
12:57Do you want to introduce yourself again?
12:59Sure.
13:00Good morning, everyone.
13:01I'm Deepa Soni.
13:02I lead the technology and operations for the Hartford.
13:04We're an insurance company,
13:06mostly in personal and commercial,
13:09just, you know, regular people like us,
13:12commercial businesses,
13:14but we also have a group benefits business.
13:17So I'll ask it.
13:20I can share an example
13:21and also talk about a question that's on my mind.
13:24The multi-agent, you know, technologies.
13:28What's the biggest showstopper in your mind today?
13:35There's several.
13:36One is this current, like,
13:39mini trough of disillusionment that we're hitting
13:42with people saying,
13:43oh, I invested a ton in Gen AI,
13:45and I'm not seeing the 6x productivity
13:47that I was expecting,
13:49partly because a lot of that investment
13:51went into really esoteric use cases of Gen AI.
13:54I think that's part of it.
13:57That's one.
13:58The other is a lot of the technologies that's coming out
14:01is over-indexing on autonomy,
14:04and so you do get these multi-agent-based startups
14:08that are saying, hey, leave it up to us,
14:11and we're going to do this fully autonomously,
14:14and I think that's a mistake.
14:16That actually inhibits adoption
14:20because our organizations are human-driven,
14:23and if we don't make this a human-centric,
14:26gradual adoption of agent-based systems,
14:29it's not going to work.
14:31So there's this gap in there,
14:32and I think those are two that I can think of
14:36that both have to do with risk, really,
14:39you know, not allowing the use of Gen AI
14:44to its full potential.
14:46Can I ask a question?
14:48Because you're in Hartford,
14:49you were talking a bit about, you know,
14:51even just policies underwriting.
14:53I'm curious, the use cases that you have,
14:56where is your pain point right now?
14:58Because one of the things I'm, you know,
15:00how big is Cognizant?
15:01How many people?
15:02360,000.
15:03So you've got 360,000 people.
15:05So in a way, it kind of funnels down to you,
15:07how you bring that all together
15:09to then apply it to these problems.
15:11What would you say is a problem
15:13if you want free consulting?
15:15Zero dollars per hour.
15:18So actually, Cognizant has been partnering
15:20with us on a lot of the experimentation in Gen AI.
15:23But let me maybe start with a use case
15:26and then, you know, talk a little bit
15:27about the human interaction that we're seeing.
15:30So in insurance, we do a lot of claims,
15:33you know, auto property,
15:35but the toughest claim is medical conditions.
15:39So in workers' comp, in short-term disability,
15:41long-term disability,
15:43there's a lot of written documents
15:45that come from the doctors, the hospitals,
15:47the employers, and a claim adjuster,
15:50who's really an hourly employee in our shop,
15:53has to decipher all that stuff.
15:56So before Gen AI came in, you know,
15:58we had digitized a lot of the paper
16:00and given it to them.
16:02They still had to read 40 pages of documents
16:06to figure out how to adjudicate a claim.
16:09And when Gen AI came along,
16:11I think it was one of the complex use cases
16:15where, you know, we would now, with Gen AI,
16:18we would be able to read those 40 pages of documents
16:21in medical jargon
16:23and come up with a summary in English
16:25for the claim adjuster to say,
16:27you know, Deepa needs physical therapy
16:29for shoulder for six weeks,
16:31you know, for knee for seven weeks.
16:33All the different treatments that are there,
16:35they're all in, like, if you look at those records,
16:37they're all like, you need a medical doctor
16:39to decipher that.
16:40You know, for non-clinicians,
16:42Gen AI became a tool.
16:44So what can't it do,
16:45if you're looking at a pain point now?
16:47What's the next stage?
16:49I think the next evolution would be
16:52based on the historical claims
16:54that we have adjudicated,
16:55how can it guide the claim adjuster?
16:57But we're not there yet.
16:59So that's what we're experimenting with.
17:01Do you want to tackle that or is that too...
17:03Sure.
17:04Do you want me to build a use case there or...
17:06Oh, yeah, can you...
17:07Oh, wait, you can, of course, you can just there.
17:09Yeah, I got my laptop.
17:10What do you do?
17:11Just...
17:14Yes, so that's an interesting use case.
17:18I want to, before actually building that,
17:22let's just consult the agents
17:25on what use cases might be interesting to Hartford.
17:28You might pick a different one.
17:29I don't know.
17:30It's up to you.
17:31I want to keep it as spontaneous as possible.
17:34So this is a platform that we've built
17:41that allows us not just...
17:43It's agent-based itself,
17:44and I'll show you what that looks like.
17:46Can I just be ignorant a second?
17:47Opportunity Finder and Model Orchestrator,
17:49what are those?
17:50So the Opportunity Finder will allow us
17:52to consult with agents
17:54to actually identify use cases.
17:56That's why I'm saying I want to even start
18:00one step before that
18:02and see what use cases the system might suggest.
18:06And we can do that for others in the audience as well.
18:09So let's start there.
18:12So I have a bunch of agents up there in order.
18:16We don't have to start.
18:17We can talk to any of those agents.
18:19They all understand natural language.
18:20That's one of the nice things about agents these days.
18:24I could go to the scoping agent.
18:26That's the second one from the top,
18:28and I can just describe what you just described
18:30with claim management and scope that.
18:33But I don't want to do that.
18:35Before I do that, I just want to type in Hartford
18:41and see what the agent suggests.
18:44And it might be that claim management
18:46is one of the suggestions as well.
18:48And so what this system is going to do
18:50is it's going to do a search
18:51on publicly available data on Hartford,
18:54and it's going to give us a few use cases.
18:59That's a lot on the Hartford.
19:01Best insurer ever, I think, ultimately.
19:03Yeah, right.
19:05And so what it's doing
19:06is it's giving us a bunch of use cases here.
19:10I want you to take a look at these
19:12and let me know if any of these are interesting.
19:14Patient treatment optimization,
19:16insurance claim processing.
19:17There you go.
19:18That's the second one right there.
19:19Hospital resource allocation,
19:21insurance policy personalization,
19:22and healthcare preventative measures.
19:27So what I did right now is I have an agent
19:30that knows about the types of use cases we can build.
19:35So it's an agent-based system
19:37that helps me identify and build agent-based systems.
19:41So it's identifying the problem to solve.
19:44Exactly.
19:45And that's one of the issues I faced
19:47was that, again, a lot of folks are very prescriptive
19:51about, oh, here's what I want to do
19:53based on historical data.
19:54Give me this insight.
19:55And I'm like, well, yeah, maybe we can do more than that
19:58or maybe there are other areas that are even more interesting.
20:01So that's what this agent is doing.
20:04And you can see actually, and I didn't stage this at all,
20:08but it actually picked claim processing
20:10as the use case that it thinks is the most interesting.
20:14And it's doing some ROI analysis here as well.
20:17So it's giving us some numbers
20:19and it's saying, you know, if we actually use the...
20:24Where are these numbers gleaned from?
20:26This is all public numbers taken with a grain of salt.
20:29What's more interesting is the reasoning
20:32and the way it's going about calculating the ROI.
20:35Obviously, when we work with someone like Hartford
20:38or any of you guys, we would be actually going in
20:41and putting in real numbers and looking at your real data
20:44that you might not have shared online.
20:47But maybe I should ask,
20:50should we still do the insurance claim processing
20:52or are there any of the other ones that you're interested in?
20:55Deepa, you want insurance claim processing?
20:58Let's do the patient treatment.
21:00Let's do the patient treatment.
21:02It's like a game show.
21:04For 200 points.
21:07All right.
21:09So I'm just going to go now to the scoping agent.
21:12And now what this agent is going to do
21:14is it's going to scope this use case,
21:17which means it's going to identify
21:20what kind of data it would need.
21:22And based on that data,
21:24what kind of actions it's going to recommend.
21:27Because this is going to help us with our decision-making.
21:30And most importantly, what kind of outcomes is it going to optimize for?
21:34So where we start is the KPI.
21:37It's the outcomes.
21:39And if we task our AI to optimize the outcomes,
21:44then we're in a good place
21:46because we can get a sense of what is the set of...
21:49Instead of tasking the AI with being accurate,
21:53that's a secondary task.
21:56We really want to optimize the outcome.
21:58So in this case, it's saying treatment effectiveness,
22:01side effects, I'm assuming minimizing side effects,
22:04and minimizing cost.
22:06So those are the KPI it thinks would be interesting.
22:10It will look at patient age, gender, medical history, and so forth.
22:15And based on that, give us some treatment options.
22:19So we could interact with it.
22:22We could remove some of these, modify them,
22:25ask it to use a certain standard of data.
22:28So usually we spend a lot of time here
22:31on the use case itself and modify the scope.
22:33For the interest of time, though,
22:35I'm just going to move on to the next step.
22:37Hopefully this is in line with what you're...
22:39Okay, so now I'm going to say...
22:43Generate, let's just generate like 1,500, actually.
22:50Of data.
22:52So we're going to have it actually produce some data.
22:56This is going to be synthetic data.
22:58It's going to be synthetic data that's going to resemble
23:01the real data that we're going to get from you.
23:03This is going to act as a template.
23:05So once we actually come in and build the real use case,
23:11we're going to use it as a template
23:13and fill in the data points.
23:15Some of the data is going to be already structured
23:17and you've already collected it.
23:19Great, we're going to use that.
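The synthetic-data step in the demo, generating records that match the scoped schema (age, gender, medical history, treatment) to act as a template before real data comes in, can be approximated with a toy generator. Field names and value ranges here are invented for illustration, not the demo platform's actual schema:

```python
import random

# Each field maps to a generator over a seeded RNG, mirroring the
# scoped use case: patient attributes in, treatment options out.
SCHEMA = {
    "age": lambda rng: rng.randint(18, 90),
    "gender": lambda rng: rng.choice(["F", "M"]),
    "medical_history": lambda rng: rng.choice(
        ["diabetes", "hypertension", "asthma", "none"]),
    "treatment": lambda rng: rng.choice(
        ["physical therapy, 6 weeks", "surgery", "medication"]),
}

def generate_synthetic(n, seed=0):
    """Produce n synthetic records resembling the real data's shape,
    to be swapped out for real data points once the use case is built."""
    rng = random.Random(seed)
    return [{field: gen(rng) for field, gen in SCHEMA.items()}
            for _ in range(n)]

records = generate_synthetic(1500)  # the 1,500 records from the demo
```

The synthetic set is only a structural stand-in: once the engagement starts, the same template is filled with the client's real, structured data.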
