Brainstorm Tech 2024: How We’re Actually Using AI

Siddhartha Agarwal, Senior Vice President, Product Strategy and Operations, Freshworks
Ely Greenfield, Chief Technology Officer, Digital Media, Adobe
Fiona Tan, Chief Technology Officer, Wayfair
Subha Tatavarti, Chief Technology Officer, Wipro
Ken Washington, Senior Vice President, Chief Technology and Innovation Officer, Medtronic
Moderator: Jeremy Kahn, Fortune

Category: Tech
Transcript
00:00:00Welcome. I'm Jeremy Kahn.
00:00:01I'm Fortune's AI editor.
00:00:03I'm also the author of Mastering AI:
00:00:05A Survival Guide to Our Superpowered Future.
00:00:07Thank you for joining us today.
00:00:09We're here to talk about how we're actually using AI.
00:00:11And I can tell by looking around the room
00:00:13that it is clearly a topic of a lot of interest.
00:00:16And I'm looking forward to diving in.
00:00:19Some housekeeping before we start.
00:00:21This session will be on the record.
00:00:23And I want to make it as interactive as possible.
00:00:25So please raise your hand, and we'll be sure to get to you.
00:00:29Make sure to state your name and company
00:00:31before asking your question or commenting.
00:00:33And we need to use these microphones
00:00:34that should be somewhere near you.
00:00:37There's a little button that sort of looks
00:00:39like somebody talking. You press that to speak.
00:00:42Please, when you're done making your comment
00:00:44or asking your question,
00:00:46hit that button again to turn off your mic
00:00:48because we can't have too many of these on,
00:00:50apparently, at the same time.
00:00:51Now, I'd like to introduce our discussion leaders.
00:00:54I will just go down the line here to my right.
00:00:57Immediately to my right is Subha Tatavarti,
00:01:00Chief Technology Officer at Wipro.
00:01:03Next to her is Siddhartha Agarwal,
00:01:06Senior Vice President, Product Strategy
00:01:07and Operations at Freshworks.
00:01:10Then we have Ken Washington, Senior Vice President,
00:01:12Chief Technology and Innovation Officer at Medtronic.
00:01:16Then we have Fiona Tan, Chief Technology Officer at Wayfair.
00:01:20And then Ely Greenfield,
00:01:22Chief Technology Officer, Digital Media at Adobe.
00:01:25And we can get started.
00:01:27I first want to go down the line
00:01:28and ask each of our discussion leaders
00:01:32in their own organization,
00:01:34what use case of generative AI have you found
00:01:36to be most transformative so far?
00:01:38And what do you think is going to be most transformative
00:01:40in the next two years?
00:01:42Subha, why don't we start with you?
00:01:44Thank you, Jeremy. Excited to be here.
00:01:47So, the nature of our business is such that, for us,
00:01:54the idea around synthesis of information is very important.
00:01:59We serve 1,400 customers across the globe.
00:02:02And for us to be able to synthesize information,
00:02:05whether it is for contract negotiations,
00:02:08for responding to RFPs, becomes quite interesting.
00:02:15And that is one area we've seen a significant impact
00:02:19in terms of time to market and time to value.
00:02:23And time to insights.
00:02:24So, all these three metrics have been quite useful,
00:02:27at least in the POC stage.
00:02:29And we are in the beginning of the process
00:02:31of operationalizing them.
00:02:33So, that's one area I think is very valuable for us.
00:02:36Great. Siddhartha.
00:02:40Yeah. So, at Freshworks,
00:02:42we have customer service software,
00:02:44we have IT service management software,
00:02:46and then we have sales and marketing SaaS software.
00:02:48We have 70 AI-powered use cases in production.
00:02:51We're using four public LLMs that power 40 of those use cases
00:02:55and 10 open-source LLMs that we tune
00:03:00that power 30 of those use cases.
00:03:02We get about 1.2 million LLM calls per day.
00:03:05And we're processing about 500 million plus tokens,
00:03:08training on 100 million plus tickets.
00:03:10So, just wanted to set that context.
00:03:11And the most transformative use case for us
00:03:14has been both on the employee service
00:03:16when you as employees ask questions
00:03:18or on the customer service side
00:03:20when customers ask questions.
00:03:23It's how the agents that provide answers to those questions
00:03:26can be assisted in the moment.
00:03:28That has seen a rapid uptake from these agents.
00:03:31Some of it has very basic things,
00:03:33for example, summarization of an IT ticket
00:03:36that multiple agents might have worked on.
00:03:38But seeing that summary when they land in that ticket
00:03:41is a big win, or tone enhancement, et cetera.
00:03:44But I think the most important ones
00:03:46we've seen that they lean in on is quality coaching
00:03:48that gets delivered to them in the moment,
00:03:50how they could deliver better answers,
00:03:52similar incident suggestions.
00:03:54So, that part of it has resulted in us
00:03:56actually making those agents even more productive.
00:03:59Even if they stop using it,
00:04:00they've learned a lot through that process.
00:04:02So, the agent assist, what people are calling co-pilot
00:04:05is the most powerful use case we've seen.
00:04:08Great, Ken.
00:04:10Good afternoon, everyone.
00:04:12First of all, for those of you
00:04:13that don't know what Medtronic does,
00:04:15we are a device, medical device manufacturer,
00:04:19and we treat more than 70 conditions in the body.
00:04:22And the AI that we use internally is,
00:04:26you can think of it in multiple layers.
00:04:27In the first layer, it's like many of you are using it,
00:04:31it's for productivity enhancement.
00:04:32We have over 12,000 engineers and medical professionals
00:04:36working on those therapies and those devices.
00:04:39And so, they all have at their fingertips now
00:04:42an internally hosted version of ChatGPT
00:04:44we call Medtronic GPT.
00:04:46And it's boosting our productivity.
00:04:48We use it to do intake on customer calls
00:04:51as well as support our scientists and engineers
00:04:54in formulating their ideas and developing the software
00:04:57that powers our devices.
00:05:00But at the higher layers in the stack,
00:05:02you can think about AI as being an enabler
00:05:04and a supercharger for making our medical devices
00:05:08smarter and better.
00:05:09We have six FDA approved therapies
00:05:12that are powered by artificial intelligence today.
00:05:15And one of the first ones was a colonoscopy tool
00:05:18called GI Genius that helps medical professionals
00:05:21conducting colonoscopies avoid missing polyps.
00:05:25Even on their best day, doctors performing colonoscopies
00:05:31miss one out of four polyps.
00:05:33And so, here's the lesson.
00:05:35If you wanna get a colonoscopy,
00:05:36get it early in the morning
00:05:37because they miss fewer earlier in the day.
00:05:40But if you really wanna get the best service
00:05:42and the best therapy,
00:05:44get it from a physician who uses the GI Genius tool
00:05:47because it reduces the missing of polyps
00:05:50by more than 50%.
00:05:52And so, that's one of our tools.
00:05:54We have several others I can talk about later.
00:05:57But at the end of the day,
00:05:58we see a future where AI will improve workflows
00:06:02in hospitals and in clinical settings
00:06:04as well as be an enabler for improving diagnostics
00:06:08and improving outcomes for patients
00:06:10because it's making the therapies better, smarter,
00:06:12and it's making diagnoses more accurate.
00:06:15Thanks, Ken.
00:06:16Fiona, let's talk about Wayfair
00:06:18and where are you seeing the biggest impacts?
00:06:21Yeah, so Wayfair, if y'all don't know us,
00:06:24we are the leading online retailer of all things home goods.
00:06:28So, as you can imagine,
00:06:29it's a very emotive, personal, stylized-based category,
00:06:33very difficult to sell online, also fulfill online.
00:06:36So, some of the areas where we've seen good traction,
00:06:39we are on the second generation
00:06:40of our customer service and sales co-pilots,
00:06:42very similar to what Siddharth had mentioned.
00:06:45And then on the customer experience side,
00:06:47one of the things that we're also working on
00:06:49is sort of a text-to-image,
00:06:50using more of the image generation capabilities
00:06:53to help our customers ideate
00:06:55and see their room in eight different styles
00:06:58that you may or may not even have heard of.
00:06:59So, a lot of the visual stimulation
00:07:02that's required for our category,
00:07:03so that's another area that we've been investing in.
00:07:06And I think, honestly,
00:07:07from a transformation and ROI perspective,
00:07:10and this is very much under the covers,
00:07:12but I think maybe pertinent to a lot of you
00:07:14that are in the tech space,
00:07:14it's really addressing some of our tech debt,
00:07:16some of the most painful digital transformation problems
00:07:19that we've had.
00:07:20If you think about databases, coding languages,
00:07:23monolithic code bases,
00:07:24they're actually very well suited for LLMs, right?
00:07:29So, we've been able to take some of our most gnarly
00:07:32technology transformation problems,
00:07:34break them down,
00:07:35and actually accelerate the work using LLMs.
00:07:39So, we work very closely with Google on this,
00:07:41because I think they also recognize it's something
00:07:42that they could probably take to other clients.
00:07:45But I'd say that is certainly not so sexy,
00:07:47way down in the bowels,
00:07:48but there's some really good capabilities
00:07:52that we've been able to unlock there.
00:07:54That's great.
00:07:55And Fiona, is that sort of updating a code base?
00:07:57What's the...
00:07:58Yeah.
00:07:59So, it's two things.
00:07:59One is monolithic databases.
00:08:02So, understanding what the databases are like when,
00:08:05in most cases, the folks who built them have long since left the organization.
00:08:08So, how do you take that, document it,
00:08:10understand what it is,
00:08:12and then break it down?
00:08:13You can have the LLM help break it down,
00:08:15and then you look at what your target state is.
00:08:16For example, we use REST APIs and GraphQL.
00:08:19How do you generate the code components for that?
00:08:21So, it's still a multi-step process.
00:08:22I won't say that you can just wave a magic wand,
00:08:24and your ginormous database goes away,
00:08:28but you can break those steps down,
00:08:30and it can accelerate every one of those steps.
00:08:32It's similar from the code base perspective, right?
00:08:35For most of us, when you have this,
00:08:36it's monolithic; it's the code base
00:08:38and the database together.
00:08:40So, you can use the same approach there as well.
00:08:42So, happy to chat more about it later.
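
To make one step of that multi-step process concrete, here is a hedged sketch of handing a legacy table definition to an LLM and asking it to document the table and propose target-state API components. The DDL is invented, and `call_llm` is a placeholder for whatever model endpoint is in use; this illustrates the pattern Fiona describes, not Wayfair's actual pipeline.

```python
# Illustrative only: the table and prompt are invented for this example.

LEGACY_DDL = """
CREATE TABLE ord_hdr_v2 (
  oh_id NUMBER(12), cst_no NUMBER(10), shp_dt DATE,
  sts_cd CHAR(2), tot_amt NUMBER(14,2)
);
"""

PROMPT = f"""You are helping decompose a monolithic database.
1. Explain in plain English what this table appears to store.
2. Propose descriptive names for each column.
3. Emit a GraphQL type and a REST resource path for this entity.

{LEGACY_DDL}"""

# suggestion = call_llm(PROMPT)  # hypothetical client; an engineer reviews
#                                # the output before anything ships
```
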
00:08:44Great.
00:08:44Ely, let's talk about Adobe.
00:08:46Sure.
00:08:47So, we're using AI to transform everything we do,
00:08:50both inside and outside Adobe.
00:08:53I'll start with the easiest to adopt one,
00:08:55in some ways the most boring one,
00:08:56which is we have thousands and thousands of developers.
00:08:59Our goal is to have every one of them
00:09:02use some version of the many different options
00:09:05out there for code assistance,
00:09:07AI-driven code assistance for our development.
00:09:10I forget what the numbers are now.
00:09:12We're seeing a pattern in terms of
00:09:14lots of easy adoption by initial adopters,
00:09:17and then some of our code bases are 40 years old.
00:09:19So, it may be a little bit harder
00:09:23for some of these AIs
00:09:24to understand how some of those code bases work,
00:09:26but the people who are adopting it so far,
00:09:28we see massive gains in their productivity
00:09:29and just their ability to move quickly through
00:09:31and get the act of producing code done quicker
00:09:34so that they can focus on the creative piece.
00:09:36The second area for us is digital marketing.
00:09:39Adobe, for those of you who don't know,
00:09:40has two major businesses.
00:09:41One is our creative side, which is Creative Cloud,
00:09:43Adobe Express, Adobe Acrobat,
00:09:46and then our digital marketing side,
00:09:48which is the Digital Experience Cloud,
00:09:50which is the SaaS products we put out there
00:09:52to help enterprises manage
00:09:53their customer digital experience.
00:09:55We are customer zero for that.
00:09:56We have one of the more sophisticated
00:09:57digital marketing organizations in the world.
00:10:00They've been using AI for a decade
00:10:03to drive a lot of the work we do,
00:10:05but with the rise of generative AI,
00:10:07as we are doing lots of creative work on the creative side
00:10:10and lots of marketing work and leveraging the new AI,
00:10:12every idea we have about how we can accelerate our ability
00:10:16to create content at scale, to create effective content,
00:10:19to get it out to the right people at the right time,
00:10:21to understand the customer's next best action,
00:10:24we develop all of those and deploy them internally
00:10:26piloting them with our own employees,
00:10:29our own marketing department.
00:10:31And then lastly, the most obvious one
00:10:32is all the work that we're doing in our products,
00:10:34getting AI directly into the hands of our customers.
00:10:36So a little over a year ago,
00:10:39we introduced Adobe Firefly,
00:10:40which is our first foray into creative generative AI.
00:10:44We started with images.
00:10:46We're deploying video technology,
00:10:48vector technology, design technology.
00:10:50So, as you saw in the great session
00:10:52with Chris and Jake earlier today,
00:10:56generative AI is dramatically changing
00:10:58the creative landscape
00:10:59and the way creative people produce work.
00:11:02And we are driving hard, investing heavily,
00:11:05in transforming our products
00:11:07and transforming our customers' workflows on that.
00:11:09On the Acrobat side, hundreds of millions,
00:11:11I don't even know the numbers,
00:11:12of PDFs are opened and read every day
00:11:15by people all around the globe.
00:11:16All of those people are trying to consume
00:11:18mostly unstructured content.
00:11:20This is what LLMs are built for.
00:11:22And so putting AI directly into Acrobat
00:11:25so that it's right there at the point of integration
00:11:28has been dramatically successful
00:11:31in helping people actually consume long form documents
00:11:33that they don't have the time
00:11:34to go through and read every word of.
00:11:35And then on the marketing side,
00:11:36all those guinea-pig features that we build
00:11:39for our marketers go into our products
00:11:40for our customers as well.
00:11:42So across the board, I think the biggest impact for us
00:11:44is those things where we're taking generative AI
00:11:46and pairing it with all the value
00:11:48we already deliver to customers
00:11:50in order to give them the best combination
00:11:52of those two worlds.
00:11:53Great, that's fantastic, Ely.
00:11:55I wanna ask, before we open this up,
00:11:58when I talk to companies about trying to implement
00:12:00particularly generative AI solutions,
00:12:03and if you can get them to talk off the record a little bit,
00:12:05they complain about two things.
00:12:06One is reliability.
00:12:08They're a little frustrated
00:12:08that the technology's not more reliable.
00:12:10And the other is cost.
00:12:11That yes, there are lots of ways
00:12:13you can use many different models in sequence
00:12:17to improve reliability,
00:12:18but then you're upping the cost
00:12:19because you're running many, many more tokens on each query.
00:12:23And I'm curious how you guys are dealing
00:12:24with both of those issues within your organizations.
00:12:27And maybe we'll start with Ely
00:12:28and go back down the row this way.
00:12:30If you could just quickly sort of give us a sense
00:12:32of how you're addressing both reliability
00:12:34and the cost issue.
00:12:36Sure, so there's a great quote from one of our employees,
00:12:41one of our senior scientists
00:12:43that we've talked about putting on T-shirts,
00:12:44which is, you know, AI is not a customer problem.
00:12:46And here, a customer is either external or internal.
00:12:49You know, we're tackling this
00:12:50by essentially looking at this like any other technology
00:12:53and asking, okay, what is the problem we're trying to solve?
00:12:55And by the way, if it's a new problem,
00:12:57then that's a red flag, right?
00:13:00It's probably, we should look at problems
00:13:01that we already know are out there.
00:13:02We just didn't have a good solution for them before.
00:13:04This might be the right solution.
00:13:06And then we're asking exactly the questions you ask
00:13:08about, okay, the accuracy is not 100%.
00:13:10We need to know that going in.
00:13:12The cost is expensive.
00:13:13We need to know that going in.
00:13:14So just the first question is trying to do an early triage
00:13:19on what problems do we want to tackle?
00:13:20And can we tell upfront whether or not those factors
00:13:24you're describing work for the problem
00:13:27that we are tackling?
00:13:28Beyond that, you know, I think Sid mentioned it.
00:13:33Part of this is there's a lot of good solutions out there.
00:13:36If you're looking at LLMs,
00:13:37there's the big giant models
00:13:38provided by the big hyperscalers.
00:13:40There's great small mini models.
00:13:42There's open source.
00:13:43And so it's about, you know, the first step is,
00:13:47can this be solved by AI?
00:13:49And the second is, how do I fine tune the solution
00:13:52to the right technology for the right problem?
00:13:56Great.
00:13:57Fiona, have you sought to address these two issues?
00:14:00Yeah, no, I think that's right.
00:14:01I think the first thing is, you know,
00:14:02really looking at your use cases.
00:14:04And one of the things we kind of came up with
00:14:06was a matrix to help identify if generative AI
00:14:10or if it's more predictive ML is a better solution
00:14:14for the problem sets, right?
00:14:15So we have a small AI council that helps look at that.
00:14:17And then one of the things we look at as well
00:14:19in conjunction with the ones we've identified that
00:14:21is that, you know,
00:14:23sometimes you also need to change the metrics,
00:14:26your KPIs, around the solutions
00:14:28if you're gonna adopt generative AI.
00:14:30You know, for example,
00:14:31Sid mentioned the customer service use case.
00:14:34What we found when we first rolled it out
00:14:36is that if we used our old measures
00:14:39on how our agents were sort of evaluated,
00:14:42they ended up spending a lot of their time
00:14:44adjusting the output, which was actually very
00:14:48good quality, et cetera,
00:14:49because they wanted it in their own voice, right?
00:14:51So you had to explain to them that,
00:14:52okay, we're gonna change the metrics
00:14:53and your job is to make sure it's factually correct
00:14:58and then that's it, don't mess with it
00:15:00because you end up taking up actually more time
00:15:03making the edits to make it sound like them, right?
00:15:05So those are some things I think you need to kind of look at
00:15:08and then the other thing I would say is also
00:15:10in terms of if you're putting it out in front of a customer,
00:15:13where are areas that you could do this
00:15:15where they're gonna be more forgiving?
00:15:17So, you know, when I talked about our image,
00:15:20so text to imagery, sort of ideation,
00:15:23realization kind of customer experience
00:15:26that we put in front of our customers,
00:15:27they were okay, they were fine
00:15:29if the chair leg hallucinated a little bit
00:15:31and was maybe a little askew
00:15:33or it messed a little bit with some of the stuff
00:15:37in their room, maybe put a little ball in there
00:15:38because they just appreciated the ability
00:15:41to see their rooms in different styles.
00:15:43So, you know, in a way it's like you put that out there
00:15:45and the hallucinations would be more tolerable
00:15:48because you have a customer base that understands that,
00:15:50hey, this is still really good
00:15:52but I'm not also gonna be too upset
00:15:54if there's a slight tweak in the leg.
00:15:56So I think look for those,
00:15:57because those are ones
00:15:58that you could probably put out there
00:15:59and just get the experience and some of the learnings early on
00:16:02without the concern around, you know, hallucinations
00:16:05because that's still a thing.
00:16:06That's a good point.
00:16:07I mean, some of these sort of ideation, you know,
00:16:10use cases of course, like the hallucination,
00:16:12you can be much more forgiving of that
00:16:14and it kind of makes sense.
00:16:15It's not only forgiving, it's a feature.
00:16:16Yeah.
00:16:17Yes.
00:16:17For most of our customers, that's what they want.
00:16:18One should always have five legs.
00:16:19Right, right, yeah.
00:16:21A seven-legged chair is a great thing.
00:16:23Jeremy, can I add one more comment?
00:16:25Yeah, sure.
00:16:25One thing I'll say is I wouldn't worry too much
00:16:28about prematurely optimizing around cost.
00:16:31I think the trick is the dynamics of this world
00:16:33is changing so fast that step one is get in
00:16:37and figure out whether you can solve a problem
00:16:38with the AI today before you think about cost.
00:16:40If you can, then you can either, you know,
00:16:43before you scale up, you can figure out how do you optimize
00:16:45by focusing on different models, swapping out,
00:16:48getting some smart people to go distill a model
00:16:50or wait a month, wait a week, honestly.
00:16:53Like these things are getting better,
00:16:54they're getting cheaper.
00:16:55So get involved in finding the solutions now
00:16:58and then worry about the cost.
00:16:59Great.
00:17:00Ken, I want to get to you on those two questions
00:17:01on cost and reliability.
00:17:03Yeah, well, I'm going to tag off of Fiona's comment
00:17:05about hallucinations and, you know, being more tolerant
00:17:08because I work in a space where,
00:17:10you know, you can't tolerate hallucinations.
00:17:13And that's why you don't see the use of generative AI
00:17:17in medical technology development.
00:17:19There are no generative AI, FDA-approved therapies
00:17:22or diagnostic tools.
00:17:24They're all based on deep learning
00:17:27and machine learning.
00:17:29And that's a fantastic AI application
00:17:33for making a better outcome
00:17:35and creating better therapies for patients.
00:17:39And so we know that there's a lot of work to do
00:17:42to get generative AI to the point
00:17:44where you can reduce hallucinations
00:17:47and you can have the controls in place
00:17:48so that you could actually put it
00:17:50through the rigorous FDA approval process
00:17:52so that you could deliver it as an AI-powered therapy.
00:17:56So the example I gave you earlier
00:17:58was not generative AI, it was based on deep learning models.
00:18:02And there are others that are based on deep learning models
00:18:04that we've brought to the market.
00:18:07And so you'll have to figure out
00:18:09how do you solve problems with, you know,
00:18:11dealing with the inherent probabilistic nature
00:18:13of generative AI before you can take it
00:18:16through that regulatory process
00:18:18and through the security and safety process.
00:18:21And the other factor is a lot of the applications,
00:18:24even internal in terms of productivity,
00:18:27that we want to use generative AI for
00:18:29to actually optimize it for our use,
00:18:31we need to, like, ingest proprietary data
00:18:34into these large language models
00:18:36if we're using generative AI for therapeutic development
00:18:39on the R&D side.
00:18:41And today that's challenging.
00:18:44And so we're working through how do you build micro-LLMs
00:18:48that can ingest proprietary data
00:18:50that we don't want to expose to the whole world
00:18:52so it's not part of some hyperscaler model.
00:18:55And so there's a give and take
00:18:57between large language models and these micro-LLMs.
00:19:01And so that's how we think about it is, you know,
00:19:04everything is not all about generative AI.
00:19:07One of the great things about the generative AI revolution
00:19:10is it's opened our eyes to other applications
00:19:12for deep learning and machine learning AI solutions
00:19:16because they're perfectly good ways to bring innovations
00:19:20to really important problems, you know,
00:19:23like medical technology.
00:19:25And if I can just share one, you know,
00:19:28very clear, simple example that shows you the power of this
00:19:31is in our spinal surgery product,
00:19:34we take the scan of a spinal patient
00:19:38that needs to have spinal surgery to straighten their back
00:19:40and to restore their life.
00:19:43And historically, that scan would be looked at
00:19:46by a very experienced spinal surgeon
00:19:50and they would very meticulously go through the design
00:19:52of a rod that's bent to custom to that patient's back
00:19:56and then they would place the screws
00:19:58in each location on their back
00:20:01and then go through a multi-hour surgery
00:20:03of drilling the screws in place
00:20:05and placing the rod in the patient
00:20:08and then sewing them back up.
00:20:10Today, that is totally revolutionized
00:20:12by using AI technology.
00:20:14We take the scan, put it into our StealthStation technology
00:20:18and it plans the actual shape of the rod automatically,
00:20:22sends a plan to the robot that places the screw
00:20:27in the exact right location
00:20:29to take the cognitive load off the doctor.
00:20:31The reason I'm sharing that with you
00:20:33is because this is an example of how deep learning
00:20:35and machine learning work together with a trained clinician
00:20:39to totally revolutionize an extremely important therapy
00:20:43that's changing the lives of thousands of people
00:20:44every day, every year.
00:20:47Thank you, Kenneth.
00:20:49I think it's a good point to make
00:20:51is it's not all about generative AI
00:20:53and certainly in some of these
00:20:54very high consequence areas like medicine,
00:20:57you really need to make sure your solutions are accurate
00:21:01but that there's a possibility for AI technology
00:21:03more broadly to be really revolutionary.
00:21:06Maybe you could address specifically the cost side of this
00:21:09because you mentioned all the different models
00:21:11you're sort of experimenting with and playing with.
00:21:13How have you done that
00:21:14and yet not had cost just balloon on you?
00:21:17No, absolutely.
00:21:18So, when Adobe says or Eli says,
00:21:20cost, don't worry about it, we're a little company.
00:21:22We're a public company, we're a little company
00:21:24and cost is a little important for us.
00:21:27So, what we have to do is
00:21:29we've actually created a model garden
00:21:31inside our engineering team.
00:21:33So our cloud engineering team has created a model garden.
00:21:36They've made 40 to 60 different models
00:21:38available to developers where developers can say,
00:21:41okay, here's my use case and go experiment with the model
00:21:44and it spits out the SLO around it:
00:21:47here's the cost profile for it,
00:21:49here's the latency profile for it,
00:21:51here's the performance for it.
00:21:52Now you can decide if that's the right thing
00:21:55and you can also decide whether you wanna use GPT-3.5
00:21:57or GPT-4, or you wanna use some other model.
00:22:00So it gives them the ability to experiment
00:22:02and optimize on that cost curve.
00:22:04But what I would say is what companies like us
00:22:07would ideally want, which might be for the startups
00:22:10that are out here, is we give you the use case
00:22:13and we give you an SLO around performance,
00:22:15latency, and cost.
00:22:17And you pick and choose the model.
00:22:19I've had discussions with Azure, OpenAI,
00:22:22with Google and GCP, and with Amazon, AWS,
00:22:26to say, can you guys build that into your platform?
00:22:29I don't need to build that, right?
00:22:30But today I am building that into my platform.
00:22:33So I think that that's how we think about cost.
00:22:35What was their reaction?
00:22:36Were they with you?
00:22:37No, they're like, yes, we wanna do it.
00:22:38We don't know how to understand your use case.
00:22:41So there's something about how we all need to think
00:22:44about collaborating and partnering together
00:22:46on how do we get them to understand our use case.
00:22:49But I did wanna say something about reliability,
00:22:50if that's okay.
00:22:51Oh, yeah, very quickly.
00:22:52Yeah, very quickly, okay.
00:22:53So reliability at the end of the day comes down
00:22:56to where's the data that your models are being trained on?
00:22:59And there are two sets of data we use.
00:23:01One is all the ticket data, the conversation data,
00:23:03but two is federating search
00:23:06across our customers' enterprises
00:23:08and leveraging the data in that enterprise
00:23:10so they don't have to put all that stuff
00:23:12into the knowledge base.
00:23:13So now when we're providing responses,
00:23:15those responses are grounded in that data.
00:23:18Two, we cite the sources so an admin can see,
00:23:21hey, all these responses are provided.
00:23:22Here's where those responses came from.
00:23:24And lastly, we put the human in the loop.
00:23:27So for example, when the agent is providing the response,
00:23:29that is putting a human in the loop,
00:23:31even though we suggested what that response is gonna be.
00:23:33So that's how we drive towards reliability, accuracy.
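
A compressed sketch of that grounding pattern, with `retriever` and `call_llm` as placeholders standing in for federated enterprise search and the model call (not any specific product's API):

```python
# Sketch of the pattern described above: ground in retrieved enterprise data,
# cite the sources, and keep a human in the loop before anything is sent.

def answer_with_citations(question, retriever, call_llm):
    docs = retriever.search(question, top_k=5)  # federated enterprise search
    context = "\n\n".join(f"[{i}] {d.text}" for i, d in enumerate(docs))
    prompt = (
        "Answer using ONLY the numbered sources below and cite them like [0]. "
        "If the sources are insufficient, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    draft = call_llm(prompt)
    # Human in the loop: the agent reviews and edits the draft before sending,
    # and an admin can audit which sources each response came from.
    return {"draft": draft, "sources": [d.uri for d in docs]}
```
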
00:23:36Great, thank you.
00:23:38Subha, I wanna get to you again on both these questions,
00:23:42reliability and cost.
00:23:44How's Wipro looking at it?
00:23:45So what's interesting for us is,
00:23:48again, I'll side with Sid in this case,
00:23:51cost is very important to us.
00:23:52We deal primarily in four different categories
00:23:57of use cases for generative AI,
00:23:59content gen, contract gen, code gen, and RPA 2.0.
00:24:04And the reason it's so broad
00:24:05is because we serve every vertical in the world,
00:24:10which is great because then we are able to connect dots.
00:24:14Cost of service becomes very important.
00:24:16So we also have an internal platform
00:24:19which orchestrates across closed
00:24:21as well as open source models.
00:24:24What we've also done is we're able to then create agents
00:24:27to give context to how information is processed
00:24:32and data is sent to LLMs.
00:24:35So that's second.
00:24:36And the third area for us is around RAG optimization.
00:24:40And this is an area that we've been heavily investing in
00:24:42in the last couple of months.
00:24:45Because what we've realized is,
00:24:48based on these four broad categories of use cases,
00:24:52how you collect data, how you chunk it,
00:24:57how you store it, how you index it,
00:25:00and how you then ground it
00:25:06before it's sent to the models is very critical.
00:25:09And the optimizations on both reliability and cost
00:25:15are heavily dependent on that RAG.
00:25:20And early benchmarking, at least from what we've done
00:25:23in the last couple of months,
00:25:25has given us very encouraging results.
00:25:30In terms of the quality of output
00:25:33and reduced hallucinations,
00:25:36as well as the references we provide;
00:25:41the references can be documentation,
00:25:44can be audio files, could be video files,
00:25:47could be any data that resides in your enterprise.
00:25:51So that becomes very powerful:
00:25:53how we are leveraging commodity models,
00:25:56in terms of LLMs, in conjunction
00:25:58with what we have done internally
00:26:00in terms of developing a very optimized RAG.
00:26:03And then making sure that we have those agents
00:26:06that give you context for these four different use cases.
00:26:10So it is a little bit more involved and evolved
00:26:13than just using LLMs.
00:26:15It's about how do we take what is commoditized already
00:26:18out there and be able to implement for various use cases.
00:26:22The other point I would want to make
00:26:24is we are very excited about not just the open source LLMs
00:26:27but also the small language models.
00:26:31Because of cost of hosting,
00:26:33we are quite hyper-focused on that,
00:26:36especially if we can host them
00:26:38or rather serve them through commodity CPUs.
00:26:42It really reduces our cost of serving,
00:26:45which then can translate into our customer
00:26:49cost savings as well.
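
For readers who want the shape of the pipeline being described, here is a toy sketch of the chunk-index-retrieve-ground stages. A real deployment would use an embedding model and a vector store instead of this keyword index, and every knob here (chunk size, overlap, top-k) is an illustrative assumption to be tuned per use case:

```python
# Toy illustration of chunk -> index -> retrieve -> ground.
from collections import defaultdict

def chunk(text, size=500, overlap=50):
    """Split text into overlapping windows; size and overlap are tunable."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

class TinyIndex:
    """Keyword index standing in for a vector store, for illustration only."""
    def __init__(self):
        self.postings = defaultdict(set)
        self.chunks = []

    def add(self, text):
        cid = len(self.chunks)
        self.chunks.append(text)
        for word in set(text.lower().split()):
            self.postings[word].add(cid)

    def search(self, query, top_k=3):
        scores = defaultdict(int)
        for word in query.lower().split():
            for cid in self.postings.get(word, ()):
                scores[cid] += 1
        best = sorted(scores, key=scores.get, reverse=True)[:top_k]
        return [self.chunks[cid] for cid in best]

doc = ("Contracts auto-renew annually unless cancelled in writing "
       "sixty days before the renewal date. ") * 30  # stand-in corpus
index = TinyIndex()
for piece in chunk(doc):
    index.add(piece)

grounding = index.search("cancelled renewal terms")
# `grounding` would be prepended to the LLM prompt so the answer is
# grounded in (and can cite) enterprise data rather than model memory.
```
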
00:26:51That's interesting.
00:26:52Fantastic.
00:26:53Thanks, Subha.
00:26:53Let's open this up to questions from the room.
00:26:56Please raise your hand if you have a question
00:26:58and I will come to you and call on you.
00:26:59I see someone down here.
00:27:01Why don't you go ahead and say who you are
00:27:04and state your question or comment.
00:27:06Yeah, hi, my name is Sunil.
00:27:07I'm the founder of an early stage AI company called Hamlet.
00:27:11My question, Eli, you mentioned the sales
00:27:13and marketing use case.
00:27:15My question sort of for all of you is,
00:27:17are you using AI in your own outbound efforts
00:27:21for sales vis-a-vis, let's just say training and LLM
00:27:24to emulate the behaviors
00:27:25of your best enterprise sales rep or your best SDRs?
00:27:29And part two, are they aware of that
00:27:32and has it changed their behaviors at all?
00:27:35So, Siddharth, you've got your hand up.
00:27:36Do you want to answer?
00:27:37And Ely, we'll come down the row.
00:27:39Oh, was this for Ely?
00:27:40Sorry.
00:27:40Well, no, you go first and then we'll go to you.
00:27:42Okay, so absolutely.
00:27:44In our case, what we've done is we've actually created
00:27:46an expert who understands generative AI
00:27:49and is totally focused on building models,
00:27:51meaning responses, battle cards, et cetera,
00:27:55that people can engage with,
00:27:56the sellers can engage with just by talking to it
00:28:00as opposed to going and learning things, et cetera.
00:28:02And we've actually made that available publicly
00:28:05and it's become really popular.
00:28:06The sellers are very aware that they're using generative AI,
00:28:09but they love it because it's so much easier
00:28:11and they can get in the moment.
00:28:13As they're having a phone call,
00:28:15they're being able to have that conversation
00:28:17and they're being prompted with the real capabilities,
00:28:20differentiators, whatever it might be
00:28:22that they could leverage.
00:28:23So definitely we're using that.
00:28:26And what about the second part of the question?
00:28:27Do the top performers sort of know
00:28:28that their data is being used to train that model?
00:28:33Well, given that they work for us, it doesn't matter.
00:28:36We can tell them that, yeah, your data is being used.
00:28:40I mean, all calls get recorded in a lot of SDR organizations
00:28:44so I don't think there's any concern there
00:28:46about them knowing or not knowing,
00:28:48but I can tell you they don't know.
00:28:50They know now.
00:28:51There you go.
00:28:52Oh, on the record.
00:28:54Ely, how about your organization?
00:28:57Yeah, no, I think we use it in our customer support side.
00:29:00I don't think we use it right now in our sales side at all
00:29:03that I'm aware of, so we haven't gotten to that part yet.
00:29:07We use it in both service and sales,
00:29:09but it's still in a co-pilot mode.
00:29:11So the agents are essentially getting, for example,
00:29:13if they're gonna represent a product,
00:29:15it'll be able to provide all the relevant information
00:29:18to them so they have it at their fingertips.
00:29:21There is that interesting element though around,
00:29:22it does really well in terms of getting new salespeople
00:29:26or new service folks up to a really high level very quickly,
00:29:31but our top sales folks, our top service folks
00:29:33still perform better
00:29:35than sort of what the model generates.
00:29:36So there was some thought around potentially using
00:29:40kind of their responses to your point
00:29:42around training the system and making it better,
00:29:44but at this point, we're still very much
00:29:46in a co-pilot type of situation for that.
00:29:48I think the other thing to also consider
00:29:50is that as it gets better and better,
00:29:51you might also now see how much you can move
00:29:54towards a completely autonomous interaction.
00:29:57So that's the sort of like part two of our efforts
00:30:00is some of the more autonomous interactions,
00:30:03can we do more in those as well?
00:30:05So you still have the service agent co-pilots,
00:30:07but then you have also more self-serve,
00:30:09autonomous interactions with the bot directly.
00:30:13Subha, I know you wanted to say something and then Ken.
00:30:15Yeah, go ahead first.
00:30:18So this is a hot topic question.
00:30:20So I would love to kind of...
00:30:23So we also, we run one of, I don't know,
00:30:26not many people know this,
00:30:27but we also have a fairly large platform business.
00:30:29One of the platforms is airline business
00:30:31where we serve five, six large airlines in the world,
00:30:36both cargo operations and people operations
00:30:38and airline or plane operations.
00:30:40So one of the areas where we've seen
00:30:43from a service perspective is internal service,
00:30:46which means that we are able by doing what we did,
00:30:50what I just described to you earlier
00:30:51about optimizing on the rag,
00:30:53the responses have become more and more accurate.
00:30:56And then now we are integrating it
00:30:58with that kind of operations.
00:31:00So our time to insights and time to value
00:31:03for our airline operators
00:31:07has become quite good in terms of metrics,
00:31:10and the time has gone down significantly.
00:31:12So that's where we've seen
00:31:13a significant amount of improvement.
00:31:15But then those are not closed loop kind of conversations.
00:31:21These are open, which means I have a question,
00:31:23then I get an answer back.
00:31:24Closed loop, I think is still a year at least out
00:31:27where you can actually have actions
00:31:29without human necessarily in the loop.
00:31:31So that's where we are.
00:31:32Subha, we should talk.
00:31:33You should use Freshworks.
00:31:35All right.
00:31:37Ken, I know you wanted to say something.
00:31:39Yeah, I just wanted to take a slightly different twist
00:31:41on your question
00:31:42because I think it's an important use case.
00:31:46So we do use generative AI for customer service
00:31:50and what we call our complaint handling line
00:31:54and it's added value.
00:31:55But where we're seeing the highest impact
00:31:57for AI in the customer facing space
00:32:00has been in training of medical professionals
00:32:03about how to use our products.
00:32:05And what we're doing there is we have a product
00:32:09that captures video of how doctors and clinicians
00:32:13actually use our medical technology.
00:32:16And it can actually capture a video
00:32:18of a doctor performing a surgery.
00:32:21And that video can be then used to train future physicians
00:32:26of how to do that same surgery
00:32:28or the same physician of how to do it
00:32:31even better the next time.
00:32:32And we're finding that to be an extremely valuable tool
00:32:36for helping our customers both adopt our technology
00:32:42and to use it more effectively,
00:32:44but also in training physicians
00:32:46in how to perform these life-saving therapies.
00:32:49And the AI part of it is it captures not just the raw video
00:32:53but it actually does semantic segmentation
00:32:57and adds intelligence about what was doing what
00:33:01during the surgery.
00:33:02So it's an extremely valuable tool to these physicians.
00:33:06That's really interesting.
00:33:07We've got a question here.
00:33:09Please state your name and where you work.
00:33:12Venk Varadhan, founder and CEO of Nanowear.
00:33:14We're a startup doing healthcare-at-home remote diagnostics.
00:33:18So my question, you guys are all in different industries.
00:33:22Regulation and standards, two points.
00:33:25I'd love to just get a quick snippet
00:33:26on what you guys and your organization's discussions are
00:33:30with Capitol Hill.
00:33:32I'm familiar, Ken, with what Medtronic's doing
00:33:36because a lot of it is case-by-case in healthcare
00:33:39and some industries are gonna have more regulation than not.
00:33:43So curious on you guys' discussion.
00:33:45And then the second question is,
00:33:46a lot of this unfortunately is being led
00:33:48by the large cap companies.
00:33:51And there's a fear in the startup community
00:33:53that regulatory capture,
00:33:55that there's a reason that everybody in the large side
00:33:57wants DC to be doing something
00:34:00because that'll create more moats for the bigger companies
00:34:03that have all the data
00:34:05from an unstructured or structured standpoint
00:34:07to be the winners.
00:34:09Well, if I could start.
00:34:10Yeah, Ken, that's fine.
00:34:11Because I know Venk knows,
00:34:12but everybody else in the room probably doesn't know.
00:34:14And I think it's a good template,
00:34:15which is the regulatory bodies,
00:34:18they need to hear from industry.
00:34:20And they need to hear from industry,
00:34:21not from like individual companies,
00:34:23although that's useful.
00:34:25They need to hear from industry through consortia,
00:34:28through groups, through industry groups.
00:34:33In the medical case, it's the industry we call AvaMed.
00:34:37And this is a place where you can,
00:34:39in a non-compete kind of fashion,
00:34:41bring the thought leaders from one industry together
00:34:45to then inform the regulators about,
00:34:48what are you doing with this technology?
00:34:50How are you using it responsibly?
00:34:52What are the gaps in the regulatory landscape?
00:34:54How should they be educating themselves
00:34:57about the technology and its usefulness
00:35:00in the domain of interest and so on and so forth?
00:35:03If you're not engaged in that kind of a dialogue
00:35:05in your industry, then I would encourage you to get engaged.
00:35:09Because those conversations are moving the needle
00:35:11and extremely important to have.
00:35:14Ely, are you guys up on Capitol Hill lobbying?
00:35:16We spend a lot of time on Capitol Hill.
00:35:17We work very closely with the regulators,
00:35:20with the lawmakers in the US and the EU around the world.
00:35:25I echo Ken's response, because our experience
00:35:28working with the regulators is it's very binary.
00:35:31Either the industry is running way ahead
00:35:35of the current policy and the laws
00:35:37and the policy can't catch up,
00:35:40or the regulators, left to their own devices,
00:35:44are likely to overreact, potentially.
00:35:46Because there's so much nuance in here.
00:35:48And you really have to understand this stuff well
00:35:49to understand where the risk is,
00:35:52where regulation is an important and valuable thing,
00:35:56and where it will be a hindrance.
00:35:57And so we spend a lot of time actually with regulators,
00:35:59helping them understand that question of
00:36:02how do you articulate where there is a risk to society,
00:36:06and make sure, to your point, that they are defining
00:36:10just as much as necessary and no more
00:36:13to be able to mitigate that risk
00:36:14while allowing innovation to happen.
00:36:17So there's a lot of, in our world obviously,
00:36:20well, across the entire industry,
00:36:23as I'm sure you're all aware of,
00:36:24there's a lot of debate and discussion around
00:36:26where does the data come from,
00:36:27to the point somebody made about how that affects
00:36:30the quality of the output and the safety of the output,
00:36:32about copyright, IP law, around who owns rights to data
00:36:36and what the impact of that is.
00:36:40The laws are woefully behind there.
00:36:42So yeah, we invest a lot of time educating.
00:36:48Great, yes, Siddharth.
00:36:49I can make one comment.
00:36:50I mean, we're obviously much smaller,
00:36:53but we're also, we have to comply
00:36:55with regulatory requirements, for example, GDPR or PII.
00:36:59And a couple ways we do it.
00:37:01One is that we, for example, we're using Azure OpenAI
00:37:04for some of our generative AI use cases,
00:37:06and we inherit their enterprise-grade capabilities.
00:37:10So we're only deploying our models
00:37:12in our tenancy within Azure.
00:37:15And for example, BAA, we get BAA inherited
00:37:18because that's being delivered by Azure OpenAI to us
00:37:21and across all their data centers.
00:37:23So that's one way.
00:37:24The other thing that we're doing is
00:37:26the data that goes into the models,
00:37:28we're actually using AWS Comprehend
00:37:30to be able to do PII masking on the data
00:37:33before the data gets sent to the model.
00:37:35The model comes back with the response,
00:37:37and then we blend in the PII data back in
00:37:39so that the PII data is not going into the model.
00:37:42So those are some of the surgical ways that we're using
00:37:45to try and meet those regulatory and compliance requirements.
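
A minimal sketch of that mask-call-restore flow follows. Amazon Comprehend's `detect_pii_entities` is a real API, but the placeholder tokens, the `call_llm` stand-in, and the omitted error handling make this an illustration rather than Freshworks' production code:

```python
# Sketch only: placeholder components, no error handling or batching.
import boto3

comprehend = boto3.client("comprehend")  # assumes AWS credentials and region

def mask_pii(text):
    """Replace detected PII spans with tokens; return masked text and mapping."""
    entities = comprehend.detect_pii_entities(Text=text, LanguageCode="en")["Entities"]
    mapping, masked = {}, text
    # Substitute right-to-left so earlier character offsets stay valid.
    for i, e in enumerate(sorted(entities, key=lambda x: x["BeginOffset"], reverse=True)):
        token = f"<PII_{i}>"
        mapping[token] = text[e["BeginOffset"]:e["EndOffset"]]
        masked = masked[:e["BeginOffset"]] + token + masked[e["EndOffset"]:]
    return masked, mapping

def unmask(text, mapping):
    """Blend the original PII back into the model's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

# Usage, assuming a real call_llm client:
#   masked, mapping = mask_pii("Reset the password for jane.doe@example.com.")
#   final = unmask(call_llm(masked), mapping)  # PII never reaches the model
```
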
00:37:50Great. Other questions from the room?
00:37:52We've got quite a few here.
00:37:53Let's go here first.
00:37:55Excuse me.
00:37:57Oh, you were first.
00:37:59Oh, you were first.
00:38:00I didn't see you out of the corner of my eye, but that's fine.
00:38:02I'll come to you right next.
00:38:04All right. Thank you.
00:38:08Okay. Yeah.
00:38:11I'm Faye Wattleton,
00:38:13and I'm a co-founder and a director of EeroQ, a quantum hardware company.
00:38:18So I'm sort of interested.
00:38:20It's been an intriguing two days to attend all the sessions
00:38:25to which I was assigned,
00:38:27and the word quantum has not appeared,
00:38:31let alone discussed.
00:38:32So I'd be very interested in the panel's perspective
00:38:37on this revolution.
00:38:38We speak of AI almost as though it's the end of revolutions,
00:38:43and we're just going to make it just better when, in fact,
00:38:46there is one that is on the frontier and on the horizon
00:38:51that will redefine data collection
00:38:55and information processing in a way that we will all contend
00:39:00with and will change our lives as we know it.
00:39:03And not the least of which are the security issues
00:39:07around medical devices and the like.
00:39:12And I'm just kind of intrigued, one, that
00:39:16it has not come up in the discussion
00:39:17because it is relevant to AI,
00:39:20and AI quantum algorithms are being written.
00:39:24But the security issues are phenomenal in terms
00:39:28of breaking through current encryptions.
00:39:30So I'd just be interested in some of your thoughts on that.
00:39:34Yeah. Ken, since it was directed to you a little bit,
00:39:36why don't we go to you first?
00:39:37No, I directed it to everybody.
00:39:39Yeah. Well, we'll go down the line and see.
00:39:40I'll start.
00:39:41I'm happy to start.
00:39:43Ken, I kind of was looking at you because I come out of this
00:39:45from a medical professional point of view,
00:39:47so I sort of understand some of those simulation issues
00:39:50and what these tools bring.
00:39:53Sure. I love your question because it highlights the fact
00:39:56that AI and generative AI is not the only innovation that's
00:40:01driving the future.
00:40:03I spent a day and a half in upstate New York.
00:40:06You probably know where I was.
00:40:08I was at IBM meeting with Dario Gil and his team,
00:40:12and I got a full immersion in the future of quantum.
00:40:15As you know, they're all in, in quantum,
00:40:18and they're investing seriously in building
00:40:20out quantum computing capabilities.
00:40:22And I walked away from that experience
00:40:26with a newfound appreciation of that innovation.
00:40:32What I also walked away with was an appreciation
00:40:35that generative AI revolution is now.
00:40:39Quantum revolution as it will impact our devices and therapies
00:40:44and patients is next.
00:40:47Is now.
00:40:49It's, well, it's not now.
00:40:50That's my opinion.
00:40:51Ken thinks it's not now.
00:40:53You're going to disagree on it.
00:40:54Yes. Yes.
00:40:56No, I'm a co-founder of a quantum developer.
00:40:57We're developing hardware.
00:40:59And it is now; already the simulation
00:41:03machines are working.
00:41:05Well, that may be true, but my experience and my observation,
00:41:09at least of my own company, you know,
00:41:11we're the largest medical technology company
00:41:13in the world, is we don't use quantum technology for anything.
00:41:18That doesn't mean we shouldn't be paying attention to it,
00:41:20and we are, because material discovery is really important.
00:41:24Because we put materials in people's bodies, and this has
00:41:27to be biocompatible, and there are some materials
00:41:30that we no longer will be able to use in the future
00:41:32because they're materials of concern, and so we need
00:41:35to engineer new materials that have properties that we want
00:41:39that are good for the environment
00:41:41and also biocompatible.
00:41:42So I think quantum's going to play a huge role there.
00:41:44So we're like learning, and we're digging in,
00:41:47and we're getting smart about it.
00:41:49And I think it's important to pay attention to that,
00:41:52as well as other innovations and other technologies
00:41:55like battery technology innovations.
00:41:57We make a cardiac monitor that's the size of a vitamin.
00:42:01It's about that big, and it gets implanted
00:42:04with a little tiny incision right above your heart,
00:42:06and it will detect atrial fibrillation for seven years
00:42:10on one battery charge.
00:42:12That's because of the innovations happening
00:42:14in material chemistry and battery chemistry.
00:42:17And the only way that it does that job for seven years
00:42:20and without false positives is that
00:42:23the signal coming off
00:42:24of that vitamin-sized implantable cardiac monitor,
00:42:27which is protecting you from getting a-fib
00:42:31or signaling your doctor that you have a-fib
00:42:33so you can actually save your life,
00:42:36gets processed through an AI-trained algorithm
00:42:41for rejecting false positives
00:42:42so that the doctor can trust the signal and respond.
00:42:46That's what I mean when I say AI is now.
00:42:49And quantum algorithms are being developed
00:42:52for battery development also.
00:42:54I want to go down the row.
00:42:55Does anyone else have a comment on quantum
00:42:58I am going to move on to another question,
00:42:59but does anyone first have something else to say
00:43:02on quantum on this panel right now?
00:43:05No.
00:43:05All right, I'm going to go on.
00:43:06First, I've got this gentleman here and then over here,
00:43:09and we'll take that in that order.
00:43:12Great.
00:43:12Awesome.
00:43:13Bhavin Shah, CEO and founder of a company
00:43:15called Moveworks.
00:43:15We're an enterprise-wide co-pilot
00:43:18that provides employee support.
00:43:20So my question to the panel is in 2023,
00:43:22it was characterized as a lot of demos and experiments.
00:43:262024, people are saying it's about deployments
00:43:29and showing results.
00:43:31Got a two-part question.
00:43:32Curious what you are doing today
00:43:34in terms of your budget allocation
00:43:36towards gen AI investments,
00:43:38both for internal use, for your products.
00:43:41What does that look like?
00:43:42And then when you sit down with your CEOs
00:43:45at the end of this year and next year,
00:43:47what does good look like?
00:43:48Like how do you exceed expectations
00:43:51based on the expectations of your organizations
00:43:54and your use of AI across these different use cases?
00:43:58All right, we're going to go to Subha first
00:44:00to answer this one.
00:44:01So there's a couple of things we're doing in 2024.
00:44:06So first thing, in terms of investments,
00:44:09our approach is multi-pronged.
00:44:12One is internal investments
00:44:14in terms of developing orchestration,
00:44:16because we are in the business of orchestration,
00:44:18but we are platformizing our service-based orchestration
00:44:23so that we are doing what I just talked about,
00:44:26multi-modal orchestration for the right kind of output,
00:44:29for the right kind of use cases.
00:44:31The second thing is training of all our employees,
00:44:34because think about this as,
00:44:37as you have a better understanding
00:44:40of how to use outcomes from LLMs
00:44:44to create the right kinds of inputs
00:44:47for the next process, stage, iteration, system,
00:44:51whatever it is,
00:44:53the skill sets of everyone in the stack
00:44:56will evolve and change.
00:44:58So that's the second area of investment for us,
00:45:01retraining and training, right?
00:45:03That's the second one.
00:45:04The third part of this is around
00:45:07understanding the limitations of our own enterprise,
00:45:10and we have a very robust Wipro Ventures arm,
00:45:13as well as what we call a startup accelerator,
00:45:16which both are part of our portfolio,
00:45:19which looks at seed, pre-seed startups,
00:45:23because a lot of startups are solving for different problems
00:45:27across that new emerging AI stack,
00:45:30starting from chips design to middleware,
00:45:33to serving, to applications,
00:45:36to prompt engineering, the whole breadth.
00:45:39And we are looking at those investments,
00:45:42both from a venture, Wipro Ventures, perspective,
00:45:44as well as startup accelerators perspective
00:45:47to identify early talent
00:45:49and to make sure that we are not missing something
00:45:51that we should not be missing.
00:45:53And the last piece around
00:45:56what is the end of the year conversation with my CEO,
00:46:00it will all come down to ROI.
00:46:03And this is where most companies
00:46:06will either succeed or fail,
00:46:08which is the ROI and investment like such as these,
00:46:11which is, I still will say generative AI is emerging tech,
00:46:14although AI, the traditional AI
00:46:16has been proven for a very long time,
00:46:19is more about
00:46:21what's the ROI in terms of internal metrics:
00:46:25metrics that talk about productivity,
00:46:27not revenue metrics;
00:46:30those are external metrics that'll come later on,
00:46:32whether it's productivity, efficacy,
00:46:34time to value, time to insights,
00:46:36what are those metrics?
00:46:36And those metrics are going to be very, very critical
00:46:39in the beginning of the journey.
00:46:41And I think we'll see what happens
00:46:45at the end of 2025 with my conversation with my CEO,
00:46:47but that's our take at least.
00:46:49Great. Siddharth, I mean, do you have a similar take?
00:46:52No, actually, my take is a little different.
00:46:54Good, because that's more interesting.
00:46:56Which is that the conversation with the CEO
00:46:58is actually twofold.
00:47:00Do you want to die now
00:47:01or do you want to die later in a sense, right?
00:47:03Meaning that if you do not invest
00:47:06in staying up with the competition
00:47:08and trying to run as fast as you can on AI,
00:47:11you will get cannibalized, right?
00:47:12It's a slow death.
00:47:13And the future discussion is where is the puck going
00:47:16and how do we get there?
00:47:17Which means we've got to invest
00:47:19in thinking about strategy a little further out, right?
00:47:22For example, workflows today.
00:47:25The way people do workflows today
00:47:26is they use local tools and build workflows, right?
00:47:29But the workflows of the future
00:47:30are going to be where someone talks and says,
00:47:33or there's an operating document that is read,
00:47:35a workflow gets created
00:47:37and automatically AI agents can get spawned
00:47:40and those AI agents orchestrate work across the workflow.
00:47:43And not only that, but the workflows are watched
00:47:46and what is happening actually in the workflow
00:47:48enables us to tune that workflow.
00:47:50So discussion with CEO is, yes, I'll give you ARR,
00:47:53I'll give you these metrics that she was talking about,
00:47:55which is really important, I'll give that to you today,
00:47:57but we also need investment for the future.
00:47:59In terms of our budget,
00:48:00I think we're spending about 20% or a little bit more
00:48:03of our engineering and product spend on the AI,
00:48:08which is both discriminative AI and generative AI.
00:48:11Great.
00:48:13Eli, I'll let you get in.
00:48:14One thing I'll add,
00:48:15actually I'm going to take this opportunity
00:48:16to amend the statement I made earlier,
00:48:18because if it gets out of this room
00:48:19that Eli said he doesn't care about cost,
00:48:21I will have a hundred budget requests
00:48:22in my email tomorrow morning.
00:48:25I think what Subha said is exactly right,
00:48:27which is it's about ROI,
00:48:29it's about what are the metrics that you're moving.
00:48:32The point I made about cost was,
00:48:34I think we are at the point now
00:48:36where if you can show a return
00:48:39that isn't justified today because the cost of the AI is too big,
00:48:43that's what's going to change, right?
00:48:44Show that there are effective solutions
00:48:46that will improve whatever the metrics are
00:48:48that matter to your business,
00:48:49and that's where it comes back to that idea
00:48:51that AI is not a problem, right?
00:48:53AI is a potential solution,
00:48:55and so we have a center of excellence internally
00:48:57whose job is to, you know,
00:48:59be the people who really understand
00:49:00what the state of the art with AI is,
00:49:02and they're out there working
00:49:03with each of our functional leaders
00:49:05and product owners and business owners
00:49:07to help them understand, you know,
00:49:09where they can use it to drive their metrics,
00:49:10but the metrics they have to drive is still the same,
00:49:13and so it's not an allocation towards,
00:49:15you have to put AI in here,
00:49:16it's you have to move your metric,
00:49:17the same as you did last year,
00:49:19the same as you did the year before.
00:49:20Be aware of what the state of the art is
00:49:22and how to move that,
00:49:23and start to experiment,
00:49:24and if you find something that is very promising,
00:49:27but the economics don't work out now,
00:49:29they might be better in six months.
00:49:31Great.
00:49:31I wanna, oh, Fiona, let me, yeah,
00:49:33very quickly, and then I wanna get to the next question.
00:49:35Along with all of that,
00:49:37you know, I think Siddharth mentioned
00:49:38a bit of a red-teaming exercise
00:49:40in terms of thinking about how might your business
00:49:42be disrupted by generative AI.
00:49:43I think spend some time on that.
00:49:45The other thing I would say also is,
00:49:47you know, a lot of the AI models, et cetera,
00:49:49if you look at all the cloud providers,
00:49:50they're providing that sort of as part of their,
00:49:52you know, GCP or AWS, whatever, right?
00:49:55So one of the other things you can do
00:49:56is if you can optimize the rest of your cloud spend,
00:49:59you can then use that to spend
00:50:01towards whatever your minimum commit is.
00:50:03So that's another, you know, way to think about
00:50:05how you might be able to fund the AI, you know, work,
00:50:09by just being more optimized
00:50:11around everything else that you're doing.
00:50:12Great, I wanna get to the question
00:50:13we had over here on the side.
00:50:15Yeah, Marcus McCammon,
00:50:17President and Chief Executive of Karma Automotive.
00:50:19So we're a luxury,
00:50:21clean energy vehicle manufacturer
00:50:23in Southern California.
00:50:24So it's interesting for me to hear the conversations.
00:50:27In my space, in the auto industry,
00:50:30there's a lot of technical debt, okay?
00:50:32And the move to AI is cumbersome
00:50:35for the industry because, frankly,
00:50:38we make more profits on conventional business,
00:50:40but how we value the investment in the new technology
00:50:44and how do we put KPIs around it
00:50:46such that we know that we're getting the return
00:50:48is literally mission critical.
00:50:51So for our business at Karma,
00:50:54we actually use real-time data from our vehicles
00:50:57in an attempt to shorten our development cycle.
00:50:59What are the use cases that customers are actually facing
00:51:01so that we can take out big chunks
00:51:03of product development time and exercises
00:51:06that are not value-add?
00:51:08But the question, again, two-part,
00:51:10one, how do you draw the line that says
00:51:15I've got enough data to trust the AI at this point?
00:51:17And in my case, it's more about machine learning
00:51:19and deep learning.
00:51:20And then the other side is then how do we translate that?
00:51:24And I think it dovetails to the last conversation.
00:51:26You said, if you don't innovate, you're gonna die, right?
00:51:29How do you translate that to a value metric
00:51:32that you can put on the books today
00:51:34that counteracts a delta between,
00:51:37take Ford Motor Company, right?
00:51:38They lost seven billion on EVs
00:51:41and all their SDV software-defined vehicles last year
00:51:45where the company made 15 billion on conventional business.
00:51:49So I'd love to hear perspective from the team on that.
00:51:52Yeah, that's fascinating.
00:51:53That's sort of a classic innovator's dilemma situation.
00:51:55I'm curious how people wanna address that.
00:51:57But on the first question, maybe Ken,
00:51:58because I know there are maybe some similarities.
00:52:00You're using a lot of traditional machine learning.
00:52:02You're gathering a lot of data from the devices.
00:52:04But how do you know when you have enough data
00:52:06to sort of move to the next iteration of the model?
00:52:08Yeah, and I appreciate the question, Marcus,
00:52:10because I use a lot of car analogies at work
00:52:15because I used to work for a car company.
00:52:18But the other reason I use a lot of car analogies
00:52:20is I think we have the same problem,
00:52:22that medical devices have a ton of tech debt
00:52:25if you think about the modern platforms
00:52:27for data-driven AI decision models.
00:52:31We answer that question of when do you know
00:52:34that the AI is worth it
00:52:36by starting at the patient and working backwards.
00:52:40So if it leads to a better patient outcome,
00:52:46then it's worth it, right?
00:52:48And if you can convince yourself
00:52:50that the technology has been validated
00:52:56and proven and run through all the regulatory processes,
00:53:00and it still leads to a better patient outcome, then it's worth it.
00:53:04And when I say that, I mean, for example,
00:53:06in the LINQ II example that I quoted earlier,
00:53:10it led to a better patient outcome
00:53:11because it rejects false positives every time.
00:53:16And so the patient's not running to the doctor unnecessarily.
00:53:21And it's leading to a better physician experience as well.
00:53:26In the case of our surgical robot,
00:53:31we're applying AI to improve the reliability of a robot
00:53:35because that's gonna lead to a better patient outcome
00:53:38and a better physician experience.
00:53:40So just starting with the patient and working backwards
00:53:43and then knowing that it's worth it.
00:53:46So that's how we think about it.
00:53:47Can I add two things on that?
00:53:50So you talk about data and there's two kinds of data
00:53:53in the question of when do you have enough?
00:53:54I think there's the data that you use
00:53:56as the foundation for the AI,
00:53:58that it's being trained on,
00:53:59whether it's RAG or training or fine-tuning, et cetera.
00:54:01And then there's the data that you collect
00:54:03to prove whether it is effective,
00:54:05whether it's accurate, whether it's loosening, et cetera.
00:54:07Obviously, on collecting data:
00:54:09in the session right before lunch,
00:54:10one of the investors talked about the four Ds
00:54:12they look for.
00:54:13The first one was look for a company
00:54:14that has access to data.
00:54:15So if you're gonna be investing in AI,
00:54:17that is the surprising piece.
00:54:19The investment you have to make
00:54:20is how do you get the data?
00:54:21And for a lot of us,
00:54:22even 30 years into the digital revolution,
00:54:27that's still a new muscle we have to build.
00:54:29And then the other part is the data around testing.
00:54:32And that is also for a lot of us,
00:54:34a new muscle we have to build.
00:54:35We've been building all this image, video,
00:54:37creative generation AI,
00:54:38and we had to invest heavily in the validation loop,
00:54:42which there was no way for us to really automate.
00:54:45We can automate collecting of it,
00:54:46but at the end of the day,
00:54:47we had to put it in front of humans
00:54:49who are the experts in whatever the domain is.
00:54:50In your case,
00:54:51it may be experts in making the driving choice; for us,
00:54:54it was experts in making the creative choice.
00:54:58But at the end of the day,
00:54:59we invested in a very repeatable structure
00:55:01that allows us to quickly get results
00:55:03out to a panel of users who are representative for us
00:55:07that can give us two pieces of data.
00:55:09The one is top down.
00:55:10It's the outcomes.
00:55:12It's a creative pro for us looking and going,
00:55:15yeah, that's good or that's bad.
00:55:16At the end of the day,
00:55:17that's success.
00:55:19That's not actionable data for us.
00:55:20That can tell us whether we've succeeded,
00:55:21but we don't know what to do when the answer is it's bad.
00:55:24And so the other set of data that we invest in
00:55:25is the bottoms up data
00:55:27where we identify a set of about 10 different dimensions,
00:55:32sometimes 20, 30,
00:55:33depends on the technology
00:55:34that we actually ask humans to evaluate.
00:55:37How good is it at representing human anatomy?
00:55:39How good is it at color and tone?
00:55:41This is our space.
00:55:42And so we have some metric
00:55:44by which we can actually break it down
00:55:45and give that to our R&D team to say,
00:55:48here's where it's breaking down.
00:55:50Both are important
00:55:51because that breakdown
00:55:52may not actually be a good representation of success.
00:55:56The top one has to give us success,
00:55:57but the top one isn't a good signal
00:56:00into what do we do next.
00:56:01But those two investments of collecting the data
00:56:04and then doing a new kind of validation
00:56:06where you're validating computers against human decisions,
00:56:09those are new muscles you have to build.
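As an editor's illustration of the two feedback signals Eli describes, here is a minimal sketch: a top-down success judgment from expert reviewers, plus bottom-up scores on named dimensions that R&D can act on. The dimension names and the 1-to-5 scale are assumptions for illustration, not Adobe's actual rubric.

```python
# Hypothetical sketch: top-down outcome judgments plus bottom-up dimension scores.
from statistics import mean

reviews = [
    {"overall_good": True,  "anatomy": 4, "color_tone": 5},
    {"overall_good": False, "anatomy": 2, "color_tone": 4},
    {"overall_good": True,  "anatomy": 4, "color_tone": 4},
]

# Top-down: did the expert panel judge the output a success?
success_rate = mean(1.0 if r["overall_good"] else 0.0 for r in reviews)

# Bottom-up: per-dimension averages show *where* quality breaks down.
dimensions = ["anatomy", "color_tone"]
breakdown = {d: mean(r[d] for r in reviews) for d in dimensions}

print(f"success rate: {success_rate:.0%}")   # tells you whether you succeeded
for d, score in sorted(breakdown.items(), key=lambda kv: kv[1]):
    print(f"{d}: {score:.1f}/5")             # tells R&D what to fix first
```

The design point, as Eli notes, is that only the top-down signal defines success, while only the bottom-up signal is actionable.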
00:56:12Great.
00:56:13I should add to that, augment my earlier comment.
00:56:15Quickly.
00:56:15Eli explained very nicely
00:56:18the exact process
00:56:19that you go through in a clinical trial;
00:56:22that's exactly how clinical trials work.
00:56:24You collect data,
00:56:25you put it in front of humans,
00:56:27and you validate the model
00:56:29against the data you built it on.
00:56:30So, Siddharth, I wanna go to you quickly.
00:56:31Can you answer like the second part of the question?
00:56:33Because that references what you were saying earlier.
00:56:35It's sort of that innovator's dilemma.
00:56:36You know, if the answer is you're gonna die later,
00:56:39but you're not gonna die today,
00:56:41how do you convince the CEO to invest in the technology?
00:56:45Well, I think it comes down to the ROI for the customer.
00:56:48You know, we just did a research report.
00:56:507,000 people were surveyed
00:56:52where productivity increase was one thing.
00:56:55The satisfaction of the agents
00:56:57in the job that they're doing,
00:56:58you know, that was another thing.
00:57:00And then customer satisfaction
00:57:01at the end of the day was another.
00:57:02So we're looking at those vectors
00:57:04and saying the capabilities that we're building,
00:57:07are they affecting those kinds of metrics, right?
00:57:09And we have a lot of product analytics
00:57:11that is watching both adoption of that capability,
00:57:14but also impact.
00:57:15For example, what's the deflection rate
00:57:17that we're giving?
00:57:18And is it a deflection as in,
00:57:19it just got sent to an agent
00:57:20or did it actually resolve the problem for the customer?
00:57:23So that's the kind of stuff we're looking at.
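A minimal sketch of the metric distinction Siddharth draws, assuming hypothetical ticket-event fields: a naive deflection rate counts any bot-handled ticket that avoided a human agent, while a true resolution rate also requires that the customer confirmed the fix.

```python
# Hypothetical sketch: naive deflection vs. true resolution.
from statistics import mean

tickets = [
    {"handled_by_bot": True,  "escalated_to_agent": False, "customer_confirmed_resolved": True},
    {"handled_by_bot": True,  "escalated_to_agent": True,  "customer_confirmed_resolved": True},
    {"handled_by_bot": True,  "escalated_to_agent": False, "customer_confirmed_resolved": False},
    {"handled_by_bot": False, "escalated_to_agent": True,  "customer_confirmed_resolved": True},
]

bot_touched = [t for t in tickets if t["handled_by_bot"]]

# Naive deflection: the bot answered and no agent was looped in.
naive_deflection = mean(0.0 if t["escalated_to_agent"] else 1.0 for t in bot_touched)

# True resolution: no agent AND the customer confirmed the fix.
true_resolution = mean(
    1.0 if (not t["escalated_to_agent"] and t["customer_confirmed_resolved"]) else 0.0
    for t in bot_touched
)

print(f"naive deflection rate: {naive_deflection:.0%}")
print(f"true resolution rate:  {true_resolution:.0%}")
```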
00:57:24Great.
00:57:25I think we're gonna have time for one,
00:57:26maybe two more questions.
00:57:28Who else has a question?
00:57:29And if not, I've got right here,
00:57:30this gentleman's got a question, great.
00:57:34I'm Paulo Rosado, CEO of OutSystems,
00:57:36a low-code development platform.
00:57:38And one of the questions I wanted to ask is,
00:57:42there are clearly models and processes
00:57:44of iterating on these AI solutions
00:57:47that follow more or less the process of software development
00:57:49where, as Eli was describing,
00:57:54you have a cycle that's fairly onerous
00:57:56in the sense that you collect all the data,
00:57:58you put the experts and whatever,
00:58:00then it moves into R&D,
00:58:02the model is retrained
00:58:04and then it gets a new version and it gets released.
00:58:08However, one of the things we see in productivity,
00:58:12for instance, in the scope of productivity
00:58:14that includes Siddhartha's type of use cases,
00:58:19workflows and the likes,
00:58:21it seems that the process of reinforcement,
00:58:23either the learning or the updating
00:58:25moves into the runtime of the process,
00:58:29fundamentally disrupting the typical cycle of software
00:58:33that we are used to.
00:58:35And so my question is,
00:58:40what is it that you've seen as strategies
00:58:44to embed the feedback cycle of improving the agents
00:58:49into the running of the agent itself
00:58:51and the solutions that it empowers?
00:58:54So this is more like sort of online learning
00:58:56or somehow you're updating the model
00:58:58while it's running; it's very interesting.
00:59:00Subha, you have something to say on this.
00:59:01Yeah, I think, Paulo, you and I talked quite extensively
00:59:05yesterday about some topics similar to this.
00:59:07There are a couple of things.
00:59:09I think one is on the LLM side,
00:59:13it's very different than the traditional way of serving.
00:59:18And what we've seen is, by fine-tuning and RAG,
00:59:23we're able to circumvent some of the traditional inputs
00:59:28back into the model fine tuning.
00:59:31That's one.
00:59:32The second thing we've seen is, in some cases
00:59:37with open-source and small language models,
00:59:39that retraining them on
00:59:44our contextual data has given us better outputs,
00:59:47and we are able to do that on a somewhat regular basis
00:59:51without necessarily incurring high levels of cost.
00:59:55So those are the two techniques we have used.
00:59:57But I would still say
00:59:59that we are circumventing the traditional retraining loop,
01:00:03and we've not had a reason to go back to it,
01:00:06because the new models are constantly being updated,
01:00:10every six months, every few weeks now.
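For readers unfamiliar with the technique, here is a minimal sketch of the RAG idea Subha refers to: rather than retraining the model when knowledge changes, you update a document store and retrieve fresh context into the prompt at query time. The keyword-overlap retrieval and sample documents are simplifications invented for illustration; a production system would use embeddings and a vector store.

```python
# Hypothetical sketch: retrieval-augmented generation with naive keyword retrieval.
def score(query: str, doc: str) -> int:
    # Overlap of query words and document words; a stand-in for embedding similarity.
    return len(set(query.lower().split()) & set(doc.lower().split()))

docs = [
    "RFP response template for cloud migration engagements",
    "Contract negotiation playbook, updated this quarter",
    "Travel expense policy for client visits",
]

def retrieve(query: str, k: int = 1) -> list:
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Updating `docs` changes the model's effective knowledge with no retraining.
print(build_prompt("How should we respond to this cloud migration RFP?"))
```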
01:00:12Fiona, I wanna get you in here.
01:00:14Do you have something to say on this?
01:00:15Because I imagine at Wayfair,
01:00:16you're getting a lot of data in all the time
01:00:17about what customers are doing on the site,
01:00:19what are they searching for
01:00:20and how are you integrating that to update the models?
01:00:23Yeah, no, I think that's a good question.
01:00:24And I think it depends on the use case
01:00:26where it makes sense for it to be adaptive
01:00:27and where it's your own customer proprietary data
01:00:30that's really gonna be changing.
01:00:33But then there are other use cases that we found
01:00:34that have been interesting.
01:00:35For example, one of the things we have to do
01:00:37is we have to derive whatever information we can
01:00:40on the products that we get.
01:00:41And so like color and style and so forth.
01:00:44We found that just using the general models
01:00:46out of the box has been good enough, right?
01:00:48It's actually doing better than some of the stuff
01:00:50that we trained up and spent a lot of time on,
01:00:53because the cost is in training, et cetera.
01:00:55So it's almost like trying to figure out
01:00:57where you probably don't need to update that much,
01:00:59because color probably doesn't change so much.
01:01:01With style, sometimes it's a little bit different;
01:01:04there are new styles that are introduced all the time.
01:01:05So maybe you look at that more,
01:01:06but it's still not at a point
01:01:08where I need to adapt it all the time.
01:01:10The more adaptive stuff is what we've done
01:01:12in terms of like understanding our customer data.
01:01:14And then we keep looking at it as you are on our site
01:01:16and you're doing different things.
01:01:18We have models that will look at that
01:01:19and potentially figure out then,
01:01:21what should I put in front of you?
01:01:22Should I give you a financing offer
01:01:23or a different kind of offer?
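A minimal sketch of the attribute-extraction pattern Fiona describes, using a general model out of the box: prompt the model for structured attributes and parse the result. Here call_general_model is a hypothetical stand-in for whichever hosted model API you use; only the prompt-and-parse shape is the point, not any particular vendor.

```python
# Hypothetical sketch: derive product attributes with an off-the-shelf model.
import json


def call_general_model(prompt: str) -> str:
    # Stub: a real implementation would call a hosted vision/language model.
    return json.dumps({"color": "walnut brown", "style": "mid-century modern"})


def extract_attributes(product_description: str) -> dict:
    prompt = (
        "Return JSON with keys 'color' and 'style' for this product:\n"
        f"{product_description}"
    )
    return json.loads(call_general_model(prompt))


attrs = extract_attributes("Low-profile wooden coffee table with tapered legs")
print(attrs)  # {'color': 'walnut brown', 'style': 'mid-century modern'}
```

The trade-off Fiona names follows directly: for slow-moving attributes like color, the general model needs no retraining, so the training cost disappears.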
01:01:26Can I just make an extra comment?
01:01:27Yeah, very quickly.
01:01:29What we face is that as we're building this,
01:01:32we're building these full solutions
01:01:34with agents in the middle,
01:01:36because it's a low-code platform,
01:01:37so it's fast to iterate on that.
01:01:39We find that about 80% of the agents
01:01:42or solutions that we build have a playground back office,
01:01:46which was something that never appeared in our use cases.
01:01:49And so as people iterate, they say,
01:01:51oh, it would be cool if we had a narrow playground.
01:01:54So instead of a back office
01:01:56where people update data, like in the old days,
01:01:59now they have a playground where, guided,
01:02:03they can actually tune the prompt and whatever.
01:02:04And we get a lot of knowledge
01:02:07by capturing the usage of the system,
01:02:11something that we can then more permanently
01:02:13fold into the solution itself
01:02:16through a more normal cycle.
01:02:17So it's almost as if we now have two cycles:
01:02:20one goes very fast and then there is the other.
01:02:24And we end up having specialized playgrounds
01:02:26for the multiple agents that we build.
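A minimal sketch of the playground pattern Paulo describes, with all structures hypothetical: users tune a prompt in a fast loop, every attempt and its rating is captured, and the best candidate is later promoted into the shipped solution through the normal release cycle.

```python
# Hypothetical sketch: a playground back office with fast and slow feedback loops.
from dataclasses import dataclass, field


@dataclass
class Playground:
    prompt: str
    log: list = field(default_factory=list)  # captured usage knowledge

    def try_prompt(self, candidate: str, rating: int) -> None:
        # Fast loop: a user tunes the prompt and rates the result (1-5).
        self.log.append((candidate, rating))

    def promote_best(self) -> str:
        # Slow loop: fold the best candidate back into the shipped solution.
        if self.log:
            self.prompt = max(self.log, key=lambda entry: entry[1])[0]
        return self.prompt


pg = Playground(prompt="Summarize the ticket.")
pg.try_prompt("Summarize the ticket in two sentences.", rating=4)
pg.try_prompt("Summarize the ticket and list next steps.", rating=5)
print(pg.promote_best())  # promoted through the normal release cycle
```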
01:02:28That's great.
01:02:29That's fascinating.
01:02:30I'm sure you could talk all day,
01:02:31but I'm afraid that's all the time we have.
01:02:33I want to thank our discussion leaders
01:02:35for being with us today.
01:02:35And I want to thank all of you for joining us.
01:02:38There's a reminder that this afternoon's main stage program
01:02:40will begin at 2:45 in the tent.
01:02:43See you all there.
01:02:44Thank you so much.
01:02:45Great.
