Edward J. “EJ” Achtner, Head, Office of Applied Artificial Intelligence, London, HSBC
Alexandra Mousavizadeh, Co-founder and CEO, Evident Insights
Brian Mullins, Chief Executive Officer, Mind Foundry
Moderator: Massimo Marioni, Europe News Editor, FORTUNE

Category: Tech
Transcript
00:00 Okay, hello everyone, thanks for staying with us.
00:03 So, financial services and generative AI.
00:08 Banks have a huge opportunity to significantly reduce
00:12 the time that it takes to perform banking operations,
00:16 financial analyst tasks, which hopefully then empowers
00:20 employees by increasing their productivity.
00:23 Now, Alexandra, I know your company's done a lot of research
00:27 in this space, tell us who's leveraging generative AI well,
00:32 and what can we learn from them?
00:35 You mean who's leading, who's lagging?
00:36 I'll--
00:37 Not who's lagging, just who's leading.
00:38 No, no, I mean we do. The index that we create,
00:42 and I'll get back to what that is, looks at not only generative AI
00:45 but sort of the whole AI spectrum.
00:48 So yes, it's now 14 months ago that
00:51 we released the first index measuring the biggest banks
00:55 in North America and Europe on their AI capabilities,
00:59 or their AI maturity.
01:00 And it was interesting to see who leads and why,
01:05 and essentially what we do is we do a full 360
01:11 of the entirety of the bank's AI footprint.
01:14 And we look at elements like the AI talent stack,
01:18 we look at innovation, we look at leadership,
01:21 we look at operating model, and then we also look
01:23 at the transparency of responsible AI.
01:26 And putting this index together,
01:31 it was interesting to see that the top 10,
01:34 or if you look at the top 20, were dominated
01:37 by North American banks.
01:39 There are some reasons I think that could be;
01:44 we can get into that in a minute.
01:46 The top bank was JP Morgan, followed by Capital One
01:51 and then RBC. They're very different banks,
01:53 but what they do have in common
01:57 is having been out of the gate early and very forceful
01:59 about being very clear on what their AI vision is.
02:04 And then everything that followed was pushing very hard
02:11 to hire AI talent, setting up research labs,
02:16 really thinking about their innovation strategy
02:19 and structure, research and patents and partnerships
02:23 and vendors and so on.
02:24 And then looking at the operating model,
02:26 all with a view to how we can get the time
02:28 from ideation to production down.
02:30 So that's what we created, that's what we established
02:34 in January last year, and then we did the second iteration
02:37 of the index in November, and it's on an annual cadence.
02:40 So it'll be very interesting to see if anything shuffles
02:43 between now and the next update, which is in October.
02:47 - And you mentioned JP Morgan got ahead almost
02:49 by being early adopters and getting out the gate quickly.
02:52 Obviously, we can learn from that going forward,
02:55 but you can't be an early adopter anymore.
02:57 What can financial services learn from what those top banks
03:02 have done apart from getting to it early?
03:04 What are they doing really well that can be replicated?
03:07 - Yeah, it's a question around sort of can one catch up now?
03:12 Or, I mean, one thing I would say is that the banks
03:15 that are leading are really doubling down.
03:17 So there is a bit of a gap that is growing
03:20 between sort of the leaders in the index
03:22 and those further behind, because there is such an advantage,
03:26 there is an advantage in being first mover,
03:28 because you've established a reputation
03:31 to draw in AI talent.
03:33 But that said, certainly any bank that decides
03:37 to really focus in on this now, looking at
03:40 what the leading banks are doing
03:42 and taking the best approach for that specific bank,
03:47 can absolutely catch up:
03:50 being very clear about the vision,
03:53 articulating it both internally and externally
03:57 to make sure that AI is the absolute most important thing
04:01 for any leader of any line of business,
04:04 and putting in place what is important
04:08 for talent acquisition and retention.
04:11 So what is it that makes the bank attractive?
04:14 AI talent has a lot of other places to go than banks.
04:17 So one needs to make the bank
04:19 a really attractive place to work.
04:21 So that is: can you show your research?
04:24 Is there access to be active in
04:27 and share with open source communities?
04:30 Are there really interesting problems to work on?
04:32 Is it a priority of the bank?
04:34 Are skills and training ongoing in the bank?
04:37 And so on.
04:42 Those are some of the many ways that banks can focus on it now
04:46 and catch up.
04:47 - Now we've got EJ here from HSBC.
04:49 They're very aggressively making moves in the AI space.
04:52 I think I've heard you say that HSBC
04:54 has nearly 1,000 applications of AI within the bank.
04:57 Could you talk us through them all, please?
04:59 - No, not all.
05:00 Sure, here we go.
05:04 - What are your favorites, EJ? What are your favorites?
05:04 - So that is accurate.
05:06 We do have approximately 1,000 applications
05:09 across HSBC's operations that use artificial intelligence.
05:14 The oldest go back nearly a decade,
05:16 to some of our original machine learning models.
05:19 And as you might imagine,
05:21 across our 62-market operational footprint,
05:24 all businesses, all functions are heavily invested
05:29 in really making sure that we're driving the leading edge
05:32 of responsible and ethical AI.
05:34 With respect to generative,
05:36 I think it's important to say a few things.
05:38 So as you might imagine, we're testing and learning
05:41 a range of generative AI use cases that are likely to scale.
05:46 And I think it's also important to distinguish
05:48 between proof of concepts, pilots, and production.
05:53 And what I see out there is a lot of great energy,
05:56 a lot of great momentum,
05:57 but I think what also needs to be stated,
05:59 particularly in financial services,
06:01 and I know there are many different sectors
06:04 represented here today,
06:06 but when it comes to banking, financial services,
06:08 perhaps healthcare,
06:10 we do have a differentiated standard of care.
06:13 We do have a differentiated standard
06:15 of regulatory compliance.
06:16 And those are good things.
06:18 We should embrace that.
06:19 And so for us, the focus is on that fine balance
06:24 between bridging from proof of concept into production.
06:28 That's going to take time.
06:29 And as you might imagine,
06:31 there are some lower-risk types of use cases
06:33 related to knowledge management,
06:36 all the way up to and including other types of use cases
06:38 where, candidly, even if it were in our risk appetite,
06:43 it's our impression that in some respects
06:46 the technology, the tooling, is not yet mature enough
06:49 for production-grade applications.
06:52 So we're really, I think, proceeding cautiously,
06:54 but at the same time, we are optimistic about this
06:57 across all businesses and functions.
07:00 - Now, Brian, with anything fun and potentially profitable,
07:03 there comes risk, unfortunately.
07:06 What specific risks come with deploying generative AI
07:10 in finance like algorithmic trading or fraud detection?
07:14 What can you tell us about risk and how we can manage that?
07:17 - Yeah, I think that all the risks come with it.
07:20 I think it's really important to understand
07:25 what the models can and can't do
07:28 and whether or not they're fit for purpose
07:33 is the highest-risk decision to make.
07:38 We see generative models, which in and of itself
07:42 is a little bit of a misnomer because all AI,
07:44 all machine learning models kind of generate an answer,
07:47 but in essence, I think what we're talking about
07:50 is foundation models that are trained
07:52 on large amounts of data and are predicting tokens.
07:57 And they are statistically encoding knowledge
08:00 that exists in the training data.
08:02 What does that mean?
08:04 You'll get some generalizability,
08:06 but they're not truly generalizable AI.
08:10 They will go off track.
08:13 They will invent answers.
08:14 This isn't a bug, it's a feature.
08:17 It's how they exist.
08:18 They invent these answers.
08:20 Hallucination's not an accident.
08:21 Sometimes we just like the hallucinations.
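To make that concrete in a toy way, here is a minimal sketch (illustrative only, not any panelist's or bank's system) of what "predicting tokens from training statistics" means: a tiny bigram model built from a made-up three-sentence corpus. Even at this scale, the output is fluent, but the specifics are chosen by sampling over co-occurrence counts rather than by looking up facts, which is the sense in which invented answers are a property of the method rather than a malfunction.

```python
# Toy illustration: a bigram "language model" that predicts the next token
# purely from co-occurrence statistics in its training text, then samples
# from that distribution. The corpus is hypothetical.
import random
from collections import Counter, defaultdict

corpus = (
    "the bank reported strong quarterly earnings . "
    "the bank reported weak quarterly earnings . "
    "the fund reported strong annual returns ."
).split()

# Count which token follows which (the statistical encoding of the training data).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def sample_next(token: str) -> str:
    """Sample the next token in proportion to how often it followed `token`."""
    counts = next_counts[token]
    tokens, weights = zip(*counts.items())
    return random.choices(tokens, weights=weights, k=1)[0]

def generate(start: str, length: int = 8) -> str:
    """Greedy loop of sampled continuations, starting from `start`."""
    out = [start]
    for _ in range(length):
        if out[-1] not in next_counts:
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

# Each run yields a fluent-looking sentence, but whether earnings come out
# "strong" or "weak" is decided by sampling, not by retrieving a fact:
# the invented detail comes from the method itself.
print(generate("the"))
```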
08:24 But if you understand that, then you can see
08:27 that there are some very powerful things
08:28 that you can do with it.
08:29 It's just not gonna solve all your problems.
08:32 And I think that once you know that and understand that,
08:36 you can choose the applications where they'll do the best.
08:40 Or you can combine them with other machine learning methods
08:45 to create a powerful solution that gets the flexibility
08:48 from a user interaction standpoint
08:51 of something like a large language model
08:52 put together with a more deterministic model
08:55 for forecasting to actually plug in the numbers
08:58 that you'll use as part of the ultimate solution.
09:02 So I think we really need to be thinking about it
09:04 not as a silver bullet, but as another arrow
09:07 in the solution quiver.
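A minimal sketch of the hybrid pattern described here, under stated assumptions: a language model supplies the flexible, conversational front end, while a deterministic model supplies the actual numbers. Everything in the example is hypothetical; `parse_request` is a stand-in for whatever constrained LLM call you would actually make, the `monthly_card_spend` series and its history are invented, and the forecaster is just a least-squares trend. The point is the division of labor, not the specific models.

```python
# Hybrid sketch: LLM-style parsing for the interface, deterministic math for the figures.
from dataclasses import dataclass

@dataclass
class ForecastRequest:
    series_name: str
    periods_ahead: int

def parse_request(user_text: str) -> ForecastRequest:
    """Stand-in for a language model that turns free text into a structured request.
    In a real system this would be a schema-validated LLM call; here it is stubbed."""
    return ForecastRequest(series_name="monthly_card_spend", periods_ahead=3)

def linear_forecast(history: list[float], periods_ahead: int) -> list[float]:
    """Deterministic least-squares trend extrapolation: same inputs, same outputs."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    denom = sum((x - x_mean) ** 2 for x in xs)
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history)) / denom
    intercept = y_mean - slope * x_mean
    return [intercept + slope * (n + i) for i in range(periods_ahead)]

# Illustrative numbers only: the LLM stub decides *what* to forecast,
# the deterministic model produces the numbers that go into the answer.
history = [102.0, 107.5, 111.0, 116.2, 121.8]
request = parse_request("How does card spend look over the next quarter?")
print(request.series_name, linear_forecast(history, request.periods_ahead))
```

Keeping the numeric step deterministic means the figures are reproducible and auditable, while the language model is confined to the part where its flexibility helps.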
09:09 And then really understanding that the other silver bullet
09:14 that doesn't exist is the idea of a guardrail.
09:18 And we hear this term thrown around a lot,
09:21 but it's used almost as if
09:24 we're gonna sprinkle some magic guardrail on top,
09:27 and it's gonna suddenly make the thing do what it can't do
09:30 or stop it from doing what we don't want it to do.
09:33 And I don't think we have enough experience with AI
09:38 as an industry to say that guardrails are improving things.
09:42 In fact, it's often the case that when you add complexity
09:45 to a system, you make it brittle, more likely to break.
09:48 And so I think we should be skeptical,
09:51 when we see something that is not performant,
09:55 of the claim that the answer is we could add some guardrails to it
09:57 and make it perform.
09:59 We need to be cautious and really ask the questions
10:02 about suitability at the beginning of a project
10:07 before we decide whether we'll use something
10:10 like a large language model.
10:12 - And do you think it's fair to say
10:12 that one of the main factors involved
10:15 in mitigating that risk is the people,
10:17 the people that are operating the systems?
10:21 Now with that, and EJ, I've seen you talk about this before,
10:25 how can financial institutions ensure
10:27 that their workforce is adequately skilled
10:30 in order to deal with AI?
10:32 Because I've heard a lot of people talk about,
10:34 a lot of very senior people in AI talk about,
10:36 it's not AI that's gonna take your job,
10:39 it's someone who can use AI who's gonna take your job.
10:42 So how can financial institutions, as I say,
10:46 ensure that their workforce is on top of their game
10:49 when it comes to AI and that can mitigate risks
10:51 on behalf of consumers?
10:52 - Sure, so I don't think we talk about this topic enough.
10:55 I'm glad we're discussing it now.
10:57 You can come at this from the direction
11:00 of the highest and best productivity output.
11:04 You can come at this from the direction
11:06 of responsible and ethical use.
11:09 You can come at this from the perspective of the law.
11:12 And you could also come at this from the perspective
11:14 of what's the highest and best way for an individual,
11:18 a teammate, to achieve their personal professional best.
11:20 All roads lead to the same thing,
11:22 and that's making sure you have an engaged,
11:25 educated, and informed workforce
11:27 that's using these products and capabilities
11:30 in a way that best meets all of those criteria.
11:33 And the beauty of that is
11:35 that if you do it the right way,
11:37 you build great products with product market fit,
11:40 you have an engaged and informed workforce
11:42 that differentiates you from a talent perspective,
11:45 you're able to demonstrate to your key regulators
11:47 that you're doing this, and you have delighted customers.
11:49 So no matter, again, what your starting point is,
11:52 it's imperative, whatever sector you're in,
11:56 that you have very thoughtful, detailed plans
11:59 around employee reskilling.
12:01 - Now, I'll go to the audience to take some questions
12:04 if there are any.
12:05 I just wanna get some thoughts from the panel
12:06 on what emerging Gen AI trends business leaders can monitor
12:11 over the next decade, say, and what steps are necessary
12:15 to prepare for these types of developments
12:17 that we're gonna see?
12:19 Let's start with Brian, who looks the most worried.
12:21 (audience laughing)
12:23 - Yeah, no, I think that, you know,
12:26 I would echo exactly what he said about
12:28 the idea of humans working with AI
12:33 and really supporting them to learn how to use the tools.
12:37 That's where you're gonna get the most clarity,
12:39 and I'm really excited whenever the user interface
12:43 kind of gets broken down so that people without AI skills
12:47 can start to use them and use them in their work.
12:49 Because every time somebody in an organization uses AI,
12:54 you get an answer quickly,
12:56 or it synthesizes a whole lot of information quickly,
12:58 but it kind of blurs the information.
13:01 You know, from an information theory standpoint,
13:02 we'd say it increases the entropy,
13:04 but really it's just
13:05 making the picture a little bit blurry,
13:07 and the person looks at it, figures out what's in it,
13:09 and then when they make a decision,
13:11 they make it sharper, right?
13:12 'Cause they've combined a whole bunch of things
13:14 from the rest of their life
13:16 and the rest of the role that they do,
13:17 and that helps correct for the weaknesses
13:20 of these generative models,
13:22 and the more you can put those together,
13:24 I think the better,
13:25 and so I'm really excited when I see these UIs,
13:28 or user experiences, being developed in products
13:31 that let people actively participate,
13:34 not in a box-ticking human-in-the-loop exercise,
13:37 but where I'm adding my creativity
13:39 and they're adding their creativity and more,
13:41 and I think that's where we're gonna see
13:44 real, valuable generative AI being deployed in the world.
13:48 - And Alexandra, your top trend to monitor for business--
13:52 - Yeah, I just wanted to agree with that.
13:53 I mean, we monitor AI talent flows in a lot of detail,
13:58 because the hypothesis when generative AI,
14:04 when ChatGPT4, was released
14:06 was that there was gonna be a huge demand
14:10 for prompt engineers, right?
14:13 But that actually didn't happen,
14:15 because banks were looking internally
14:17 to do exactly what you're saying,
14:20 put it in the hands of everyone,
14:22 and figure out what one can use it for
14:26 and what problems it can solve.
14:28 So there wasn't that demand.
14:29 There have been other types of hires that we've seen instead:
14:33 looking more at talent coming from academia,
14:38 looking at talent coming from big tech
14:41 that has been behind the growth
14:45 and the development of the tools, so they know them,
14:47 but also not trying to build
14:51 that in-house, instead trying to create the know-how
14:54 and the talent inside the bank, and upskilling.
14:57 Some have had the strategy of actually putting it
15:00 in the hands of everyone
15:04 and then seeing what surfaces from that,
15:06 and looking at what problems, what ideas can come from that.
15:11 And ensuring that there is this,
15:12 I know a lot of people talk about an innovation mindset,
15:15 but that's actually more important than ever,
15:17 because if you've got someone
15:19 inside a line of business,
15:21 or running it, who has AI front of mind
15:24 all the time as a solution,
15:27 then you're going to come up with many more ideas
15:30 for use cases, and use cases in the pipeline
15:32 that can then be tested in terms of selection,
15:36 which means looking at complexity
15:38 and the ROI attached to each of the use cases.
15:41 So on talent, that actually
15:45 went in a slightly different direction than we anticipated.
15:48 But upskilling, which is what the banks
15:51 are doing internally,
15:53 not just for senior leadership but across the whole talent stack,
15:57 is something we monitor quite closely.
15:58 - Interesting.
15:59 I'll throw open to the floor
16:01 if there's any questions for the group.
16:02 Yeah, gentleman in the middle.
16:04 Oh, sorry.
16:05 (woman speaking off mic)
16:09 - Hi, I'm Ajay from Salesforce.
16:13 Quick question, and this is from
16:14 a financial institution perspective.
16:17 Banks and financial institutions have gotten really good
16:20 with managing structured data,
16:22 and data is so critical to having the right AI output
16:26 at the end of the day.
16:27 Can you give us an insight into what financial institutions,
16:31 or what you individually are trying to do
16:33 when managing unstructured data?
16:34 Because it's so critical to getting a great outcome
16:37 from any large language model.
16:40 - Sure, I'll take that one very briefly.
16:41 So, again, another topic that's not discussed enough.
16:45 If you expect to get high quality output,
16:49 even with a well-trained workforce,
16:52 your house of data, that foundation,
16:57 must absolutely be as strong as it can be,
16:59 structured, unstructured, where it is,
17:02 under what type of use, geographic, et cetera.
17:05 Full stop, especially in banking,
17:07 financial services, healthcare, et cetera.
17:10 So, again, that strong foundation for your data
17:14 is an absolute prerequisite,
17:16 especially if you need to get demonstrable,
17:18 repeatable, high quality outputs
17:20 that you can stand in front of a customer,
17:22 or a regulator, to ultimately prove
17:24 that you have product market fit.
17:26 - Okay, guys, I think we'll leave it there.
17:29 EJ, Alexandra, Brian, thank you so very much.
17:31 (audience applauding)
