Brainstorm AI Singapore 2024: Innovating Responsibly And Securely With AI

  • 3 months ago
Sheena JACOB, CMS Holborn Asia,
Ayesha KHANNA, Fortune Brainstorm AI Singapore,
Phoram MEHTA, A Matt HEIMER, FORTUNE
Transcript
00:00Thank you all so much for being here, both at the conference
00:02in general and at our concurrent session.
00:05Thrilled, thrilled to have you.
00:06Thrilled to see some of you from closer up
00:08than I got to see from the stage earlier.
00:10My name is Matt Heimer.
00:11I'm one of the co-chairs of this conference,
00:13and I am the executive editor for features at Fortune,
00:16which means, among other things, I oversee Fortune magazine.
00:20We're here at this panel to discuss innovating responsibly
00:24and securely with AI, which is a sort of polite way of saying,
00:29will the robots come and destroy us all?
00:33Spoiler alert, we don't think they will.
00:34More importantly, we have a great group
00:36of speakers who can address the questions that
00:39surround that big meta question.
00:42Before we get started and before I introduce them,
00:44I want to thank our partner, PayPal,
00:47for hosting this session for us.
00:49We certainly appreciate your partnership.
00:50We're incredibly grateful to be part of this operation
00:53with you.
00:54I also have a little bit of housekeeping for us all
00:57as a group.
00:58First of all, just a reminder that this session
01:00is going to be on the record.
01:01Anything that we say here could be reported on by the media,
01:06including Fortune.
01:07So be aware of that.
01:08More importantly, when we turn to the point
01:12where we'll be soliciting audience questions,
01:14although it is a small room, we do
01:16need you to raise your hand so that someone could come by
01:19with a microphone for you.
01:21That way, we are recording this event,
01:23and it'll be easier for audience members
01:25to hear you when you ask a question.
01:28And most importantly, this really
01:30is meant to be a group discussion.
01:31We have terrific leaders up here.
01:33But at the same time, we're looking forward
01:35to incorporating as many voices as we can,
01:37and that means yours.
01:39So don't be shy.
01:40Speak up when the spirit moves you.
01:41We're eager to hear it.
01:43With that, let me finally start with introductions
01:46to the panel.
01:47Starting at my immediate right, one of my co-chairs,
01:50Ayesha Khanna, the co-founder and CEO of Otto.ai.
01:55And to her right, Foram Mehta, who
01:58is the AIPAC chief information security officer for PayPal.
02:02And at my far right, Sheena Jacob,
02:04who's a partner and head of intellectual property
02:08at the law firm CMS Holborn Asia.
02:11So thrilled to have you all, and I'm
02:13so grateful that you're here.
02:16So as the title of this session suggests,
02:20innovation is at the heart of every conversation
02:21about artificial intelligence.
02:24I think we all share a belief that artificial intelligence
02:26will really accelerate economic growth.
02:29At the same time, just about every conversation
02:31we have around AI, there's a question of,
02:33how do you strike the right balance between innovation
02:36and growth on one hand, and responsibility and security
02:40on the other?
02:40That's where this conversation is going today.
02:44So with that in mind, there's a question
02:45that I want to pose to each of the panelists,
02:47give them each a chance to answer.
02:48And it's a two-part question that goes like this.
02:52What is one of the most exciting objectives
02:55that you think AI could help businesses
02:57accomplish in the near future?
02:59And what kinds of pitfalls go with that pursuit,
03:02and what kinds of guardrails and precautions
03:05are we going to need to avoid those pitfalls?
03:09So that's my big, broad question.
03:10I'm going to start by posing it to you, Aisha.
03:14Oh, OK.
03:15Oh, sorry.
03:15Yeah.
03:17You're the lucky one.
03:18Oh, two mics.
03:19Double fisting.
03:21So a goal that you're very excited about,
03:23and what kind of precautions do you
03:25have to keep in mind on the way to that goal?
03:27I think one of the things I'm really excited about AI
03:29is its ability to enhance human productivity.
03:34And we see that.
03:35We have a number of clients in health care, and logistics,
03:39and financial services.
03:41And internally, from every department,
03:46from HR, to legal, to marketing, to field services agents,
03:51we are seeing generative AI being
03:53used at an enterprise level to give them
03:58ability to work faster, and to be able to resolve issues
04:02much faster as well.
04:04So for example, for customer service,
04:06they're able to do this by answering questions.
04:09But not only that, they're able to do more effectively
04:12react to the customers by doing analytics of their voice
04:15or how they're feeling, so that they can upsell and cross-sell
04:19them different kinds of offers.
04:20So then empathy that was not there before,
04:23that ability to generate content that was not there before,
04:26is the most exciting thing.
04:28Now, the problem with all this is
04:29that if we don't do it, things that
04:32seem to be productive and efficient
04:36can become kind of creepy, can get into privacy issues.
04:41And for me, I think what people really miss out on,
04:44not just the bias and the manipulation,
04:46is that AI cyber hacking is a kind of cybersecurity risk,
04:52at least at a board level, that is bigger than anything else.
04:55For example, we all love chatting with chat GPT.
04:58But if you were to do a prompt attack,
05:01like just throw hundreds of prompts that were horrible,
05:05or poisoning the data of the foundational model,
05:09it can completely change the way our AI works.
05:12Great things to keep in mind.
05:14It can be a disaster.
05:15OK, so this is exactly what I was hoping for.
05:17That looks fantastic.
05:19That looks like a nightmare.
05:20So this is exactly the tone I hoped to set.
05:23Forum, over to you.
05:24I completely agree.
05:26I think the way I think about what's really, really exciting,
05:31I mean, we've obviously gotten together
05:35to discuss AI and the transformation it's bringing.
05:38Clearly, it has the potential to transform lives.
05:42If we just look at the history of technology advancement
05:45specifically from an automation perspective,
05:47we started with the deadly and the dangerous,
05:49where we really felt like an outsized investment needed
05:54to be made, whether it is jets, robots, that did very, very
05:57dangerous type of work.
05:58And then we came to the dull.
06:00And we are just finishing those dull aspects
06:03where everything that can be automated
06:05is getting automation.
06:06And we are seeing a lot of benefits from efficiency
06:10to productivity to other things.
06:12So what I feel super excited about
06:14is now after dangerous or deadly and dull,
06:19we're moving into the decisioning aspects.
06:21And with decisioning, it's not just
06:24about analytics or insights.
06:27It's actually how well can we predict,
06:29how well can we understand context
06:31like we discussed yesterday in terms of what humans really
06:35feel when they are communicating or using language
06:39to communicate something.
06:40I was in a panel last week with the head of Interpol.
06:45And how law enforcement, we take it for granted
06:48how law enforcement services should
06:51be available to every single human on Earth
06:54because they want to feel protected.
06:57And it should be a right.
06:58But just because of the lack of language capabilities,
07:02they couldn't do it until AI has enabled all the officers
07:06to carry any kind of language translation that they needed.
07:09From taking that further, what PayPal
07:13has been trying to do for last 20 plus years
07:16is reshaping, reimagining how commerce works.
07:19And what AI is enabling us to do is
07:22how do we think about payments specifically, not just online,
07:26but also in person, not just global,
07:28but also from a very, very personalized manner,
07:31and constantly use those data to make a very, very
07:35important decision that people don't expect machines
07:38to be able to make.
07:40And that's what is super exciting.
07:42Clearly, Ash has already mentioned
07:45how important my job is as cybersecurity president.
07:48So cyber risk is extremely important.
07:50But if you, once again, recap most of the talks
07:54of where the guardrails are needed,
07:55I mean, for the first time, it feels
07:57like a technology advancement that is happening
08:00has not left the guardrails discussion to the very end.
08:05We did it with internet.
08:06We did it with mobile phones, with social media,
08:09with even cloud.
08:10It wasn't the top topic.
08:12But now we have concepts like privacy by design,
08:15security by design.
08:16People are thinking about governance, ethical frameworks.
08:19We have our own responsibility.
08:21Where I feel some new sorts of challenges
08:25will come is the concept of rollback.
08:28Aisha mentioned poisoning of models.
08:30It's not simple rollback that you push out something,
08:35a feature, a bug, and then you take it back.
08:37Once that model has learned, once that data is consumed,
08:41and that piece of intelligence is embedded into the system,
08:44it's a huge investment to retrain the model
08:47to either purchase a new model.
08:48So supply chain risks, we saw a component of it
08:52with CrowdStrike, Microsoft Outage,
08:54but is an area where a lot of focus needs to be paid.
08:58Rollbacks, clearly.
09:00And then overall, just use of AI.
09:03So there is AI off security, or security by AI,
09:09and security off AI.
09:12Ref has already mentioned bringing it back to,
09:14you know, in last couple of thoughts,
09:18to APAC as the ground zero of cyber threats
09:21that are being powered by AI.
09:22Why?
09:23Because we have bypassed PCs.
09:26You know, everybody's on phone.
09:27Everybody's always connected.
09:29We've seen the power of deepfake.
09:32We've seen the power of disrupting traditional controls
09:35through AI, either through volume or just the complexity.
09:38And now, how good it is becoming
09:40in discovering vulnerabilities that nobody knows about.
09:44And that's what is concerning, is how AI, on one hand,
09:49while we are trying to catch up with the bad guys
09:51as they are getting organized,
09:52will also give them equal, if not more power,
09:56to try and disrupt and cause, you know,
09:59havoc in just the general way
10:01in which we are all trying to help all of our constituents.
10:04Yeah, perpetual cat and mouse possibilities.
10:08Sheena, give us your perspective on this.
10:10What's a great opportunity you see
10:12and some pitfalls on the way?
10:14Sure, I think, from my perspective,
10:18there are lots of use cases, I think,
10:19that we've talked about over the last couple of days,
10:24whether it's in the medical field, investments.
10:29We talk about productivity,
10:31but I also see AI being used a lot in creativity.
10:36So we're seeing a lot of the content companies
10:39using AI themselves to create new stories, new characters.
10:44And that, you know, that does provide
10:46a lot more possibilities,
10:48and it supplements human creativity.
10:53And I think that is very exciting for the future.
10:56Obviously, being a lawyer,
10:58I always have to think about, you know, the guardrails.
11:02And I guess I would say two things about that.
11:06Maybe, in my view,
11:12perhaps we're not really, regulation isn't that early.
11:16I think a lot of these models have been created,
11:19not necessarily with regard to privacy issues.
11:23Data has been used to train some of these models
11:28that uses intellectual property of third parties,
11:31and so you have a lot of litigation going on.
11:34And so there is already, I think, some risk,
11:37although I would agree that there is more thought
11:40being put into, you know, how do we regulate this
11:43as we use the systems?
11:45And so I think it is important,
11:47whether you are a deployer,
11:49and you develop the AI systems,
11:52and you're rolling them out,
11:54or whether you're a user,
11:55to think about what are the legal issues,
12:00what liability is there,
12:04and also making sure that there is compliance.
12:07There is going to be a tsunami of laws that you,
12:11I was telling Matt, that you, AI Act,
12:14is coming into effect tomorrow.
12:16And it does have a two-year window,
12:18but for example, in six months' time,
12:22unacceptable risk AI systems will be banned.
12:26And so you need to already be looking at your systems
12:30to determine whether or not
12:31you're going to be caught by that.
12:35Codes of conduct will also come into effect,
12:37I think, in February 2025, not a long time away.
12:41So, you know, there is a lot of work that we have to do,
12:45but I do think that these rules and regulations will help,
12:50but they won't necessarily solve all of the risks.
12:54I think that's something that all of us
12:56will have to work on as we,
12:59and I think as we go through this,
13:00and I honestly think some of the risks
13:02we haven't even worked out yet,
13:05and we're going to see as these systems deploy further.
13:10We'll only know them when we stumble into them.
13:15We've had a couple of panel discussions
13:16in the course of the conference already
13:18where we've had representatives of the Singapore government
13:21talk about wrapping their minds
13:23around the right regulatory framework.
13:26As practitioners and representatives of the private sector,
13:29you know, where do you see,
13:32what's the sort of best balance
13:34that the public sector can have?
13:36You know, how can they best support what you want to do?
13:38You know, where should they step aside?
13:41You know, where should they lead?
13:43You know, what's the right balance for regulators?
13:48Open question to anybody.
13:49I think it's a very interesting time
13:52to look at the regulatory landscape.
13:55You have, you know, the European Union,
13:58as Sina mentioned,
14:00there are consequences to the AI Act.
14:03For example, the latest multimodal models
14:05by Meta and Apple,
14:07they're not going to bring to Europe.
14:10So there's always this thing that,
14:13on the one hand, I'm a big believer
14:15in their risk-based framework,
14:17but every entrepreneur I meet from Europe
14:20says that it's stifling for innovation.
14:23So every country needs to find a balance on that.
14:27And if you look at the United States,
14:29it's starting to dip its toes in it,
14:31but the states are coming out
14:34and really trying to enforce certain regulations.
14:37So California has a proposed regulation
14:39and every single tech company has come out
14:42and said this is onerous
14:43because it says for large language models,
14:46if it's misused,
14:47then they will hold the company accountable.
14:50And they're saying,
14:50no, you should hold the person
14:51who's misusing it accountable.
14:52And that's just one of the regulations.
14:54So how do you kind of both encourage innovation,
15:00but also respect people's dignity,
15:02right to privacy,
15:03right to ethical and responsible deployment?
15:06That's the big question.
15:07And I think in Singapore,
15:09they're trying to find that balance.
15:12There is an open source toolkit
15:14where available from IMDA,
15:16which right now is voluntary reporting.
15:19And that's a good way to start.
15:20Kind of reminds me of the early days of ESG,
15:23where everybody had voluntary reporting
15:24and everybody was trying to be green.
15:27But eventually these regulations
15:29are going to start getting baked
15:31into the way governance is done.
15:33And the best thing that I advise my clients is,
15:36you have to realize that AI regulations
15:39and governance are coming,
15:40whether you like it or not.
15:42And if that means that one way is to go to the Middle East
15:45or somewhere else where you think it's more flexible,
15:49but as global companies,
15:51instead of trying to skirt these regulations,
15:53the best thing is to localize your infrastructure,
15:57just like we're doing with data localization
15:59and adhere to it.
16:00I don't think we really have a choice.
16:02The private sector needs to comply with public regulations.
16:06And there won't be sort of a Wild West
16:08where people can operate without that look.
16:10Sheena, as someone who's working in that field,
16:12you must be seeing a lot of this.
16:13So I think what is interesting
16:16is you really see countries fall into two camps.
16:19You have those that are going by looking at regulation.
16:23So what I call hard laws, Europe, China,
16:27Taiwan just had a draft regulation
16:30that it is moving forward with.
16:32And as Ayesha mentioned, some of the US states.
16:37And then you have another model, which is,
16:39let's have frameworks, guidance.
16:43These are not going to be hard laws.
16:46Companies will not face penalties,
16:49but the idea is that this is what we would like you to do.
16:53And this is how we would like you to use these models,
16:57develop these models.
16:58And I think that Singapore is very cognizant
17:02of the importance of that balance,
17:04because I do think that over-regulation
17:09will stifle AI innovation.
17:11And yes, I've heard the same thing.
17:13In fact, I was at a conference last year
17:16where there was a speaker from the EU Commission,
17:19and he actually had a slide with the number of laws
17:23that would apply.
17:25And it was something like 136 regulations.
17:29The slide was so small, I couldn't see anything.
17:31But that just tells you what the challenges are.
17:37And so you definitely are, even if you look at the rankings
17:41in terms of where innovation is happening outside, I guess,
17:47of the US and China, it is really Singapore, for example,
17:51is one of the key markets.
17:54And so I do think that balance is important.
17:57Europe has maybe a different philosophy.
18:00If you look at GDPR, GDPR was kind of the first mover.
18:06And I think Europe has decided to go down that road already.
18:10And it's just going to be a question of companies
18:14will have to look at that and figure out
18:16what's your AI strategy geographically,
18:19because it's going to be impacted by the regulations.
18:24And at the same time, it might be worth
18:27noting that in the United States,
18:28a couple of recent Supreme Court rulings
18:31set precedents that will take, in theory,
18:34a lot of discretion away from regulators,
18:36such that if in the United States
18:39they wanted to set rules around the use of AI, in theory,
18:43Congress would have to, in a law,
18:45put every single rule, which in the United States
18:48political system, it's very hard to imagine that actually
18:50happening.
18:51So I think the US arena could remain,
18:53if I'm understanding it correctly,
18:54it could remain very unsettled for a pretty long time.
18:57I think so.
18:58And perhaps that's not a bad thing.
19:03Because I do think you sometimes have
19:05to just watch to see how the technology is going to be used,
19:10what is really going to happen before you jump in and try
19:13and regulate it.
19:14Sure.
19:17So on one hand, I'm really impressed
19:20with how Singapore is doing it.
19:22Rather than saying, like Europe, you
19:24shall not, x, y, z, do it in this way,
19:27saying with the AI Verify and the Project Moonshot
19:30and others that you should do it responsibly,
19:34and this is how you do it.
19:35These are the toolkits.
19:36These are the guidelines.
19:37These are the frameworks through which you
19:39can continue to leverage the advancements and innovation
19:43to help truly transform the lives and your business.
19:48Where I feel we just, a little bit different perspective
19:52than what Aisha shared is we have
19:55to recognize that technology and these tools that
19:59started as silos, as areas where only the haves had access
20:06to them and the investments to try and create applications.
20:10The whole point of AI is to create and level
20:14the playing field.
20:15And as that happens, I feel the interconnectedness
20:20of every piece of data, interconnectedness of all of us
20:25as a global population is just hard to ignore.
20:28So harmonization is just one aspect.
20:31And if we can't get to a point within ASEAN,
20:34it's hard to imagine how will AI be
20:38such an integral part of what we do on a daily basis,
20:42from language to commerce to everything else,
20:45if the laws and the regulations are
20:47so different at every aspect.
20:48And there are examples from fraud, from cybersecurity,
20:52from compliance, where we have taken
20:55that sort of a pragmatic risk-based approach.
20:58But then we have seen that we become so hyper-focused
21:01on the local needs of how it affects the business, how
21:04it affects the risk, where the fraud is happening more,
21:06where it's not.
21:07And then the tuning of those dials
21:10is not quite right to both encourage the business
21:14and somehow provide that stimulus for growth,
21:18but also have the protectionist in the ecosystem.
21:21And I really hope that with these regulations
21:23and these discussions like these that regulators are meeting
21:26and trying to come up with something
21:28that everybody can agree on.
21:29Because at the end of the day, it's
21:31just that connectedness is just too much for us
21:34to have siloed laws and regulations.
21:39Well, I'm just going to say that I think, I mean,
21:42that would be great.
21:43But I think that's really unlikely,
21:46especially in this space.
21:49Yeah, you have a question.
21:51And just, she'll bring you a microphone
21:53so everybody can hear you.
21:56And actually, do you mind introducing yourself
21:58before you ask your question?
21:59My name is Rachel.
22:00I'm actually from NGX Stream, the founder.
22:03Question is, I hear that collaboration
22:05is important to make sure that we are working together
22:08to advance humanization, just like ESG, right?
22:12So there are so many institutions, regulations,
22:16bottom line standards.
22:17So that forces country to work together
22:19with a benchmark of greening the economy.
22:23So I'm just curious, from AI perspective,
22:26is there such concept today to kind of better
22:31for human kind of indicator, like greening
22:33the economy of carbon footprint or decarbonization?
22:36Is there any measures of such yet today?
22:40So sort of like a benchmark of sort of the social good
22:44that the AI can do.
22:44That's right.
22:45I see.
22:46Yeah.
22:47I mean, I think one of the panelists mentioned benchmarks.
22:50So I don't know if there is something established.
22:52But ASEAN earlier this year agreed
22:54on a responsible AI framework, which talks about fairness,
22:59that talks about bias, that talks about explainability
23:02on how the models work.
23:04All of these concepts that are very human,
23:06they are not business metrics, but that
23:09have to be incorporated in every single model
23:13that you develop, every single application that you develop,
23:16and every single experiences that you
23:18expose to the companies.
23:19And I think if it can happen at the ASEAN level,
23:22if EU countries can agree on something,
23:25there has to be something that people can agree,
23:28at least for human values and rights.
23:30From a legal perspective, I would love to hear from all of
23:34you.
23:34Sorry.
23:36Yeah, so I think in terms of the frameworks,
23:41whether it's an ASEAN responsible AI frameworks,
23:44the principles are the same, are similar.
23:48Bias, transparency.
23:51But I think how these laws are going to be,
23:53how the laws might be developed, how they would be implemented
23:57can be quite different.
23:59And so that is the risk that companies face, I think.
24:05The devil is always in the details.
24:09We heard a story earlier about how compliance
24:14with certain regulations under GDPR
24:17can actually have an impact on your business.
24:21And so I think that what we are seeing, for example,
24:27with the approach that is taken in the EU
24:32is that it is quite a strict and high bar.
24:37And that is going to make it difficult, I think,
24:42to innovate in the same way.
24:46And so I think it's an ideal.
24:49But there is, I think, as far as I'm aware,
24:52no sort of universal law or treaty that has been agreed.
25:01There's a question in the back of the room, something
25:04I want to come back around to after a couple of questions,
25:07has to do with building the databases from the ground up
25:11and how some of these issues of fairness and social utility
25:15maybe need to be baked in from the beginning.
25:17If the devil is in the details, maybe the angel
25:19can also be in the details.
25:20But we'll come back to that.
25:22I have a question for the back.
25:25And please introduce yourself before you.
25:26I'm Joseph.
25:27I'm a venture capitalist in health tech.
25:29I dabble in AI as a practitioner and like to invest
25:33in this space as well.
25:35I kind of feel like all this talk about regulations
25:39and frameworks and all this kind of stuff
25:41is like shuffling the chairs around on the Titanic.
25:43I mean, we have nation actors.
25:47We have non-nation actors.
25:49We have models you can download.
25:51You can run locally.
25:53There's all sorts of potential mayhem that can ensue today.
25:58So the question that I don't hear anyone talking about
26:01is we are the weakest link.
26:05AI can evolve defenses to AI, right?
26:09But how do we prevent ourselves, our brains, from being hacked?
26:14Because we have evolved a number of shortcuts
26:18that allow us to make quick decisions in nature that
26:21don't translate well to a sophisticated technological
26:24civilization.
26:25So how do we do that?
26:26And kind of a related question, if I'm going to be brief.
26:30That's a signal for me to stop.
26:32That's right.
26:33Let's stick with your first question
26:34just because it's great.
26:35And we'll come back a little later.
26:39So the question is, how do we train our brains
26:42to deal with an AI environment in a way that
26:47lets us, as humans, be more responsible, more
26:50secure in our relationships?
26:52I'm not watering you down too much, I hope.
26:54No, that's OK.
26:55That was good.
26:55Good middle ground.
26:57Thanks.
26:59It is not easy, right?
27:00So there is a, Anthropic has a constitutional AI model
27:05as to try and see what are some of the human principles
27:08that everybody can agree on, and see if that AI can always
27:13respect those principles in answering all the questions
27:15and anything that you develop.
27:17The point about being human is that it's not perfect.
27:21I mean, today, we're talking about mundane type of jobs
27:27or repeatable jobs being replaced.
27:29But you could argue that AI could, I think,
27:34make much, much better decisions than CEOs
27:36in a lot of different factors because the prediction is
27:38so good, because it's considering
27:41so many different data variables.
27:42It's taking emotions out of it, which makes us human.
27:45But do you want to do that?
27:47Because that's the whole point is,
27:50how do you draw a line between what makes us human
27:54and why that is the right thing to do
27:56for any particular system versus applying the systems
28:00thinking approach in any machine developed application
28:05that we build or systems we build,
28:07and create the right level of feedback loops
28:10that allow us to correct whatever mistakes we make
28:12because we are human, or in spite of being a human,
28:15or on the other side.
28:17So I don't think perfection in the extreme sense
28:23should be the goal or is the goal,
28:25because that's not what we are trying to do.
28:28It is, how do you find that balance between values,
28:32empathy, fairness, transparency, all of those,
28:36while making the most efficient, the most personalizable
28:40experience and decision?
28:42It feels like that's a conversation that doesn't just
28:44sort of, you reach a conclusion, then you stop, too.
28:46It has to be evolutionary across your organization.
28:50Ayesha, I know a lot of what Otto does
28:52has to do with helping companies get started
28:55with the digital transformation.
28:57What kinds of sort of flawed mental assumptions
29:01do companies sort of bring to the table
29:03when they're first talking to you and saying,
29:05okay, I think AI is gonna help me be richer.
29:08How am I gonna do this?
29:09I mean, usually there's some CEO who's seen another CEO
29:13on the cover of a magazine and then comes back
29:17and says, we have to have AI in our company, too.
29:21The issue is that people have, they're vague.
29:25Let's improve customer experience,
29:27or let's be more competitive, or let's be innovative,
29:29that's the most overused word, with AI, right?
29:32But a lot of what we do is we refine it into something
29:37that becomes what is a real use case.
29:40So that real use case is this is the exact problem,
29:45and if it's, for example, for customer service,
29:47we'll reduce the time for the resolution by 10%,
29:50we'll lower our, we'll answer 90% of our FAQs automatically,
29:54we'll lower our labor costs by 10%,
29:56and when you do that, there's discipline around it.
30:00Then the next thing they often don't understand is
30:02that it's like, I wanna be a ballerina,
30:04but I was never trained for it.
30:06So the training or the muscles you build
30:08that you need for AI is is your data properly organized
30:12or is it all over the place, is it a big mess?
30:14And 90% of companies are still in a big mess.
30:19They're digital, sometimes that means they're using digital,
30:22they're even e-commerce,
30:23but they're not tracking all the data,
30:25they don't have it in a proper architecture.
30:28When you combine kind of having the proper data
30:32with a focused use case, and the last thing,
30:35you keep it at three months maximum, 10 to 12 weeks,
30:39to be able to do something,
30:40that's the machinery you want to build, it's a rhythm.
30:44And AI is not a one-stop thing,
30:46it's kind of a momentum that you build in the company.
30:50And I think that, and it takes time for it to be realistic.
30:53So once you actually have this kind of agile approach
30:57that's based on realism and an organized data set,
31:00you begin to see results.
31:02I imagine that, well,
31:04everyone's gonna have a different perspective,
31:05but I imagine that the agile mindset
31:07also has to include sort of a,
31:09part of your mindset has to be
31:10a little bit of a red team mindset.
31:12It's sort of like, what could go wrong
31:13and how can I stop it from going wrong?
31:15That you don't want that to dominate your life,
31:17but you have to, it's gotta be in the conversation
31:18in the same way that evolutionarily,
31:20I have to enjoy my wonderful meal,
31:22and occasionally wonder if there's a saber-toothed tiger
31:24gonna compete with me for the meal.
31:26Maybe the wrong analogy.
31:27We can talk about this later.
31:28Okay, I have a question in the back, thankfully,
31:31to save me from myself.
31:33Hi, Mr. Healthcare Tech, I'm glad you asked this question
31:37because I was wondering,
31:38am I the only sinister guy in the room?
31:41Edward, Chief Investment Officer of Covenant Capital,
31:44we are a wealth management company based in Singapore.
31:47My question is this, this conference has been great,
31:49everyone talk about how AI can unleash the creativity
31:52and give us a lot of time to do other stuff.
31:55Government are well aware, setting guardrails,
31:58but what about nefarious characters?
32:00What's being done, right?
32:02The very person like us trying to make it work
32:05could be the other side of the equation,
32:08trying to make it, trying to make us pay.
32:10So question is, what is being done in industry?
32:13And you guys are there, PayPal especially, right?
32:15The forefront of people try to hack into you.
32:17What is being done to stop these characters
32:21with a more malicious intent?
32:22Yeah.
32:24Sure, I can start, but I'm sure there's a lot
32:28that both Ayesha and Sheena can add.
32:30So it's good to, I like the, what Ayesha mentioned
32:36about thinking about it in a use case perspective.
32:39So one of the already live and very, very helpful use case
32:46with a lot of positive impact for us has been
32:50improving the authorization and settlement rates
32:52for payment cards.
32:54Understanding which customer uses which card
32:58in what type of situation for what type of transaction
33:00in which country and with zip code for what items
33:03is extremely helpful to allow to ensure
33:06that the authorization rates are always good.
33:09We can warn them before they click on submit
33:12that this card has actually expired
33:14or it has some sort of a limit
33:16or there is a better reward in some other sense.
33:19So that's like the best case scenario.
33:21The, what you were trying to bring in on the other side,
33:25the corollary of it is credential stuffing type of attacks
33:30where there are dumps.
33:31Now, because everything is on data,
33:34there are all kinds of stolen credentials available outside
33:36and people use same password still
33:39in a lot of different places.
33:41So what are we trying to do about it?
33:43You know, one is understanding those patterns
33:46as quickly as possible,
33:47putting in basic signature based detection mechanisms.
33:53One, the other is educating the user
33:57in understanding how they have to protect their passwords.
34:00But that's not good enough, right?
34:02Because we've clearly seen that people still use
34:04same passwords on some random website online,
34:07as well as on some of their banking accounts.
34:10So we're moving towards passwordless authentication,
34:13for instance, as an industry,
34:15there's FIDO Alliance,
34:16there is passkeys that we have rolled out
34:18that tells you, please do not rely on passwords.
34:20Now we have devices, all of us carry them,
34:23that are way, way more powerful and secure
34:25than anything else that you can remember.
34:28So it's not just about what you have or what you know,
34:31it's also what you are.
34:32Using behavior and having a unique signature or footprint
34:37of 150, 20, 200 different variables
34:41that allow us to detect,
34:42is this really the person
34:44that is making the transaction or not?
34:46And if we feel that there is even a slight doubt,
34:49then step up and ask you another question
34:51before allowing those type of things.
34:53At the end of the day, it is a risk based decision.
34:55Some businesses are able to understand
34:57and invest in those type of decisions.
34:59And that's why we have the industry leading loss rates
35:02as we control them in a way to use the data
35:06and AI and automation with these machine learning models
35:09that we've been building over a decade.
35:11But it can only go so far, right?
35:15Because they are going to deploy newer models
35:18and we are going to use newer software,
35:20newer experiences that will have some vulnerabilities.
35:25We feel that it's actually helping the good guys
35:28level the playing field
35:30rather than the traditional computing
35:32where we just don't know who is doing what
35:34and how many phishing emails are being sent.
35:38It's better to be prepared and using AI in those defenses
35:42than waiting for it and then see
35:45what type of reactive responses we can take.
35:48There's a sort of an interesting sense in this conversation
35:50in all the forms that is taken at this conference
35:53of nefariousness is undeniable and it's out there.
35:58And yet, we now have tools
36:00to kind of quantify the nefariousness
36:02as soon as it shows up,
36:03which it can work in our favor in a data era.
36:07But it doesn't eliminate nefariousness.
36:09That's always gonna be intention.
36:11There was a question from someone here
36:13at this front left table.
36:16Hi, I'm Tyler.
36:18I'm from the financial services industry
36:21but I'm more towards information and cybersecurity.
36:25My question is perhaps a bit more towards
36:27on the operational level of things,
36:30especially when it comes to ideation of AI use cases
36:34and implementation within organizations.
36:36What would you see is the right enterprise
36:40or organizational level control tapping on existing
36:45processes, teams, infrastructure, whatever
36:49that would maybe give a 51% risk reduction
36:52in terms of reducing the risk that the organization face,
36:57either from an intellectual property perspective
36:59or from a cybersecurity perspective.
37:01What's one control that can be applied
37:05from your industries or experiences
37:08to really guardrail that correctly,
37:11to allow use cases to be explored,
37:13to be safely and responsibly consumed
37:16and bloom for the organization to reap the benefits.
37:22Thanks Tyler.
37:24I'll start from the intellectual property perspective.
37:28I think that what's critical is the data.
37:32As Aisha was talking about,
37:34talking about organization data,
37:36but also knowing what that data is
37:38and ensuring that it does,
37:40it is not comprised any one else's intellectual property
37:46that's not licensed or owned
37:49that you don't have the right to essentially.
37:53And that from a practical perspective,
37:56what we see companies doing is they do scans of the data
38:01to check for intellectual property ownership.
38:06And in that sense, you try to then clean up the data.
38:10And so that's a practical thing that you can do
38:13to manage that data and to reduce the risk
38:17of intellectual property infringements.
38:21It's kind of fascinating to note
38:22that the world's best known LLM did not take that step.
38:29I think when you go through
38:32the use case formulation phase,
38:36there's always a checklist.
38:38Actually, people think there's a complicated things,
38:40but they don't have to be.
38:41In our case, we always have a checklist,
38:43which includes what is the data,
38:45what is the regulation compliance.
38:48Then you work with the InfoSec security team,
38:50which is a centralized department
38:53throughout the organization.
38:54It provides a list of things that it must go through.
38:57And then you just have to go through that process
39:01at all points in the roadmap.
39:03So for example, if you're checking for bias and manipulation,
39:06these are the metrics you should look at
39:09when you're uploading data
39:10or you're doing exploratory data analysis.
39:12Similarly, when it is going through access management
39:15or other kinds of things, there are a set of processes.
39:18So it is that discipline that you need.
39:21And usually, there can be a project management arm,
39:23which is then informed by the cybersecurity
39:27or the data security or the strategic arm.
39:31And it is keeping to that discipline.
39:32That's what's always lacking.
39:34People get so frustrated with their internal controls
39:38and they think it's so bureaucratic and so slow
39:41that they start doing these little, little things
39:42on the side.
39:44And that's the risk.
39:45So on the one hand,
39:46you don't want your central InfoSec department
39:49to be so onerous as to have very elongated risk.
39:53But for the main important ones,
39:55it's non-negotiable for companies
39:57and the best companies stick to it.
40:00That's a great point.
40:01Forgive me, I want to get, oh, sorry.
40:04To sum it up in 10 seconds,
40:05I'd say it took us two, three decades in cybersecurity
40:08to move from detection to prevention
40:12to detection to prevention.
40:13And I think that's what we have to now use those learnings.
40:19And for AI, again,
40:22I'm fascinated with the concept of feedback loops.
40:25How good feedback loops companies are able to create
40:29in any kind of enterprise applications that they build
40:32will define who is more successful or not.
40:34Because everybody is going to make a misstep.
40:36Everybody's going to have a boo-boo.
40:39That's a good point.
40:41I think we have time for one more question.
40:43So I'd like to.
40:45Tim Horton from the Mohammed Bin Zayed
40:47University of Artificial Intelligence.
40:49So as someone who's done quite a bit of work
40:51in fairness and bias and AI safety,
40:54one of the things I don't have a good answer to myself,
40:57and I'm interested to hear your opinions,
40:59is we struggle to build unbiased models.
41:05We struggle to build completely safe models.
41:08But part of the reason for that is that the data is biased,
41:12i.e., we are biased.
41:14So in a way, we're trying to achieve a utopia
41:17which we ourselves can't achieve
41:19in terms of completely unbiased and completely safe models.
41:24You know, if I want to know how to, whatever,
41:26build an explosive device,
41:28the information's out there on the internet
41:30or on the deep web.
41:30It's there.
41:31I ask the right person the right question.
41:33So why is it that I want to insist
41:35that my AI model won't provide that?
41:38Yes, there are obvious answers in terms of accessibility,
41:42but what is the line at which we say,
41:46okay, good enough?
41:48It doesn't have to be perfect, superhuman perfect,
41:52for us to say, okay,
41:55or is there a more realistic way of going about this?
42:02That's a great philosophical question.
42:03I know.
42:04I know.
42:05Yeah, I mean, I've said it a couple of times.
42:07It's more akin to a body.
42:10People kind of know sleep and health
42:12and why headaches happen
42:13and why you feel the way you feel the next day
42:16after a party,
42:18but we choose to engage in those type of behaviors
42:22or have a lifestyle that gives us either short-term pleasure
42:27for some long-term.
42:29So it's a decision-making thing,
42:32and I think that's, at the end of the day,
42:34that's where what we can do is in the right places,
42:37we have those rights checks and balances
42:40where at least we are able to provide
42:44fair and equal results
42:47to the right types of use cases and the people, right?
42:50I mean, the opportunity, the access to knowledge,
42:53access to education,
42:54and the ability to transform and create innovation
42:57is not limited to the right people,
42:59and as long as that happens,
43:00I think as a humanity, as a race,
43:02we have just gone,
43:04continued our journey in the right direction.
43:09Ayesha?
43:11Yeah, I would just say that this is,
43:13it's a really great question, actually,
43:15and this also brings to my mind
43:17this whole debate between open source and closed source,
43:20right, so people are saying if it's open source,
43:23it democratizes LLMs,
43:25but then even the terrible Maleficent terrorists
43:27get access to it,
43:28whereas the closed source say,
43:30well, if you build these firewalls,
43:31we can protect you because you have so much money
43:35from these deepfakes and others theoretically,
43:38but then there'll only be Mark Zuckerberg
43:41with his gold chain who'll be like on top of everything.
43:44So this is the dilemma that we have to grapple with,
43:49and I don't think there's a clear answer to it
43:52because we don't want Africa, Asia, Latin America
43:55not to have access to this.
43:57I think that there is something about phones,
44:00like the Apple strategy and privacy and privacy by design,
44:04it does work wherever a new device will be
44:08to have trusted partners.
44:09So for our, for example,
44:11there's so many startups out there,
44:12for our clients, they only want to work
44:14with reputable, trusted partners.
44:17So there are things companies are doing
44:20to reduce these security risks as much as they can.
44:24It's not a perfect solution,
44:25but something needs to be done.
44:28Gosh.
44:30So I think, I would just say bias, yes, it exists.
44:35And, but, you know, it's probably the first time
44:39that we're actually trying to build or tell a system
44:43that you shouldn't be biased.
44:44I mean, if you look at any other system,
44:46we've never had laws that are sort of requiring systems
44:51to have those characteristics.
44:54So to me, that is an aspiration that, you know,
44:59I think says that this is something that, you know,
45:02we may get 10% better than we are now,
45:05which is still to me an improvement
45:08over how we make decisions.
45:10And obviously who you pick to be your developers,
45:13who is involved in the system, and if you make,
45:16if you are able to then diversify that, that makes it easier.
45:20But to me, it is, it's likely to me to be an improvement
45:24over what we have out there.
45:25And at the end of the day, AI to me is a tool.
45:30And we're just trying to make sure that this tool starts
45:33out with the right principles.
45:36Where it goes after that, who knows?
45:39It's occurring to me that probably everybody
45:43in this room sits somewhere on a sliding scale in terms
45:47of our beliefs in the eventual maybe omnipotence of AI.
45:51And the more powerful you think it's going to be,
45:55the more you feel the need to say, well, we have to perfect it.
45:58It can't have any flaws because someday it's going
46:00to call all the shots.
46:02And I mean, that day could come.
46:04That's the crazy thing about this moment.
46:05That day could come.
46:07And of course, you want perfectibility.
46:08You want the perfect balance of values.
46:10It has to be.
46:11That's how good it has to be.
46:12It can't just be good enough.
46:14But in fact, if it is just another tool that we're adding
46:18to our collective arsenal that's as powerful as, you know,
46:21the smartphone or the internet or the steam engine, you know,
46:24then in fact, maybe you accept some imperfection
46:28to get where you're going to go.
46:30Yeah. So the point is, if you look back like four or five years ago,
46:35I mean, I was with Accenture.
46:36We're talking about classically generated AI and now comes past.
46:40The next version is really super AI.
46:42So that's when things like neural link will come to life.
46:46So what's our core of order in terms of morality?
46:51For those who can't hear because we didn't have a mic up front,
46:54which is my fault, you know, no, it's not new at all.
46:58We're moving to this next stage where AI comes with the word super
47:01in front of it, super intelligent, AGI.
47:04As it gets bigger, that's when we feel like, well, we only,
47:08how many chances do we have to get it right?
47:10I think this group all hopes and believes
47:12that we'll have many chances over a long continuum.
47:15And I personally hope we're right about that too.
47:19We've reached the end of our time.
47:20I'm so grateful for everything that this group of panelists brought
47:24to the table and everything that you brought to the table
47:26with your questions.
47:27That's exactly what we love to have at a Fortune conference.
47:30And I hope these conversations continue in the halls as well.
47:33But for now, that's all the time we have today.
47:35I want to thank Sheena and Faram and Ayesha
47:39for all the fantastic insight that they brought.
47:42I want to thank again PayPal for hosting this session.
47:44Thank you all for being here.
47:46We've got about 15 minutes before the main stage session starts again back
47:50in the ballroom.
47:51That'll be at 1.30.
47:52Grab some dessert on the way out if you can.
47:55And thank you so much for being part of this.

Recommended