• 6 months ago
Scott Wilkie, Global Lead, Emerging Technology Security, Accenture, Maria Milosavljevic, Group Chief Information Security Officer, ANZ, Banking Group, Jennnifer Tiang, Regional Head of Cyber Practice, Asia, WTW (Willis Towers Watson), Calvin Ng, Director of Cybersecurity Programme Centre, Singapore, Cyber Security Agency Moderator: Nick Gordon, Editor, Asia, Fortune
Transcript
00:00Welcome. I'm Nick Gordon, an editor for Asia at Fortune based out of Hong Kong.
00:05Thank you all for being here today. This is the first of two virtual events as part of
00:09Brainstorm AI with Accenture as our founding partner. A special thanks to Scott Wilkie,
00:14Accenture's global lead for emerging technology security. Also on today's panel, Maria Milosevic,
00:23Group Chief Information Security Officer at ANZ Banking Group, Calvin Ng, Director of
00:28Cybersecurity Program Center at Cybersecurity Agency of Singapore, and Jennifer Tan, Regional
00:34Head of Cyber Practice of Asia at Willis Towers Watson. We've got a fantastic panel here for
00:40today's conversation. Please feel free to use the chat as well to offer your thoughts, comments,
00:44and questions. Please use the Q&A function if you have a question, and please direct your question
00:50to one of our speakers. And I'd like to remind our audience that this session is on the record
00:54and may be reported by a Fortune reporter. So I wanted to start off this conversation
01:01with the now infamous cybersecurity attack earlier this year, where a Hong Kong-based
01:06finance worker was duped into losing 200 million Hong Kong dollars by a very believable deepfake
01:12of his company's CFO. It's a novel AI spin on a tried and true phishing scam. But to start things
01:19off, I wanted to ask our panel, and maybe Scott, you can begin, you know, how is AI really affecting
01:25the ways that cybercriminals are targeting companies? Is it a matter of old tactics being
01:29done faster and at greater scale, or are there novel attacks that we're now seeing thanks to
01:34generative AI? Thanks, Nick. Great question to start with, and can I say that it's an absolute
01:41pleasure to be here as part of the Brainstorm AI program. What we're seeing at this point in time
01:48is certainly a widening in the breadth and volume of attacks across the board. In the last
01:5512 months, when we've engaged with our clients en masse, we're seeing roughly a doubling in the
02:02number of quantum of ransomware attacks, and in fact, phishing attacks have risen by over a thousand
02:09percent in the research that we've done. During that process, what we're seeing from generative
02:15AI and new large language models, they're enabling certainly more sophisticated attacks,
02:21greater volume, a lot better nuance and context about people, the ability to generate new
02:29contextualized based phishing attacks has certainly improved from the attack perspective.
02:36We're not seeing anything particularly novel, but as I said, certainly that volume and that
02:41sophistication and the specificity of the attacks are certainly rewarding the attackers
02:48and those who wish to do harm. With regards to deep fakes, it's certainly we're aware that
02:55there was this ending tidal wave. We've talked about it for almost 10 years now about the
03:01implications for deep fakes. However, very typically in enterprise, we're aware of them,
03:07but a lot of people don't do things until they're actually in there attacking that risk surface.
03:13So certainly the sophistication of deep fakes, the ability to generate them
03:18with relatively few resources is incredibly problematic across the board.
03:24One thing I can say though, Nick, is that old fashioned concepts about good cyber hygiene,
03:31and number two, around having very dedicated and continuous training and awareness programs
03:37is absolutely essential. I know even in-house at Accenture, we're a company of 750,000 people
03:45ourselves. We have continuous engagement with the internal security team to teach not only
03:53the workforce, but our leadership team, our board to continuously engage them in programs around
03:59phishing and deep fakes. The other thing that we're doing is some of my colleagues like scaring
04:05the living daylights out of each other and preparing deep fakes under certain scenarios
04:11to see if they can tempt or confuse each other. So I think that continuous training and awareness,
04:17continuous validation, continuous testing remains paramount even in this new world
04:22of large language models and very rapidly developing multimodal models.
04:26To keep on this idea of scaring people, I want to bring in Maria.
04:32From your position and working in the financial sector, what are some of the potential attacks
04:38or the novel attacks using Android and AI that keep you up at night, that you are most concerned
04:44about as a major bank?
04:48Sure. I think the sorts of things that keep me awake at night are probably the same things that
04:53are keeping many other people awake at night. To the points that Scott made, there's a few
05:00different things to think about here. The first one is that AI more broadly, but gen AI specifically
05:07is fueling the potential for large-scale misinformation. Things like deep fakes are
05:13just an example of that, but so could undermining trust in democracy around elections, etc.
05:22That's another area where we could see, and we have seen a lot of risk. Similarly to that,
05:28phishing, especially very personalised targeted phishing is another example of some social
05:33engineering where potentially misinformation can be used or generated as a result. But in the end,
05:41I think I like to think about it in terms of different categories. So there's offensive AI,
05:46which is basically AI being used in order to attack. These are all examples of that sort of
05:53things. Unfortunately, AI, which is incredibly useful and powerful, is available to both our
06:00friends and our foes. Increasingly, we do see threat actors using AI in order to automate and
06:09really power their offence. So without an AI-powered defence, we really can't stay in
06:16that game. We will be outmatched. So there's offensive AI or attacks using AI, but then
06:23the flip side of that is that AI itself has risks. So there's also adversarial AI where an adversary
06:31would attack AI. So we've also got to be really mindful that whenever we use AI, we're actually
06:38opening up potentially new vulnerabilities for threat actors to use.
06:44I'd like to bring in Jennifer now. How much damage can a cyber attack do to a company?
06:52And if cyber attack happens, how might you at WTW actually evaluate the cost of that damage?
07:00Yeah. So as a risk and broking firm, we're viewing cyber security risks,
07:06which is a huge umbrella term, but we're looking at it as the financial impact that's going to
07:10cause the company. And the scale of financial impact is huge. Just as Scott and Maria were
07:18mentioning, the sophistication and severity of the attack has just accelerated with the likes of
07:27AI and under that gen AI. So we look at things through lenses of frequency and severity,
07:34and on both axes, unfortunately, those have escalated with the advancements in technology.
07:42So how big it can, how big it could hit your company, various factors involve that,
07:50your geographic footprint, your revenue models, if you're a manufacturer, the number of plants
07:55you're operating. But if you think about what your maximum possible loss could be,
08:00what is really going to cripple you, that takes a really methodic approach to assessing what is
08:09our revenue stream, have we done the business continuity planning? And yeah, that involves
08:16data analytics, which we use to help our clients assess that.
08:20And maybe to kind of wrap up this topic quickly, Calvin, when you talk with companies,
08:27what are the sorts of things that they are most worried about? And is that the same thing as what
08:34you are most worried about when it comes to cybersecurity and AI?
08:40From the government perspective of Singapore government, AI indeed is coming in very fiercely.
08:48Everybody is trying to widely think of initiatives to adopt. That's good in terms of
08:54productivity, but AI also has, is actually nightmare. Like what just now Jennifer mentioned,
09:02the frequency and severity have increases because of AI. Why is it so? You can easily craft a
09:09phishing email, you can easily automate a malware. You don't need to be a cybersecurity
09:15professional. You're able to craft a malware using a chat GPT and do it so simply. And you're able to
09:21deploy ransomware as a service so easily. So things are being simplified, doing even things is
09:27simplified today. And that's going out like crazy whereby a lot of people are just taking the
09:34opportunity to seize this sort of tools in the market to attack. So this is a very worrying part
09:40of AI. People are embracing AI for the wrong reasons and taking AI to simplify things for
09:48attackers to actually go and penetrate into systems like that. So that's the key part of the AI that
09:54from a government perspective. That's where actually we think that we work with industry
10:00and try to come up with initiatives. How do we bring up this level of hygiene awareness
10:05and how do you practice safe AI when deploying such system?
10:11This is actually a good segue to kind of return to Scott here, because you mentioned
10:15cybersecurity training, improving levels of digital hygiene amongst companies,
10:21I think particularly amongst employees. I'm going to phrase this question frankly, which is how do
10:26you defend against people using generic AI in unsecure and let's say dumb ways? How do you
10:34protect against that? Nick, thank you for singling me out for that question. I'll try not
10:41to read anything into it. So Maria mentioned previously, a very traditional approach to
10:50security or information assurance revolves around a governance, protect, defend approach.
10:57And so when I look at that, I really have two types of engagement with our clients around the
11:04world at the moment. For those that are in highly regulated industries that are very,
11:11very risk aware, such as financial services or public sector, I have a slightly different
11:18engagement model now with the C-suite and the board of directors rather than necessarily coming
11:25just by the C-suite. And the rationale for that, Nick, is that in those highly regulated industries,
11:32boards of directors are thoroughly aware of their obligations around culture and risk and
11:37other things. And so are really setting the core principles for their organisation of how they want
11:44to achieve the future. And the governance of AI and security and technology are very clearly
11:52delineated according to those broader principles. From our foundational perspective, on the other
12:00hand, there are clients that I meet from time to time who try to move straight to a use program
12:09of AI without fully thinking through the governance and a number of other things.
12:14And in that case, I'm sure it wouldn't be a surprise to the listeners today that sometimes
12:21that doesn't go too well, where proof of concepts without well-defined guardrails and governance
12:29identify information that would mortify a C-suite or board of directors. And in that case, the goal
12:36is then to use that terror that comes from identifying that well so that there is a more
12:42proactive approach. So I guess it's a long winded answer of saying the nice thing is that the
12:48generative AI is taking risk and security even more and in an even more accelerated fashion
12:56towards the board of directors so that both the governance and then putting in the guardrails,
13:03identifying some key principles such as least privilege and segmentation of risks
13:11and other things. Some of these core principles of zero trust are now being embedded in generative
13:18AI programs. No doubt the last thing I'll have to say to that is that at the core of all of this
13:23is data and information. So some really old-fashioned concepts such as securing data at rest
13:32in transit and in use, this is a really good set of concepts to apply. So I think what we're doing
13:38is that we're seeing new models of engagement, new risks identifying, but some very old-fashioned
13:44concepts that go to the core of good security hygiene becoming even more important today
13:50moving forward. Nick, can I add a little bit more to that? I think that was a fantastic answer
13:56from Scott. Ultimately what we're talking about is trading in trust, sorry my light's going off,
14:01trading in trust. So organisations like ANZ, a large global financial institution,
14:09our purpose is to shape a world where communities and people thrive. So we take
14:14it really seriously to make sure that security as one part of this is done well, but it's a
14:21much bigger story and Scott's really, I think, put it really nicely. In the end what we're
14:28talking about is trustworthiness in the way that we use AI or Gen AI specifically and
14:35trustworthiness has a lot of different pillars that contribute to it and some of those will
14:40build trust, like rolling out new features, and some of those things will degrade trust,
14:46so poor design, poor quality, poor security, privacy failures, fairness, you name it. There's
14:52a whole range of them and so we've got to make sure that we balance that, but equally,
14:58you know, you sort of said, you know, how do we stop people from doing silly things? Well
15:02innovation can come from anywhere and I think it's also really important that we
15:07identify good ideas and capture them because not innovating is also a risk, because if we
15:14innovate that can also build trust. Not innovating can be a risk and it can lose trust, so we've just
15:20got to constantly keep our eyes on, you know, what are we actually doing to make sure that we continue
15:25to serve our purpose. And Maria, how are you at ANZ trying to find that balance between, you know,
15:34encouraging innovation, finding those features that build trust, while also warding off those
15:40developments that would degrade trust? How are you at ANZ trying to navigate that balance?
15:45Yeah, well one of the most important things that we're doing is we're bringing together all the
15:50various people in the puzzle because there's lots of, as I said, there's lots of these different
15:54pillars and they all can degrade or build trust, and so making sure that everyone who is involved
16:00is together in doing this, but then also making sure that we break down the silos
16:04and there is a way, a pathway to actually building out AI. So our CTO, for example, he does a lot of
16:12work to promote the use of AI and we've got a, you know, a way where we can put, anyone can put
16:20forward ideas, those ideas can be assessed in terms of their value and their practicality, and
16:26then if it's green lit, then we've got a pathway for experimentation before we then industrialize.
16:34And so that pathway around experimentation is, if you like, a safe way for us to understand what
16:40we're talking about. You don't always understand what the risks are when you embark on a journey
16:45like this. You actually need to go, you know, take a few steps forward, see what happens, understand
16:51the risk, do it in a contained way to make sure that, you know, you don't roll it
16:55out to an entire organization or all your customers straight away, but really that whole,
17:00you know, experiment, fail fast, quickly. I mean, keeping on this risk point, to go back to
17:07Jennifer, I mean, when you're talking to other companies, again, are you seeing a difference
17:14between what companies think they're worried about when it comes to cybersecurity and what you
17:18think they should be worried about? Is there a gap in kind of understanding cybersecurity risk
17:24between you and the companies you're often talking to? I would say the level of
17:30understanding and awareness does range in spectrum based on the company size and
17:35obviously industry. So like the financial institutions industry, they'd be front
17:41runners in their awareness of cybersecurity. I mean, they're heavily regulated. So it was really
17:45great to hear about how Maria, you know, that you balance that, you know, drive to be innovative
17:50while also being, you know, in a controlled environment because, you know, a lot of eyes,
17:55including the regulators, are on banks and finance institutions because they are such a strong
18:00pillar and basically, you know, like infrastructure of society. So it does range in
18:07spectrum of awareness, but we have seen a huge scaling up of awareness. And I think,
18:13you know, just even think reaching board vernacular, like ransomware has reached board
18:18vernacular and gen AI and cybersecurity. These are all terms that boards are now more familiar with
18:24and conceptually, they get it and they get that it's not just something that's fixed,
18:31like, oh, our IT team are on it. Oh, they understand that it's a spectrum of risk. I
18:36think they are seeing cybersecurity more through a risk management lens. So, you know, you accept,
18:42avoid, mitigate or transfer. I think perhaps before, well, avoiding is not possible, you know,
18:49to be a 21st century operating company. You can't avoid that cyber risks. You have to be connected
18:56digitally. Mitigation, you know, there's been a huge ramp up in investments. And thankfully,
19:04we've got, you know, industry experts to guide us through, you know, Scott at Accenture,
19:08he's guiding on emerging technologies. So holding the hands of companies that need stewardship
19:13through these times. And then more and more, there is that understanding of risk transfer. So
19:20they understand we can't possibly meet and treat every single risk. So there is an element of
19:26insurable loss and that they transfer that to the insurance market. So, yeah, the level
19:33of understanding is improving. I do want to get into this topic of kind of mitigation,
19:39how you deal with the damage caused by a cyber attack after it's happened. But I do want to
19:44bring in Calvin first, you know, can companies kind of help set best practices on cybersecurity
19:54for others in the industry? So it's not just the government telling what they have to do,
19:58but can companies play a leading role here too, perhaps to their customers, to their clients,
20:02or to their suppliers? Before in our previous prep call, we talked about kind of supply chains
20:08and how that might be an avenue for cybersecurity training and sharing best practices.
20:14Yes. I think when we want to deploy AI, we need to deploy AI in a secure way. We must think
20:23further than just developing a system from a design perspective to deployment,
20:29even operationalize it to your staff, your client, or even to your third party suppliers that's
20:36coming in. And in the end, you even have to think of how do you actually take away the system and
20:44make sure the system are cleaned up before end of life of the system. So there's a whole full
20:49spectrum of thinking over here. It's not just trying to quickly implement something and get
20:54usage from AI, but actually it's how to use it securely throughout the lifecycle of the process.
21:01So what we are talking about here is that I think we have here a lot from the design
21:05and development cycle. There is actually of the board need to understand when they're
21:10deploying such system, what's the kind of risk and limitation and what's the consequences of
21:14deploying an AI that can actually crawl information all over and produce information
21:20freely without consulting somebody. That's where Scott talked about guardrail has to put in place.
21:25So during the development cycle, you have to think about all these consequences and put in technical
21:31things like guardrail, even prevent data poisoning whereby people may influence your data or even
21:36model poisoning. These are some of the, and even the access control, who should have the access
21:42control to understand and query possible information. So there's a lot of thought,
21:46even from engineering, you can get a lot of information out inconsequentially. So there's
21:54some of the confidentiality may leak out from that space. And sometimes there's an unintentional
22:00misuse of all this information also. So at the end of it, even at the disposal and from the
22:07third-party supply chain issue, we need to understand how does even free models or even
22:14models that supply, is there vulnerability? Is there an opening over there that actually
22:19there's consequences caused by this cybersecurity practice? So we need to look at it holistically
22:25in terms of the full spectrum. And at the end of it, when you dispose and you thought your
22:30information has been disposed, is it really disposed properly or not? Or even when you
22:36train some model and you thought that after training this model, you can keep your data
22:41properly, but actually your model have learned some parts of the data. So these are some of
22:45the things that in the embedded model design that you need to understand what sort of risks
22:49you are going to and do a proper threat risk assessment on it. And manage this risk at the
22:56organization level, because sometimes this kind of risk is not just a technical risk. It's actually
23:01there's organization impact, reputation impact. It can be, the whole company can, I talked to a
23:08whole company that it can be down because their design is out there in clear. And how is that's
23:15the secret piece of the whole company design. So things like that have to think through what
23:20is the consequences at the highest level of understanding the consequences of deploying
23:25an AI model, what are the risks and limitations. Scott, is there something you want to add onto
23:31this point? Yeah, Nick, every now and then I hear words come out of my mouth and I think, oh my
23:38goodness, I sound cliched saying it, but listening to Kelvin speak just then, I'm reminded that
23:44we've always had an adage that cybersecurity should be a team sport or is a team sport. Now,
23:51I do remember times when it certainly wasn't a team sport, no matter what anyone pretended
23:57to be doing. But certainly the last five years, I have never seen a time where collaboration
24:05has been greater or better intentioned. I read a ransomware agreement between 27 nations a
24:13couple of months ago. Now, the thought that 27 nations would agree upon anything is historically
24:21never a very easy bet. Now, I almost always see Singapore, Australia, Japan,
24:29the USA, Canada. I see this wonderful collective of people thinking about
24:37a laws-based or rules-based world where people can be secure in cyberspace. And the terminology
24:45that we've all been using around AI, safe, secure, and responsible. I'm seeing very much that people
24:53are trying to live up to that paradigm and to engage collaboratively. And I'm not only seeing
24:58it at Kelvin's level, at a national security or cyber agency level, I'm also seeing it at an
25:04industry level. So for me, working in innovation and new technologies, the most dynamic places I
25:13think that I've been in the last 12 months have tended to revolve around things like
25:18financial services ISACs, whereby the level of innovation thinking through as a community,
25:26the issues around generative AI, around quantum security, around new worlds of insider risk
25:34management. These are really very innovative communities that are starting to build.
25:39And I think we're getting the benefit out of the diversity of the backgrounds and the diversity
25:45of thinking and the diversity of the way people invest and prioritise. I think this new world
25:50that I've seen over the last four or five years is compelling and hopefully becomes very rich
25:56for everyone. Can I just add a bit more to that, Nick? I think that's been a really useful conversation.
26:04And obviously, as a global bank, we are in a heavily regulated industry. And I think it's
26:10really important to remember why regulation exists. A lot of people don't necessarily
26:15understand the breadth of how regulation can operate. In the end, regulation is anything
26:20that government does to influence behaviour, which has a benefit in society, the economy,
26:26national security, etc. And so if you think about it from that broad perspective,
26:32there is absolutely an economic imperative to using AI because we want that economy to be
26:42strengthened and built and no country wants to be left behind. But there's also an economic risk to
26:49insecure AI or poorly executed AI. And so regulators have to balance this and really make sure that
26:57they drive innovation effectively, but then ensure that the right guardrails are in place.
27:03And I think that for the private sector, the not-for-profit sector, many individuals can all
27:10work together with government to make sure that when we do do regulation, we are doing it in a
27:16way where we are going to achieve both the imperative as well as manage and mitigate those
27:22risks. As I said, no country wants to be left behind. But at the same time, regulators need to
27:28or the government needs to be focused on those national opportunities as well as the individual
27:33risks to particular citizens. It's not straightforward. It's not easy.
27:38Yeah, I do agree with Maria. The point is that actually right now is really a team sport.
27:45It's actually, it's only through these few years you can notice that actually,
27:50not only government, regulator, every last time enterprise will say, hey regulator, you make some
27:55rules that you don't actually cascade down and we have a hard time on it. But you see this game is
28:00everybody is in it from the regulator, government, everybody looking at an interest, everybody knows
28:06that it's so useful. But how do we play a part individually to actually secure the use of AI?
28:15Right from the top, right from the manufacturer and the people who use it, use it responsibly.
28:21So these are some of the thinking that we're going to shift the landscape and shift the mind of the
28:26people. And I'm very happy that actually on the ground, this is happening in the whole world.
28:30And so many, we have so many conversation around with UK, US, Australia, even in South Korea,
28:37about all these that we want to do things together. We want innovate together, but
28:41innovate securely. Yeah. And everybody knows the potential. Thanks.
28:46So I want to remind our audience, we are open to questions. So if you have a question,
28:51please put it in the Q&A function, little box on your Zoom. While we kind of wait for some
28:58questions to come in, I do want to ask Jennifer, and you mentioned before the need to kind of
29:06mitigate the damage of cyber attack to kind of understand the loss that might be, insurable loss
29:10that might happen from a cyber attack. Say the worst happens, cyber attack happens.
29:17What should a company do to mitigate that damage?
29:22Well, ahead of that attack happening, I think the best way you can really mitigate the fallout,
29:28the financial fallout is preparedness. And so Scott touched on it earlier, I think it was,
29:33really getting down, getting in basics. And part of that basic preparedness is making
29:41sure that you have simulated what an event might look like, that you've got cross division
29:49awareness of business continuity plans, incident response plans, that this incident response plan
29:56isn't concentrated and only known at the IT level. So that there are various other stakeholders
30:03into a security issue. So there's legal ramifications, even corporate communications,
30:09HR potentially. And so possibly it's even making sure that those divisions that might not be aware
30:17that they are a stakeholder in a cyber security threat, that they're aware that they have a part
30:23to play. And so the best way is preparedness, like anything. And so you'll have many external
30:32consultants that can assist with tabletop simulations. I've seen a real trend in the
30:37last few years with our clients that are running these tabletop simulations. And that even that
30:42is getting the board understanding what the ramifications could be. So having conversations,
30:50being aware of a playbook and saying, oh, is this playbook up to date? Do our incident response
30:57plans even factor in our activation of our cyber insurance policy? Do people know whose role and
31:04responsibility is to contact external vendors? So that is going to be the best way to mitigate
31:11the financial flaw and make sure that if the worst were to occur, it's not a round of headless
31:18chickens running around in a panic mode. People are aware of their roles and responsibilities.
31:25So we do have one, we have a couple of audience questions now. I'm going to pose this first
31:30question to Maria, who's already answered this question in the Q&A box, but to pose this question
31:36out loud. So say your engineering team approaches you and says, we want to use your AI to accelerate
31:41code development. How would you deal with that kind of request? How would you think about that
31:46kind of request in terms of ANZ's operations? And then how would you test their products?
31:54Test what is produced to make sure that it's safe and secure?
31:57Yeah. So first of all, I'd be saying, fantastic. Let's see what we can do with this. Because
32:03as I said before, not innovating can also degrade trust, right? And so we do need to be really open
32:09to how innovation can make us better and stronger and faster. So in terms of how we would approach
32:16that, as I said in the answer there, there's two things. First thing is that we need to
32:21test things the way we always have. Make sure that we use the same kinds of techniques,
32:25because it's just code in the end. The way that it was created is irrelevant. We still need to
32:30test it in the same way. But secondly, a lot of organizations, including ourselves, are exploring
32:36the use of gen AI for automating or doing the testing itself faster, more scalably. Because
32:43I think one of the biggest risks of all of this created new code is keeping up with the pace.
32:49And so we've got to make sure that we can do that from a security perspective as well.
32:54We have another question from the audience. A lot of speakers have talked about countries
33:01and groups and governments coming together to work together on cybersecurity. However,
33:07and I think we do need to admit that there's geopolitics at play here. Countries have
33:14different views on national security in terms of AI. And so the question is, how do we best
33:21nurture technological growth, but also balance the geopolitical aspect of this?
33:28I might pose this question to Scott first, and then I think maybe others on the panel like Calvin
33:35may have additional thoughts as well about how to navigate the geopolitical landscape when it
33:41comes to AI. I think, Nick, I'll be very brief because I think you were probably right talking
33:48to Calvin initially. And I know Maria, I'm going to put her on the spot that she's come out of the
33:52public sector not so long ago. So I'll actually defer to them. But we, of course, are seeing a
33:58lot more conversation around what the concept of sovereignty means in a digital world. We're
34:07starting to learn and to refine the definitions around sovereignty. Sometimes sovereignty is
34:14outrageously inconvenient, particularly when scale and computing power and other things are involved.
34:23So I think this is an ongoing conversation, Nick, as to how countries manage their sovereignty well
34:31and with partners and supply chain that they believe that they can risk assess appropriately.
34:39Having built my own businesses in the past in the sovereign space,
34:45I'm looking forward to seeing where this lands in five years and 10 years time. But as I said,
34:52I'll gladly defer to Calvin and Maria about this issue. Well, in the spirit of that ongoing
34:58conversation, let's bring in Calvin. You're in government. How are you thinking about the
35:04geopolitical landscape and how that interacts with AI? I think there's a few perspectives of things
35:12when we meddle with geopolitics. First, of course, for the interest of the country,
35:18we're looking at how can a technology potentially help to grow the economy and help to bring a
35:24better digital life to our citizens. So from that perspective, we embrace that kind of technology.
35:31Of course, there's a national security interest in some of these geopolitics topics.
35:36That's where actually when we deploy technology, it's not like those days we just deploy because
35:42there's a lot of consequences in play. Availability of the technology, the manpower, the capacity
35:49bearing strength of this technology, and how far this technology can bring us.
35:55Will it cause disinformation? Will it cause a nationalist issue? So this, we have to
36:02involve a whole ecosystem of people looking from sociology to politics into this play to
36:09understand what are the consequences of technology now. And from the angle, at least
36:15that's where the first step being there's a geopolitical, yes, let's manage it part by part,
36:21whereby actually that's where we can seek a balance of how do we still have a balance of
36:25guarding certain interests and yet able to enjoy the technology in a balanced environment.
36:31It's not an easy topic, definitely, but I have seen that actually countries are more forward-looking
36:37into looking at a bigger picture and understand which part to talk about it, which part we do
36:42not want to talk about it, so that actually things can move on and we can progress with
36:47the common interests of things. So these are the kind of ground that we talk about.
36:56So I'll add a little bit more to that. Since Scott was happy to drop me in it,
37:03I won't pretend to have all the answers for this at all. Like as I said before, it is not
37:07easy at all. This is, and I think what's also important is that the private sector understand
37:15that it's not easy for regulators and regulators understand that it's not easy for the private
37:20sector. So I think everyone understanding the complexities that we're all dealing with
37:24is the first part of getting this right and just being really frank and honest with each other
37:31about how, you know, what are we mutually, what are we all trying to achieve here?
37:36What are the problems that we're trying to solve? The private sector cares just as much about the
37:41problems that the government is trying to solve, such as driving innovation and avoiding the risks.
37:46And so how do we make sure that we work together on solving them in more effective ways?
37:52Some of the ways of doing that include, you know, and some people call them public-private
37:57partnerships, but basically collaboration. As Scott was saying, we've never seen as much
38:02collaboration as we're seeing these days. And it's increasingly important because no one actually
38:07owns all the risk and no one has the ability to address any particular risk fully. And so we
38:14just have to acknowledge that in this incredibly complex society that we live in, it is about
38:19shared risk and it is about shared accountability. And that means that we've just got to, you know,
38:25take off all the pretense and work much more together. The insurers also play a really
38:30important role because insurers have this ability to be able to look across and see the consequences
38:37of poor decision-making and how that actually plays out in incidents across industry. So there's lots
38:43of different pieces of the puzzle, lots of different perspectives and bringing them together
38:47is really good. A good example of this is the Fintel Alliance that Austrac here in Australia
38:53set up a few years ago, where it's a combination of getting the right people in the room to
38:59understand, you know, what are the risks, what's going on and how do we address them. And they
39:04look at financial intelligence, but also looking at technological advances and how do we share
39:10the problems that we're trying to solve. Many years ago, we used to say that, you know, in order
39:17to be able to address these risks, everyone had to share all their information with everyone else.
39:22We've moved well beyond that and we're now at a situation where we can actually much more
39:27effectively share solutions and, you know, disaggregate the way that we approach these
39:34risks. So it's not simple and we're still learning as we go.
39:41Jennifer, I think...
39:42Oh sorry, Nick, I'd love to just jump in if I may for one second. I know you're going to
39:48jump to Jennifer a bit. I'd really love to accentuate something that was underpinning
39:55what Calvin and Maria said, which was about people. We're here to talk about generative AI
40:01and the assumption is that this thing is going to transform our lives magically or destroy our
40:08lives magically. It's not going to do either. What we're talking here about, how a new technology
40:16like the internet or like cloud computing is going to change people's behavior and enable them
40:22in a number of fashions. So I think I'd love a call out to everyone to take away from this
40:31that the largest investment that we're seeing and being really well utilized around AI in a
40:38safe and secure manner is an investment into people. I still do talk to a lot of clients
40:46who are very focused upon the technology and unfortunately I see us falling into the same
40:50trap where people are going to become trapped around particular large language models by the
40:57decisions. But to me the greatest investment should be in our own people, whether that's
41:03within a bank, within a government agency, within a giant company like Accenture
41:08or within a community. So you know our goal here is to educate people about the risks and
41:15opportunities for generative AI so that they are thoroughly prepared to use this and adapt it as
41:23the world changes. So I love that this conversation is about people at the end of the day. Sorry
41:29Jennifer, I cut you off. No problem and I'm sorry I'm blurry at the moment. I'm not trying to go
41:35incognito or anything. You're being extra secure. But actually you know and I'm happy for you to
41:45wait so that but I did have an audience question come in they might be able to help with.
41:49You know the the audience member is asking you know when generative AI came into the picture
41:54wasn't the case that business leaders particularly those not involved in security not involved in
41:59those responsibilities maybe were more inclined towards talking about benefits and now that
42:06they're seeing the the potential threats they're getting caught off guard now they've pivoted back
42:11to talking about resilience. Is that a pattern that that you've seen in talking to clients about
42:16risk management and you know one thing I add to this question is you know particularly in sectors
42:23that aren't tech that aren't heavily regulated like finance and aren't strategic and have the
42:27geopolitical interest in them if you're just like a normal company that just does business
42:32how you know is it is it how do you talk to them about cybersecurity and how are they kind of
42:37moving back and forth between you know benefits to resilience and kind of swinging
42:44back and forth on this pendulum. So yeah I mean the I think the point was made earlier that
42:54insurers we are generally when you're working in insurance and you're working in risk you are
42:59constantly seeing the worst case scenarios manifest so we are generally pessimists by nature
43:06and so we're not we're not trying to stop that innovation or stop that that you know the
43:12acceleration of developments but we are there to I suppose justify and foster the guardrails so
43:19that we are moving forward in a safe and secure manner and so what we're seeing with
43:27our clients the ones so just taking the ones that are not in a heavily regulated industry that
43:33may be you know perhaps you could say a little bit less sophisticated with gen AI I mean all of
43:40these these buzzwords are you know the leaders are confronting them but I think it's what's making
43:46them jolt up is that they need to get their basics right so they're needing to get you know really
43:52basic cyber hygiene right so I guess if I could use an analogy you could have these you know
43:59machine learning AI sophisticated cameras on your house but if you're basic if you're not
44:06using you know basic front door locks like one of the point of these very fancy cameras
44:11so they understand that the risks facing them are very sophisticated they need to get
44:18their basics right and they need to invest where perhaps there was minimal investment before because
44:22they weren't in a regulated industry and so the the call for investments into their IT staffing
44:30their IT security staffing is there's a huge push for that so it makes the conversations for IT
44:35teams a lot well not a lot easier but a little bit easier yeah full respect for IT teams that are
44:43you know they're meant to do world's work on like budgets of a shoestring so yeah I'd like to
44:50bring in Maria for my next question you know for most for most of the day we've talked about you
44:57know risks risks risks but we haven't talked a lot about opportunities that AI might present to
45:04cybersecurity and I know I know you've written about this and so I want to ask how does AI and
45:11generative AI you know help cybersecurity teams do their job better what are some ways that AI can
45:16help with cybersecurity as opposed to posing a cyber threat absolutely so I think you know this
45:26is really important for an organization like ours we have more than 10 billion data events coming into
45:33our sock every day and so obviously with that volume of data we can't have human beings they're
45:39looking at every single thing so around 35 percent of our incident response has already been automated
45:46thanks to machine learning and AI and I think there's opportunities across the whole life cycle
45:52for cybersecurity both in terms of what a sock might do but also much more broadly around what
45:58we do in policy setting policy the way that we communicate and you know share information
46:05automatically generating that for example or you know even tailored phishing exercises internal to
46:12the organization etc there are many many ways that we can use AI and I think it's just important
46:18that we remain creative and open in the way that we think about this but at the same time we've got
46:23to be absolutely sure that we do them well that we make sure that we maximize the value minimize the
46:29risks and the risks are not just about security risks though of course that is a really important
46:34thing for us to make sure that we do there's also risks around fairness and you know making sure that
46:40the design is effective and making sure that things can scale and all sorts of different things
46:46that we we need to make sure that we focus on in terms of Gen AI specifically I mean the answer to
46:54the question there around testing of software that's definitely something that is of real
46:59interest for for me around how do we have how do we use Gen AI for improving the way that we do
47:06things how do we increase our productivity because I think there's enormous potential there
47:11for freeing up our staff from doing the sort of mundane more tactical work to doing much more
47:18creative and strategic pieces of work. Can I follow on from Maria there you know I had the
47:26privilege of you know helping set up Accenture's generative AI security program and you know when
47:34you've got 22,000 security people inside your organization there's kind of a challenge to do
47:40something that is worthwhile and at scale so we we took a triangulated approach of working with our
47:47big infrastructure partners like Google and Microsoft and AWS some of our big security
47:53partners like Cloudflare and and Palo Alto Networks and secured it out and triangulated
47:59them with some big clients and assumed that if our infrastructure partners our security partners
48:05and clients had the same systemic problems or issues or challenges or opportunities in security
48:12and we were observing the same thing it was pretty good assumption that walking down that track and
48:18building a new use case or capability was worthwhile and we kind of put that into three
48:23buckets that we thought generative AI added a new dimension in its ability to automate and
48:29orchestrate and do all the fun things that we talk about. Number one it was it was relatively
48:36good at overcoming a siloed approach and a lack of understanding and data when it comes to threat
48:42intelligence so big T it's been really helpful about working through really really large
48:50multi-faceted data sets and data lakes about understanding what intelligence actually means
48:55in the threat landscape. Number two certainly as Maria mentioned the amount of work that you can
49:00get through or toil the ability to automate these tasks that are repeatedly that can be standardized
49:07and repeated very key but number three coming back to the people aspect in the third T talent
49:14it's been a wonderful augmentation tool around taking people from various skill sets and
49:21backgrounds and giving them augmentation around understanding security understanding risk but also
49:28to give them significantly more experience that they've had in the past so we really focused on
49:33that triangulation partners clients and then using those three Ts to transform the security
49:40capabilities that we can offer out to the world or that we deploy ourselves.
49:44How's that I've got a I got a question for you now you know so not every AI program works out
49:53I think we're starting to see some companies scale back their AI ambitions after they realize it'll
49:58be more costly more expensive maybe not as ambitious as as they otherwise would have hoped
50:04but then what do you do what do you do what what should the company do when they're trying to wind
50:09down an AI service that they've tested and they have all this data like what's the data cleanup
50:15process what should that be like from your perspective? So actually we have mentioned
50:21you know from CSA angle or we equip companies how to securely deploy your AI system and how
50:30to close off an AI system so some of this detail you can refer to our website.
50:36One of the key things is that we want to understand that actually where does your
50:40crowd do work is your data set still holding a model after you're training the model typically
50:46you use a data set to keep training the model and of course today you have not heard anything
50:53people doing RE on the model embedded model to actually retrieve the data but is the way you
50:59train a model gathering all this data point and typically it's unencrypted so all this data how
51:05do you transfer in so watch back the process of making sure all this data are cleaned up properly
51:11and if you are unencrypted how serious is your data what's our classification and you may have
51:16to do a few type of erase or even take back the hard disk if you are so fearful that people can
51:21take in from the memory so there's different proceeding to see what's the data set that
51:28proceeding to see how you should clean up this data and actually first you don't just think that
51:34actually you're just putting a data in a black box and then you don't know what is it and hopefully
51:38the black box will solve everything and clean your data up no there's no such things there's
51:42you need to wash your data please yeah so these are some of the things that we have delivered
51:47some of these guiding principles in our website to guide people how to develop AI safely
51:54two is how you engage AI model use it effectively and third way is some of the governance perspective
52:02of transparency explainability these are some of the things that we want people to understand AI
52:08so based on just now some of the topic we're talking about is that actually from the Singapore
52:14government level we are quite proactive in looking at what are the kind of systems that just now you
52:20mentioned they may not be a typical highly regulated system they may not be the tech
52:25industry but they use AI but sometimes they do not know that using AI may have some consequences
52:32is it a safety consequences or is it a it can be no consequences but we try to pick up those
52:38industries there's some safety concern if there's some safety concerns that human life may be
52:44on the plate so that's where we would really encourage them to engage them to say you need
52:51to train your people you need to train your board your developer to understand is there such
52:56consequences or not so we are actively trying to narrow down some of these systems that may
53:02have such kind of implication and consequences then actually we want to go upstream and talk
53:07to the people the enterprises the kind of people in the public market to understand to make them
53:14understand what kind of risk they're going to yeah but of course they need to realize the opportunity
53:20of using the tools to yeah thanks. So I started this conversation by asking about novel risks
53:29and I want to end our conversation by talking about novel uses of generative AI not solely
53:34for cybersecurity just like a use of generative AI that you that you personally think is really
53:38cool and I kind of want to get through get through all of us very quickly to kind of close off this
53:43conversation. Maybe I'll start with Jennifer. What's a really kind of cool use of generative
53:52AI that you've seen that you're really excited about? I would say like how a lot of our clients
54:00are using it and they are very excited so that makes me very excited it's the the threat hunting
54:04like the way that they can use these cybersecurity tools that simulate a threat actor so at like
54:11adaptive threat hunting that that is a very cool you know new development that we're seeing as a
54:18use case for generative AI. What about what about you Maria? What's a cool use of generative AI
54:24that's that that's getting you really excited as opposed to a threat that's keeping that's that's
54:28worrying you? Oh look you know how long have you got? I think there are so many opportunities
54:35and I think you were really touched on some of them today around how do we increase the
54:39productivity of our software engineers for example. It's incredibly useful there's so many things that
54:45we could do the targeted phishing for internal training and resilience I think is actually
54:51something I'm kind of keen on at the moment but then there's the simple things that you might
54:56want to do in your own life you know just like simple cool things like generating a Christmas
55:01message in the voice of Gandalf or something silly like that which is just cool and fun.
55:07Calvin, what's a cool use of generative AI that you're very excited about?
55:14I think I was supporting the healthcare industry. I think the cool thing about it is that
55:21next time when you go and see a doctor actually the doctor is AI capturing all
55:25the information that you have described to the doctor about your back ache and everything
55:30and even a nice message about how's your day and they are able to decipher and understand what
55:36is your problem and prescribe what is the things need to be done by the doctor
55:42in terms of a prescription and medicine that's wow it's cool, short, fast and pretty accurate
55:49but I think it needs the doctor to be adjusted to this sort of AI-assisted triage.
55:58Scott, last word to you. What's a cool use of generative AI that you're very excited about?
56:03Nick, I'm enamored with people who can think things or do things that I can't so watching
56:10some of our colleagues around the world and the clients I get to work with and the regulators
56:16and government agencies we get to work with resolving some systemic issues for making the
56:24world a better place so watching shipping routes become more efficient so that there is less risk
56:31on the waters but simultaneously affecting some key variables for climate change that's a really
56:37good side effect and I think we'll see more of that drug discovery that makes people's lives
56:43better so I think it's the really big philosophical things that get me most excited
56:49but at a personal level it's my wife's birthday next week, she wanted to go for a quiet lunch
56:54in a nice place so I'm taking her for a hike in the mountains instead and I've used generative
57:01AI to give me a climbing route that I would never thought of so I'm hoping that at a personal level
57:07it's simply as simple as putting a smile on my wife's face.
57:14So thank you all so much Scott, Jennifer, Maria, Calvin for joining us to share your insights we
57:20really appreciate it and again a big thanks to Accenture for partnering with us for this event
57:25today. Brainstorm AI will be held from July 30th and 31st in Singapore and also December 9th and 10th
57:32in San Francisco. If you want to submit your interest to attend please visit the links in
57:37the chat for each event. Again thank you so much for joining us today and hope you all have a
57:43pleasant rest of your day.

Recommended