Brad Arkin, Chief Trust Officer, Salesforce
Anthony Grieco, Senior Vice President and Chief Security & Trust Officer, Cisco
Lisa O’Connor, Global Lead, Security Research and Development, Accenture
Wendi Whitmore, Senior Vice President, Unit 42, Palo Alto Networks
Moderator: Sharon Goldman, Fortune

Transcript
00:00Hey, everybody, welcome.
00:04Hope you are all getting fully caffeinated, because I am.
00:11And we need your energy here at this breakfast, this first breakfast discussion.
00:15Thanks so much for joining us.
00:17I am Sharon Goldman, I'm an AI reporter at Fortune.
00:20I'm not an AI, I cover AI at Fortune.
00:25We're here to discuss the topic of how to build trust through security.
00:29Before we get started, of course, I want to make sure to thank our partner for today's
00:33session, Salesforce.
00:35So thank you.
00:38Some housekeeping to start.
00:39So the session is on the record.
00:42Just letting you know.
00:43And I want to make it as interactive as possible.
00:46So that means, you know, please raise your hand, we'll be sure to get to you.
00:50I'll be like Oprah, kind of pointing and giving you a car.
00:56And make sure to state your name and company before asking your question or commenting.
01:00Very important are these tabletop mics that we've got here.
01:03What you have to do to talk is to press the little button that looks like a little face
01:07with a voice coming out of it.
01:09And when you're done, you have to turn it off.
01:11So it's like on Zoom when you mute and unmute, because we can only have four mics going at
01:16a time.
01:17So that's the housekeeping.
01:19And to that end, I want to introduce our discussion leaders.
01:23So to my right, we have Brad Arkin, Chief Trust Officer at Salesforce.
01:30To Brad's right, we have Lisa O'Connor, Global Lead, Security Research and Development at
01:35Accenture.
01:37And to Lisa's right, we have Anthony Grieco, Senior Vice President and Chief Security and
01:41Trust Officer at Cisco.
01:44And last but certainly not least, Wendi Whitmore, Senior Vice President, Unit 42 at Palo Alto
01:51Networks.
01:52Brad, I'm going to start right away with you, and we'll get this discussion going.
01:58As Chief Trust Officer at Salesforce, you're obviously...
02:03This is right on point for this topic.
02:06I want to know if you could talk about the importance of trust in balancing safety.
02:12We're talking about the fast-evolving growth of AI-powered services.
02:16I was just talking to Ted over here about how enterprises are wanting to make sure that
02:22if they're sharing their customer data with third-party services, that it's safe.
02:30Clara was talking yesterday afternoon in her session about how enterprises are frustrated.
02:39They want to make sure that their employees can trust the AI they're using, that customers
02:43can.
02:44Can you talk about why this is such a big deal right now?
02:47Yeah.
02:48I certainly can.
02:50Sorry about that.
02:53So I'm a career security guy, and I was the Chief Security Officer at Adobe.
02:58I worked there for 12 years.
02:59I worked at a place called Cisco, where I was the Chief Security and Trust Officer until
03:04recently.
03:06And so when I took the job at Salesforce, the teams that I manage, I'm just leading
03:12the security function.
03:13And so at another company, I might be called a CISO or a CSO.
03:16But Salesforce, from its founding, was really focused on trust as a concept.
03:22And so if you spend any time with Salesforce people, they talk about trust is our number
03:25one value.
03:27And they say that all the time.
03:29And so for Salesforce, trust is much more than just security.
03:33And so it's about availability and contracts and anything where you could feel let down
03:39by your counterparty when you're engaging with Salesforce.
03:42And that's something that, as the Chief Trust Officer, I'm on the hook for.
03:45And so the budgets that I manage and the people that I lead day to day are really focused
03:50narrowly on the security work that we do.
03:53But when I report up to Marc Benioff, our CEO, I'm talking about the work that's broadly
03:58done by a much bigger set of folks.
04:00And it's things like availability.
04:03And when you get topics like AI that we were talking about before, the question around
04:08how can I, as a customer of Salesforce, engage with these new features and feel confident
04:12that Salesforce isn't going to do something against my own interests as the customer?
04:17That's really important.
04:18And so the trust work that we're focused on is making sure that we understand what's important
04:23for our customers and to their customers.
04:25And then how do we set them up for success?
04:27And for me, the big summary is in all of our work that we do is we try to achieve no surprises.
04:33And so you should know what you're getting and be well-educated going into it.
04:37And if we can make sure that that works out, usually it's going to work out well.
04:41Okay.
04:42So I want to put this out to the audience.
04:43How many of you are concerned about the increased security threats that are out there to defend
04:49against, the data privacy issues that continue to grow in this era of AI-powered services,
04:55and what that means for trust in your organization?
04:57Can anyone give an example of that that they're dealing with?
05:01Might be too early.
05:04Too early.
05:05Okay.
05:07Still eating and drinking.
05:08That's fine.
05:09We can totally circle back, but I'm coming for you.
05:12I'm eyeing all the way in the back, in the back row there.
05:14I see you.
05:15Okay.
05:16Wendi, I think this one will definitely spur a little discussion.
05:19I want to know what we're talking about here.
05:21Can you give some examples of the landscape we're talking about, the increasingly convincing
05:28attacks that are out there, the sophisticated attacks that are out there, the challenge
05:33to discern what's real, what's not real?
05:36I feel like that's all coming into play very quickly at companies.
05:39Absolutely.
05:40Hi, everyone.
05:41My name's Wendi.
05:42Just to give a little context before I provide the answer, the lens I'm operating in is the
05:50Unit 42 team.
05:51We're a services component of Palo Alto Networks.
05:53We like to say we're the special forces unit, if you will.
05:56The eyes and ears on the ground responding to these attacks.
05:59What we're seeing, especially particular to AI and how attackers are using AI, is three
06:05primary components that they're using to their advantage.
06:08I think the first is with just acceleration of attacks.
06:12Certainly removing language barriers from a communication perspective, the written communications
06:17that are so often needed to convince a user to open an email or a help desk to respond,
06:23for example.
06:24We're really seeing them leverage AI to accelerate that level of communication.
06:29There's also the second component, which is generation.
06:32Generation of new attack vectors.
06:33It's not widely used today.
06:36We're not seeing this in every investigation or even in 25% of them at this point.
06:41It's a smaller percentage than that, but we are certainly anticipating and doing a lot
06:46of research in our labs to show, hey, this is absolutely possible.
06:49You can circumvent the protections used in many of these LLMs to actually create and
06:54generate new malware, and that's going to be more rampant moving forward.
06:58And then the last would just be the introduction of new attack vectors within organizations.
07:03Understanding that your employees certainly are using AI probably in every facet of the
07:09business, it's challenging from a business lens to understand each of those applications
07:14and to protect them in advance.
07:16The reality is there is some opportunity for attackers to introduce new vectors within
07:22your organization, whether it's from the exposure of sensitive data or it's potentially new
07:26technology being leveraged via AI usage.
07:30I see for myself now when it comes to phishing emails, just as an example, I mean, that's
07:35been a security issue for years and years.
07:39My husband works in cybersecurity, and he's always like, don't open that email.
07:43But I feel like now I've become even more untrusting of what's coming in.
07:47I'm questioning myself even more than I used to.
07:51Well, I'm glad to hear that, though.
07:54I think that a motto we always operate by is trust but verify, and we should absolutely
07:59be doing that with any communication, whether it's coming in from a text message, an email,
08:04a phone call at this point.
08:05So I'm glad to hear you're taking it to heart.
08:08Well, I'm going to come back to some folks in the audience, but Ted, you and I were just
08:13talking about kind of like how the threat landscape is kind of getting bigger and bigger.
08:19Can you talk a little bit about how that's expanding even beyond what Wendi was saying?
08:23You just have to press that face with the, yeah.
08:29Good morning.
08:30Absolutely.
08:31Well, so I think what we were referencing is that there's a lot of concerns here.
08:36Let me just start off with the data, where I think in prior years organizations were
08:40really focused on personal data around privacy.
08:45But with the introduction of LLMs, predominantly when ChatGPT came out, it was really a focus
08:50around their corporate trade secrets.
08:53And that sensitive data is much more valuable and concerning because that's what separates
08:58one company from another.
09:00And so they've really spent a lot of time in protecting that and putting that in silos
09:05and putting encryption and all other access controls and everything around that.
09:09But now when they move into this Gen-AI world, they now almost need to put it all together.
09:13They need to put it together for internal RAG use, or to share it with a third party
09:18or a cloud provider or some other organization that's going to assist and help them.
09:23And that's a big concern, because when they really focused around perimeters, they locked
09:27that perimeter down in their prior structure.
09:31But now as they're putting it all together and sharing it, it's a whole new landscape
09:35where the data is really outside of those perimeters, where those access controls and encryption
09:40will no longer protect it, because the protection doesn't follow the data as it's carried in.
09:44And then you have another concern when you put it into the LLM, what we were talking
09:47about, is that the LLM strips away the access controls, encryption, and that protection.
09:51So now when you have the data coming out, you potentially have this exposure or disclosure
09:56risk around sensitive data, because it's generative and it's creating new data that's
10:02sensitive as well.
10:03And so it's a whole new world around how you protect that.
10:06And I think what we see is that organizations are really focused on that
10:12risk-reward, and that reward has been so important over the last year.
10:17But that risk is really stepping in, because that POC is not necessarily moving into production,
10:22because they don't have control of that sensitive data, or they need to add the sensitive
10:26data to get that.
10:27So does that give some context to what we're talking about?
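
To make that point concrete: because a model won't preserve per-document permissions on data it ingests, one common mitigation is to enforce access controls at retrieval time, before anything reaches the prompt. Below is a minimal sketch of that idea; the document store and names like retrieve_candidates are hypothetical illustrations, not any vendor's API.

```python
# Sketch: enforcing access controls at retrieval time in a RAG pipeline.
# An LLM cannot preserve per-document ACLs on data it ingests, so anything
# the model sees must already be filtered to what this user may read.
# All names here are hypothetical placeholders, not a real product's API.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)  # ACL carried with the document


def retrieve_candidates(query, store):
    # Stand-in for a real vector search; here, a naive keyword match.
    return [d for d in store if query.lower() in d.text.lower()]


def build_prompt(query, user_roles, store):
    # Enforce the ACL *before* context enters the prompt: whatever the
    # model sees, it can potentially disclose in its output.
    visible = [d for d in retrieve_candidates(query, store)
               if user_roles & d.allowed_roles]
    context = "\n".join(d.text for d in visible)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


docs = [
    Document("1", "Q3 margin forecast: confidential draft.", {"finance"}),
    Document("2", "Support hours are 9-5 on weekdays.", {"finance", "support"}),
]
# A support rep's query never pulls the finance-only document into the prompt.
print(build_prompt("support hours", {"support"}, docs))
```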
10:29Yeah, absolutely.
10:30I would love to get a few examples from the audience, something that you're seeing in
10:33your company or your customers' companies.
10:37What kind of threats are people concerned about on your end?
10:42I'm waiting for a hand.
10:45Wow.
10:46Okay.
10:47People are not properly caffeinated.
10:50Okay.
10:51Sure.
10:53So we're working with one of the largest banks, and their concern is, first, that they're
11:01not able to properly do detection on their data that's coming off.
11:07And because they're so large, they have a lot of data that's generating, just petabytes
11:12and petabytes and petabytes.
11:13And so they have about a 50 percent success rate at flagging whether data is
11:18sensitive or non-sensitive.
11:19So you might as well flip a coin on that.
11:22So that's slowing down their whole gen AI process.
11:24And then when they move it in, they want to work with many
11:27of the cloud vendors, and they're concerned about how they share that data and
11:31how they move it outside of their organization.
11:35And then how do they deploy that to 20,000 of their customer service reps who are accessing
11:39all this information?
11:40And so when they got to that point, then they realized, oh, wait, they need to actually
11:44move back to the first step: how do they actually now get some prompt protection going
11:48into that as well?
11:49Because there really have been a lot of organizations that have wanted to adopt these and
11:54start giving access.
11:55But now they're starting to realize the risks are so big and how do they do that?
11:58So that just ran the full gamut.
12:00And their view is they're not alone.
12:02They're one of many.
12:04Everyone's facing those issues.
12:05Yeah.
12:06How do they do that and how do they drive that?
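
The "prompt protection" step Ted mentions can be sketched in a few lines: screen outbound prompts for sensitive content before they leave the organization. The version below is deliberately naive; production systems use trained classifiers and DLP tooling rather than a handful of regexes, and every pattern and function name here is an illustrative assumption.

```python
# Sketch: a DLP-style gate that screens prompts before they reach an
# external model. Illustrative only; a naive pattern list is exactly the
# kind of detector that ends up with coin-flip accuracy at scale.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\bconfidential\b", re.IGNORECASE),
}


def screen_prompt(prompt):
    """Return the names of sensitive patterns detected in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]


def send_to_llm(prompt):
    hits = screen_prompt(prompt)
    if hits:
        # Block, redact, or route to an internal model instead of sending.
        return f"BLOCKED (matched: {', '.join(hits)})"
    return f"(would call the external model with: {prompt!r})"


print(send_to_llm("What are our support hours?"))
print(send_to_llm("Summarize this confidential memo for me."))
```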
12:07Anthony, I'd love to ask you about Cisco specifically.
12:10You've talked a lot about kind of building a culture at Cisco around security to make
12:15sure that everyone's involved, including partnerships.
12:18Can you talk about that?
12:19Yeah.
12:20I think one of the most important things when we look at this, especially in the AI space,
12:24but largely the data space, is that data is exposed throughout the company, even if you have access controls.
12:30Whether you're in human resources or finance or building products or engaging with customers,
12:36you have an opportunity to touch and engage with data in a very impactful way when it
12:42comes to security and ultimately the trust that your customers put forward.
12:45So that really means that the whole company has to be aware of the obligation, aware of
12:50the expectation, and that requires, frankly, building a consistent message for engagement
12:57with every employee inside of those organizations to help them understand what the obligations
13:02are.
13:03And for us at Cisco, we've really focused on everything from the bottom up.
13:08Every individual employee goes through different levels of training and awareness and testing
13:13and engagement, depending on functions as well.
13:16We'll do different things depending on different functions.
13:19But then also, importantly, I was on a company meeting earlier this week talking about the
13:25importance of our generative AI policy and what it means to use generative AI appropriately
13:30inside of Cisco.
13:32And so that message from the top, myself, Chuck, our CEO, and the rest of the executive
13:37leadership team, setting that expectation with the organizations is really important.
13:43And then also, I think the most important thing, like Brad, I'm born and bred in the
13:49security space.
13:50We have to give the businesses the right ways to use this technology safely.
13:56And so really working on paved paths so that the business can do what's necessary with
14:02the technology.
14:03So it's not just say, don't do these things, it's say, do it this way.
14:06That's a really important component to building a culture where people see security as an
14:10important part of their job.
14:12Awesome.
14:13Lisa, I'm definitely still looking for examples.
14:15I'm wondering if you can talk about Accenture, some of your clients, and just how they're
14:20working to get towards trustworthy AI, trustworthy security.
14:25Yeah.
14:26So I love the discussion about the tone at the top and culture and how that trickles
14:31down into the more practical.
14:34And the first one I'll throw out is responsible AI.
14:38And so that's the set of governance principles that organizations need to discuss and decide
14:44where they are in that journey of responsible AI and make sure then that the programs like
14:49trustworthy AI and the things that you would put in place are supporting those goals.
14:54And so we see a lot of organizations, again, experimenting, sandboxing, having their forays
15:03with large language models and figuring out what the power of them is and how they might
15:07affect business processes.
15:09And we see other businesses really kind of diving into more customization and bespoke
15:14models to kind of address some of the security concerns that are there, maybe using
15:20APIs or other things, better architectures, ML.
15:25God forbid we bring that back, right?
15:27ML.
15:28We can use ML.
15:29Bring back ML.
15:31For the more precise things, we're using your data, your company's data.
15:36But responsible AI is a discussion that should happen all the way at the board.
15:40And this is something your boards are already getting educated on, and they're trying to
15:44figure out how to provide oversight and how to really set the tone for what that future
15:50value is.
15:51And in some of the discussions that we've had, it's been really interesting to think
15:56about kind of what's the longevity of a corporation.
15:59And so if you go back to, like, NACD, National Association of Corporate Directors, they tell
16:03you corporations in America, public companies, the life cycle is 17 years.
16:09I would bet in tech that might be a little more volatile.
16:12And so, right, maybe?
16:15But the idea is that you really need to align those goals and make sure that you've got
16:20the right practices in place.
16:21And if you expect to be around, you better be doing things in a responsible way.
16:26And then when it really comes to trustworthy, and by trustworthy, I mean the things that
16:33you're doing to really build things in throughout the life cycle of AI to protect it as you're
16:38training.
16:39And we talked about the ways that, you know, you can subvert AI.
16:43If I were an adversary, I'd be going after everybody's data, right, just to be in the
16:48data set.
16:50And because we see lots of data being, if you will, hoovered up into many of these models.
16:56But those are things that you want the guardrails throughout that whole AI DevSecOps process,
17:03and all the way through to how are you monitoring and defending that on the other side to know
17:07that you're still within the parameters or the performance that you need.
17:11But those practices, those are there, and you see lots of activity.
17:17But it's important to have that discipline.
17:19And Anthony shared, you know, it's definitely something you have to bake into the culture.
17:24And Accenture, we did that.
17:26We drank our own champagne.
17:27I don't say dog food, right?
17:29We drank our own champagne in the sense that when large language models, and they've been
17:35out for a long time, but when they became very publicly available, we did a big timeout,
17:40because it's the same thing, you know, for those of you that have been around for a while,
17:43and I think I'm in good company, that we saw this with cloud.
17:47We saw shadow cloud, shadow IT showing up.
17:49We see shadow AI showing up.
17:51And it's really important to set that tone and set the policies and do the education
17:56that you need to make sure there is a way that the business is enabled to do this right
18:00and the right guardrails and environments are set up to do it.
18:04Exactly.
18:05Putting this out here again, are any of you seeing more of that, you know, we've seen
18:10shadow IT for so long, bring your own device to work.
18:13Are you seeing, you know, employees needing to be educated and sort of up-skilled around
18:20the emerging threats and what they need to be careful of to make sure that things are
18:26secure data-wise in the company?
18:28Nobody.
18:30Okay.
18:31One of the things that we spend a lot of time talking to our employees about is, like, if
18:41you're talking about the use of ChatGPT, the public, openly available tool, of which
18:46there are many, really simple things are really important for our employees.
18:50We tell them, if you wouldn't tweet it, if you wouldn't put it out on Facebook, if you
18:54wouldn't publish it publicly, don't put it into those tools.
18:57These are the types of fundamental things.
18:59When you think about a large organization, which I know many of you have, boiling these
19:03things down to really important points where folks understand what their expectation is
19:09and ways that they understand it, I think, is really important.
19:11It can get really complicated.
19:13There's so many different places where these tools are being used, and employees making
19:17those decisions for themselves is really problematic.
19:20So helping that is a really important component of talking to them in a way that really connects
19:25with them.
19:26Yeah.
19:27So I was going to share an anecdote.
19:30My oldest just finished ninth grade, and so he went to a middle school where they had
19:36a stern rule, like never, ever touch any chat GPT, any AI model.
19:41If we catch you using it, we're going to expel you.
19:43Right.
19:44So that was eighth grade.
19:45Then he goes to a different school for high school, and then in ninth grade, it was very
19:48much, this is a tool just like Google Docs or anything else, and you need to learn how
19:52to use it.
19:53And so the classroom instruction, when they would give out assignments, they would talk
19:56about now, how could you use a model to help you work through this assignment?
20:02And so very much, they wanted to educate the kids on how to work with it and how to use
20:05it.
20:06And they're not dealing with anything sensitive for high school assignments.
20:09And so what I really noticed about his experience: in eighth grade, he didn't learn anything
20:13about it, because he's like, oh, it's against the rules, I'm not going to touch it.
20:16But then there was no value extracted from this new capability.
20:19And then in ninth grade, it became another tool, just like spell check or calculators
20:23or things like that, that he could use in order to get more out of his learning experience.
20:27And so I've heard some companies take that approach where they say, this is dangerous,
20:32don't touch it, don't use it.
20:34And then the more enlightened approach that Anthony just described, where you say, this
20:37is a tool, like any tool, a steak knife, a power drill, you can hurt yourself with it,
20:42but it also has valuable uses.
20:43And let's talk about where does it make sense, what's the right way to use it, setting up
20:47corporate instances, so you can say, you can put sensitive data into this interface, but
20:51don't do it on the public one, because they'll train against that data.
20:54And so having the right educational frameworks for employees and the right tooling, because
20:59when you understand the potential use cases, and you give them the tools so they can use
21:03and facilitate that, I think we're very, very early days about where is this going
21:07to be most useful.
21:08And so we need to facilitate the experiments of people that are closest to the work, but
21:12helping them understand what are risky and dangerous things, and then where does it make
21:16sense.
21:17And that's really my goal from a policy perspective, what we're trying to do.
21:21Yeah, I think that's interesting.
21:23One of the things that I think is so fascinating about this, and why I'm so interested in cybersecurity,
21:30especially when it comes to new AI applications, is that these companies are trying to implement
21:36these tools because they are potential future revenue.
21:40They want to get future ROI out of this.
21:42But on the other hand, they're also trying to balance the idea, well, I don't want to
21:47let my customers' data get hacked, I don't want to ruin our reputation, I don't want
21:52us to lose money based on that.
21:54So it's quite the balancing act, especially since these evolving technologies are moving
22:00so quickly.
22:01So I'm kind of wondering from all of you how companies can strike that balance and walk
22:08that tightrope between building trust and security, and also not shutting things down,
22:14like Brad was saying.
22:16Yeah, so I'll add, I've written about this, but it's really strategic.
22:22Whether you decide to race in or you decide to tap the brakes, that is a decision for
22:28your business strategy, right?
22:29I mean, it's going to impact many things if you're sitting on the sidelines and your competitors
22:35or people that you don't even think are your competitors have now entered your market space.
22:40So this is an instance where you really need to have eyes wide open and be paying attention
22:45and decide what that position is that your company wants to take.
22:50And boy, I would not want to be on the sidelines on this one, and even if that's experimenting
22:55or doing things or finding the right partnerships to exercise and flex.
23:00And so I'll use one example, because we're working with Fortune, and that was creating
23:05a model, and it's based on Fortune's proprietary data and their insights, but really tuning
23:11it so it enables more of the interaction and the analytics on that.
23:16And that's a bespoke model, right?
23:17That's sort of build your own, if you will, but it's a way to empower Fortune employees
23:24and others to access that data and get the insights over time from all of the things
23:29that they have collected and gathered that's their IP, right?
23:33And so there are ways to kind of unlock the value of data and insights and also just the
23:40sheer optimization that we're going to see over many parts of the workplace.
23:46Wendi?
23:47I think, you know, I love the discussion and the examples.
23:51I think, to me, the real question is, you know, why are we having this discussion in
23:55terms of why are we concerned, right, about attackers using this?
23:59I think Brad's example was really pertinent in kind of illuminating there's two sides
24:03to every story, right?
24:04You can be concerned about the technology and put some limitations around it, but the
24:10reality is attackers don't have any of these limitations, right?
24:14So they're not concerned about what their acceptable use policies are or how they can
24:19circumvent some sort of jurisdictional challenge, right?
24:23And so I want to make sure that everyone leaves with the understanding that when we look at
24:28attackers and their use of technology, whether it's AI, whether it's other types of attacks
24:33that have been around for some time, the shift, especially in the cyber criminal landscape
24:37to each of your businesses, is that they've really dedicated cycles to intentionally understand
24:43how your business operates.
24:45In particular, how do you operate with vendors and partners?
24:48What are the potential vulnerabilities in those relationships that they can exploit?
24:53And now that's shifted from things that maybe are more trivial, like exploiting your help
24:57desk, trivial but something that's very disruptive, to intentionally disrupting your business.
25:03So the breaches you're hearing about in the news today, over the last few weeks and months,
25:09why we're hearing about them is because these attackers are intentionally disrupting operations,
25:13partner transactions, and the ability to certify networks and essentially convince your partners
25:19and your vendors that it's safe to interact with your organization.
25:23And they're doing that because it commands a higher ROI on their payments.
25:27And so they have more likelihood of success in making money from these attacks.
25:30So I think that, to me, a takeaway is going back to your organization, thinking about,
25:36OK, what mechanisms are we going to put in place to more effectively prepare to detect
25:41these attacks much sooner, and then ensure that we've got the business controls and the
25:45relationships with your peers in order to facilitate those happening even more effectively?
25:51That's great.
25:52We just have a few minutes.
25:53And I want to leave a couple of minutes for questions for our great panelists here.
25:58So raise your hand in just a minute.
26:01But I do just want to ask Brad, jumping off of what Wendi was talking about, one of the
26:06other things I think is so interesting around cybersecurity and AI tools specifically is
26:12that they can be used both for offense and for defense.
26:15So as the attackers are getting more sophisticated, so can we.
26:18Can you talk just briefly about that?
26:20Yeah.
26:21So I think a lot of the breathless hype around AI when it comes to security topics is, will
26:28this permanently change the playing field?
26:30And will offense win forever, and bad guys will get whatever they want forever more?
26:35Or will defense win and bad guys will go out of business?
26:38I think the reality is it'll increase the pace of play, but I think it'll be roughly
26:43a wash is my guess.
26:45Because the ability of bad guys to use AI tools to scale what they otherwise would require
26:52individual humans to do, that'll allow them to get a lot more coverage.
26:56They can maybe go much faster from when a security bulletin gets released.
27:02How do you translate that into a working exploit against the people who haven't patched
27:05yet?
27:06Things like that could be accelerated and go a lot faster.
27:08But as a defender, there's all sorts of things that scaled with human labor in the old days.
27:13And I think AI tooling will help us do that with a lot more comprehensive coverage in
27:17the future.
27:18And so the idea that any one mistake, the bad guys will be able to pounce on immediately
27:22by getting closer to perfect more frequently, seems like AI tooling will help people be
27:27more effective with that.
27:28And so there's a lot of doomerism out there related to this.
27:32And I think that, if anything, it'll just pick up the pace.
27:35But I don't think it's going to permanently change the landscape.
27:38So there's no checkmate here?
27:39I don't think so.
27:41I don't think so.
27:41Any questions for our panelists?
27:43Yes, back there.
27:46Elena Kvochko.
27:47I teach a graduate level course on trust and security at Cornell.
27:50And I'm curious in your opinion, so trust is often regarded as an intangible business
27:56value.
27:57Yet it's critical for business success.
27:59Could you share maybe any examples where trust can be translated into more tangible business
28:05KPIs?
28:06And what I mean by that is enhancing the trust centers, or providing more self-service opportunities,
28:11or potentially just enhancing the speed at which customer questionnaires are answered.
28:18So what are the ways in which trust can become a more tangible business KPI?
28:23Is that for me?
28:25I'll start, and then other people may jump in.
28:29So trust, sometimes it's smurfy.
28:33You start to forget what that word means, because you can apply it to so many different
28:36areas.
28:37There's the really incredibly tangible opportunities for trust enhancement that occurs in the pre-sales
28:44process.
28:45And so customer wants to buy, they've got questions.
28:48Maybe the customer wants to buy, but their security team has questions.
28:51And so you need to remove that friction that might otherwise hold up or slow down a deal.
28:56And so things like compliance with SOC 2, or your favorite standard for whatever industry
29:01you're selling to, or what country you're in.
29:03These are examples of things that if you make the investment as a seller, then you can increase
29:07the confidence and trust that the customer has, and eliminate the friction.
29:11And so you can measure, before we had this thing, deals would take eight weeks to close
29:16once they hit the security Q&A process.
29:18But after we got the trust center stood up, it removes that friction.
29:21And so now instead of eight weeks, it's three days, and sales guys are happier.
29:24So that's very tangible that you can measure.
29:27The other thing that I think you can try to quantify is the growth within an existing
29:34account.
29:34So if the customer today is spending a million dollars a year, but they're really happy,
29:39they feel like they're getting the information they need, their confidence in using more
29:44of what you can do for them grows.
29:47And the trust is a big part of that.
29:49And for us, we see that through, can I trust you with more sensitive data, with more sensitive
29:54workflows, things that have higher uptime requirements.
29:58And so in the old days, 99% was good enough, but now it's four nines, four nines five,
30:03five nines.
30:05And getting the confidence that if something does go wrong, there'll be good communication,
30:10and that they understand the roadmap and where things are going.
30:13And if we're constantly making promises and not delivering, trust is low.
30:17If we have credibility and we deliver on our commitments, then trust increases.
30:20And then the amount of reliance the customer is willing to place in us grows over time.
30:26And so you can look at growth within the account.
30:29And where accounts are not growing or shrinking, it may be because they don't have the confidence
30:33that we're the right partner for the future.
30:35So these are some real easy ways to quantify.
30:38And then there's the vibe part of it, where you can just get a sense: for how many accounts
30:43is security, availability, or trust one of their top concerns?
30:47And so we have lots of qualitative ways to measure which accounts are in a red status
30:52or which accounts are happy.
30:54And then you have customer satisfaction scores, and when does trust show up as a frequently
30:59mentioned topic.
31:00So these are all ways to try to quantify what's going on.
31:03And then you tune the dial of how much to invest in making the trust outcomes better.
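
For reference on the "nines" Brad mentions, availability percentages map to concrete downtime budgets; the arithmetic below is standard math, not a figure from the discussion.

```python
# Yearly downtime budget implied by each availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for label, availability in [
    ("99% (two nines)", 0.99),
    ("99.99% (four nines)", 0.9999),
    ("99.995% (four nines five)", 0.99995),
    ("99.999% (five nines)", 0.99999),
]:
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{label}: about {downtime:,.0f} minutes of downtime per year")
```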
31:08What about Cisco?
31:10Yeah.
31:10Not surprisingly, we have similar thoughts on it, given the background that Brad and
31:16I share.
31:16But one thing I would just add, in addition to what he said, there's another layer to
31:20the conversation that we think about, which is, how many customers are we strategic advisors
31:26to when it comes to big transformational initiatives?
31:28How many are we the ones that they come to for a platform that will help them build out
31:34for the future and do big transformational things?
31:38We find, especially in the security space, but broadly across technology, the point solution
31:42conversation versus how do you actually build a strategic platform for the future of a company
31:48is another place where you're not going to get that business.
31:50You're not even going to be in those conversations if they don't trust you.
31:53And so that's another way that we think about measuring the topic of trust.
31:56We have time for a couple other questions.
31:58Yes.
32:01One question that strikes me as I sit here is that, and I think it was your question
32:05about are we training our employees, right, about ChatGPT, is have you seen any good
32:10examples of up-leveling people who may get left behind and are quite vulnerable that
32:14are not necessarily at your company?
32:17Like I think about older people and how they've actually become incredibly, they've become
32:23targets, right, of a lot of the AI.
32:25And you're hearing a lot more about people's parents being sort of pranked or spoofed or
32:30much worse things happening.
32:32Have you seen any great examples of that?
32:33Because when I look at, I sit on a public board, I sit on a university board, I mean
32:38depending on where you're at in this learning cycle, there are people who are getting left
32:42behind and the issue of trust becomes much bigger than what's happening, like it becomes
32:47personal for people.
32:48And so I just haven't seen a lot of models of companies trying to address those markets
32:53that, you know, senior citizens, for example, who could be quite vulnerable in the equation
32:58long term.
32:59Yeah, I mean you call out an important point.
33:02That's a societal thing that we all need to grapple with, you know, especially an organization
33:06like ours where we do a lot of business-to-business engagement.
33:09We also participate in organizations like the National Cybersecurity Alliance, and
33:17really those organizations are where many companies come together and put out educational
33:23materials, especially like October Cybersecurity Awareness Month, you will see a lot of effort
33:28to help educate the broad general population around those sorts of things.
33:32I think it is a broader societal thing that we're going to be faced with for many years.
33:36I think we should also think about taking some learnings from other parts of the world
33:40where they've faced challenges around misinformation, because all of this technology,
33:45ultimately, when you get down to the core of what you're describing, it's
33:49misinformation and technology being used for misinformation.
33:52So a lot of places around the world where that has been a common practice and they have
33:56had to build societal mechanisms to think about that, I think there's a lot of learnings
34:01that we need to take from that and employ, not just as individual companies, but coming
34:05together in these, particularly these nonprofit organizations that many of us are a part of.
34:12Yeah, and I'd also add that the World Economic Forum has a lot of work that they've done
34:17in this area, and there's a partnership against cybercrime, which is trying to grapple with
34:22these issues, and again, that goes global and that goes business to citizen.
34:27So they're trying to come up with strategies, and that's a working group many of these
34:32folks are in, that is trying to come up with appropriate strategies and remediation and
34:39things to put in place, also looking at what kinds of regulations globally or upskilling
34:45could be done to get people more aware of that.
34:47But I mean, we have, what, six generations in the workforce right now, and everybody's
34:52sort of sensitivity to what they trust and what their sort of natural interaction is
34:57with technology is very different.
34:59And I think that's even something just within businesses that we have to deal with and make
35:04sure that we're reaching our different generations in the training the right way, so that we're
35:10bringing folks along either to create a little more healthy skepticism, operational discipline
35:15around things, or really to put the training in there to sort of intercept the human response,
35:22right, because that's the challenge, especially with the deepfakes and other things that we
35:27see quite richly now, whether they're for entertainment or for a bigger campaign, that
35:34folks really need to create that moment or figure out how to create that air gap where
35:38you go to what we're doing as a human multi-factor to try to figure out if it's real or go back
35:44to our controls and say, what's the next verification?
35:47We have time for one more question.
35:49Yes.
35:55There you go.
35:56Yay.
35:56Okay.
35:57So just to answer your question, I work for Insight.
36:00My name is Megan Almdahl; we partner with all of you guys, and compete with you too, but that's
36:04fine.
36:05But what I would say is Insight leaned in massively on AI adoption, and with that,
36:12it gave us the burden of, we're all taking about three hours a quarter of compliance
36:18training, but we've made it relevant for our personal use as well in our data protection
36:24and compliance.
36:25So you can have your parents look at that training, and it helps them as well.
36:29So I think that's a huge opportunity for us to answer your question in just a tangible
36:34way.
36:36Wonderful.
36:37Okay.
36:37Well, thank you so much.
36:38That's all the time we have today.
36:40A big thank you to Brad Arkin and Salesforce for co-hosting this session with us.
36:45Thank you to Lisa, Anthony, and Wendi.
