• 2 days ago
TheStreet’s Conway speaks with Jeetu Patel, EVP and Chief Product Officer at Cisco, about the implementation of safety guardrails when it comes to using AI.

Category

🥇
Sports
Transcript
00:00So tell me, as AI continues to make progress and becomes more integrated in business,
00:05how can companies make sure the technology they're using is safe?
00:14That's a great question, Conway.
00:16The way that we think about it, there's going to be two classes of companies in the world
00:20as we move forward.
00:21There's going to be ones that are going to be great at the use of AI,
00:24and then ones that are going to struggle for relevance.
00:26And the great ones, what we're finding is they want to move fast,
00:30but they oftentimes get held back because of safety and security.
00:34And so that's an area that actually really needs to get focused on,
00:38because by definition, the models that AI applications are built on
00:43tend to be non-deterministic, and they tend to be rather unpredictable.
00:48And so you need to make sure that you've got the right level of safety and security guardrails
00:52so that they do behave, in fact, the way that we want them to behave.
00:56So what is your number one safety concern when using artificial intelligence in a business setting?
01:04If you think about the big areas that there's concerns that organizations have,
01:08it's around, in safety, you might have things like toxicity or prompt injection attacks
01:15that might happen where the behavior of the model is not quite what you want it to be.
01:20So that's what we need to make sure that we can ensure that there are guardrails for.
01:23So these models, which are inherently unpredictable,
01:26can behave in a way that are far more predictable for the context of the application.
01:31So how do you combat those unpredictable models?
01:36So that's exactly where Cisco comes in.
01:38We just launched a product called AI Defense,
01:41and AI Defense is a product essentially is a common safety and security solution for the market,
01:46because if you think about it, we're going to be living in a multi-model world.
01:50You'll have many, many models that applications are built on,
01:53and what we want to do is make sure that there's a common layer or substrate of security
01:58across all of these different models, across all the clouds and across all applications.
02:02And so what we do is provide the enforcement of guardrails for both the model itself
02:09as well as any external attacks that might happen on the model from threat actors.
02:14We want to make sure that both the safety concerns of the model behaving the way you want it to behave
02:18and the security attacks that might happen on the model to change the behavior of the model,
02:23can be compensated for.
02:24And that's what AI Defense does is allows organizations to innovate fearlessly,
02:29where they don't have to worry about safety and security because we can take care of that.
02:33One of the things that comes to my mind, of course, is like the movies that we've seen in Hollywood
02:39about AI and how AI could take over.
02:43So what are the guardrails that are put in place to avoid a kind of doomsday situation
02:49where companies integrate some kind of AI in order to protect themselves against AI
02:54and then they lose control?
02:58So let me take a step back because what's happening right now is the composition of our workforce
03:03is going to change quite a bit.
03:04So today, 100% of our workforce is humans.
03:08Tomorrow, you're going to have augmenting of that workforce with AI agents,
03:13you might have robots, you might have humanoids.
03:16And we need to make sure that these different AI augmentations can actually work
03:21the way that we want them to work.
03:23So what we would do is, if you think about a model, before a model goes to production
03:28for a specific application, let's say it's a loan processing application,
03:33we want to make sure that that model is behaving exactly the way that you want it to behave.
03:37So we have an algorithmic way of going out and doing a level of validation on the model
03:44to make sure, and typically for an organization, it takes 7 to 10 weeks, Conway,
03:49to go out and validate a model.
03:51For us, with AI defense, you can now do it within 30 seconds.
03:55And so that level of compression of time and not having to worry about the details
04:00makes a huge difference in not just the velocity but also the safety and security
04:04where you can enforce guardrails on this thing that if there's a model that's behaving in a different way,
04:09you can actually provide a compensating control on that
04:12so it doesn't behave the way that it should behave.
04:14For more information, visit conway.com

Recommended