Ready, Set, Midmarket!

Ready.Set.Midmarket!: All About AI Poisoning

Episode Summary

In this episode of Ready.Set.Midmarket! hosts Adam Dennison and Samara Lynn discuss the critical topic of AI poisoning with experts Vrajesh Bhavzar, co-founder and CEO, Operant.AI, and Sarfraz Shaikh, CIO/CISO, One Mesa.The conversation explores the implications of AI adoption in midmarket companies, the security risks associated with AI, and the importance of implementing robust security measures to combat potential threats. The guests share insights on the current landscape of AI security, the challenges faced by organizations, and the solutions offered by Operant.AI to protect against AI poisoning attacks.

Episode Transcription

Adam Dennison (00:02)

Hello and welcome to another episode of podcast brought to you by MES Computing. I'm Adam Dennison, vice president of Midsize Enterprise Services with The Channel Company Joining me as always is my co-host Samara Lynn, senior editor of MES Computing. Hi Samara. have two excellent guests with us today. We have Vrajesh Bhavzar. He's a co-founder and CEO of Operant.AI And we also have Sarfraz Shaikh. He is the CIO and CISO with One Mesa.

 

And he's also an MES advisory board member going on, believe two years or two and a half years now. Welcome VJ and Fraz.

 

Sarfraz Shaikh (00:36)

Glad to be here.

 

Vrajesh (00:37)

Thank you.

 

Adam Dennison (00:37)

So before we get started, why don't we start with you, VJ just kind of give us a little bit of a background on your organization, your role there, and then we'll go over and meet Fraz and then we'll get things started around the topic of AI poisoning.

 

Vrajesh (00:51)

Great. Thanks for having me, guys. I'm Vrajesh. I'm the co-founder and CEO of Operant. Operant is the platform for runtime protection for AI and cloud. My background is in security and AI. I started early a career at Apple as a kernel engineer, building security, data protection, privacy, et cetera. Later in my career, I built the machine learning business unit for ARM and ⁓ was deep in AI and machine learning before it became a huge thing here.

 

Really excited to be leading the charge on securing these AI systems and the challenges that a lot of industry leaders have to face today.

 

Adam Dennison (01:26)

Awesome, thank you. And I am a proud user of your former products at Apple. So I'll let you know that right now. Fraz, why don't you take us in a little bit about One Mesa and congratulations on being newly named and appointed as the CIO and CISO of One Mesa.

 

Sarfraz Shaikh (01:41)

Appreciate it. Thank you. Appreciate the opportunity as well. So One Mesa is a manufacturing engineering construction firm. We play in the oil and gas industries and I've been with the company over 15 years. Who's counting right? When we're having so much fun with the growth.

 

So my responsibilities hover around IT, ⁓ OT, there's manufacturing piece, ⁓ facilities, so the operational technology plays in, and then cybersecurity as well. So that's where we get to spend a lot of time. So yeah, that's me.

 

Adam Dennison (02:18)

Awesome. Thank you so much. And for those new to the podcast that are unfamiliar with MES, we focus on IT leaders that are running the IT organizations within their midmarket companies. So, for us all verticals, healthcare, finance, construction, manufacturing, that is Samara's and my world here at MES and The Channel Company. let's get things started on the topic. ⁓ been a couple of years now, probably going on three or four years now where you cannot have

 

a business technology discussion without having a discussion around AI. I think we've moved well past the definition of it. We've moved well past where people are deploying it, either pilot-wise, department-wise or enterprise-wide. They're starting to see returns on their AI investments. But now we're starting to have a lot of discussions around the security aspect of it. And it branches off ⁓ quite a bit from there. There's a lot of tentacles from the security standpoint.

 

And I am not an expert at all, but I will say the topic of AI poisoning was one that I thought was intriguing. And when I did some research on it, it is one that's definitely a little bit scary as well. So VJ you're the expert in this space. Why don't you just kind of paint the picture of AI poisoning and bring it home from a business technology standpoint? Why should someone like Fraz care about this? And then we can get into how we can combat this. But what is it that's out there and what should folks be aware of?

 

Vrajesh (03:36)

Yeah, yeah, thanks for

 

setting this up and this is such an important topic to really dig into now. Because as you said, over the last couple of years or so, we have been trying to wrap our heads around what this new innovation cycle is about, what is going on, how do we leverage it. And in many ways, finally now we are coming about for more adoption, moving out of the labs to more production environments.

 

moving out of like, let me research like what use cases this can work for, but actually applying it for real use cases, working with real data, working with real systems. And as this kind of adoption has just like taken off, people are also trying to figure out well, what does the security architecture look like? What are the different risks that we are getting exposed to? And some of them are getting caught

 

not having thought about these things in kind of dire ways. There was an example recently about how an agentic system basically went out and deleted production databases. So there are these kinds of issues that are coming out now that people are finally figuring out like, well, what are these security risks that you need to be aware of as you are bringing these AI systems to production?

 

And this element of AI poisoning is a really big one. It spreads across the entire lifecycle. So let me give more color around that because AI poisoning is different ways to kind of see where this kind of change into the way your AI is going to operate can be injected into different stages of AI

 

whether you see it as data poisoning at the time of training. So you might be training your large models or other types of AI systems and RAG applications with the kind of data that is specific to your industry, your segment, your business. When that type of ⁓ training is happening, there are many avenues how some malicious or misinformation can ⁓

 

get be part of this training environment. And think about it as like, as these trained models come into production, there will be more ways that different types of users will interact with these models and can try to inject different instructions because all of this, now we are in this mode where data is how lot of these systems are executing.

 

when data becomes executable and when instructions are coming in in this natural way, there are a lot of ways that these models are behaving that we don't have full transparency on, full discovery on, full controls on. So you have data poisoning that can happen at training time. You can have prompt injection type of attacks happen at the time of interaction with these models. And when you move into these agentic workflows, you can have

 

tool poisoning or context poisoning kind of attacks happen. And all of these, you know, different sub attacks and risks are in my mind, like they all kind of fall under this AI poisoning kind of set of ⁓ threats.

 

Adam Dennison (06:38)

Yeah.

 

So when we talk about security, you you think about internal threats and external threats, and there's a lot of internal ones, right? There's employees behaving badly, there's employees that don't know what they're doing, they don't even know that they're behaving badly. When I hear AI poisoning and the data that's getting affected, is it something that's both internal and external, or is this something that's more an external type of a threat? What's your thoughts around that?

 

Vrajesh (07:00)

Mm-hmm.

 

Mm-hmm. Yeah, this is a great point because there are a lot of ⁓ internal risks to be aware of in using AI and deploying it in different applications. Obviously, external risks are always around. There are a lot of ways that different businesses are under attack from different types of external entities. But in the internal sense, there are two categories in my mind where there might be just a misunderstanding or mis-

 

configuration in the way that you are setting up these AI applications and agentic workflows. For example, if you give instructions in the wrong way, then sometimes if agents have too much permission, they will try to go access different systems that they were not supposed to. Like the developers who set up these instructions, they didn't mean for anything bad to happen, but

 

It's just something that these agentic intelligence systems are trying to be helpful and they go off and do something wrong. And that can have a business impact where someone's database might get access that it was not supposed to. And now you have all this different exposure. There are also a lot of these poisoning attacks that happen if you have internal threats where like some internal

 

⁓ access might be injecting data or injecting instructions into these models, which are incorrect or they might have implied behaviors. So for example, you know, sometimes we come across teams who are exposing their IT knowledge through some chatbot, like, Hey, we had all these Wiki pages, like tens of thousands of the knowledge documents or, you know, other types of

 

information that we want to make accessible to more employees. And so we can accelerate all these IT flows and automate some of the ticketing systems and all that. And often in those chat interfaces now, if you are opening it up to thousands of employees, some people just start to kick the tires with like, well, what would you do if I give you this wrong information? Or like be like, you know,

 

have these misbehaviors happen that in fact affect the brand in a wrong way, affect the employees' experiences in other teams. So for example, if some marketing team folks went off and did something that affects GitHub, that's going to be a problem for the developers. Like, hey, what are the API keys that you're supposed to use to talk to some cloud provider? And sometimes if you give the instructions that are going to put you on the wrong path,

 

that you have lost two months.

 

Adam Dennison (09:35)

It's always sales and marketing, always. ⁓

 

Sarfraz Shaikh (09:38)

That's

 

a great point. Wanted to kind of take a step back, right? Vrajesh, you've kind of hit those key points. For midmarket companies, right, we sit at a space where we're rapidly trying to adopt AI to stay competitive, right? Often we tend to overlook those guardrails. I know we ourselves are on these multiple initiatives where we're thinking, hey, where can AI be used for?

 

Customer support and engagement, right? Chat bots, your automated ticket triage and response, if you will. We talk about sales and marketing automation, generating the email content for social media posts, right? HubSpot and Salesforce, personalizing the SEO stuff, not to mention HR.

 

Vrajesh (10:25)

So,

 

Sarfraz Shaikh (10:26)

Think about HR and talent management using AI tools for resume screening, right? Writing those job descriptions. Imagine the AI tech that we are using to create and generate that content. What if it is biased and how it is generating that content around that job description, predicting the attrition,

 

Samara Lynn (10:44)

and I'll see you next time.

 

Sarfraz Shaikh (10:48)

hiring success, finance and accounting.

 

We have several initiatives where we're trying to automate certain things, invoice processing, reconciliation, expense classification, fraud detection, you name it. It lends itself across and bleeds across the entire operational functionalities of companies like us.

 

Adam Dennison (11:09)

So, Fraz as a practitioner, when you're thinking about...

 

new AI opportunities across your organization. At what point does your CISO hat come on and how do you have these types of discussions with the business users and how deep do you go with them thinking about something like AI poisoning, which obviously has a very strong connotation to it. But how do you address that and have those discussions internally and what are your sort of checks and balances at your enterprise?

 

Sarfraz Shaikh (11:38)

Absolutely great question. I think the CISO hat is always on right? It's 24/7 There is no. There's no downtime for us. I feel as a mid sized organization, most organizations ...I'll kind of lay a broad stroke here: most organizations are not adequately prepared for AI poisoning attacks. I feel you know while that adoption of AI is accelerating across industries.

 

I don't think that those defense practices have kept pace. I was reading up a proof point, a report where they were saying only about 20 percent of organizations say that they are well-prepared for AI-driven threats. So when we...

 

come up with an initiative, if you will. Hey, we need to automate this or this seems like this is gonna be a great platform to automate or bring in AI tools. One of the few things that we do is have those in-depth conversations with the end users. Okay, this is a time consuming task. How can we...

 

bring technologies in place and what are those pitfalls? So you've got to get into ⁓ the depth of it in terms of what Vrajesh talked about internal users and there might not be any malice around that ⁓ because IT as a department is no longer a department of no, we are a department of enabling, we are a department of.

 

corporate success growth, right? So we've got to enable that. We don't want to say, no, that's security. So you have to walk that fine line. You've got to understand the guardrails, the swim lanes you'll have to create, sandbox it, containerize it, a comprehensive ⁓ end user acceptance, UAT, before we roll out. have...

 

couple projects around there where we use, we're a Microsoft shop and we are trying to create a chat agent within our SharePoint environment. And we're thinking, wait a second, we've got HR who hosts a lot of PII stuff. We've got our safety department that posts a lot of, again, PII background information, driving records and medical history sometimes.

 

Vrajesh (13:43)

Mm-hmm.

 

Sarfraz Shaikh (13:54)

If we just laid out there, imagine the kind of impact it would have if somebody were to search, hey, I want to do this and put in the right keyword. And if that information is, if there's no AI poisoning, imagine the kind of impact that we have. So we always take a very conservative approach whenever we engage on using AI tools.

 

Vrajesh (14:08)

Mm-hmm.

 

This is a great point. Thanks for like diving into all these details. I mean, I think this is really helpful in how folks should think about adoption, as well as ⁓ setting in the right controls and protection in place. Often when we work with lot of leaders and customers, they kind of also have like a crawl walk run motion, right? Like, hey, I want to make sure I understand

 

what we are signing up for here and then grow it to a smaller team and make sure, like know, as Faraz said about getting it to the right level of user acceptance before you just like go full on with it. But the fact is like all these steps are now squeezed in like a very, very shorter time, right? It's not like, I'm gonna take six months to do this. It's more like six weeks. Sometimes we are coming across teams that already have plans that...

 

⁓ engineering or other business organizations have decided to use agents or use these models and security or IT might be coming in after the fact because they have already reached a point of planning that security needs to then come and act within days or weeks and put in these guardrails that he's talking about. And in many ways now, you think about like in the

 

Samara Lynn (15:15)

Good

 

Vrajesh (15:28)

kind of an older IT world, how they're used to be based to control your network, control your employee access, and make sure these corporate and cloud network firewalls are in place, so to say. And now the need for setting those kind of control points has moved into that AI layer, almost where all these API interactions are going on, all the way from employee laptops.

 

how they interact across these different systems, often now with MCP involved. And also all these ⁓ business applications that are running in different cloud environments at a layer where these API interactions are happening, you need to set the guardrails kind of deep inside these interactions, wherever these things are on Kubernetes or some form of serverless and other kinds of environments.

 

Sarfraz Shaikh (16:17)

Yeah.

 

Vrajesh (16:18)

Yeah.

 

Sarfraz Shaikh (16:19)

One quick point also Adam, Samara, wanted to add is at the C suite level, right? There is a lot of pressure for the leadership team to to join the AI race. Right, so there's like hey, what are we doing on a AI adoption? So you've got to manage those expectations and define that to the.

 

Samara Lynn (16:32)

you

 

Sarfraz Shaikh (16:41)

to the C-suite that yes, while we are on it, we've got to take a measured approach. We simply cannot fall into, yeah, everyone's adopting AI, so let's roll it out. You've got to take that step back and get a convincing narrative, if you will, to ensure that if we're going that route, we better have a measured approach rather than going

 

Vrajesh (16:53)

Mm-hmm.

 

Sarfraz Shaikh (17:05)

full out and then having to do reputational damage control.

 

Adam Dennison (17:08)

Absolutely. Samara, I don't want to dominate. Do have a question you want to ask of these guys?

 

Samara Lynn (17:11)

Well, I think Fraz made a great point. And VJ, I want to ask you a question that's twofold. First, how pervasive are AI poisoning attacks? Is it something you're seeing statistics on? I mean, I'm sure it's not up anywhere near like phishing, but I'm just curious, is this something that midmarket IT leaders need to have their sights set on now? And then my other question is the current...

 

security infrastructure that most, let's say, mid-size companies have. Let's say a combination of on-premise, you know, endpoint, and then they're working with an MSSP. Are the technologies we have in place right now, SIEM SASE, can they catch these AI poisoning attacks?

 

Vrajesh (17:57)

So to your question about how pervasive this is, I hope that's a question for Gartner to come up with some statistics. But definitely, we see there are so many attacks.

 

There is so much research coming out, so many demos coming out. are so many, it's like People, this is like a cat and mouse game as it's always been, right? Like as more production AI adoption is happening, more teams on the security side are also educating the market in terms of like, 'hey, this was possible. This other thing was possible.' These others, like poisoning, whether it's data poisoning, tool poisoning, context poisoning.

 

prompt injection, like all these different things that are real examples of very prevalent systems that are just like there are attacks and hacks that are being shown. So it's very, very pervasive in the day-to-day tools that everyone uses across midmarket and high enterprise. And it's not something that anyone can just shove under the rug at this point. And I think to your question, the second question is kind of like tied to that because

 

As I was saying earlier, some of the older approaches around, well, I have some network firewalls or endpoint protection, and hence I should be protected, it does not stand anymore. So I'll give you an example. A lot of companies are now adopting different types of AI IDEs or co-pilots for coding,

 

So all these AI IDEs, so to say, now come with so many MCP servers. all these AI IDEs have connections into these servers that were just coming out in the open source. And there are thousands of these servers now. And these AI IDEs are running on developer laptops.

 

making all these MCP connections to critical data, as well as having connections into all these remote servers. So, And all that is happening from... ⁓

 

these layers that ⁓ are not possible to be caught by older technologies, which is where you need to find kind of the new ways to put the right guardrails and the right controls. Because this is a very different way that you are interacting with these AI models and AI systems and building out these agent-take workflows. So that's why some of the ways that we are approaching it is like bringing

 

Samara Lynn (20:16)

By we, you

 

mean Operant.AI Yes.

 

Vrajesh (20:19)

Yes, so let me talk about Operant.AI for a minute. And I think as an industry, what I was going to say first was as Sarfraz was saying, we're evaluating the entire architecture, but also that evaluation needs to be happening super fast. It's not something that you can take time on.

 

And what we are seeing is that in terms of a lot of these CISOs that are under pressure, they are coming with this attitude of like, do I enable this? Because they do want to accelerate the growth, support the innovation and not be like, ⁓ this cannot be done. And so the way to enable that is able to start enabling these use cases, enabling this innovation.

 

by protecting these systems in real time, in the runtime. So think about it where like at the actual interaction of these prompts, like what kind of poisoning attack might go into a model or what kind of response the model is giving out, you can start setting guardrails and controls and all these fine-grained interactions. We have two products that specifically target this

 

AI Gatekeeper is the first one that goes into AI application protection, as well as MCP Gateway, which protects all these different MCP interactions and agentic interactions. And in doing that, we bring a 3D approach, which is discovery, detection, and defense. And all these things need to come together in a single platform, because now you need to be able to discover the entire footprint, your shadow AI,

 

all the different model footprint and interactions that are going on, and then do live detections of these poisoning attacks. And not just like, you know, pump injection or, you know, different types of model theft, et cetera, but being able to detect what kind of sensitive data is passing through, what might someone be able to be successfully do run some injection or do some cross-site scripting or something like that, and be able to start exfiltration.

 

So we can detect that in these AI systems in real time and start defending against it. We can stop those calls or even redact that data in real time so that these AI models don't get a copy of your sensitive data, right? So as Sarfraz was saying, there are all these HR systems that might be ⁓ talking to a chatbot. And if you can eliminate all the sensitive data going through this model,

 

then you are much more protected. So you're supporting the innovation and accelerating that growth, but still protecting the crown jewels that are critical to your business.

 

Sarfraz Shaikh (22:47)

Well said. Just wanted to add a little bit there. Samara was, you know, as a midmarket we have limited security budgets right? Our focus on security spend is those core IT risks. Ransomware, phishing, you talked about that, endpoint security. So these AI specific threats are still not prioritized.

 

Samara Lynn (22:48)

Bye.

 

Sarfraz Shaikh (23:09)

⁓ So we depend a lot on these third-party AI vendors, right? We rely on something because we don't want to reinvent the wheel. We just do not have those resources. We are looking for those SaaS platforms with embedded AI, open source model or pre-trained LLMs, APIs, things like that, right? So that makes us so dependent on the security hygiene of the vendors.

 

Samara Lynn (23:19)

right.

 

Sarfraz Shaikh (23:35)

So it is very critical for us as the consumers to make sure you have those in depth discovery sessions as you're picking those vendors.

 

Adam Dennison (23:45)

question

 

is, if someone like Fraz were to engage with an organization such as yours, is this an add-on to what they currently have or is this something that can replace things that they currently have so he doesn't have to add budget and he doesn't have to add tools and add security sprawl to his organization?

 

Vrajesh (24:02)

That's a good question. It depends. For a lot of organizations, they are in different phases of adoption of these tools and technologies, right? So often we kind of see three different phases of adoption where first they might have some developer.

 

based experimentation going on, whether it's on the AI model side or MCP side, and maybe they are using AI IDEs. But then they move into the phase two, now it is clear that they need to use some AI models or AI IDEs for business critical use cases. where...

 

you might have ⁓ the AI and agentic workflows

 

Samara Lynn (24:39)

There we go.

 

Vrajesh (24:42)

And you might have AI and agentic workflows going on. So in these different phases, someone might need to think about like, well, I need to start investing in this now. If you start early and like do it right, then you don't need to like redo a lot of tooling. Often this would be an add-on, but once you start, at least our approach with Operant is that

 

we can solve so many different use cases for AI security, MCP security, API and Kubernetes security all together under a single platform, which is the unique solution that we are bringing to market and which allows for displacing some of the other tooling that you might have.

 

Adam Dennison (25:18)

We only have a couple minutes left. Samara, do you have any final wrap up questions for these gentlemen?

 

Samara Lynn (25:25)

I just, maybe this is for both of you. VJ what is the worst-case scenario that could happen to an organization with an AI poisoning attack? And Fraz, is this something that you and your peers are talking about, things that you're concerned of, as far as your other million security concerns you have to deal with every day?

 

Vrajesh (25:44)

I'll take first. Worst case scenarios. There are many different ways that this can go. I imagine like a worst case scenario could be, ⁓ you know, some agent system getting into a lot of these critical infrastructure and, you know, FinTech organizations and creating false identities and false bank accounts

 

I don't know, I'm just like coming up with one, but like there are so many different things that can happen. The scale and speed of this is like really something that ⁓ folks need to think about because within minutes, right? Like it's like seconds or minutes that these AI agents can reach a lot of your data and start exploitation and can affect the business in a dire way. And so it's no more like, I'm going to depend on my SOC

 

and take a week to respond to something. It's something that is happening at super fast speed and scale that needs runtime, real time protection.

 

Sarfraz Shaikh (26:39)

Yeah, from a user standpoint, worst case scenario of an AI poisoning is that silent undetected compromise, right? I talked earlier about sensitive information, even with a use case of internal misalignment, not to mention if you fall target to something that you have not thoroughly vetted.

 

Imagine not only the financial damage to companies, the reputational damage that this can bring. That reminds me of Grok recently having some responses ⁓ that were so anti-Semitic and it was just caused so much chaos. So imagine how it can directly impact the reputation of the company that you are in.

 

Vrajesh (27:15)

So.

 

Sarfraz Shaikh (27:28)

is very critical for all, and it should be very critical for all CISOs out there. It should be a constant conversation. As I've mentioned before, there's a huge focus emphasis on, are we on that

 

AI train yet? We need to slow down. We need to take a deeper look and make sure the approaches that we are taking are well measured.

 

and there is a return on investment and at the biggest level, there are not these security gaps that we overlook.

 

Adam Dennison (27:59)

Well said. I think just like the ⁓ hit song from I think the 90s, more money, more problems, more AI, more problems is something that we're seeing right now. I want to thank VJ, I want to thank you and Fraz for joining Samara and I on Ready.Set.Midmarket! uncovering certainly more

 

Vrajesh (28:07)

Thank

 

Adam Dennison (28:19)

⁓ potential threats that are out there with AI and more importantly though how to potentially combat them through strategy and technology and tools. So with that, I want to say again, thank you so much. Thanks to our audience for joining us and we'll see you on the next Ready, Set, Mid-Market.

 

Sarfraz Shaikh (28:33)

Thanks