In this episode, Adam Dennison and Samara Lynn discuss the emerging concept of shadow AI with Peter Garraghan, co-founder and CEO of MindGuard, a security company that offers red teaming for AI vulnerabilities The conversation delves into the implications of AI used without organizational oversight, the risks associated with shadow AI, and the role of security teams in managing those risks. Peter also shares insights from a recent survey on AI usage among cybersecurity professionals, highlighting the need for controls and visibility in AI adoption.
TRANSCRIPT
Adam Dennison: Hello and welcome. I'm Adam Dennison, vice president, Midsize Enterprise Services with The Channel Company. And welcome to joining us at Ready.Set.Midmarket! This is MES Computing's podcast that we do with all things geared toward midmarket IT leaders. I'm joined by my co-host, Samara Lynn. She is the senior editor with MES Computing. Hi, Samara. And we're thrilled to have with us today, Peter Garraghan. He's the co-founder and CEO of Mindgard. Hi, Peter.
Samara Lynn: Hello
Peter Garraghan: Hi. Good to meet you.
Adam Dennison: Alright, let's get things started. So, the general topic that we want to discuss today--everything is centered around AI, pretty much in high tech right now. We want to take a little different look at it and talk about shadow AI. It's a term that's definitely picking up some steam in the market. We all have heard of shadow IT for a number of years, but we want to talk about it more from an AI perspective. And Peter, we'd like to start with you.
I know Mindgard plays in that space. I also know that you recently have published a pretty sizable survey in the market to some security executives. So, if you might kind of get us started in terms of what is your definition and view of shadow AI and then maybe give us some feedback around that survey and kind of some of what your findings were and then we'll kind of go from there.
Peter Garraghan: Sure. The reason we did this survey is because in nearly all the chats I've had with CISOs and IT practitioners, there's always been the compromise saying, ‘I don't even know where the AI is half the time,’ which is interesting because that's not reassuring. And I kind of thought, let's actually talk to vendors, practitioners, people who own security risks in organizations about this. No, that's why they said we need the survey. But let's take a step back.
I think the main thing I wish to talk about AI is it's still software and hardware and data. It's a bit new there. And it's great that you mentioned shadow IT because shadow IT has been a problem for a long time. I want to know what assets. I want to know what it's doing. I want to make sure it's under observation. I can have controls in place. AI is no exception. So, in this case, shadow AI is the existence of AI assets within an organization which has no visibility from different practices.
Organizations in question. Very, very umbrella. Maybe I'll dive down into some concrete examples of what that looks like. We can first start with the simpler example: User is using ChatGPT, private chat. They purchase on their own credit card or debit card. They have no visibility to the organization itself. These are things such as i've spent my own ChatGPT session or a comparable vendor.
Adam Dennison: Yeah
Peter Garraghan: And I'm asking it things like, ‘Hey, take this email that I've used my account login to and actually summarize it for me if need be.’ That can be problematic for lots of reasons, because you're giving information to a model that [IT] has no control over. So that's actually one of the biggest ones we've seen in terms of, yes, people using private AI sessions. And even the security [pros] themselves are doing it, which is also quite funny. And ... developers using it to generate code. That's kind of one slice what you look at.
The second slice is people pulling AI libraries’ components into development workflows. This could be things such as, think of a code repo that your developers have used. They're calling open AI wrappers to call external services without actually having oversight actually what they're calling in that system. That can be problematic for lots of reasons in terms of yes, controls, but also just one of my callings, it's software instead of license. And the third on which I'm sure we'll talk a bit more detail about is it is mentioned in the survey, but I think it's probably the more worrying one in the future is supply chain. What's stopping, let's say I'm a big, I'm the mid-sized, I'm the midmarket, so I'm a big company. I have a third-party vendor, which I've been using for maybe a couple of years now, they're pretty useful. And they so happen to update their team in terms of service or just push an update telling me there's now an AI inside their software, which I have no visibility upon. That could be a whole but a knack or the system talks another AI system if need be.
All these three examples, which is, your employees are using AI without you and putting information that you shouldn't be doing. It could be they're using AI components or AI libraries to call them to do my bit of my products or capabilities. Or I have vendors using AI without actually notifying me or it's being pushed, the update without actually having visibility about it.
Adam Dennison: Sure. So one of the questions I have around and whether you saw it come through the data or you from your customer set and discussions that you have, you know, I spent a lot of years over at cio.com and we would, we would do, you know, research around shadow IT and we would always ask what department is the most guilty and everyone thought it would be, you know, marketing and marketing certainly it was in, you know, usually in top five.
But I will tell you that the security team was in there as well. Folks that are supposed to be thwarting it and teaching people how to avoid it were guilty of it. And I think I saw that come through in the data that you had released as well. What's your perspective on that in terms of the very people that are trained to protect an organization and to train the organization and the people to do the right things that are actually some of the ones that are, you know, out there creating these instances as well?
Peter Garraghan: Yeah, so I think it's nearly 90 percent of cybersecurity people are using AI in their workflows, but only like 30-ish percent actually have controls in place. There's a bit of discrepancy here happening. I was on a panel of security practitioners, and I said, ‘Show of hands, who's using AI?’ Everyone shot up. I said, how many of them actually have controls in place? And suddenly just most of hands went down.
The cat's out the bag to some extent, which is they're using it because it's useful. And they seem, hey, I can use benefits about this. And it seems like software, and it is still software at the of the day, why shouldn't I use such a thing? I think that one of the biggest problems in this space is people have a habit of anthropomorphizing AI. It says, ‘please say thank you’ to me. It writes me in full sentences backwards. It seems like someone I'm talking to. So if I upload my email or document, say hey, summarize it, but please be confidential about this. say, sure, of course they will do. It's just generating text and sending it back to you in that case. A replacement of AI with an external service I've logged into. Would you upload documents to it? Probably not. AI should be exception. But the security teams were seeing the same thing, which is it's useful. It talks back to me. So, it seems like it's legitimate. So, if I ask it questions about, I'm trying to summarize security logs for triaging, I can use a foundation model to help me summarize. That's super useful.
But I think one of the things about this is, because that's a classic example, is do I trust a vendor to do that? Probably. I've opened my eyes relatively above board, I would say, in quite a few things. In other cases, probably not. But in terms of help me summarize my data, yeah, I can trust them to try to keep my data reasonable to be able to leak it without them actually knowing. But given it's so useful.
But they have controls that are saying, I should not upload documents that have not been approved by a vendor or be detected in the first place. What happens is that information contains this information [that] should be flagged and blocked. Given none of those controls that you apply to the AI tools the security teams are using, that's pretty problematic because they're not being tracked. The problem isn't necessarily using the AI to help you do the job. That's a different can of worms. It's actually just knowing in the first place someone is using it, using it knowingly, yes I've signed up an account, or unknowingly, is yes I'm using an asset which I've no idea I own or not.
Adam Dennison: Yep. Samara, you have a question you want to jump in with?
Samara Lynn: Yeah, you know, one of the things that, you know, and I covered this, the survey and the information your company gave, Peter. And one of the things you talked about, which I thought was such a great quote, was you talked about that this is like a real risk right now. You know, it's not something that we need to think about for the future. What is the number one risk that you feel is associated with shadow AI for organizations? Is it loss of intellectual property or is it somehow a way for threat actors to have another vector into an organization?
Peter Garraghan: Good question, [I] think it's depending on use case, they're all relevant. I replaced the word AI with software and data, exactly the same problem manifests. So, for some companies supply to risk is number one, prior to number one, I just stopped this from happening. Number two is threat back into my system. I think there's a few ways you look at this. So, one, is people are putting information into this AI and they don't know where it's going? And that's me knowing I've knowingly signed up to an AI account and put it on the system. That's kind of number one, and there's been quite a few use cases we've seen recently where people upload ... I think there was one company that uploaded the entire patent, all the patents of the organization into the AI foundation model and they shouldn't have. And that hasn't necessarily been leaked, but again, are basically flagged saying, actually cannot do this because there is discussion about how these companies can train their data on this or actually use it against it. That's a big intellectual property risk we've seen a lot of. But I think the one that worries me personally in the most is AIs being embedded into more and more things ubiquitously, it's not just, I have it, I am an AI front end. You can see me very easily. It's when it's buried in logic, like an application that seems perfectly legitimate. It's good, but they've maybe swapped out a pre-baked system with an AI system in the backend, which you have no visibility upon. And that AI, you can engineer it to do things that shouldn't be designed to do in the first place. So, it's not necessarily just a threat act to embedding AI into the software that I can exploit later, like a backdoor.
It is, if I know using AI, there's quite a few clever techniques that allow me to legitimately get data out of it that I shouldn't do in that type of system, which causes huge supply chain risk. But these are all use case specific and context specific. I think the big problem is people are deploying this thing without actually putting controls in place. And even if it was perfectly legitimate, no risks. I can't imagine an organization running software assets that have no visibility upon it. don't know what they're doing. It's just a whole bunch of uncertainty to the organization.
Samara Lynn: I was talking to another vendor yesterday and I think one of the issues is we obviously know about the chatbots, but now there's AI infused into standard legacy software like Salesforce, like Grammarly. So you could be potentially putting sensitive data into apps you've already used as well. I mean, that's a form of shadow AI too that maybe perhaps CISOs aren't thinking of.
Peter Garraghan: Yeah, as I mentioned, the biggest, more the biggest risks I've seen in point number, in pillar number three is existing vendors are updating their software with AI inside it without telling you it directly, ‘hey, I've got AI inside the system or in the supply chain.’ Maybe the vendor, the vendor's now using it for summarization. And I don't know what the AI is and the AI might have a bunch of similarities about it that I can manipulate. So, there's the data expectation problem, which is the model's going to an AI model.
But there's lots of other things you can do about this, which is, is the AI model actually legitimate? Is it from a vendor that I blocked? I'm not allowed to use, for example, open-air modernization, but a vendor is using it in my behest. If it's detected, am I actually culpable? It may be operating in a certain jurisdiction or a geography that I'm not allowed to organize in. All these questions, again, are classic shadow IT problems and data security problems. AI is no exception to it.
Adam Dennison: So, I have a question and we definitely want to get into to Mindgard how you, how you help solve these problems. But I look at it as there's sort of two sides of it, right? You've got your, your sort of behavioral side of it. And then there's the, there's the technology, you know, process side of it. How, how can an organization enforce the rules, mitigate the risks when it's so pervasive?
And, you know, you just look at an organization like ours, where, you know, we're a midmarket organization and our, our executives are saying, you know, try things, we feel like we might be behind in AI. So try things, try and fail, you know, see what we can do, get some small wins, iterate off of it. And so now you have a whole team of people that are going out, trying different things, you know, talking to different vendors, free or not free and, and trying to pull some things together to get out ahead and in front and make our lives easier, you know, hopefully eventually make money off of some things. So, what's your advice to the, know, try it, get out there, you know, get in front of it, but let's make sure we do it appropriately.
Peter Garraghan: So I think as an analogy, let's think of comparable technology. So, AI is an enabler to do stuff, but also makes my job easier. So, remember when things like Dropbox and OneDrive and all these hosting services were flourishing and I'm in organizations that's using it? People signed up their own accounts and put it in there, dumped it. Great. Fantastic. What happened over time is saying, no, no, no, no, no, you can't do that anymore. You have to use the organization-approved one and where [there] could be enforced information.
Adam Dennison: Absolutely.
Peter Garraghan: It's a combination of just education, but also just saying, ‘hey, there's already [an] existing account here that's used this type of thing, and you can't put this in the system.’ I can imagine AI being the same way, which is, they'd be encouraged. People use the cloud storage accounts because they're super convenient. Dump it all on there, don't worry about it. But nowadays, most organizations have a place, and it became easy to do so. I suspect with the use of AI, there should be a set of approved AI or technologies and models and things that are already conveniently there for them to experiment with. So, it doesn't have to rely on the users themselves to do their own research, to make their own accounts, set up and build their own workflow. If you have an organization that is enforcing your staff to build their own workflows from scratch, of course they go off and find their own type of tools. If an organization now has some sort of practice in place that makes it convenient by saying, well, one balance is yes, get your own organizations to build your own workflows, chat with AIs with a risk versus here's already your playground and here's only a couple of things you need and it's already convenient. Over time that is like, you know, it's like the carrot, it's like the honey from which, yeah, let's go for that one instead. So, I just spent over time just combination of education for saying, it's not a good idea to put my documents in this system. It's convenient, the organization supports me to actually get AI projects underway. I can, want to use their approved toolkit and the sandboxing. [I] can innovate sufficiently. And the third one is just, if I think about it logically, yes, I shouldn't be putting stuff online.
I shouldn't be doing [that] in the first place, the AI should be no exception as the use cases get more mature. And the last one is, and the tools in place will mature, which means we can actually say, I'm not allowed to put this thing online, or I shouldn't do it because I can't understand.
Adam Dennison: I think that's a great analogy using that other category from, I mean, that's 10-plus years ago now, I think that when they all sort of popped up. And I happen to know also that I know Dropbox did it because I spoke with them and I'm sure some of the other ones, but part of their strategy was to go out and talk to an organization and say, ‘Here's where all your Dropbox accounts are that you don't know about. So why don't you talk to our sales team and let's set you up on an enterprise level program and make it all above board?’ Sometimes I get successful and sometimes it's not really. Can you talk to us a little bit about, let's get into to Mindgard and tell us a little bit about the organization. I wasn't that familiar with it till I read the article and did some of my own research, but what was the market need when you launched, where are are you at now, where you kind of taking things and how can you help this problem out there?
Peter Garraghan: Sure. So beyond being the CEO of the company, I'm also a chair professor of computer science. I specialize in AI security and system security. So about 10 years ago, I am thinking, I'm also sure if current security techniques work against deep neural networks, which is we talk about AI, talk about deep neural networks specifically. Fast forward to today, I have a company in this space, raised a bunch of money. Fundamentally, I built this company to make positive changes to social good, AI store software.
All sorts of vulnerabilities, AI has the same risks. And we're finding more companies and teams are saying, ‘I don't know where the AI is. I might have some inklings of where the AI is. I can't even test it. So how do I actually do anything about this?’
So, what Mindgard does is we have what we call an ultimate AI red teaming platform. What this allows you to do is that you can connect to different targets and actually run a whole bunch of tests against the assets to determine is it AI?
And the second one is actually what vulnerabilities can I find the first place in this? Because the two part there is, yes, finding AI. And point number two is then what can I do with AI? Our platform really is dedicated to allow developers and security teams and data scientists to say, what are the risks with using this AI if I put this thing into production? Or if I'm using a session, what can actually go wrong? And also, for that, we have a system that allows us to find these vulnerabilities in the system itself.
Adam Dennison: What's your primary target market? Company size or verticals that you... I assume you sell internationally?
Peter Garraghan: Yep, we sell internationally, we work across different industries. So, from like pharmaceutical to financial to entertainment, that's basically anyone's deploying AI in any capacity from enterprise down into midmarket, we kind of actually cater for. Fundamentally, if you're an organization that has AI projects underway or in production, you need tools to actually assess and find vulnerabilities. We have that type of tool to do what we call AI Red Teaming, which is the idea of finding vulnerabilities in the first place. And then it allows you to do controls and actually put controls in a system place to fix it.
Adam Dennison: Samara, any more follow up questions from your side.
Samara Lynn: Yeah, just really quickly, Peter, you know, I was reading about Mindgard and I love the focus on AI red teaming. know, red teaming is not a new concept, but I think the focus on AI is relatively new and emerging. But how can like a midmarket CISO or CIO use your company's capabilities to kind of put their arms around getting a handle of what's happening with AI and what are the risks in their organization?
Can you give me a specific example, like how a customer is using it for?
Peter Garraghan: Sure. So, as I mentioned, AI is still software. Software has problems inside it. If I cannot even articulate what those problems are, how can I build a control? How do I build a playbook? How do I put controls in place? How do I actually figure out where to look in the first place? So, a common usage of our tool is security teams or product managers saying, ‘we have AI projects. We need ways to evidence risk.’
Or maybe some cases a lack of risk, that's a good thing, or make sure we're compliant for onboarding or EU AI Act, all these things in place. They can use our solution to enable them to actually find those things, to do a whole bunch of tests to say, yes, here are the risks we have with this AI.
Example: let's say your organization with an AI chatbot, you want to go live. Great, that's a good idea. You've done all the numbers in terms of the benefits of doing so. Then you might think, okay, are we compliant? And second of all, what are the risks involved? Have you actually done testing? Most will say, no, we've done no testing. We don't know how to do it in the first place.
Our kind of tool allows them to actually do the testing to find those risks of vulnerabilities, make sure they are compliant, and fundamentally make their product much, much better. So, if you're working in midmarket with AI projects, the second question after, is this a thing valuable? What are the problems we can manifest if our customer uses this or uses it internally? Right now, we're finding teams aren't able to answer those questions. And our team are our tool and our people specialize in helping answer those.
Adam Dennison: Got it. We're getting ready to close up here. Peter, thanks again for taking the time. Are there a couple of things, you know, our audiences, like I said, IT security leaders within midmarket organizations, know, kind of one or two things you would want to leave them with ... piece of advice, tips, things to kind of think about as they're navigating through this? And what might that next risk be that they need to be concerned about as well?
Peter Garraghan: I think the two things are, start with the risks they should be mindful of, which is lots of hype about agentic AI and as a whole, about what is agentic AI. Agents aren't a new concept. They've been around for many, many years. AI agents are coming out. So, you should be quite mindful in terms of adopting AI. What do we have right now? A lot of people have LLMs and RAG systems which are going to be agents, [I] suspect, will be later this year, maybe 2026, 2027. When you're building your controls and playbooks on these activities, make sure you have something that's encompassing, because this space changes so quickly in this space. That's point number one. Point number two is replace the word AI with just app or software or data, and you're talking to your teams with questions like, should I put this software online with no testing? The answer should be absolutely no.
Why would AI be any different about this? There's a habit of people anthropomorphizing it to a lot of degree and kind of overlooking kind of basic principles of how do I manage my organizational risk? The main thing I'll say to all teams is yes, just replace the word AI [with] software or data in your head. It makes your problems much easier to handle. Because half the time you might say, ‘oh, it's fine because we're gonna replace.’ The other time is actually no, they don't work. We should think about something carefully about this.
Adam Dennison: Well, we know the famous saying, the only certainties in life are death and taxes. I think we can probably add security concerns from a cyber perspective to that as a third one there moving forward. Again, Peter, Samara and I would love to thank you ... thanks for taking the time. Great to get to know you and Mindgard. And we wish you obviously the best of luck moving forward.
Peter Garraghan: Great, thanks both. Lovely to meet you.
Samara Lynn: Thank you.
Adam Dennison: Take care.