On the 6th episode of Enterprise Software Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Rahul Naroola, Chief Information Security Officer at Dover. Dover is a Fortune 500 global manufacturing company with over 25,000 employees and $8 billion in annual revenue. In this conversation, Rahul shares his thoughts on AI-generated attacks, applications of AI to boost productivity, and realistic expectations for the future of AI security tools.
Quick hits from Rahul:
On the promise shown by AI cyber tools: “You should be able to say, ‘I want you to tell me when there is a login attempt that happens that's unusual.’ The AI is supposed to elastically figure out what that is, right? So that tremendously helps, where we are not looking at a finite set of possibilities, but a much broader set of security possibilities. I think there's promise there. I think in the email space, which constantly morphs, there's a lot of promise there.”
On the potential impact of AI on cybersecurity teams: “We can always teach technology to folks that are in your SOC or on your team. But if they don't have that business-centric mindset, they don't understand the business that you are in. Like, we're in manufacturing; they don't understand how we actually make stuff. Who are the people they need to interact with, and why?
Getting an AI-savvy person won't really take you that far. For me, we are not going to see changes in the team in terms of personnel, for the most part. I think we have to really invest in training our teams so they can understand and recognize these new emerging technologies and be savvy on how to use them best to their advantage.”
On AI’s impact beyond security: “I don't think it's just security. It's going to have a big impact on analytics and how analysis is done across the businesses, which honestly, from a SOC perspective, they're running a large data analytics engine anyways, that's really what they're doing. So I think both the data analytics side on the business side as well as on the security side, is gonna benefit with sort of the same core principles that AI offers.”
Recent Book Recommendation: The Metail Economy by Joel Bines
Evan Reiser: Hi there, and welcome to Enterprise Software Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how the threat landscape has changed due to the cloud, real world examples of modern attacks, and the role AI can play in the future of cybersecurity.
I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike Britton: And I'm Mike Britton, the CISO of Abnormal Security.
Evan: Today on the show, we're bringing you a conversation with Rahul Naroola, Chief Information Security Officer at Dover. Dover is a Fortune 500 global manufacturing company with over 25,000 employees and $8 billion in annual revenue.
In this conversation, Rahul shares his thoughts on AI generated attacks, applications of AI to boost productivity, and realistic expectations for the future of AI security tools.
Evan: So maybe to kick it off. Um, Rahul, do you mind sharing a little bit about what your role is at Dover?
Rahul Naroola: Yes. I'm the Chief Information Security Officer at Dover. I've been in this role for a little over three years, but my background at Dover goes back further than that. I helped create the internal audit department, specifically for IT and cyber.
Now, when I took this role, cyber, this was back in 2011, cyber wasn't really on a lot of people's minds, especially not a lot of boards' minds back then, and where things started to change was with that Target breach that happened. You can imagine Target has a lot of suppliers, and a lot of boards got very worried about that credit card breach through the HVAC system. Really, it could have come from anywhere, but given how large of a customer they are for a number of companies, a number of suppliers, they started to ask a lot of questions of, "Hey, how are the systems that are sitting in our stores secured?" And a number of boards really started to look at cyber as a top risk, or getting up there in risk, and started to take that seriously. The first place they generally started was with the internal audit teams.
They said, look, uh, internal audit, you're gonna have, especially the IT auditors, your fingers on the pulse of a lot of operations. Can you help explain where this stands? Now, at Dover, that was me, so I had to explain what cyber really means, and I had a little bit of an advantage 'cause prior to joining Dover I worked at KPMG, and that's a lot of the work I did with customers, on the cyber side of things. So, that was at a very basic stage. These days, cyber has really evolved, right? Now you have a lot of tools, a lot of players. There are many, many ways to attack any sort of infrastructure, any operation. It causes widespread damage. Ransomware, back then, wasn't even that much of a thing.
It was maybe a concept back then, but now it's become big business. A lot of cyber attackers have full-fledged HR departments, finance departments; they're full operations. They have their own IT service desk, right? So I guess they need help setting up their computers too, and they break keyboards as well, and things like that. So it's fascinating, the world that we live in these days.
Evan: A lot of the CISOs I talk to, right, they've been in the role for a couple years. They've been at that organization for a small period of time. One thing that's unique about you, right, is you've been at Dover for what, like 12 years. You've actually been there through an evolution of technology. I think the biggest trend of the last, you know, decade, right, when it comes to IT and security, is the shift to the cloud. Right, and I imagine 10 years ago, probably all three of us were sitting down at our desktop workstations using our desktop apps, you know, in our office. And now it's a lot different with, you know, COVID and remote work and this explosion of SaaS applications.
Right? Everything is kind of web-based. Can you maybe describe how the threat landscape and your security program have changed over the last 10 years, as you've gone from this kind of on-premise, in-office, desktop world to this now anyone-can-connect-to-any-app-anywhere-in-the-world, access-to-any-data world?
Like, how is that, uh, you know, different today? And maybe there are some examples of threats that you see today that just weren't around 10 years ago? What do you have to think about?
Rahul: We kind of still play in that mixed bag. I think many companies do. Anything new they're deploying they're trying to do it in a SaaS model.
The advantage with SaaS is it's high availability, it works practically anywhere in the world, it gets constantly updated and patched, and it just looks nice. It just looks more modern and fresh for the end user. The experience is really nice. We leverage, and I think a lot of companies are starting to leverage, cloud technologies, but they do end up, in a lot of cases, costing more than your on-prem version. Right, on-prem starts to depreciate and it's a very known beast, whereas with cloud you can have a lot of changes that can happen on the backend. So we have to deal with that from a security perspective. Okay, it's not as controlled an environment as we would like, but it's a very high-availability and very user-friendly environment. So we have to deal with that, security-wise.
Mike: So there's been so much that's changed over the last few years. You know, you mentioned the explosion of SaaS apps, AI technologies, and so on and so forth. When you look at the risks that are gonna occur over the next five years, what are some of the investments within security that you feel would be disproportionately valuable?
Rahul: So we can spin up new instances in a SaaS app. Let's say we are providing a service to a customer. We can spin that up very rapidly. Classically, when you had servers, you'd have to configure each one, one by one. With SaaS, the investment that a number of companies make is in that initial configuration setup, so it's an out-of-the-box setup, secure day one, right from the very start.
If our colleagues in the security space can start to make that investment upfront in the design of these, I don't wanna say apps, but at least the setup, the network design, even multi-cloud design, right? Don't just rely upon one vendor in one cloud. You really want to be in a multiple-cloud environment.
That, I think, goes a much longer way than trying to go after each of this very fast-growing set of apps that the business demands; you'll never catch up. We have, and we continue to, investigate and invest in that initial design, that secure-by-design phase. And the other question was AI, right? AI can be leveraged even for security controls.
You know, we almost have to fight fire with fire. If our apps are using artificial intelligence to deliver products faster, or deliver, uh, results faster to whoever their users are, security has to be in lockstep with that at some point in time. You know, we use, uh, SIEM tools like a lot of people do, and people have to run their queries and searches.
They've got to know a kind of almost-SQL code to write them. AI tools can really help with providing far more precise information, as it gets better, in a manner that is much faster than what a human being would have to do and spend hours doing.
Evan: So why don't we follow up on this AI topic. I think, you know, every IT strategy conversation right now seems to revolve around AI.
I think everyone's thinking about, hey, how can we use AI to improve our productivity, improve our customer experience, right? It's kind of, you know, maybe more of an offensive kind of approach: hey, how do we get better? I imagine there are criminals out there that are thinking the same thing, right?
Where they're like, hey, this is amazing. I no longer have to write really good English grammar to send a phishing email. Or I can use, you know, ChatGPT to do, you know, social engineering. So how do you see criminals taking advantage of some of these AI technologies, and what's the implication for different security programs?
Rahul: Well, a lot of security training, from companies like KnowBe4 and certain others that provided it, is based upon looking at grammatical errors, sentence structure errors. We'd have the same training like everybody else does, 'cause it's part of a package, right? As AI technology starts to be used, even by criminals, that training will need to evolve.
I think a lot of companies that provide this sort of training still need to provide it, don't get me wrong. It's just not everyone's using AI yet, but I think more and more threat actors are, and as I said before, we've gotta fight fire with fire. You've gotta be able to leverage the same set of tools, where it's feasible to do so, in detecting these things and even preventing these things.
AI is not perfect. It has its flaws, it has its drawbacks. I know the more you feed it, the better it gets. But when it comes to feeding company-confidential information into AI, we have to just be very careful with how we manage that and what really goes up there to be ingested and improved upon. The threat actors know this as well, so I think they're gonna start to use AI in multiple ways. It's not just gonna be for phishing. It's also going to be for making certain documents and, if they want to copy a company, and people see that kind of stuff especially with websites and things like that, they'll be able to leverage these tools and make a far more accurate set of sites for people to land on and, you know, be fooled by.
So I think the more we can leverage artificial intelligence, not just for productivity reasons but also for security reasons, over time it'll serve us better. But it's a cat-and-mouse game, no matter how we look at it. We have to really see where this evolves to.
Mike: So when you see these things that concern you, like AI-generated documents or websites, what are some things that worry you within your own organization or keep you up at night with regards to generative AI?
Rahul: There's a couple of things. Let's take an example. My supervisor asks me, hey, I have this 50-page document. I want you to summarize it into, I don't know, three or four slides. Now, I can spend hours reading the 50-page document and all the nuances, and then try to compress it the best I can into those three or four slides and hand it off to my boss and say, this is it.
He kind of looks at it, reads it, and he's like, hey, this is, uh, work that he knows I produced. But he can look at it and say, hey, you know what? You have some errors here. I don't think you interpreted this right. Go back and redo it. But that's on me. If I use AI to do the same thing, it's far more efficient, right? People are using it this way. That AI tool is now provided by my company to me, so I'm like, you know what? I'm gonna take this 50-page document. I just have to summarize it. I'm gonna squeeze it through AI, whether that's ChatGPT or what have you. It'll produce a result. I'll hand it off to my boss, and he's gonna read it.
He's gonna say, hey, there's a bunch of errors here. Now, as an employee, I can go out and say, yeah, but that's the tool you gave me as a company. Right? So your tool doesn't work. Don't blame me for it. You asked me to use it as a corporation, as an example. That, we haven't sorted out yet. These are questions we have asked a number of AI experts in the industry, I'm not gonna name them but I think you can glean who they are, and I don't think the legalities of that have been sorted out. The other legality that hasn't been sorted out: it produced that document. Who is the author of it? Who owns it, right? Whose name's on it? That's not exactly clear. So if it produces erroneous information, can I go and sue that AI company?
I don't know. If somebody goes out and generates a bunch of information from the internet that the AI produces... um, let's keep it simple: a recipe for a dish. My mom used, uh, OpenAI just two days ago, so she was pretty excited. So I'm using this as a point of reference. She asked for a recipe for a dish, and the chatbot produced it, but it didn't cite any sources of where it got this from.
Those recipes may have been copyrighted. We, the consumers, don't know that it's copyrighted, 'cause there's no indication of it in a lot of cases. So if we want to use it, are we in violation of some copyright? These are questions that swirl around in our heads and in meetings and conferences that we still need answers for, and I think at some point some regulatory body has to step in and provide some guardrails for the use of AI: where we are okay to use it, what our liability limitations are, and where we may wanna look at this with a skeptical eye. And that also requires a lot of user education. You know, this thing hit us so fast that we, as human beings and as governments, just haven't kept up and caught up with the evolution of technology, especially when it comes to these generative AI technologies.
It's very helpful. It just needs some guardrails on either side.
Evan: So, I totally agree with you on this theme of, like, it hit us really fast. I feel like we're on this crazy rate of growth. And so, as you imagine, you know, your comment earlier about these AI-generated documents, or AI-generated LinkedIn profiles or webpages, like, when we're a couple years in the future, right?
And we're even further down that exponential curve, how's that gonna, like, change, you know, cybersecurity? And I'd love to hear your thoughts on, you know, where you see AI actually delivering because the capabilities get so much better, and where you see there's maybe more hype than substance.
Rahul: In the cyber world, I think things that take a lot of human effort are where we'll see improvements. So, for example, I already mentioned SIEM before. In that space where you have to write correlations and queries, we have to manually write them today and say, we are looking for these things, and alert me when you see a particular incident happen. With AI, you should be able to put in a threat-actor query, right?
So you should be able to say, okay, I want you to tell me when there's a login attempt that happens that's unusual. The AI is supposed to elastically figure out what that is, right? So that tremendously helps, where we are not looking at a finite set of possibilities, but a much broader set of security possibilities.
I think there's promise there. I think in the email space, which constantly morphs, there's a lot of promise there. Keeping up with the millions of emails that we, at least, get on a daily basis, we can't do this without the help of technology and without the help of some sort of artificial intelligence that morphs its engine to look for the latest threats over a period of time.
Mike: So within the context of the cybersecurity workforce, what do you think the impacts to your team will be over time? Do you see some jobs potentially shifting or do you see looking for more AI savvy type of security folks? Like, kind of, what do you see it doing to your cybersecurity workforce?
Rahul: You know, we have to train them. We can get an AI-savvy person or set of people, but that's gonna take some time. Nobody has that training right now. They're going through it at this stage. It just changes so quickly. There's no replacing a person that understands your business first.
We can always teach technology to folks that are in your SOC or on your team. But if they don't have that business-centric mindset, they don't understand the business that you are in. Like, we're in manufacturing; they don't understand how we actually make stuff. Who are the people they need to interact with, and why?
Getting an AI-savvy person won't really take you that far. For me, we are not gonna see changes in the team in terms of personnel, for the most part. I think we have to really invest in training our teams so they can understand and recognize these new emerging technologies and be savvy on how to use them best to their advantage.
Evan: So, you know, when you think about it, we're riding these crazy exponential trends. It's very hard to imagine what the future looks like. I remember when we were talking about AI five years ago, people would laugh us out of the room because it felt too science-fictiony. And now some of these things that felt science-fictiony five years ago are feeling much more real today.
So my question for you is, um, you know, what do you think is gonna be true about AI's future impact on cybersecurity that maybe some of your peers would consider science fiction today, but you have, you know, some conviction will actually, you know, play out and become real?
Rahul: My thought is, for your SOC, your managed service providers, the people that are 24/7, I think a lot of that work could be done with huge assistance from AI. You know, today, what happens in a typical SOC, right? You have a person, they have two, three, four screens, they're looking at 20 different consoles, and they're looking for any blips and blops that happen.
I think AI can make that work a lot easier and a lot more streamlined for them. Now, our SOCs and MSSPs, anybody that gets into that level of investment of time and education, and use of these kinds of technologies to make these tools far more effective, far better to use, and far more user-friendly, bringing it back to the cloud side of things as well, they're really going to have a big impact in the marketplace. Previously, you would see this in movies, right? You'd see some guy there, he's chewing on a toothpick or chewing gum, and he's looking at all these screens, and then you see some fancy graphic that is, of course, totally Hollywood-made-up, but they can quickly just type in a very English-like query and, boom, up comes the answer.
I don't think we are far away from that. People that invest in that sort of conversational-based search, in the security space, would really, really benefit. And whoever comes up with that sort of an interconnected SOC, the company that uses that technology and that sort of SOC setup, would tremendously benefit, catching these threat actors earlier and earlier in their attack pattern, rather than much later down the line, when, you know, the cat has sort of left the bag by then.
Evan: You know, I'm with you. And I think that's also an exciting world, not just because the productivity of the team increases and the efficacy of security operations increases, but also because it enables kind of a new cybersecurity workforce, right?
You can actually be a very productive cyber defender without being an expert at analyzing logs, right? And so I think the ability for people to use these AI tools to kind of bootstrap, or remove some of the technical hurdles, to enable them to actually, you know, do the role and apply their judgment, um, that's an exciting future, so hopefully we'll be there soon.
Rahul: One of my close friends, he does data analytics. He spends months setting up analytics so that basically a user can just hit a button and the analysis happens across the whole data lake. With AI, a sentence could be written by a user: I am looking for any place where we had more than a 20% variance in my financial accounts over the last quarter.
You write that up and it produces the chart and the graph and the colors and the conditional access, all that stuff, right? So I don't think it's just security. It's gonna have a big impact on analytics and how analysis is done across the businesses, which honestly, from a SOC perspective, they're running a large data analytics engine anyways, that's really what they're doing.
So I think both the data analytics side on the business side as well as on the security side, is gonna benefit with sort of the same core principles that AI offers.
Evan: I agree. Okay, I know we're basically out of time, but we normally like to end with a quick lightning round. Maybe just give us, like, the one-tweet answers, and we'll fire off our three lightning round questions. Mike, you wanna go first?
Mike: Sure. So what's the one nugget of advice you'd give to someone stepping into their first CISO role?
Rahul: Make friends with your legal department.
Mike: That's a good one.
Evan: What is the best way for a CISO to stay up to date with some of these new security trends and technology trends?
Rahul: You have to attend conferences with your peers. That's where you get a lot of the thought process coming in and seeing where everyone else's experiences are 'cause otherwise, you're in your own bubble. You gotta learn from others.
Evan: Rahul, so great to see you again, and really appreciate you making time to chat with us. And, uh, looking forward to talking more about the future of AI, uh, next time we chat.
Rahul: Thanks for having me. Great talking to you guys.
Evan: That was Rahul Naroola, Chief Information Security Officer at Dover.
Mike: Thanks for listening to Enterprise Software Defenders. I'm Mike Britton, the CISO of Abnormal Security.
Evan: And I'm Evan Reiser, the CEO and founder of Abnormal Security. Please be sure to subscribe so you never miss an episode. You can always find more great lessons from technology leaders and other enterprise software experts at enterprisesoftware.blog.
Mike: This show is produced by Josh Meer. See you next time.