On the 7th episode of Enterprise Software Defenders, hosts Evan Reiser and Mike Britton, both executives at Abnormal Security, talk with Viswa Vinnakota, Chief Information Security Officer at Xerox. Xerox is a foundational computing and technology company with over 20,000 employees and multiple spinoff companies operating at the frontier of modern technology. In this conversation, Viswa shares his thoughts on enterprise adoption of AI, the growing implications of AI’s accessibility, and AI’s impact on the future of cybersecurity.
Quick hits from Viswa:
On enterprise companies and AI adoption strategy: “When your employees and businesses start adopting generative AI or any kind of AI technologies, we should always start with the policy. Security is just one part of AI’s risk, but there are a lot of things beyond security. Privacy issues, data security issues, and the ethical use of AI. So as an organization, it's not one person's job to decide how you need to use generative AI.”
On AI’s rapid accessibility: “AI has been there for many years. The biggest change [in the last year] is it's available to everybody. It used to be more of a privileged thing in the past, where only certain products had the capability to even build AI in, and [now] it's open and accessible to everybody. That shift is bringing new security challenges: how do we mitigate the risks of adopting AI within the organization?”
On AI’s potential impact on cybersecurity: “The speed at which you respond to your cyber attacks is definitely important — what we call defensive AI. You can use AI to defend your organizations, especially around how you analyze incidents and collect threat intelligence. You can leverage AI to generate patterns that help you quickly do your analysis and respond to the threats you're actively facing in the organization. It definitely brings efficiency.”
Podcast Recommendation: CISO Series Podcast by David Spark, Mike Johnson, and Andy Ellis
Evan Reiser: Hi there and welcome to Enterprise Software Defenders, a show that highlights how enterprise security leaders are using innovative technologies to stop the most sophisticated cyber attacks. In each episode, Fortune 500 CISOs share how the threat landscape has changed due to the cloud, real-world examples of modern attacks, and the role AI can play in the future of cybersecurity.
I'm Evan Reiser, the CEO and founder of Abnormal Security.
Mike Britton: And I’m Mike Britton, the CISO of Abnormal Security.
Evan: Today on the show we're bringing you a conversation with Viswa Vinnakota, Chief Information Security Officer at Xerox.
Xerox is a foundational computing and technology company, with over 20,000 employees and multiple spinoff companies operating at the frontier of modern technology.
In this conversation, Viswa shares his thoughts on enterprise adoption of AI, the growing implications of AI’s accessibility, and AI’s impact on the future of cybersecurity.
Evan: Well, maybe kick us off this way: do you want to share a little bit about what your role is at Xerox?
Viswa Vinnakota: Yeah, I'm Viswa Vinnakota, Chief Information Security Officer for Xerox Corporation. My responsibility is building the cybersecurity program for all of Xerox, including the core print business as well as all the subsidiaries of Xerox across the world.
That includes the usual cybersecurity scope of work, including governance, risk, and compliance, the cyber defense organization, as well as product security. I'm also responsible for the security of the products that Xerox develops and sells to our customers, along with the services as well.
Mike: So what are some of the unique aspects of running security for a company like Xerox that an outsider might not fully appreciate?
Viswa: Xerox is known for printers. We've been in this industry for 115 years, but that's not all Xerox manufactures. Printers are the heart, the core, of our business, but over the years Xerox has gone beyond printers and built a large portfolio of services and solutions that we sell to our customers. We're no longer just selling the hardware or the supplies; we've built an ecosystem around our printers, with cloud-based services like cloud print services and Workplace Cloud. On the other side of the spectrum, Xerox goes beyond print and services into IT services, managed IT services, security services, robotic process automation, augmented reality service management platforms, as well as financial services through one of our business entities, FITTLE.
So while Xerox is known for print, the breadth of services we provide to our customers beyond print creates a unique risk profile for the company: diverse businesses, each with a different set of risks altogether. How do you build a security program that fits these diverse business practices? That's the biggest challenge as a CISO at Xerox.
Evan: One thing that I just find so admirable about Xerox is that it has always been an innovator across multiple generations of technology trends. Obviously the big IT trend of the last 10 years is the shift to the cloud. I'd love to hear how you see the threat landscape changing, not just with the rise of cloud software, but also in a business that's so multifaceted and complex.
Viswa: Yeah, we are kind of a hybrid organization. When we expanded our services and started building services around our print ecosystem, we started adopting cloud as a company, and the new services and technologies we build are built on cloud-based services. So we eventually ended up in more of a hybrid environment: a combination of data centers and cloud services.
Now, certain types of risks we see are common to both cloud and data centers, but the cloud also brings a completely unique challenge in terms of the risks and threats that we face in that environment.
How we look into our cloud footprint, and how we understand our threats in relation to our businesses — meaning the services and solutions that we provide and operate in the cloud — is important for us, because that drills down into the specifics of what risks and threats we face in these environments and what kind of solutions apply. Certain solutions can be used in a hybrid environment; others are more cloud-centric. When you look at the world of CSPMs and cloud workload protection, those things are cloud-specific. How do you bring in the right solutions to build that foundational cloud security architecture to protect your businesses? That's the key for us.
Mike: With all the complexity, and definitely wanting to be a business enabler, I'm sure you see a lot of innovation within your business, especially around the use of SaaS apps and AI technologies. When you see these within Xerox, what are some areas where you feel you need to invest a little more heavily in your security program so that you can address the risks that come along with new technologies like AI and SaaS?
Viswa: Yeah, SaaS is a unique challenge by itself. We do not have any kind of control over the underlying infrastructure; it's mostly the application layer and the data that goes into the SaaS platforms that are key. In the traditional operating model, you have service accounts, you have Active Directory, all those things.
When you go into SaaS-based applications, there is a paradigm shift. You have APIs, you have secrets, you have keys, and you have different SaaS platforms talking to each other over the internet. You don't have visibility into who's talking to what, or with what privileges. So with non-human identities, there are two parts to it.
One is obviously your human identities, the people who consume the SaaS services, and the other is non-human identities, where different SaaS platforms are talking to each other. Where are your data boundaries? Who is taking data from where? That creates a unique kind of challenge when you talk about global adoption of multiple SaaS platforms.
So getting visibility into those data flows, the access controls and authorizations, is important when you talk about SaaS platforms. How do we ensure we have full control over who is accessing what, whether human or non-human, and what data is being taken out?
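The non-human-identity problem Viswa describes can be made concrete with a simple inventory-and-audit pass over API tokens: which SaaS-to-SaaS credentials exist, what they can do, and whether they're still used. A minimal sketch, assuming hypothetical token records (real audits would pull these from each SaaS vendor's admin API):

```python
from datetime import date

# Hypothetical inventory of non-human identities (API tokens/keys).
# In practice, these records would come from each SaaS platform's admin API.
tokens = [
    {"id": "tok-1", "owner": "crm-sync",  "source": "CRM",  "target": "DataWarehouse",
     "scopes": ["read:contacts"],         "last_used": date(2024, 1, 10)},
    {"id": "tok-2", "owner": "hr-export", "source": "HRIS", "target": "Storage",
     "scopes": ["read:all", "write:all"], "last_used": date(2023, 3, 2)},
]

def audit(tokens, today, stale_days=90):
    """Flag tokens that are over-privileged or unused for too long."""
    findings = []
    for t in tokens:
        # Wildcard-style scopes mean the platform-to-platform link can move any data.
        if any(s.endswith(":all") for s in t["scopes"]):
            findings.append((t["id"], "broad scope"))
        # Long-unused credentials are standing risk with no business benefit.
        if (today - t["last_used"]).days > stale_days:
            findings.append((t["id"], "stale"))
    return findings

print(audit(tokens, date(2024, 2, 1)))
```

Even this toy version surfaces the two questions from the conversation: which non-human identity is talking to what, and with what privileges.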
And that's where analytics and artificial intelligence come into play. AI is a hot topic right now across the industry, but AI is not new; it has been there for many years. What has changed in the last six months or a year? The biggest change is that it's available to everybody.
It used to be more of a privileged thing, where only certain products had the capability to even build AI into their offerings, and now it's open and accessible to everybody. That is the shift that is bringing the new security challenges we're talking about, and the question of how we mitigate the risks of AI adoption within the organization.
It's a combination of both: it is beneficial for your organization's success, and at the same time it creates a set of risks. Clearly understanding those risks and implementing the processes and tools to address them is important for organizations to tackle the risk of generative AI, or any AI.
Mike: You obviously have people, process, and technology. When it comes to AI, in running your security program, are there certain areas where you've had to double down a little bit more?
Viswa: Yeah, it's a combination. The biggest challenge we face is the hype around what AI can do and what it cannot do.
Some of the interest you see from users is curiosity about what it is and how it can help in their day-to-day work; some of it is actually leveraging AI in their day-to-day work to bring efficiencies. So when your employees and businesses start adopting generative AI, or any kind of AI technologies, depending on the scope of the usage, we should always start with the policy. As a company, how do you want to use generative AI?
Security is just one part of AI's risks, but there are a lot of things beyond security: privacy issues, data security issues, the ethical use of AI. So as an organization, it's not one person's job to decide how you need to use generative AI.
You need to come together as an organization and arrive at a shared idea of how and where you want to use generative AI. At the end of the day, it all comes down to what we call controlled use of AI in the organization. That control is what you translate into a policy.
These become your policies around the use of generative AI, and then you translate those policies into processes: the standards, tools, and technologies that can support you in managing the risk and the responsible use of AI. This is an evolving technology, and there is always a human validation part to what we consume, because different studies have shown varying accuracy in what AI generates. So for anything you consume from AI, is there human intervention to make the judgment call?
That's one thing. The second, which is less spoken about right now, is: what is the impact of artificial intelligence on human intelligence? Are we going to rely too much on artificial intelligence, so that human intelligence starts diminishing because you don't have to think? If you need to write an email, you think about how you want to write it.
If you just go to generative AI, tell it to write the email, then copy, paste, and send it, is that helping you as a human?
Evan: That's right. I think there are some areas we probably have to give up on; our kids will not know how to use a phone book. Hopefully there's someone who knows how to use a library in the future, but we'll see.
Viswa: Yeah, I've listened to some podcasts about how students in universities are relying on generative AI to write their assignments. There are both sides of it: some people take the content and actually use their own intelligence to write, and in certain cases people just take the content and submit it as-is. So how do you balance that in an organization?
Evan: It's really hard because you don't know the intent behind it, right? It's the same thing with, say, Stack Overflow in software development: it's an incredible tool to inspire you and help you understand how to do things that would be kind of silly to figure out yourself when the answer is obvious. But if you're copy-pasting stuff off the internet into your code base, you may have a major problem. So it's hard to know which is which.
Viswa: That's right. Developers used to learn languages, then go search for how to write a piece of code; Google helped us over many years to troubleshoot and all those things. Now, if you just go to generative AI and say, hey, write this piece of code for me, and you take it and use it in your products, that creates another software supply chain issue.
Evan: Viswa, one thing you said, which I totally agree with, is that AI technologies have been around for a while; they're now just more accessible. I also think part of what's exciting about AI at the civilization level is that it allows more people to do more things.
There's someone out there who maybe doesn't know how to query a database, but can use these AI tools to accomplish the task or do the work without needing the same technical skills. I have to imagine that for all the reasons we're excited about the world being more productive with AI tools, the same probably applies to criminals as well. I'd love to hear how you envision that, and what you worry about, as criminals start getting access to more of these tools.
Viswa: Yeah, it's the same kind of advantage, or disadvantage, on the other side of it, with criminals.
Do you have to be deeply technical in order to target an organization, or can you just rely on generative AI tools to make it much easier? Is that going to increase the number of threat actors, or the number of attacks any organization faces today? Some recent studies are actually showing that after the introduction of generative AI, there is a definite increase in certain types of threats that organizations are seeing. One common one is phishing: threat actors have become more innovative in how they can use generative AI to target organizations with phishing attempts that can bypass traditional email security controls.
On the other side, I've seen references to threat actors creating their own tools on the deep and dark web based on generative AI; one name I've heard is WormGPT. And I've come across references where people are developing exploit tools based on generative AI.
They don't have to think about how the exploit works. All you need to do is say, here's the input, go and exploit. So does it make it easier for somebody to target an organization?
Mike: So let's flip back to the good guys. What are some results you've seen from AI technologies in cybersecurity that most people might be surprised to hear, or where we maybe underestimate the power of AI to fight cyber crime?
Viswa: Yeah, as Evan said, productivity and efficiency. In this age, the speed at which you respond to cyber attacks is definitely important; this is what we call defensive AI. Where do you want to use AI in defending your organization? Especially around how you analyze incidents, and around collecting and analyzing threat intelligence.
Where there used to be some kind of human validation, you can leverage AI to generate patterns that help you quickly do your analysis and respond to the threats you're actively facing in the organization on a day-to-day basis. It definitely brings efficiency.
Security automation has been there for many years; we used to automate trivial tasks in security operations and other areas within cybersecurity. With AI we can go one step further in how we analyze data across multiple patterns.
Now, if you look at threats, they've mostly moved to behavioral patterns. User behavior analytics is the biggest challenge we face: what are your users' behaviors, and how can AI interpret those behaviors in a much better way, so it gives us better early warnings that something is going wrong, and we have appropriate time to go back and react to the threats we're facing?
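The early-warning idea Viswa describes can be illustrated with a toy baseline model: learn each user's normal activity level, then flag strong departures from it. A minimal sketch using synthetic daily login counts (production UBA systems use far richer features and learned models, not a single z-score):

```python
import statistics

def flag_anomaly(history, today_count, z_threshold=3.0):
    """Flag today's activity if it deviates strongly from the user's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # avoid division by zero on flat history
    z = (today_count - mean) / stdev
    return z > z_threshold, round(z, 2)

# Two weeks of daily login counts for one user (synthetic baseline).
baseline = [4, 5, 3, 6, 4, 5, 4, 5, 6, 4, 3, 5, 4, 5]

print(flag_anomaly(baseline, 5))   # a typical day: not flagged
print(flag_anomaly(baseline, 40))  # a sudden spike: early warning
```

The design point is the one from the conversation: the model compares a user against their own history rather than a fixed rule, which is what gives defenders time to react before a fixed threshold would ever trip.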
Again, this space is evolving. As more capabilities are built around generative AI and large language models, we'll see more and more use cases coming up that can really help us. At the same time, we have to be careful how we use generative AI, even for defensive purposes.
As an organization, we have to build our own private instances where we can freely use our organization's data, without it going beyond our boundaries, to really harness the capabilities of generative AI, rather than encouraging people to use the publicly available generative AI instances. That is where the biggest risk is for organizations.
The more your employees adopt those public models, the riskier it becomes. It all comes down to the availability of these technologies within your organization. If you have your own instances available for people to learn with and develop something within the organization, they'll lean toward the technology stack within your organization rather than going out and trying to find something somewhere else.
Evan: Certainly AI will have a bigger impact on how we do security operations and incident response; more data will be analyzed, and more judgments will be made by AI. What are some of the areas where you see less opportunity — where, looking ahead, AI may not have as big of an impact as we might hope?
Viswa: Yeah, it depends on how AI is going to evolve in the future, and on the regulations that are going to come in around the responsible or controlled use of AI within organizations. How that limits the adoption of the technology is something that will drive the future.
At least we now know what it's capable of. It depends on what you feed to the large language models; it's not something that can come up with its own intelligence. You're feeding data to the large language models, and they're helping you with more contextual information.
Now, how is this data going to evolve? At the same time, we've seen a lot of attacks on the AI models themselves, where people try to attack the model so that it generates false information. So securing these large language models is also going to be a challenge in the future for the organizations that are developing and maintaining them. How all these things come together will shape the future of AI. But one thing it's not going to do, at least, is replace human intelligence. It can help people from a productivity and efficiency standpoint and make things simpler, but is generative AI going to completely replace human intelligence? It's not going to replace a resource. Maybe you'll have resources who know how to use artificial intelligence models for the benefit of your business.
Mike: In the context of the cybersecurity workforce, what impacts do you think AI will have on your team over time? And, as someone who hires cybersecurity people, are there different things you might look for in candidates with regards to AI?
Viswa: Yeah, technology adoption is easy; what we really need is people who are skilled in cybersecurity. We have AI today, and maybe in the next two years we'll see a completely different technology that's revolutionary, or better than AI.
So what's important is foundational knowledge and expertise in security. Learning a tool is a matter of time; it's not going to take too long. What I look for in people is more about adaptability: what skills they have, how they're going to adapt to the new technologies and tools that come in, and how they're going to help me evolve my cybersecurity program for the future as the threats change and your business's technology adoption changes.
That's what I'm looking for. I don't think I'll be looking specifically for someone who knows how to use AI for security; rather, I look for someone who knows security, or is eager to learn security, and will eventually end up learning the tools that we use.
Evan: That's probably good general advice that applies to most organizations, so appreciate you sharing; a good reminder for myself. Okay, Viswa, I know we've only got five minutes left. What I like to do at the end of these episodes is a quick lightning round with a handful of questions, just looking for the punchy, one-tweet version.
So, Mike, you want to fire off the first one?
Mike: Sure. What advice would you give to a security leader who just stepped into their first CISO job? What would they maybe overestimate or underestimate about the job?
Viswa: What I personally learned when I stepped into the CISO role is the importance of understanding the environment and the business you're in, and the associated risks.
That's the key thing. Taking enough time to understand the environment and its complexities is going to help you succeed in building a better strategy for how you can defend and protect the organization. There's no one size fits all.
Evan: One thing that's always impressed me about you is that you're surprisingly up to date on technology trends, at the next level of detail. What would be your advice for your peers on the best way to stay up to date with the newest security challenges, technologies, and risk areas? How do people stay current?
Viswa: Stay close to your team.
Evan: Oh, that's a great answer.
Viswa: Yeah, they're the ones who are working in and out, defending the organization. The way I work, I'm part of them. Once you stay close to your team and understand the challenges they're facing, and how they're overcoming those challenges — the technology, the threats, all those things — you'll eventually become knowledgeable in that space.
Once you disconnect yourself from the team, that's when you have to rely on someone else to gain that knowledge.
Evan: Is there any podcast, blog post, or tweet — something that's had an impact on you or your leadership, that has stuck with you?
Viswa: I follow a lot of podcasts. One specific one is the CISO Series Podcast, which is quite popular. It gives a perspective on how my peers in the industry are tackling the challenges we face on a day-to-day basis, and what I can learn from them. Beyond that, I keep listening to a lot of other podcasts; some of these are more technology-focused rather than security-focused. That's maybe one way I can help my team: if they run into any challenge, they feel that I have their back from a technologist's standpoint.
I can give them ideas on how to overcome those challenges.
Evan: Viswa, thanks so much for taking the time to speak with us today. As always, it's great to chat with you, and I really appreciate you sharing your views of the world.
Viswa: Thanks, Evan. It's a pleasure talking to you and Mike.
Evan: That was Viswa Vinnakota, Chief Information Security Officer at Xerox.
Mike: Thanks for listening to the Enterprise Software Defenders podcast. I'm Mike Britton, the CISO of Abnormal Security.
Evan: And I’m Evan Reiser, the CEO and founder of Abnormal Security.
Mike: Please be sure to subscribe so you never miss an episode. You can find more great lessons from technology leaders and other enterprise software experts at enterprisesoftware.blog, and hear their exclusive stories about technology innovations at scale.
Evan: This show is produced by Josh Meer. See you next time.