On the 31st episode of Enterprise AI Defenders, hosts Evan Reiser, CEO and co-founder of Abnormal AI, and Mike Britton, CIO at Abnormal AI, sit down with Sue LaPierre, Head of IT Governance & Information Security Officer at Prologis. With 1.3 billion square feet of logistics real estate and $3.2 trillion in goods moving through its facilities each year, Prologis faces extraordinary stakes. Sue shares how her security team is responding proactively, embedding AI safely across the business while running simulated deepfakes to stress-test resilience. She explains how zero trust begins with human awareness, how internal AI policies enable secure transformation, and why cybersecurity today demands curiosity, not fear.
Quick takes from Sue:
On zero trust being human-first: “We have to also think about zero trust, not only on the technical side, but on the human side.”
On internal GPT adoption and focus: “At the very beginning, we locked down all AI, except our internal AI, PrologisGPT. That helped to focus people into one vector.”
On simulating deepfake attacks: “We actually hired a third party that created deepfakes and targeted a variety of individuals. We wanted to test if someone fell for it, would they fall all the way? Or do we have defenses that would stop it?”
Recent Book Recommendation: Turn the Ship Around! by L. David Marquet
Evan (00:00): Well Sue, thank you so much for joining us today. Mike and I are really looking forward to having you on the show. Maybe just for a bit of background, can you give our audience some context about what you do today and how you got there?
Sue LaPierre (00:10): I am Sue LaPierre. I'm the CISO at Prologis. I've been with the company for 11 years; it'll be 11 years in September. I'm responsible for information security as well as IT governance, risk & compliance, and IT vendor management.
I did not actually study at university for information systems, or security, or anything like that. I actually got a liberal arts degree and ended up going into financial services. I worked in back offices, a variety of different departments, including corporate training and things like that. So, I got to know the business and how companies work, especially financial companies. And so, on my journey, I ended up getting into business continuity and disaster recovery. I had to come up with all kinds of bad things that could happen to an organization, right? It's all about business resilience.
And as I was creating different scenarios, and tabletops, and all of that, one of the situations that could go bad is any kind of information security, or a breach, or fraud, or any kind of a cyber attack. And so, I just started learning a little bit more about that, and that was kind of my foray from the business side over into the technology side. So, I definitely didn't come up on the technology route.
Evan (01:42): Cyber security is a really hard job, right? There's lots of easier jobs out there where criminals aren't trying to mess with your business and you'll get woken up in the middle of the night. What inspires you? What motivates you? Why do the hard job? There's easier stuff. You've been a great professional for a long time. What keeps you motivated today?
Sue (02:02): Every single day is different. Every day. And so, if I had to have a job where I was doing the exact same thing every single day, I would be bored silly. And I really enjoy working with our business. I enjoy working with the technology group. And I've got that protection gene in me to make sure that everyone is safe. And so, that's one of the things: that I wanted to continue making sure everyone is safe. Same thing with cyber, as well.
Mike (02:41): So, you've been at Prologis for quite some time, which is definitely admirable. I'm sure you have lots of interesting context into various facets of your business, but to help frame our conversation with our audience, maybe you can give us some more insight into what exactly Prologis does, and some of the breadth of the company's operations?
Sue (03:02): It's pretty exciting, actually. We actually enable the modern economy. We have 1.3 billion square feet of industrial real estate in 20 countries. We have 6,500 customers. We also have data center, energy, and mobility businesses included in all of that, too. So, it is really fascinating to learn that, I think it's, $3.2 trillion worth of goods flow through our facilities every year. It's just mind boggling to me. So, I'm very proud to be here at Prologis.
Evan (03:45): What are some unique threat vectors or cyber considerations that are unique to your business that the average listener wouldn't quite fully appreciate? We have to imagine there's some sophisticated things you guys have to think about and consider that are beyond the conventional responsibility scope of the average cybersecurity team.
Sue (04:05): So, I would say that in security right now, the big thing is zero trust. Everyone's on a zero trust journey. And we, of course, have been for a couple of years.
I think we have to also think about zero trust, not only on the technical side, but on the human side. So, we have to also raise the awareness with our employees that zero trust is not just technical. It's an interpersonal kind of thing. And that whole “don't trust anyone,” I hate to say that because I'm a really positive person, but I have to be training and creating awareness with our employees: “Hey, you know, don't trust anyone.”
Evan (04:48): I recently met with Sineesh Keshav, who you obviously know as your CTO, but our audience may not know that. One thing that I found really interesting was that he talked about having 95% company-wide adoption of Prologis’s internal GPT, which is really impressive. I meet with a lot of CIOs, and I think most are still trying to figure out how to get something out, let alone get to 95% adoption.
Do you mind sharing a little bit about how those AI tools are being deployed? What success have you had? And what would be interesting for this audience is: how do you think about securing those? Because you guys are quite ahead of some of your peers there.
Sue (05:22): So, we went all in, as Sineesh already talked about. And so, our AI team has done a tremendous job of actually doing some video shorts to help people. We put it on our internal Viva Engage, so that you can get just small snippets of: this is how you do this. And “Hey, if you want to know how to do this, this is how you do it.” And they periodically send out a new one; it feels like almost every week. And it's really gotten a lot of engagement, a lot of enthusiasm. People are really wanting to learn how to do it because, of course, not everybody wants to do a rote task all the time, right? And so, if you can automate that, why not? You can save time and then you can spend your time on things that are more interesting, things that you wanna be doing, and things that will give more value.
It was a top-down initiative. Our senior management, they're all in. Our research team is using it to the fullest. It's pretty exciting. And people are sharing. That's how our employees have created 900-plus GPTs for different use cases. And they're also sharing those.
We do have an AI committee that looks at these. And so, that's why we have 14 enterprise-wide GPTs that can be shared across the enterprise. So it's very exciting.
Mike (07:00): I'm curious, because we have that same top-down approach here from an AI transformation perspective, but was there an “aha moment” where it just clicked with the general population of employees at Prologis? Where it went from something scary, something new, to all of a sudden people really embracing it? Was there any defining moment there? Or did people just start seeing the use cases and benefits of it?
Sue (07:25): At the very beginning, from a security standpoint, we locked down all AI except our internal AI, PrologisGPT. And that helped to focus people into one vector. And then we had people sharing with others. And what's also really wonderful, and I have to say that our head of AI gave me this wonderful tip about: if you don't know what to do or how to do it next, ask your AI. Ask PrologisGPT. And guess what? It's going to tell you exactly what to do. It's like, “Oh, yeah. Of course.” Right? And so, when that message gets out, people become less afraid.
Of course, there's always that trepidation, I guess, at the very beginning. So, a year ago, people were like, “Well, can I ask this? Who's going to see it? Who's not going to see it?” Those kinds of things. And knowing that our data is not being used outside of our instance really gives you a little bit more of a comfort level, so that people are using it more and more.
Mike (08:35): How do you balance and weigh the risk and security of transformation and additional use cases, while also protecting sensitive data and protecting the organization?
Sue (08:48): Well, that's the thing, is that, from a governance standpoint, we put together an AI policy to say what you should be using this for, what you shouldn't be using this for. Being very careful about newly generated data and what that can be used for and not. So, we did that at the very beginning. We had that policy out when we were deploying it.
However, things change, and so we have to keep looking and making sure that what we're generating is still in alignment with the privacy policies across the 20 countries that we're in, as well.
Evan (09:36): How do you get appropriate security around some of these AI tools and AI agents? What I think all three of us have seen is that the productivity gains are off the charts. There's so many cool things. I remember when Sineesh was on my other podcast, he talked about automated site selection, self-optimizing logistics, and universal visibility. There's a great vision and dream there for what's possible.
I think other companies, and I've heard other CIOs and CTOs talk about this: one of the challenges across CISOs is that we can all see the wave, the big needs, and the big appetite for all these AI agents everywhere. As the productivity and efficacy and efficiency wins increase, so can the risk appetite of the business.
So, what's your advice to other CISOs who are thinking about how to balance there? Where is it appropriate to take risks and where do you need to kind of slow down? I think one unique thing about you is, you guys have done a lot of this transformation ahead of the tools being available. There's not a one click AI agent security wizard that you can buy from the next great startup.
How do you guys balance that risk? What do you think is really needed for some of your peers to help them accelerate through that transformation in the future?
Sue (10:49): I'm glad you brought that up, because you talked about tools. That's actually where we're focusing our energy right now. Lots of different products and services that are coming in are saying, every one of them, that they have AI. “We have AI! We have AI!” But you have to dig in deeper. If you're bringing in a tool or some kind of service (and actually, they don't even have to say AI), you have to be looking to see if there is AI in those tools and how it's being used, how your data is being used. Where's the data ownership? Before you even do a POC or sign a contract or anything like that, you need to understand how that tool is using AI and what the impact to your organization is.
So, that's a big worry for me right now because sometimes, you've got different business departments that are saying, “Hey, this is a shiny object. We love this! This is going to be great! This is going to change our lives! And it's got AI!” So, you have to bring it in and say, “Let's look through this and let's make sure that it works within our organization. We know where the data ownership is.” Or, “It's not going to work. You know what? We reject this.” And sometimes, we have to do that.
Evan (12:16): How do you tell what's real and what's BS? How do you separate the real impact from the marketing nonsense? Are there any key questions? What would be your advice to your peers out there?
Sue (12:30): Ask the right people the right questions. And when I say that, it is nothing against products or companies that offer products or anything. But the salesperson is not necessarily the person you want to believe. You want to actually get to the technical person and ask them about it. And if you're not technically savvy as a CISO, I'm sure you have lots of individuals that are smart and very, very technical.
Bring them in. Have them ask those questions about: how is this being used? How is it connecting? What data is it using? What data is it taking? How are you using our data? Are you training your particular AI model? You have to get down to the meat of actually what's happening.
But, I also have to say, on the flip side, that, for me, I'm looking for AI security products to identify AI. So, the bad actors are using AI, right? And, as you know, the humans on our SOC, and anyone's SOC, are very lean teams. We can't go up against AI bad actors or the AI that bad actors are using. So, we need to actually find those new security products that are using AI to detect AI. And that's where we're going to make a little bit of a foray into stopping them, or at least slowing them down. So, that's actually one of the areas that I'm looking at right now: trying to find AI products to identify AI.
But, I say that also with the caveat that that doesn't mean that you have to give up on all of your cyber hygiene, because the bad actors are going after those vulnerabilities that have always been around. It's just that they're doing it faster. So, you still have to do that foundational cyber hygiene, as well as looking forward and finding that AI.
Evan (14:51): You're calling out some real problems, right? You have non-criminals who can now become criminals thanks to AI support. You have AI increasing the volume, rate, and scale of attacks. Some of these attacks are AI-automated in different ways. They're happening at machine speed. They're hard for us to defend against.
You guys have seen a lot of very good positive usage of your team using AI. What about on the threat landscape? How are criminals adopting these technologies? What do you worry about as the average criminal starts incorporating some of these generative AI tools into their arsenal?
Sue (15:26): I go back to deepfakes again, right? I mean, that's kind of the thing that gets a lot of the news headlines. And I think there was one earlier this year where a finance director got duped because it was a Zoom call. I think there's been a couple of those, actually, and there've been big-money ones. But, the problem is that, I mean, that made the news, but there's a lot of other things that are not making the news. And those are the ones that I'm worrying about: what is out there that I don't know about, that I haven't seen yet?
And so, I have to put my business continuity hat on and go, “Okay. What all could happen to us? What are all the bad things, so that we can try to think ahead and figure out where they could go next? What steps might they take?” And, right now, I hate to say it, but social engineering is an easy target.
Again, those phishing emails are quick, they're easy. It's a click of a button and you send out millions of them. And if you get one hit, if one person falls for it, that can take down an organization. And unfortunately, we've seen that in the news, as well as with our vendors and suppliers. Some of our smaller ones are also getting targeted with little things, like invoices, and fraud, and things like that. It's all social engineering. It goes back to that. And that's where I really want to focus our attention and make sure that people understand and they don't trust anyone.
Mike (17:10): I'd love to understand exactly how you're evaluating these new threats and these new risks. It always comes down to people, process, and technology. So, I'm assuming there are some processes that need to change. There's obviously new technology that needs to come in to solve the problems. And, honestly, awareness: you mentioned deepfakes, and we're probably rethinking how we do awareness, its frequency, timing, and content.
How are you picturing these areas from your vantage point and from what you're seeing?
Sue (17:44): So, what we chose to do: we're 100% cloud. And so, when we do our pen tests every year, we have to think about what we want our third party to target for us, because we don't have that perimeter of the traditional organization anymore. And so, what we chose to do this year was actually hire a third party that went out and created deepfakes and targeted a variety of different individuals within the organization. They went after our help desk, they went after our senior managers, they went after admins, and other individuals, also. Different departments and different things.
And we wanted to do that to really test ourselves, to see: do we have the security tools and defense in depth in place so that if somebody actually did fall for an impersonation or a deepfake, would they fall all the way? Or do we have additional defenses, different measures in place, that would stop it at some point in the process? And so, that was quite eye-opening. It was really interesting. This organization helped us do different deepfakes, and in one of them, a senior leader actually got a phone call that was an impersonation of Sineesh. And as it went on, you could tell (because I have the recording) that the senior leader was like, “Wait a second, this doesn't sound right.” And at one point he made a decision: “Okay, wait a second. I don't believe this.” And so, he took the perfect step, and he said, “When was the last time we saw each other?”
And, of course, the deepfake wasn't able to answer that. And he immediately was like, “This is a fake. Hanging up.” And so, the more that we can do those kinds of things, the better. He's doing a testimonial for us in October, Cybersecurity Awareness Month, and sharing those stories: “This is what happened to me.” And these are really good tips for what to do if you get into this situation.
So, I think that the more that we can share and talk to each other, the better.
Evan (20:32): The threat landscape is changing. The IT architecture is changing. The security architecture is going to change. When you think about, kind of, the internal organization and operations of a security team, how do you think those evolve in the age of AI?
Do you imagine new roles, new teams? Are there new tools that we're gonna need in three years that we can't imagine today? What's your vision of the future of how AI affects the organization and operations of a security team a couple years down the road?
Sue (21:00): Well, I think that some folks in the news are scared that, “AI is going to take my job,” or something to that effect. You know what? It might take a piece of your job, again, talking about those rote tasks. But, I don't think that the humans will be out of it, that you will always need a human, especially on the security side.
Sometimes, there's such nuances when you're investigating and you're doing threat hunting that you can be utilizing AI to help you, but you still have to be the one to say, “Does this make sense? What am I missing on this?” And be able to kind of put some pieces together.
So, what we've been doing here at Prologis is looking to see about, when we have a new position, backfilling a position, or something like that, we have to look at it and say, “Okay, of this position, is there a piece that could be AI?” And if there is, that's great. Let's bring in someone that has that experience, so that they can get that going. And then they can do even more things. So, we're looking at it as, we're embracing AI and saying, “Okay, we can use it for this.” But there's so much more that we can do by not having to do some of those other tasks. So, we really are not going down the road of, “it's going to take all our jobs,” because it's not.
Evan (22:40): So, at the end of the show, we like to do a quick lightning round. And this is where we ask impossibly difficult questions that are hard to answer in a one-tweet format. But, we’re looking for the one-tweet answer. We'll be kind to you, since you’ve been such a great guest so far, and only ask you four or so, instead of a thousand.
But Mike, do you want to kick it off for us?
Mike (23:00): Sure. So Sue, what's the one piece of advice you would give a brand new CISO stepping into their very first job? Maybe something they would overestimate, underestimate?
Sue (23:09): Building relationships with your business. And I'm gonna add on to that; the second piece is for anyone. It could be a new person or a CISO, either one. Be curious. And when I say be curious, I'm not just saying ask questions. I'm saying be curious to try to understand, and keep on being curious.
That's really, really important. Actually, if you're not curious, you can't be in a security role. It's just the way it is.
Evan (23:40): So you're obviously very up to date with the latest technologies and trends. What's your advice for the best ways for CISOs to stay up to date on the latest technologies?
Sue (23:50): A variety of different ways. Don't just choose one way. You need to be looking at what is out there, what's happening in the security product market. Talk to your fellow CISOs and also your trusted technology partners. So, that could be a VAR, value-added reseller, or it could be a product that you've been using for years that you just say, “Hey, what are you seeing? What are you seeing some of your other customers doing?”
But, I would say listen to podcasts and also get some threat intelligence throughout the day. I mean, not a huge amount, but at least once a day you should be going through and seeing what's happening.
Mike (24:47): So, on a more personal note, what's a book that you've read that's had a big impact on you and why?
Sue (24:55): There's a book called Turn the Ship Around! I think it's Turn the Ship Around! by L. David Marquet, I believe. He was a nuclear submarine commander, and he was given the post to take the worst performing submarine and make it the best. And he did.
How he did it was that he bucked the system. In the Navy, it was traditionally a leader-follower model, and whatever the leader said, the followers did. And he just threw that out the window and said, “Okay, now we're going to do a leader-leader model.” And that is really great, especially in security, because you can't have the leader just telling the follower, “This is what you need to do.” If there is a breach happening, something going on, you need to have trust in your team to be able to make decisions when needed, and be comfortable and competent enough to make the correct decisions. And so, it's a leader-leader model, and I just really love that book.
Evan (26:13): What do you think will be true with the future of AI and security that most people would consider science fiction today?
Sue (26:19): A couple of things. I would say it's on both sides. The autonomous AI security analyst that can actually do that review, identify that, “Yes, this is a threat,” and then be able to take care of it and all of that. I think that that will happen. But, on the flip side, the autonomous bad actor also is going to be there, too. And it could just be one click and it goes. And you know, the bad actors don't have to do anything.
And unfortunately, I think both are real and are coming, which is good and bad at the same time. So, I'm not sure. I like to be optimistic, but that one's both optimistic and pessimistic at the same time.
Evan (27:13): Sue, really appreciate you joining us today. Mike and I have been looking forward to this episode for a while and appreciate you sharing your wisdom and experience with the world.
Sue (27:20): Thank you both, Evan and Mike. I appreciate it.