Enterprise AI Team

Beyond the Firewall

October 23, 2025

Tackling Cybersecurity as AI Rises

David Sherry’s time at Princeton started with a philosophical question: "Is it better to centralize all the IT and security in one giant entity or to keep it distributed out in the different academic departments and schools?" When he took over as Chief Information Security Officer, he pushed the university to confront a structural question facing higher education cybersecurity as a whole. That question has only gained urgency as new digital threats and technologies, like generative AI, reshape how institutions protect themselves.

Sherry isn't just managing firewalls and endpoints. As CISO of one of the world's top academic institutions, he steers a cybersecurity ship through seas of open intellectual exploration and decentralized autonomy. That environment, while academically enriching, makes enforcing traditional cybersecurity models far more complex. "We have research on really cool, life-changing, and world-changing decisions that could be made, which makes it a really complex environment," Sherry says. "But the thing that differentiates it is the collaboration amongst higher ed."

Shadows of Legacy Thinking

Unlike corporate IT environments where command structures and policies can cascade top-down, universities often operate in federated or distributed IT models. That structure was built over decades, sometimes out of necessity, sometimes from academic independence. But it has left institutions like Princeton balancing freedom, control, flexibility, and security.

As Sherry puts it: “We have our administrative network and the research network, which are really highly controlled and secured, but still have a lot of flexibility and a lot of freedom to them. We just need to make sure that on the last three of the cybersecurity framework (detect, respond, recover), it is a lot easier here and a lot more important.”

Traditional approaches focused heavily on perimeter defense: trusting what was inside and mistrusting what was outside. That binary has eroded. Today’s threats, from ransomware actors to phishing campaigns, exploit internal trust as readily as external vulnerabilities.

Legacy models also didn’t anticipate today’s threat actors: “We all have really good endpoints. We all have a good portfolio of firewalls, intrusion detection, automatic DDoS failover, and the criminals know that. So they move away from that, and they go right after the ultimate endpoint: the human.”

Even in well-resourced environments like Princeton, older mindsets had to be challenged: “That's an area where AI and ML have to help us, because there's almost no defense against that until after it's over.”

Building Trust Amid Decentralization

For Sherry, success lies in relationships as much as in firewalls. His team doesn’t just enforce policies; they build bridges. “It's a really smart community. The students are really smart. The faculty are just amazingly brilliant, and I would dare say that the staff is right up there with them.”

That meant embedding security within departments, not policing them. It meant monthly town halls, newsletters, and informal chats with IT staff across campus. “When speaking to your boss, your peers, your cabinet, whatever, get rid of the old fear, uncertainty, and doubt. It's just all about protection, value, and speaking with data.”

New Frontier, New Risks

Sherry isn’t blind to the disruptive potential of artificial intelligence. He sees it coming fast. “We can't keep up with the threats. I tell the boss all the time: if an attacker has the right time, motivation, and resources, they will hit us. They will get through to us. We can't keep up in that regard,” he said. And that, for security teams, is both exhilarating and terrifying.

On one hand, AI tools can supercharge threat detection. On the other, they can supercharge phishing, impersonation, and misinformation. “When we first heard about ChatGPT, the buzz around the university was, ‘Wow, what's this gonna do to admissions essays?’ And then I'm the crazy guy sitting at the end of the table saying, ‘What if somebody uses ChatGPT to unleash a worm that can't be detected?’ I'm a little bit concerned about the negative uses of it.”

Princeton hasn’t banned tools like ChatGPT. Instead, the university is asking deeper questions, educating the community on responsible use, and setting boundaries without stifling innovation. The goal isn’t prohibition but ensuring these tools are used productively and safely on campus.

A Playbook for the Sector

Sherry’s advice to other CISOs in higher ed? Rely on your team. “Don't overestimate that you need every tool under the sun. Hire good people that think really well. I look for good thinkers, better than great technologists. I can teach somebody the technology, but not thinking that a tool is going to save you.”

“Just keep learning, learning, learning. There's so many paths and so many avenues, but the people who step up in the CISO role, especially from someone with a technology background, they gotta learn the business acumen, too.” What emerges from Sherry’s philosophy is not just a cybersecurity strategy but a leadership blueprint, one grounded in trust, flexibility, and foresight.

In a world of zero-days and zero trust, that human-centered lens may be Princeton’s most powerful firewall of all.