CISO Blog

Secure AI in the Hands of 200,000 Users

Lester Godsey
November 5, 2025

On the 33rd episode of Enterprise AI Defenders, hosts Evan Reiser (CEO and co-founder, Abnormal AI) and Mike Britton (CIO, Abnormal AI) talk with Lester Godsey, Chief Information Security Officer at Arizona State University. ASU is not only the largest public university in the country but also one of the most ambitious adopters of generative AI, granting platform access to over 200,000 students, faculty, and staff. With that scale comes a need for disciplined governance and imaginative risk management, and Godsey is at the center of both.

ASU's security strategy is a blend of openness and orchestration. "We just announced an agreement with OpenAI to give ChatGPT access to all faculty, staff, and students," Godsey explains. But they did not stop there. His team also built a private, in-house platform supporting over 60 large language models, both commercial and custom, within a privacy-first walled-garden environment. This internal tool is fueling over a thousand AI-driven projects, including a security bot trained to answer compliance questions based on ASU's latest policy updates.

Managing this level of innovation requires more than just perimeter defense. Godsey and his team are standing up an AI governance framework and conducting tabletop exercises to pressure-test scenarios involving AI misuse. "We are treating this just like any major tech shift, whether it's cloud, virtualization, or the early internet," he says. The foundation is familiar: data classification, third-party risk assessments, and scenario planning. What is new is how quickly the threat landscape is morphing.

Godsey highlights prompt injection as a top-tier concern, likening it to "the new flavor of SQL injection." While most attacks have not changed fundamentally, they are now deployed faster and more effectively. He recalls seeing deepfake content as early as 2020 during his tenure at Maricopa County. "Back then, it wasn’t believable. In 2024, nobody’s laughing."
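The SQL injection analogy can be made concrete with a toy example (the schema and payload below are invented for illustration). The classic vulnerability and its fix both hinge on separating code from data; prompt injection is dangerous precisely because LLM prompts offer no equivalent of a parameterized placeholder.

```python
import sqlite3

# SQL injection: untrusted input is concatenated directly into the query.
def lookup_unsafe(conn, username):
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{username}'"
    ).fetchall()

# The decades-old fix: a parameterized query keeps data separate from code.
def lookup_safe(conn, username):
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "admin"), ("bob", "staff")])

payload = "alice' OR '1'='1"
print(lookup_unsafe(conn, payload))  # the always-true OR clause leaks every row
print(lookup_safe(conn, payload))    # empty: the payload is treated as plain data

# Prompt injection is analogous, but there is no "?" placeholder for prompts:
# trusted instructions and untrusted text travel in the same channel.
prompt = f"Summarize this document:\n{payload}"
```

The last line is the crux of Godsey's comparison: because instructions and attacker-controlled content share one text channel, the structural defense that killed most SQL injection does not yet exist for LLMs.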

Despite growing risks, Godsey sees AI as a long-overdue equalizer in cyber defense. He is running a project to train a model on both reliable and misleading threat intel sources, aiming to build a high-fidelity filter that separates trustworthy security signals from noise before they are ingested. The goal is to help smaller teams act with confidence. "AI can finally democratize threat intelligence," he argues, reducing reliance on six-figure vendor solutions.
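The filtering idea can be sketched with a miniature text classifier. Everything below is hypothetical — the sample feeds are invented, and ASU's actual model is certainly more sophisticated — but it shows the core mechanic: score incoming intel against corpora labeled reliable versus misleading.

```python
from collections import Counter
import math

# Toy labeled corpora standing in for rated threat intel feeds (invented text).
reliable = [
    "cve advisory patch released for authentication bypass",
    "observed c2 traffic to known botnet infrastructure",
]
misleading = [
    "shocking hack exposed click here now",
    "unverified rumor of massive breach shared on forum",
]

def train(docs):
    """Count word frequencies across a labeled corpus."""
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

def log_prob(doc, counts, total, vocab_size):
    # Additive smoothing so unseen words don't zero out a score.
    return sum(math.log((counts[w] + 1) / (total + vocab_size))
               for w in doc.split())

r_counts, r_total = train(reliable)
m_counts, m_total = train(misleading)
vocab_size = len(set(r_counts) | set(m_counts))

def looks_reliable(doc):
    """Naive Bayes-style comparison: which corpus better explains the text?"""
    return (log_prob(doc, r_counts, r_total, vocab_size)
            >= log_prob(doc, m_counts, m_total, vocab_size))

print(looks_reliable("new cve patch advisory"))      # True
print(looks_reliable("click here shocking breach"))  # False
```

A real deployment would use an LLM or a trained classifier over far richer features, but the payoff Godsey describes is the same: a cheap, automated first pass so small teams aren't paying vendors just to triage signal from noise.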

ASU is extending that vision to students. Godsey describes an upcoming AI-focused cyber hackathon with tracks ranging from IoT security to social engineering using generative AI. "This is the place to learn, to play, and to develop the skills needed to be a cybersecurity practitioner in the future."

Whether it is LLM innovation, insider risk modeling, or balancing openness with safety, Godsey’s approach underscores a core belief: cybersecurity does not stifle innovation, it enables it. "We are part of an organization focused on having a societal impact. That mission outweighs all the complexity."

Listen to Lester’s episode here and read the transcript here.