On the 37th episode of Enterprise AI Defenders, hosts Evan Reiser (co-founder and CEO, Abnormal AI) and Mike Britton (CIO, Abnormal AI) talk with Matt Posid, Chief Security Officer at KPMG US, about what it takes to protect a firm that operates as a trusted third party to many of the world’s largest brands. AI did not invent fraud, impersonation, or exploitation. It changed the math. In Matt’s view, the biggest shift is simple: “AI is really good at making people better, and it’s really good at making people faster.” That combination widens the pool of capable attackers and compresses the time defenders have to respond.
Matt’s current remit is intentionally broad. At KPMG US, he leads an enterprise security program spanning cybersecurity, insider risk, physical security, life safety, compliance, business resilience, and third-party risk. That consolidation was not cosmetic. “Threats are multi-domain, and we needed to pull everything together,” he explains, describing how security used to be “fractured” across the organization. In a services firm, the blast radius extends beyond internal systems. Client expectations and sector-specific obligations shape what “good security” must look like, and that creates a program that is “very complex” by necessity, not by accident.
From there, Matt frames the AI-era threat landscape in two layers. First, capability is becoming more evenly distributed. When “amateurs” can reach for tools that used to be confined to sophisticated operators, the list of threats worth taking seriously expands. Second, velocity becomes the enemy of normal operating rhythm. Most organizations have patch cycles and IT hygiene cadences built for yesterday’s speed. If vulnerabilities are exploited faster than that cadence can close them, defenders can find themselves “no longer fast enough to keep up with changes in the threat.” The result is a world with more shots on goal and less time between opening and impact.
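The cadence problem is easy to see with back-of-the-envelope numbers. The figures below are assumptions for illustration, not numbers from the episode:

```python
# Back-of-the-envelope illustration; the numbers are assumptions,
# not figures from the episode.
patch_cycle_days = 30       # assumed monthly patch / hygiene cadence
time_to_exploit_days = 5    # assumed days from disclosure to active exploitation

# Worst case: a flaw disclosed just after a patch window waits a full
# cycle for a fix, but attackers arrive on day 5.
exposed_days = max(0, patch_cycle_days - time_to_exploit_days)
print(f"Worst-case days exploited but unpatched: {exposed_days}")
```

Under those assumptions, a cadence that once felt responsible leaves nearly a month of exploited exposure per cycle; shrinking the exploitation window squeezes the defender, not the attacker.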
Then comes the part that security leaders often need to say out loud: the deepfake variant can be scarier to watch, but it does not automatically defeat a well-run process. Matt points to payment fraud and executive impersonation as a practical example. Organizations with robust accounts payable controls, pre-registered vendors, and verification steps are already building resistance to social engineering. “The controls we’ve had to protect against non-AI-based attacks are still, in many cases, effective against the AI-based variants,” he notes, because disciplined workflows do not care whether the prompt came from a convincing email or a “lifelike” deepfake video. AI increases the volume and believability of attempts, which makes compliance with the control more important, not less.
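To make that concrete, here is a minimal sketch of an accounts-payable gate. It is not KPMG’s actual process; the registry, threshold, and callback step are illustrative assumptions:

```python
# Hypothetical sketch of a payment-verification gate. The registry,
# threshold, and callback step are illustrative assumptions, not
# KPMG's actual controls.
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    vendor_id: str
    bank_account: str
    amount: float
    channel: str  # "email", "video_call", "portal", ...

# Pre-registered vendors and their verified bank details.
VENDOR_REGISTRY = {
    "vendor-001": {"bank_account": "GB29NWBK60161331926819"},
}

CALLBACK_THRESHOLD = 10_000.00  # amounts above this require a callback

def approve_payment(req: PaymentRequest, callback_confirmed: bool) -> bool:
    """Apply the same checks regardless of how the request arrived.

    Note that req.channel never enters the decision: unknown vendors
    are rejected, and changed bank details or large amounts require
    out-of-band confirmation no matter how convincing the request was.
    """
    vendor = VENDOR_REGISTRY.get(req.vendor_id)
    if vendor is None:
        return False  # not a pre-registered vendor
    if vendor["bank_account"] != req.bank_account:
        # Bank-detail change: only proceed after an out-of-band callback.
        return callback_confirmed
    if req.amount > CALLBACK_THRESHOLD:
        return callback_confirmed
    return True
```

Because the channel never enters the decision, a lifelike deepfake video call and a plain phishing email face exactly the same gate; raising the believability of the request does not change the outcome, only the importance of actually following the workflow.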
KPMG’s internal approach to the generative AI wave also reveals a useful operating principle: defaulting to bans can delay learning. When many organizations were blocking tools outright, KPMG put a reminder “splash page” in front of those sites. The message was not “use this for client work,” but rather: experiment, learn, and understand what these tools do. That early exposure produced two outcomes Matt values: faster risk discovery and better governance design. In his telling, the governance question cannot be owned by security alone. KPMG developed an internal process to evaluate AI adoption across security, legal, ethical, and transparency risks. That effort ultimately became a packaged approach the firm could reuse, “the KPMG Trusted AI framework,” informed by its experience as “client zero” on its own controls and approvals.
Looking forward, Matt expects defenders to mirror attackers in at least one way: tool-enabled speed. “If the bad guys are using certain tools, the good guys probably have to also,” he says, especially to help analysts move up the stack from manual review into higher judgment work. But he is equally clear on the boundary conditions for autonomy. AI is “awesome,” not magical, and the right question is which decisions are low-risk enough to delegate, just as security teams already scope authority differently across L1 analysts, incident response leaders, and executives. The durable takeaway is not “buy more AI.” It is to tighten fraud-prevention workflows, invest in adaptable talent, and build governance that lets you move faster without pretending risk disappears.
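To make that delegation boundary concrete, here is a purely illustrative sketch. The tiers and actions are assumptions for the example, not a model Matt prescribes:

```python
# Illustrative autonomy-scoping table: which actor may take which
# action without escalation. Tiers and actions are assumptions for
# the sake of the example.
from enum import IntEnum

class Authority(IntEnum):
    AI_AGENT = 1     # automated triage and enrichment
    L1_ANALYST = 2   # routine containment
    IR_LEAD = 3      # disruptive response actions
    EXECUTIVE = 4    # business-impacting decisions

# Minimum authority required to execute each action unilaterally.
REQUIRED_AUTHORITY = {
    "enrich_alert": Authority.AI_AGENT,
    "quarantine_email": Authority.AI_AGENT,
    "disable_user_account": Authority.L1_ANALYST,
    "isolate_production_host": Authority.IR_LEAD,
    "notify_clients_of_breach": Authority.EXECUTIVE,
}

def can_execute(actor: Authority, action: str) -> bool:
    """Low-risk actions are delegated; high-impact ones escalate."""
    return actor >= REQUIRED_AUTHORITY[action]

assert can_execute(Authority.AI_AGENT, "quarantine_email")
assert not can_execute(Authority.AI_AGENT, "isolate_production_host")
```

Under a mapping like this, routine triage is delegated outright while high-impact actions still escalate to a human, which is exactly the “move faster without pretending risk disappears” posture.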