Cloud Security Podcast

Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!


Episode list

#261
February 2, 2026

EP261 No More Aspiration: Scaling a Modern SOC with Real AI Agents

Guest:

29:29

Topics covered:

  • We ended our season talking about the AI apocalypse. In your opinion, are we living in the world that the guests describe in their apocalypse paper?
  • Do you think AI-powered attacks are really here, and if so, what is your plan to respond? Is it faster patching? Better D&R? Something else altogether? 
  • Your team has a hybrid agent workflow: could you tell us what that means?  Also, define “AI agent” please.
  • What are your production use cases for AI and AI agents in your SOC?
  • What are your overall SOC metrics and how does the agentic AI part play into that?
  • It's one thing to ask a team "hey what did y'all do last week" and get a good report - how are you measuring the agentic parts of your SOC?
  • How are you thinking about what comes next once AI is automatically writing good (!) rules for your team out of research blog posts and TI papers? 
#260
January 26, 2026

EP260 The Agentic IAM Trainwreck: Why Your Bots Need Better Permissions Than Your Admins

Guest:

29:29

Topics covered:

  • Why is agent security so different from “just” LLM security?
  • Why now? Agents are coming, sure, but they are - to put it mildly - not in wide use. Why create a top 10 list now and not wait for people to make the mistakes?
  • It sounds like “agents + IAM” is a disaster waiting to happen. What should be our approach for solving this? Do we have one?
  • Which one agentic AI risk keeps you up at night? 
  • Is there an interesting AI shared responsibility angle here? Agent developer, operator, downstream system operator?
  • We are seeing a lot of experimentation with agents, but sometimes little value. What are the biggest challenges of secure agentic AI and AI agent adoption in enterprises?
#259
January 19, 2026

EP259 Why Google Built a Security LLM and How It Beats the Generalists

Guest:

29:29

Topics covered:

  • What is Sec-Gemini, and why are we building it?
  • How do we decide when to create something like Sec-Gemini?
  • What motivates a decision to focus on something like this vs anything else we might build as a dedicated set of regular Gemini capabilities?
  • What is Sec-Gemini good at? How do we know it's good at those things?
  • Where and how is it better than a general LLM?
  • Are we using Sec-Gemini internally?
#258
January 12, 2026

EP258 Why Your Security Strategy Needs an Immune System, Not a Fortress with Royal Hansen

Guest:

  • Royal Hansen, VP of Engineering at Google, former CISO of Alphabet
29:29

Topics covered:

  • The "God-Like Designer" Fallacy: You've argued that we need to move away from the "God-like designer" model of security—where we pre-calculate every risk like building a bridge—and towards a biological model. Can you explain why that old engineering mindset is becoming risky in today’s cloud and AI environments?
  • Resilience vs. Robustness: In your view, what is the practical difference between a robust system (like a fortress that eventually breaks) and a resilient system (like an immune system)? How does a CISO start shifting their team's focus from creating the former to nurturing the latter?
  • Securing the Unknown: We're entering an era where AI agents will call other agents, creating pathways we never explicitly designed. If we can't predict these interactions, how can we possibly secure them? What does "emergent security" look like in practice?
  • Primitives for Agents: You mentioned the need for new "biological primitives" for these agents—things like time-bound access or inherent throttling. Are these just new names for old concepts like Zero Trust, or is there something different about how we need to apply them to AI?
  • The Compliance Friction: There's a massive tension between this dynamic, probabilistic reality and the static, checklist-based world of many compliance regimes. How do you, as a leader, bridge that gap? How do you convince an auditor or a board that a "probabilistic" approach doesn't just mean "we don't know for sure"?
  • "Safe" Failures: How can organizations get comfortable with the idea of designing for allowable failure in their subsystems, rather than striving for 100% uptime and security everywhere?
#257
January 5, 2026

EP257 Beyond the 'Kaboom': What Actually Breaks When OT Meets the Cloud?

Guest:

29:29

Topics covered:

  • When we hear “attacks on Operational Technology (OT)” some think of Stuxnet targeting PLCs, or even the alleged backdoored pipeline control software plot from the 1980s. Is this space always so spectacular, or are there less “kaboom” style attacks we are more concerned about in practice?
  • Given the old "air-gapped" mindset of many OT environments, what are the most common security gaps or blind spots you see when organizations start to integrate cloud services for things like data analytics or remote monitoring?
  • How is the shift to cloud connectivity - for things like data analytics, centralized management, and remote access - changing the security posture of these systems? What's a real-world example of a positive security outcome you've seen as a direct result of this cloud adoption?
  • How do the Tactics, Techniques, and Procedures outlined in the MITRE ATT&CK for ICS framework change or evolve when attackers can leverage cloud-based reconnaissance and command-and-control infrastructure to target OT networks? Can you provide an example?
  • OT environments are generating vast amounts of operational data. What is interesting for OT Detection and Response (D&R)?
#256
December 15, 2025

EP256 Rewiring Democracy & Hacking Trust: Bruce Schneier on the AI Offense-Defense Balance

29:29

Topics covered:

  • Do you believe that AI is going to end up being a net improvement for defenders or attackers? Does the answer differ short term vs. long term?
  • We’re excited about the new book you have coming out with your co-author Nathan Sanders “Rewiring Democracy”.  We want to ask the same question, but for society: do you think AI is going to end up helping the forces of liberal democracy, or the forces of corruption, illiberalism, and authoritarianism? 
  • If exploitation is always cheaper than patching (and attackers don’t follow as many rules and procedures), do we have a chance here? 
  • If this requires pervasive and fast “humanless” automatic patching (kinda like what Chrome has done for years), will this ever work for most organizations?
  • Do defenders have to do the same and just discover and fix issues faster? Or can we use AI somehow differently?
  • Does this make defense in depth more important?
  • How do you see AI as changing how society develops and maintains trust? 
#255
December 8, 2025

EP255 Separating Hype from Hazard: The Truth About Autonomous AI Hacking

Guest:

29:29

Topics covered:

  • The term "AI Hacking Singularity" sounds like pure sci-fi, yet you and some other very credible folks are using it to describe an imminent threat. How much of this is hyperbole to shock the complacent, and how much is based on actual, observed capabilities today? 
  • Can autonomous AI agents really achieve that “exploit at machine velocity” without human intervention for the zero-day discovery phase?
  • On the other hand, why may it actually not happen?
  • When we talk about autonomous AI attack platforms, are we talking about highly resourced nation-states and top-tier criminal groups, or will this capability truly be accessible to the average threat actor within the next 6-12 months? What's the "Metasploit" equivalent for AI-powered exploitation that will be ubiquitous? 
  • Can you paint a realistic picture of the worst-case scenario that autonomous AI hacking enables? Is it a complete breakdown of patch cycles, a global infrastructure collapse, or something worse?
  • If attackers are operating at "machine speed," the human defender is fundamentally outmatched. Is there a genuine "AI-to-AI" counter-tactic that doesn't just devolve into an infinite arms race? Or can we counter without AI at all?
  • Given that AI can expedite vulnerability discovery, how does this amplified threat vector impact the software supply chain? If a dependency is compromised within minutes of a new vulnerability being created, does this force the industry to completely abandon the open-source model, or does it demand a radical, real-time security scanning and patching system that only a handful of tech giants can afford?
  • Are current proposed regulations, like those focusing on model safety or disclosure, even targeting the right problem? 
  • If the real danger is the combinatorial speed of autonomous attack agents, what simple, impactful policy change should world governments prioritize right now?
#254
December 1, 2025

EP254 Escaping 1990s Vulnerability Management: From Unauthenticated Scans to AI-Driven Mitigation

Guest:

  • Caleb Hoch, Consulting Manager on Security Transformation Team, Mandiant, Google Cloud
29:29

Topics covered:

  • How has vulnerability management (VM) evolved beyond basic scanning and reporting, and what are the biggest gaps between modern practices and what organizations are actually doing?
  • Why are so many organizations stuck with 1990s VM practices?
  • Why is mitigation planning still hard for so many?
  • Why do many organizations, including large ones, still rely on unauthenticated scans despite the known importance of authenticated scanning for accurate results?
  • What constitutes a "gold standard" vulnerability prioritization process in 2025 that moves beyond CVSS scores to incorporate threat intelligence, asset criticality, and other contextual factors?
  • What are the primary human and organizational challenges in vulnerability management, and how can issues like unclear governance, lack of accountability, and fear of system crashes be overcome?
  • How is AI impacting vulnerability management, and does the shift to cloud environments fundamentally change VM practices?
#253
November 24, 2025

EP253 The Craft of Cloud Bug Hunting: Writing Winning Reports and Secrets from a VRP Champion

Guests:

29:29

Topics covered:

  • We hear from the Cloud VRP team that you write excellent bug bounty reports - is there any advice you'd give to other researchers when they write reports?
  • You are one of Cloud VRP's top researchers and won the MVH (Most Valuable Hacker) award at their event. What do you think makes you so successful at finding issues? 
  • What is a bugSWAT?
  • What do you find most enjoyable and least enjoyable about the VRP?
  • What is the single best piece of advice you'd give an aspiring cloud bug hunter today?
#252
November 17, 2025

EP252 The Agentic SOC Reality: Governing AI Agents, Data Fidelity, and Measuring Success

Guests:

29:29

Topics covered:

  • Moving from traditional SIEM to an agentic SOC model, especially in a heavily regulated insurer, is a massive undertaking. What did the collaboration model with your vendor look like? 
  • Agentic AI introduces a new layer of risk - that of unconstrained or unintended autonomous action. In the context of Allianz, how did you establish the governance framework for the SOC alert triage agents?
  • Where did you draw the line between fully automated action and the mandatory "human-in-the-loop" for investigation or response?
  • Agentic triage is only as good as the data it analyzes. From your perspective, what were the biggest challenges - and wins - in ensuring the data fidelity, freshness, and completeness in your SIEM to fuel reliable agent decisions?
  • We've been talking about SOC automation for years, but this agentic wave feels different. As a deputy CISO, what was your primary, non-negotiable goal for the agent? Was it purely Mean Time to Respond (MTTR) reduction, or was the bigger strategic prize to fundamentally re-skill and uplevel your Tier 2/3 analysts by removing the low-value alert noise?
  • As you built this out, were there any surprises along the way that left you shaking your head or laughing at the unexpected AI behaviors?
  • We felt a major lack of proof - Anton kept asking for pudding - that any of the agentic SOC vendors we saw at RSA had actually achieved anything beyond hype! When it comes to your org, how are you measuring agent success?  What are the key metrics you are using right now?