Understanding AI, Risk,
and the Future
The essential podcast for innovators, leaders, and curious minds who want to navigate a world where AI is advancing faster than our ability to make sense of it.
AI Governance Needs a Dungeon Master
David Wendt has spent 30 years building models and just as long running D&D campaigns. Turns out both taught him the same things about operating in uncertainty. He joins Mo to talk AI governance at enterprise scale, what real red teaming looks like, and why the smarter move is to stop measuring your AI and start measuring what you actually care about.


David Wendt helps executives, innovators, and tech leaders turn uncertainty about AI into actionable trust and clarity. With 30+ years guiding Fortune 100/200 organizations through strategic change, he blends systems thinking, storytelling, and practical frameworks to make AI governance accessible and impactful.
Featured Episodes
Conversations with the practitioners, leaders, and builders shaping the future of AI and security.
David Wendt has spent 30 years building models and just as long running D&D campaigns. Turns out both taught him the same things about operating in uncertainty. He joins Mo to talk AI governance at enterprise scale, what real red teaming looks like, and why the smarter move is to stop measuring your AI and start measuring what you actually care about.

David Wendt helps executives, innovators, and tech leaders turn uncertainty about AI into actionable trust and clarity. With 30+ years guiding Fortune 100/200 organizations through strategic change, he blends systems thinking, storytelling, and practical frameworks to make AI governance accessible and impactful.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.

Diana Kelley is CISO at Noma Security. She has held senior security leadership roles at Microsoft, IBM Security, Symantec, and Protect AI. She is co-author of Practical Cybersecurity Architecture, a LinkedIn Learning instructor on AI/ML security, and a 2023 Global Cyber Security Hall of Fame inductee.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.
Most countries talk about digital trust. Estonia engineered it. Joseph Carson has spent 23 years living through Estonia's digital transformation from the inside. He and Mo get into what it actually takes to build trust at a national scale, what Estonia got right, where it went wrong, and what the rest of the world is still figuring out. Joseph is also bringing the full story to RSA Conference 2026 with his session "From Cyber War to a Digital Nation: Estonia's Playbook for Resilience."

Joseph Carson is an award-winning cybersecurity leader with 30+ years of experience securing enterprises, governments, and critical infrastructure worldwide. As Chief Security Evangelist and Advisory CISO at Segura, he helps organizations adopt identity-first, resilient security strategies. He’s the author of Cybersecurity for Dummies, a frequent industry speaker, and host of the Security by Default podcast.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.
AI is moving from experimentation into production, and with that shift comes a harder question: how do we actually build systems people can trust? In this episode of Curiouser & Curiouser, Mo sits down with Laura Powell, Senior Director of Partnerships at LatticeFlow AI, to talk about what that actually requires. They cover why agentic AI is outpacing the frameworks meant to govern it, where the 80/20 approach to risk breaks down, and what biased training data is already doing in production today.

Laura Powell is a technical leader working at the intersection of AI, risk, and reality. For over a decade, she’s built and led AI, data, privacy, and governance programs in high-growth environments, helping teams turn fast-moving innovation and regulatory pressure into practical, production-ready systems. Today, she focuses on responsible AI and operationalizing trust without slowing progress.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.
In this episode, Mo Sadek is joined by Steve Wilson (Chief AI and Product Officer at Exabeam, founder and co-chair of the OWASP GenAI Security Project) to explore how OWASP is shaping practical guidance for agentic AI security. They dig into prompt injection, guardrails, red teaming, and what responsible adoption can look like inside real organizations.
.webp)
Steve Wilson is Chief AI and Product Officer at Exabeam and founder and co-chair of the OWASP Gen AI Security Project, where he helps shape the standards for secure, production-ready AI. A recognized AI security leader, he’s a Google Cloud AI Innovation All-Star and author of The Developer’s Playbook for Large Language Model Security. He previously held leadership roles at Sun, Citrix, and Oracle and holds 11 patents.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.
Curiosity might be our most important security tool. In the first episode of Curiouser & Curiouser, Mo Sadek sits down with longtime security leader Julie Tsai to explore AI, security, and the human judgment that still matters most. Together, they cut through hype and fear to talk about what’s actually changing, what isn’t, and how we build systems we can truly trust.
.webp)
Julie Tsai is a CISO-in-Residence at Ballistic Ventures, board member of the Bay Area CSO Council, and cybersecurity leader for AI Insiders. A six-time CISO and SecDevOps specialist, she’s led security at organizations ranging from startups to Fortune 1 companies, including Roblox, WalmartLabs, and Box. Julie advises startups, teaches cybersecurity, and focuses on using AI and DevSecOps to build stronger, more responsible security practices.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.





