Understanding AI, Risk,
and the Future
The essential podcast for innovators, leaders, and curious minds who want to navigate a world where AI is advancing faster than our ability to make sense of it.
What Does It Actually Take to Build Unbiased AI?
Nobody told Tennisha Martin how important a mentor would be, so she built a community of tens of thousands instead. As the Founder and Chairwoman of BlackGirlsHack, her whole mission has been making sure nobody else has to figure it out alone. In this episode, she and Mo get into AI bias, why it's already showing up in places that matter far beyond tech, and why the real fix starts with getting the right people in the room when these systems get built.

Tennisha Martin is the Founder and Chairwoman of BlackGirlsHack, an international nonprofit that has helped tens of thousands of people break into cybersecurity and IT. A penetration tester, bestselling author, and doctoral candidate in AI and cybersecurity, she has raised over $1.5 million in funding, been recognized as the 2025 Cybersecurity Woman Hacker of the Year, and founded SquadCon, the only Black-led independent cybersecurity conference in Las Vegas during Hacker Summer Camp.
Featured Episodes
Conversations with the practitioners, leaders, and builders shaping the future of AI and security.

Mo is an AI security practitioner with over a decade of experience in application and product security, spanning hands-on engineering, program building, and cross-functional leadership. At Alice, he focuses on advancing how organizations understand, test, and manage real-world risks in AI systems, helping bridge security research, product, and emerging AI governance needs.
David Wendt has spent 30 years building models and just as long running D&D campaigns. Turns out both taught him the same things about operating in uncertainty. He joins Mo to talk AI governance at enterprise scale, what real red teaming looks like, and why the smarter move is to stop measuring your AI and start measuring what you actually care about.

David Wendt helps executives, innovators, and tech leaders turn uncertainty about AI into actionable trust and clarity. With 30+ years guiding Fortune 100/200 organizations through strategic change, he blends systems thinking, storytelling, and practical frameworks to make AI governance accessible and impactful.

Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.

Diana Kelley is CISO at Noma Security. She has held senior security leadership roles at Microsoft, IBM Security, Symantec, and Protect AI. She is co-author of Practical Cybersecurity Architecture, a LinkedIn Learning instructor on AI/ML security, and a 2023 Global Cyber Security Hall of Fame inductee.

Most countries talk about digital trust. Estonia engineered it. Joseph Carson has spent 23 years living through Estonia's digital transformation from the inside. He and Mo get into what it actually takes to build trust at a national scale, what Estonia got right, where it went wrong, and what the rest of the world is still figuring out. Joseph is also bringing the full story to RSA Conference 2026 with his session "From Cyber War to a Digital Nation: Estonia's Playbook for Resilience."

Joseph Carson is an award-winning cybersecurity leader with 30+ years of experience securing enterprises, governments, and critical infrastructure worldwide. As Chief Security Evangelist and Advisory CISO at Segura, he helps organizations adopt identity-first, resilient security strategies. He’s the author of Cybersecurity for Dummies, a frequent industry speaker, and host of the Security by Default podcast.

AI is moving from experimentation into production, and with that shift comes a harder question: how do we actually build systems people can trust? In this episode of Curiouser & Curiouser, Mo sits down with Laura Powell, Senior Director of Partnerships at LatticeFlow AI, to talk about what that actually requires. They cover why agentic AI is outpacing the frameworks meant to govern it, where the 80/20 approach to risk breaks down, and what biased training data is already doing in production today.

Laura Powell is a technical leader working at the intersection of AI, risk, and reality. For over a decade, she’s built and led AI, data, privacy, and governance programs in high-growth environments, helping teams turn fast-moving innovation and regulatory pressure into practical, production-ready systems. Today, she focuses on responsible AI and operationalizing trust without slowing progress.

In this episode, Mo Sadek is joined by Steve Wilson (Chief AI and Product Officer at Exabeam, founder and co-chair of the OWASP GenAI Security Project) to explore how OWASP is shaping practical guidance for agentic AI security. They dig into prompt injection, guardrails, red teaming, and what responsible adoption can look like inside real organizations.
Steve Wilson is Chief AI and Product Officer at Exabeam and founder and co-chair of the OWASP Gen AI Security Project, where he helps shape the standards for secure, production-ready AI. A recognized AI security leader, he’s a Google Cloud AI Innovation All-Star and author of The Developer’s Playbook for Large Language Model Security. He previously held leadership roles at Sun, Citrix, and Oracle and holds 11 patents.
