Designing your AI safety tool stack: What to build, buy, and blend
Building a secure AI application requires more than a single filter. Discover how to architect a multi-layered safety stack that addresses risks at every stage, from model development to real-time user interactions.
Watch On-Demand



Overview
A robust AI safety strategy requires a coordinated approach across the entire tech stack to prevent vulnerabilities like jailbreaking and data leakage. This session explores the essential components needed to build a defense-in-depth architecture for your AI products.
- Learn the differences between model-level, system-level, and application-level safety.
- Discover how to integrate real-time guardrails without compromising system performance (see the sketch below for a simple example of this pattern).
- Understand how to choose the right safety tools for different stages of the AI lifecycle.
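The session itself is tool-agnostic; purely as an illustration, here is a minimal sketch, assuming a Python service, of what an application-level guardrail wrapped around a model call can look like. The names `call_model` and `violates_policy` are hypothetical placeholders for your inference API and a lightweight policy check, not part of any specific product.

```python
# Minimal sketch of an application-level guardrail layer (illustrative only).
# `call_model` and `violates_policy` are hypothetical placeholders.

import asyncio
import time


async def violates_policy(text: str) -> bool:
    # Placeholder: a fast, local check (keyword rules, a small classifier, etc.)
    blocked_terms = {"ignore previous instructions"}
    return any(term in text.lower() for term in blocked_terms)


async def call_model(prompt: str) -> str:
    # Placeholder for a real inference call; the sleep simulates model latency.
    await asyncio.sleep(0.2)
    return f"Model response to: {prompt}"


async def guarded_completion(prompt: str) -> str:
    # Input-side guardrail runs before any tokens are generated.
    if await violates_policy(prompt):
        return "Request blocked by input guardrail."

    response = await call_model(prompt)

    # Output-side guardrail screens the response before it reaches the user.
    if await violates_policy(response):
        return "Response withheld by output guardrail."
    return response


if __name__ == "__main__":
    start = time.perf_counter()
    print(asyncio.run(guarded_completion("Summarize our data retention policy.")))
    print(f"Round trip: {time.perf_counter() - start:.2f}s")
```

Keeping these checks lightweight and local is what allows this layer to sit in the request path without adding meaningful latency.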
Meet our speakers



What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice’s testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.
