GenAI deployment: What's the worst that can happen?
Moving GenAI from a successful pilot to a production-ready application requires more than just scaling code. Learn how to navigate the complex security, safety, and deployment risks that emerge when taking models live.
Watch On-Demand



Overview
The transition from experimentation to production is where most GenAI projects face their toughest hurdles, from cost management to unpredictable model behavior. This session provides a roadmap for safely deploying GenAI while maintaining model integrity and user trust.
- Understand the key differences between risks in testing environments and in live production.
- Learn to identify and mitigate deployment-specific vulnerabilities like data drift and prompt injection.
- Discover industry benchmarks and strategies for managing the long-term reliability of AI applications.
Meet our speakers



What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice’s testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.
