Safeguarding Children in the GenAI Era
Watch this webinar to learn proactive strategies and regulatory insights to protect vulnerable populations in the GenAI era.
Watch On-Demand



Overview
Trust & Safety teams are on high alert as child safety risks such as cyberbullying, harassment, and CSAM dissemination are amplified by GenAI's autonomous content generation capabilities. Join this webinar to learn strategies for safeguarding children in the age of Generative AI (GenAI).
Key Topics:
- Identifying and understanding child safety risks in the GenAI era.
- Why traditional approaches such as red-teaming fall short for child safety, and the complex regulatory challenges this creates.
- How GenAI integration in popular children's platforms accelerates the need for mitigation strategies.
- Legal challenges and regulatory considerations: the evolving landscape of GenAI regulations and their intersection with child protection laws.
- Strategies to mitigate child safety risks and integrate GenAI safely.
Gain proactive insights into identifying and mitigating child safety risks in the GenAI era, ensuring robust protections for vulnerable populations.
Meet our speakers



What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category than a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.
