Beyond Moderation: The Future of AI, Safety, and Prosocial Play in Gaming
Join us as we explore the next frontier of AI-powered safety in gaming and why it’s time to go beyond moderation.
Watch On-Demand

Overview
Toxicity in gaming isn’t new, but as AI and LLMs reshape user-generated content, the stakes are higher than ever. With regulations lagging and major platforms loosening safeguards, studios of all sizes face a critical choice: fight toxicity reactively or invest in proactive, prosocial tooling. In this webinar, we’ll break down:
- The State of Play: Why safety matters, the cost of inaction, and what choices studios—big and small—have today.
- The AI Factor: How LLMs will impact UGC and what to consider when selecting a moderation solution (Speed, Cost, Quality, Context, Scale).
- ROI of Detection: What we know about the business case for strong AI moderation.
- Preemptive vs. Reactive Strategies: How Safety by Design, early detection, and real-world case studies shape safer gaming spaces.
- Prosocial AI: Moving beyond harm detection—what is prosocial modeling, and how can Player Experience (PX) teams use it to foster better communities?
Meet our speakers

What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category than a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer, balancing safety, latency, and scale. Learn how we combine LLM-based annotation with weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children's everyday lives, but new research reveals serious safety gaps. Alice's testing shows how child-like interactions can elicit inappropriate content, unsafe conversations, and risky behaviors.
