Beyond Moderation: The Future of AI, Safety, and Prosocial Play in Gaming
Join us as we explore the next frontier of AI-powered safety in gaming and why it’s time to go beyond moderation.
Watch On-Demand

Overview
Toxicity in gaming isn’t new, but as AI and LLMs reshape user-generated content, the stakes are higher than ever. With regulations lagging and major platforms loosening safeguards, studios of all sizes face a critical choice: fight toxicity reactively or invest in proactive, prosocial tooling. In this webinar, we’ll break down:
- The State of Play: Why safety matters, the cost of inaction, and what choices studios—big and small—have today.
- The AI Factor: How LLMs will impact UGC and what to consider when selecting a moderation solution (Speed, Cost, Quality, Context, Scale).
- ROI of Detection: What we know about the business case for strong AI moderation.
- Preemptive vs. Reactive Strategies: How Safety by Design, early detection, and real-world case studies shape safer gaming spaces.
- Prosocial AI: Moving beyond harm detection—what is prosocial modeling, and how can Player Experience (PX) teams use it to foster better communities?
Meet our speakers

What’s New from Alice
Making Sense of AI: Trust, Scale, and the Human Role
Curiosity might be our most important security tool. In the first episode of Curiouser & Curiouser, Mo Sadek sits down with longtime security leader Julie Tsai to explore AI, security, and the human judgment that still matters most. Together, they cut through hype and fear to talk about what’s actually changing, what isn’t, and how we build systems we can truly trust.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
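The webinar's exact training recipe isn't shown here, but the core idea of knowledge distillation—training a compact student model to match a large teacher's temperature-softened output distribution—can be sketched in a few lines. The function names, temperature value, and logits below are illustrative assumptions, not the speakers' implementation:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    distributions, scaled by T^2 as in standard knowledge distillation."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return (temperature ** 2) * kl.mean()

# Hypothetical teacher logits for one moderation example (e.g. 3 safety labels).
teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.0, 1.0, -0.5]])

loss = distillation_loss(student, teacher)
```

A student whose logits match the teacher's incurs zero loss; the further its softened distribution drifts from the teacher's, the larger the penalty—which is what lets a small transformer absorb the large model's labeling behavior at far lower latency.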
