Beyond Moderation: The Future of AI, Safety, and Prosocial Play in Gaming
Join us as we explore the next frontier of AI-powered safety in gaming and why it’s time to go beyond moderation.
Watch On-Demand

Overview
Toxicity in gaming isn’t new, but as AI and LLMs reshape user-generated content, the stakes are higher than ever. With regulations lagging and major platforms loosening safeguards, studios of all sizes face a critical choice: fight toxicity reactively or invest in proactive, prosocial tooling. In this webinar, we’ll break down:
- The State of Play: Why safety matters, the cost of inaction, and what choices studios—big and small—have today.
- The AI Factor: How LLMs will impact UGC and what to consider when selecting a moderation solution (Speed, Cost, Quality, Context, Scale).
- ROI of Detection: What we know about the business case for strong AI moderation.
- Preemptive vs. Reactive Strategies: How Safety by Design, early detection, and real-world case studies shape safer gaming spaces.
- Prosocial AI: Moving beyond harm detection. What is prosocial modeling, and how can Player Experience (PX) teams use it to foster healthier communities?

