AI’s New Frontier: The Threats of Synthetic Video
AI video generation is advancing faster than most safety frameworks can track, and threat actors are already ahead of the curve. Child predators, terrorist networks, and hate groups are actively testing synthetic video tools to produce illegal content, radicalize audiences, and manufacture false narratives at scale. The window to get ahead of this threat is narrowing.
Download this report to understand exactly how these actors operate and what your safety strategy needs to account for before they reach your platform.

Overview
In this report, we cover:
- How child predators are using text-to-video models and LoRA training techniques to generate synthetic CSAM, including the methods they are developing to bypass safety restrictions built into mainstream AI tools
- How terrorist organizations and hate communities are embedding GenAI video into propaganda campaigns, using synthetic media to glorify violence, radicalize viewers, and fabricate statements from real public figures
- How to identify the safety gaps that put your platform at legal and reputational risk as these adversarial tactics grow in sophistication
Use this report to brief your safety and product leadership on the threat landscape taking shape right now. Download it today and make informed decisions that keep your platform and your users protected.
Download the Full Report
What’s New from Alice
Your LLM Has No Idea What It’s Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
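For readers unfamiliar with weight distillation, the core of the technique the webinar describes is training a small student model to match a large teacher's softened output distribution. A minimal sketch of the standard temperature-scaled KL distillation loss is below; the function names and toy logits are illustrative assumptions, not Alice's actual implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature softens the
    distribution, exposing the teacher's 'dark knowledge' about
    near-miss classes."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in standard knowledge distillation (Hinton et al.)."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl

# Toy logits: the student roughly tracks the teacher, so the loss is small
# but nonzero; minimizing it pulls the student toward the teacher.
teacher = [3.0, 1.0, 0.2]
student = [2.8, 1.1, 0.3]
loss = distillation_loss(teacher, student)
```

In practice this loss is computed over a production model's logits and combined with a hard-label term, but the gradient signal driving the compact transformer comes from the term above.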
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.
