GenAI: The new attack vector for trust & safety
Malicious actors are rapidly moving from testing GenAI to deploying it for large-scale abuse across digital platforms. This report uncovers how these groups manipulate AI models to bypass traditional safeguards and automate harmful activities.
- Understand the shift from manual to AI-accelerated threats in fraud and disinformation.
- Identify the specific techniques predators and extremists use to subvert safety filters.
- Learn proactive strategies to fortify your moderation workflows against synthetic harms.
Overview
The democratization of Generative AI has handed bad actors a powerful new toolkit to amplify their operations at unprecedented scale. While much of the industry focuses on internal model safety, our research looks to the "wild"—the hidden communities where threat actors share tutorials on jailbreaking models and generating prohibited content. From creating hyper-realistic synthetic media for disinformation to automating the grooming of minors, the nature of online harm is undergoing a fundamental shift.
In the report, "Generative AI: The New Attack Vector for Trust and Safety," Alice draws on exclusive threat intelligence to show how these groups are bypassing existing safety guardrails. We examine real-world case studies, including a 172% increase in AI-generated harmful imagery and the rise of deepfake audio used to foment political instability. By understanding these adversary TTPs (Tactics, Techniques, and Procedures), Trust and Safety teams can move from a reactive posture to a proactive defense. This research provides the context necessary to anticipate how GenAI will be weaponized, helping you build more resilient systems that protect your users and your brand’s integrity.
Download the Full Report
What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice’s testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.
