GenAI: The new attack vector for trust & safety
Malicious actors are rapidly moving from experimenting with GenAI to deploying it for large-scale abuse across digital platforms. This report uncovers how these actors manipulate AI models to bypass traditional safeguards and automate harmful activity.
- Understand the shift from manual to AI-accelerated threats in fraud and disinformation.
- Identify the specific techniques predators and extremists use to subvert safety filters.
- Learn proactive strategies to fortify your moderation workflows against synthetic harms.

Download the Full Report
Overview
The democratization of Generative AI has handed bad actors a powerful new toolkit for amplifying their operations at unprecedented scale. While much of the industry concentrates on internal model safety, our research looks to the "wild": the hidden communities where threat actors trade tutorials on jailbreaking models and generating prohibited content. From hyper-realistic synthetic media built for disinformation campaigns to the automated grooming of minors, the nature of online harm is undergoing a fundamental shift.
In the report, "Generative AI: The New Attack Vector for Trust and Safety," Alice draws on exclusive threat intelligence to show how these groups bypass existing safety guardrails. We examine real-world case studies, including a 172% increase in AI-generated harmful imagery and the rise of deepfake audio used to sow political instability. By understanding these adversary TTPs (Tactics, Techniques, and Procedures), Trust and Safety teams can move from a reactive posture to a proactive defense. This research provides the context needed to anticipate how GenAI will be weaponized, helping you build more resilient systems that protect your users and your brand's integrity.
What’s New from Alice
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
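For readers who want a concrete picture of what pairing LLM-based annotations with distillation can look like, the sketch below shows a generic soft-label knowledge distillation setup in PyTorch. It is a minimal illustration, not Alice's actual pipeline: the model, hyperparameters, and tensor shapes (TinyClassifier, temperature T, blend weight alpha) are assumptions chosen for brevity. The idea is that a large teacher model annotates examples with logits, and a compact transformer student is trained against a blend of those soft labels and the hard safety labels.

```python
# Minimal, generic sketch of soft-label knowledge distillation.
# Names, sizes, and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyClassifier(nn.Module):
    """Compact 'student' transformer for a binary safety label."""

    def __init__(self, vocab_size=30522, dim=128, num_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        self.head = nn.Linear(dim, num_labels)

    def forward(self, input_ids):
        hidden = self.encoder(self.embed(input_ids))   # (batch, seq, dim)
        return self.head(hidden.mean(dim=1))           # mean-pool, then classify


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend KL against the teacher's softened logits with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard


# Toy usage: teacher_logits stand in for LLM-based annotations of the same batch.
student = TinyClassifier()
input_ids = torch.randint(0, 30522, (8, 64))    # batch of token ids
teacher_logits = torch.randn(8, 2)              # placeholder LLM annotation scores
labels = torch.randint(0, 2, (8,))              # hard labels (e.g., safe/unsafe)
loss = distillation_loss(student(input_ids), teacher_logits, labels)
loss.backward()
```

The temperature softens the teacher distribution so the student learns the relative confidence behind the annotations rather than only the argmax label, which is typically how a compact model retains much of a larger model's judgment at far lower latency.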
