Whitepaper

GenAI: The new attack vector for trust & safety

Malicious actors are rapidly moving from testing GenAI to deploying it for large-scale abuse across digital platforms. This report uncovers how these groups manipulate AI models to bypass traditional safeguards and automate harmful activities.

  • Understand the shift from manual to AI-accelerated threats in fraud and disinformation.
  • Identify the specific techniques predators and extremists use to subvert safety filters.
  • Learn proactive strategies to fortify your moderation workflows against synthetic harms.

May 23, 2023

Download the Full Report

Overview

The democratization of Generative AI has handed bad actors a powerful new toolkit for amplifying their operations at unprecedented scale. While much of the industry concentrates on internal model safety, our research looks to the "wild"—the hidden communities where threat actors share tutorials on jailbreaking models and generating prohibited content. From creating hyper-realistic synthetic media for disinformation to automating the grooming of minors, the nature of online harm is undergoing a fundamental shift.

In the report, "Generative AI: The New Attack Vector for Trust and Safety," Alice draws on exclusive threat intelligence to show how these groups are bypassing existing safety guardrails. We examine real-world case studies, including a 172% increase in AI-generated harmful imagery and the rise of deepfake audio used to sow political instability. By understanding these adversary TTPs (Tactics, Techniques, and Procedures), Trust and Safety teams can move from a reactive posture to a proactive defense. This research provides the context necessary to anticipate how GenAI will be weaponized, helping you build more resilient systems that protect your users and your brand’s integrity.

Ready to secure the keys to the GenAI wonderland?

Get a demo