NEW YORK, July 23, 2024 — Alice, formerly ActiveFence, the leading technology solution for Trust and Safety intelligence, management, and content moderation, today announced an industry breakthrough: algorithms that can detect newly generated or manipulated Child Sexual Abuse Material (CSAM), going beyond the detection of CSAM already present in existing databases.
ActiveFence’s automated detection solution, ActiveScore, uses AI to identify harmful content at scale. The key advantage of our AI models in CSAM detection is their ability to identify new and previously unreported content.
The production, distribution, and consumption of CSAM have been significant societal issues for decades. With the internet’s widespread use and the rise of file-sharing websites, social media platforms, and Generative AI (GenAI), the situation has worsened. CSAM is inherently evasive, as bad actors continuously generate new items and manipulate previously reported ones to evade detection. This makes it nearly impossible for platforms to effectively detect and remove CSAM without the right AI models.
ActiveFence’s AI algorithms identify novel CSAM across modalities, including video, image, and text. Trained on proprietary data sources, our text detectors can detect sexual solicitation and discussions of CSAM, and can estimate a user’s age. They also identify the use of specific keywords and emojis, multilingual terminology, and GenAI text-prompt manipulation techniques. For images, our computer vision detectors identify indicators of CSAM, including specific body parts, and estimate age.
“While image hashing and matching have been effective, they are not enough, especially in the GenAI era, where the barrier to entry for creating new and therefore unindexed CSAM has been drastically lowered,” said Matar Haller, PhD, VP of Data and AI at ActiveFence. “Integrating AI detection models is critical to ensure we are able to effectively and efficiently detect at scale.”
Future technological advancements will further enhance CSAM detection by identifying even more subtle features and patterns that traditional methods often miss. These advancements will play a vital role in countering the evolving tactics of child predators, particularly as generative AI continues to evolve.
To learn more about how ActiveFence safeguards online platforms and users against online harm, please visit alice.io.
About ActiveFence:
ActiveFence is the leading Trust and Safety provider for online platforms, protecting over three billion users daily from malicious behavior and content. Trust and Safety teams of all sizes rely on ActiveFence to keep their users safe from the widest spectrum of online harms, including child abuse, disinformation, hate speech, terror, fraud, and more. We offer a full stack of capabilities, combining deep intelligence research with an AI-driven platform for harmful content detection and moderation. ActiveFence protects platforms globally, in over 100 languages, letting people interact and thrive safely online.
