The GenAI Surge in NCII Production
Non-Consensual Intimate Imagery (NCII) is a crime that predominantly targets women. Motivations for its creation range from sexualization to shaming and extortion. While law and policy around this violent behavior have historically addressed authentic imagery that was recorded, leaked, or stolen, GenAI has revolutionized how NCII is created. Threat actors are using AI tools built on models trained on pornographic material to produce synthetic sexual images of real people. All that is needed is a victim's photograph taken from a social media or dating profile.

Overview
This Alice report, featured by the Daily Mail and Forbes, examines how Generative AI has sparked a surge in this activity across platforms. The report includes:
- Data on the rising demand for GenAI services used to create synthetic NCII;
- Threat actor TTPs used to generate synthetic NCII from social media and other online platforms;
- Information on threat actor circumvention of GenAI model safeguards;
- A sample of the clear-web ecosystem that supports this dangerous economy.
Download the Full Report
What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer, balancing safety, latency, and scale. Learn how we combine LLM-based annotations with weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice testing shows how child-like interactions can elicit inappropriate content, unsafe conversations, and risky behaviors.
