Whitepaper
Emerging Threats Risk Assessment: Are LLMs Ready?
Large language models (LLMs) are advancing fast, but so are the threats they face. How well do today’s top models handle high-risk prompts in areas like child safety, fraud, and abuse? To find out, we tested 7 leading LLMs against 33 emerging threats. The results reveal critical gaps that could put users, businesses, and platforms at risk. Download the report to learn more.
Feb 5, 2025

Download the Full Report
Overview
In this report, we cover:
- How top LLMs respond to high-risk prompts across key abuse areas
- Where the biggest vulnerabilities exist, and what they mean for AI safety
- Steps platforms can take to strengthen LLM defenses against evolving threats