Exposing the Hidden Risks of AI Toys
AI-powered toys are quickly becoming part of how children learn, play, and communicate.
In this Curious Findings By Alice report, we look at how these systems actually behave in real interactions, and what that means for child safety in the AI era.

Overview
In this report, we cover:
- The types of safety gaps that emerge
- Patterns in how conversations evolve once boundaries are crossed
- What these findings reveal about broader risks in child-facing AI systems
Through hands-on testing that uses language and interaction styles reflecting how children naturally speak, we show how simple, everyday conversations can lead to unexpected and unsafe outcomes.
As AI becomes more embedded in child-facing products, understanding these behaviors early is critical. The way these systems respond will directly shape how children explore, trust, and engage with technology.

What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category than a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer that balances safety, latency, and scale. Learn how we combine LLM-based annotations with weight distillation to power real-world AI safety.
