Alice x Amazon Webinar: Navigating Agentic AI Risks With Agentic Solutions
Join an exclusive briefing on the future risks, and safety opportunities, of AI agents.


Overview
AI systems are rapidly evolving - not just in capability, but in autonomy, agency, and interconnectivity. As these systems begin to reason, plan, and act with increasing independence, the risk landscape is shifting with them.
In this exclusive executive briefing, Iftach Orr (CTO, Alice) and Charith Peris (Senior Applied Scientist, Amazon AGI) will delve into the next generation of AI risks - focusing on vulnerabilities already surfacing in real systems and what they mean for those designing and deploying foundation models.
A central theme: agentic AI. These systems go beyond passive generation. They take actions, invoke tools, collaborate with other agents, and operate in dynamic environments. These capabilities create new forms of emergent risk - but they also unlock new ways to improve safety, alignment, and control.
To illustrate, Charith will share Amazon’s recent work on using multi-agent deliberation to generate high-quality chain-of-thought (CoT) training data - showing how agentic systems can be used to enhance policy adherence, jailbreak robustness, and overall model safety.
What You’ll Learn
- The most urgent emerging risks in agentic AI safety and security
- Real-world examples of vulnerabilities, including malicious code generation, exploitation of agentic protocols, data exfiltration, scaled fraud, reasoning injection, and more
- How agentic AI can also improve safety, including a look at Amazon's multi-agent framework for generating safer chain-of-thought data
- Practical steps companies can take now to future-proof their agentic AI
Meet Our Speakers


What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer - balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.
