Alice x Amazon Webinar: Navigating Agentic AI Risks With Agentic Solutions
Join an exclusive briefing on the future risks and safety opportunities of AI agents.


Overview
AI systems are rapidly evolving - not just in capability, but in autonomy, agency, and interconnectivity. As these systems begin to reason, plan, and act with increasing independence, the associated risk landscape is shifting as well.
In this exclusive executive briefing, Iftach Orr (CTO, Alice) and Charith Peris (Senior Applied Scientist, Amazon AGI) will delve into the next generation of AI risks - focusing on vulnerabilities already surfacing in real systems and what they mean for those designing and deploying foundation models.
A central theme: agentic AI. These systems go beyond passive generation. They take actions, invoke tools, collaborate with other agents, and operate in dynamic environments. These capabilities create new forms of emergent risk - but they also unlock new ways to improve safety, alignment, and control.
To illustrate, Charith will share Amazon’s recent work on using multi-agent deliberation to generate high-quality chain-of-thought (CoT) training data - showing how agentic systems can be used to enhance policy adherence, jailbreak robustness, and overall model safety.
What You’ll Learn
- The most urgent emerging risks in agentic AI safety and security
- Real-world examples of vulnerabilities, including malicious code generation, exploitation of agentic protocols, data exfiltration, scaled fraud, reasoning injection, and more
- How agentic AI can also improve safety - including a look at Amazon's multi-agent framework for generating safer chain-of-thought data
- Practical steps companies can take now to future-proof their agentic AI systems
Meet Our Speakers


