Mitigating the Risks of Agentic AI
Join us as we discuss how to detect and avoid the risks Agentic AI poses to data, markets, infrastructure, and more.
Watch On-Demand


Overview
Without proper safeguards, Agentic AI’s unpredictable behaviors, susceptibility to manipulation, and ability to act independently could result in data breaches, disinformation, market instability, and infrastructure failures. Join us for this webinar as we walk through actionable steps you can take to mitigate the risks posed by Agentic AI. Designed for AI developers and enterprises looking to build AI agents, this session dives deep into essential topics, including:
- The risks posed by Agentic AI through four critical lenses: Privacy and Data Breaches, Fraud and Finance, Physical Safety, and Influence Operations.
- How to detect unusual agent behaviors in each of these risk areas.
- Recommendations to mitigate risks without missing out on the benefits Agentic AI brings.
- The expert methodologies ActiveFence uses to test Agentic AI systems for reliability, safety, and compliance.
Watch now and discover how to ensure your AI agents are secure and resilient for safe distribution and use across diverse applications.
Meet our speakers


What’s New from Alice
"Okay, Here is How to Build a Bomb": Millions Download Dangerous LLMs
Thousands of abliterated LLMs have flooded open-source platforms with millions of downloads. These models comply with virtually any request, from bomb-making to malware, and run fully offline on consumer devices.
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
