ActiveFence is now Alice
WEBINAR

5 Red Teaming Tactics to Ensure GenAI Safety

As Generative AI evolves, traditional security testing is no longer sufficient. Join this expert-led session to learn essential red teaming tactics that identify hidden risks and ensure safer model releases.

May 28, 2024

Unlock the full recording

Watch On-Demand


Overview

Relying on standard safety benchmarks can leave your GenAI models vulnerable to sophisticated, real-world misuse. This webinar explores how to apply Safety by Design principles through advanced red teaming to mitigate risks and improve model reliability.

  • Explore red teaming approaches tailored for different company sizes and use cases.
  • Gain insights into reducing bias and uncovering unknown failure modes in LLMs.
  • Learn how to measure the effectiveness of your testing to ensure ongoing safety.

Meet our speakers

Tomer Poran
VP Strategy and BizDev
Guy Paltieli, PhD
Head of GenAI Trust & Safety, ActiveFence
Tomomi Tanaka, PhD
Founder, Safety by Design Lab
Yoav Schlesinger
Responsible AI & Tech Architect, Salesforce

What’s New from Alice

The Rise and Risk of Reasoning Agents

Blog | Feb 18, 2026 | 6 min read

As AI agents gain the ability to reason, plan, and act autonomously, their internal thinking becomes a new attack surface that must be protected just as carefully as the tools they use.

Learn More

How Your Agent-to-Agent Systems Can Fail and How to Prevent It

Whitepaper | Oct 22, 2025

Discover the risks that AI agents pose and how you can protect your agentic AI systems.

Learn More