WEBINAR
5 Red Teaming Tactics to Ensure GenAI Safety
As Generative AI evolves, traditional security testing is no longer sufficient. Join this expert-led session to learn essential red teaming tactics that identify hidden risks and ensure safer model releases.
May 28, 2024
Unlock the full recording
Watch On-Demand
Overview
Relying on standard safety benchmarks can leave your GenAI models vulnerable to sophisticated, real-world misuse. This webinar explores how to apply Safety by Design principles through advanced red teaming to mitigate risks and improve model reliability.
- Explore red teaming approaches tailored for different company sizes and use cases.
- Gain insights into reducing bias and uncovering unknown failure modes in LLMs.
- Learn how to measure the effectiveness of your testing to ensure ongoing safety.
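To make the last point concrete: one common effectiveness metric is attack success rate, the fraction of adversarial prompts that elicit a policy-violating response, tracked across releases. The sketch below is purely illustrative and not taken from the webinar; `query_model` and `violates_policy` are hypothetical placeholders standing in for the model under test and a safety judge (a human review queue or a trained classifier).

```python
# Illustrative sketch only: query_model and violates_policy are
# hypothetical placeholders, not part of any specific product or API.

def query_model(prompt: str) -> str:
    """Send a prompt to the model under test and return its response."""
    raise NotImplementedError

def violates_policy(response: str) -> bool:
    """Judge whether a response breaks the safety policy, e.g. via
    human review or a trained safety classifier."""
    raise NotImplementedError

def attack_success_rate(adversarial_prompts: list[str]) -> float:
    """Fraction of adversarial prompts that elicit a policy violation.
    Comparing this rate before and after mitigations, and across model
    releases, is one way to measure whether red teaming is working."""
    if not adversarial_prompts:
        return 0.0
    violations = sum(violates_policy(query_model(p)) for p in adversarial_prompts)
    return violations / len(adversarial_prompts)
```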
Meet our speakers

Tomer Poran
VP Strategy and Business Development, ActiveFence

Guy Paltieli, PhD
Head of GenAI Trust & Safety, ActiveFence

Tomomi Tanaka, PhD
Founder, Safety by Design Lab

Yoav Schlesinger
Responsible AI & Tech Architect, Salesforce
What’s New from Alice
Distilling LLMs into Efficient Transformers for Real-World AI
webinar
Sep 25, 2025
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
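For context on the technique this teaser references (a standard approach, not necessarily the exact recipe presented in the webinar): weight distillation typically trains a compact student transformer to match a larger teacher's output distribution using a temperature-softened KL-divergence loss. Below is a minimal sketch, assuming PyTorch; `student_logits` and `teacher_logits` are hypothetical inputs.

```python
# Minimal sketch of soft-label knowledge distillation (the standard
# technique, not the webinar's specific recipe), assuming PyTorch.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions; the T**2 factor keeps gradients on a scale
    comparable to a standard cross-entropy loss."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)
```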
