ActiveFence is now Alice
WEBINAR

LLM Safety Review: Benchmarks & Analysis

Find out what happened when we tested the responses of six leading LLMs, in seven languages, to over 20,000 prompts related to child exploitation, hate speech, suicide and self-harm, and misinformation.

Aug 23, 2023

Watch On-Demand


Overview

As more applications adopt generative AI, a clear understanding of foundation models' safety risks becomes imperative. In this webinar, we review the outcomes of Alice's LLM safety benchmarking report, which evaluated whether gaps exist in the basic safety of GenAI apps and LLM providers. From child exploitation to misinformation, and from hate speech to self-harm, we discuss harmful model outputs, the ways bad actors can abuse LLMs, and the risks to applications that rely on them. Join us to learn how we evaluated LLM safety and which risks you should consider as you implement these models in your own applications.

Meet our speakers

Nitzan Tamari
Generative AI Solutions Advisor, ActiveFence
Guy Paltieli, PhD
Head of GenAI Trust & Safety, ActiveFence

What’s New from Alice

The Rise and Risk of Reasoning Agents

Blog | Feb 18, 2026 | 6 min read

As AI agents gain the ability to reason, plan, and act autonomously, their internal thinking becomes a new attack surface that must be protected just as carefully as the tools they use.

Learn More

How Your Agent-to-Agent Systems Can Fail and How to Prevent It

Whitepaper | Oct 22, 2025

Discover the risks that AI Agents pose and how you can protect your Agentic AI systems.

Learn More