ActiveFence is now Alice
Alice - Blog

Written in the Cards

In a landscape that shifts as quickly as AI Wonderland, we’re here to help you read the signs. Explore fresh tales, industry trends, and timely dispatches from the heart of Alice.


WonderBuild for Launch-ready GenAI

Jan 20, 2026 - 3 min read

Learn how WonderBuild from Alice helps teams stress-test GenAI before launch, uncover hidden risks, and ship secure, trusted AI with confidence.

Learn More

Meet WonderSuite: Lifecycle Security & Safety for AI systems

Jan 14, 2026 - 2 min read

Discover WonderSuite, lifecycle security and safety for generative AI. Red team, apply adaptive guardrails, and govern AI systems as risk evolves.

Learn More

Why We Became Alice

Jan 14, 2026 - 6 min read

After years on the frontlines of online harm, ActiveFence became Alice to meet a new era of communicative tech, where risk lives inside AI and responsibility must evolve alongside innovation.

Learn More

Into the Looking Glass and the Definition of Communicative Technology

Jan 14, 2026 - 2 min read

Explore how communicative technology is evolving with AI, and why trust, safety, and compliance are critical to scaling GenAI systems responsibly.

Learn More

The 5 Most Shocking LLM Weaknesses We Uncovered in 2025

Dec 25, 2025 - 6 min read

Our red team researchers uncovered five unexpected LLM vulnerabilities in 2025, from hijacked reasoning to invisible tool execution. This countdown highlights the most eye-opening failures shaping AI safety today.

Learn More

If I Already Do AI Pen Testing, Why Do I Need AI Red Teaming?

Dec 22, 2025 - 7 min read

AI penetration testing and red teaming address different risks. This article explains why passing an AI pen test doesn’t guarantee real-world safety, and how red teaming exposes systemic weaknesses attackers actually exploit.

Learn More

Be Prepared for an AI Crackdown from U.S. State Attorneys General

Dec 16, 2025 - 9 min read

A coalition of 42 U.S. State Attorneys General is demanding stronger AI safeguards, audits, and accountability. This article explains what the warning means for AI developers, enterprises, and liability risk.

Learn More

Disney and OpenAI’s $1 Billion Deal Hinges on AI Guardrails

Dec 12, 2025 - 7 min read

Disney’s $1B partnership with OpenAI opens iconic IP to AI-generated content. This article explores why guardrails, red teaming, and real-time oversight will determine whether fan creativity can scale without risking brand trust.

Learn More

Globally trusted for good reason.

Alice is led, supported, and backed by experts in communicative tech integrity. See how we use our unparalleled threat intelligence to continuously protect over 3 billion people worldwide.

Get a Demo