Written in the Cards
In a landscape that shifts as quickly as AI Wonderland, we’re here to help you read the signs. Explore fresh tales, industry trends, and timely dispatches from the heart of Alice.
Meet WonderSuite: Lifecycle Security & Safety for AI systems
Why CISOs Like Me Don’t Sleep in 2025: What You Must Know About Securing GenAI
Discover what really keeps CISOs up at night from our very own Guy Stern, who shares frontline insights into GenAI risk in 2025: hidden vulnerabilities, internal misuse, and how enterprise security must adapt.
Exfiltrating Secrets from LLM Memory: Lessons from the Red Team Trenches
RAG makes AI smarter while also creating new ways for hackers to steal private data. Learn how our Red Team Lab used "memory exfiltration" to trick models into leaking sensitive info through hidden browser requests.
Building Safer AI Agents on Databricks with Alice WonderFence
Discover how Alice and Databricks are partnering to build safer AI agents. Learn how Alice's WonderFence integrates with Databricks’ Mosaic AI Agent Framework to mitigate risks like prompt injection, toxic outputs, and policy violations, ensuring secure, compliant AI deployment at scale.
How Roleplay and Multi-Turn Prompts Bypass LLM Guardrails
New research shows how roleplaying and multi-turn prompts can bypass LLM moderation and jailbreak protections. This post explains how evasion attacks work, why single-turn filters fail, and how to mitigate real-world risk.
What CBRN Testing Reveals About LLM Vulnerabilities
Learn how AI systems misbehave when prompted in one of the most dangerous threat areas: high-risk CBRN. Based on ActiveFence’s internal testing of leading LLMs, the results reveal critical safety gaps that demand serious attention from enterprise developers.
The integrity of your GenAI can no longer be an afterthought.
See how we embed GenAI safety and security from build, to launch, to continuous operation.
Get a demo