ActiveFence is now Alice
Alice - Blog

Perspectives

Insights on AI security, governance, and real-world risk.
Learn how to prevent prompt injection, secure AI systems, and manage AI in production.


Five Competitive Advantages from Real-Time GenAI Guardrails

Jul 14, 2025 · 4 min read

See how implementing runtime guardrails in your GenAI-powered apps gives you an edge over the competition.


When War Is a Game: How Video Game Footage Fuels Conflict Misinformation

Jul 9, 2025 · 6 min read

Video game footage is increasingly passed off as real war content online, shaping conflict narratives and misleading even mainstream outlets. ActiveFence explains how and why this happens, and how trust & safety teams can respond.


Real-Time AI Safety Now Available on AWS Marketplace

Jul 9, 2025 · 3 min read

Discover how ActiveFence Guardrails now delivers real-time AI safety with low latency and no-code controls in secure, scalable AWS enterprise deployments.


Why CISOs Like Me Don’t Sleep in 2025: What You Must Know About Securing GenAI

Jul 3, 2025 · 7 min read

Discover what really keeps CISOs up at night: our very own Guy Stern shares frontline insights into GenAI risk in 2025, exposing hidden vulnerabilities, internal misuse, and how enterprise security must adapt.


Exfiltrating Secrets from LLM Memory: Lessons from the Red Team Trenches

Jul 2, 2025 · 7 min read

RAG makes AI smarter while also creating new ways for attackers to steal private data. Learn how our Red Team Lab used "memory exfiltration" to trick models into leaking sensitive information through hidden browser requests.


How Scammers Are Abusing GenAI to Impersonate and Manipulate

Jun 26, 2025 · 6 min read

Scammers are using GenAI to impersonate people, brands, and institutions at scale, fueling fraud, misinformation, and exploitation. Here's how impersonation abuse works and what AI deployers must do to stay ahead.


How Roleplay and Multi-Turn Prompts Bypass LLM Guardrails

Jun 19, 2025 · 6 min read

New research shows how roleplaying and multi-turn prompts can bypass LLM moderation and jailbreak protections. This post explains how evasion attacks work, why single-turn filters fail, and how to mitigate real-world risk.


Trusted by security and product teams in the world's most regulated industries

Alice brings years of adversarial intelligence expertise to AI security, giving enterprise teams the coverage that generic guardrails and one-time audits can't match.

Get a Demo