Perspectives
Insights on AI security, governance, and real-world risk.
Learn how to prevent prompt injection, secure AI systems, and manage AI in production.
What AI Red Teaming Looks Like Outside the Lab
What CBRN Testing Reveals About LLM Vulnerabilities
Learn how AI systems misbehave when prompted in one of the most dangerous threat areas: high-risk CBRN (chemical, biological, radiological, and nuclear) content. Based on ActiveFence’s internal testing of leading LLMs, the results reveal critical safety gaps that demand serious attention from enterprise developers.
Alice Continues to Power Safety-First Enterprise AI with NVIDIA
Scaling AI requires safety by design. Learn how Alice is integrating with NVIDIA NeMo Guardrails to provide real-time protection and help enterprises deploy AI agents that are secure, responsible, and aligned.
Alice Powers the AI Safety Flywheel with NVIDIA
Many AI safety and governance approaches break down once AI systems reach production. See how Alice and NVIDIA address this gap with an AI safety flywheel combining adversarial testing, guardrails, and real-time oversight to keep AI systems governed at scale.
From Shadows to Sanctions: Unmasking Russia’s Hybrid Influence Strategy
The EU has formally sanctioned key players behind Russia’s coordinated disinformation ecosystem. These campaigns, long monitored by ActiveFence, reveal a complex strategy built on narrative laundering, infrastructure resilience, and long-term influence.
Exposing the Threat Landscape: A Taxonomy of GenAI Attack Vectors
GenAI systems introduce a new and rapidly evolving attack surface. This taxonomy outlines the most common GenAI attack vectors, from prompt injection to multimodal evasion, and shows how enterprises can detect and defend against them.
Trusted by security and product teams in the world's most regulated industries
Alice brings years of adversarial intelligence expertise to AI security. We give enterprise teams the coverage that generic guardrails and one-time audits can't match.