ActiveFence is now Alice
Case Study: Amazon Nova

Validate Model Safety and Benchmark Against Competitors for Responsible Deployment

To validate its most advanced foundation model to date, Amazon engaged Alice for a manual red-teaming evaluation of Nova Premier, testing the model's readiness for safe and secure deployment.

Feb 18, 2026
Company Info

Industry: GenAI - LLM
About

Nova Premier is Amazon’s most advanced foundation model, designed for complex reasoning and serving as a distillation teacher for downstream systems.

"Through this hands-on evaluation, Alice strengthened Nova’s security posture and supported Amazon’s broader Responsible AI goals, ensuring the model could be deployed with greater confidence."

Rahul Gupta, Senior Manager, Responsible AI, Amazon AGI
AT A GLANCE

To help validate its most advanced model to date, Amazon partnered with Alice to red-team Nova Premier against high-risk prompts. The results positioned Nova as safer than its competitors, marking a major step toward secure enterprise deployment.

Challenge

Amazon aimed to rigorously validate the safety of its most capable foundation model, Nova Premier, ahead of public release. Given the increasing risks associated with advanced generative models, Amazon sought to benchmark it against real-world adversarial threats across critical responsible AI (RAI) categories.

Solution

Alice partnered with Amazon as a third-party red teamer to perform manual, blind evaluations of Nova Premier on Amazon Bedrock. Testing spanned prompts across Amazon’s eight RAI categories, including safety, fairness and bias, and privacy and security. Alice also benchmarked Nova Premier against other LLMs for comparison.

Impact

The collaboration demonstrated how expert-led manual red teaming complements automated testing, offering a comprehensive snapshot of model robustness.


Globally trusted for good reason.

Alice is led, supported, and backed by experts in tech integrity. See how we use our unparalleled threat intelligence to continuously protect over 3 billion people worldwide.


What’s New from Alice

The Rise and Risk of Reasoning Agents

Blog - Feb 18, 2026 - 6 min read

As AI agents gain the ability to reason, plan, and act autonomously, their internal thinking becomes a new attack surface that must be protected just as carefully as the tools they use.


How Your Agent-to-Agent Systems Can Fail and How to Prevent It

Whitepaper - Oct 22, 2025

Discover the risks that AI Agents pose and how you can protect your Agentic AI systems.
