
Building Safer AI Products Through Proactive Red Teaming

Lovable partnered with Alice to proactively detect risks, strengthen trust and safety, and help shape a safer internet.

Apr 9, 2026
Company Info

Company Size: 150+ Employees

Industry: AI / Developer Tools / Web Creation Platforms

About

Lovable is a software creation platform that empowers anyone to build full-stack apps and websites by chatting with AI. In its first year, builders created over 25 million projects with Lovable. By removing technical barriers and long development cycles, Lovable enables creators to ship real products in days. As its product evolves, Lovable remains focused on user trust, safety, and responsible product development as core components of the platform's design.

"As AI capabilities advance, so do the risks that accompany them. Working with Alice as a safety partner enables us to proactively simulate real-world misuse scenarios, stay ahead of emerging threats, and reinforce protections designed to keep users safe."

Alejandra Arreola Ruiz, Trust, Safety and Policy Lead, Lovable

AT A GLANCE

Lovable partnered with Alice to strengthen its AI safety measures through proactive, expert-led red teaming. The collaboration focused on identifying real-world abuse patterns related to child safety and mental health, using adversarial testing techniques informed by industry-wide experience. Insights from the exercises supported Trust & Safety teams in refining policies, improving prevention strategies, and staying ahead of evolving risks.

The result: a stronger safety posture and a shared commitment to cross-industry collaboration for a safer internet.

Challenge

As AI systems become more capable and widely adopted, risks related to child safety and mental health remain present across the broader internet ecosystem. These risks are not unique to any single platform, and they continue to evolve alongside new technologies and user behaviors.

Lovable recognized the importance of proactively identifying potential safety gaps before harm occurs. While internal policies and safeguards were already in place, the team sought additional external expertise to pressure-test assumptions, uncover edge cases, and better understand how real-world bad actors might attempt to bypass protections.

The goal was not only to detect risks, but to use those findings to help Trust & Safety teams reimagine stronger, more effective prevention strategies.

Solution

Lovable partnered with Alice to conduct expert-led red team exercises designed to proactively test safety measures under realistic, adversarial conditions.

Rather than relying on a single testing method, the exercises explored a range of real-world abuse patterns observed across the tech industry. This included examining how harmful intent can be gradually introduced, obscured through language, or framed in ways that test policy boundaries.

Findings from the exercises were reviewed collaboratively and translated into practical insights, supporting additional policy refinement, enforcement tuning, and long-term safety strategy without overexposing sensitive operational details.

Impact

The red team exercises provided Lovable with a deeper, more nuanced understanding of how risks can manifest in practice.

Key outcomes included:

  • Proactive detection of edge cases that are difficult to surface through standard testing
  • Actionable inputs for Trust & Safety teams to strengthen prevention strategies
  • Greater confidence in policy clarity and enforcement balance
  • A shared framework for continuously adapting to emerging threat patterns

Beyond the immediate findings, the partnership reinforced the value of external collaboration in building safer AI systems.
