TL;DR
Lovable partnered with Alice to ensure that as AI-powered website creation becomes "magical," it remains fundamentally safe. By combining Lovable’s rapid full-stack innovation with Alice’s expertise in simulating risks and strengthening safeguards, the two are proactively addressing abuse before it occurs. This collaboration sets a new standard for responsible development, proving that building boldly requires a shared commitment to Trust & Safety.
Building websites with AI feels a bit like magic.
You describe what you want, and suddenly, there is a working product in front of you. Pages, flows, integrations, even payments. No setup. No boilerplate. No long nights wrestling with layouts or logic. Just creation, at the speed of thought.
It is fun. It is empowering. And honestly, it is exciting in a way building on the internet has not been for a long time.
However, every new technology introduces new considerations.
The same tools that make it easier to build useful and legitimate products can also be used in ways that fall outside their intended purpose.
Risks evolve at the same pace as innovation, and emerging technologies must be designed with strong safety measures in place across critical areas such as child safety and mental health.
From possibility to responsibility
This tension between how easy it is to build with AI and how important it is to build safely is exactly why we partnered with Lovable.
Lovable enables people to create full-stack websites and web applications by chatting with AI. In minutes, builders can launch products that once took weeks of planning and development. That kind of capability is transformative. It also raises important questions about how those products behave once they are in the hands of real users.
When AI empowers builders, it unlocks meaningful opportunities for innovation and positive change across many areas of life.
Building trustworthy AI requires organizations to make responsibility a core principle, anticipating evolving risks, strengthening safeguards as technology advances, and addressing emerging threats before harm occurs.
Learning before harm happens
Together with Lovable’s Trust & Safety team, we focused on learning proactively.
Instead of waiting for real-world incidents, we worked to simulate how abuse might realistically surface: the subtle ways intent can change over time, how meaning can be hidden, reframed, or escalated, and how systems can be pushed to create something harmful against their designers’ wishes.
This work focused on understanding how products behave under pressure, where policies need more clarity, and where protections can be strengthened early.
Insights from this collaboration helped inform Lovable’s platform rules and safety policies, supporting Trust & Safety teams as they combine fast iteration with responsible design. Just as importantly, it reinforced a shared understanding that safety is something you continuously improve, not something with a one-time fix.
Building boldly, responsibly
AI lowers barriers to creation. It enables more people to build than ever before.
But it does not remove responsibility. If anything, it increases it.
The websites and applications being created today will shape how people learn, communicate, build, and ultimately innovate online tomorrow. Ensuring those experiences are not only functional, but safe, is a shared responsibility. One that no single company can carry alone.
That is why partnerships like this matter. Cross-industry collaboration helps teams learn faster, anticipate harm earlier, and raise the bar for what responsible AI development looks like.
Here is to building boldly, responsibly, and together.
--
Learn more about our partnership with Lovable here.
Strengthen your own safety posture with Alice
Learn more

What’s New from Alice
Securing Agentic AI: The OWASP Approach
In this episode, Mo Sadek is joined by Steve Wilson (Chief AI and Product Officer at Exabeam, founder and co-chair of the OWASP GenAI Security Project) to explore how OWASP is shaping practical guidance for agentic AI security. They dig into prompt injection, guardrails, red teaming, and what responsible adoption can look like inside real organizations.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer, balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.