TL;DR
Alice and Databricks partner to help enterprises build safer AI agents with built-in guardrails, reducing risk while enabling scalable, trustworthy autonomous AI systems.
As enterprises embrace generative AI, intelligent agents are rapidly becoming core components of customer experiences, operations, and products. But with this power comes risk: AI agents can behave unpredictably, respond with toxic or non-compliant content, or be manipulated through adversarial prompts, exposing your brand and your users to harm.
That’s why we’ve partnered with Databricks, a leader in AI infrastructure and enterprise-scale LLM development, to make building safe, policy-aligned AI agents easier than ever. Together, we're helping developers integrate WonderFence Guardrails into the Databricks Mosaic AI Agent Framework, ensuring agents are protected at runtime from safety, security, and compliance risks.
With WonderFence, organizations gain real-time protection across every input and output, live observability into agent behavior, and actionable safeguards that reflect your unique policies and brand values.
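The "protection across every input and output" pattern described above can be sketched in a few lines. Note this is purely illustrative: WonderFence's actual API is not shown in this post, so the `SimpleGuardrail` class and `guarded_agent` wrapper below are hypothetical stand-ins for how a runtime guardrail typically wraps an agent call.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    allowed: bool
    reason: str = ""

class SimpleGuardrail:
    """Toy stand-in for a runtime guardrail: screens text against a blocklist.
    A real guardrail service would apply safety, security, and policy checks."""
    def __init__(self, blocked_terms: List[str]):
        self.blocked_terms = [t.lower() for t in blocked_terms]

    def check(self, text: str) -> Verdict:
        for term in self.blocked_terms:
            if term in text.lower():
                return Verdict(False, f"matched blocked term: {term!r}")
        return Verdict(True)

def guarded_agent(agent: Callable[[str], str],
                  rail: SimpleGuardrail) -> Callable[[str], str]:
    """Wrap an agent so every input and every output passes the guardrail."""
    def run(prompt: str) -> str:
        verdict = rail.check(prompt)          # screen the incoming prompt
        if not verdict.allowed:
            return f"[blocked input: {verdict.reason}]"
        response = agent(prompt)              # call the underlying agent
        verdict = rail.check(response)        # screen the agent's response
        if not verdict.allowed:
            return f"[blocked output: {verdict.reason}]"
        return response
    return run
```

The key design point is that the guardrail sits outside the agent, so the same policy applies no matter which model or tool chain produces the response.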
This collaboration brings together Databricks’ powerful AI development stack with Alice's enterprise-grade safety solutions, allowing teams to deploy agentic AI with confidence, without compromising innovation or agility.
👉 Curious how it works in practice? Check out the full step-by-step code notebook on our Engineering blog on Medium →
Learn more about Alice's Partnerships
What’s New from Alice
Securing Agentic AI: The OWASP Approach
In this episode, Mo Sadek is joined by Steve Wilson (Chief AI and Product Officer at Exabeam, founder and co-chair of the OWASP GenAI Security Project) to explore how OWASP is shaping practical guidance for agentic AI security. They dig into prompt injection, guardrails, red teaming, and what responsible adoption can look like inside real organizations.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.
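For readers new to weight distillation, the core idea can be sketched with the standard soft-label objective: the student is trained to match the teacher's temperature-softened output distribution. This is a generic sketch of that textbook loss, not the specific recipe covered in the webinar.

```python
import math
from typing import List

def softmax(logits: List[float], temperature: float = 1.0) -> List[float]:
    """Temperature-softened softmax: higher T spreads probability mass."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits: List[float],
                 teacher_logits: List[float],
                 temperature: float = 2.0) -> float:
    """KL(teacher || student) on softened distributions, scaled by T^2
    (the usual correction so gradients stay comparable across temperatures)."""
    p = softmax(teacher_logits, temperature)  # teacher's soft labels
    q = softmax(student_logits, temperature)  # student's predictions
    return (temperature ** 2) * sum(
        pi * math.log(pi / qi) for pi, qi in zip(p, q)
    )
```

In practice this term is blended with an ordinary cross-entropy loss on hard labels, and here the "teacher" signal would come from LLM-based annotations rather than a single model's logits.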

