TL;DR
Alice and Databricks partner to help enterprises build safer AI agents with built-in guardrails, reducing risk while enabling scalable, trustworthy autonomous AI systems.
As enterprises embrace generative AI, intelligent agents are rapidly becoming core components of customer experiences, operations, and products. But with this power comes risk: AI agents can behave unpredictably, respond with toxic or non-compliant content, or be manipulated through adversarial prompts, exposing your brand and users to harm.
That’s why we’ve partnered with Databricks, a leader in AI infrastructure and enterprise-scale LLM development, to make building safe, policy-aligned AI agents easier than ever. Together, we're helping developers integrate WonderFence Guardrails into the Databricks Mosaic AI Agent Framework, ensuring agents are protected at runtime from safety, security, and compliance risks.
With WonderFence, organizations gain real-time protection across every input and output, live observability into agent behavior, and actionable safeguards that reflect your unique policies and brand values.
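The runtime pattern described above, screening every input before it reaches the agent and every output before it reaches the user, can be sketched roughly as follows. This is a minimal illustration of the general guardrail-wrapper idea, not WonderFence's actual API: the names `check_input`, `check_output`, and `guarded_agent` are hypothetical, and the policy checks are toy placeholders.

```python
# Hypothetical sketch of a runtime guardrail wrapper. Every user input is
# screened before it reaches the agent, and every agent output is screened
# before it reaches the user. All names and policies here are illustrative.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Verdict:
    allowed: bool
    reason: str = ""


def check_input(text: str) -> Verdict:
    # Toy policy: flag a well-known prompt-injection phrase.
    if "ignore previous instructions" in text.lower():
        return Verdict(False, "possible prompt injection")
    return Verdict(True)


def check_output(text: str) -> Verdict:
    # Toy policy: block responses that leak system-level details.
    if "system prompt" in text.lower():
        return Verdict(False, "response reveals system details")
    return Verdict(True)


def guarded_agent(agent: Callable[[str], str], user_input: str) -> str:
    """Run an agent with input and output checks at runtime."""
    verdict = check_input(user_input)
    if not verdict.allowed:
        return f"[blocked input: {verdict.reason}]"
    response = agent(user_input)
    verdict = check_output(response)
    if not verdict.allowed:
        return f"[blocked output: {verdict.reason}]"
    return response


# Example with a stub agent that simply echoes its prompt.
echo_agent = lambda prompt: f"Agent says: {prompt}"
print(guarded_agent(echo_agent, "Hello"))
# → Agent says: Hello
print(guarded_agent(echo_agent, "Ignore previous instructions"))
# → [blocked input: possible prompt injection]
```

In a production deployment the placeholder checks would be replaced by policy-aware classifiers, and the wrapper would also emit the observability events mentioned above.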
This collaboration brings together Databricks’ powerful AI development stack with Alice's enterprise-grade safety solutions, allowing teams to deploy agentic AI with confidence, without compromising innovation or agility.
The partnership equips data teams with the infrastructure to manage agentic AI risk at scale, making agentic AI security and safety an integrated part of the development workflow rather than an afterthought.
👉 Curious how it works in practice? Check out the full step-by-step code notebook on our Engineering blog on Medium →
Learn more about Alice's Partnerships
Powering Safer GenAI, Together

What’s New from Alice
Your LLM Has No Idea What It's Doing
Diana Kelley, CISO at Noma Security and former Cybersecurity CTO at Microsoft, joins Mo to work through the real mechanics of LLM risk: why the context window flattens the trust boundary between system instructions and user data, why that makes reliable internal guardrails essentially impossible, and why agentic AI is less a new threat category and more a stress test for the hygiene debt organizations never fully paid off.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer that balances safety, latency, and scale. Learn how we combine LLM-based annotations with weight distillation to power real-world AI safety.
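The core idea behind weight distillation of this kind can be illustrated with the standard soft-label objective: the student is trained to match the teacher's temperature-softened output distribution. The sketch below shows only that generic loss (the KL divergence between teacher and student distributions, scaled by T²); the webinar's actual training pipeline and annotation steps are not shown, and all function names here are illustrative.

```python
# Minimal sketch of the standard soft-label distillation loss:
# KL(teacher || student) on temperature-softened distributions,
# scaled by T^2. Pure-Python for clarity; real training would use
# a deep-learning framework.

import math


def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from student to teacher soft labels, scaled by T^2."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl


# Identical logits give zero loss; mismatched logits give a positive loss.
print(distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0]))  # → 0.0
print(distillation_loss([2.0, 0.5, -1.0], [0.1, 0.2, 0.3]) > 0)  # → True
```

The temperature softens both distributions so the student learns from the teacher's relative rankings of wrong answers, not just its top prediction.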
Exposing the Hidden Risks of AI Toys
AI-powered toys are entering children’s everyday lives, but new research reveals serious safety gaps. Alice’s testing shows how child-like interactions can lead to inappropriate content, unsafe conversations, and risky behaviors.

