Prompt injections and other AI security and safety threats could be hiding in the skills you install. This Week in Startups explains these risks in a discussion of Moltbot, highlighting how Alice can help protect your agentic AI.
Discover the story behind ActiveFence’s transformation into Alice, as Calcalist explains how AI and cloud innovation fueled a rebrand, and why it matters for the future of AI model security.
Reality Defender and ActiveFence are teaming up to tackle the rise of synthetic media threats. By integrating Reality Defender’s deepfake detection API into ActiveFence’s real-time AI safety guardrails, we bring enterprises powerful, built-in safeguards against AI-generated audio, video, image, and text manipulation.
As enterprise AI capabilities accelerate, so does the need for robust safety and security. As highlighted at NVIDIA GTC Paris 2025, ActiveFence is partnering with NVIDIA to embed AI safety and security, ensuring AI agents operate safely, responsibly, and in alignment with organizational values.
Safety and security for generative AI isn’t a one-time fix. It’s an ongoing process that, like a flywheel, gains momentum and stability with every cycle. That’s why we’re introducing an approach built on the NVIDIA AI Safety Recipe, delivering end-to-end safety across the entire AI lifecycle. Each cycle of testing, evaluation, and refinement makes AI systems more stable, more adaptive, and better prepared for emerging threats.
AI teammates are becoming more capable, conversational, and independent. ActiveFence secures every phase of agentic AI, whether an agent is triggered by a prompt or acting autonomously. By combining NVIDIA’s AI Blueprints and Agent Intelligence toolkit with our safety platform, enterprises can now deploy intelligent agents that are both productive and protected.