ActiveFence is now Alice
x
Back
Blog

The Art of the Unseen: Why We Built Caterpillar

Mo Sadek
-
Feb 4, 2026

Table of Contents

TL;DR

As AI agents gain autonomy, their hidden "skills" often create dangerous security blind spots. We built Caterpillar, a free open-source tool that unmasks these vulnerabilities and automates supply chain validation so you can stop flying blind and ship AI securely.

AI agents are showing up in more places now. With OpenClaw (previously Moltbot and ClawdBot) starting to make some real waves in the developer community, people are finding new ways to build and interact with AI. We’re starting to see more movement from a niche community experiment into something that feels more real and practical.

That shift is interesting because these aren't just tools that sit there waiting for commands. They're agents - they can actually do things, make decisions, interact with systems in ways we might not fully anticipate. That's what makes them useful, but it's also what makes them complicated from a security perspective. 

However, when you give an agent access to your environment, you're not just deploying code. You're introducing capability and autonomy. And if you can't see what that agent can actually do including it’s list of skills, what they can access and how they are affecting your agent in real-time, you’re kinda flying blind.

That's why we built Caterpillar. 

Seeing Through the Mask

Caterpillar is a free, open-source security tool designed specifically to bring transparency to the skills ecosystem. It inspects the logic and configurations of "skills" to find the things that aren't meant to be seen, including injection paths, unsafe access, and benign instructions that put your agents at risk.

Caterpillar is powered by RabbitHole, Alice's adversarial intelligence database. It is the culmination of years of quiet, rigorous research into how threats evolve.

At its core, Caterpillar is our answer to supply chain security when it comes to AI skill security. We have seen AI supply chain compromises plague the ecosystem over the last few months, and know how pivotal the validation of third party packages, sources, and skills is as it relates to AI.

At Alice, we believe that partnership comes with dogfooding. We ourselves have been working on securing these implementations as part of our own internal engineering teams, which is why we've shared our learnings around the fundamentals of securing AI skills and exact steps required to install ClawdBot safely. Open sourcing our internal tool, Caterpillar, was a clear next step in giving back to the community and moving the ball forward.

A Necessary Intervention

During early testing, we scanned published skills across OpenClaw. Multiple injection vulnerabilities. Unsafe configurations. Over 6,000 users running skills with exploitable flaws. These weren't malicious by design—they were poorly written code that created attack surfaces.

The AI skills ecosystem is moving too fast for manual review. Caterpillar automates the security validation that should have been standard practice from day one.

“Agent ecosystems are scaling faster than the security assumptions around them,” says Noam Schwartz, CEO of Alice. “When you install a skill, you’re not installing a feature, you’re installing behavior. Caterpillar helps teams see what they’re actually running, and catch issues early, before they become incidents.”

See Caterpillar in Action

Advance Unafraid

At Alice, we believe in the teams building this future. This technology is transformative, and we know you're committed to shipping it securely.

That's why Caterpillar is open-source and free. When the whole ecosystem benefits, we all ship better software.

Explore Caterpillar at caterpillar.alice.io

Get Continuous Safety & Security for Enterprise GenAI

Learn more
Share
Inside Alice