TL;DR
Legend says the best AI governance framework ever written has a 20-sided die. 👀 David Wendt, Manager of Innovation and AI Governance at Sherwin-Williams, makes the case that the shift from deterministic to non-deterministic systems isn't just a technical problem, it's a GRC problem. And the teams navigating it best aren't the ones with the longest policy documents. They're the ones with the best judgment.
Raise your hand if the shift you're feeling in GRC, from strictly following rules to now using them as loose guidelines, has made you feel like you've fallen behind.
Now raise your hand if this feels eerily like you're in a D&D game where the rules are mere guidelines and nothing is deterministic. I see a couple hands in the back... no one else? Right. That was my thinking when David Wendt made the analogy of D&D to AI governance.
As an avid player myself, and the quiet producer in the background of this recording, I can report from the front lines that I was in fact kicking my feet. But, like, in a high elf kind of way, obviously.
For the folks who are new here, these are the kinds of conversations we're having on Curiouser & Curiouser, our podcast for people who are just trying to hold on for dear life as they navigate the everyday in AI and AI security.
David Wendt is the Manager of Innovation and AI Governance at Sherwin-Williams and has spent 30 years as a data scientist, as well as a DM (dungeon master, for the newbies who don't know). So naturally he was a perfect fit for the podcast. The conversation that he and Mo had was so fun while also being grounded in the uncomfortable reality of what it means to govern AI right now. So buckle up! Let's break this down.
How AI Governance and D&D Relate More Than You Think
Here's what you're going to be thinking about long after you close this tab:
The world we're governing AI in isn't the same one we built our frameworks for.
What once was deterministic, where you could put a rule down and expect a predictable outcome, is now non-deterministic. The same input won't give you the same output twice. Sound familiar? That's D&D. And that shift is exactly why the rulebook can't be the final word in AI governance anymore. It has to be a reference point, a guideline, not a verdict. Fun, right?
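For the more hands-on readers, the deterministic vs. non-deterministic distinction above can be made concrete with a toy sketch. Everything here is illustrative: the function names, the dollar threshold, and the "model" (a simple probabilistic coin flip standing in for a generative system) are all made up for the example, not anything from David's actual stack.

```python
import random

def deterministic_check(amount: float) -> str:
    # A classic GRC-style rule: same input, same outcome, every time.
    return "flag" if amount > 10_000 else "approve"

def toy_model_check(amount: float, temperature: float = 1.0) -> str:
    # A made-up stand-in for a generative model: when temperature > 0,
    # the same input can come back with different outputs across calls.
    p_flag = min(amount / 20_000, 1.0)
    if temperature == 0:
        return "flag" if p_flag >= 0.5 else "approve"
    return "flag" if random.random() < p_flag else "approve"

# The rule always agrees with itself...
assert all(deterministic_check(12_000) == "flag" for _ in range(5))

# ...while the toy model, run 100 times on the same input, may not.
outcomes = {toy_model_check(12_000) for _ in range(100)}
```

The point of the sketch: a rulebook written for `deterministic_check` breaks down once `outcomes` can legitimately contain more than one answer for the same input.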
Which brings us to one of the best things David said in this whole conversation: ✨the rule of cool. ✨ In D&D, if a player describes taking down an enemy in a way that’s so compelling, so uniquely awesome, that you don't want them to fail, even if it lives just outside the rulebook, you can give them advantage. You bend the rules because the outcome is so, so worth it. Mo made the point that we're trying to do the same thing in AI governance: help people understand the real limitations while keeping them informed enough to make good decisions.
The problem is that because AI has become so fast and so easy to use, we've actually lost some of that awareness along the way. People are making decisions without fully understanding what they're working with, and that gap is quietly growing.
Why GRC Folks Feel One Step Behind
Here's the truth: just like how D&D shifted from a board war game to a digital battlefield with new weapons and evolving defenses, so too has the attack surface for GRC. As David described it, the frameworks that used to feel solid were built for a game nobody is playing anymore. The ground keeps moving and the old playbook keeps pretending it isn't. That's why it feels like you're always one step behind. I know: deep breaths, deep breaths.
Now, roll for initiative. 🎲
The breathing room GRC teams are desperate for doesn't come from a new framework or a bigger budget ask. It comes from getting ahead of the threat for once. David is actually building a dedicated generative AI red team at Sherwin-Williams, people whose entire job is to break your stuff before the bad actors do. And the business case writes itself when a single breach can run around, what, $6 million? For a fraction of that you could build a pretty solid team and actually sleep at night.
But here's the part most people get wrong about red teaming, and David really hit the nail on the head here. It can't just be humans and it can't just be automated. You need both running at the same time. And we agree! The automated layer catches what scales, the human layer catches what's novel, and the stuff that actually keeps you up at night tends to fall right between the two.
Okay So Who Should Actually Care About This?
This post is for the GRC folks who have been quietly chasing something that refuses to slow down. It's here to say that the feelings you've been sitting with, that nothing is predictable anymore, that the rules keep changing faster than you can document them, are real and valid. And you're not alone in this.
What David is really saying throughout this whole conversation is that the value of GRC hasn't gone anywhere. If anything it's more important than ever. The world just needs a different kind of governance now. One built on judgment, communication, and knowing your rules well enough to know when to bend them.
And if you take nothing else from this, take this:
"We have a responsibility as human beings to embrace change and accept humility. If you can do those two things, I think you can adapt to almost anything."
That's David's closing thought, and something pretty great to walk away with. Not because it's a framework or a checklist, but because it's true. The teams and individuals that are going to navigate this well aren't the ones with the most elaborate policies. They're the ones who are humble enough to keep learning and open enough to keep adapting.
And look at that, you're already doing that by being here.
For the full conversation listen to Episode 6 of Curiouser & Curiouser. And if you see David at a conference this year, ask him about the D&D governance game he's designing. Maybe he'll let you roll for initiative. Find him on LinkedIn or at his speaker page: https://sites.google.com/wendtonline.net/wendtspeaking/home
Stay curious, friends.
What’s New from Alice
Curiouser Soundbites: What D&D Taught Us About AI Governance
If you work in GRC and you've ever felt like the ground keeps moving faster than you can document it, this one is for you. David Wendt, Manager of Innovation and AI Governance at Sherwin-Williams, draws one of the most unexpectedly useful analogies we've heard on Curiouser & Curiouser yet, and it involves Dungeons and Dragons.
AI Governance Needs a Dungeon Master
David Wendt has spent 30 years building models and just as long running D&D campaigns. Turns out both taught him the same things about operating in uncertainty. He joins Mo to talk AI governance at enterprise scale, what real red teaming looks like, and why the smarter move is to stop measuring your AI and start measuring what you actually care about.
Distilling LLMs into Efficient Transformers for Real-World AI
This technical webinar explores how we distilled the world knowledge of a large language model into a compact, high-performing transformer—balancing safety, latency, and scale. Learn how we combine LLM-based annotations and weight distillation to power real-world AI safety.

