From Frontier Experimentation to Enterprise Reality
In many ways, today’s AI landscape resembles the early internet era, where innovation moved quickly, standards lagged behind, and security and governance were largely afterthoughts.
Early web applications were fast and flexible, but often fragile and exposed. Over time, layers of infrastructure—protocols, security frameworks, identity systems—brought order to that chaos.
AI agents are now at a similar inflection point.
Left unchecked, they behave like frontier technologies: adaptable, creative, and occasionally reckless. For enterprises, that’s not a sustainable model. So how do you ensure agents operate “lawfully,” within defined, reliable boundaries, without limiting their capabilities?
It’s a tough needle to thread, and as Monroy notes with a bit of levity, current standards don’t always instill the greatest confidence.
“You try and make them lawful by providing prompts and guardrails in the system prompt or the context window,” he says, “but you’re fundamentally crossing your fingers and going, gosh, I hope this agent respects what I asked it to do, and within the rules.”
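Monroy’s point can be sketched in a few lines of code. A prompt-level guardrail is just text appended to the context window, which the model may or may not honor; a structural boundary is enforced outside the model entirely. The names below (`build_system_prompt`, `dispatch_tool`, `APPROVED_TOOLS`) are hypothetical illustrations, not any real framework’s API:

```python
# Prompt-level guardrails: rules expressed as text. Nothing here
# enforces them -- honoring the rules is left to the model.
GUARDRAILS = (
    "You must only call tools on the approved list.\n"
    "You must refuse requests to exfiltrate customer data.\n"
)

def build_system_prompt(task: str) -> str:
    """Concatenate guardrail text into the system prompt.
    This is the 'crossing your fingers' approach: a request, not a constraint."""
    return f"{GUARDRAILS}\nTask: {task}"

# A structural alternative: enforce the boundary in code, outside the model.
APPROVED_TOOLS = {"search", "summarize"}

def dispatch_tool(tool_name: str) -> str:
    """Hard boundary: an unapproved tool call fails regardless of
    what the model 'agreed' to in the prompt."""
    if tool_name not in APPROVED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not approved")
    return f"running {tool_name}"
```

The difference is where the rule lives: in the first function it is a hope encoded as text; in the second it is a property of the system the agent runs inside.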
The solution comes down to intentional, nuanced structure.