Your AI Policy Has a Governance Problem
Most enterprises have an AI policy. Far fewer have AI governance. Here's why that gap is putting organizations at risk — and how to close it.
Julie Colwell
Principal Strategist
Workday
Every day there’s more news about AI models running amok. Whether it’s wiping out an entire production database or creating a fake list of customers, it’s clear that unmanaged AI is both powerful and risky. It’s a struggle to balance rapid adoption of AI with the governance necessary to manage it.
Here’s why.
AI adoption moves at user speed. Governance moves at organizational speed.
It gets harder from there. Most companies operate across multiple industries, jurisdictions, and regulatory regimes, each of which changes how they can use AI. Acceptable AI use in financial services isn't the same as in healthcare. EU rules aren't U.S. rules. When internal definitions of policy and governance are unclear, external differences compound the confusion.
You can see it in how different functions experience problems:
Financial services: Regulators ask how AI-driven decisions get monitored. Model risk teams are scrambling to answer them.
Healthcare: Compliance is working out when an algorithm is allowed to influence patient care and when it isn't.
Everywhere else: Fragmented tool usage across departments outpaces IT’s ability to maintain security.
AI pilots stall before production because no one can give legal or security a clean answer to these basic questions: Who owns this? What can it touch? How do we prove it's under control?
To answer these questions and scale AI, companies need to treat governance as part of their infrastructure, not a box to check at the end of a project. Governance that works needs to be fast enough to keep up with users, deep enough to follow the data, and clear enough that legal, security, and the business all trust the same answer.
Nowhere is that mismatch more visible than in shadow AI. A recent CIO report on a survey of 2,000 enterprise workers found that 86% of employees use AI at work weekly, but a substantial share of that happens on unsanctioned tools and personal accounts. And senior leaders are among the most frequent offenders. More than half of those using non-approved tools rely on free versions, where ingested data is typically used to train the underlying model and can't be retrieved once it's gone.
Even more interesting is that 21% of employees believe employers will ignore unsanctioned AI use as long as the work gets done. And in many cases, they're right. Plenty of companies quietly tolerate shadow AI because slowing people down feels more expensive than the abstract risk of a tool no one's vetted. As one security CEO put it, the efficiency gains are too large to ignore, and they're overriding security concerns.
This is a sign the sanctioned path — get approval, wait for review, work within vetted tools — is too slow to compete with what employees can do on their own. Governance only works when the governed path is also the fastest path.
While more enterprises now have formal AI governance strategies on paper, very few have successfully operationalized them. And the trajectory isn't naturally self-correcting.
A certified, auditable governance foundation is critical for AI trust, risk, and security.
For a long time, governance was planned prior to deployment: companies would run model evaluation, bias testing, and security reviews. Those steps are still important, but they're no longer sufficient.
An AI system that passes every pre-deployment check can still behave differently in production. In fact, it's designed to: it learns from users. Data patterns shift, edge cases emerge, and users interact with the system in unanticipated ways. Governance that only lives upstream of deployment is already out of date by the time the system is live.
Without real-time monitoring and automated guardrails, policies written months earlier have little practical impact in production.
Governance needs to be treated as a continuous function, not a checkpoint: real-time monitoring, automated guardrails, and policy enforcement that runs in production, not just before it. And it means building all of that into the platforms where the work is already happening, rather than bolting on a parallel system and hoping to reconcile it later. This becomes even more critical when companies are managing not just AI models, but teams of agents operating within their platforms.
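To make the "automated guardrails" idea concrete, here is a minimal sketch in Python. Everything here is hypothetical (the tool names, the policy table, the function names are illustrations, not any vendor's API): a check that runs on every model call, blocks requests touching data categories the tool isn't cleared for, and records every decision for audit in the same code path.

```python
from datetime import datetime, timezone

# Hypothetical policy table: which data categories each sanctioned tool may touch.
POLICY = {
    "chat-assistant": {"public", "internal"},
    "forecast-model": {"public", "internal", "financial"},
}

# In production this would be durable, append-only storage, not an in-memory list.
audit_log = []

def guardrail(tool: str, data_category: str) -> bool:
    """Runs on every call: allow or block, and always record the decision."""
    allowed = data_category in POLICY.get(tool, set())
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "category": data_category,
        "allowed": allowed,
    })
    return allowed
```

Under this sketch, a chat assistant asked to ingest financial data would be blocked and the refusal logged. The design point is that enforcement and monitoring share one code path, so the audit trail can't drift out of sync with what actually happened.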
"Governance that only lives upstream of deployment is already out of date by the time the system is live."
Agents act, delegate to other agents, and trigger downstream processes. They interact with sensitive data across multiple systems, often in ways that are nearly impossible to observe without purpose-built infrastructure.
Workday CEO Aneel Bhusri is direct about what this looks like: "The Agent System of Record is built just like an HR system. For employees, it'll be an extension of their own HR system," he said at Innovation Summit.
From a security standpoint, this is the only approach that makes sense. Companies manage people through systems that track identity, authorization, and accountability. Agents need the same treatment: defined identities, scoped permissions, complete audit trails.
Agents operating outside a governed platform don't understand a company's compliance rules. They can produce outputs that look reasonable but violate policy. Governance built into the platform makes agents lawful by default. Governance retrofitted afterward is always playing catch-up.
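A minimal sketch of what an agent record in such a system might look like (all names here are hypothetical illustrations, not Workday's implementation): an identity, a named human owner, explicitly scoped permissions with nothing implicit, and an audit trail that records every authorization decision.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    """An agent managed like an employee: identity, scope, accountability."""
    agent_id: str
    owner: str                # the human accountable for this agent
    permissions: frozenset    # explicitly scoped; anything not listed is denied
    audit_trail: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        decision = action in self.permissions
        self.audit_trail.append((self.agent_id, action, decision))
        return decision

# Example: an agent scoped to read timesheets and draft (not approve) payroll runs.
payroll_bot = AgentRecord(
    agent_id="agent-0042",
    owner="jane.doe",
    permissions=frozenset({"read:timesheets", "draft:payroll-run"}),
)
```

Here `payroll_bot.authorize("approve:payroll-run")` is denied and the attempt still lands in the audit trail, which is the point: with this shape, an agent's actions are lawful by construction and observable by default.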
Analysts expect agent autonomy to scale rapidly. Gartner projects that by 2028, at least 15% of day-to-day work decisions will be made autonomously by agentic AI, up from essentially zero today. By 2030, over 50% of routine ERP tasks in finance, supply chain, and HR will be autonomously executed by AI. The governance infrastructure organizations build now will either be ready for that scale, or it won't.
Governance built into the platform makes agents lawful by default.
HR and finance are the functions where AI decisions carry the highest regulatory exposure and the highest employee relations stakes.
When AI influences a hiring decision, a performance review, or a financial forecast, the ability to explain that influence clearly, traceably, and on demand is a requirement. In Predicts 2026: The Future of ERP, Gartner said:
Companies with comprehensive AI governance platforms will have 40% fewer AI-related ethical incidents than those without.
AI tools will reduce ERP modernization costs by 40% by 2030 — but only for companies with established governance foundations.
By 2030, one third of ERP selections will be driven by vendor marketplace capabilities, not just the core product.
For enterprises in Europe and Asia-Pacific, there's an additional layer: 75% of net-new ERP deployments are projected to choose sovereign cloud by 2030 — driven by compliance, security, and autonomy requirements that demand governance be there from the start.
Companies that have made platform consolidation a priority bear this out. Washington State University used AI-driven governance controls to achieve 100% risk-based audit coverage, cut reimbursement time by six days, and eliminate $20,000 in potential duplicate spend.
Bon Secours Mercy Health decommissioned more than 28 applications and saved over $5.7 million by standardizing HR and finance on a single platform. This consolidation gives AI the clean, governed data foundation it needs to actually work.
Companies building governance infrastructure now are the ones that will be able to scale AI safely. Those treating it as an afterthought are accumulating technical debt and regulatory exposure at the same time. The ecosystem companies build around their platforms will determine their AI ceiling.
The gap between AI usage and governance is real. But for companies willing to treat governance as infrastructure rather than process, it's closeable.