How the AI Trust Gap Slows Adoption (and What to Do About It)
By shifting to integrated governance, enterprises can eliminate the 'AI tax' and turn stalled adoption into scalable productivity gains.
Julie Colwell
Principal Strategist
Workday
The modern enterprise doesn’t run on AI. It’s being redesigned by AI. Yet a Workday survey of 5,000+ business leaders and employees around the world found that just over half (55%) of employees are confident that their organizations will implement AI responsibly.
This is more than a sentiment problem. It's a barrier to AI adoption and scale.
When people don’t trust an AI system, they avoid it, work around it, or double-check everything it produces. Adoption stalls, and promised gains in productivity or decision quality get canceled out by friction and rework. The trust gap becomes an execution gap.
AI initiatives don't fail on capability alone; they fail on controllability. To scale successfully, enterprises must integrate four critical disciplines that are often managed in silos: trust, risk, security, and integrity.
Of these four factors, trust is the multiplier. Risk can be documented and security can be tested, but trust determines whether employees rely on AI in day-to-day decisions. In HR and finance, where mistakes have significant consequences, building that trust is the difference between AI that’s deployed and AI that’s actually used.
The data is sobering in its specifics: A recent survey by Harvard Business Review found that while 76% of executives believe their employees are enthusiastic about AI, just 31% of employees expressed enthusiasm about its adoption. And the gap isn't small: by HBR's measure, leaders overestimate employee enthusiasm for AI by more than a factor of two. The more senior a leader is, the more likely they are to overestimate positive AI sentiment.
Workday research also suggests an operating model disconnect around human oversight: Leaders and employees agree that AI should be developed in ways that allow human review and intervention, yet the day-to-day reality for many employees is that AI governance and usage are not visible or transparent.
This separation between belief in AI and day-to-day practice is where a lot of the AI trust deficit originates.
When employees resist using AI, it’s usually because of a breakdown in one of three areas: they can’t see how the AI reached a conclusion, the AI’s behavior feels unpredictable, or there’s no clear human in the loop to catch a mistake.
To bridge this gap, AI governance needs to focus on three simple pillars: transparency, so people can see how the AI reached a conclusion; predictability, so the system behaves consistently; and human oversight, so there is a clear person in the loop to catch mistakes.
This is also where workloads begin to spike if governance isn't designed for scale. It's one of the primary reasons nearly 40% of AI time savings are lost to rework: task-level AI use cases deliver efficiency, but the operating models around them aren't designed to scale safely.
Organizations that integrate AI governance into their compliance workflows rather than maintaining a separate parallel process are better positioned to eliminate that AI tax and see net value gains.
A solid strategy for AI only works when all the pieces talk to each other. If an organization has risk management on paper but no way for employees to actually trust the tool, it ends up with a stack of policy documents that don't match how people work in the real world. Similarly, security is only half the battle; people also need to know who is responsible when a tool makes an unexpected suggestion.
Trust is the glue that holds everything else together. For teams in high-stakes areas like HR and finance, where every decision counts, the bar for using AI responsibly is high and getting higher. Trust has to be earned and maintained throughout the entire AI journey: through ethical development, responsible implementation, clear guidelines, and smart governance.
By treating trust as a core part of how organizations design and launch AI tools—rather than an afterthought—they move past the trial phase and start seeing the real-world returns on investment.
Ninety-eight percent of CEOs foresee an immediate business benefit from implementing AI. Download this report to discover the potential positive impact on your company, with insights from 2,355 global leaders.