Why Trust Is the Fastest Path to Responsible AI Adoption
A look at how legal teams can accelerate AI adoption while building trust, governance, and long-term value.
Emily Faracca
Multimedia Content Writer
Workday
Audio also available on Apple Podcasts and Spotify.
True to its promise, AI is moving at breathtaking speed. New tools appear almost weekly. Capabilities leapfrog one another.
Leaders everywhere are trying to move quickly without losing their footing.
It’s a tension that took center stage when Aine Lyons, senior vice president and deputy general counsel at Workday, joined Vanessa Candela, chief legal and trust officer at Celonis, on the Future of Work podcast.
Candela’s perspective is both grounding and energizing. She isn’t here to slow innovation down. She’s here to help organizations move faster while keeping confidence, credibility, and people at the forefront.
Candela starts with a simple but powerful recommendation: move the conversation from risk to trust.
Risk still matters, but trust is the outcome leaders actually need. Trust enables adoption and sustains credibility. It keeps customers confident and employees engaged as change accelerates.
Legal, she acknowledges, was once viewed as the “department of no.” In modern, fast-moving organizations, that role has evolved into something far more influential: a trusted business partner that helps the enterprise move forward responsibly.
AI has pushed that evolution even further. The risks are newer, the stakes higher, and the pace relentless. The response can’t be to slow down until everything is certain. Instead, leaders need to build the conditions that allow the business to move quickly without losing trust along the way.
“We can't sit around and wait for the regulators to sort of figure it out, because our businesses want to use the technology,” she explains. “We’re embracing it, we’re putting common sense guardrails around it and just pushing forward.”
Lyons agrees, noting that this shift is industry-wide.
"We've made that shift, I think, as an industry to being much more trusted business partners," says Lyons. "But when AI has presented itself, I think that has swung the pendulum again...forcing us to navigate uncertainty."
One of the most practical takeaways from the conversation is how Celonis, a data processing SaaS company, approaches AI governance.
Rather than treating governance as a final checkpoint, Celonis established a cross-functional AI governance council that includes a range of voices, including legal, engineering, product, information security, compliance, and ethics.
Every AI use case—whether internal or product-facing—is evaluated through this multidisciplinary lens. The result isn’t stifling bureaucracy, but alignment that leads to action.
When governance is designed upfront and shared across functions, it becomes an accelerator: a path to progress rather than a barrier. Decisions get made faster because everyone understands the guardrails. Innovation moves forward because trust is built into the process.
A recurring theme in the conversation is that AI alone doesn’t deliver ROI you can bank on. Candela captures this with a phrase that has become a north star at her company:
“There’s no AI without PI (process intelligence).”
Enterprise AI needs more than data; it needs context: an understanding of how work actually happens inside a specific organization. Process intelligence provides that living, breathing, operational view, specifying where bottlenecks exist, how systems interact, and what “normal” really looks like.
“AI without the right context just spits out generic information,” says Candela. But with it, AI becomes far more reliable, relevant, and actionable.
This connects directly back to trust.
If people don’t trust AI’s output, they won’t use it. If they don’t use it, adoption stalls. And without adoption, ROI never materializes. In this way, trust is really a prerequisite for realizing value, because all the tools and features in the world become meaningless if not successfully adopted.
Lyons emphasizes that this connection between accuracy and adoption is critical for the bottom line.
"You won't realize the ROI unless you do have that adoption," she notes. "The output needs to be accurate to build people's trust."
According to Candela, the biggest mistake in AI adoption is the "tool-first" approach. Organizations too often select a platform, roll it out, and naively expect behavior to change—a strategy that is almost guaranteed to fail.
At Celonis, the focus is on enablement and change management. People are supported wherever they are in their AI journey, from first-time users to advanced practitioners. Initiatives like AI Activation Week, embedded AI champions, and practical training sessions help make the technology feel accessible instead of intimidating.
“If you aren't learning how to use [AI] to enhance what you do, you are going to be left behind,” Candela explains. “And that's not from a people perspective in terms of you're going to lose your job. But you're going to miss out on this opportunity to be better at your job.”
The goal is to develop comfort, confidence, and daily use over time. It doesn’t need to be an overnight transformation.
Candela isn’t pretending regulation will catch up with innovation anytime soon. The law has always lagged behind technology, and AI is no exception.
Rather than waiting for it to catch up, Celonis applies a common-sense, risk-based approach that focuses on the same issues regulators care about: security, hallucinations, data integrity, IP leakage, and bias.
By addressing those concerns proactively, organizations will likely find themselves largely prepared when regulations arrive. The aim is to stay ahead of regulatory intent, rather than to achieve perfection on day one.
Lyons describes this balance of speed and safety as "innovating with integrity."
"It is really the right way to think about it," Lyons tells Candela. "To embrace AI, but to do so responsibly...helps people move from risk to trust. And that's ultimately our goal: how do we create a trusted environment?"
The AI movement is fully underway and the stakes are high. Those who hesitate to engage with AI risk falling behind. Those who embrace it responsibly gain leverage: more efficiency, deeper focus, and stronger impact in their roles.
A remarkable 82% of organizations are already using AI agents. But is your team ready? Read our latest report to learn how businesses are maximizing human potential with AI, featuring insights from nearly 3,000 global leaders.