Operationalizing Ethics: The Finance Leader’s Role in Responsible AI
As AI comes to play a bigger role in decision-making, CFOs should start playing a bigger role in spearheading its governance.
Blaise Radley
Editorial Strategist
Workday
Artificial intelligence is zipping along faster than most organizations can govern it.
Nearly 90% of employees are now using AI at least weekly in some capacity. What began as cautious experimentation has quickly turned into a daily fixture.
This feels like a turning point. Employees are no longer waiting for permission — they’re moving ahead, and this creates new kinds of risk.
With responsible AI emerging as a central focus, finance has the opportunity to take the lead. Businesses need operational frameworks that ensure outputs are accurate, compliant, and aligned with business integrity at scale. CFOs have the skills and the authority to steer that effort.
Let’s explore how to make that happen.
A global study by KPMG in 2025 found that:
70% of U.S. workers are eager to experience the benefits of AI, with 61% already seeing positive impacts.
But only 41% say they're willing to trust AI.
And 44% are using AI tools in unauthorized or inappropriate ways.
At the same time, AI usage has accelerated rapidly across functions, including finance. In a McKinsey survey, 44% of CFOs said they leveraged gen AI for more than five use cases in 2025, up from 7% the previous year.
This threatens to create a disconnect between how AI is being used and how AI is being governed. And for finance leadership, the split is hardly just theoretical. It shows up in real decisions:
Forecasting models influenced by opaque algorithms
Automated expense classification or anomaly detection
AI-assisted financial reporting or scenario planning
These represent the heart of financial operations. When AI is integrated into those processes without sufficient oversight, the risks are immediate: compliance exposure, audit challenges, and reputational damage, among other consequences.
Responsible AI is the answer, but it brings its own set of questions.
AI is often framed as a technology transformation. In practice, it is a decision-making transformation. And finance owns the integrity of decisions.
When AI influences financial outcomes — directly or indirectly — it raises fundamental governance questions. Can we explain how this decision was made? Can we validate the inputs and assumptions behind it? Can we identify and intervene if the output is incorrect or biased?
And finally: Can we stand behind this decision in an audit or regulatory review?
Being able to answer these questions is a matter of financial accountability. CFOs can’t afford to be operating in the dark. But amid this reality, they face growing pressure to demonstrate concrete cost savings from AI investments.
These two forces can feel at odds: accelerating AI adoption to drive ROI while preserving the integrity of financial decisions. Reward versus risk. It’s where forward thinking and intentional design become essential.
“CFOs and their finance teams have a major role to play,” said Workday CFO Zane Rowe. “We’re used to having auditors look at our data, understand audit trails, look at the reliability of the data, and ensure its cleanliness and trustworthiness. That approach needs to extend to AI applications.”
Policy statements can amount to mere lip service if not backed by structural processes and embedded into the way systems are designed, deployed, and used.
Leading organizations are moving from stated principles to responsibility by design: weaving governance, transparency, and control directly into their AI architecture. For finance leaders, this means ensuring that AI-driven processes are:
Auditable
Explainable
Aligned with risk and compliance standards from the start
Moreover, AI must be human-centric and trusted. Transparent visibility into how models generate outputs is key for both adoption and accountability. In finance, black-box decisions are not viable; teams need to understand, question, and validate results before acting on them.
Thus, human oversight remains critical. Requiring review of AI-generated outputs in high-stakes workflows ensures that accountability stays with people, where it belongs. One suggestion is to divide AI use cases into different risk-level buckets (low vs. medium vs. high), and apply different guardrails to each. For example, tagging transactions or cleansing data might require less scrutiny than automating financial reporting inputs.
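The bucketing approach above can be sketched as a simple policy table. This is a minimal illustration, not a prescribed framework: the tier names, guardrail fields, and use-case labels are all assumptions for the sake of example.

```python
# Illustrative sketch: map AI use cases to risk tiers and look up the
# guardrails each tier requires. All names and policies are hypothetical.
RISK_TIERS = {
    "low": {
        "human_review": False,
        "audit_log": True,
        "examples": ["transaction tagging", "data cleansing"],
    },
    "medium": {
        "human_review": True,
        "audit_log": True,
        "examples": ["anomaly detection"],
    },
    "high": {
        "human_review": True,
        "audit_log": True,
        "dual_signoff": True,
        "examples": ["financial reporting inputs"],
    },
}

def guardrails_for(use_case: str) -> dict:
    """Return the guardrail set that applies to a given AI use case."""
    for tier, policy in RISK_TIERS.items():
        if use_case in policy["examples"]:
            return {"tier": tier,
                    **{k: v for k, v in policy.items() if k != "examples"}}
    # Unregistered use cases default to the strictest controls.
    return {"tier": "high", "human_review": True,
            "audit_log": True, "dual_signoff": True}
```

The design choice worth noting is the default: any use case not explicitly registered falls into the highest-scrutiny tier, so new AI workflows get full oversight until someone deliberately classifies them otherwise.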
70% of leaders agree AI processes should allow for human intervention.
Operationalizing ethics doesn’t need to mean slowing innovation. It just means applying rigorous, continuous controls as you charge forward—and knowing when to pull back the reins.
Putting these ideas into practice requires shared accountability across the value chain. Those who build AI systems and those who deploy them in business workflows have to operate within a common set of standards.
In many organizations, these responsibilities are fragmented—models might be developed by technical teams, applied by business users, and governed inconsistently. This setup can be a red flag for risk.
As CFO, it’s time to take a stand. Contextualize the stakes, and rally different functions around a unified vision for governance. Responsible AI will only take hold if companies embrace it as an organization-wide directive, but finance can play an instrumental role in ensuring the organization applies it with consistency, control, and integrity.
This moment calls for a broader view of the CFO’s role. As decision-making becomes more automated and data-driven, the need for clear oversight, sound judgment, and accountability only grows. For finance leaders, that’s familiar ground.
And really, what’s the alternative? If AI isn’t handled responsibly, finance will feel the brunt of the impact, so championing AI integrity is a natural play. The opportunity is golden: elevate the CFO’s profile while paving the way for high-powered innovation guided by a strong framework of ethics and compliance.
“RAI currently flies under the radar for many finance leaders, but it is vital to understand and get right for long-term AI success,” said Alex Levine of Gartner last year. “The importance of RAI has grown as AI becomes more deeply integrated into business and society. RAI practices are increasingly formalized through governance structures and industry regulations, requiring organizations to address both organizational and societal responsibilities.”
The bottom line: AI is already reshaping how leaders make decisions. The question is who will shape how we govern those decisions.