Measure the ROI of AI With This One Weird Trick
Most AI ROI models fail not because AI is overhyped but because finance is measuring the wrong thing first.
Bruno J. Navarro
Senior Editorial Strategist, Finance
Workday
Artificial intelligence is here to stay. Companies are pouring billions into large language models (LLMs), computer vision systems, and predictive tools. In a recent survey, more than three-fourths (77%) of CEOs say AI is embedded in their core product or service, a 40% leap from a year ago.
However, traditional ROI formulas (cost vs. revenue) fail for most internal AI systems. When AI is focused on risk reduction, quality control, or operational flow within a specific process, its value is usually indirect. It’s difficult to draw a straight line from a more efficient process to a topline sales number.
That’s why leaders such as Nvidia CEO Jensen Huang tend to emphasize the experimentation phase of scaling AI in the enterprise rather than the balance sheet. “I get questions like … ROI,” he said recently. “I wouldn’t go there.”
At the same time, finance teams are being asked to approve spend using ROI frameworks established prior to the technological leaps of AI. In practice, AI’s actual impact shows up somewhere much more difficult to measure: how work gets done, how fast decisions move, and how much capacity quietly appears inside the organization.
The good news is that the vast majority (85%) of employees save between 1 and 7 hours per week on their tasks, according to our latest research.
Yet most ROI models are likely to miss the benefits of AI because the gains might not be instantly apparent, such as time saved that’s then redeployed into higher-value work. The fix is this one weird trick: Measure time reclaimed before you measure money.
The mistake most companies make is falling into the revenue trap. Much of AI focuses on optimization and risk mitigation. If an AI system simply catches 30% more errors before they reach the customer, how is that saved revenue attributed? It’s not as easy as it might seem.
This leads to a cost center fallacy, where a sophisticated, value-generating AI team is treated purely as a capex investment.
When the AI helps a human worker make a decision 10 times faster, where does that monetary outcome sit? It sits in the operational data rather than the final sales receipt.
Time is the first-order effect of almost every successful AI use case, especially in finance. And unlike many soft benefits, time is observable, comparable, and measurable long before dollars show up.
Instead of asking teams to justify AI in terms of revenue or headcount, start with a simpler question: How much time did this actually give back, and to whom?
That shift alone changes the quality of the conversation.
“One clear way to measure AI ROI is by calculating hours reclaimed across tasks like content creation, customer service, research, and operations,” said Michelle Gines, founder of Purpose Publishing. “AI should make your team faster, sharper and more focused on high-value work. If it’s not freeing people up to grow the business, it’s not worth it.”
Pick a handful of real workflows, such as monthly reporting, variance explanations, contract review, and budget narratives. Ask the teams doing the work how long these tasks took before AI and how long they take now. Multiply by how often they happen and how many people do them.
If an analyst tells you something that used to take three hours now takes 45 minutes, that’s already a useful signal. You’re looking for anecdotal truth, not audit-grade certainty. But the potential gains don’t stop there. Ask the teams, “What part of this task used to make you procrastinate?” Often, the value of AI goes beyond the 135 minutes saved; it also removes cognitive load. If an analyst used to spend two days dreading the start of a manual data-entry task, the AI isn’t just saving a couple of hours of work; it’s unblocking their entire week.
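To make that tally concrete, here is a minimal sketch of the arithmetic described above. The workflow names and figures are illustrative assumptions, not data from the research cited earlier.

```python
# Back-of-the-envelope tally of hours reclaimed per month.
# All workflow names and figures below are illustrative assumptions.

workflows = [
    # (name, hours_before, hours_after, runs_per_month, people_involved)
    ("monthly reporting",     3.00, 0.75, 1, 4),
    ("variance explanations", 2.00, 0.50, 4, 2),
    ("contract review",       1.50, 0.50, 6, 3),
]

total_hours = 0.0
for name, before, after, runs, people in workflows:
    saved = (before - after) * runs * people  # hours reclaimed per month
    total_hours += saved
    print(f"{name}: {saved:.1f} hours/month reclaimed")

print(f"Total: {total_hours:.1f} hours/month reclaimed")
```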
“For generative AI, ROI is most often assessed on efficiency and productivity gains,” a Deloitte study found. “For agentic AI, measurement is likely to focus on cost savings, process redesign, risk management, and longer-term transformation.”
But saving time only matters if the organization does something with it. That time can disappear into meetings or inboxes. Sometimes it increases throughput, or it might allow higher-value work that wasn’t feasible before. That means the most important variable is how much of that time gets redeployed productively.
From a finance perspective, it’s reasonable to assume only a portion of reclaimed time converts into economic value. The key is to model that assumption explicitly rather than pretending every hour saved equals a dollar earned.
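One way to keep that assumption honest is to carry the redeployment rate as a named variable in the model rather than burying it in a blended figure. A minimal sketch, assuming a 50% redeployment rate and a placeholder loaded hourly cost:

```python
# Convert reclaimed hours into a modeled dollar figure, with the
# redeployment assumption made explicit rather than implied.
# Every number here is an assumption to debate with finance, not a measurement.

hours_reclaimed_per_month = 39.0  # e.g., the total from the tally above
redeployment_rate = 0.5           # assume only half of reclaimed time becomes productive work
loaded_hourly_cost = 95.0         # assumed fully loaded cost per hour of the people involved

modeled_monthly_value = hours_reclaimed_per_month * redeployment_rate * loaded_hourly_cost
print(f"Modeled monthly value: ${modeled_monthly_value:,.0f}")  # about $1,850 under these assumptions
```

Because the rate is a single visible parameter, finance can stress-test it at 25%, 50%, or 75% instead of debating whether the reclaimed hours are real.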
Measuring time reclaimed does three things finance leaders care about. It creates early visibility into value, it avoids speculative claims, and it forces the organization to confront whether saved capacity is actually being used well.
Most importantly, it reframes the AI conversation to focus on whether organizations are effectively using the time AI gives back. That’s a question CFOs are uniquely positioned to answer.
To earn its place on the balance sheet, AI needs to be useful and measurable, and the time it gives back needs to be intentionally redeployed.
Start by measuring time. The ROI will follow.