Designing Operating Models for Human + Agent Teams
The future of enterprise operations is agentic. Smart leaders are building hybrid operating models now to support human-agent teams in the future.
Sydney Scott
Editorial Strategist, AI
Workday
The novelty of agentic AI has officially evaporated. With 82% of organizations already deploying it into the nerve centers of HR and finance, simply having an agent is no longer a competitive edge; it’s the baseline.
The real 2026 differentiator is the architecture of the team using the technology, not the technology itself. As these agents move from experimental tools to autonomous coworkers, they've created a new kind of friction. Organizations are realizing that while agents provide the speed, humans need to provide the soul: the complex judgment and quality control that sustain enterprise trust. Agents aren't plug and play; they need to be integrated.
We’re entering the era of the hybrid operating model. In this new landscape, the manager’s role is shifting: Beyond leading a team of people, they’ll be orchestrating a workforce of humans and agents. Just as a human employee requires coaching and clear KPIs, an AI agent requires a governance framework to ensure that its intelligence actually translates into ROI.
AI agents provide the speed needed to execute workflows in real time. But as they do, a paradox has emerged: While 88% of employees believe agents boost productivity, nearly half fear this will simply increase pressure to work faster. What’s more, nearly 40% of efficiency gains are lost to rework when there’s no clear structure in place for agent performance.
To mitigate risk and maximize performance, leaders must architect an operating model that treats human-agent collaboration as a core discipline.
In every workflow where humans and agents contribute, humans are the ultimate orchestrators and decision makers, AI agents operate in a support role, and clear governance guides how both agents and humans execute. These seven building blocks bridge the gap between AI potential and operational reality.
Designing a solid human-AI operating model starts with clarity on exactly where agents can act on their own and where teams require human oversight. This allows AI agents to handle routine execution while preserving human control over higher-stakes decisions.
This clarity is crucial for building trust. Today, while 75% of workers say they’re comfortable working with agents, far fewer want them to be decision-makers or supervisors. Establishing authority boundaries from the very start demonstrates to your teams that humans remain responsible for final outcomes and are empowered to leverage agents to achieve them.
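One way to make these authority boundaries concrete is to encode them as an explicit policy rather than leaving them implicit in agent prompts. The sketch below is a minimal, hypothetical illustration (the domains, stakes levels, and mapping are assumptions, not a Workday feature): agents act autonomously only where a boundary explicitly permits it, and everything else defaults to human control.

```python
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS = "agent acts; human audits after the fact"
    REVIEW = "agent drafts; human approves before execution"
    HUMAN_ONLY = "agent may inform; human decides and acts"

# Hypothetical boundary map keyed by (domain, stakes).
# The entries below are illustrative examples only.
AUTHORITY_BOUNDARIES = {
    ("it", "low"): Autonomy.AUTONOMOUS,       # routine execution
    ("hr", "low"): Autonomy.REVIEW,           # agent assists, human signs off
    ("hr", "high"): Autonomy.HUMAN_ONLY,      # e.g., hiring decisions
    ("finance", "high"): Autonomy.HUMAN_ONLY, # e.g., capital allocation
}

def authority_for(domain: str, stakes: str) -> Autonomy:
    """Return the autonomy level for a task, defaulting to human
    control whenever no boundary has been explicitly defined."""
    return AUTHORITY_BOUNDARIES.get((domain, stakes), Autonomy.HUMAN_ONLY)
```

The important design choice is the default: an undefined combination falls back to `HUMAN_ONLY`, so new or ambiguous workflows never grant an agent authority by accident.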
Not every task should be delegated to an agent. Successful organizations intentionally map agents to workflows where automation and augmentation provide the greatest benefit without undermining human expertise.
Employees tend to be more comfortable with agents supporting operational domains such as IT infrastructure or skills development. In contrast, areas requiring more complex judgment—such as hiring decisions or financial planning—still benefit from strong human involvement.
Clear task-to-agent mapping prevents overreach while helping organizations deploy agents where they can deliver the most meaningful gains.
Governance isn't one-size-fits-all. Decisions involving people, capital, or compliance demand a higher tier of scrutiny—one that requires formal guardrails as agents take a seat at the execution table.
Employee expectations reinforce this need. Today, employees say they still prefer human oversight in these domains. Strong review protocols ensure that sensitive decisions remain subject to the right level of oversight while still allowing agents to support execution across the workflow.
Employee trust in agent-driven workflows depends in large part on whether employees understand how AI systems operate. Leaders must be transparent about what AI agents do, what data they access, and where they're active across workflows.
This visibility becomes increasingly important as agents operate more continuously in enterprise systems. Just 24% of employees say they’re comfortable with agents operating “in the background” without their knowledge.
Clear communication around system behavior and data access helps ensure human colleagues see agents as collaborators rather than opaque systems.
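In practice, that visibility usually means every agent action leaves a human-readable audit record. The sketch below shows one possible shape for such a record; the field names and schema are hypothetical, chosen to capture exactly the three things the text says employees want to see: what the agent did, what data it touched, and where in the workflow it acted.

```python
import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, workflow: str, action: str,
                     data_accessed: list[str]) -> str:
    """Serialize an audit record of a single agent action so human
    colleagues can review what the agent did and which data it used.
    The schema is illustrative, not a specific product's format."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "workflow": workflow,
        "action": action,
        "data_accessed": data_accessed,
    }
    return json.dumps(record)
```

Surfacing these records in the tools employees already use, rather than burying them in system logs, is what turns an opaque background process into a visible collaborator.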
As agents absorb more routine execution, the skills humans need will shift. Organizations that anticipate these shifts can redesign roles around emerging skill needs rather than outdated role responsibilities. Agents can also surface emerging capability gaps, giving leaders earlier visibility into where they'll need workforce development.
Traditional productivity metrics fail to capture the value of human-agent collaboration. Measuring success purely through automation volume or task completion provides an incomplete view of how agents contribute to business performance.
A stronger approach evaluates outcomes. Metrics such as decision quality, quality of hire, revenue per output, override frequency, voluntary use, and employee trust provide a clearer picture of where and how agents are strengthening workflow execution.
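Two of those metrics, override frequency and voluntary use, fall directly out of agent interaction records. The sketch below computes them from a list of records; the record schema (`overridden` and `voluntary` flags) is an assumption for illustration, not a standard format.

```python
def agent_outcome_metrics(interactions: list[dict]) -> dict:
    """Summarize outcome-oriented metrics from agent interaction records.
    Each record is assumed to carry two booleans: 'overridden' (a human
    reversed the agent's output) and 'voluntary' (the employee chose to
    involve the agent rather than being required to)."""
    n = len(interactions)
    if n == 0:
        return {"override_rate": 0.0, "voluntary_use_rate": 0.0}
    overrides = sum(1 for i in interactions if i["overridden"])
    voluntary = sum(1 for i in interactions if i["voluntary"])
    return {
        "override_rate": overrides / n,
        "voluntary_use_rate": voluntary / n,
    }
```

A rising override rate signals that an agent is operating beyond its competence, while a high voluntary-use rate is a behavioral proxy for the employee trust the text describes: people opt in to tools they believe help them.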
Finally, hybrid operating models require leadership beyond a single function. While IT typically leads the technical deployment of agents, other business leaders play an equally important role in shaping how they’re used.
HR contributes to workforce design and skills development. Finance evaluates the business value created by agent adoption. Operations ensures agents align with workflows and performance goals. When these perspectives all come together, organizations can scale agent adoption in a way that supports both operational efficiency and long-term workforce strategy.
As organizations move from experimentation to broader AI agent adoption, a strong technology foundation becomes increasingly important. That includes AI agent frameworks to standardize how organizations deploy, govern, and scale agents across workflows.
Early agentic pilots may succeed with close oversight, but scaling agents across workflows requires systems that support both controlled testing and confident expansion.
Successful progression is closely tied to trust. As employees gain exposure to agents, confidence rises sharply—from 36% during exploration to 95% at scale. But AI trust doesn’t grow on exposure alone. Agents need unified, high-quality data to perform effectively, and fragmented systems can quickly undermine both results and governance.
The same is true for control. The boundaries defined in an operating model must also be embedded at the platform level. The operating model determines how work scales; the technology foundation determines whether it can scale safely.
A strong system of record makes that possible. By unifying the data, workflows, and governance structures AI agents rely on, a system of record gives organizations a consistent foundation for execution as adoption expands.
Instead of forcing agents to operate across disconnected systems and incomplete context, leaders can ground them in a trusted, single source of truth that supports performance, oversight, and more scalable ROI.
Ninety-eight percent of CEOs foresee an immediate business benefit from implementing AI. Download this report to discover the potential positive impact on your company, with insights from 2,355 global leaders.