2. Set Clear Quality Standards
Decision: Define the specific dimensions (examples: accuracy, tone, rigor) that determine whether AI-supported work meets the bar.
Why it matters: Quality standards protect the time freed up by AI, keeping it available for the work that moves outcomes. This is also where speed advantages turn into something more durable.
According to Gerrit Kazmaier, Workday president of product and technology: “AI should do the complex work under the hood so people can focus on judgment, creativity, and connection. That’s how organizations turn AI-powered speed into durable and human-led advantage.”
Quality standards operationalize that promise so that employees aren’t left wiring things together and fact-checking every answer on their own.
What to document:
- Core deliverable dimensions: Requirements for accuracy, completeness, evidence or source rigor, audience fit, tone, and compliance sensitivity.
- Validation non-negotiables: Mandated checks for facts, numbers, sourcing, and policy alignment.
- Output thresholds: What qualifies as an internal working draft versus a final executive-ready deliverable.
What breaks when this step is missing: Output looks polished and moves quickly, but then fails under scrutiny. Teams get a false sense of productivity and pay for it later through downstream correction and rework.
3. Assign Review Ownership and Escalation Paths
Decision: Make review an explicit responsibility within the role design, with clear ownership and escalation triggers.
Why it matters: As AI accelerates production, quality control becomes a designed part of how teams work.
Chris Ernst, Workday chief learning officer, summed up the balance between humans and AI: “The path forward is not a zero-sum game,” he noted. “It’s an opportunity to architect a symbiotic relationship, where technology expands our capabilities and humans continue to grow and make progress.”
That requires review to be designed into the role, with ownership and escalation triggers explicit so that verification doesn’t become invisible labor concentrated on the same few people.
What to document:
- Review ownership: Who reviews which outputs (manager, peer, cross-functional partner) based on the deliverable.
- Review depth: What qualifies as a light review versus a structured review, tied to risk and audience.
- Escalation triggers: Risk level, missing inputs, uncertainty, policy sensitivity, unclear sourcing, high people impact.
What breaks when this step is missing: The correction burden concentrates on a handful of individuals. Bottlenecks form, quality varies by reviewer availability, and rework shows up late when it is most expensive to fix.
4. Update Success Measures to Drive New Behavior
Decision: Align performance evaluation with outcome quality and strategic impact rather than volume.
Why it matters: Metrics drive behavior. When speed becomes the dominant metric, quality and judgment often become secondary. In practice, speed and quality need to be complementary.
“The future of work isn't human or AI; it's a partnership between the two,” Ashley Goldsmith, Workday chief people officer, explains, “driven by a deep understanding of when, where, and how to deploy human talent and AI capabilities. Humans should always be at the center, and AI is here to help us maximize our potential and focus on what we uniquely do best.”
Success measures should reinforce that partnership by rewarding net value and outcomes that hold up, not just output volume.
What to document:
- Impact metrics: Role-appropriate outcomes such as quality of hire, forecast accuracy, and first-pass yield.
- Judgment signals: How well the employee validates AI-supported work, applies context, and navigates tradeoffs.
- Net-value indicators: First-pass quality rate, revision cycles, stakeholder acceptance, and time spent clarifying or correcting output, all tracked over time.
What breaks when this step is missing: Speed becomes the dominant signal. Output volume rises, rework rises with it, and organizations struggle to move roles toward consistently net-positive outcomes.