Responsible AI you can trust.

AI creates limitless possibilities for innovation—and with that comes the responsibility to develop and use this technology in a safe, ethical way. See what we’re doing to build trust and transparency for a better future.

The responsible AI pillars that we hold true.

We believe that AI should be a force for good and put people first.

Our North Star: living our values.

Responsible AI isn’t just good for business—it’s the right thing to do. We strive to develop AI solutions that amplify human potential, champion transparency and fairness, and deliver data privacy and protection.

See how our principles align with our core values.

We create and follow best practices to mitigate risk.

Governance: risk identification and mitigation.

We take a risk-based approach to responsible AI, identifying the sensitivity level of each new AI application. Then we address the risks of unintended consequences throughout the build and maintenance of each application.

Learn how we mitigate potential risks.

We rely on a diverse set of experts to develop and maintain governance.

Bringing product experts and diverse perspectives together.

Our chief responsible AI officer and a dedicated team of social and data scientists, engineers, and tech experts uphold our RAI governance. A board of C-suite executives guides this work, and our Responsible AI Champions ensure its adoption.

Find out how to put humans at the center of AI.


Public Policy

We drive AI regulation that builds trust and enables innovation.

Advocating for responsible AI.

We’re proud to play a leading role in AI policy discussions at the federal, state, and local levels in the U.S., in Europe, and across Asia.

We’ve also worked with industry leaders to co-develop best practices for AI in the workplace. 

Learn more about this work.

Developing responsibly, at every step.

Our principles give us a solid foundation for our approach. But we don’t stop there—we put them into practice at every step of development. Learn more about some of our key practices.

Advocating for thoughtful frameworks.

We’re active in the development of leading frameworks and regulations such as the U.S. National Institute of Standards and Technology’s (NIST) AI Risk Management Framework and the European Union’s AI Act.

Designing for responsible AI.

We consider the potential for unintended consequences throughout the design and build of our products, keeping safety and security in mind. That means guardrails to ensure fairness, transparency, explainability, reliability, and more.

Providing our customers with visibility.

To help customers enable responsible AI within their own organizations, we explain how our AI solutions are built, how they work, and how they are trained and tested. Fact sheets, including descriptions of relevant risk evaluations and mitigations, are available for all customers.

“Transparency around how AI and ML models are trained is key to establishing trust. Systems that lack the sophistication to support that will struggle. Workday has the resources and brainpower to push all of us further ahead.”

—SVP, Chief Information Officer

Shaping a fair, transparent future for AI.

As the technology landscape evolves, so does our work in advancing the responsible use of AI. We look forward to uncovering even more innovative use cases while ensuring fairness and transparency for all.

Expanding our advisory board.

We’re bringing in more perspectives from different disciplines and areas of expertise.

Increasing investment.

We continue to explore ongoing responsible AI training and opportunities for collaboration with customers, partners, and legislators.

Partnering with our customers.

We are continuously working with customers to find more opportunities to enable their responsible deployment of AI.


Closing the AI trust gap.

Leaders and employees agree that AI presents many business opportunities—but the lack of trust that it will be deployed responsibly creates a barrier. The solution? Collaboration and regulation. 


of leaders welcome AI adoption in their organization.


of employees think their employer might put their own interests first when adopting AI.


of business leaders believe AI should allow for human intervention.

4 in 5

workers say their company does not communicate AI usage guidelines.

Our approach to trust.

Ready to talk? Get in touch.