Global Best Practices

The Catalyst for Strong AI Governance

Stay ahead in the rapidly evolving world of artificial intelligence with expert insights on governance frameworks, internal audit strategies, and risk management. Learn how organizations are addressing ethical, operational, and reputational challenges while strengthening oversight and building stakeholder trust. Explore key approaches, including governance committee involvement, emerging risk identification, and assurance services, to ensure responsible and resilient AI adoption.

Sponsored by DataSnipper

Driving Confidence in AI Governance

Discover proven strategies to strengthen oversight, enhance transparency, and mitigate risks in AI adoption. Position your organization at the forefront of responsible AI innovation.

At The ODP Corporation, the board has directed the business to develop an artificial intelligence (AI) governance plan so that appropriate reviews and approvals are in place. The internal audit function is involved in the process, with the chief audit executive (CAE) serving as a member of the AI Governance Committee, according to Sarah Morejon Rodriguez, senior manager of Internal Audit. The discussions delve into the evolving landscape of AI deployment, addressing implementation milestones, emerging risks, and the controls necessary to help safeguard stakeholder trust.

As organizations race to harness the transformative potential of AI and generative AI, internal audit functions are stepping into a critical role. Far beyond compliance, they are becoming strategic partners — helping boards and executive teams navigate the ethical, operational, and reputational dimensions of AI.

Steps to Effective AI Use

According to PwC’s Responsible AI and Internal Audit: What You Need to Know, “internal audit has a critical choice: Lead the charge on AI governance or scramble to catch up in the aftermath of a model failure, compliance breach, or public misstep.”

The report says internal audit can make a meaningful difference in responsible AI use by:

  • Establishing a line of sight across the AI landscape. A living inventory of AI systems, models, and embedded tools is valuable in assessing vendor concentration risk and regulatory compliance. It also can reveal high-impact use cases and facilitate audit planning.
  • Auditing AI governance structures. With AI governance typically spread throughout the company, internal audit can determine if governance roles, escalation paths, and decision rights are clearly defined and working as intended. It can also evaluate whether AI oversight responsibilities are documented, resourced, and operational.
  • Assessing the adequacy of AI risk and control frameworks. Internal audit can use recognized frameworks to determine if controls are effective for AI-related technology risks.
  • Embedding internal audit into responsible AI design. Internal audit should be aware of design and deployment plans, especially when high-risk or customer-facing models are involved.
  • Equipping internal audit with AI. AI tools can enhance control testing, document summarization, and anomaly detection.
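The "living inventory" the report describes can start as a simple structured record per system. As an illustrative sketch only (all system names, vendors, and fields are hypothetical, not drawn from the report), here is how such an inventory might surface vendor concentration risk and high-impact use cases for audit planning:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str       # system, model, or embedded tool
    vendor: str     # supplier of the underlying model
    use_case: str   # business process it supports
    risk_tier: str  # e.g., "high", "medium", "low"

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("invoice-ocr", "VendorA", "accounts payable", "medium"),
    AISystem("chat-assist", "VendorA", "customer service", "high"),
    AISystem("fraud-score", "VendorB", "payments", "high"),
]

def vendor_concentration(systems):
    """Share of systems per vendor; a dominant share signals concentration risk."""
    counts = Counter(s.vendor for s in systems)
    total = len(systems)
    return {vendor: n / total for vendor, n in counts.items()}

def high_risk_systems(systems):
    """High-impact use cases to prioritize in audit planning."""
    return [s.name for s in systems if s.risk_tier == "high"]

print(vendor_concentration(inventory))  # VendorA supplies 2 of 3 systems
print(high_risk_systems(inventory))
```

Even a minimal record like this lets internal audit answer the report's questions about vendor dependence and regulatory exposure without waiting for a full tooling rollout.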

AI Blind Spots

Internal audit can add value by identifying AI-related blind spots, such as model bias, data integrity, and lack of explainability. To enhance assurance around these concerns, George Barham, director of Standards and Professional Guidance for technology at The IIA, recommends that internal auditors:

  • Create an inventory of known AI-related risks.
  • Examine training data used for AI to find any bias.
  • Ask how closely the organization scrutinizes AI-generated results to detect bias, and verify that appropriate controls are in place to catch errors, protect access to data, and encrypt data.
  • Be cautious when there is a lack of explainability. If the technology’s basic workings and purpose cannot be described to someone unfamiliar with the model, auditors should ask questions.
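Examining training data for bias, as the list above suggests, usually begins with a screening metric. As a minimal sketch (the specific metric and all data are my assumptions, not named in the article), one common screen is the disparate-impact ratio, which compares favorable-outcome rates across groups:

```python
def disparate_impact_ratio(outcomes, groups, favorable=1,
                           protected="B", reference="A"):
    """Ratio of favorable-outcome rates: protected group vs. reference group.
    Values well below 1.0 (a common rule of thumb is below 0.8) warrant scrutiny."""
    def rate(group):
        selected = [o for o, g in zip(outcomes, groups) if g == group]
        return sum(1 for o in selected if o == favorable) / len(selected)
    return rate(protected) / rate(reference)

# Hypothetical labeled training data: 1 = favorable outcome.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"Disparate impact ratio: {disparate_impact_ratio(outcomes, groups):.2f}")
```

A screen like this does not prove bias on its own, but it gives auditors a concrete, repeatable question to put to model owners.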

In this era of accelerating innovation, AI governance is not optional; it is essential. And internal auditors are uniquely positioned to support governing bodies through both advisory and assurance services.

Having a Seat at the Table

As a member of the AI Governance Committee, The ODP Corporation’s CAE has “a big role in making sure that risks are being identified, that they’re being addressed appropriately, and that the right groups are involved in AI governance discussions,” Rodriguez says. The stakeholders vary based on the AI use case, but generally will include IT, legal, risk management, information security, privacy, compliance, and relevant business owners. The CAE’s presence also helps ensure that the internal audit function is thoroughly up to date on current AI processes and uses throughout the organization.

Internal audit is a key player because it provides independent assurance to the governing body that the organization’s approach to managing AI and internal controls is robust, regularly updated, and effectively implemented across business units, says George Barham, director of Standards and Professional Guidance for technology at The IIA. In doing so, internal auditors employ the same rigor and approaches that they use in reviewing policies and procedures in other areas, he says.

When it comes to AI roles and responsibility, Barham emphasizes the need for robust checks and balances and segregation of duties, so that those who design and develop the AI programs are not the ones who test and deploy it. In terms of compliance, internal audit can examine any related regulations and provide independent assurance that the organization is following the rules, in collaboration with the legal and regulatory teams.
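The segregation-of-duties check Barham describes can be tested mechanically once role assignments are recorded. As a hedged sketch (the role lists and names are hypothetical), an auditor might flag anyone who appears in both the development and deployment roles for a given AI program:

```python
def duty_conflicts(developers, deployers):
    """People holding both roles violate segregation of duties."""
    return sorted(set(developers) & set(deployers))

# Hypothetical role assignments for one AI program.
developers = {"ana", "bo", "chen"}
deployers = {"chen", "dara"}

print(duty_conflicts(developers, deployers))  # ['chen'] flags a conflict
```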

To manage risks before they cause operational or reputational damage, internal auditors can consult with product and engineering teams on risk management issues and recommend the best ways to address them, according to Richter. Often the people implementing new technologies may already be aware of related risks, but they might not have the right controls in place or may not be updating them as needed.

Internal auditors should keep in mind that the core technology is no different from machine learning tools that have been available for years, Richter says. Greater complexity does add new challenges, however. Agentic AI, for example, can make decisions and gain access to a variety of systems and tools. Companies will have to be vigilant about the access these systems are allowed so that they do not inappropriately disseminate sensitive human resources information, for example, he says.
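The access vigilance Richter calls for with agentic AI can be enforced with a deny-by-default allowlist. As an illustrative sketch (agent names, resource scopes, and the mapping itself are hypothetical), an agent's attempt to touch a system would be checked before any tool call runs:

```python
# Hypothetical allowlist mapping agents to the resources they may touch.
AGENT_ALLOWLIST = {
    "expense-agent": {"erp.invoices", "erp.payments"},
    "helpdesk-agent": {"tickets.read", "kb.search"},
}

class AccessDenied(Exception):
    pass

def authorize(agent: str, resource: str) -> None:
    """Deny by default: an agent may only touch explicitly granted resources."""
    if resource not in AGENT_ALLOWLIST.get(agent, set()):
        raise AccessDenied(f"{agent} may not access {resource}")

authorize("expense-agent", "erp.invoices")  # permitted, returns silently
try:
    authorize("expense-agent", "hr.salaries")  # sensitive HR data: blocked
except AccessDenied as e:
    print(e)
```

Deny-by-default matters here: an agent granted broad system access by omission is exactly the scenario Richter warns about with sensitive human resources data.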

AI Resources

To reflect the changing technology environment, The IIA’s Artificial Intelligence Auditing Framework outlines what internal auditors need to know to identify best practices and internal controls for AI.

Setting Strong Guardrails

There is often an assumption that governance, policies, procedures, and controls can hinder innovation, but Barham says appropriate guardrails and parameters support AI innovation and the people involved in it. He compares them to the brakes on a car, which give drivers greater confidence because they know how to slow down or stop in any situation.

AI leaders may find it easier to innovate when they know controls and processes are in place to prevent serious missteps. Internal audit, acting as a strategic partner, can help establish valuable guardrails by working to ensure the governing body has the information it needs to provide good governance.

Barham emphasizes the importance of training for those responsible for AI operations and governance. The governing body should receive updates on AI and should be prepared to ask incisive questions and be engaged in the process.

Building an organizationwide culture of AI accountability and transparency also begins with education for the entire company on what AI tools are available to them and how to put them to best use, Rodriguez says. Training should encompass AI use guidelines, such as security, privacy, and ethical concerns.

To promote a culture of AI accountability and transparency, internal auditors should encourage communication among involved teams and assess whether people at all levels can report a problematic process or result.

Being a Partner

The IIA’s Three Lines Model provides a clear framework for delineating internal audit’s role in relation to both the governing body and organizational leadership. This structure can serve as a guidepost for how internal auditors contribute meaningfully to AI governance.

As AI adoption accelerates, so too does the need for trusted partners who can assess emerging risks and ensure regulatory alignment. “I would love to have a team that can help me keep up with new regulations and risks,” says Richter, who, while not an internal auditor himself, underscores the growing demand for such expertise.

Far beyond a compliance function, internal audit is well-positioned to play a strategic advisory role. Auditors can help their organizations navigate the complexity of AI systems, ensuring that innovation proceeds responsibly. In a landscape shaped by rapid technological change, internal auditors have an opportunity — and a responsibility — to lead.

Disclaimer

The IIA publishes this document for informational and educational purposes only. This material is not intended to provide definitive answers to specific individual circumstances and as such is only intended to be used as peer-informed thought leadership. It is not formal IIA Guidance. The IIA recommends seeking independent expert advice relating directly to any specific situation. The IIA accepts no responsibility for anyone placing sole reliance on this material.


September 3, 2025