00:00:01 The IIA
The Institute of Internal Auditors presents All Things Internal Audit Tech. In this episode, Ernest Anunciacion talks with Marko Horvat about how global AI regulations are reshaping governance, risk management, and the role of internal audit. They discuss why regulators are prioritizing risk
00:00:20 The IIA
to individuals, and how AI governance spans the full system life cycle.
00:00:28 Ernest Anunciacion
Hello everyone, and welcome to today's episode. I'm Ernest Anunciacion, CIA and former chief audit executive. I'm joined by Marco Horvat, a recognized expert in AI regulation. So Marco, welcome to the show. Would you like to do a quick introduction of yourself to the audience?
00:00:42 Marco Horvat
Sure. I'm Marco Horvat, CPA.
00:00:47 Marco Horvat
I'm the senior vice president of business transformation at EB Learning, where we help organizations ready themselves for the AI future.
00:00:56 Ernest Anunciacion
Awesome. So let's jump into it. For a while now, it has felt like AI regulation was slow and theoretical, but now it's clearly accelerating across the globe. We're seeing governments move from broad principles and guidelines to requirements that are much more binding, and that shift has some major implications for organizations
00:01:17 Ernest Anunciacion
as it pertains to governance, risk management, and even internal audit. So Marco, what are you seeing in the regulatory environment, and why is this happening now?
00:01:26 Marco Horvat
The leading voice in all of this is coming out of Europe with their AI regulations, and the interesting thing about that is, you know, traditionally when we think about
00:01:35 Marco Horvat
audit as it pertains to business, you know, maybe this is my bias as a CPA, obviously, but usually we're really concerned with the idea of material misstatements and the quality of the financials coming out, the representations we're making to the public and to our investors, et cetera. But the interesting thing about the focus of these AI regulations is that when we talk about things like risk,
00:01:57 Marco Horvat
we're not talking about the risk of financial misstatement. What they're primarily concerned about is risk to individuals and human populations. Basically, anything that uses AI to label, target, or assess either individuals or characteristics of individuals is really the primary concern of these regulations.
00:02:18 Ernest Anunciacion
Can you unpack that just a little bit? I'm trying to think in terms of what the focus of certain regulation and legislation is. Obviously there's the transparency around artificial intelligence; one of the big risks that I see out there is deepfakes and misuse of artificial intelligence.
00:02:35 Ernest Anunciacion
So is that kind of what they're going after?
00:02:37 Marco Horvat
That's a part of it. There are different tiers. Let's take the EU regulations in particular: they have different risk levels. So if you're going to be using AI within your organization, you have to determine the associated risks of using that AI. And when we talk about the risk, we're talking about
00:02:55 Marco Horvat
the risk to the people being impacted by it. So to illuminate that a little better, the highest tier is sort of the "thou shalt nots." Those are barred from use, and that's basically intentionally using AI to trick and scam people, those sorts of things. And then as you descend down the different risk levels, they're mostly concerned with
00:03:16 Marco Horvat
the ability of AI to adjudicate in a way that impacts people directly. So, things like
00:03:23 Marco Horvat
using AI as a means of determining the creditworthiness of an individual, or using AI as a recruitment tool in terms of how you hire people. So anytime there's a judgment being made by the AI system, those are the types of risks involved, and the risk they're mostly concerned about is, you know, labeling people or judging people,
00:03:43 Marco Horvat
or barring people from opportunity without a human in the loop. So that's a really interesting point of view versus how we traditionally think about risk within an organization, where it's more about the internal risks and the things we're
00:03:58 Marco Horvat
developing within the organization, right, the representations of what's going on within the business, not necessarily how it's impacting stakeholders outside the organization from this sort of labeling or judgment perspective.
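To make the tiering concrete: the EU AI Act defines four risk categories (unacceptable, high, limited, minimal). The sketch below maps hypothetical use cases to those tiers; the specific mappings and obligation summaries are illustrative assumptions for demonstration, not legal guidance.

```python
# Illustrative sketch of EU AI Act-style risk tiering. The use-case
# mappings below are assumptions for demonstration, not legal advice.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. manipulative or deceptive AI)",
    "high": "allowed with strict obligations (human oversight, logging, audits)",
    "limited": "transparency obligations (disclose AI interaction)",
    "minimal": "no specific obligations",
}

# Hypothetical mapping of use cases to tiers, per the discussion above.
USE_CASE_TIER = {
    "subliminal manipulation": "unacceptable",
    "credit scoring of individuals": "high",
    "recruitment screening": "high",
    "customer service chatbot": "limited",
    "spam filtering": "minimal",
}

def classify_use_case(use_case: str) -> tuple[str, str]:
    """Return (tier, obligation summary) for a use case; default to 'high'
    pending review, since misclassifying downward is the riskier error."""
    tier = USE_CASE_TIER.get(use_case.lower(), "high")
    return tier, RISK_TIERS[tier]
```

Note the defensive default: an unrecognized use case falls into the high-risk bucket until someone reviews it, which mirrors the conservative posture the regulations encourage.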
00:04:11 Ernest Anunciacion
I think that makes a lot of sense. It's almost like creating a kind of metadata on human beings or individuals, and that not being taken into account. So when regulators are putting this together, what are their expectations for organizations? What do they want businesses and companies to demonstrate in terms of
00:04:31 Ernest Anunciacion
how they comply with these types of regulations?
00:04:34 Marco Horvat
I think the interesting part is that, from a compliance perspective, they want organizations to fully understand, or understand to the best that they can, how the AI is being used and how it's impacting the people it's being targeted toward, and to be able to monitor that entire process end to end.
00:04:55 Marco Horvat
And that introduces a couple of interesting challenges that we haven't fully embraced in the past, right? The regulatory life cycle of AI, for example, is a good starting point. We talked about how, in the past, internal audit was that third line of defense, right? So the first line was your business process
00:05:15 Marco Horvat
owners, who are developing the system in an ethical, regulatory-compliant way; then you have the second line that comes in and ensures there's compliance and ongoing monitoring; and then the auditors come in as an independent third-party observer within the organization to make sure that everything is going according to
00:05:32 Marco Horvat
plan. With the new AI regulations that are coming down, there needs to be involvement in the entire life cycle.
00:05:40 Marco Horvat
Right. So, you know, we're talking about everything from the very design of the entire system, making sure that we have a good human-in-the-loop design for certain critical, you know,
00:05:52 Marco Horvat
higher-risk types of activities.
00:05:56 Marco Horvat
Making sure that the data quality is good, that it's representative of your entire data set, and that we don't have biases introduced that could affect the outcomes. So doing a thorough review of the data going into the system, reviewing everything from an algorithmic-understanding perspective rather than an end-result compliance perspective,
00:06:17 Marco Horvat
is a different point of view than internal audit has traditionally ever had. And having to be involved at every one of those stages, as opposed to reviewing things at the end, I think is really interesting and could potentially be a tricky
00:06:31 Marco Horvat
position for internal auditors to be in, considering the fact that, at the same time, they need to maintain their independence from management.
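The data-quality review Marco describes can start with simple representation and outcome-rate checks per group. A minimal sketch, assuming hypothetical tabular records with a `group` field and a binary `approved` outcome; large gaps between groups would be a flag for deeper bias review, not a verdict on their own.

```python
from collections import defaultdict

def group_rates(records, group_key="group", outcome_key="approved"):
    """Per-group share of the data set and positive-outcome rate.
    Large gaps between groups are a flag for deeper bias review."""
    counts = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        g = rec[group_key]
        counts[g] += 1
        positives[g] += 1 if rec[outcome_key] else 0
    total = sum(counts.values())
    return {
        g: {
            "share_of_data": counts[g] / total,
            "positive_rate": positives[g] / counts[g],
        }
        for g in counts
    }

# Hypothetical records for illustration: equal representation in the
# data, but sharply different approval rates between the two groups.
sample = (
    [{"group": "A", "approved": True}] * 40
    + [{"group": "A", "approved": False}] * 10
    + [{"group": "B", "approved": True}] * 5
    + [{"group": "B", "approved": False}] * 45
)
stats = group_rates(sample)
# Group A: 50% of data, 80% approval; Group B: 50% of data, 10% approval.
```

Representation alone looks fine here (both groups are half the data), which is why the outcome-rate column matters: the review has to look at both.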
00:06:39 Ernest Anunciacion
Right, and that's a really unique perspective, Marco. You know, I still believe that a lot of organizations see AI governance more as a technology or compliance issue. But the way you just described it, you're saying that internal audit has a role across the entire AI regulatory life cycle. And I agree. I think there are three
00:06:59 Ernest Anunciacion
areas specifically where auditors can help, in addition to what you mentioned. First, readiness assessments, which is really
00:07:06 Ernest Anunciacion
the most critical piece, because a lot of companies can't even answer a very basic question: where are we actually using AI? You've got to believe that people are using ChatGPT on their own personal accounts, even if their company isn't allowing those tools. Then you mentioned a little bit about design reviews, evaluating whether the frameworks they have are fit
00:07:27 Ernest Anunciacion
for regulatory expectations, and obviously the ongoing assurance. So a follow-up question to that, Marco: what are some of the skill sets you think internal auditors need in order to provide value as it pertains to this AI regulation?
00:07:43 Marco Horvat
This is also really interesting, right? When you think about it, there has to be a certain level of AI literacy that goes beyond the basic "how do you write a prompt in ChatGPT that returns what you need." When we talk about the requirements to fully understand the system, we're not talking about them needing to be able to go in and,
00:08:03 Marco Horvat
you know, write lines of code, the level of expertise you'd need to natively develop an AI solution. But what they do need is to understand how these AI models work, right? And so one of the big things that's important in this oncoming wave of AI regulation
00:08:21 Marco Horvat
is explainability, right? Explainable AI is something that's incredibly important. One of the things that's been really tricky is the black-box problem, where a lot of AI operates as a black box; we're not exactly sure how it works. And from an auditability perspective, how do you audit something when you don't understand
00:08:41 Marco Horvat
how it got to that conclusion? So now there are tools being developed around explainable AI where you can look at the output and ask: does that output make logical sense? Can you determine how someone would reach that conclusion?
00:08:54 Marco Horvat
So when we talk about how internal auditors prepare themselves from a skills perspective, it's understanding hallucinations, what hallucinations are and how you identify them, and identifying model drift. One of the things I've seen is that you'll have a good AI model, and over time it sort of evolves,
00:09:14 Marco Horvat
it changes, and it'll drift away from the initial design. So understanding whether it has gone off the rails, how it has fundamentally changed from the initial design of the model, and being able, like I said, to assess the explainability of the AI and,
00:09:29 Marco Horvat
you know, act accordingly.
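One common way to quantify the model drift Marco mentions is the population stability index (PSI), a monitoring statistic that compares a feature's current distribution against a baseline; values above roughly 0.2 are often treated as a drift flag. A minimal pure-Python sketch; the bin count and threshold here are illustrative conventions, not anything prescribed in the episode.

```python
import math

def psi(baseline, current, bins=10):
    """Population stability index between two samples of one numeric feature.
    Bins are derived from the baseline's range; a small epsilon avoids log(0)."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0
    def histogram(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            idx = max(idx, 0)  # clamp values below the baseline minimum
            counts[idx] += 1
        n = len(values)
        eps = 1e-6
        return [max(c / n, eps) for c in counts]
    b, c = histogram(baseline), histogram(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Identical distributions give a PSI near zero; a shifted one scores high.
base = [i / 100 for i in range(100)]
assert psi(base, base) < 0.01
assert psi(base, [v + 0.5 for v in base]) > 0.2  # shifted: flagged as drift
```

Run periodically (or continuously) against production inputs and predictions, a check like this turns "has the model gone off the rails?" into a number an auditor can track over time.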
00:09:31 Ernest Anunciacion
Yeah, that's a really good point. So, as a follow-up to that, Marco: in practice, when you look at organizations, what are some of the common gaps and red flags you see from an audit standpoint?
00:09:44 Marco Horvat
I think there are three
00:09:46 Marco Horvat
big problems or challenges that come with AI. The biggest one, I think, is shadow AI.
00:09:52 Marco Horvat
So, you know, McKinsey did a survey last year where they looked at AI usage within organizations, and they found that 90% of people were using AI in some sort of capacity at work. And you compare that to only about 65% of organizations having a fully articulated
00:10:13 Marco Horvat
AI usage policy.
00:10:15 Marco Horvat
So that creates a lot of risk for organizations, especially with these oncoming regulations, where it's really important for organizations to understand what AI tools people are using and how they're using them. That shadow AI usage, I think, has two components to it. One is: what are all the tools that people are using, so that
00:10:37 Marco Horvat
you can have a correct inventory.
00:10:38 Marco Horvat
Right. Because a key part of the EU regulations is a risk assessment of the tools, and you can't do a comprehensive risk assessment of the tools if you don't know all the tools you're using. And the second part of shadow AI is how people are using those tools, because when you're looking at the risk assessment,
00:10:58 Marco Horvat
it's not necessarily just the capability of the tools that you're assessing; the use case within the organization is also incredibly important. So there's a risk there of mislabeling the risk assessment,
00:11:13 Marco Horvat
because people are using the AI in a fundamentally different or unauthorized way. So that is one of the challenges. The other big challenge is continuous monitoring: moving away from the traditional periodic check to being able to continuously monitor the system and the environment, so you can identify things like biases,
00:11:34 Marco Horvat
right, so that different patterns can be recognized as they emerge, or the model drift that I was talking about earlier.
00:11:40 Marco Horvat
And I think the third biggest gap is the third-party ecosystem, which has been an ongoing challenge for us. We have the same problem with cybersecurity, right? Making sure we know what kind of risk we're introducing by using third-party tools. The right-to-audit clauses we have with those organizations, I think, are going to be tremendous challenges for us to look at.
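The shadow AI gap described here lends itself to a simple audit check: diff observed tool usage (from access logs or surveys) against the approved inventory, catching both unknown tools and approved tools used outside their assessed scope. A minimal sketch; the tool names, tiers, and log format are hypothetical.

```python
# Sketch: surface shadow AI by diffing observed tool usage against an
# approved inventory. Tool names and log format are hypothetical.
APPROVED = {
    "chatgpt enterprise": {"tier": "limited", "approved_uses": {"drafting"}},
    "credit-model-v2": {"tier": "high", "approved_uses": {"credit scoring"}},
}

def review_usage(observed):
    """observed: iterable of (tool, use) pairs from access logs or surveys.
    Returns unapproved tools, and approved tools used outside their scope."""
    unknown_tools, out_of_scope = set(), set()
    for tool, use in observed:
        entry = APPROVED.get(tool.lower())
        if entry is None:
            unknown_tools.add(tool)
        elif use not in entry["approved_uses"]:
            out_of_scope.add((tool, use))
    return unknown_tools, out_of_scope

logs = [
    ("ChatGPT Enterprise", "drafting"),
    ("ChatGPT Enterprise", "credit scoring"),  # approved tool, unapproved use
    ("SomeNewAITool", "summarization"),        # tool not in the inventory
]
unknown, off_label = review_usage(logs)
```

The second bucket matters as much as the first: as noted above, a tool's risk assessment is tied to its use case, so an approved tool applied to an unassessed use case can invalidate the assessment.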
00:12:02 Ernest Anunciacion
Yeah, the last one, the third-party AI risk, is especially tricky, because most companies or organizations will often just assume that the vendors own the risk, right, that it's taken care of by them. But the regulators won't see it that way. If you're using AI in your processes, you're still going to be accountable for it, even if it's
00:12:21 Ernest Anunciacion
embedded in those third-party solutions. So let's shift gears a little bit now. If AI governance can't be treated as a future issue anymore, then given everything we've covered in today's
00:12:32 Marco Horvat
episode, what would you advise internal audit teams to be doing right now? I think it's getting those fundamentals of data literacy right, making sure you have a good foundation in terms of your ability to assess, review, and gather information throughout your entire organization. I think that's
00:12:52 Marco Horvat
really important. I also think it's important for internal auditors to understand the overall direction, or posture, that management is taking with
00:13:04 Marco Horvat
regard to AI, in terms of what their future ambitions are, so they can understand the risk horizon that's coming up.
00:13:10 Marco Horvat
And I think it's really important for them to be thinking about how we deal with shadow AI, continuous monitoring, and third-party audits, and not only that, but what additional risk we're introducing into the organization. So we're transitioning from a situation where, you know, we test data
00:13:30 Marco Horvat
on a periodic basis, and the reason we do that, from sort of a meta perspective, is the idea that we can't know everything all the time, all at once. So we engage in sampling and other methods on a periodic basis
00:13:48 Marco Horvat
as a means of assessing risk, as part of the overall procedure. If you move to a system where it's all data, on all the time, I think that also introduces additional risk to the organization, because that known or knowable standard, when it comes to things going wrong, has
00:14:08 Marco Horvat
just fundamentally changed. Thinking about how introducing all of this data can also introduce additional risk, and about the ability to stay on top of it and monitor it, is going to become incredibly
00:14:20 Marco Horvat
important as we transition into this always-on, all-knowing, omniscient sort of future that everyone is striving for.
00:14:30 Ernest Anunciacion
It's an interesting point where we're at technologically, right? I feel like there's a gold rush, with every company incorporating AI
00:14:40 Ernest Anunciacion
as part of their solutions and coming up with new software, and there's this explosion of data coming along with it. So I think the only thing I'd add, on what internal audit should be doing now, is partnering with legal and compliance; that just feels essential to
00:14:57 Ernest Anunciacion
navigating all these regulations. They cut across legal, privacy, compliance, and technology ethics, right? And internal auditors are uniquely positioned to connect those dots and provide that independent assurance.
00:15:10 Marco Horvat
Yeah, and I think a lot of that is governance, too, right? It's not just what you know; it's who knows it, who has access to it, and what they're doing with that access. That's becoming incredibly important for organizations to consider as they move into this always-on frontier.
00:15:28 Ernest Anunciacion
For sure. Well, Marco, this has been incredibly insightful; I really appreciate your perspective here. I think if there's one message for internal audit leaders to take away from our conversation, it's that AI regulation is no longer theoretical, and waiting is only going to put your organization at risk. You know, every CAE that I talk to today has
00:15:48 Ernest Anunciacion
AI as part of their audit plan. So we've got a critical role to play, not just in assurance but, Marco, like you said, earlier on in the adoption life cycle, and that will help organizations build trust, resiliency, and accountability when it comes to AI. So Marco, thank you so much again for joining us today.
00:16:08 Marco Horvat
I appreciate it. And it's amazing to see this journey, you know, how internal audit is really taking on this trusted advisor role within organizations, maintaining that independence and really being an added voice on ethics, compliance, and all these other things within the real-time workings of the organization.
00:16:28 Marco Horvat
It's a really exciting space to be in.
00:16:31 Ernest Anunciacion
100% agree. So that's it for today's episode. Again, I'm Ernest Anunciacion, and that was Marko Horvat. I appreciate you all listening in, and as always, stay classy, humanoids.
00:16:43 The IIA
Hey, audit pros. GAM is back March 9th through the 11th in Las Vegas. It's where internal audit leaders come together to talk governance, emerging risk, and what's next for the profession. If you want smart conversation, fresh ideas, and some great networking along the way, head to The IIA website and register for GAM today.
00:17:03 The IIA
If you like this podcast, please subscribe and rate us. You can subscribe wherever you get your podcasts. You can also catch other episodes on YouTube or at theiia.org. That's theiia.org.