00:00:02 The IIA
The Institute of Internal Auditors presents All Things Internal Audit Tech.
00:00:06 The IIA
In this episode, Antonio Cachaputi and Alessandro Casarari unpack the ethical challenges of artificial intelligence in internal audit and anti-financial crime.
00:00:16 The IIA
They discuss AI hallucinations as a risk to be governed, not eliminated, and examine why governance, accountability, and human judgment are central to ethical AI.
00:00:28 Antonio Cacciapuoti
How do you think internal audit can evaluate a system that by definition can hallucinate?
00:00:34 Alessandro Casarotti
For internal audit, we need to change a little bit the question, because the point is not if an AI system can hallucinate, because by design, of course, it can and it will.
00:00:48 Alessandro Casarotti
The real question here is if the organization is ready to live with that, because from
00:00:55 Alessandro Casarotti
an internal audit perspective, AI cannot be assessed in the same way as a deterministic system.
00:01:02 Alessandro Casarotti
So the point is not to expect that the result will be correct every single time.
00:01:09 Alessandro Casarotti
For me, what really is important is understanding what the organization has put in place
00:01:15 Alessandro Casarotti
to manage and to mitigate situation when the model can be wrong.
00:01:20 Alessandro Casarotti
So in my opinion, the internal auditors should focus on three important things.
00:01:26 Alessandro Casarotti
The first is the governance.
00:01:27 Alessandro Casarotti
So who owns the AI and who is accountable for the use of AI.
00:01:33 Alessandro Casarotti
The second is the control around the usage and not the model itself.
00:01:38 Alessandro Casarotti
I mean that auditors should try to audit the algorithm line by line, but they should assess if there are human in the loop, if there are controls, especially in high-risk use cases.
00:01:52 Alessandro Casarotti
And the last but not least, the user knowledge and transparencies, because
00:01:59 Alessandro Casarotti
it's important to understand if people understand that the system can hallucinate or if the system is perceived as an authoritative source.
00:02:09 Alessandro Casarotti
Because here the distinction is very important because it changes the risk profile and the type of control we can put around it.
00:02:19 Alessandro Casarotti
For internal audit, the hallucinations are not a technical failure, but it's just a risk, a normal risk to be mitigated.
00:02:28 Alessandro Casarotti
So
00:02:29 Alessandro Casarotti
So if you want to put an ethical point on this, I would say that an ethical AI system, it's not a system that never hallucinates, but it's a system where hallucinations are expected and safely assorted by the organization.
00:02:46 Alessandro Casarotti
So this is my view on your question, Alessandro.
00:02:50 Alessandro Casarotti
And now I would like to shift the focus to your experience in the financial crime space.
00:02:57 Alessandro Casarotti
And I'm really
00:02:58 Alessandro Casarotti
curious about your answer.
00:03:01 Alessandro Casarotti
So based on what you have seen so far, of course, how are the main AI providers building controls to make sure that decisions remain ethical and that these tools are not used for fraud?
00:03:15 Alessandro Casarotti
And please, if you have any concrete examples, just give it to us.
00:03:20 Antonio Cacciapuoti
Remain ethical for big AI players is a big point of concern.
00:03:26 Antonio Cacciapuoti
Lately, I think we had a big statement from Sema Altman of OpenAI, but also from the CEO of Anthropic in regards to where really they are looking for also ethical regulation that gives a bit of grounds on how to address
00:03:42 Antonio Cacciapuoti
those topics and the development that is taking because it's going faster and faster.
00:03:47 Antonio Cacciapuoti
Because big players have taken different levels of safeguards on ethical perspective, which can be a combination of technical safeguards, policy frameworks, active monitoring,
00:04:02 Antonio Cacciapuoti
to try to, of course, limit the risk also that fraudster can exploit.
00:04:08 Antonio Cacciapuoti
But I think, at least in my perspective, the most interesting one is from Anthropic, because they're building their own model and AI constitution which evaluates the answer on a model perspective.
00:04:22 Antonio Cacciapuoti
And I think that's quite interesting that it's a private institution that builds
00:04:28 Antonio Cacciapuoti
in its own model, an AI constitution to evaluate the ethical perspective.
00:04:34 Antonio Cacciapuoti
And there is no public push for having this as a consensus.
00:04:41 Antonio Cacciapuoti
Actually, we have also very different big AI system, which has been criticized actually for the opposite, to not have ethical safeguards.
00:04:49 Antonio Cacciapuoti
So I think as a first approach, it's very interesting where Anthropic went and the fact that it's a private institution pushing for that
00:04:58 Antonio Cacciapuoti
and not a public one, even though we know that regulation, at least in Europe, they are trying to ensure that some safeguards are put into place.
00:05:09 Antonio Cacciapuoti
But then if we look more overall on all other safeguards, I mean,
00:05:14 Antonio Cacciapuoti
There are several things that have been built in the model, such, I mean, classification on wireless detection, toxicity, bias detection, that can run to ensure that the model blocks certain outputs.
00:05:30 Antonio Cacciapuoti
But that goes a bit also in line with what you were saying before, I mean, and it's how the model was validated, because the model was validated, trained by human.
00:05:40 Antonio Cacciapuoti
And so the famous human judgments needs to stay in somehow.
00:05:44 Antonio Cacciapuoti
And it's the famous human in the loop that also the UAI act is pushing for.
00:05:50 Antonio Cacciapuoti
I mean, it's important in a gray area and really subjective area such ethics to have human judgments that is let somehow also controlled by the human in the end, whether with human feedback, whether with some specific testing, which can be
00:06:10 Antonio Cacciapuoti
also done by prompts that are studied for that.
00:06:14 Antonio Cacciapuoti
I mean, there is a specific team testing also that goes a bit in that direction also to ensure that social engineering is not used for fraud models or so to phishing or other potential manipulation.
00:06:28 Antonio Cacciapuoti
So all in all, I think what is important is that we keep in mind that fraudsters are always the one step ahead.
00:06:38 Antonio Cacciapuoti
And I mentioned a bit earlier, I mean, if they know how to prompt, they can always rewrite the question to move around the ethical safeguards through to looks legit.
00:06:50 Antonio Cacciapuoti
And that's why human in the loop is important to ensure that the model.
00:06:57 Alessandro Casarotti
So we need to act as criminal to stay one step ahead.
00:07:04 Alessandro Casarotti
And.
00:07:05 Antonio Cacciapuoti
Usually we used to say as a motto that you need to think like a criminal to catch such criminals.
00:07:12 Alessandro Casarotti
And maybe we will talk later about it.
00:07:14 Alessandro Casarotti
But I think also in your opinion, the human are the main
00:07:20 Alessandro Casarotti
characters, I mean, first human and then machine.
00:07:24 Antonio Cacciapuoti
Yes, I mean, I mentioned anthropic, but anthropic and then AI constitution made by humans.
00:07:29 Antonio Cacciapuoti
That's very interesting.
00:07:31 Antonio Cacciapuoti
I mean, in the end, the human is the one creating the machine.
00:07:36 Antonio Cacciapuoti
So being the one that creating a machine is also sure the ethics in the machine, because it's the ultimate responsible.
00:07:43 Antonio Cacciapuoti
I said, the human has this creativity,
00:07:46 Antonio Cacciapuoti
that can help to circumvent the safeguards.
00:07:49 Antonio Cacciapuoti
So that's why fraudsters are always one step ahead also of artificial intelligence actually.
00:07:54 Antonio Cacciapuoti
Right.
00:07:55 Antonio Cacciapuoti
Because if they know how to prompt.
00:07:57 Antonio Cacciapuoti
They know also how to circumvent those safeguards.
00:08:03 Antonio Cacciapuoti
And if you want to stay a bit on this philosophical part, at least what I find interesting is that you need to be able also to, as you mentioned, I mean, you don't need to know how to code and to read all the code, but you need to be able to explain a bit the process to ensure you can audit it.
00:08:19 Antonio Cacciapuoti
So how ethical is for you to automate a process if you're not able to explain it?
00:08:26 Alessandro Casarotti
I love philosophy, and here I don't think ethics is just about if we can fully explain a system, but it's more about if we understand the consequences of using that system.
00:08:40 Alessandro Casarotti
Because many processes were never fully explainable, even before AI.
00:08:45 Alessandro Casarotti
Think about some example linked to my
00:08:48 Alessandro Casarotti
asset management work, for example, the complex credit model, so the investment decision made by the portfolio managers.
00:08:56 Alessandro Casarotti
We often can't explain every step, but we still have accountability for them.
00:09:02 Alessandro Casarotti
So the ethical problem for me starts when automatation removes responsibility.
00:09:09 Alessandro Casarotti
If nobody can explain what the system does, why it does it, and especially who is accountable when something goes wrong, then automatation becomes ethically weak, not because it's so back, but just because it's not well governed.
00:09:26 Alessandro Casarotti
So if also here, because we stay in an internal audit perspective, if we jump in the internal audit world, the key question for me is not if we can explain, as I said before, the algorithm line by line, but if we can explain when the system should be used and when not, and especially if we can explain who is responsible for the outcome produced.
00:09:55 Alessandro Casarotti
So if you cannot explain those things, you as a company, you shouldn't start to automate the process.
00:10:04 Alessandro Casarotti
So ethically speaking, for me, automatation without the possibility to explain what is acceptable, the automatation without possibility to explain, it's only acceptable when the impact is low.
00:10:20 Alessandro Casarotti
So when the risk is low, as you were saying before, the humans remain in the control.
00:10:25 Alessandro Casarotti
And if there are clear control put in place by the organization when things go wrong, because things, of course, can be wrong.
00:10:36 Alessandro Casarotti
So otherwise, if you combine just opacity with automation, we are not talking about ethic.
00:10:43 Alessandro Casarotti
It's not ethic.
00:10:45 Alessandro Casarotti
It's just risk moving faster, which is not responsible.
00:10:49 Antonio Cacciapuoti
And that's interesting because what you said, I mean, mainly speaking about the risk perspective, it's a bit where there, at least in Europe, the regulation is trying to give a bit of safeguards when we say, okay, depending on certain type of activities can be defined whether it is low, medium, high, or very high risk, and therefore certain level of controls is expected in the activities.
00:11:11 Alessandro Casarotti
Yeah, because then the perception of risk
00:11:15 Alessandro Casarotti
can change and also the control that every organization can put in place to mitigate this risk.
00:11:21 Alessandro Casarotti
But, Alessandro, today the world is more connected than ever, but especially in our sector and in yours, I imagine that you often have to navigate local regulatory barriers from clients operating under different legislation.
00:11:39 Alessandro Casarotti
So with that in mind,
00:11:41 Alessandro Casarotti
However, AI players approach ethical decision when operating under such a different regulation.
00:11:48 Alessandro Casarotti
For example, if we can mention Europe.
00:11:52 Alessandro Casarotti
Asia or United States?
00:11:54 Antonio Cacciapuoti
I mean, if we take these three macro regions, I think we can see already three quite different approach on AI and also mainly regulation.
00:12:04 Antonio Cacciapuoti
I mean, we mentioned already Europe and we know that also Europe is somehow also teased about over-regulating, but for sure gave us quite clear ground
00:12:16 Antonio Cacciapuoti
to build up some governance on what is EU AI with the EU AI Act, and also therefore to ensure that there are some parameters embedded for having safeguards that are brought forward.
00:12:31 Antonio Cacciapuoti
On that perspective, it's way different from United States and Asia, which have more a pragmatic approach also for
00:12:39 Antonio Cacciapuoti
constant development.
00:12:41 Antonio Cacciapuoti
Because if we think on an overall perspective, I mean, we mentioned so far the EUAI Act, but at least here in Europe, any big players needs to ensure multiple regulatory compliance.
00:12:56 Antonio Cacciapuoti
Not only the EU Act, EU Act is a foundation, but there is also other consideration to do.
00:13:00 Antonio Cacciapuoti
For instance, the GDPR regulation.
00:13:02 Antonio Cacciapuoti
I mean, all the data that you are inputting, where are they going, how they are treated, because also this may have impacts.
00:13:09 Antonio Cacciapuoti
on how the model is learning and therefore also about also some ethical or bias discrimination.
00:13:18 Antonio Cacciapuoti
And if for you in your sector, in the financial sector, you have even other regulation that goes on top.
00:13:24 Antonio Cacciapuoti
So the Dora one, the Cloud one, the outsourcing one, all together, I mean, it becomes very complex to put a system in place that really fits all activities.
00:13:36 Antonio Cacciapuoti
But at the same time, I think it helps
00:13:39 Antonio Cacciapuoti
to perform a reflection in the development of the model and of the need that AI is required to answer to ensure that all potential safeguards and risks are actually assessed.
00:13:54 Antonio Cacciapuoti
Because when you have to consider all these regulatory compliance, I mean,
00:13:58 Antonio Cacciapuoti
the technology is moving way faster.
00:14:00 Antonio Cacciapuoti
And that need of having a reflection on compliance on multiple regulation help the governance behind AI to ensure that the model is strong enough to assess the risk and the safeguards required.
00:14:16 Antonio Cacciapuoti
Because if we think all this regulation was potentially written or drafted even before ChatGPT was publicly released,
00:14:24 Antonio Cacciapuoti
which means that, I mean, usually we say technology is five years forward, the regulation, or regulation is five years behind the technology.
00:14:33 Antonio Cacciapuoti
So if you consider how much is going fast now, the AI development is regulation with the risk to stay a bit behind more and more.
00:14:43 Antonio Cacciapuoti
But at the same time, it helps to ensure that when we build a model,
00:14:49 Antonio Cacciapuoti
we have some reflection on risk and safeguards.
00:14:52 Antonio Cacciapuoti
That's how we see the things.
00:14:54 Antonio Cacciapuoti
And that's where the human still clear, having a very important part, since ethics is very subjective, a judgmental topic.
00:15:01 Antonio Cacciapuoti
So when you build, when each company builds its own safeguards, I mean, it's managed and built by humans.
00:15:10 Antonio Cacciapuoti
So this is not a thing that cannot be delegated to a machine.
00:15:13 Alessandro Casarotti
I will just jump in because the fact that here in Europe, we have a strong
00:15:19 Alessandro Casarotti
So we are maybe more regulated than other countries.
00:15:24 Alessandro Casarotti
For you to represent a barrier for technology development or is just for you an advantage in terms of a more ethical AI compared to, I mean, Asia and the United States?
00:15:39 Antonio Cacciapuoti
For sure, give us some time to recover the development that they may have in USA and
00:15:47 Antonio Cacciapuoti
Asia, because they have potentially less regulatory barrier, but at the same time gives, I think, comfort and ground, mainly when we go on ethical safeguards, because as I mentioned before, the reflection that you need to give and make when you build the model and the governance on risk and barrier and safeguards that you're implementing.
00:16:09 Antonio Cacciapuoti
And since this is mainly done by humans, I mean, it helps really to have this human in the loop, also in the build of the governance, the model, and manage such a subjective, a judgmental topic like ethics that...
00:16:25 Antonio Cacciapuoti
can be handled not by a machine, or at least at the beginning, cannot be dedicated to a machine.
00:16:31 Antonio Cacciapuoti
Then we go in the famous human in the loop, because it's also part of the regulation that is expected here.
00:16:38 Antonio Cacciapuoti
I mean, the human in the loop is sufficient for you, Antonio, to ensure an ethical AI, or it's just a slogan?
00:16:48 Alessandro Casarotti
Alessandro, to be honest with you and of course with all the audience, human in the loop is necessary, but it's not enough.
00:16:59 Alessandro Casarotti
Yes, I have to admit, sometimes it's just a slogan, because having a human in the loop only works if that human has knowledge, has time and authority
00:17:14 Alessandro Casarotti
and of course, more knowledge to operate.
00:17:16 Alessandro Casarotti
But if that person is just clicking approve because the system is very complex or is fast, in that case, the human in the loop is just theory.
00:17:29 Alessandro Casarotti
But I mean, it's just theory and you cannot apply that in practice.
00:17:33 Alessandro Casarotti
From an ethical perspective, here the risk is the, as I said before, is the automation bias.
00:17:41 Alessandro Casarotti
Because when AI results, when AI
00:17:44 Alessandro Casarotti
high outcomes looks too confident, too good, people tend to trust them even when they shouldn't.
00:17:51 Alessandro Casarotti
So in those cases, the human becomes just a blind approval and not a safeguard.
00:17:58 Alessandro Casarotti
And internal audit, in my opinion, should understand if the human can understand why the system produced that kind, that specific output.
00:18:10 Alessandro Casarotti
and especially if they allow to override it without any justification or any penalties.
00:18:17 Alessandro Casarotti
Because for me, of course, ethic AI requires real human in the control, not just a symbolic control.
00:18:25 Alessandro Casarotti
And that for me means that you need to, if I can give you
00:18:30 Alessandro Casarotti
example in what we do in my organization, in the internal audit function, we create a strong training program, we create escalation paths, we create clear decision limit, and especially a good communication with all the assurance providers.
00:18:52 Alessandro Casarotti
Because we need to know if humans
00:18:55 Alessandro Casarotti
really intervene, really go in when something looks wrong.
00:19:00 Alessandro Casarotti
So to conclude, yes, human in the loop, it's a good starting point, but without governance, it's not a control, it's just a checkbox.
00:19:10 Alessandro Casarotti
And as you know, ethics built on checkboxes will never survive with the reality.
00:19:16 Antonio Cacciapuoti
Indeed, indeed.
00:19:17 Antonio Cacciapuoti
You cannot put boxes for something subjective.
00:19:20 Alessandro Casarotti
Of course, you need, as we were saying before, you need to stay one step ahead of a criminal.
00:19:26 Alessandro Casarotti
And for doing that, at least what we are doing is you need to enhance your risk assessment framework.
00:19:35 Alessandro Casarotti
especially for this kind of AI fraud.
00:19:39 Alessandro Casarotti
You need to maybe enhance your detection system.
00:19:43 Alessandro Casarotti
You need to anticipate risk.
00:19:45 Alessandro Casarotti
You need to move faster in order to apply the best control you can.
00:19:52 Alessandro Casarotti
You need to increase training and the employees
00:19:56 Alessandro Casarotti
knowledge about this kind of new risk and also collaborate, as I was saying before, with all the assurance providers, especially IT, compliance, risk management, the other second level control function in order to respond faster to this kind of risk and to put in place a good control.
00:20:18 Alessandro Casarotti
Now, I suggest to stay in the famous real human in the loop.
00:20:23 Alessandro Casarotti
I would like to know how did you implement a real human in the loop process in insensitive domains like financial crime?
00:20:33 Antonio Cacciapuoti
We said that ethics is something subjective and very grey because it's a grey area when there is a red thin line between what it is, ethical and unethical, and it's very subjective.
00:20:44 Antonio Cacciapuoti
And anti-financial crime too is a very grey area because there are some areas that are not well defined, also for law, for regulation, or I mean, as we said, since frauds are always one step ahead, I mean, they're always trying to find
00:21:01 Antonio Cacciapuoti
the right loophole or the new technology to jump in.
00:21:04 Antonio Cacciapuoti
That's the complexities because, I mean, we have some great advantages in using AI for anti-financial crime because it can help you completely transform the activity.
00:21:17 Antonio Cacciapuoti
I mean, if we take one of the most common controls that you have in anti-financial crime, such that your customer,
00:21:25 Antonio Cacciapuoti
So far is always seen as an administrative activity.
00:21:28 Antonio Cacciapuoti
You collect info, you collect documents, you need to check it off, it's more a checklist exercise.
00:21:34 Antonio Cacciapuoti
And the AI, it helps to change completely the perspective because it will help you to automate this part as much as possible or to identify also patterns.
00:21:45 Antonio Cacciapuoti
So I'm to anticipate certain thing that of course if you repeat just the activities of checking documents and info you may lose.
00:21:53 Antonio Cacciapuoti
To really have a human
00:21:55 Antonio Cacciapuoti
in the loop that performs the investigative process.
00:21:58 Antonio Cacciapuoti
But since it's very sensitive, a human involvement is there from the beginning.
00:22:05 Antonio Cacciapuoti
So that's how we implemented, for instance, when we implemented some AI for KYC, human in the loop was at the beginning.
00:22:12 Antonio Cacciapuoti
So because the model required to be tested, trained, and so validated and trained data and scenarios by human to ensure that all the different scenarios can be at least considered.
00:22:24 Antonio Cacciapuoti
But
00:22:25 Antonio Cacciapuoti
Another point is also human group needs to stay after, because as we said, the fraudster are always ahead.
00:22:31 Antonio Cacciapuoti
But one thing is having more in an investigative approach, so to identify patterns, learning, so that human in the group just assess, okay, yeah, this is risky or not risky.
00:22:41 Antonio Cacciapuoti
And it drives already to him with a predefined assessment, but can have the sensibility and creativity that the machine doesn't have to ensure that there was no safeguards that are delayed.
00:22:54 Antonio Cacciapuoti
But there is also a very important starting point there that is the data quality.
00:22:59 Antonio Cacciapuoti
I mean, and that's why a human involvement is the first safeguard there, because at this moment of transition, we are building everything on data that we have.
00:23:10 Antonio Cacciapuoti
But we know very well also that the data quality is essential to train the model.
00:23:14 Antonio Cacciapuoti
And that's why the human needs to be at the very first stage also in the model training, because it depends on which model you train.
00:23:21 Antonio Cacciapuoti
and which data you use to train the model.
00:23:26 Antonio Cacciapuoti
And therefore, the data may also have some biases and you may create biases in the model even without that you really want.
00:23:34 Antonio Cacciapuoti
So that's why the human needs to be set as a first safeguard at the very beginning and also at the end of the process, mainly when we have these
00:23:47 Antonio Cacciapuoti
risky areas such as anti-financial crime.
00:23:49 Antonio Cacciapuoti
Of course, there are potentially some parts that can be fully automated when you see really that there is no risk, low risk, and there is sufficiently confidence in the data also behind, because of course data will improve along the time.
00:24:05 Antonio Cacciapuoti
But as a starting point, cannot be a machine generating everything mainly in such domain.
00:24:12 Antonio Cacciapuoti
So we cannot let the machine decide for ethical points.
00:24:16 Antonio Cacciapuoti
I mean, such a subjective line, machine can help to assess and the human therefore can focus on the really the subjective assessment and able to refocus his activity on the least red thin line that is on ethical perspective.
00:24:34 Antonio Cacciapuoti
and also to ensure that bias or hallucination checked in advance.
00:24:39 Alessandro Casarotti
Yeah, but Alessandro, we, of course, we need to keep in mind that the risk cannot be eliminated.
00:24:45 Alessandro Casarotti
We, our work as control function is just to reduce us, you know, the risk and to bring it within the organizational risk appetite.
00:24:57 Alessandro Casarotti
And here we talked about safeguard and ethical frameworks.
00:25:02 Alessandro Casarotti
So
00:25:04 Alessandro Casarotti
from the other side of this table, how do fraudsters try to bypass those protections to achieve their goals?
00:25:13 Antonio Cacciapuoti
Indeed, that's a very good question, Antonio.
00:25:15 Antonio Cacciapuoti
I mean, so far we always look on maybe on how we implement safeguards, but not how fraudsters try to make the safeguards derailed.
00:25:23 Antonio Cacciapuoti
So as we said, and again, I think we go a bit on the human aspect of ethics.
00:25:30 Antonio Cacciapuoti
the creativity is essential and fraudsters developed.
00:25:34 Antonio Cacciapuoti
And so if you know already how to prompt, you can find very good turnarounds.
00:25:41 Antonio Cacciapuoti
Because if you ask, for instance, to an IM model how to cook books, even those, let's say, with less barriers, but then I think they will block you.
00:25:50 Antonio Cacciapuoti
I mean, there is Dark GPT for sure that if we find help, but let's say all others, and then they should
00:25:59 Antonio Cacciapuoti
block you.
00:25:59 Antonio Cacciapuoti
However, if you ask how I can explain to my audience that is a classroom how to not cook books, then you start to, for instance, look for turnarounds of the ethical safeguards that you may have put in place.
00:26:17 Antonio Cacciapuoti
So that's where really the first
00:26:21 Antonio Cacciapuoti
type of activity that fraudsters do, what is so-called prompt obfuscation.
00:26:27 Antonio Cacciapuoti
So we're trying to rephrase to try to bypass the filters that AI system has, like in the financial institution.
00:26:37 Antonio Cacciapuoti
I mean, they know very well the controls you have to try to circumvent the controls, like with AI, they know very well the safeguards that they have.
00:26:48 Antonio Cacciapuoti
to try to turn around them.
00:26:50 Antonio Cacciapuoti
Then, of course, they can also exploit biases.
00:26:53 Antonio Cacciapuoti
For instance, if they know that AI may have some certain biases, they may act with some profiles.
00:27:02 Antonio Cacciapuoti
that they know that they will be protected due to these biases and they will get further information, or they can create scientific identities to bypass AI because maybe more tolerant to avoid discrimination.
00:27:17 Antonio Cacciapuoti
So that's also the interesting parts, because depending on the different models, there may be
00:27:24 Antonio Cacciapuoti
different approach, but for sure what I see most is that fraudsters have really great creativity.
00:27:31 Antonio Cacciapuoti
They're always one step ahead.
00:27:33 Antonio Cacciapuoti
And that's why, again, being such very thin line ethics and the safeguards, you need
00:27:42 Antonio Cacciapuoti
the humans still in the loop.
00:27:44 Alessandro Casarotti
And for me, I don't see another way, Alexandro.
00:27:47 Antonio Cacciapuoti
Still, at least so far, as I said, also based on anyway, the limited data and knowledge that we have now, because of me, what I see also on how the models were beat so far, I think for sure there are safeguards, but I'm always conscious that
00:28:03 Antonio Cacciapuoti
The data quality is the key first components and how the data quality were input, how the trained model were trained, how the human validated the data quality and the model.
00:28:14 Antonio Cacciapuoti
And so the output generator is the essential part on how the AI is beat and learn.
00:28:20 Antonio Cacciapuoti
So that's why the human aspect is still very relevant, mainly in such a topic like in such gray areas, like ethics, like financial crime, like fraud, where a human interaction creativity still needs to be put in to assess a subjective domain.
00:28:38 Antonio Cacciapuoti
And that's why I think, anyway, human still will be essential even
00:28:44 Antonio Cacciapuoti
in development of AI.
00:28:47 Antonio Cacciapuoti
And maybe also to close, I mean, a final question to you.
00:28:50 Antonio Cacciapuoti
How do you think the internal audit works for us of the future will interact with AI?
00:28:57 Alessandro Casarotti
Well, I love this question and thanks for it.
00:29:00 Alessandro Casarotti
And when we talk about the future of internal audit and the work in general, we also have to talk about the new generations.
00:29:10 Alessandro Casarotti
For me,
00:29:11 Alessandro Casarotti
As I always say, AI is an incredible accelerator, but acceleration without any foundation is just dangerous.
00:29:20 Alessandro Casarotti
And here the real risk is that people stop learning how to do things just because AI will do it anyway.
00:29:30 Alessandro Casarotti
If you don't understand what you are doing, what you are looking for, I mean, you cannot
00:29:35 Alessandro Casarotti
evaluate if the AI output makes sense.
00:29:39 Alessandro Casarotti
I mean, you cannot understand if it's an AI hallucination.
00:29:44 Alessandro Casarotti
And that's how the tool start controlling people and instead of the other way around.
00:29:51 Alessandro Casarotti
No, you were saying before, it's people creating AI, but if we don't know what we are using, it will be the other way around.
00:30:01 Alessandro Casarotti
So just prove me wrong, but you learn to walk before drive a car.
00:30:06 Alessandro Casarotti
So you learn to arithmetic before using a calculator and you learn to read a map maybe before using a GPS.
00:30:16 Alessandro Casarotti
So it's the same thing.
00:30:18 Alessandro Casarotti
And in the audit, you must understand the risk, governance, and you must understand controls and judgment before asking AI to help you assess them.
00:30:30 Alessandro Casarotti
And as the godfather of AI, Geoffrey Hinton said, I'm not afraid of AI, but I'm afraid of human laziness.
00:30:39 Alessandro Casarotti
And I totally agree with that.
00:30:42 Alessandro Casarotti
And because here the key word is use and it's not outsourcing thinking that we have not to outsource
00:30:51 Alessandro Casarotti
thinking that we are not to outsource training and knowledge.
00:30:55 Alessandro Casarotti
And because for me, AI should come after the competence and not instead of it.
00:31:01 Alessandro Casarotti
Otherwise, the next generation will not be improved, will not be augmented by AI, but they will be just depending on it.
00:31:10 Alessandro Casarotti
And the dependence is not the future of internal audit.
00:31:14 Alessandro Casarotti
Judgment is and the knowledge is.
00:31:17 Alessandro Casarotti
So
00:31:17 Alessandro Casarotti
Alessandro, this was a really great conversation about a very hot topic and it's also very hard, I mean, to talk about ethics in this kind of tool, because ethic on AI, it's a very...
00:31:32 Alessandro Casarotti
difficult topic.
00:31:33 Alessandro Casarotti
So thank you very much for this discussion.
00:31:37 Alessandro Casarotti
I really enjoyed and I also want to thank you, the Institute of Internal Auditors, Grazie Miller from my side.
00:31:45 Alessandro Casarotti
And until next time.
00:31:46 Antonio Cacciapuoti
Thank you, Antonio, and thanks for having me again to speak with you.
00:31:50 Antonio Cacciapuoti
Always a pleasure and always a pleasure to be part of this podcast.
00:31:56 Antonio Cacciapuoti
Thank you, everyone.
00:31:57 Alessandro Casarotti
Thank you.
00:31:59 The IIA
Ready to strengthen your audit work with analytics, automation, and AI?
00:32:03 The IIA
Well, join the IIA's AAAI virtual conference on April 7th.
00:32:09 The IIA
You'll hear practical strategies, real-world insights, and earn up to 9 CPE credits, plus a bonus AI nano course included with registration.
00:32:18 The IIA
Learn more at the iia.org.
00:32:22 The IIA
If you like this podcast, please subscribe and rate us.
00:32:25 The IIA
You can subscribe wherever you get your podcasts.
00:32:27 The IIA
You can also catch other episodes on YouTube or at the iia.org.
00:32:31 The IIA
That's T-H-E-I-I-A.org.