Compliance officers spend lots of time these days worrying about how their own company’s use of artificial intelligence might draw the ire of regulators – but there’s another dimension of AI risk to consider, too.
You need to worry about how others might use AI against your company.
Specifically, the U.S. Justice Department recently warned that businesses must anticipate the risk of fraudsters using AI to dupe their organizations into some sort of scheme – which means you’ll need to modify your existing processes and controls to keep pace with those new, AI-enhanced fraud risks.
Think about how that will need to work in practice. Compliance officers will need to collaborate with internal audit or anti-fraud teams to assess risk; work with IT departments to develop better controls and processes; and coordinate with business functions across the enterprise to put those new controls and processes into effect.
And you’ll need to do it all soon, since fraudsters are already hard at work figuring out how to use AI in ways your current anti-fraud program was never meant to address.
The many AI-driven fraud risks that are coming
First let’s consider a few examples of how fraudsters might use artificial intelligence against you.
- They could use generative AI tools to forge convincing fabrications of passports or other identity documents to pass themselves off as a legitimate customer rather than, say, a terrorist looking to launder money
- They could use voice-cloning tools to impersonate senior executives at your organization, and then call lower-level employees, ordering them to wire money overseas or to release important news early
- They could intercept legitimate invoices and use AI to alter key details, diverting payments to accounts under their control or otherwise siphoning off company money
- They could use various tools to create fake shell companies and bogus business histories, as part of a scheme to hide corruption payments
We could keep going. (When I asked ChatGPT for examples of how AI could be used in corporate crime, it gave me a half-dozen other examples.) The theme in all of it, however, is that AI will help fraudsters to appear more convincing: better forgeries, better impersonations, more realistic dummy corporations, and so forth.
That is the fixed point around which all your company’s anti-fraud efforts must orbit. If AI will make fraudsters appear even more slick, more compelling, more believable – then your company’s anti-fraud program will need to become even more skeptical, with more “points of challenge” to sniff out that AI-enhanced fraudster.
In practice, that will require a team approach: (a) better technology to detect AI-manipulated material; (b) more controls that slow down important decisions (such as wiring $10 million overseas); and (c) better training, so that employees exercise more skepticism when something simply doesn’t feel right.
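To make point (b) concrete, here is a minimal sketch in Python of one such “point of challenge” – a dual-approval rule that refuses to release a large wire transfer on any single person’s say-so. The threshold, the names, and the workflow are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field

# Hypothetical threshold above which no single person can release a wire;
# your own risk assessment would set the real number.
DUAL_APPROVAL_THRESHOLD = 50_000

@dataclass
class WireRequest:
    requester: str
    beneficiary_account: str
    amount: float
    approvals: list[str] = field(default_factory=list)

def can_execute(request: WireRequest) -> bool:
    """A 'point of challenge': large transfers need two independent approvers."""
    if request.amount < DUAL_APPROVAL_THRESHOLD:
        return len(request.approvals) >= 1
    # Above the threshold, require two different approvers, neither of
    # whom is the requester -- this is what slows the decision down.
    independent = {a for a in request.approvals if a != request.requester}
    return len(independent) >= 2

# A $10 million request with only the requester's own sign-off is blocked.
req = WireRequest("payroll_clerk", "ACCT-1234", 10_000_000, approvals=["payroll_clerk"])
assert not can_execute(req)
req.approvals += ["controller", "treasurer"]
assert can_execute(req)
```

The design choice matters more than the code: the control deliberately introduces friction at exactly the moment an AI-enhanced impersonation would otherwise sail through.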
All of that will then need to be tested and documented, so that if your company suffers an incident and regulators do start asking how prepared you were, you’ll have all the evidence of a strong risk management program ready to go.
Improving fraud risk assessment and controls
Strong anti-fraud controls rest upon a strong fraud risk assessment, so start there. Bring together important business functions (sales, procurement, accounting, and so forth) that might encounter fraudsters, as well as whatever anti-fraud team your organization has (internal audit or a dedicated fraud team, for example).
Review the various fraud risks your company faces, and consider how fraudsters might use AI to evade the controls you have in place. Your anti-fraud team could even use AI itself to test whether the controls pass muster.
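One simple way to capture the output of that exercise is a risk register that re-scores each fraud risk on the assumption that AI-equipped fraudsters are now in play. The sketch below is purely illustrative – the entries, the 1-5 scales, and the scoring scheme are assumptions your own workshop would replace with its judgments:

```python
# Hypothetical risk register: (fraud risk, existing control,
# likelihood 1-5 assuming AI-equipped fraudsters, impact 1-5).
RISK_REGISTER = [
    ("Forged identity documents", "Manual document review", 4, 4),
    ("Voice-cloned executive requests", "Phone confirmation by admin", 4, 5),
    ("Altered supplier invoices", "Three-way match against POs", 3, 4),
]

def prioritize(register):
    """Rank risks by likelihood x impact so control redesign starts at the top."""
    return sorted(register, key=lambda r: r[2] * r[3], reverse=True)

for risk, control, likelihood, impact in prioritize(RISK_REGISTER):
    print(f"score {likelihood * impact:>2}: {risk} (current control: {control})")
```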
Where your current controls might not pass muster any longer, you’ll need to design, test, implement, and document new ones. This is where things could get interesting.
For example, your company might implement more detailed policies about when employees can transfer money outside the company. Even if the CFO seems to be on the phone directly telling an admin to execute a transfer, that voice could be a clone; so you might implement a challenge question of some kind, or institute a policy that someone physically in the office must counter-sign the CFO’s phone instruction.
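As a sketch of what that verification step could look like in software, the hypothetical workflow below refuses to act on an inbound voice instruction alone. It issues a one-time code over a pre-registered out-of-band channel – a callback number set up in person, which a cloned voice on an inbound call cannot supply. The directory and names are assumptions for illustration:

```python
import secrets

# Hypothetical directory of callback numbers registered in person at
# onboarding -- the caller never gets to supply the number mid-call.
CALLBACK_DIRECTORY = {"cfo": "+1-555-0100"}

def start_verification(executive_id: str) -> str | None:
    """Issue a one-time code to be confirmed over the registered channel."""
    number = CALLBACK_DIRECTORY.get(executive_id)
    if number is None:
        return None  # no registered channel: escalate, do not transfer
    code = f"{secrets.randbelow(10**6):06d}"
    # A real system would place the outbound call or push message to
    # `number` here; the sketch simply returns the expected code.
    return code

def verify(expected_code: str, response: str) -> bool:
    # Constant-time comparison avoids leaking the code via timing.
    return secrets.compare_digest(expected_code, response)
```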
Compliance and anti-fraud teams must also remember the importance of training employees on new procedures, and then testing to see whether those procedures are followed. After all, a payroll clerk might feel awkward asking the CEO, “What was the book you mentioned to me last week?” – but the clerk would feel a lot worse if he didn’t ask, and then sent the entire company’s personnel data to parts unknown.
None of this is new per se. Fraudsters have been trying to dupe payroll clerks into sharing personnel data for years; they just did so first with crude tools such as email, and then with more sophisticated tools as technology improved.
Now AI is giving fraudsters another leg up, and your company can’t ignore that fact.
Indeed, go back to the Justice Department’s warning. Nicole Argentieri, head of the Criminal Division, said expressly at a recent compliance conference:
“Prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks, such as tools to confirm the accuracy or reliability of data used by the business.”
The specifics of how you assess AI-driven fraud risks and what controls you implement to keep those risks in check – that’s up to each individual company. Whatever path you choose, however, the Justice Department will want to see that you took a thoughtful, reasoned approach to your risk assessment and then made good-faith judgments about the best mix of technology, training, and processes your company uses to fight fraud.
To that extent, the Justice Department’s demands are nothing new, and as usual, compliance officers should be in the thick of it.
When it comes to AI, there is no shortage of new information, new risks, and fast-moving developments. We’re staying close to what this all means for compliance programs, so subscribe to Risk & Compliance Matters for more information and check out our other articles on AI below.