
This article was originally published on the Radical Compliance blog.


The Justice Department is rolling out fresh guidance for how it evaluates corporate compliance programs, with new sections addressing artificial intelligence, a company’s speak-up culture, and whether compliance teams have sufficient access to the data they need to keep their programs on the right path.

The head of the department’s Criminal Division, Nicole Argentieri, announced the updates Monday morning during a speech she delivered at the Society of Corporate Compliance & Ethics’ annual conference, which is happening this week in Dallas. Argentieri had lots to say, so expect to hear plenty of analysis on her speech over the next several weeks.

Let’s begin with how a company supports a speak-up culture, since that issue gets at the overall corporate culture of an organization more than anything else. In her speech, Argentieri hit all the notes one would expect to hear from a Justice Department official.

“Our prosecutors will closely consider the company’s commitment to whistleblower protection and anti-retaliation by assessing policies and training, as well as treatment of employees who report misconduct,” she said. “We will evaluate whether companies ensure that individuals who suspect misconduct know how to report it and feel comfortable doing so, including by showing that there is no tolerance for retaliation.”

OK, fair enough; and if we look at the new guidance itself, those lofty words translate into more specific questions such as:

  • Does the company encourage and incentivize reporting of potential misconduct or violation of company policy? 
  • Conversely, does the company use practices that tend to chill such reporting? 
  • How does the company assess employees’ willingness to report misconduct?

Right away, we see that compliance officers will need to be good at assessing the culture of their organizations: everything from incentive structures meant to encourage reporting, to the more intangible pressures that might induce employees not to speak up.

So, think about which metrics would help you understand the speak-up culture in your organization (or the lack thereof), and how you’d gather that information, including the possibility that the compliance team doesn’t gather it at all and instead HR gathers it on your behalf.

The guidance also has a few other questions specifically about whistleblower protection and anti-retaliation policies:

  • Does the company train employees on both internal anti-retaliation policies and external anti-retaliation and whistleblower protection laws? 
  • Does the company train employees on internal reporting systems as well as external whistleblower programs and regulatory regimes?
  • To the extent that the company disciplines employees involved in misconduct, are employees who reported internally treated differently from others involved in the misconduct who did not report? 

Interesting to see that the department won’t confine its questions to your own internal policies and procedures for whistleblower protections; it reserves the right to ask whether you tell employees about external laws and reporting channels they could also use to raise allegations of misconduct. We’ll pause for a moment while you double-check whether your training specifically mentions those various options employees have to take their complaints outside your house.

AI risks and other issues

Argentieri also talked about artificial intelligence, and specifically how companies wrestle with AI from a governance perspective. That is, how much does your company think about the ways it uses AI itself, and how much do you think about the ways AI might be used against you? And then, what system does the company have in place to adjust its policies, procedures, and controls accordingly?

For example, Argentieri said this: 

Prosecutors will consider whether the company is vulnerable to criminal schemes enabled by new technology, such as false approvals and documentation generated by AI. If so, we will consider whether compliance controls and tools are in place to identify and mitigate those risks, such as tools to confirm the accuracy or reliability of data used by the business. 

That’s quite a can of worms to dump on the compliance officer’s desk. It forces you, along with the cybersecurity team, the internal audit team, and operations teams, to think about “challenge controls” you might put in place to sniff out false documentation created by AI.

For example, sketchy customers or overseas agents will inevitably use AI to supply you with false documentation so that they appear to be upstanding business partners. In that case, you’ll need robust due diligence procedures to cross-reference that material against external databases, so you can identify the bogus stuff. Do you have those due diligence capabilities? Can you access independent data that might disprove the AI-generated falsehoods that third parties give you?

We could say the same for criminals impersonating customers or business partners, such as through a business email compromise. Have you implemented requirements for multi-factor authentication? Have you trained employees on what a bogus wire transfer request might look like, and when to seek independent confirmation? (Better yet, have you amended payment policies so that no single executive, including the CEO, has authority to demand a large wire transfer?)

AI is going to make fraud detection much harder. Now here is the Justice Department telling us it expects you to upgrade your anti-fraud controls to address that new reality. So, at what point would your inability to make those improvements lead to some sort of negligence that the Justice Department might pounce upon? I don’t know, but compliance and internal audit teams would be wise to move quickly on this rather than find out.

Compliance program access to data

Argentieri’s third big point was the compliance officer’s access to data, and whether the compliance team has sufficient technology to put that data to proper use. As part of prosecutors’ assessment of compliance programs, she said, “we will also consider whether companies are putting the same resources and technology into gathering and leveraging data for compliance purposes that they are using in their business.”

Access to data is not a new theme; Justice Department officials have been talking about it for years. This week’s guidance, however, does put a sharper point on the issue with questions such as:

  • Do any impediments exist that limit or delay access to relevant sources of data and, if so, what is the company doing to address the impediments? 
  • Do compliance personnel have knowledge of and means to access all relevant data sources in a reasonably timely manner?

Then come two questions about “proportionate resource allocation,” as the department calls it:

  • How do the assets, resources, and technology available to compliance and risk management compare to those available elsewhere in the company? 
  • Is there an imbalance between the technology and resources used by the company to identify and capture market opportunities and the technology and resources used to detect and mitigate risks?

Think about what the department is really asking here: whether your company considers ethics and compliance risks as it develops and deploys its growth plans.

That’s the difference between an organization that treats compliance as a bolt-on, necessary evil (“This is how we plan to conquer the world; compliance officer, here are some peanuts to keep you fed”) and one that puts good business conduct at the center of its strategy (“We want to conquer the world in the following ways; but how do we do that ethically?”).

The battle for compliance officers is how to tilt their boards and senior management teams toward the latter, rather than the former.