
Recent revisions to the Criminal Division’s “Evaluation of Corporate Compliance Programs” (ECCP) guidance highlight, among other things, risks created by artificial intelligence (AI). It would therefore be prudent for compliance professionals to consider these revisions as they reevaluate the adequacy and effectiveness of their organization’s compliance program.

The most significant updates in the latest revision of the ECCP, published in September 2024, reflect the Department of Justice’s heightened focus on reducing risks associated with disruptive technologies. The revisions follow remarks by Deputy Attorney General Lisa Monaco in March 2024, in which she directed the Criminal Division to “incorporate assessment of disruptive technology risks, including risks associated with AI,” into the ECCP.

When resolving a corporate criminal case, prosecutors often assess how the company’s compliance program mitigated certain risks. “For a growing number of businesses, that now includes the risk of misusing AI,” Monaco stated in her remarks.

Additionally, Monaco warned that prosecutors will seek stiffer sentences for offenses “made significantly more dangerous by the misuse of AI.” Such misuse could include, for example, false approvals or falsified AI-generated documentation.

Principal Deputy Assistant Attorney General Nicole Argentieri remarked that compliance professionals and compliance programs play a role in mitigating AI-related risks. Prosecutors will consider, for example, whether compliance controls and tools are in place that can “confirm the accuracy or reliability of data used by the business.”

Prosecutors also will assess “whether the company is monitoring and testing its technology to evaluate if it is functioning as intended and consistent with the company’s code of conduct,” Argentieri added.
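
As a concrete illustration of what that kind of monitoring and testing might look like in practice, the sketch below periodically scores an AI system’s outputs against a human-labeled baseline and flags drift for compliance review. It is a hypothetical example only; the function names, the `predict()` stub, and the agreement threshold are assumptions for illustration, not anything prescribed by the ECCP or by NAVEX.

```python
# Hypothetical sketch: periodically test an AI system against a
# human-labeled baseline and flag drift for compliance review.
# All names, thresholds, and the predict() stub are illustrative
# assumptions, not part of any specific product or regulation.

from dataclasses import dataclass


@dataclass
class BaselineCase:
    input_text: str
    expected_label: str  # label assigned by a human reviewer


def predict(input_text: str) -> str:
    """Stand-in for a call to the AI system under review."""
    return "approve" if "low risk" in input_text else "escalate"


def test_against_baseline(cases: list[BaselineCase],
                          min_agreement: float = 0.95) -> bool:
    """Return True if the model still agrees with the human baseline
    often enough; otherwise flag the system for compliance review."""
    agreed = sum(1 for c in cases if predict(c.input_text) == c.expected_label)
    agreement = agreed / len(cases)
    if agreement < min_agreement:
        print(f"ALERT: agreement {agreement:.1%} is below "
              f"{min_agreement:.0%}; route to compliance for review")
        return False
    print(f"OK: agreement {agreement:.1%}")
    return True


if __name__ == "__main__":
    baseline = [
        BaselineCase("low risk vendor renewal", "approve"),
        BaselineCase("new vendor in sanctioned region", "escalate"),
    ]
    test_against_baseline(baseline)
```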

AI questions to consider

A new section in the revised ECCP directs prosecutors to assess how the company and the compliance program manage emerging risks. Taking cues from the ECCP, compliance professionals should consider the following additional questions in evaluating the organization’s compliance program:

  • Is management of risks related to the use of AI and other new technologies integrated into broader enterprise risk management (ERM) strategies?
  • What is the company’s approach to governance regarding the use of new technologies, such as AI, in its commercial business and in the compliance program?
  • How is the company curbing any potential negative or unintended consequences resulting from the use of new technologies, both in its commercial business and in its compliance program?
  • How is the company mitigating the potential for deliberate or reckless misuse of technologies, including by company insiders?

Prosecutors also will assess whether monitoring controls are in place to ensure that AI is used in a trustworthy manner, complies with applicable law and the company’s values, and is deployed only for its intended purposes. Additionally, prosecutors will assess human decision-making processes, how accountability for AI use is monitored and enforced, and how employees are trained to use AI responsibly.
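
One concrete way to support that kind of accountability is an append-only audit log of AI-assisted decisions recording who used the tool, for what stated purpose, and whether a human reviewed the output. The sketch below is a minimal illustration under those assumptions; the field names and the list of intended purposes are hypothetical, not requirements drawn from the ECCP.

```python
# Hypothetical sketch: an audit trail for AI-assisted decisions that
# records the user, stated purpose, and human sign-off, so that misuse
# or out-of-scope use can be detected later. Field names and the list
# of intended purposes are illustrative assumptions only.

import json
from datetime import datetime, timezone

INTENDED_PURPOSES = {"contract_review", "due_diligence_summary"}  # assumed policy


def log_ai_use(user: str, purpose: str, output_summary: str,
               human_reviewer: str | None,
               log_path: str = "ai_audit.log") -> None:
    """Append one JSON record per AI use; flag uses that fall outside
    policy or that lack a human reviewer for later follow-up."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "purpose": purpose,
        "output_summary": output_summary,
        "human_reviewer": human_reviewer,
        "flags": [],
    }
    if purpose not in INTENDED_PURPOSES:
        record["flags"].append("purpose_outside_policy")
    if human_reviewer is None:
        record["flags"].append("no_human_review")
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Example: one compliant use and one use that would be flagged for review.
log_ai_use("a.chen", "contract_review", "summarized indemnity clause", "j.ortiz")
log_ai_use("b.patel", "pricing_strategy", "drafted competitor analysis", None)
```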

AI regulations

In addition to the ECCP revisions addressing AI risk, many countries and some U.S. states continue to enact or consider AI laws and regulations.

In the United States, for example, Colorado’s landmark AI legislation, the Colorado AI Act, requires developers of “high-risk” AI systems to use “reasonable care” to protect consumers from any “known or reasonably foreseeable risks of algorithmic discrimination in the high-risk system,” according to the bill summary. A rebuttable presumption of “reasonable care” exists where the developer “complied with specified provisions in the Act.”

Outside the United States, the European Union passed the EU AI Act, described as the “first comprehensive regulation on AI by a major regulator anywhere.” The Act establishes certain compliance obligations based on the level of risk that an AI system poses.

With Department of Justice prosecutors placing greater focus on AI risk mitigation, and more U.S. states and countries enacting their own AI laws and regulations, it is more important than ever for compliance professionals to implement a robust AI governance framework, including effective AI policies and procedures and ethical AI practices.

To assist organizations in developing and deploying AI-related policies and procedures and to help them stay aligned with the fast-evolving AI regulatory landscape, NAVEX is continuously curating its content library. Currently, the NAVEX content library offers over 400 regulations and compliance frameworks, including several AI regulations and frameworks.

Key features of the NAVEX AI content library include:

  • Centralized AI regulatory resources, providing organizations access to a consolidated library of global AI regulations and industry-specific guidelines
  • Streamlined control development, enabling organizations to simplify the process of creating and implementing AI-specific controls to align with emerging regulations
  • Automated compliance monitoring, enabling organizations to leverage automation to track compliance requirements and ensure adherence through continuous control testing
  • Enhanced risk mitigation so that organizations can identify and mitigate AI-related risks proactively using structured regulatory frameworks
  • Future-proof compliance strategies that enable organizations to stay ahead of evolving AI laws and standards, ensuring the organization remains compliant and competitive

“As AI technologies revolutionize industries, companies find themselves standing at a crossroads, navigating the intricate landscape of effective AI governance,” said NAVEX Chief Product Officer A.G. Lambert. “Our AI content not only empowers risk management professionals to establish crucial controls but also enhances efficiency by automating compliance processes. This dual approach enables organizations to embrace AI technology with confidence, transforming challenges into opportunities for growth and innovation.”

Additionally, the NAVEX One platform offers employees and third parties a secure way to report any AI-related issues or concerns, enabling organizations to proactively identify and mitigate potential AI-related risks early on.

If you’re looking for a digestible version of the most recent DOJ ECCP guidance, you’re in the right place. Download the annotated guidance using the link below.

Download the annotated guidance