Corporate compliance officers have been bracing for regulation of how companies can use artificial intelligence in their daily business operations. Now we have a fresh glimpse of what that regulatory landscape might look like – and how compliance officers will need to respond to it – in the state of Colorado.
Colorado passed a law at the end of May requiring businesses that use “high-risk” artificial intelligence systems to take steps to avoid AI-based discrimination. With that legislation, Colorado has become the first state in the United States to take a swing at AI regulation. More are sure to follow, so compliance officers and corporate technology teams should start thinking now about the capabilities they’ll need to meet those new obligations.
Let’s start with the Colorado law itself, which goes into effect in February 2026. It requires all “developers and deployers” of artificial intelligence – so that includes companies that simply use AI systems, rather than develop AI themselves – to use reasonable care to avoid AI-based discrimination.
Specifically, companies will need to implement an AI risk management policy, and post a public statement declaring what AI systems the company uses that might affect consumers. Companies will also need to conduct an “impact assessment” of their AI systems, and notify the state attorney general within 90 days whenever they discover AI discrimination has happened.
The good news for companies is that consumers will not be able to file their own lawsuits if they are victims of AI discrimination; only the state attorney general will have authority to enforce the law, and companies with fewer than 50 employees will be exempt from many of its requirements.
That’s the quick legal analysis. So, what are the implications for compliance officers and the programs you run?
Think about AI systems, controls and risks
Foremost, the Colorado law (and others like it, both in the United States and Europe) will drive compliance officers to engage with the rest of the business in conversations about how the company wants to use artificial intelligence. All of you – the IT team developing AI, the operations teams using it, the compliance team worried about its risk – will need to work together to develop sensible policies, procedures, and other governance mechanisms to fulfill that mandate against AI-driven discrimination.
For example, you should have policies declaring that anti-discrimination is a high priority for any AI systems employees use, and procedures to test the results of AI systems to confirm that, no, they don’t discriminate against customers or consumers. You might also want to impose policies and controls on which employees can use AI systems, which business needs AI systems are allowed to address, and so forth.
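As one illustration, here is a minimal sketch of what such a testing procedure might look like in code, using the “four-fifths rule” as a screening heuristic. The rule, the decision-log format, and the function names are all assumptions made for the example, not requirements of the Colorado law.

```python
# Minimal sketch of a disparate-impact screen over an AI system's decisions.
# Assumes you can log each automated decision alongside the consumer's
# demographic group; the four-fifths (0.8) threshold is a common screening
# heuristic from US employment practice, not a Colorado-specific standard.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times
    the most-favored group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items() if rate / best < threshold}

# Toy decision log for illustration only.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]
print(disparate_impact_flags(decisions))  # {'group_b': 0.5}
```

A screen like this won’t prove a system is fair, but it gives the compliance team a repeatable, documentable test to run before deployment and at regular intervals afterward.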
Where might those policies and procedures come from? You could start by searching for template policies available online; or by adopting any of several AI risk management frameworks that have emerged lately, such as the NIST AI Risk Management Framework or the ISO 42001 standard for AI management systems. You may want to use a GRC tool to map out your company’s AI operations, any internal controls you do or don’t have, and what regulations require.
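To make that mapping exercise concrete, here is a minimal sketch of the kind of structure such a mapping might hold. The system names, controls, and the SB 24-205 citation style are illustrative placeholders, not output from any actual GRC product.

```python
# Hypothetical sketch of a GRC-style mapping: each AI system is linked to the
# risks it raises, the internal controls that mitigate those risks, and the
# legal obligations the controls are meant to satisfy.
ai_control_map = [
    {
        "system": "resume_screening_model",   # hypothetical system name
        "risks": ["discriminatory screening of applicants"],
        "controls": ["quarterly disparate-impact testing",
                     "human review of all rejections"],
        "obligations": ["Colorado SB 24-205: reasonable care, impact assessment"],
    },
    {
        "system": "customer_support_chatbot",
        "risks": ["inaccurate advice to consumers"],
        "controls": ["approved-response library", "escalation to a human agent"],
        "obligations": ["public statement of consumer-facing AI systems"],
    },
]

# A simple gap report: systems with listed risks but no mapped control.
gaps = [row["system"] for row in ai_control_map if not row["controls"]]
print("Systems with uncontrolled risks:", gaps or "none")
```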
Another useful resource comes from the state of New York. Earlier this year its Department of Financial Services proposed rules for AI in the insurance sector. Firms would need to provide written documentation of how they plan to use AI, including:
- A description of how you identify operational, financial and compliance risks associated with AI, and the associated internal controls designed to mitigate those risks.
- An up-to-date inventory of all AI systems you are currently using, developing, or have recently retired (a simple sketch of such a record appears after this list).
- A description of how each AI system operates, including any external data or other inputs and their sources, and any potential risks and appropriate safeguards.
- A description of how you track changes to your AI usage over time, including documented explanations of any changes, the rationale for those changes, and who approved them.
- A description of how you monitor AI usage and performance, including any previous exceptions to policy and how those exceptions were reported.
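To make those documentation items concrete, here is a minimal sketch of what one inventory record might capture; the field names are our own invention, not prescribed by the NY DFS proposal or the Colorado law.

```python
# Illustrative sketch of a single AI inventory record covering the
# documentation items above. Field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    status: str                      # "in use", "under development", "retired"
    purpose: str                     # how the system operates / business use
    data_sources: list[str]          # external data and other inputs
    risks: list[str]
    safeguards: list[str]
    change_log: list[dict] = field(default_factory=list)   # what, why, approver
    policy_exceptions: list[str] = field(default_factory=list)

# Hypothetical example record.
record = AISystemRecord(
    name="underwriting_score_model",
    status="in use",
    purpose="Scores applications for underwriting review",
    data_sources=["internal claims history", "third-party credit data"],
    risks=["proxy discrimination via credit data"],
    safeguards=["annual impact assessment", "actuary sign-off on changes"],
)
record.change_log.append(
    {"change": "retrained on 2024 data", "rationale": "drift", "approved_by": "CCO"}
)
```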
Those are great practices for any business to follow, whether you’re in the insurance industry or not. They force the company to think seriously about how it wants to use artificial intelligence, the risks that might arise from those use cases, and how you plan to manage those risks. That’s exactly what the Colorado law wants you to do too.
Forge the right relationships
Of course, even when armed with template policies and AI risk management frameworks, compliance officers still need to translate those abstract best practices into specific policies, procedures and controls that make sense for your business. So yet again we’re back to the importance of compliance officers working well with the business; that’s going to matter enormously for the success of any AI compliance program.
The reality is that in most companies the chief compliance officer can’t just declare a new set of policies and procedures by fiat; that’s how you get labeled ‘the Department of No’ and ignored. You’ll need to talk with other parts of the enterprise about their AI ambitions, listen to their concerns, convey the reality of the situation (“This is what the law requires us to do; we need to figure it out somehow”) and arrive at a practical path forward.
Compliance officers might have an easier time engaging in this exercise with artificial intelligence, since AI is still new and most people understand that poorly managed AI can lead to disaster. That’s a far different situation than, say, the dawn of anti-corruption compliance in the mid-2000s, when compliance officers had to graft new policies, procedures, and ethical expectations onto existing business processes. AI is still in its infancy, and we have a unique opportunity to steer it down a compliance-centric path.
Will the new Colorado law spark that conversation among CCOs, senior managers, and the rest of the business? One can hope. Or if the Colorado law doesn’t, other AI regulations are close behind.
No matter which regulation catches your company first, all these points about strong risk management and an enterprise-wide awareness of compliance are going to hold true.
AI, its implications, the laws governing its use, and how organizations must adapt to stay compliant all add up to an ongoing discussion that promises to be lively and complicated. Subscribe to our blog for the latest updates, and check out our posts related to AI to dive deeper into this area!