
California sets the pace for all sorts of trends in the world – so perhaps compliance officers should take note of two recent advisories the state’s attorney general published on artificial intelligence. They capture a lot about the AI risks on regulators’ minds, and the capabilities your compliance program will need in response.

The advisories were published by state Attorney General Rob Bonta on January 13, 2025. The first reviews how California’s existing laws on competition, false advertising, discrimination, and other issues all extend to artificial intelligence; the second addresses how the state’s healthcare and privacy laws apply to AI.

Compliance officers, however, should appreciate the larger message the two advisories send together. They are a warning shot to companies doing business in California (which is pretty much every large company): you already have AI compliance risks today, regardless of whatever the United States, Europe, or anyone else might do about comprehensive AI regulation. Businesses must pay attention to those risks and be sure they adopt the technology wisely.

Bonta even said as much in a press release. “AI might be changing, innovating, and evolving quickly,” he said, “but the fifth largest economy in the world is not the wild west; existing California laws apply to both the development and use of AI. Companies, including healthcare entities, are responsible for complying with new and existing California laws and must take full accountability for their actions, decisions, and products.”

So, what does that mean for compliance officers on a practical basis? What mechanisms should you have in place so that businesses can, in Bonta’s words, take full accountability for their actions, decisions, and products?

A few best practices come to mind. In fact, most of them are practices your company should already have in place, with an AI spin on them.

Getting yourself ready for the AI world

First, build an awareness of AI risk into the governance of the company. For example, management could establish a designated AI governance committee that reviews and approves new AI use cases before some enterprising business team gets carried away with its AI dreams.

This shouldn’t be a new idea. Smart companies will already have an in-house risk committee that discusses operations across the enterprise and whether those activities might trigger new risks, compliance or otherwise. That committee could neatly add AI to its purview, to assure your AI adoption is channeled in prudent, risk-aware directions.

Second, brace for more challenging risk assessments. Yes, as always, compliance officers should start with a risk assessment – but in the case of AI, you actually need to perform two risk assessments.

First should be an assessment of any new laws that expressly regulate artificial intelligence, and whether those laws apply to you. For example, the California advisories mention several laws taking effect in the Golden State in 2026 that apply specifically to AI developers. The EU AI Act will eventually impose a great many compliance obligations on companies using AI. Clearly, a strong regulatory change management capability will be important here.

Second and more challenging, however, is an assessment of how your use of AI might change existing compliance risks your business already has. That will require close consultation (and perhaps some creative thinking) with legal, IT, and operations teams to understand what they want to do with AI, and whether your current policies and controls would still be fit for purpose.

Third, think about testing and monitoring of AI data and systems. AI systems “learn” by consuming ever more data. So how is your company validating that data before it’s fed into the AI model? How are you assuring that no personal data goes into the mix without consent?
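None of this requires exotic tooling to get started. As a minimal, hypothetical sketch (the patterns, function names, and categories below are illustrative assumptions, not anything drawn from the advisories), a pre-ingestion screen might quarantine records that appear to contain personal data before they ever reach a model:

import re

# Hypothetical pre-ingestion screen: flag records that appear to contain
# personal data (emails, US phone numbers, SSNs) before they reach a model.
# The patterns are illustrative; a real program would layer consent checks
# and dedicated PII-detection tooling on top of simple regexes like these.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_record(record: str) -> list[str]:
    """Return the PII categories detected in a record (empty list = clean)."""
    return [name for name, rx in PII_PATTERNS.items() if rx.search(record)]

def split_training_batch(records: list[str]):
    """Separate clean records from quarantined (record, findings) pairs."""
    clean, quarantined = [], []
    for record in records:
        findings = screen_record(record)
        if findings:
            quarantined.append((record, findings))
        else:
            clean.append(record)
    return clean, quarantined

clean, held = split_training_batch([
    "Customer praised the checkout flow.",
    "Reach me at jane.doe@example.com or 555-867-5309.",
])
print(held)  # the second record is held, flagged as ['email', 'us_phone']

The point is less the regexes themselves than the control design: data flows into the model only after passing a documented screen, and everything quarantined leaves an audit trail for review.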

Along similar lines, the decisions your AI makes need close observation to be sure the system doesn’t pick up bad habits, such as discriminating against certain groups or giving erroneous advice to consumers. (Both scenarios have happened already in the real world, by the way.) So how will your company monitor the AI’s behavior?
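What might that monitoring look like in practice? One common heuristic, borrowed from US employment law, is the “four-fifths rule”: if one group’s favorable-outcome rate falls below 80 percent of the best-performing group’s rate, the disparity warrants investigation. Here is a minimal sketch of that check, assuming you can log each AI decision as a (group, outcome) pair; the group labels and threshold are illustrative:

from collections import defaultdict

# Hypothetical monitoring check: compare an AI system's favorable-outcome
# rates across groups, flagging any group whose rate falls below 80% of
# the highest group's rate (the "four-fifths" heuristic).
def disparate_impact_check(decisions, threshold: float = 0.8):
    """decisions: iterable of (group, favorable: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            favorable[group] += 1
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best and r / best < threshold}
    return rates, flagged

# Example: a week of logged decisions.
rates, flagged = disparate_impact_check([
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
])
print(rates)    # {'group_a': 0.667, 'group_b': 0.333}
print(flagged)  # group_b flagged: 0.333 / 0.667 = 0.5, below the 0.8 threshold

A failing check doesn’t prove discrimination, but it does tell you where a human needs to look, which is exactly the kind of evidence regulators will expect you to produce.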

Both challenges will require you, the compliance officer, to have a close relationship with the IT and internal audit teams, since they’ll likely be the ones doing the work.

Fourth, prepare your humans for their AI tools. Some AI systems will act alone, making decisions and executing tasks in place of people; but many others will work in tandem with employees – say, to develop new products, intercept suspected shoplifters, or interact with patients. Your company will need to assure those employees are properly trained on the AI tools (and, conversely, that anyone who isn’t supposed to use the AI tools is blocked from those systems). 
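To make that last point concrete, here is a minimal, hypothetical sketch of gating an AI tool on both role entitlement and training status. Every name here (the tool, the roles, the training roster) is an assumption for illustration; in practice the roster would come from your learning management system and the roles from your identity provider:

# Hypothetical access gate: only employees with an entitled role AND a
# recorded training completion may invoke a given AI tool.
AI_TOOL_ENTITLEMENTS = {
    "claims_triage_assistant": {"claims_adjuster", "claims_supervisor"},
}
TRAINED_USERS = {"asmith", "bjones"}  # fed from LMS completion records

def may_use_tool(user_id: str, roles: set[str], tool: str) -> bool:
    """Allow use only if the user holds an entitled role and is trained."""
    allowed_roles = AI_TOOL_ENTITLEMENTS.get(tool, set())
    return user_id in TRAINED_USERS and bool(roles & allowed_roles)

assert may_use_tool("asmith", {"claims_adjuster"}, "claims_triage_assistant")
assert not may_use_tool("cdoe", {"marketing"}, "claims_triage_assistant")

The design choice worth noting: training completion is a hard precondition for access, not a parallel checkbox, so the “properly trained” and “blocked from those systems” requirements enforce each other.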

Fitting old risks into the AI era

The challenges of artificial intelligence might seem diverse and overwhelming, but to a certain extent, they’re also rather straightforward. Just read those advisories from the California attorney general – most of the material is about laws that have already been on the books in California for years. AI might raise new questions about how you comply with those laws, but what you must comply with has not changed.

Put another way, you’ll need to retool some elements of your compliance program to keep pace with artificial intelligence, but you won’t need to invent new elements from whole cloth. Regulatory change management, risk assessment, internal controls, testing, training – those capabilities will become even more critical as artificial intelligence seeps deeper and deeper into our world.

Artificial intelligence is just one of the many topics we cover in the 2025 Top 10 Trends in Risk & Compliance. For more insights into AI compliance strategy and trends, as well as many other topics, download the eBook and watch the webinar on demand.
