
Ask a compliance officer to name their top worry about artificial intelligence, and odds are they will blurt out something to do with privacy. That doesn’t just tell us what the risks of AI are – it also hints at how companies should try to manage those risks.

Indeed, if compliance officers want a thoughtful overview of how AI and privacy risks overlap, one good place to start is a recent report from the Bipartisan Artificial Intelligence Task Force in the U.S. House of Representatives. The report, released in December, cataloged the public policy implications of AI and recommended goals that lawmakers should keep in mind as they consider possible AI legislation.

Whether we’ll actually see any such legislation is anyone’s guess. Regardless, the report’s section on privacy raises important points that any compliance officer worried about AI will want to consider – because it explains how data fuels the growth of AI, and from there we can reverse-engineer some of the compliance risks that might arise.

For example, AI models “learn” by ingesting large amounts of data. Some well-known AI tools (think ChatGPT and its consumer-facing brethren) learn by scraping the internet for whatever data they can find. Other companies are building their own AI systems on data they control themselves. Since nobody tracks those projects, it’s impossible to know how many home-grown generative AI systems are under development.

Consider the compliance risks lurking beneath those AI efforts

Your company might not secure consent from customers or business partners before feeding their data into the AI.

Your company might believe it secured consent by amending privacy policies or user agreements, but subtle changes to those policies might still qualify as deceptive practices in the eyes of the Federal Trade Commission or other regulators.

Your IT team might purchase training data from an external provider without confirming that the data was properly sourced.

You secure all necessary permissions for the data you collect, but the AI becomes so clever it can infer undisclosed private data about someone anyway. (Famously, in 2012 a large retailer’s marketing systems deduced that a teen customer was pregnant and then accidentally disclosed that fact to her father when it mailed discounts for maternity products to her home.)

There’s more. A company can try to train its AI systems on “synthetic data,” which isn’t real and therefore avoids the consent issue – but AI trained on synthetic data might not perform as well. That simply exchanges compliance risk for operational risk: the AI makes worse choices. Those choices, in turn, could create compliance risks of their own, such as a poorly trained AI discriminating against customers or showing inappropriate material to minors.
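For readers unfamiliar with the term, synthetic data is fabricated from scratch rather than drawn from real people. Below is a toy sketch of the idea; the fields and value ranges are arbitrary examples, not a recommended schema.

```python
import random
import string

def synthetic_customer():
    """Generate one fake customer record tied to no real person."""
    name = "".join(random.choices(string.ascii_lowercase, k=8)).title()
    return {
        "name": name,
        "age": random.randint(18, 90),
        "monthly_spend": round(random.uniform(10, 500), 2),
    }

# A training set built this way sidesteps consent entirely. But it only
# reflects the statistics you chose to encode, which is why models
# trained on it can perform worse against real customers.
training_set = [synthetic_customer() for _ in range(10_000)]
```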

We could keep going, but you get the picture: privacy issues are inseparable from AI risks. So as compliance officers try to understand the compliance issues that arise from artificial intelligence, they need to start there.

Questions to ask as you get underway

If we look at AI compliance risks through that privacy lens, then as your company starts to put artificial intelligence to use, several questions should be on the compliance officer’s mind.

Who is in charge of AI at your organization? It could be that AI adoption is controlled through the technology department. Or perhaps nobody is in charge of AI, and various teams experiment with AI in their own way.

Neither answer is good. Managing artificial intelligence and its attendant risks requires a team-based approach – and the compliance officer should very much be part of that team. So should the technology, legal, cybersecurity, and finance functions, all working together to define your AI risks and how you’ll address them.

Are you making the right disclosures to users via privacy policies? Consult with legal, privacy, or regulatory experts as necessary to understand what you should disclose in privacy policies or user agreements about how users’ data (personal or otherwise) might be used for AI training and decision-making purposes.

Remember that different regulators might take different views on what constitutes a clear and sufficient disclosure. You’ll need to worry about the Federal Trade Commission, European privacy regulators working under the EU General Data Protection Regulation (GDPR), state regulators, and perhaps others.

Are you sourcing training data from proper, reliable sources? As always, third-party risk is never far away. You’ll also need mechanisms (such as contract management, due diligence, and data validation testing) to assure that no third party working on your AI efforts introduces data that shouldn’t be there.
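To make “data validation testing” a bit more concrete, here is a minimal sketch of a pre-ingestion screen for vendor-supplied training data. It assumes a CSV file; the file name and the handful of PII patterns are illustrative only, and a real control would rely on a dedicated scanning tool with a far broader rule set.

```python
import csv
import re

# Illustrative PII patterns only; a production screen would use a
# dedicated scanning tool and a much more comprehensive rule set.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def screen_training_file(path):
    """Flag rows in a vendor-supplied CSV that appear to contain PII."""
    findings = []
    with open(path, newline="", encoding="utf-8") as f:
        for row_num, row in enumerate(csv.reader(f), start=1):
            for field in row:
                for label, pattern in PII_PATTERNS.items():
                    if pattern.search(field):
                        findings.append((row_num, label))
    return findings

if __name__ == "__main__":
    # "vendor_training_data.csv" is a hypothetical file name.
    hits = screen_training_file("vendor_training_data.csv")
    if hits:
        print(f"Quarantine the file: {len(hits)} potential PII hits.")
    else:
        print("No obvious PII found; proceed to deeper review.")
```

The point is less the specific patterns than the gate itself: vendor data gets screened, and anything suspicious is quarantined, before it ever reaches the model.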

How will you test the results of your AI to be sure it’s working as intended? This is one of the new frontiers of artificial intelligence: a system might learn to behave in ways you don’t want. It might pick up bad judgment from bad data or take actions you didn’t expect. Either way, the output of AI systems will need close and regular scrutiny, to be sure their actions don’t bring compliance risks of their own.
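As one illustration of what that scrutiny could look like, below is a minimal sketch of an automated output review. The generate() function is a hypothetical stand-in for whatever model your company actually deploys, and the test prompts and red-flag terms are placeholders rather than a vetted rule set.

```python
# Hypothetical red-flag terms and test prompts; a real review battery
# would be built with input from legal and compliance.
RED_FLAGS = ["social security number", "guaranteed approval", "medical history"]

TEST_PROMPTS = [
    "What discount should we offer this customer?",
    "Summarize this loan applicant's file.",
]

def generate(prompt):
    """Stand-in for the real model call (your internal API or vendor SDK)."""
    return "We can offer a 10% discount."  # canned response for demonstration

def review_outputs():
    """Run fixed prompts through the model and flag risky responses."""
    flagged = []
    for prompt in TEST_PROMPTS:
        response = generate(prompt)
        for term in RED_FLAGS:
            if term in response.lower():
                flagged.append((prompt, term))
    return flagged

if __name__ == "__main__":
    for prompt, term in review_outputs():
        print(f"Review needed: '{term}' appeared in response to: {prompt}")
```

Run a fixed battery like this on a schedule, and again after every model update, and “close and regular scrutiny” becomes a repeatable control rather than an ad hoc review.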

Prepare now

As a compliance officer, you can’t stop the adoption of AI at your employer – nor should you. But you can be (and ideally should be) an indispensable part of adopting AI in a smart, risk-aware manner.

That will require capabilities that play to compliance officers’ strengths, such as risk assessment, regulatory change management, training, third-party risk management, and reporting. Consider the tools and processes you’ll need to do that efficiently and at scale (including the prospect of using AI to manage AI).

Equally important, consider the relationships you’ll need to cultivate to assure AI adoption goes well. That will include the IT, internal audit, legal, and security teams, and probably other parts of the enterprise as well. Above all, senior management will need to support the idea that compliance should be involved in AI plans from the very start.

For additional insights into how to manage the new frontier of AI, subscribe to the Risk & Compliance Matters blog. You can read more about AI in our other articles at the link below.

Read on about AI