The rise of artificial intelligence – and specifically of generative AI, which can create entirely new images, sounds, and text from just a few prompts – has been the most important technology development of the decade so far.
The challenge for 2024 (and the years beyond) will be how to put AI to profitable, gainful, ethical use in the corporate enterprise. The compliance function needs to anticipate everything that challenge will entail.
For example, compliance teams themselves could use AI to streamline or strengthen the compliance function. Other parts of your enterprise could find ways to put AI to good use in their own operations, too – or they could blunder forward recklessly, causing all manner of compliance and cybersecurity risks.
So even as compliance officers start using AI within their own function, they’ll need to serve as trusted advisers to senior management and the rest of the enterprise, so those other parts of the business can use AI in a prudent, legal, and risk-aware manner.
Understand both the positive and the negative
The positive is that the technology behind ChatGPT and its generative AI brethren is enormously powerful. Generative AI first uses natural language processing (NLP) to let human users submit queries in the same plain language we use with each other. Then, based on the vast troves of data it has already studied, the AI predicts the string of words, numbers, or pixels that is most likely to be a good answer to the user’s question.
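To make that concrete, here is a minimal sketch of what such a plain-language query looks like in code, using the OpenAI Python SDK as one example provider. The model name is illustrative, and any vendor’s API would follow the same basic pattern:

```python
# Minimal sketch: a plain-language question goes in, and the model
# predicts the string of words most likely to answer it.
# Assumes the OpenAI Python SDK (openai>=1.0) is installed and an
# OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "In two sentences, what does a compliance officer do?"}],
)
print(response.choices[0].message.content)
```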
One can see the compelling use cases here. A business could essentially layer an NLP interface over its own data, so employees could ask questions such as: Which customers are our biggest spenders? Which job applicants have the skills most relevant to our needs? Which resellers ask permission to offer price discounts most often? And so many more. The AI would then return clear, straightforward answers immediately.
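A rough sketch of that idea follows, under heavy assumptions: the schema, database file, and question below are invented for illustration, and a production system would validate the generated SQL before running anything against real data:

```python
# Hypothetical sketch of an NLP layer over company data: the model
# translates a plain-language question into SQL, which we then run.
# Schema, file name, and question are invented for illustration.
import sqlite3
from openai import OpenAI

SCHEMA = "CREATE TABLE orders (customer TEXT, amount REAL);"
QUESTION = "Which customers are our biggest spenders?"

client = OpenAI()
sql = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": (f"Given this SQLite schema:\n{SCHEMA}\n"
                           f"Write a single query answering: {QUESTION}\n"
                           "Return only the SQL, no commentary.")}],
).choices[0].message.content

# In production you would validate the generated SQL (read-only
# connection, allow-listed tables) before executing it blindly.
conn = sqlite3.connect("company.db")  # hypothetical database file
for row in conn.execute(sql):
    print(row)
```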
The negative, however, is that without strong guardrails, the AI might not always return accurate answers. Or it might consume the information you provide it – including confidential information – to help it learn how to answer questions for the next user. It could interact with employees and customers in unexpected ways. It could learn from a flawed set of data, picking up bad intellectual habits and giving bad answers just as any human would.
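One simple guardrail is to screen prompts before they leave the company at all. The sketch below is illustrative only – the denylist terms and patterns are invented, and real deployments layer many more controls on top:

```python
import re

# Hypothetical pre-flight guardrail: refuse or redact obviously
# confidential material before a prompt is sent to an external AI.
# The denylist terms below are invented examples.
DENYLIST = ["project falcon", "acquisition target"]
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # US Social Security numbers

def screen_prompt(prompt: str) -> str:
    lowered = prompt.lower()
    for term in DENYLIST:
        if term in lowered:
            raise ValueError(f"Prompt touches a restricted topic: {term!r}")
    return SSN_RE.sub("[REDACTED]", prompt)

print(screen_prompt("Summarize the policy for employee 123-45-6789"))
# -> Summarize the policy for employee [REDACTED]
```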
Remember what we said earlier: the technology behind generative AI is enormously powerful. Companies will need to channel that enormous power in the proper ways, or risk courting disaster.
The guardrails begin with governance
As we enter 2024, the immediate challenge for organizations will be to establish an enterprise-wide governance structure for how the business embraces AI. That is, some senior group within the company – let’s call it a steering committee – needs to articulate the basic guidelines for how the company adopts AI in a sensible, compliance-oriented manner. Then other employees further down the org chart can develop the specific AI use cases that make the most sense for your business.
That steering committee should include at least the CISO, the chief compliance officer, your head of technology, the CFO, and the general counsel. Other plausible candidates (depending on your business model and objectives) might include the heads of HR, marketing, and other functions.
These steering committees could be a place for the chief compliance officer to shine. After all, most members of the steering committee will be strong on envisioning use cases, but not on understanding all the risks involved. You, the CCO, should be the consigliere guiding the committee as it maps out your AI adoption strategy.
For example, we already see some early instances of governments regulating how AI is used. In New York City (under Local Law 144), employers that want to use AI to screen job applicants (including something as simple as automated keyword searches) must perform a “bias audit” on the AI and post the results online. If that rule applies to your business, does the HR team know about it? Who is working to assure the bias audit is conducted promptly and correctly?
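To give a flavor of what such an audit measures: the core calculation is typically an impact ratio – each group’s selection rate compared against the highest-selected group. Here is a simplified sketch with invented numbers; an actual audit must follow the law’s detailed rules and be performed by an independent auditor:

```python
# Illustrative impact-ratio calculation of the kind a bias audit
# might include. Outcomes below are invented; NYC Local Law 144
# specifies the details a real audit must follow.
from collections import Counter

# Hypothetical screening outcomes: (demographic category, passed_screen)
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

applicants = Counter(cat for cat, _ in outcomes)
selected = Counter(cat for cat, passed in outcomes if passed)
rates = {cat: selected[cat] / applicants[cat] for cat in applicants}
top_rate = max(rates.values())
impact_ratios = {cat: rate / top_rate for cat, rate in rates.items()}
print(impact_ratios)  # -> {'A': 1.0, 'B': 0.5}
```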
Those are the sorts of questions an AI steering committee can explore, and compliance officers can play a valuable role bridging the regulatory world and the business world. 2024 will be a year where compliance officers can help the enterprise adopt AI in an ethical, legal, sustainable way. Seize that opportunity.
A model for AI in the compliance function
Compliance officers can also spend 2024 figuring out how to integrate AI into their own operations. There’s a lot of potential here.
We mentioned earlier that AI learns by consuming large piles of data. Well, corporations have data in spades. So, you could develop a generative AI tool that only studies your own data about transactions, third parties, internal employee communications, and more. You could then start asking the AI questions about compliance risks in simple, direct sentences. You’d get simple, direct answers in return.
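In sketch form, that is a retrieval-augmented setup: pull the most relevant internal records, hand them to the model as context, and ask the question. Everything below – the records, the naive keyword retrieval, the model name – is invented for illustration; a real system would use proper search infrastructure and access controls:

```python
# Minimal retrieval-augmented sketch: answer compliance questions
# against only the company's own records. All data here is invented.
from openai import OpenAI

RECORDS = [
    "2024-01-03: vendor Acme asked for a third discount approval this quarter",
    "2024-01-09: gift register entry, client dinner $400, pre-approved",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Naive keyword-overlap retrieval; real systems use embeddings."""
    words = set(question.lower().split())
    return sorted(RECORDS,
                  key=lambda r: -len(words & set(r.lower().split())))[:k]

question = "Which third parties ask for discount approvals most often?"
context = "\n".join(retrieve(question))

client = OpenAI()
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": (f"Using only these internal records:\n{context}\n\n"
                           f"Answer: {question}")}],
).choices[0].message.content
print(answer)
```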
Of course, this assumes your company has good data management practices, and that the compliance team has access to all of that data. That suggests another goal compliance officers might want to set in 2024: work closely with other parts of the enterprise to build strong data management practices, and be sure you have access to, and oversight of, all the data, so your adoption of AI can return maximum value.
What about AI regulation?
The regulation of AI is still in its infancy. We’ve seen some early attempts at the task – such as the aforementioned New York City law – but in both the United States and Europe, specific regulations are still rare.
It’s possible we’ll see more movement on that front in 2024. The European Union, for example, has been negotiating its Artificial Intelligence Act (first proposed in 2021), which would (among other things) subject generative AI systems to transparency and risk-management obligations before commercial release; but that legislation still has steps to clear before going into effect. The Biden Administration has also published voluntary principles for responsible AI use, but those are both vague and non-binding.
Then again, compliance officers don’t need AI-specific regulations to keep themselves busy. AI is already here, seeping into business operations across the enterprise. 2024 will be a year for compliance officers to engage with senior management about how to adopt artificial intelligence, and for you to sharpen your own GRC technology capabilities to take full advantage of AI yourself.
2024 prediction
As AI technology continues to develop and gain traction across the world, so will the proposed regulations governing how it is used. We can expect these regulations to vary depending on the use cases, geographies, and specific functions of the artificial intelligence itself.
Artificial intelligence will likely galvanize leaders, who must get on the same page about how it is used in the business and how policies are enforced. We can expect more and more compliance and cybersecurity leaders to step up and uphold appropriate governance and security as AI becomes a staple technology for improving efficiency and accuracy in organizations.
Top 10 Trends in Risk & Compliance
For many more insights and guidance, download the full eBook and access the accompanying webinar, featuring analysis and expert insights from Carrie Penman and Kristy Grant-Hart.