
Artificial intelligence keeps improving at all sorts of things – including its ability to challenge corporate ethics and compliance programs. Even as you may still be struggling to tame the risks of generative AI, its more powerful cousin is already coming up fast: agentic AI.

Agentic AI, as the name implies, is an AI “agent” that can act independently of humans to achieve various goals. It can devise its own strategies to achieve those goals, and learn from previous experiences to improve its strategies. It can even collaborate with other AI agents to solve complicated tasks in a coordinated fashion.

On one hand, agentic AI sounds great. AI agents could act on your behalf to purchase concert tickets, keep your home heated or cooled throughout the day, or re-book the rest of your travel plans if you miss your first flight.

In the corporate world, however, the risks of agentic AI become clearer. Consider:

  • AI agents could monitor your inventory levels and forecast demand, and then order new supplies and materials as necessary. That raises the question of which suppliers the agent might use, and all the financial, forced-labor, cybersecurity, and sanctions risks therein.
  • AI agents could operate as customer service bots, resolving customer complaints on their own. That raises the specter of AI agents making promises to customers that violate company policy and expose the company to new legal obligations. (This actually happened to a Canadian airline last year.)
  • AI agents could be the front line of the HR function, reviewing resumes and even screening job applicants in preliminary interviews. That raises questions of transparency, fairness, and discrimination risk.

In other words, agentic AI could do all sorts of things for corporations – but it will bring along all sorts of compliance, operational and ethical risks, too. Your corporate compliance program needs to anticipate those risks now.

What are the risks of agentic AI?

The good news is that while agentic AI is a new category of artificial intelligence, it’s still artificial intelligence, and companies have been tinkering with AI for several years now. So the same fundamental steps companies should already be taking to guide their adoption of AI apply equally to agentic AI.

Key actions to manage agentic AI risks, and AI risks in general, include:

  1. Have a system in place to govern how your organization adopts AI. You don’t want employees experimenting with AI use cases on their own, where they might not understand the legal, security, or ethical risks lurking within the idea they want to try.
  2. Perform rigorous risk assessments on those use cases. Yes, AI systems must comply with an emerging set of AI-centric laws, such as the EU AI Act – but they also need to comply with a much larger set of pre-existing laws and regulations for consumer protection, anti-discrimination, privacy, and more.
  3. Implement controls to keep your AI risks in check. Those controls could be anything from procedures to validate data before it’s fed into AI systems, to audits or monitoring of the output of AI systems, to training employees on the careful use of AI, and more. (A minimal sketch of one such monitoring control follows this list.)
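
To make that third step concrete, here is a minimal sketch of what an output-monitoring control for a purchasing agent might look like. Everything in it is hypothetical: the function names, the approved-supplier list, and the spending threshold are all invented for illustration, and a real control would be shaped by your own risk assessments and procurement policies.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical policy limits; real values would come from your
# procurement policies and risk assessments.
APPROVED_SUPPLIERS = {"acme-materials", "northstar-logistics"}
MAX_AUTO_APPROVE_USD = 10_000.00

@dataclass
class PurchaseProposal:
    supplier: str
    amount_usd: float
    rationale: str  # the agent's stated reason for the order

@dataclass
class ControlDecision:
    approved: bool
    reasons: list
    timestamp: str

def review_agent_purchase(proposal: PurchaseProposal) -> ControlDecision:
    """Screen an AI agent's proposed purchase against policy before it executes.

    Proposals that fail any check are held for human review rather than
    blocked silently, so compliance keeps a complete audit trail.
    """
    reasons = []
    if proposal.supplier not in APPROVED_SUPPLIERS:
        reasons.append(f"supplier '{proposal.supplier}' is not on the approved list")
    if proposal.amount_usd > MAX_AUTO_APPROVE_USD:
        reasons.append(f"amount ${proposal.amount_usd:,.2f} exceeds the auto-approval limit")

    decision = ControlDecision(
        approved=not reasons,
        reasons=reasons,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # Every decision, approved or not, goes to the audit log.
    print(f"[audit] {decision.timestamp} approved={decision.approved} {reasons}")
    return decision

# Example: the agent proposes an order from an unvetted supplier.
decision = review_agent_purchase(
    PurchaseProposal(supplier="cheap-parts-4u", amount_usd=4_500.00,
                     rationale="lowest quote for restocking component X"),
)
assert not decision.approved  # held for human review
```

The design point is that the agent proposes and the control disposes: nothing executes until the check passes or a human signs off.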

The challenge for compliance officers will be to apply those basic steps in ways that make sense for agentic AI, when the technology is still so new and holds so much potential, both good and bad. You’ll need to work with more people across the enterprise, considering more potential use cases and devising new controls for new risks.

Top agentic AI challenges

So, what will that look like in practice? Consider these likely hurdles.

How to evaluate which tasks an AI agent should do. Your company should have some sort of policy or process to decide how and when it will use agentic AI at all. Will you trust it with high-profile or mission-critical tasks, such as managing a new marketing campaign or monitoring inventory? Do department heads get to make that decision, or executive vice presidents?

How to decide which AI agents to use. Will you only trust AI agents developed or vetted by your IT team, or will employees be allowed to use any commercially available AI agents they find online? For that matter, since AI agents can themselves collaborate with other AI agents, how will you govern which agents “your” AI agent uses?
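Agent-to-agent delegation is one place where a technical guardrail can back up that policy. Below is a minimal sketch, using entirely hypothetical agent names, of an allowlist check an orchestration layer might run before “your” agent hands work to another agent.

```python
# Hypothetical allowlist of sub-agents vetted through your AI governance
# process; in practice this would live in configuration, not code.
VETTED_SUBAGENTS = {
    "travel-rebooker-v2",     # developed and tested in-house
    "supplier-screening-v1",  # commercial agent vetted by IT
}

class UnvettedAgentError(RuntimeError):
    """Raised when an agent tries to delegate work to an unapproved agent."""

def delegate(task: str, subagent_id: str) -> None:
    """Gate every agent-to-agent handoff through the allowlist."""
    if subagent_id not in VETTED_SUBAGENTS:
        raise UnvettedAgentError(
            f"delegation of '{task}' to '{subagent_id}' blocked: agent not vetted"
        )
    print(f"[delegation] {task!r} handed to {subagent_id}")

delegate("rebook missed connection", "travel-rebooker-v2")  # allowed
# delegate("screen new supplier", "random-web-agent")       # would raise
```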

How to make agents’ actions explainable. Your AI agent will need some way to show its thinking and decision-making processes, especially if it makes a questionable decision that attracts regulatory or media scrutiny. So, will you test the agent for explainability before setting it loose in the wider world? How will you monitor its behavior as it learns from past experience and starts to make new decisions?
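One practical way to approach explainability, sketched below with purely hypothetical names and fields, is to require every agent action to carry a structured decision record: the inputs the agent saw, the action it took, and its stated reasoning, written to an append-only log that can be reviewed later.

```python
import json
from datetime import datetime, timezone

def record_agent_decision(agent_id: str, action: str,
                          inputs: dict, reasoning: str) -> str:
    """Append one agent decision to an audit log as a structured record.

    Capturing the inputs and the stated reasoning at decision time is what
    lets you reconstruct the 'why' months later, if a regulator asks.
    """
    record = {
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "stated_reasoning": reasoning,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    line = json.dumps(record, sort_keys=True)
    with open("agent_decisions.jsonl", "a") as log:  # append-only audit trail
        log.write(line + "\n")
    return line

# Example: an inventory agent explains a reorder decision.
record_agent_decision(
    agent_id="inventory-agent-01",
    action="reorder",
    inputs={"sku": "X-100", "stock_level": 12, "forecast_demand": 40},
    reasoning="Projected stockout in 6 days at current demand.",
)
```

A log like this doesn’t make the underlying model interpretable, but it does give auditors and regulators a reviewable trail of what the agent knew and claimed at the moment it acted.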

Human-agent interactions

It can help here to think of AI agents as akin to third-party contractors. They aren’t quite employees you can directly control – but they do act on your organization’s behalf and can bring legal and compliance risk to you if they act in a reckless manner.

If so, remember where ethics and compliance programs typically start when trying to govern third parties: with your employees who hire them. By the same token, you’ll need policies, procedures, and controls to govern how your human employees use AI agents.

Some of that might be training to educate employees about the risks of using agentic AI. Some of it will be policy development, to articulate when employees can use agentic AI, or which AI agents they can use (agents developed and tested in-house, good; agents purchased from some unknown website, bad).

Overall, however, you’ll want a set of policies, procedures, and controls that enforce accountability for employees’ use of agentic AI. That, in turn, depends on senior management’s determination to adopt artificial intelligence in a prudent, compliance-aware manner.

So yet again, we’re back to the importance of human ethics and awareness to make artificial intelligence succeed. What a surprise.

The conversation about AI and its many uses and implications promises to continue and intensify as the technology permeates business and consumer life. Read more about our coverage of the intersection of AI and compliance here.
