Artificial intelligence reached another milestone at the start of February, this one particularly relevant for corporate compliance officers: on February 2, 2025, the first five articles of the EU AI Act went into effect.
This means that the era of AI compliance has now formally begun. If your company uses AI and operates in Europe, or develops and sells AI systems that are used in Europe, then it could be subject to regulatory enforcement. So you need to start incorporating compliance-aware policies and procedures into your company’s AI adoption strategy, the sooner the better.
The first three articles of the AI Act we can put aside; they’re mostly preamble, outlining the purpose and scope of the law and defining key terms. Articles 4 and 5 are where you, the compliance officer, should pay attention and start thinking about the implications for your policies, risk assessments, and training programs.
What is Article 4 of the EU AI Act?
Let’s start with Article 4, which states that all providers and deployers of AI systems must:
“Take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
The EU AI Act definition of “AI literacy”
“‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”
In other words, your company must train employees so that they understand the risks that AI can pose – and from that seemingly simple requirement, a host of practical challenges arises.
AI compliance starts with AI governance
Fundamentally, the challenge is this: you can’t develop the necessary AI literacy in your organization if you don’t know how the company is using AI. That’s increasingly problematic, because it’s now so easy for employees to incorporate artificial intelligence into their daily workflows and routines.
Just look at DeepSeek, the Chinese generative AI app that seemingly sprang from nowhere to become one of the most popular apps on the internet. What privacy risks does DeepSeek pose? What cybersecurity risks might it introduce to your organization? Nobody knows. (Although pretty much every privacy regulator in Europe is trying to find out.)
So, before you even begin to contemplate the policies, procedures, and training that might be necessary at your business to achieve the AI literacy that you’re supposed to have, your management team first needs to establish some sort of governance mechanism to guide how employees use AI in the first place.
For example, a large company could establish an “AI usage board” of some kind, where the heads of various First Line operating functions meet with Second Line risk management functions (compliance, privacy, HR, legal, IT security) to hash out the rules for how employees adopt AI. Maybe employees may use certain AI systems but not others; maybe they may use AI for certain tasks but not others; maybe all customer-facing AI systems must start with a “you’re using AI” interface, and so forth.
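To make that concrete, here is a minimal sketch (in Python) of how a usage board’s rulings might be encoded as an internal policy register that tooling or reviewers could check against. Every system name, task category, and rule below is hypothetical; the point is simply that the board’s decisions should live somewhere explicit and checkable, not scattered across meeting notes.

```python
# A minimal sketch of an AI usage board's rulings, encoded as an internal
# policy register. All system names, task categories, and rules here are
# hypothetical; this is an illustration, not a recommendation.

APPROVED_SYSTEMS = {
    # system -> tasks the usage board has approved it for
    "internal-copilot": {"code_review", "drafting"},
    "vendor-chatbot": {"customer_support"},
}

# Tasks where end users must see a "you're using AI" disclosure.
CUSTOMER_FACING_TASKS = {"customer_support"}

def check_usage(system: str, task: str) -> str:
    """Return the policy ruling for a proposed (system, task) pairing."""
    if system not in APPROVED_SYSTEMS:
        return "BLOCKED: system is not on the approved list"
    if task not in APPROVED_SYSTEMS[system]:
        return f"BLOCKED: {system} is not approved for {task}"
    if task in CUSTOMER_FACING_TASKS:
        return "ALLOWED: must display a \"you're using AI\" notice"
    return "ALLOWED"

print(check_usage("internal-copilot", "code_review"))       # ALLOWED
print(check_usage("internal-copilot", "customer_support"))  # BLOCKED
print(check_usage("shadow-genai-app", "drafting"))          # BLOCKED
```

Note the default-deny design: anything the board hasn’t expressly approved is blocked, which is exactly the posture you want while the governance questions are still being worked out.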
Where do ethics, tone at the top, and corporate culture fit into all this? Ideally, they should permeate the whole discussion. That is, senior management needs to demonstrate a commitment to ethical use of AI – even when the company isn’t quite clear on all the ethical concerns for specific AI use-cases; that’s what your AI usage board is there to work through.
Once senior management makes clear to everyone that (a) sure, using AI is great, but (b) we’ll be adopting it in a careful, ethical, and compliant manner, that strong culture of ethics will drive a culture of responsible AI usage. Your quest to achieve the appropriate level of AI literacy will then become much easier.
What is Article 5 of the EU AI Act?
We also have Article 5, which introduces prohibited AI practices. This is a crucial piece of the EU AI Act because it establishes the idea of “tiers” of acceptable AI use – starting with the highest-risk use-cases, which won’t be allowed at all.
Many of these banned use-cases will come as no surprise to Western executives. For example, the law prohibits AI that:
- Deploys “subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques” with the goal of “materially distorting the behaviour of a person by appreciably impairing their ability to make an informed decision.”
- Monitors a person and then predicts the risk of that person committing a crime, “based solely on the profiling of a natural person or on assessing their personality traits and characteristics.”
- Infers the emotions of a person at work or at school, except for medical or safety reasons.
We don’t need to review all the prohibited uses here. The point for compliance officers is that your organization will need clear policies about which uses of AI you will not embrace, supported by procedures to make sure nobody is embracing them.
For example, it’s not far-fetched to imagine that some contractor or other business partner of yours might use AI in a prohibited manner on your company’s behalf; so you’ll need clear policies, as well as strong contract management and third-party monitoring capabilities. And to underline the AI literacy point from earlier, you’ll need strong training of your own employees, so they understand that this is a third-party AI risk and that the company will need their help to avoid it.
In the fullness of time, the EU AI Act’s obligations for its other tiers of AI usage will also take effect; the less risk the use-case poses, the less oversight you’ll need to exercise. That will tax corporate ethics and compliance teams in yet more new ways, as you develop processes to assess the risks of those use-cases and implement appropriate controls.
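To illustrate how those tiers might translate into day-to-day triage, here is a simplified sketch. The tier labels track the Act’s broad structure (prohibited, high-risk, limited-risk, minimal-risk), but the use-case categories and their assignments below are hypothetical examples; real classification is a legal judgment, not a lookup table.

```python
# A simplified sketch of use-case triage against the AI Act's broad risk
# tiers. The use-case categories and tier assignments are hypothetical
# examples; actual classification requires legal analysis of the Act itself.

from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Prohibited under Article 5; do not deploy"
    HIGH = "High risk; conformity assessment and ongoing oversight required"
    LIMITED = "Limited risk; transparency obligations such as AI disclosures"
    MINIMAL = "Minimal risk; baseline policies and AI literacy training"

# Hypothetical mapping from internal use-case categories to tiers.
USE_CASE_TIERS = {
    "emotion_inference_at_work": RiskTier.PROHIBITED,
    "crime_risk_profiling": RiskTier.PROHIBITED,
    "resume_screening_for_hiring": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> str:
    """Look up the oversight posture for a proposed use-case, escalating
    anything unrecognized rather than quietly defaulting it to low risk."""
    tier = USE_CASE_TIERS.get(use_case)
    if tier is None:
        return "UNKNOWN: escalate to the AI usage board for classification"
    return tier.value

print(triage("emotion_inference_at_work"))  # Prohibited under Article 5 ...
print(triage("spam_filtering"))             # Minimal risk ...
print(triage("brand_new_genai_tool"))       # UNKNOWN: escalate ...
```

Again, the design choice that matters is the failure mode: an unrecognized use-case gets escalated for human review, not quietly waved through.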
In some ways, your AI compliance program will still rest on the fundamentals of a strong ethics and compliance program, just as it always has. In other ways, it’s a brave new world – and ready or not, that world is here.
For more information about the EU AI Act, follow the link below.