In a historic move, European legislators made significant progress in defining the trajectory of artificial intelligence (AI) by striking a political agreement on extensive and ethical regulations for AI use. As outlined in the European Union’s (EU) Artificial Intelligence Act, this noteworthy advancement creates a global standard and sets benchmarks for organizations.
EU takes the lead in AI regulation
The EU solidified its position as a trailblazer in AI regulation, with the European Parliament’s Dragos Tudorache stating, “The EU is the first in the world to set robust regulation on AI in place.” Building on earlier legislation targeting U.S. tech giants like Meta Platforms, Apple, and Alphabet, the AI Act aims to provide a framework for responsible AI use.
In practice, the EU Act is the first comprehensive regulation addressing the risks of artificial intelligence through a set of obligations and requirements to safeguard the health, safety and fundamental rights of EU citizens and beyond.
Key components of the AI Act
So, what regulations does the Act include? The negotiated deal encompasses bans on specific AI applications, such as the untargeted scraping of images for facial recognition databases. The legislation also introduces rules for high-risk AI systems and emphasizes transparency for general-purpose AI systems and their underlying models.
Specific key components of the Act include:
- Safeguards agreed on general-purpose artificial intelligence
- Limitations for the use of biometric identification systems by law enforcement
- Bans on social scoring and on AI used to manipulate or exploit user vulnerabilities
- Right of consumers to launch complaints and receive meaningful explanations
- Violations may incur fines of up to 35 million euros or 7% of a company’s global turnover, contingent on the size of the company and the nature of the infraction.
The full details of what was agreed in the Act won’t be confirmed until a final text is compiled and made public. However, a December 2023 European Parliament press release showed that the deal reached with the Council includes a total prohibition on the use of AI for:
- Biometric categorization systems that use sensitive characteristics (e.g., political, religious, philosophical beliefs, sexual orientation, race)
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
- Emotion recognition in the workplace and educational institutions
- Social scoring based on social behavior or personal characteristics
- AI systems that manipulate human behavior to circumvent their free will
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation)
Global struggles with AI regulation
While Europe forges ahead, other nations grapple with regulating AI. In October, the Biden administration’s executive order signaled a robust approach to AI governance, and Chinese regulators have also issued rules addressing generative AI. The EU’s AI Act, initially proposed in 2021, gained increased attention with the rise of AI applications like OpenAI’s ChatGPT and Google’s Bard.
Controversial aspects and industry response
A focal point of debate surrounding the EU legislation has been whether to establish blanket rules for general-purpose AI and foundation models. These models, trained on extensive datasets, underpin specialized AI applications. The AI Act mandates transparency rules, including compliance with EU copyright law and the publication of detailed summaries of the content used to train AI models. High-impact models posing systemic risk will be subject to more stringent regulations.
However, the deal has faced criticism from industry and consumer groups. DigitalEurope, a tech lobby group, expressed concerns over the financial burden on AI companies and the potential competitive disadvantage for Europe. Cecilia Bonefeld-Dahl, the group’s director-general, emphasized, “The AI race is not one Europe can miss out on.”
Similarly, the European Consumer Organisation argued the rules must be revised to protect consumers adequately. Ursula Pachl, the organization’s deputy director-general, highlighted concerns about underregulated areas and an overreliance on companies’ self-regulation.
The road ahead and final approval
While the deal signifies a groundbreaking moment, it is essential to note it still requires final approval from both parliamentarians and representatives from the EU’s 27 member states. Full implementation is not anticipated until 2026. As such, the real impact of this legislation will hinge on the EU’s dedication to implementation, the diligent execution of its provisions, the commitment to oversight, and the collaborative efforts of standard-making bodies – defining what trustworthy AI means in Europe and beyond.
Want to know how to remain compliant with current and upcoming EU regulations? For more information on how NAVEX can help, discover our E&C online solution here.