Artificial intelligence (AI) is a transformative technology that has captured the attention of industries worldwide and is here to stay. Defined as a system designed to operate with a certain level of autonomy, AI uses machine learning and logic-based approaches to achieve human-defined objectives. Unlike traditional computing, which follows explicit programming instructions, AI infers and adapts to new data, producing outputs like content, predictions, recommendations, or decisions that influence its environment.
Jan Stappers, regulatory solutions director at NAVEX, and Caro Robson, a digital, legal and risk compliance consultant, recently delved into AI’s unique aspects and risks in a webinar session focused on its regulatory and compliance landscape.
One of the fundamental distinctions between AI and traditional computing lies in AI’s ability to infer from data rather than follow predetermined code. Traditional computing operates on a rigid set of instructions, while AI systems are trained on vast amounts of data, enabling them to adapt, learn and improve over time. This inference capability makes AI powerful but also introduces a layer of complexity and unpredictability.
A few considerations of AI and third-party risks
This complexity brings specific risks, however, particularly concerning third-party and supply chain vulnerabilities. AI systems often rely on large datasets scraped from the web, then cleaned and processed manually. This process is labor-intensive and carries significant human and environmental costs. Producing AI models consumes substantial energy and resources, drawing on minerals like cobalt and nickel. Additionally, the supply chains behind these models are frequently opaque, raising concerns about environmental, legal and labor practices.
Once an AI model is developed, it may be sold to a provider or distributor, who will integrate it into various software products. This integration can introduce biases and unsafe inputs and outputs. For instance, commercially sensitive data or personal information entered into the system might be used without clear rights, leading to potential breaches of data privacy and intellectual property. The lack of transparency in how data is used and transferred across different supply chain layers exacerbates these risks.
Furthermore, AI significantly affects ESG (environmental, social and governance) reporting. AI systems used to report on carbon and social footprints might themselves rely on products from opaque supply chains, undermining the accuracy and reliability of those reports. Understanding and managing these risks is crucial for organizations to ensure compliance and maintain trust.
Managing AI risks to meet regulatory compliance
Managing AI risks for regulatory compliance involves several steps. Organizations must identify and understand their AI systems, including any recent software updates, the data those systems handle and the security measures in place to protect it. Implementing standard risk management practices, and ensuring the board is aware of them, is essential. Additionally, a clear AI policy, whether standalone or integrated into existing IT and data protection policies, helps employees understand and mitigate risks. Transparency in the supply chain and awareness among the data protection team are also critical components of effective risk management.
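To make that first step concrete, here is a minimal sketch of what an entry in an AI system inventory might look like in code. The fields, vendor name and flagging rule are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical AI system inventory record; fields are illustrative assumptions.
@dataclass
class AISystemRecord:
    name: str                     # internal name of the AI system
    vendor: str                   # third-party provider, if any
    last_software_update: date    # tracks recent updates, as noted above
    data_categories: list[str] = field(default_factory=list)    # e.g. "personal data"
    security_measures: list[str] = field(default_factory=list)  # e.g. "encryption at rest"
    reviewed_by_board: bool = False  # has this system been surfaced to the board?

# Example inventory with one hypothetical entry
inventory = [
    AISystemRecord(
        name="resume-screening-model",
        vendor="ExampleVendor Inc.",  # hypothetical vendor
        last_software_update=date(2024, 5, 1),
        data_categories=["personal data", "employment history"],
        security_measures=["encryption at rest", "access logging"],
    )
]

# Flag systems that handle personal data but have not yet been reviewed
for record in inventory:
    if "personal data" in record.data_categories and not record.reviewed_by_board:
        print(f"Review needed: {record.name}")
```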
AI has become a new Governance, Risk, and Compliance (GRC) pillar, introducing complexities, especially in supply chains. The forthcoming AI Act in the European Union aims to provide a regulatory framework to help organizations navigate these intricate and fluid requirements. Together with other regulations such as the GDPR, copyright laws, cybersecurity directives and corporate sustainability directives, the act forms the regulatory landscape in Europe.
In addition, AI can play a pivotal role in enhancing regulatory compliance. It can assist with third-party risk management, board reporting, systems mapping and the identification of cybersecurity threats. By leveraging AI, organizations can streamline these processes, ensuring more accurate and timely risk assessments. Understanding and managing risks from external partnerships is crucial as AI technologies become more integrated into business operations. This involves gaining visibility into the supply chain, respecting labor rights and maintaining clear communication channels.
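As a rough illustration of how such automation might prioritize third-party reviews, the sketch below scores hypothetical vendors on a few supply chain attributes so the riskiest are examined first. The attributes, weights and vendor names are assumptions for illustration, not an established methodology.

```python
# Hypothetical scoring heuristic for prioritizing third-party reviews.
def third_party_risk_score(vendor: dict) -> int:
    score = 0
    if not vendor.get("supply_chain_visible", False):
        score += 2   # opaque supply chains raise ESG and legal concerns
    if vendor.get("handles_personal_data", False):
        score += 2   # data privacy exposure
    if not vendor.get("labor_practices_audited", False):
        score += 1   # labor-rights risk noted above
    return score

vendors = [
    {"name": "ModelProvider A", "supply_chain_visible": False,
     "handles_personal_data": True, "labor_practices_audited": False},
    {"name": "ToolVendor B", "supply_chain_visible": True,
     "handles_personal_data": False, "labor_practices_audited": True},
]

# Surface the riskiest vendors first for timely risk assessment
for v in sorted(vendors, key=third_party_risk_score, reverse=True):
    print(v["name"], third_party_risk_score(v))
```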
The EU AI Act represents a significant step in creating a comprehensive regulatory framework for artificial intelligence. This legislation primarily functions as a product safety regulation for AI, focusing on managing implementation and operational risks while promoting innovation.
Rather than centering on fundamental rights, the act categorizes AI systems into different risk levels from an end-user perspective, with requirements determined by the associated risk. This tiered approach helps balance safety and innovation, ensuring the protection of both consumers and businesses.
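As a simplified sketch of that tiered logic, the snippet below maps the act's broad risk categories to the kind of obligations attached to each. The one-line summaries and examples are simplifications for illustration, not legal text.

```python
# Simplified view of the AI Act's tiered, end-user-focused risk model.
RISK_TIERS = {
    "unacceptable": "prohibited (e.g. social scoring by public authorities)",
    "high": "strict requirements: risk management, data governance, human oversight",
    "limited": "transparency obligations (e.g. disclosing that a chatbot is AI)",
    "minimal": "no additional obligations beyond existing law",
}

def obligations_for(tier: str) -> str:
    """Look up the (simplified) obligations attached to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"Unknown risk tier: {tier}")
    return RISK_TIERS[tier]

print(obligations_for("high"))
```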
In addition to its primary focus, the act complements existing regulations to form a balanced framework. It also allows users of AI products to verify that those products comply with ethical standards, further strengthening trust and reliability in AI technologies.
Overall, AI offers immense potential but poses unique risks, particularly around supply chains, third parties and data management. By understanding these risks and implementing robust regulatory compliance strategies, organizations can harness the power of AI while guarding against potential pitfalls.
You can now watch “GRC Latest News – AI Benefits Your Compliance Program” on-demand! Follow the link below to continue the journey and learn more about how AI can benefit your compliance program.