The European Union (EU) is taking a major step toward regulating artificial intelligence (AI) with the European Parliament's passage of the Artificial Intelligence Act (AI Act). The AI Act aims to restrict the use of high-risk AI technologies while also providing a clearer definition of what AI actually is. After two years of development, and amid a surge of interest and rapid advances in AI technology, the Act is nearing its final stages before coming into effect.
The push for AI regulation stems from recognition of both the benefits and the risks of the technology. Lawmakers acknowledge that AI can deliver significant economic and societal benefits, but they are also wary of the harm it may cause to individuals and to society as a whole. The rapid development of advanced tools such as generative AI models has made regulating AI more complex and challenging.
To regulate AI effectively, a clear definition of what AI encompasses is necessary. The AI Act categorizes applications of AI into four tiers of risk: unacceptable risk, high risk, limited risk, and minimal or no risk. Unacceptable-risk systems, such as social “credit scores” and real-time biometric identification in public spaces, are prohibited outright. Minimal-risk applications, like spam filters and inventory management systems, face no additional regulation. Applications falling between these extremes — the high-risk and limited-risk tiers — are subject to transparency and safety requirements if they are to operate in the EU market.
Initially, the AI Act focused on specific AI tools already in use across various fields. The emergence of generative AI models, however, posed a challenge to this framework. These models, such as OpenAI’s GPT-3.5 and GPT-4 large language models (LLMs), are highly adaptable and can be applied to a wide range of tasks, forcing lawmakers to consider how such models and their potential applications should be regulated under the proposed legislation.
Generative AI models, often referred to as foundation models, can be used for tasks like producing reports, generating code, and answering user inquiries. However, their capabilities extend beyond these specific applications. Developers can build applications on top of these models, making it difficult for regulators to keep up with the evolving technology. As a result, lawmakers have proposed amendments to ensure that these emerging technologies, along with their yet-unknown applications, are covered by the AI Act.
The AI Act imposes strict requirements on high-risk AI systems, which include AI in self-driving cars and predictive policing systems. These systems must undergo conformity assessments before entering the EU market, and companies must show that their systems meet all applicable requirements, including compliance with data protection law and the use of high-quality training data. High-risk AI systems incorporated into tangible products, like toys and medical devices, must be assessed by independent third parties designated by the EU.
The AI Act has now entered the final phase of inter-institutional negotiations, the so-called trilogue, in which the Member States, the Parliament, and the Commission refine the draft into final legislation. Some provisions may be adjusted during these negotiations to resolve contentious issues, and stronger measures on generative AI could be introduced in response to concerns about its role in political disinformation.
The introduction of generative AI models has heightened concerns about how AI should be regulated. These models are broadly capable and carry potential legal pitfalls: misinformation, intellectual property, and privacy have all emerged as challenges for policymakers. While the EU works toward comprehensive AI regulation, regulators in individual member states have implemented interim measures to address the risks these technologies pose.
In conclusion, the EU is taking significant steps towards regulating AI with the introduction of the AI Act. The Act aims to restrict high-risk AI systems and provide a clearer definition of AI. The emergence of generative AI models has posed challenges for the regulatory framework, leading to proposed amendments to cover these technologies. As negotiations continue, adjustments may be made to the AI Act, potentially strengthening regulations for generative AI. The goal is to strike a balance between reaping the benefits of AI and mitigating the potential risks it poses to individuals and society.