European lawmakers give final approval to groundbreaking AI regulations
How Does the AI Act Function?
Similar to numerous EU regulations, the AI Act was originally designed to function as consumer safety legislation, employing a "risk-based approach" towards products or services utilizing artificial intelligence.
AI applications undergo increased scrutiny based on their level of risk. The majority of AI systems are anticipated to pose low risks, such as content recommendation algorithms or spam filters. Companies have the option to adhere to voluntary standards and codes of conduct.
In contrast, high-risk AI applications, such as those in medical devices or critical infrastructure like water and electrical networks, face more stringent requirements, including the use of high-quality data and the provision of clear information to users. Certain AI applications are prohibited outright because their risk is deemed unacceptable, such as social scoring systems, certain types of predictive policing, and emotion recognition systems in educational and professional settings.
Additional banned applications include the use of AI-powered remote "biometric identification" systems by law enforcement for public facial scanning, except in cases of serious crimes like kidnapping or terrorism.
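The tiered scheme above can be pictured as a simple lookup from use case to risk tier. The sketch below is purely illustrative: the tier names and example use cases are taken from this article, not from any official taxonomy, and the Act itself classifies systems through detailed legal criteria, not string matching.

```python
# Hypothetical illustration of the AI Act's "risk-based approach":
# more scrutiny as risk rises, with an unacceptable tier that is banned.
# Example use cases are drawn from the article; this is not legal guidance.

RISK_TIERS = {
    "unacceptable": [  # prohibited outright
        "social scoring",
        "predictive policing",
        "emotion recognition in schools or workplaces",
    ],
    "high": [  # allowed, but subject to strict requirements
        "medical devices",
        "water networks",
        "electrical grids",
    ],
    "minimal": [  # voluntary standards and codes of conduct
        "content recommendation",
        "spam filtering",
    ],
}

def classify_use_case(use_case: str) -> str:
    """Return the risk tier for a use case, defaulting to minimal risk."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal"

print(classify_use_case("social scoring"))  # unacceptable
print(classify_use_case("spam filtering"))  # minimal
```

The default-to-minimal fallthrough mirrors the article's point that the majority of AI systems are expected to fall into the low-risk category.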
European Union legislators granted final approval on Wednesday to the 27-nation bloc's artificial intelligence law, setting the stage for the pioneering regulations to take effect later this year.
Five years after it was first proposed, members of the European Parliament overwhelmingly endorsed the Artificial Intelligence Act, which is anticipated to serve as a global model for governments navigating the complexities of regulating the swiftly advancing technology.
Romanian legislator Dragos Tudorache, a co-leader of the Parliament negotiations on the draft law, emphasized the AI Act's focus on human-centric principles, ensuring human control over technology and its role in fostering innovation, economic prosperity, societal advancement, and unlocking human potential.
Despite support from major tech firms for AI regulation, lobbying efforts have aimed to ensure that any regulations are favorable to their interests. OpenAI CEO Sam Altman caused a stir last year by suggesting the possibility of withdrawing from Europe if compliance with the AI Act became problematic, though he later clarified that there were no imminent plans to do so.
The enactment of the world's first comprehensive set of AI regulations marks a significant milestone in the governance of artificial intelligence.
Are Europe's Regulations Impacting Global Policies?
Brussels first proposed AI regulations in 2019, taking on a familiar global role of intensifying scrutiny of emerging industries while other governments scramble to catch up.
In the United States, President Joe Biden enacted a comprehensive executive order on AI in October, anticipated to be reinforced by legislation and international agreements. Concurrently, lawmakers in at least seven U.S. states are crafting their own AI-related laws. Chinese President Xi Jinping has introduced the Global AI Governance Initiative aimed at ensuring fair and safe AI usage, while authorities have implemented "interim measures" for managing generative AI, covering various forms of content generated for domestic consumption.
Countries such as Brazil and Japan, along with international bodies like the United Nations and the Group of Seven (G7) industrialized nations, are also actively formulating frameworks to regulate AI.
Understanding Generative AI
Early versions of the law concentrated on AI systems performing specific tasks, like scanning resumes and job applications. However, the emergence of versatile AI models, such as OpenAI's ChatGPT, prompted EU policymakers to adjust their approach.
Provisions were introduced for "generative AI" models, which power AI chatbots capable of generating unique and realistic responses, images, and more.
Developers of these general-purpose AI models, including European startups, OpenAI, and Google, must furnish comprehensive summaries of the internet data—text, images, videos, etc.—used to train the systems and adhere to EU copyright regulations.
AI-generated deepfake content depicting real individuals, places, or events must be clearly labeled as artificially manipulated.
Particular scrutiny is applied to the largest and most potent AI models deemed to pose "systemic risks," such as OpenAI's GPT-4 and Google's Gemini. Concerns revolve around potential accidents, misuse for cyberattacks, and the propagation of harmful biases across various applications, impacting numerous individuals.
Companies providing these systems are mandated to evaluate and mitigate risks, report significant incidents (e.g., malfunctions causing injury or property damage), implement cybersecurity measures, and disclose energy consumption levels of their models.
What Comes Next?
The AI Act is poised to formally become legislation around May or June, following some remaining procedural steps, including approval from EU member states. Implementation will occur gradually, with countries mandated to prohibit restricted AI systems six months after the rules are incorporated into law. Regulations concerning general-purpose AI systems, such as chatbots, will come into effect one year after the law's enactment. By mid-2026, the full spectrum of regulations, encompassing requirements for high-risk systems, will be fully operational.
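The staggered deadlines described above can be sketched as simple date arithmetic relative to the law's entry into force. The entry-into-force date below is a placeholder assumption for illustration only; the article gives only an approximate window of May or June.

```python
# Hypothetical sketch of the AI Act's phased implementation timeline.
# ENTRY_INTO_FORCE is an assumed placeholder, not an official date.
from datetime import date

def add_months(d: date, months: int) -> date:
    """Add whole months to a date (day clamped to the 1st for simplicity)."""
    years, month_index = divmod(d.month - 1 + months, 12)
    return date(d.year + years, month_index + 1, 1)

ENTRY_INTO_FORCE = date(2024, 6, 1)  # assumption for illustration

milestones = {
    "bans on prohibited AI systems": add_months(ENTRY_INTO_FORCE, 6),
    "rules for general-purpose AI (e.g., chatbots)": add_months(ENTRY_INTO_FORCE, 12),
    "full application, including high-risk requirements": add_months(ENTRY_INTO_FORCE, 24),
}

for milestone, due in milestones.items():
    print(f"{milestone}: {due.isoformat()}")
```

Under the assumed June 2024 start, the 24-month milestone lands in mid-2026, consistent with the article's timeline for full application.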
Regarding enforcement, each EU member state will establish its own AI oversight body, enabling citizens to lodge complaints in case of suspected rule violations. Simultaneously, Brussels will establish an AI Office tasked with enforcing and overseeing compliance with the law pertaining to general-purpose AI systems.
Violations of the AI Act could incur fines of up to 35 million euros ($38 million) or 7% of a company's global revenue, whichever is higher.
Italian lawmaker Brando Benifei, who co-led Parliament's efforts on the legislation, emphasized that this isn't the final iteration of AI regulations from Brussels. He indicated the possibility of additional AI-related legislation post-summer elections, particularly in areas partially addressed by the new law, such as AI in the workplace.