European Parliament approves agreement on EU AI Act

The European Parliament has voted to approve the AI Act, establishing the EU as a front-runner in regulating a wide range of artificial intelligence software, in what MEPs have described as “the world’s first comprehensive AI law.”

MEPs decisively backed the provisional agreement reached with the Council during trilogue discussions in December, with 523 votes in favor, 46 against, and 49 abstentions.

The law establishes a risk-based framework for AI, imposing different rules and standards depending on the level of risk posed by the specific use case.

Today’s plenary vote follows positive committee votes and the endorsement of the provisional agreement by the ambassadors of all 27 EU Member States last month. It means the AI Act is on track to become law across the bloc, pending final approval by the Council.

The AI Act will enter into force 20 days after its publication in the EU’s Official Journal, expected in the coming months. Implementation is staggered: the first set of requirements (the bans on prohibited use cases) takes effect after six months, with further provisions applying after 12, 24, and 36 months. Full application is not expected until mid-2027.

Penalties for noncompliance can reach up to 7% of a company’s global annual turnover (or €35 million, whichever is greater) for violating the ban on prohibited AI uses. Breaches of the Act’s other obligations on AI systems carry fines of up to 3% (or €15 million), and noncompliance with regulatory authorities can draw penalties of up to 1%.

During a debate on Tuesday ahead of the plenary vote, Dragoș Tudorache, MEP and co-rapporteur for the AI Act, emphasized that core societal values are intrinsically linked to artificial intelligence, arguing that the AI Act steers the future of AI in a human-centric direction: toward a world in which humans control technology and use it to drive new discoveries, economic growth, and social progress, and to unlock human potential.

The European Commission first introduced the risk-based concept in April 2021. The EU co-legislators considerably altered and expanded the document throughout a multi-year negotiating process, leading to a political agreement being reached following marathon final sessions in December.

Some AI use cases, such as social scoring or subliminal manipulation, are considered too risky and are completely prohibited by the Act. The regulation also outlines a specific category of “high-risk” applications, including AI used in education or employment, as well as for remote biometrics. These systems need to be registered, and their creators must adhere to the risk and quality management regulations specified in the legislation.

Under the EU’s risk-based approach, most AI applications fall outside the legislation’s scope altogether: classified as low-risk, they face no specific rules. The regulation does impose transparency obligations on a defined set of applications, such as AI chatbots, generative AI tools capable of producing synthetic media (deepfakes), and general-purpose AI models (GPAIs). The most powerful GPAIs, if deemed to pose “systemic risk,” face additional obligations, including risk management responsibilities.

The GPAI rules were a later addition, written into the AI Act by concerned MEPs: lawmakers in parliament proposed a tiered set of rules last year to govern the sophisticated wave of models behind the current surge in generative AI technologies.

Some EU Member States, particularly France, pushed back, advocating a regulatory carve-out for advanced AI model makers. The push was driven by lobbying from domestic AI startups such as Mistral, which argued that Europe should back its national champions in the fast-moving AI sector or risk falling behind in the global AI race.

Under this intense lobbying pressure, lawmakers compromised in December, watering down the initial plan for regulating general-purpose AI.

The compromise did not grant a complete exemption from the legislation, but most of these models are now subject only to limited transparency obligations. Only GPAIs trained using computing power above 10^25 FLOPs are required to carry out risk assessment and mitigation on their models.

Since the compromise was agreed, it has emerged that Mistral has taken investment from Microsoft. The US tech company also holds a significant stake in OpenAI, the US-based developer of ChatGPT.

During a press conference ahead of the plenary vote, the co-rapporteurs were asked about Mistral’s lobbying efforts and their impact on the EU’s rules for GPAIs. “I believe we can all agree that the results speak for themselves,” said Brando Benifei. “The law explicitly sets out safety requirements for the most powerful models, based on clear criteria. I believe we have delivered a comprehensive framework that will guarantee transparency and safety standards for the most advanced models.”

Tudorache denied that lobbyists had a detrimental impact on the final text of the law. He said the negotiators bargained and made the concessions they deemed appropriate, describing the result as a necessary balance. “The behavior and decisions made by companies have not affected the work in any way.”

“There were motivations for those developing these models to maintain secrecy about the data used in these algorithms,” he continued. “We advocated for transparency, especially for copyrighted material, believing it is crucial to uphold the rights of authors.”

Benifei highlighted the inclusion of environmental reporting obligations in the AI Act as another positive development.

The legislators emphasized that the AI Act marks the beginning of the EU’s regulation of AI, highlighting the need for future development and expansion via new laws. Benifei specifically highlighted the importance of a directive to establish regulations for AI use in the workplace.

Tudorache said that the AI Act is only the start of a longer journey, since AI will affect many areas, from education systems to the labor market to warfare.

AI, he argued, will play a vital role in the world now emerging; as the EU builds governance on the foundation of the Act, it must stay alert to how the technology evolves and be ready to address whatever new challenges that growth brings.

Tudorache restated his proposal from last year for collaborative efforts on AI regulation among nations that share similar views, perhaps extending to other regions where agreements may be reached.

He called for interoperability and collaboration with other democracies and like-minded partners in establishing a governance framework: technology is universal, wherever you are on the globe, so this governance must be brought together into a coherent whole.
