EU lawmakers support generative AI transparency and safety

MEPs voted this morning to amend the bloc’s draft AI legislation, including adding new requirements for providers of the foundational models that underpin generative AI technologies such as OpenAI’s ChatGPT.

The amendment agreed by MEPs in two committees requires foundational model providers to apply safety checks, data governance measures, and risk mitigations before releasing their models, covering “foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law”.

The amendment also requires foundational model makers to reduce the energy and resource use of their systems and to register them in an EU database created by the AI Act. Providers of generative AI technologies like ChatGPT must comply with transparency requirements (informing users that content was machine generated), apply “adequate safeguards” to the content their systems generate, and provide a summary of any copyrighted materials used to train their AIs.

As we reported earlier, MEPs have focused in recent weeks on ensuring that general purpose AI will not escape regulatory requirements.

MEPs also agreed to strengthen fundamental rights protections in the area of biometric surveillance.

The votes set the parliament’s negotiating mandate for the AI Act, a key step in the EU’s co-legislative process.

MEPs in the Internal Market Committee and Civil Liberties Committee voted on 3,000 amendments today, adopting a draft mandate on the planned artificial intelligence rulebook with 84 votes in favor, 7 against, and 12 abstentions.

MEPs want AI systems to be safe, transparent, traceable, non-discriminatory, and environmentally friendly. In a press release, the parliament said it wants a technology-neutral definition of AI that will apply to current and future AI systems.

The committees agreed to ban “intrusive” and “discriminatory” uses of AI systems, such as:

  • “Real-time” remote biometric identification systems in publicly accessible spaces;
  • “Post” remote biometric identification systems, except for law enforcement for serious crimes and only after judicial authorization;
  • Biometric categorisation systems using sensitive characteristics (e.g., gender, race, ethnicity, citizenship status, religion, political orientation);
  • Predictive policing based on profiling, location, or past criminal behavior;
  • Emotion recognition systems in law enforcement, border management, workplaces, and schools; and
  • Indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).

The latter would outlaw Clearview AI’s business model; the vote came a day after France’s data protection watchdog fined the startup for violating existing EU laws. That episode suggests that enforcing such bans on foreign entities that flout bloc rules will be difficult, but the first step is having hard law on the books.

Dragos Tudorache, co-rapporteur and MEP, stated after the vote:

Given the profound transformative impact AI will have on our societies and economies, the AI Act is very likely the most important piece of legislation in this mandate. It’s the first piece of legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe. We have worked to support AI innovation in Europe and to give start-ups, SMEs and industry space to grow and innovate, while protecting fundamental rights, strengthening democratic oversight and ensuring a mature system of AI governance and enforcement.

Trilogue talks with the Council will begin after parliament’s plenary vote to seal the mandate next month (12-15 June).

When the Commission presented its draft AI Act in 2021, it suggested the risk-based framework would create a blueprint for “human-centric” and “trustworthy” AI. But the Commission proposed only a limited ban on highly intrusive technology like facial recognition in public spaces, raising concerns that the plan fell far short, notably on biometric surveillance.

Civil society groups and EU bodies urged amendments to strengthen fundamental rights protections, with the European Data Protection Supervisor and Board urging EU lawmakers to ban public biometrics surveillance.

MEPs appear to have heeded civil society’s request, though concerns persist. (And of course it remains to be seen how the proposal MEPs have strengthened could be watered down again as Member State governments enter the negotiations in the coming months.)

In today’s committee votes, parliamentarians also expanded the regulation’s fixed list of “high-risk” areas to include harm to health, safety, fundamental rights, and the environment.

The high-risk list now also covers AI systems used in political campaigns and the recommender systems of social media platforms with more than 45 million users (the threshold for “very large online platforms”, or VLOPs, under the Digital Services Act).

At the same time, MEPs supported changes to what counts as high risk, proposing to leave it up to AI developers to decide whether their system is significant enough to meet the bar at which obligations apply, something digital rights groups warn (see below) is “a major red flag” for enforcing the rules.

MEPs also supported amendments to increase citizens’ rights to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that “significantly” impact their rights.

Civil society groups had called for revisions in fall 2021, citing the Commission proposal’s lack of meaningful redress for people harmed by AI systems. By contrast, under the bloc’s General Data Protection Regulation, individuals can complain to regulators and pursue other forms of redress.

Today, MEPs also agreed on a revised role for the EU AI Office, which they want to monitor implementation of the rulebook and supplement decentralized oversight at Member State level.

In a nod to the perennial industry cry that too much regulation harms “innovation,” they also added exemptions to the rules for research activities and AI components provided under open-source licenses. The text also promotes regulatory sandboxes (controlled environments established by public authorities) in which AI can be tested before deployment.

Digital rights group EDRi, which has been urging major revisions to the Commission draft, said MEPs passed everything it had been pushing for “in some form or another,” including the (now) full ban on facial recognition in public, predictive policing, emotion recognition, and other harmful AI uses.

Another win is the inclusion of accountability and transparency obligations on deployers of high-risk AI, including a duty to conduct a fundamental rights impact assessment and mechanisms for affected people to challenge AI systems.

“The Parliament is sending a clear message to governments and AI developers with its list of bans, heeding civil society’s demands that some uses of AI are just too harmful to be allowed,” said Sarah Chander, EDRi senior policy advisor.

“This new text is a vast improvement from the Commission’s original proposal when it comes to reining in the abuse of sensitive data about our faces, bodies, and identities,” said EDRi senior policy advisor Ella Jakubowska, who focuses on biometrics.

EDRi noted that AI for migration control is still a concern.

Chander noted that MEPs failed to include “illegal pushbacks” or discriminatory profiling in the list of prohibited practices, as EDRi had requested. “Unfortunately, the [European Parliament’s] support for peoples’ rights stops short of protecting migrants from AI harms, including where AI is used to facilitate pushbacks,” she said. “Without these prohibitions the European Parliament is opening the door for a panopticon at the EU border.”

The group also wants the proposed ban on predictive policing to include location-based predictive policing, which Chander called “essentially a form of automated racial profiling”. EDRi is also concerned that the proposed ban on remote biometric identification won’t cover all the mass surveillance practices it has seen across Europe.

“While the Parliament’s approach to biometrics is very comprehensive, there are a few practices that we would like to see even further restricted,” Jakubowska said. “Retrospective public facial recognition is banned, but law enforcement can still use it, which we think is too risky. In particular, it could incentivise mass retention of CCTV footage and biometric data, which we would clearly oppose. As this ‘technology’ is fundamentally flawed, unscientific, and discriminatory by design, we would also like to see the EU outlaw emotion recognition in all contexts.”

EDRi also worries about MEPs’ plan to let AI developers decide if their systems are high-risk.

“Unfortunately, the Parliament is proposing some very worrying changes to what counts as ‘high-risk’ AI. With the changes in the text, developers will be able to decide if their system is ‘significant’ enough to be considered high risk, a major red flag for the enforcement of this legislation,” Chander suggested.

Today’s committee vote is a big step toward setting the parliament’s mandate and the tone for the upcoming trilogue talks with the Council. But much could still change, and Member State governments, which tend to prioritize national security over fundamental rights, may push back.

Jakubowska said, “We can see from the Council’s general approach last year that they want to water down the already insufficient protections in the Commission’s original text. Despite having no credible evidence of effectiveness and lots of evidence of harms, many member state governments want to keep biometric mass surveillance.

“They often do this under the pretense of ‘national security’ such as in the case of the French Olympics and Paralympics, and/or as part of broader trends criminalizing migration and other minority communities. That being said, we saw what could be considered ‘dissenting opinions’ from both Austria and Germany, who both favour stronger protections of biometric data in the AI Act. We’ve heard rumors that several other countries are willing to compromise on biometrics provisions. This gives us hope that the trilogues will produce a positive result, even though we expect strong pushback from several Member States.”

Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties (ICCL), which also joined the 2021 call for major revisions to the AI Act, warned that while the parliament has strengthened enforceability by explicitly allowing regulators to perform remote inspections, MEPs are simultaneously tying regulators’ hands by restricting what they can access. "We are also concerned that we will see a repeat of GDPR-like enforcement problems," he said.

On the plus side, Shrishak said, MEPs have addressed “the shortcomings” of the Commission’s definition of AI systems by including generative AI systems and applying transparency obligations to them, calling this “a key step towards addressing their harms”.

On copyright and AI training data, Shrishak criticized MEPs for not taking a “firm stand” to stop data-mining giants from ingesting data for free, including copyrighted data.

The copyright amendment only requires companies to provide a summary of copyright-protected data used for training, leaving it to rights holders to sue.

He agreed that exemptions for research activities and AI components provided under open source licenses may create new loopholes for AI giants to avoid the rules.

“Research is a loophole in the regulation, and this is likely to be exploited by companies,” he suggested. “AI research is mostly done in companies, so it’s a big loophole. Google says they’re ‘experimenting’ with Bard. I also expect some companies to say they develop AI components rather than AI systems. (I heard this from one large corporation during general purpose AI discussions; ‘GPAI [general purpose AI] should not be regulated’ was one of their arguments.)”
