EU closes in on AI Act with last-minute ChatGPT-related adjustments
Members of the European Parliament have reached an agreement on the EU's AI Act with a few last-minute adjustments, and the act will go to a committee vote in two weeks.
The European Parliament is set to formalize its position on what could be the world’s first set of regulations for AI by a major legislative body, as EU lawmakers reached a provisional political agreement on Thursday.
“The compromise amendments were agreed upon yesterday through internal talks between the rapporteurs and shadow rapporteurs and will be voted on in the committee by MEPs on May 11, 2023,” a Parliament official said. “This will be followed by a vote in plenary in June, after which talks with Council (member states) will start with the view of reaching an interinstitutional agreement before the end of the year.”
The European Commission, which is the executive branch of the EU, has been working on a regulatory framework for AI since early 2020. In April 2021, the Commission proposed a set of draft regulations for the AI Act, which aims to provide a comprehensive legal framework for the development and deployment of AI systems in the EU. The act must be ratified by the European Parliament.
The proposed regulations cover a wide range of AI applications and are intended to promote trust and transparency in the use of AI, protect fundamental rights and freedoms, and ensure safety and ethical principles in AI development and deployment.
ChatGPT and related AIs receive last-minute tweaks
Dealing with AI systems that do not cater to a specific use case has been the most debated issue in the proposal. General-purpose AI systems handle a wide variety of tasks and were not covered in the original proposal. They were only brought into scope after the disruption created by ChatGPT, a general-purpose, generative AI model that takes text input and returns high-quality, context-based responses to users.
MEPs agreed that generative tools such as ChatGPT, DALL-E, and Midjourney must be regulated in terms of design and deployment, to be in accordance with EU law and fundamental rights, including freedom of expression. A major change made to the act is that these tools will have to disclose any copyrighted material used to develop their systems.
Other requirements for generative AI models include testing and mitigating reasonably foreseeable risks to health, safety, fundamental rights, the environment, democracy, and the rule of law, with the involvement of independent experts.
The act also mandates documentation of any non-mitigable risks in AI models and the reasons why they were not addressed. There have been concerns that ChatGPT could be used for malicious purposes, including the generation of phishing materials, infostealer payloads, and scripts for DDoS attacks and ransomware.
Experts question whether regulations for systems based on large language models (LLMs), such as ChatGPT and DALL-E, can be enacted without affecting their core functionality.
“Quite simply — I have no clue how one regulates something like Chat GPT without diminishing the effectiveness of their solution,” said Chris Steffen, an analyst at Enterprise Management Associates. “Plus, what about the instances that are created specifically for nefarious purposes? Once the technology becomes common and accessible, bad actors should be able to (and will) set up their own ‘bad guy Chat AI’ to do whatever they want it to and optimize for those purposes. So, do you (and can you) also regulate the technology? Not likely.”
The AI Act primarily follows a classification system
Other than the recent debate on general-purpose AI, the act primarily focuses on classifying the existing AI solutions into risk-based categories — unacceptable, high, limited, and minimal.
AI systems that present only limited and minimal risks, such as those used for spam filters or video games, may be used with few requirements as long as there is transparency. However, AI systems posing unacceptable risks, such as those used for government social-scoring systems and real-time biometric identification systems in public spaces, will generally not be allowed, with few exceptions.
Developers and users are allowed to use high-risk AI systems, but they must comply with regulations that mandate thorough testing, proper documentation of data quality, and an accountability framework that outlines human oversight. Examples of AI systems deemed high risk include autonomous vehicles, medical devices, and critical infrastructure machinery.
The deal reached on Thursday finalizes the text, which remains subject to minor adjustments before the vote on May 11, when all groups will have to vote on the compromise without the possibility of alternative amendments.
Shweta Sharma is a senior journalist covering enterprise information security and digital ledger technologies for IDG’s CSO Online, Computerworld, and other enterprise sites.
Copyright © 2023 IDG Communications, Inc.