The European Parliament has agreed to close a key issue in the AI law by adopting the definition used by the Organisation for Economic Co-operation and Development (OECD). Most of the other definitions have also been agreed, and new measures such as a right to explanation are now on the table for MEPs.
On Friday (3 March), representatives of the European Parliament's political groups working on the AI legislation reached a political agreement on the very definition of artificial intelligence, one of the most politically sensitive parts of the file, according to two European Parliament officials.
The AI Act is a flagship legislative proposal to regulate this emerging technology based on its capacity to cause harm. How artificial intelligence is defined is critical, as it determines the scope of the EU's AI rulebook.
"'Artificial intelligence system' (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions influencing physical or virtual environments," reads the text discussed on Friday, seen by EURACTIV.
According to one EU official present at the meeting, the agreement was to remove the notion of "machine-based" from the text. A revised version is now expected from the offices of the co-rapporteurs, Brando Benifei and Dragoş Tudorache.
The definition largely overlaps with that of the OECD, an international organisation often considered a club of rich countries.
Similarly, an addition to the text's preamble states that the definition of AI should be "closely aligned with the work of international organisations working on artificial intelligence to ensure legal certainty, harmonisation and wide acceptance".
The international alignment was a win for the conservative European People's Party, which wanted to narrow the definition to systems based on machine learning, whereas centre-left lawmakers wanted it to cover automated decision-making.
In addition, the reference to predictions was complemented with wording specifying that outputs also include content. This is intended to prevent generative AI models like ChatGPT from slipping through regulatory gaps.
"But the next question, of course, is whether they fall under high risk and the related obligations of Chapter 2, or whether they count as large GPAI [General Purpose Artificial Intelligence]," one parliamentary official said, highlighting a divisive issue that MEPs are only beginning to address.
The accompanying text also clarifies that AI should be distinguished from simpler software systems or programming approaches, and that the stated purpose may differ from the intended purpose of the AI system in a specific context.
Significantly, the compromise specifies that when an AI model is integrated into a broader system that entirely depends on the AI component, the whole system is to be considered part of a single AI solution.
At a technical meeting on Monday (6 March), agreement was reached on most of the other definitions in the AI regulation. Here, the most significant additions in the compromise text seen by EURACTIV relate to the definitions of significant risk, biometrics and identification.
"Significant risk" was defined as a risk that is significant in terms of its severity, intensity, probability of occurrence, duration of its effects, and its capacity to affect an individual, a plurality of persons or a particular group of persons.
A remote biometric identification system was defined as an AI system used to identify individuals at a distance by comparing their biometric data with those in a reference database, without their prior knowledge. This is distinguished from a verification system, in which a person actively seeks to be authenticated.
The definition of biometric categorisation, which was recently added to the list of prohibited use cases, gained references to the inference of personal characteristics and attributes such as gender or health.
A reference to the Law Enforcement Directive was introduced in relation to profiling by law enforcement agencies.
Another technical meeting is scheduled for Thursday to discuss the so-called stand-alone articles, provisions that are not necessarily related to the rest of the text.
A new article introduces the right to an explanation of individual decisions, which applies when AI informs a decision that has legal or similarly significant effects on a person.
A meaningful explanation should cover the role the AI system played in the decision-making, the main logic and parameters used, and the input data. To make this provision effective, law enforcement and judicial authorities would not be able to use proprietary AI systems.
Other new measures include accessibility requirements for AI providers and users, the right not to be subject to non-compliant AI systems, and an obligation to design and develop high-risk AI systems in a way that minimises their environmental impact.
The articles on general principles applicable to all AI systems and on AI literacy have been largely maintained since they were first proposed at a political meeting in mid-February 2023.
DELTA Data Protection & Compliance, Inc. Academy & Consulting – The DELTA NEWS – Visit: delta-compliance.com