The EU AI Act: two steps forward, one step back

Briefing
Medlir Mema
19 March 2024

In a historic first, the European Parliament approved the Artificial Intelligence Act on 13 March 2024. The vote is one of the final steps in a process that began more than three years ago with the European Commission’s April 2021 proposal and is expected to conclude by May or June of this year. Adopting a risk-based approach, which according to EU officials ensures the future-proofing of the legislation, the EU AI Act has the potential to become an important model for AI governance. That is both good and bad news.

AI Definition

To begin with, the EU AI Act consolidates the definitional debate on AI systems by borrowing from the OECD definition, defining an AI system under Article 3 as “a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The definition may be considered by some as too broad. However, at a time when the Group of Governmental Experts on lethal autonomous weapons systems (LAWS), meeting under the auspices of the Convention on Certain Conventional Weapons (CCW), has found it difficult, if not impossible, to make meaningful progress on the definition of LAWS, the EU’s decision to adopt the OECD definition is wise, efficient, and likely to further institutionalize what is increasingly a widely accepted and recognized definition.


A Risk-Based Approach

In what is perhaps one of the more innovative aspects of the legislation, the AI Act adopts a risk-based approach, categorizing AI systems under four rubrics based on the level of threat they pose: unacceptable risk, high risk, limited risk, and minimal risk. Minimal-risk applications, “such as AI-enabled video games or spam filters,” and limited-risk systems, such as chatbots intended to interact directly with people, are governed by less stringent regulations; the same cannot be said of unacceptable-risk and high-risk systems.

Unacceptable risk applications include practices such as social scoring by governmental bodies, exploitation of vulnerable groups, predictive policing, and AI-enabled indiscriminate surveillance practices, which according to the EU constitute a “clear threat to the fundamental rights of people.” AI systems that may lead to these applications are banned within the EU, and violations are punishable by hefty fines of “up to 35 million Euros or 7% of global annual revenue (whichever is higher), [compared to] €15 million or 3% for violations of the AI act’s obligations and €7,5 million or 1,5% for the supply of incorrect information.”
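
To put the tiered penalty arithmetic in concrete terms, the short Python sketch below computes the fine cap for each tier as the higher of a fixed amount and a percentage of global annual revenue. A caveat: the quoted text states the “whichever is higher” rule explicitly only for the top tier, so extending it to all three tiers here is an assumption, and the tier names and the max_fine helper are illustrative rather than drawn from the Act.

# Illustrative sketch of the AI Act's tiered penalty caps (simplified).
# Figures follow the quoted text above; applying the "whichever is
# higher" rule to every tier is an assumption, since the quote states
# it only for the top tier.

FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # banned (unacceptable-risk) uses
    "obligation_violation": (15_000_000, 0.03),   # other AI Act obligations
    "incorrect_information": (7_500_000, 0.015),  # supplying incorrect information
}

def max_fine(tier: str, global_annual_revenue: float) -> float:
    """Return the fine cap for a tier: the higher of the fixed
    amount and the percentage of global annual revenue."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_annual_revenue)

# Example: a provider with EUR 1 billion in global annual revenue
print(max_fine("prohibited_practice", 1_000_000_000))    # 70,000,000.0 (7% exceeds EUR 35m)
print(max_fine("incorrect_information", 1_000_000_000))  # 15,000,000.0 (1.5% exceeds EUR 7.5m)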

Protection of Fundamental Rights

The legislation goes some way towards addressing the main concerns of civil society, which had consistently lobbied against allowing such practices and AI-enabled technologies to permeate the public space. Despite these concessions, civil society and human rights advocates have expressed serious reservations regarding the final draft approved by the European Parliament. At the heart of their criticism lie the exceptions the legislation provides for law enforcement. Organizations like Article 19, for example, have taken issue with the AI Act’s failure “to completely ban the use, among others, of emotional recognition technologies and real-time remote biometric identification in publicly-accessible spaces,” arguing that no allowance should be made for law enforcement’s use of these technologies, whether in real time or after the fact.

Others lament that the EU’s “list of high-risk systems fails to capture the many AI systems used in the migration context and which, eventually, will not be subjected to the obligations of this Regulation. The list excludes dangerous systems such as biometric identification systems, fingerprint scanners, or forecasting tools used to predict, interdict, and curtail migration.” The concern here arises from the fact that, while the AI Act foresees that all AI systems, regardless of their risk categorization, must comply with the Regulation between six months and two years from the law’s entry into force, “AI used as part of EU large-scale databases in migration, such as Eurodac, the Schengen Information System, and ETIAS will not have to be compliant with the Regulation until 2030.”

National security exceptions constitute a third and related source of concern for many who see in the legislation a carte blanche for national security institutions to carve out space for the use of “the most nefarious kind of AI, one which invades the right to privacy of often the most marginalized and vulnerable groups.” Worse, there is a perception that by surrendering to pressure from law enforcement and national security voices, the EU AI Act has created a “parallel legal framework when AI is deployed by law enforcement, migration, and national security authorities,” which is likely to weaken the EU’s normative and institutional power in global AI governance.

General Purpose AI Systems and Foundation Models

One area where the new legislation is likely to have a significant, and potentially market-defining, impact is its demands for data transparency. Negotiators of the EU AI Act insisted that “in order to increase transparency on the data that is used in the pre-training and training of general-purpose AI models, including text and data protected by copyright law, it is adequate that providers of such models draw up and make publicly available a sufficiently detailed summary of the content used for training the general-purpose model.” Companies like OpenAI, already under pressure from US domestic regulators as well as from competitors like Elon Musk’s xAI over its lack of transparency and its insistence on keeping its code closed, are likely to feel the effects of these requirements most directly.

Additionally, foundation models considered high-impact or high-risk due to their above-average performance parameters, and which are judged to pose a systemic risk along the value chain, are subject to a stricter regime that includes strict transparency obligations ahead of deployment. The Fundamental Rights Impact Assessment for high-risk AI systems, reflected in Article 29a of the EU AI Act, requires that “deployers that are bodies governed by public law or private operators providing public services and operators deploying high-risk systems perform an assessment of the impact on fundamental rights that the use of the system may produce.”

Taken together, the data transparency requirement and the fundamental rights impact assessment for high-risk AI systems challenge existing practices and operating procedures among the world’s largest companies developing general-purpose AI systems. Companies will need to adopt more rigorous and transparent data governance frameworks, ensuring the traceability and accountability of AI systems throughout their lifecycle. They may also need to allocate additional resources to the ongoing monitoring and evaluation of AI systems’ impact on fundamental rights, prompting a reevaluation of their risk management strategies and potentially influencing their business models to prioritize transparency and ethical integrity.

EU AI Office

Finally, one of the main concerns regarding the EU AI Act is the question of relevance and responsiveness. On this count, it is worth noting that the European Commission released its proposal in April 2021 in response to a series of EU-wide initiatives, including a 2017 European Council call for a “sense of urgency to address emerging trends,” including “issues such as artificial intelligence …, while at the same time ensuring a high level of data protection, digital rights and ethical standards,” followed by additional calls from various EU institutions in 2019, 2020, and 2021.

Remarkably, however, many of these developments were soon overtaken by the impressive performance of OpenAI’s ChatGPT, which turned the attention of policymakers and the general public towards the promise of large language models, rendering moot much of the regulatory framework EU officials had envisioned. OpenAI’s unexpected and spectacular growth thus serves as a reminder of the disruptive nature of emerging technologies, underlining the difficulty of anticipating the appropriate regulatory framework for what are, in effect, unknown unknowns.

Here, the EU’s risk-based approach is likely to serve it well, at least in the short term, even though most EU officials and observers have serious concerns about whether any of the legislation will still be relevant by the time the AI Act comes fully into effect, given the pace of technological advances. In part to mitigate these concerns, the legislation establishes, under Article 55b, the European AI Office, a European AI Board, and a Scientific Panel of Independent Experts, among other bodies.

Conclusion

The EU AI Act represents a significant step forward in regulating AI technology within the European Union and beyond. Its importance lies in its potential to safeguard fundamental rights, ensure consumer protection, and foster innovation while addressing the risks associated with AI deployment. The Act introduces a comprehensive framework that promotes transparency, accountability, and trust in AI systems.

However, the EU AI Act, while aiming to regulate artificial intelligence for the protection of civil liberties, introduces concerns regarding surveillance and national security exceptions. The Act’s broad allowances for national security purposes could lead to unchecked government surveillance, potentially infringing upon privacy rights and denying fundamental rights to the most vulnerable segments of our society. The Act’s vague language and loopholes may facilitate the development of opaque surveillance systems, undermining transparency and accountability while prioritizing security over fundamental freedoms.

Despite these challenges, the EU AI Act represents a crucial foundation for shaping the future of AI governance, both within the EU and globally. Its emphasis on balancing innovation with ethical considerations reflects a proactive approach towards harnessing the potential benefits of AI technology while mitigating its risks. Moving forward, it will be essential to monitor the implementation of the Act closely, ensuring that it strikes the right balance between fostering innovation and protecting societal values and rights.
