After the European Parliament passed the European Union’s Artificial Intelligence Act (AIA) on Wednesday, 14 May, all eyes are once again on Brussels, where the details of the AIA will be negotiated. The EU could become the first jurisdiction worldwide to regulate the use of AI, potentially setting global standards.
The first draft of the AIA, published in April 2021, already sparked strong reactions. As AI is increasingly applied across sectors, companies, civil society organisations and government institutions alike have a strong interest in the final design of the AIA. Passing the European Parliament was another important step towards concluding a process that began with an expert group in 2018.
In short, the AIA categorises AI systems along a four-tier risk framework, ranging from minimal and limited risk up to high and unacceptable risk. Regulations and requirements are set according to this risk assessment. For minimal and limited-risk technologies, such as spam filters and AI-enabled video games (minimal) as well as chatbots and deepfakes (limited), the rules mainly concern transparency obligations.
High-risk technologies, in contrast, will be strictly regulated and must meet several requirements. This category covers technologies in areas such as critical infrastructure (for example, autonomous vehicles), education (grade prediction) and employment (recruitment tools). Requirements for such systems range from human oversight and risk management systems to data documentation and governance with respect to training and validation data.
Technologies deemed to manipulate human behaviour, such as social scoring systems or real-time remote biometric identification (RBI) systems, fall into the highest risk category and are banned outright, with narrow exceptions for use against terrorism and other serious crimes.
In addition to the risk categories and their regulations, the AIA provides for the establishment of an EU AI Board, which would be responsible for implementing the regulations and would provide recommendations and guidance to national authorities.
But does the AIA meet its objective “to boost research and industrial capacity while ensuring safety and fundamental rights”? Economic, state and civil society actors have raised several points of criticism concerning different aspects of the Act.
One major point concerns the lack of enforcement mechanisms. Legal scholars criticise that compliance would depend almost exclusively on the self-assessment of the actors concerned.
Additionally, some actors claim that the AIA places excessive burdens on small and medium-sized enterprises, which could reduce investment in AI development. Others, in contrast, regard the establishment of standards as a catalyst for investment: transparent regulations not only offer a sense of security to investors but also create incentives for those who want to support ethical AI.
So far, the AIA focuses on protecting individuals from harm and thus falls short of recognising risks for society at large. Several non-profit organisations emphasise this point, highlighting AI’s ability to influence voting behaviour or foster the spread of mis- and disinformation. Human rights organisations, moreover, criticise the AIA for not going far enough in protecting marginalised groups, who are disproportionately harmed by the use of AI. Amnesty International, for instance, specifically calls on EU lawmakers to ban not only real-time RBI but also retrospective RBI tools used to predict crimes or identify individuals. These technologies, according to Amnesty International, can even lead to refugees being denied the right to asylum.
As the first legislative text addressing the regulation of AI, the AIA is being scrutinised across the globe. The points of criticism discussed above are valid and need to be considered by EU policymakers during the final crafting of the Act. Nevertheless, the AIA should be regarded as a first piece of legislation that will most likely be followed by more as the technology evolves. At this stage, the full scope of its impact cannot be foreseen: it depends both on the final design and, especially when it comes to investment, on the reaction of economic actors. Moreover, real-life and unintended consequences will only become visible after the Act enters into force. Policymakers should therefore remain open to adjusting it if need be. While attention may now shift elsewhere after the AIA’s passage through Parliament, the process towards an ethical and innovation-boosting approach to AI is just getting started.
At European Diplomats, we utilise the power of AI to enhance efficiency and optimise our productivity. We recognise that the ethical use of AI is essential to foster innovation within Europe and worldwide, and we keep up with the latest technological and regulatory developments to deliver the best results for our clients.