With the AI Act, the EU has taken the first steps towards using and developing AI in a responsible manner
On 9 December 2023, the European Parliament and the Council of the European Union reached a provisional agreement on the European Union Artificial Intelligence Act (EU AI Act). While the final text is yet to be published, the broad contours have been set for what could prove to be a landmark in the history of AI regulation. With this, the EU has become one of the first AI regulators in the world.
In recent years, rapid advances in AI have raised questions about the preparedness of governments and regulatory agencies to safeguard citizens’ rights and well-being. Industry leaders have also expressed concern over the potential of AI to disrupt our lives in the long term. AI applications in fields like health and education, among others, have potentially far-reaching implications. On top of that, the AI industry is itself a trillion-dollar business opportunity of which governments want a share, and unlike the internet, AI is not a product of government laboratories. The European Commission first proposed a draft regulation for AI in the EU in April 2021. However, with the release of OpenAI’s ChatGPT in 2022, alongside other technological advances, Brussels felt the need to revisit the original draft. Thus, after three days of intense negotiations, the three institutions of the Union arrived at a compromise over the shape of the bloc’s AI regulation for the foreseeable future, aiming to ensure human oversight over AI.
It has been reported that the definition of AI proposed by the Act leans towards the OECD’s. The OECD’s updated definition of AI systems reads: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”
The Act follows a “risk-based” approach, classifying AI systems into four categories based on the risk they pose: a) unacceptable risk (social scoring; biometric identification, whether real-time or remote; biometric categorisation that deduces personal preferences and beliefs; and cognitive manipulation); b) high risk (AI systems used in domains like transport and education, as well as those used in products covered by the EU’s product safety legislation); c) general purpose and generative AI (systems like OpenAI’s ChatGPT); and d) limited risk (like deepfakes).
Systems categorised as unacceptable risk will be banned, while those termed high risk will undergo a compulsory fundamental rights impact assessment before being released into the market and will carry a CE mark. General Purpose AI (GPAI) systems, and the models on which they are based, must meet transparency obligations, including adhering to EU copyright law, preparing technical documentation, and releasing summaries of the material used to train them. More advanced GPAI systems will be subject to stricter requirements. Limited-risk systems face no restrictions beyond a recommendation to adopt voluntary codes of conduct.
However, there are certain exceptions. The use of unacceptable-risk AI systems will be allowed only in the case of very serious crimes, subject to judicial approval and a defined list of offences. There are also areas where the Act will not apply at all: military or defence; systems used solely for research and innovation; and systems used by individuals for non-professional purposes.
In terms of governance, the Act is expected to be enforced by competent national agencies in each of the 27 member states. At the European level, the European AI Office will be tasked with the administration and enforcement of the Act, while a European AI Board, composed of member states’ representatives, will serve in an advisory capacity.
To help small and medium enterprises (SMEs) grow, provisions for “Regulatory sandboxes” and “real-world testing” have been included.
Citizens have also been given the right to seek redressal under the AI regime. They will be able to file complaints and “receive explanations about decisions based on high-risk AI systems that impact their rights.” Violations of the rules will attract penalties ranging from 7.5 million Euros to 35 million Euros (or a percentage of turnover, whichever is higher). However, smaller companies will be given a respite, as their fines will be capped.
A careful reading of the initial 2021 draft and the press releases after the agreement was reached shows a number of positives. First, the risk-based approach that the Act follows is an innovative way of dealing with the myriad challenges that AI is expected to pose. Second, it balances the needs of law enforcement with citizens’ rights. Third, the provision of a fundamental rights impact assessment keeps citizens’ welfare at the forefront, and empowering citizens to seek redressal strengthens their hand further. Fourth, provisions that help SMEs grow are also commendable.
Though it has many praiseworthy features, the EU’s AI Act has also drawn criticism and concern. Fears of over-regulation have been voiced from different quarters about some of the Act’s stringent provisions (like high fines), with observers opining that they might stifle innovation. The Act also envisions setting up a European AI Office and regulators in all the member states, which could prove difficult given the limited room for budgetary manoeuvre at present.
The draft text of the Act still remains to be finalised. This opens another set of possibilities, as the process could stretch beyond the European parliamentary elections scheduled for June 2024 and might lead to tinkering with the agreed-upon provisions. Final approval from member states’ representatives in the Council will also be sought after the draft text takes shape. The Act might get derailed if member states are not satisfied with its final provisions or see them as infringing on their own powers. Moreover, the Act will not fully come into force before 2026. Considering the speed of AI development, there is a possibility that the Act might by then be found wanting in certain areas.
The regulation of open-source AI software is another significant concern, given its potential for misuse.
With the AI Act, the EU has taken the first steps towards using and developing AI in a responsible manner. This places it ahead of peers like the United States and the United Kingdom when it comes to regulating AI. The legislation has the potential to become a benchmark in AI regulation, as the GDPR did for data privacy and protection. However, the EU needs to make sure that the Act, in its final form, balances the protection of citizens’ rights with the need to spur innovation, while retaining sufficient flexibility to remain relevant amid the speed of AI development.
Abhishek Khajuria is a Research Intern with the Strategic Studies Programme at the Observer Research Foundation
The views expressed above belong to the author(s).