The EU Artificial Intelligence Act is in its final stage of negotiations, and a consolidated legal text is expected in February/March 2024. To provide a first understanding of the new rules, we have summarized the most important aspects below.
The EU AI Act is the world's first comprehensive legal framework for artificial intelligence (AI). It aims to address the risks and opportunities of AI for health, safety, fundamental rights, democracy, rule of law and the environment in the EU. It also seeks to foster innovation, growth and competitiveness in the EU's internal market for AI.
AI is a rapidly developing technology that can bring significant benefits to society and the economy, but also poses new challenges and risks that need to be addressed in order to avoid undesirable outcomes. For example, some AI systems may be opaque, biased, inaccurate or harmful to users or third parties. Therefore, the EU has decided to act as one to regulate the use of AI in a human-centric and proportionate manner based on its values and principles.
It will apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU.
It can concern both providers (e.g. the developer of a CV-screening tool) and deployers of high-risk AI systems (e.g. a bank buying this screening tool). Importers of AI systems will also have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and that the system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.
In addition, certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models.
Providers of free and open-source models are exempted from most of these obligations. This exemption does not cover obligations for providers of general-purpose AI models with systemic risks.
Obligations also do not apply to research, development and prototyping activities preceding release on the market. Furthermore, the Regulation does not apply to AI systems that are used exclusively for military, defence or national security purposes, regardless of the type of entity carrying out those activities.
The Commission proposes a risk-based approach, with four levels of risk for AI systems as well as an identification of risks specific to general-purpose models:
- Unacceptable risk: a very limited set of particularly harmful uses of AI that contravene EU values (for example, social scoring by public authorities) will be banned;
- High risk: a limited number of AI systems that may create an adverse impact on people's safety or their fundamental rights will be subject to strict requirements before they can be placed on the EU market;
- Specific transparency risk: for systems such as chatbots, users must be made aware that they are interacting with a machine, and certain AI-generated content must be labelled as such;
- Minimal risk: all other AI systems can be developed and used under the existing legislation without additional legal obligations.
In addition, the AI Act considers systemic risks which could arise from general-purpose AI models, including large generative AI models. These can be used for a variety of tasks, and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are highly capable or widely used. For example, powerful models could cause serious accidents or be misused for far-reaching cyberattacks. Many individuals could be affected if a model propagates harmful biases across many applications.
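Purely for illustration, the tiered approach can be pictured as a simple mapping from risk level to the kind of obligation that attaches to it. The sketch below is our own simplification, not a structure defined by the Act; all names and the obligation mapping are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk levels, simplified for illustration."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict requirements and conformity assessment
    TRANSPARENCY = "transparency"  # specific disclosure obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no additional obligations under the Act

# Hypothetical obligation mapping; the real legal analysis is far more nuanced.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["ban on placing on the EU market or putting into service"],
    RiskTier.HIGH: ["conformity assessment", "risk and quality management",
                    "registration in the EU database for public-sector deployments"],
    RiskTier.TRANSPARENCY: ["inform users that they are interacting with an AI system"],
    RiskTier.MINIMAL: [],
}
```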
The AI Act provides a clear definition of high-risk AI systems as well as a methodology to identify them within the legal framework. The high-risk AI systems are either listed in Annex III of the proposal, which contains a number of use cases in specific sectors or areas of application, or fall under the scope of Annex II, which contains a list of existing EU harmonisation legislation that covers certain products or services which rely on AI.
The AI Act also empowers the Commission to amend or update these annexes by delegated acts, taking into account the advice of the European Artificial Intelligence Board and the scientific panel of independent experts, as well as the feedback from stakeholders and the public.
Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This allows them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). This assessment has to be repeated if the system or its purpose is substantially modified.
Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure their compliance with the new requirements as well as minimise risks for users and affected persons, even after a product is placed on the market.
High-risk AI systems that are deployed by public authorities or entities acting on their behalf will have to be registered in a public EU database.
General-purpose AI models, including large generative AI models, can be used for a variety of tasks. Individual models may be integrated into a large number of AI systems.
It is important that a provider wishing to build upon a general-purpose AI model has all the necessary information to make sure its system is safe and compliant with the AI Act. Therefore, the AI Act obliges providers of such models to disclose certain information to downstream system providers. Such transparency enables a better understanding of these models.
Model providers also need to have policies in place to ensure that they respect copyright law when training their models. In addition, some of these models could pose systemic risks because they are highly capable or widely used.
The AI Act can be amended by delegated and implementing acts, to add criteria for classifying general-purpose AI (GPAI) models as presenting systemic risks (delegated acts) as well as to amend the modalities for establishing regulatory sandboxes and the elements of the real-world testing plan (implementing acts).
The use of a high-risk AI system may have an impact on fundamental rights. Therefore, deployers which are bodies governed by public law or private operators providing public services, as well as operators providing high-risk systems, shall perform an assessment of the impact on fundamental rights and notify the respective national authorities of the results.
The assessment shall consist of:
- a description of the deployer's processes in which the high-risk AI system will be used;
- the period of time and frequency in which the high-risk AI system is intended to be used;
- the categories of natural persons and groups likely to be affected by its use in the specific context;
- the specific risks of harm likely to impact the affected categories of persons or groups;
- a description of the implementation of human oversight measures; and
- the measures to be taken in the event of the risks materialising.
If the deployer has already met this obligation through a data protection impact assessment, the fundamental rights impact assessment shall be conducted in conjunction with that data protection impact assessment.
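Purely as an illustration, the required content could be captured as a structured record in a compliance workflow. The following is a minimal sketch; the class and field names are our own and do not appear in the Act.

```python
from dataclasses import dataclass

@dataclass
class FundamentalRightsImpactAssessment:
    """Illustrative container for the FRIA elements described above.

    All field names are hypothetical simplifications of the Act's wording.
    """
    deployer_processes: str              # processes in which the high-risk AI system is used
    period_and_frequency: str            # intended period and frequency of use
    affected_groups: list[str]           # categories of persons/groups likely to be affected
    specific_risks_of_harm: list[str]    # risks likely to impact the affected categories
    human_oversight_measures: list[str]  # how human oversight is implemented
    risk_mitigation_measures: list[str]  # measures if the risks materialise
```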
Following its adoption by the European Parliament and the Council, the AI Act shall enter into force on the twentieth day following its publication in the Official Journal. It will become fully applicable 24 months after entry into force, with a staggered approach as follows:
- 6 months after entry into force: the prohibitions on AI practices posing unacceptable risks apply;
- 12 months: the obligations for general-purpose AI governance apply;
- 24 months: all remaining rules of the AI Act apply, including the obligations for high-risk systems defined in Annex III;
- 36 months: the obligations for high-risk systems defined in Annex II (AI embedded in regulated products) apply.
Each member state should designate one or more competent national authorities to supervise the application and implementation of the AI Act as well as carry out market surveillance activities.
To increase efficiency and to create an official point of contact towards the public and other counterparts, each member state should designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board.
Additional technical expertise will be provided by an advisory forum representing a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society and academia.
In addition, the Commission will establish a new European AI Office within the Commission, which will supervise general-purpose AI models, cooperate with the European Artificial Intelligence Board and be supported by a scientific panel of independent experts.
The mission of the AI Office is to develop expertise and capabilities in the field of artificial intelligence within the European Union and to contribute to the implementation of EU legislation on artificial intelligence in a centralised structure.
In particular, the AI Office shall enforce and supervise the new rules for general-purpose AI models. This includes drawing up codes of practice that flesh out the rules, classifying models with systemic risks, and monitoring the effective implementation of and compliance with the Regulation. The latter is facilitated by its powers to request documentation, conduct model evaluations, investigate in response to alerts and request providers to take corrective action.
When AI systems that do not respect the requirements of the Regulation are placed on the market or put into use, member states will have to lay down effective, proportionate and dissuasive penalties, including administrative fines, in relation to infringements, and communicate them to the Commission.
The Regulation sets out thresholds that must be taken into account:
- up to €35 million or 7% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements of the prohibited AI practices;
- up to €15 million or 3% for non-compliance with the other obligations of the Regulation;
- up to €7.5 million or 1.5% for the supply of incorrect, incomplete or misleading information.
More proportionate caps apply to SMEs and start-ups, for which the lower of the two amounts is decisive.
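As a purely arithmetical illustration of the "whichever is higher" mechanism, the ceiling for a given infringement could be computed as follows; the function and its names are hypothetical.

```python
def max_administrative_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum possible fine under the general rule: the fixed cap or the
    percentage of total worldwide annual turnover, whichever is higher.
    (For SMEs and start-ups, the lower of the two amounts applies instead.)"""
    return max(fixed_cap_eur, pct * turnover_eur)

# Example: a prohibited-practice infringement by a company with EUR 1bn turnover:
# max(EUR 35m, 7% of EUR 1bn = EUR 70m) gives a ceiling of EUR 70m.
print(max_administrative_fine(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```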
In order to harmonise national rules and practices in setting administrative fines, the Commission will draw up guidelines with advice from the Board.
Since EU institutions, agencies or bodies should lead by example, they will also be subject to the rules and to possible penalties; the European Data Protection Supervisor will have the power to impose fines on them.