Philipp Rosenauer
Head Data Privacy | ICT | Implementation, PwC Switzerland
Artificial intelligence (AI) is an emerging general-purpose technology: a highly powerful family of computer programming techniques. The uptake of AI systems has a strong potential to bring societal benefits and economic growth and to enhance EU innovation and global competitiveness. However, in certain cases, the use of AI systems can create problems. The specific characteristics of certain AI systems may create new risks related to (1) safety and security and (2) fundamental rights, and increase the probability or intensity of existing risks. AI systems also (3) make it hard for enforcement authorities to verify compliance with and enforce the existing rules. This set of issues in turn leads to (4) legal uncertainty for companies, (5) potentially slower uptake of AI technologies by businesses and citizens due to a lack of trust, as well as (6) regulatory responses by national authorities to mitigate possible externalities that risk fragmenting the internal market.
The main objective of the EU Artificial Intelligence Act (AIA) is to ensure that AI systems within the EU are safe and comply with existing laws on fundamental rights, norms and values. The AIA defines AI systems broadly, covering logic- or rule-based information processing (such as expert systems) as well as probabilistic algorithms (such as machine learning). Like the GDPR, it applies to all firms wishing to operate AI systems within the EU, irrespective of whether they’re based in the EU or not. The AIA adopts a risk-based approach to regulating AI systems: depending on their perceived risk, some AI systems are banned outright, while others aren’t regulated at all.
If you’re developing or using software that’s built with one or more of these techniques, you might be in scope of the AIA. The AIA distinguishes four risk categories:
First, there are ‘prohibited AI practices’, which are banned outright. This includes a very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes).
Second, there are ‘high-risk AI systems’. In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high risk depends not only on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used, such as:
| Area | AI system classified as ‘high risk’ |
| --- | --- |
| Biometric identification and categorisation of natural persons | AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons |
| Management and operation of critical infrastructure | AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity |
| Education and vocational training | AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions |
| | AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions |
| Employment, workers management and access to self-employment | AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests |
| | AI systems intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behaviour of persons in such relationships |
| Access to and enjoyment of essential private services and public services and benefits | AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke or reclaim such benefits and services |
| | AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small-scale providers for their own use |
| | AI systems intended to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by firefighters and medical aid |
| Law enforcement | AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences |
| | AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person |
| | AI systems intended to be used by law enforcement authorities to detect deep fakes |
| | AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences |
| | AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups |
| | AI systems intended to be used by law enforcement authorities for profiling of natural persons in the course of detection, investigation or prosecution of criminal offences |
| | AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data |
| Migration, asylum and border control management | AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person |
| | AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a member state |
| | AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features |
| | AI systems intended to assist competent public authorities in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status |
| Administration of justice and democratic processes | AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts |
High-risk AI systems also include safety components of products covered by sectoral Union legislation. They’ll always be considered high risk when subject to third-party conformity assessment under that sectoral legislation.
Third, there are ‘limited-risk AI systems’. AI systems under this category are subject to transparency obligations to allow individuals interacting with the system to make informed decisions. This is the case for a chatbot, where transparency means letting the user know they’re speaking to an AI-empowered machine. Further examples may include spam filters, AI-enabled video and computer games, inventory management systems or customer and market segmentation systems. Providers need to ensure that natural persons are informed that they’re interacting with an AI system (unless this is obvious from the circumstances and the context of use).
Fourth, there are ‘low-risk AI systems’; they’re low risk because they neither use personal data nor make any predictions that influence human beings. According to the European Commission, most AI systems will fall into this category. Typical examples are industrial applications in process control or predictive maintenance. Here there’s little to no perceived risk, and as such no formal requirements are stipulated by the AIA.
It’s important to note that the requirements stipulated in the AIA apply to all high-risk AI systems. However, an AIA-specific conformity assessment is only required for ‘standalone’ AI systems. For algorithms embedded in products where sector regulations apply, such as medical devices, the requirements stipulated in the AIA will simply be incorporated into the existing sectoral testing and certification procedures.
It’s important to determine at an early stage which risk category your AI system falls into. Depending on the classification, different legal implications apply.
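To make this triage step concrete, here is a minimal, purely illustrative Python sketch of how an internal assessment might map a use case to the four risk categories. The flag and function names (UseCase, classify, in_annex_high_risk_area, etc.) are our own simplifications, not terms defined in the AIA, and any real classification requires a legal review of the concrete use case.

```python
# Illustrative only: rough triage of an AI use case into the AIA's four
# risk categories. The flags are hypothetical simplifications of the legal test.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PROHIBITED = "prohibited AI practice"
    HIGH = "high-risk AI system"
    LIMITED = "limited-risk AI system"
    LOW = "low-risk AI system"


@dataclass
class UseCase:
    uses_prohibited_practice: bool   # e.g. social scoring by governments
    in_annex_high_risk_area: bool    # e.g. recruitment, credit scoring, biometrics
    interacts_with_humans: bool      # e.g. chatbot, emotion recognition, deepfakes


def classify(use_case: UseCase) -> RiskCategory:
    """Return the first matching risk category, from most to least restrictive."""
    if use_case.uses_prohibited_practice:
        return RiskCategory.PROHIBITED
    if use_case.in_annex_high_risk_area:
        return RiskCategory.HIGH
    if use_case.interacts_with_humans:
        return RiskCategory.LIMITED
    return RiskCategory.LOW


if __name__ == "__main__":
    cv_screening = UseCase(False, True, False)   # recruitment tool -> high risk
    print(classify(cv_screening).value)
```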
The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU. It can concern both providers (e.g. a developer of a CV screening tool) and users of high-risk AI systems (e.g. a bank buying this CV screening tool). It doesn’t apply to private, non-professional uses.
In general, the AIA distinguishes between the following roles: providers, importers, distributors and users of AI systems.
The following table provides an overview of which party has which obligations under the AIA:
| Legal Requirements | Providers | Importers | Distributors | Users |
| --- | --- | --- | --- | --- |
| Establishment of a risk management system | X | | | |
| Requirements regarding training, validation and testing data | X | | | |
| Technical documentation | X | X | X | |
| Record keeping | X | | | |
| Transparency and provision of information to users | X | | | |
| Human oversight | X | | | |
| Accuracy, robustness and cybersecurity | X | | | |
| Quality management system | X | | | |
| Conformity assessment | X | X | | |
| Registration obligation | X | | | |
| Information of national competent authority | X | | | |
| Affix CE marking | X | X | X | |
| Comply with instructions for use | | | | X |
| Consider relevance of input data | | | | X |
| Monitor operation of the system | | | | X |
| Record keeping for automatically generated logs | | | | X |
| Execution of data protection impact assessment | | | | X |
It’s important to note that this role concept isn’t a fixed one. There might be situations where an importer, distributor or any other third party is considered a provider, meaning that this party then also has to comply with the obligations of providers. Such a change in role takes place, for example, when that party places a high-risk AI system on the market or puts it into service under its own name or trademark, modifies the intended purpose of a high-risk AI system already placed on the market, or makes a substantial modification to such a system.
First, you need to check whether your AI system affects people located in the EU.
Second, you need to check whether you’re considered a provider, importer, distributor or just a user of the AI system.
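As an illustration only, the obligations table above could be encoded as a simple lookup for internal compliance checklists. The dictionary below mirrors the table's allocation of obligations; the structure and function names are our own and have no basis in the AIA itself.

```python
# Illustrative lookup that encodes the obligations table above so that a
# per-role compliance checklist can be generated programmatically.
OBLIGATIONS_BY_ROLE = {
    "provider": [
        "Establish a risk management system",
        "Meet requirements on training, validation and testing data",
        "Draw up and maintain technical documentation",
        "Ensure record keeping",
        "Ensure transparency and provide information to users",
        "Design for human oversight",
        "Ensure accuracy, robustness and cybersecurity",
        "Operate a quality management system",
        "Carry out the conformity assessment",
        "Register the high-risk AI system",
        "Inform the national competent authority",
        "Affix the CE marking",
    ],
    "importer": [
        "Verify the technical documentation",
        "Verify that the conformity assessment has been carried out",
        "Verify the CE marking",
    ],
    "distributor": [
        "Verify the technical documentation",
        "Verify the CE marking",
    ],
    "user": [
        "Comply with the instructions for use",
        "Consider the relevance of input data",
        "Monitor the operation of the system",
        "Keep automatically generated logs",
        "Execute a data protection impact assessment where required",
    ],
}


def checklist(role: str) -> list:
    """Return the obligations recorded for a given role."""
    return OBLIGATIONS_BY_ROLE[role.lower()]


if __name__ == "__main__":
    for item in checklist("user"):
        print("-", item)
```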
The AI Act will likely have a significant impact on Swiss companies that provide or use AI systems, even if they don’t have a legal presence in the EU. In fact, similar to the EU General Data Protection Regulation (‘GDPR’), the draft AI Act has an extraterritorial effect and thus also applies to organisations outside the EU, essentially to providers placing AI systems on the market or putting them into service in the EU, irrespective of where those providers are established, as well as to providers and users of AI systems located outside the EU where the output produced by the system is used in the EU.
Consequently, the AI Act in principle applies if an AI system or its output is used within the EU. For example, a Swiss bank using a chatbot to answer credit-related enquiries from EU-based individuals, or using AI systems to check the creditworthiness of individuals in the EU, would likely trigger the application of the AI Act.
The AIA requires providers of high-risk AI systems to conduct conformity assessments before placing their product or service on the European market. A conformity assessment is a process carried out to demonstrate whether specific consumer protection and product integrity requirements are fulfilled and, if not, what remedial measures can be implemented to satisfy them. Occasionally, such conformity assessments may need to be performed with the involvement of an independent third-party body. But for most AI systems, conformity assessments based on ‘internal control’ will be sufficient. However, while the AIA stipulates a wide range of procedural requirements for conformity assessments based on internal control, it doesn’t provide any detailed guidance on how these requirements should be implemented in practice.
If an AI system falls under the AIA, then the actions needed are determined by the level of risk embedded in the respective system. The initial question for providers is therefore to determine that risk level in light of the types and categories set out in the AIA.
In contrast to AI systems embedded in such products, ‘standalone’ high-risk AI systems have to undergo an AI-specific conformity assessment before they can be placed on the EU market.
There are two ways to conduct such conformity assessments: conformity assessment based on internal controls and, in some cases, a conformity assessment of the quality management system and technical documentation conducted by a third party, referred to as a ‘notified body’. These are two fundamentally different conformity assessment procedures. The type of procedure required for a specific AI system depends on the use case, in other words the purpose for which it is employed.
In short, high-risk AI systems that use biometric identification and categorisation of natural persons must undergo a third-party conformity assessment. For most high-risk AI systems, however, a conformity assessment using internal controls will be sufficient.
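The following minimal sketch illustrates that decision under the simplifying assumptions of this summary. The class and field names are hypothetical, and an actual determination of the required procedure calls for a case-by-case legal assessment.

```python
# Illustrative helper for the two conformity assessment routes described above:
# third-party assessment by a notified body for remote biometric identification
# and categorisation systems, internal control for most other high-risk systems.
from dataclasses import dataclass


@dataclass
class HighRiskSystem:
    name: str
    remote_biometric_identification: bool  # the area that triggers third-party assessment here


def conformity_route(system: HighRiskSystem) -> str:
    """Return which conformity assessment procedure is likely required."""
    if system.remote_biometric_identification:
        return "assessment of the quality management system and technical documentation by a notified body"
    return "conformity assessment based on internal control"


if __name__ == "__main__":
    print(conformity_route(HighRiskSystem("CV screening tool", False)))
    print(conformity_route(HighRiskSystem("Building access face recognition", True)))
```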
The AIA itself doesn’t specifically stipulate how to execute a conformity assessment based on internal control; it only sets out high-level procedural requirements.
Providers of AI systems that interact directly with humans – chatbots, emotion recognition, biometric categorisation and content-generating (‘deepfake’) systems – are subject to further transparency obligations. In these cases, the AIA requires providers to make it clear to users that they’re interacting with an AI system and/or are being provided with artificially generated content. The purpose of this additional requirement is to allow users to make an informed choice as to whether or not to interact with an AI system and the content it may generate.
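As a simple illustration of this transparency obligation, the hedged sketch below wraps a hypothetical chatbot so that the first reply discloses that the user is interacting with an AI system. Names such as DisclosingChatbot are our own and not prescribed by the AIA.

```python
# Illustrative only: a minimal chatbot wrapper that discloses the use of AI
# at the start of a conversation, in the spirit of the AIA's transparency duty.
AI_DISCLOSURE = (
    "Please note: you are chatting with an AI-powered assistant, not a human agent."
)


class DisclosingChatbot:
    def __init__(self, answer_fn):
        self._answer_fn = answer_fn   # the underlying AI model or rule engine
        self._disclosed = False       # track whether the user has been informed

    def reply(self, user_message: str) -> str:
        # Inform the user once, at the start of the conversation, unless it is
        # already obvious from the circumstances and context of use.
        if not self._disclosed:
            self._disclosed = True
            return f"{AI_DISCLOSURE}\n{self._answer_fn(user_message)}"
        return self._answer_fn(user_message)


if __name__ == "__main__":
    bot = DisclosingChatbot(lambda msg: f"(model answer to: {msg!r})")
    print(bot.reply("Can I increase my credit limit?"))
    print(bot.reply("What documents do I need?"))
```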
You need to check whether a conformity assessment based on internal control is sufficient for your AI system or if you need to involve an independent third party.
The penalties set out in the AIA for non-conformance are very similar to those set out in the GDPR. The main thrust is for penalties to be effective, proportionate and dissuasive. The sanctions cover three main levels: fines of up to EUR 30 million or 6% of total worldwide annual turnover (whichever is higher) for prohibited AI practices and violations of the data governance requirements; fines of up to EUR 20 million or 4% for non-compliance with any other requirement or obligation of the AIA; and fines of up to EUR 10 million or 2% for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities.
It should be noted that the enforcement of the AIA sits with the competent national authorities. Individuals adversely affected by an AI system may have direct rights of action, for example concerning privacy violations or discrimination.
It isn’t yet clear by when the AIA will enter into force and become applicable. However, the political discussions are already quite advanced. On 20 April 2022, the Draft Report for the Artificial Intelligence Act was published. The lead committees have been the Committee for Internal Market and Consumer Protection (IMCO) and the Committee on Civil Liberties, Justice and Home Affairs (LIBE).
The political discussions around the AIA are likely to be finalised by Q3/Q4 2022.
We can support you in particular in the following areas around the AIA:
Designing a risk management system:
For high-risk AI systems, a risk management system needs to be established. This system needs to consist of an iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It needs to include the identification and analysis of known and foreseeable risks, the estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose or under reasonably foreseeable misuse, the evaluation of other risks based on data gathered from post-market monitoring, and the adoption of suitable risk management measures.
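Purely as an illustration of such an iterative process, the sketch below models a lightweight risk register that is revisited throughout the lifecycle. The class and method names, as well as the severity scale, are assumptions of ours rather than requirements of the AIA.

```python
# Illustrative skeleton of an iterative risk management loop for a high-risk
# AI system: identify, evaluate, mitigate, then feed post-market findings back in.
from dataclasses import dataclass, field


@dataclass
class Risk:
    description: str
    severity: int                      # e.g. 1 (low) to 5 (high); the scale is an assumption
    mitigations: list = field(default_factory=list)


@dataclass
class RiskManagementSystem:
    risks: list = field(default_factory=list)

    def identify(self, description: str, severity: int) -> Risk:
        """Record a known or foreseeable risk."""
        risk = Risk(description, severity)
        self.risks.append(risk)
        return risk

    def evaluate(self) -> list:
        """Flag risks above an internal severity threshold for treatment."""
        return [r for r in self.risks if r.severity >= 3]

    def mitigate(self, risk: Risk, measure: str) -> None:
        """Adopt a risk management measure for an identified risk."""
        risk.mitigations.append(measure)

    def review(self, post_market_findings: list) -> None:
        """Feed post-market monitoring data back into the next iteration."""
        for finding in post_market_findings:
            self.identify(finding, severity=3)


if __name__ == "__main__":
    rms = RiskManagementSystem()
    r = rms.identify("Bias against certain applicant groups in CV screening", severity=4)
    rms.mitigate(r, "Re-balance training data and add fairness tests")
    rms.review(["Drift observed in production predictions"])
    print([risk.description for risk in rms.evaluate()])
```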
Creating the technical documentation:
This documentation for high-risk AI systems needs to be created before the system is placed on the market or put into service, and it needs to be kept up to date. The technical documentation needs to demonstrate how the AI system complies with the requirements of the AIA and provide national competent authorities as well as notified bodies with all the information necessary to assess the system’s compliance.
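As a hedged illustration, an internal readiness check could track whether typical documentation sections have been drafted. The section names below are a rough summary of commonly expected content, not the authoritative list from the AIA's annex on technical documentation.

```python
# Illustrative checklist for assembling technical documentation before a
# high-risk AI system is placed on the market. Section names are assumptions.
REQUIRED_SECTIONS = [
    "General description of the AI system and its intended purpose",
    "Description of the system's elements and its development process",
    "Information on monitoring, functioning and control of the system",
    "Description of the risk management system",
    "Description of relevant changes made through the lifecycle",
    "EU declaration of conformity",
    "Description of the post-market monitoring plan",
]


def missing_sections(documentation: dict) -> list:
    """Return the required sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not documentation.get(s)]


if __name__ == "__main__":
    draft = {"General description of the AI system and its intended purpose": "..."}
    for section in missing_sections(draft):
        print("Missing:", section)
```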
Designing transparency and information requirements:
High-risk AI systems need to be designed so that their operation is sufficiently transparent for users to interpret the system’s output. Information requirements include, for example, the instructions for use, the identity of the provider as well as the characteristics, capabilities and limitations of performance of the high-risk AI system.
Designing a quality management system:
Providers of high-risk AI systems need to put in place a quality management system. It needs to be documented in the form of written policies, procedures and instructions, covering, for example, a strategy for regulatory compliance, procedures for the design, development, testing and validation of the system, technical specifications and standards to be applied, systems and procedures for data management, the risk management system, and procedures for post-market monitoring and the reporting of serious incidents.
Conformity assessment procedure based on internal controls:
We support you with the execution of a conformity assessment procedure based on internal controls.
Besides that, our global Responsible Artificial Intelligence Toolkit is a suite of customisable frameworks, tools and processes designed to help organisations harness the power of AI in an ethical, unbiased and responsible manner – from strategy to execution.
With this offering, we can help you ensure the responsible use of AI at different levels of technical depth.
Lorena Rota
Manager, MLaw, Data Privacy & Security Healthcare, PwC Switzerland
Tel.: +41 58 792 2750