On 1st August 2024, the European Regulation on Artificial Intelligence (in Spanish, “Reglamento Europeo de Inteligencia Artificial”, hereinafter the “RIA”) entered into force. The RIA aims to establish a uniform legal framework across all member states of the European Union (hereinafter, the “EU”) that regulates the development and application of artificial intelligence (hereinafter, “AI”) and mitigates the risks arising from its use.

The EU was a pioneer in the regulation of this technology. The regulatory process started in April 2021 with the European Commission’s proposal of the first EU regulatory framework and the subsequent agreement of the Council and the Parliament on its text (more on this issue can be found in our article “NEW EU AGREEMENT REGARDING THE REGULATION OF ARTIFICIAL INTELLIGENCE”). The proposal finally materialised in March 2024 with the adoption of the RIA by the European Parliament, followed by its approval by the Council in May 2024 and, now, its entry into force.

We will now describe its scope of application, the main legal obligations it imposes on companies in the European Union (public and private), the provisions that apply from its entry into force, and those that will become applicable in the coming months.

Scope of application

Like the General Data Protection Regulation (GDPR), the RIA has an extraterritorial scope, i.e., its application does not depend solely on the location of those marketing the AI, but also on the nature of the AI system or model and on the location of the information or product generated by the AI system.

In particular, according to article 2 thereof, the RIA applies to: (i) providers placing on the market or putting into service AI systems or general-purpose AI models in the EU, whether or not such providers are located in an EU member state; (ii) deployers of AI systems established or located in the EU; (iii) providers and deployers of AI systems whose AI-generated output is used in the EU (whether or not they are located or established in a third country); (iv) importers and distributors of AI systems; (v) manufacturers who place on the market or put into service an AI system together with their product and under their own name or trademark; (vi) authorised representatives of AI system providers not established in an EU member state; and (vii) affected persons who are located in the EU.

Risk levels of AI systems

The entry into force of the RIA means that both public and private entities must comply with specific obligations, which depend mainly on the level of risk of the AI system or model they manage, with associated penalties that depend largely on the size of the provider of the AI system.

The RIA first defines an artificial intelligence system as a system with a certain degree of autonomy that, using data and inputs provided by humans or machines, infers how to achieve proposed objectives or responses. To do so, it uses techniques based on machine learning, reasoning or modelling, and this is the main characteristic that differentiates it from a mere computer program, a software system or a traditional programming approach. The RIA then establishes three levels of risk depending on the use the system makes of AI, plus a fourth category of unacceptable risk or prohibited practices. In this article, we will focus on the unacceptable-risk category and on high-risk systems, as the latter category imposes the most obligations on “deployers”, i.e. the persons or companies using this type of system for professional purposes.

Unacceptable risk or prohibited practices

The RIA prohibits any “unacceptable risk” AI system or model, including systems that use subliminal, manipulative or deceptive techniques to influence the behaviour of a person or group towards something harmful to them, and those that exploit the vulnerabilities of a given group or collective in order to alter their behaviour with harmful results. It also expressly prohibits “social scoring” systems, i.e., AI systems used to evaluate or classify individuals or groups according to their social behaviour, among others.

High-risk systems

The RIA considers an AI system to be high-risk when its use poses a significant danger to health, safety or fundamental rights. It also distinguishes between two subcategories of high risk: (i) systems linked to harmonised product safety legislation, which, due to their sectoral regulation, must undergo a special assessment (Annex I of the RIA); and (ii) systems which, due to the sector in which they are used or the specific use for which they are intended, the RIA considers to present a high risk (Annex III of the RIA).

Among the systems listed in Annex III of the RIA, the following are worth highlighting due to their scope: biometric identification systems; AI systems responsible for the security and management of critical infrastructure; systems used in education and vocational training, in the management of workers, and in essential public and private services; and those related to law enforcement and the administration of justice, among others.

However, even where a specific case falls within a high-risk category, for example because it uses biometric identification, the system will not be considered high-risk if the use of AI does not pose a significant risk of harm to health, safety or fundamental rights. A priori, therefore, the RIA seems likely to pose a number of problems when determining under what conditions a real risk of causing harm is understood to exist. In fact, the RIA provides that the Commission is to draw up an exhaustive list of specific examples of high-risk and low-risk AI systems in order to facilitate their classification.

Obligations on companies using high-risk AI systems

The main obligations imposed by the RIA on providers of high-risk AI systems include:

  • The implementation of a risk management system to identify and mitigate risks to health, safety and fundamental rights.
  • The proper management of training and test data to ensure the accuracy, relevance and statistical representativeness of the data, avoiding biases that negatively affect users.
  • Up-to-date technical documentation demonstrating compliance with the requirements of the RIA, the minimum content of which is to be established by the European Commission.
  • The obligation to inform users about the capabilities of the system, its accuracy, scope and application, and, in the case of professional use in the workplace, the obligation to inform workers and their legal representatives of this fact.

The information obligation includes the AI provider’s duty of transparency towards its users or affected persons regarding the system’s operation and the results it generates. Examples are the obligation to inform affected persons when they are interacting directly with an AI system rather than with a real person, and the duty to (i) mark output that corresponds to artificially generated or manipulated content in order to distinguish it from content that has not been manipulated, and (ii) disclose and make public that content has been artificially generated or manipulated when it resembles real persons and may constitute an impersonation or “deepfake”.

  • The mandatory retention of all log files generated by the AI system that are under the provider’s control.
  • Mandatory human oversight of the AI system to minimise its risks, e.g., through verification by one or more natural persons of the results or output data generated by the AI system.
  • Ensuring the training and suitability of personnel responsible for the operation and use of AI systems.
  • The accuracy and cybersecurity of AI systems, ensuring that they are accurate, robust and secure, with the implementation of specific measures against data manipulation.
  • Collaboration with the authorities in reporting instances of non-compliance or detected risks, as well as the duty to provide all information and documentation they request.
  • Mandatory registration of the AI system in the EU database before putting it into service or placing it on the market, with the exception of public sector AI systems intended for border control, immigration, public order or asylum.
  • A mandatory fundamental rights impact assessment for public law bodies or private entities providing public services that use high-risk AI systems.

Furthermore, the RIA establishes that compliance with these obligations and requirements by providers and users of AI systems will be supervised by a “notifying authority” to be designated by each member state, in order to ensure and monitor proper risk management by those marketing AI. The RIA also creates a new institution, the European Artificial Intelligence Board, with one representative from each member state, to cooperate on and further develop a harmonised regulatory framework for AI in the EU.

Entry into force

The RIA entered into force on 1st August 2024, although it will not be fully applicable until up to 36 months after its entry into force, depending on the provision in question. In particular, the prohibition of AI systems posing an unacceptable risk applies from February 2025, 6 months after the entry into force of the RIA, while the obligations and requirements for providers of high-risk AI systems apply from August 2026 for the systems listed in Annex III and from August 2027, 36 months after entry into force, for those linked to the harmonised product legislation of Annex I. However, some obligations, such as those on governance and general-purpose AI models, apply from August 2025, 12 months after entry into force.

The RIA is the culmination of a slow and controversial regulatory process by the European institutions. However, its impact will depend on its adoption by the different EU member states, which are already showing initial divergences in its interpretation. It should also be noted that the RIA is not intended to regulate AI exhaustively; it will be complemented by two other legislative initiatives still in the pipeline: (i) the proposal for a Directive on civil liability for AI, the main objective of which is to regulate non-contractual civil liability for damage caused by AI systems; and (ii) the proposal for a Directive on liability for defective products.

Julio González

Vilá Abogados

For more information, please contact:

va@vila.es

23rd August 2024