CE mark for AI systems – extension of product safety law to artificial intelligence

For classic products that pose potential dangers, it goes without saying that the EU sets certain safety requirements. In future, this will also apply to AI products.

The Commission's proposal for the AI Act provides that potentially risky artificial intelligence (AI) systems must also bear a CE mark. The tried and tested approach of product safety law will thus be extended to AI systems: uniform requirements are intended, on the one hand, to support the free movement of goods in the internal market and, on the other, to set high safety standards. In future, providers will bear individual responsibility for assessing and ensuring the conformity of their systems themselves. At the same time, this is intended to give them the freedom they need to create innovations and new technologies.

 

I) Graduated approach

 In the draft, the Commission follows a graduated approach based on possible threats to EU values and fundamental rights:

  1. systems with unacceptable risk are prohibited;
  2. systems with high risk are subject to stringent regulatory requirements;
  3. low-risk systems are subject to special transparency requirements;
  4. other systems are permitted – subject to compliance with general laws.

 

II) Particularly dangerous AI systems – prohibited

Certain systems will be inadmissible from the outset because in the view of the Commission, they are too dangerous for the protected legal interests. These include, among others, AI systems that manipulate human behaviour in order to circumvent the free will of users, systems that exploit the weakness or vulnerability of a group of people, as well as systems that enable the authorities to evaluate social behaviour (social scoring).

 

III) High-risk AI systems

The Commission has placed special restrictions on AI systems that pose a high risk to the health and safety or fundamental rights of natural persons. These so-called high-risk AI systems can only be placed on the market in the EU if they meet certain requirements.

 

1) What constitutes high-risk AI?

 Whether an AI system is a so-called high-risk AI system does not depend on its design, but on its purpose. The purpose and the application modalities of the system are therefore decisive for the classification. There are two criteria for this:

  1. On the one hand, AI systems are considered high-risk if they are intended to be used as safety components of products covered by one of the CE standards listed in Annex II of the AI Regulation (including the Machinery Directive, the Toy Safety Directive, etc.) and those products are themselves subject to a prior third-party conformity assessment.
  2. On the other hand, stand-alone AI systems listed in Annex III of the AI Regulation are also high-risk AI systems. Annex III names systems which, in the Commission's view, are particularly likely to interfere with safety, health and fundamental rights. This applies, for example, to AI in the areas of biometric identification and categorisation of natural persons, management and operation of critical infrastructure, education and training, employment, human resources management and access to self-employment, credit and emergency services, etc. This is a very broad field, and the Commission reserves the right to add to Annex III – its way of ensuring that the categorisation keeps pace with further developments.
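For readers who think in code, the two-pronged test above can be reduced to a short decision sketch. This is purely illustrative: the class, the helper flags and the area list are hypothetical simplifications for this article, not terms defined in the AI Act, and the real legal assessment is of course more nuanced.

```python
from dataclasses import dataclass

# Hypothetical, abbreviated stand-in for the areas named in Annex III.
ANNEX_III_AREAS = {
    "biometric identification",
    "critical infrastructure",
    "education",
    "employment",
    "credit",
    "emergency services",
}

@dataclass
class AISystem:
    # Prong 1 inputs: safety component of an Annex II product that
    # itself undergoes third-party conformity assessment.
    is_safety_component: bool = False
    product_in_annex_ii: bool = False
    product_needs_third_party_assessment: bool = False
    # Prong 2 input: the area in which the stand-alone system is used.
    area: str = ""

def is_high_risk(system: AISystem) -> bool:
    """Sketch of the two-pronged classification in the Commission proposal."""
    # Prong 1: safety component of a product covered by Annex II CE standards.
    if (system.is_safety_component
            and system.product_in_annex_ii
            and system.product_needs_third_party_assessment):
        return True
    # Prong 2: stand-alone system used in an area listed in Annex III.
    return system.area in ANNEX_III_AREAS
```

The sketch makes the article's point about classification risk tangible: the same system yields a different result depending on the intended purpose fed into the test, not on its technical design.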

The idea that it depends on the intended purpose is not new and is already firmly anchored in product safety law. Nevertheless, there is criticism of the Commission’s approach, because it means that one and the same product can be “only” AI or also high-risk AI, depending on the individual case. The provider (but also the user) bears the risk of correct classification, and since the penalties for a violation are severe – including high fines and sales bans – the concerns are understandable.

 

2) Which qualitative requirements exist for high-risk AI systems?

High-risk AI systems must comply with the requirements set out in Arts. 8-15 of the AI Act. These include:

  • A risk management system must be established, applied, documented and maintained.
  • Training, validation and test data sets must meet certain qualitative requirements.
  • Technical documentation must be compiled.
  • Systems must provide for automatic logging.
  • There are transparency and information requirements.
  • The systems must enable effective oversight by natural persons (human oversight). In the particularly sensitive area of biometric identification and categorisation of natural persons, the requirements go even further.
  • Finally, the systems must achieve an appropriate level of accuracy, robustness and cybersecurity, and it must be ensured that the systems function consistently in these respects throughout their life cycle.

 

3) When can an AI system be placed on the market in the Union?

With the certification approach that the Commission has carried over from the area of functional safety – product testing and CE marking – providers who wish to place a high-risk AI system on the market must subject it to a conformity assessment against uniform EU standards, as a rule in the form of an internal control, and certify the "product" themselves. Control thus lies largely in the hands of the providers.

The core element here is the conformity assessment procedure, for which the provider is basically responsible. The main objective of a conformity assessment procedure is to prove that the system placed on the market complies with the (especially qualitative) requirements of the AI Act. A distinction must be made between two procedures:

  • In principle, the conformity assessment procedure based on internal control is carried out without involving third parties.
  • By contrast, for AI systems for biometric identification and categorisation of natural persons, the provider must involve a notified body.

After successful conformity assessment, the high-risk AI systems are to be registered in an EU database managed by the Commission in order to increase transparency vis-à-vis the public and to strengthen supervision and ex-post monitoring by the competent authorities. Furthermore, the declaration of conformity must be drawn up by the provider. In order to make it clear to the outside world that the requirements are met, the provider must affix the familiar CE mark.

 

IV) Low-risk systems

For low-risk systems, the AI Act provides for certain transparency obligations, which incidentally must also be complied with for high-risk AI systems:

  •  AI systems intended to interact with natural persons must inform them that they are dealing with an AI system, unless this is obvious from the circumstances and context of use.
  • Anyone who uses an emotion recognition system or a system for biometric categorisation shall inform the natural persons affected by it about the operation of the system.
  • Users of an AI system that generates so-called deepfakes, i.e. creates or manipulates image, audio or video content that appreciably resembles real persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful, must disclose that the content was artificially created or manipulated.

In the latter case, exceptions apply if the technology is necessary for the exercise of the freedom of expression or the freedom of the arts and sciences, provided that appropriate safeguards for the rights and freedoms of third parties are in place. In all three cases, exceptions are also provided for systems used for the detection, prevention, investigation and prosecution of criminal offences.

 

V) Systems with minimal risk

Systems with minimal risk can be used largely without restriction (within the limits of other applicable law). The vast majority of AI systems will probably fall into this category.

 

VI) Interaction between the AI Act and the other CE standards

The Commission's declared aim is for the CE standards to run in parallel and complement each other, since products often fall within the scope of several CE standards at once. This approach is fully reflected in the interaction between the AI Act and the Machinery Directive (or, in future, the Machinery Regulation). The requirements of the AI Act address the safety risks posed by AI systems that control safety functions in machinery, whereas certain specific requirements in the Machinery Directive ensure that an AI system is integrated safely into the machine as a whole, so that the safety of the overall machine is not compromised.

In order to allocate responsibilities, Art. 24 of the AI Act provides that if a high-risk AI system relating to products covered by the CE standards listed in Annex II is placed on the market or put into service together with the product and under the name of the product manufacturer, the manufacturer of the product assumes responsibility for the conformity of the AI system and is subject to the same obligations with regard to the AI system as a provider under the AI Act.

Outlook

The AI Act's sector-specific, purpose-oriented approach of creating a uniform legal framework together with the other CE standards is to be welcomed, even though providers now also bear the familiar risk of correctly classifying their AI systems in this area. The Commission's proposal must now be adopted by the European Parliament and the Council in the ordinary legislative procedure. Once adopted, the AI Act will be directly applicable throughout the EU.