First things first: our two cybersecurity consultants have taken a closer look at Regulation (EU) 2024/1689 (the AI Act for short) [1], since we view European legislation through the lens of product compliance.
Based on our initial review of the regulation, we expect the AI Act to impose few restrictions on general day-to-day business, whether AI systems are used in industrial production or in products for end consumers.
But why do we think that, and who does the AI Act affect in the first place?
And why can we hardly wait for February 2025?
But first, an overview of what the AI Act regulates: the classification of AI systems
With the AI Act, the EU is trying to strengthen trust in AI systems while at the same time safeguarding the fundamental rights of EU citizens. The legislation therefore focuses on how AI systems are actually used and follows a risk-based approach: the riskier the area in which an AI system is used, the more strictly the AI Act regulates it.
Unacceptable risk
However, there are also areas of application that the EU considers too risky and therefore prohibits outright. These are the so-called prohibited practices, listed in Article 5; they form the top of the risk pyramid.
The use of AI systems for emotion recognition, social scoring, untargeted scraping of facial images, biometric categorization, or the subliminal influencing or manipulation of human decision-making may be prohibited, depending on the specific context of use.
High risk
The next level of the risk pyramid is AI systems that are classified as high-risk. A large part of the legal text regulates the requirements for these high-risk AI systems. A list of all high-risk AI systems can be found in Annex III.
However, an AI system can be classified as high-risk even if it is not listed in Annex III. This is the case if the criteria in Article 6 are met.
For example, if an AI system is used in a product that falls under one of the listed pieces of Union harmonization legislation, it may also have to be classified as a high-risk AI system. Among these harmonization acts are, for example, the Radio Equipment Directive (2014/53/EU) and the Machinery Directive (2006/42/EC). It follows that machinery as well as radio equipment integrating AI systems may constitute high-risk AI systems. How exactly such a product is to be assessed, however, is one of the questions we are asking ourselves with regard to the AI Act, and one that will hopefully be answered by the practical guidelines to be published in February 2025.
This is because one of the requirements for a high-risk classification is that the product must undergo a third-party conformity assessment. In addition, the AI system must fulfill a safety-relevant task in the product, i.e. be a so-called "safety component". At what point a component counts as a safety component, however, remains unclear.
Therefore, the classification as a high-risk AI system appears to us to be a non-trivial challenge. Without the practical guidelines, we are not yet in a position to accurately assess such cases.
Consider, for example, an AI system in quality assurance, specifically for the predictive maintenance of machines. [2] It is unclear whether such a system must be classified as high-risk, since its malfunction or failure can cause damage to (the company's) property. One can argue that it is a safety component, yet the product itself may not be subject to a third-party conformity assessment. A conclusive assessment is therefore not possible with the current state of knowledge.
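To make this decision logic tangible, here is a minimal sketch in Python. It encodes only the criteria discussed above (Annex III listing, or safety component plus third-party conformity assessment under listed harmonization legislation); all names and fields are our own illustration, not terminology from the regulation.

```python
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    """Illustrative attributes of an AI system; field names are our own, not the Act's."""
    listed_in_annex_iii: bool              # use case appears in Annex III
    is_safety_component: bool              # fulfills a safety-relevant task in the product
    covered_by_harmonization_law: bool     # product falls under listed legislation (e.g. RED, Machinery Directive)
    third_party_assessment_required: bool  # product must undergo third-party conformity assessment

def is_high_risk(ai: AISystemProfile) -> bool:
    # Systems listed in Annex III are high-risk in any case.
    if ai.listed_in_annex_iii:
        return True
    # Otherwise: safety component of a product under listed harmonization
    # legislation AND that product requires a third-party conformity assessment.
    return (ai.is_safety_component
            and ai.covered_by_harmonization_law
            and ai.third_party_assessment_required)

# The predictive-maintenance example from above: arguably a safety component,
# but the machine itself may not require a third-party conformity assessment.
predictive_maintenance = AISystemProfile(
    listed_in_annex_iii=False,
    is_safety_component=True,
    covered_by_harmonization_law=True,
    third_party_assessment_required=False,
)
print(is_high_risk(predictive_maintenance))  # False under this reading
```

The logic itself is trivial; the entire ambiguity we describe lies in how these boolean inputs are to be determined, which is exactly what we hope the practical guidelines will clarify.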
Furthermore, high-risk AI systems must be registered in an EU database before being placed on the market and are the only category of AI systems subject to CE marking.
Transparency risk
In addition to the rules for high-risk AI systems, the AI Act also formulates transparency requirements for "certain AI systems".
Any AI system that interacts with natural persons, a chatbot for example, must inform those persons that they are dealing with an AI system. The transparency requirements can be found in Article 50. The Act also addresses so-called general-purpose AI models (GPAI models), distinguishing between GPAI models with systemic risk and those without; their classification is regulated in Article 51.
Before placing a GPAI model on the market, the provider must notify the Commission within two weeks of the model fulfilling the criteria in Article 51.
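Article 51 does contain one concrete, checkable criterion: a GPAI model is presumed to have systemic risk once the cumulative compute used for its training exceeds 10^25 floating-point operations. A minimal sketch of that threshold check, with a purely hypothetical training run as input:

```python
# Presumption threshold from Article 51: 10^25 FLOPs of cumulative training compute.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the GPAI model is presumed to have systemic risk.
    Note: the Commission may also designate models below the threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# Hypothetical estimate: 5,000 accelerators, 90 days, 4e14 FLOP/s peak, 40% utilization.
estimated_flops = 5_000 * 90 * 86_400 * 4e14 * 0.4  # about 6.2e24
print(presumed_systemic_risk(estimated_flops))       # False, just below the threshold
```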
In addition, the concept of deepfakes is defined, and it is stipulated that image, audio, video or text content generated by AI systems must be marked in a machine-readable format. Deployers of AI systems, however, face no special labeling requirements unless the systems are used to produce deepfakes or texts that inform the public on matters of public interest.
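What a machine-readable marking could look like in practice is left open by the Act; provenance standards such as C2PA are obvious candidates. Purely as an illustration, here is how a generator could embed and read back a simple marker in PNG metadata using the Pillow library; the key names are our own invention, not a standardized scheme:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def mark_as_ai_generated(in_path: str, out_path: str) -> None:
    """Embed an 'AI generated' marker as a PNG text chunk (illustrative scheme only)."""
    image = Image.open(in_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", "example-model-v1")  # hypothetical model name
    image.save(out_path, pnginfo=metadata)

def is_marked_ai_generated(path: str) -> bool:
    """Any tool that parses PNG text chunks can check the marker."""
    return Image.open(path).text.get("ai_generated") == "true"
```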
Copyright law remains largely untouched by the AI Act; it is extended only insofar as Article 53 requires providers of GPAI models to put a copyright compliance strategy in place.
No or minimal risk
Most AI systems in use, however, will fall into the category of minimal-risk AI systems and will therefore only be subject to Article 4, which requires a basic level of AI literacy among personnel dealing with AI systems.
The AI Act thus does not severely restrict the use of AI systems; rather, it attempts to limit the innovation potential of this technology as little as possible. This is also reflected in Chapter VI, which contains measures in support of innovation. Providers of AI models under free and open-source licenses are consequently only affected insofar as they, too, must not implement any prohibited practices with their AI systems.
This raises the following questions for us:
- Are the essential health, safety and EMC requirements of the Radio Equipment Directive (2014/53/EU) affected by the classification? What happens if no third-party conformity assessment is mandatory even where no harmonized standards are listed in the Official Journal?
- With regard to the Machinery Directive (2006/42/EC), we ask ourselves which components are to be regarded as safety components within the meaning of the AI Act. Which of the components used pose a risk to the health and safety of persons or to property in the event of failure or malfunction? And are they classified as high-risk on that basis, even though they do not have to undergo a third-party conformity assessment?
- Staying with the Machinery Directive (2006/42/EC): with regard to third-party conformity assessment, Annex IV of the directive immediately comes to mind: "Categories of machinery to which one of the procedures referred to in Article 12(3) and (4) shall apply". Does the high-risk classification of the AI Act then only apply to products such as saws and lifting platforms?
Please do not hesitate to contact us for further details.
Authors
Anne Barsuhn
Junior Consultant Cybersecurity
Benjamin Kerger (B. Eng.)
Product Compliance Consultant
Definitions and abbreviations
AI systems are software and hardware solutions that use artificial intelligence to act in the physical or digital world.