
16 November 2021

KI-Verordnung / AI Act (dt./eng.) – 6 of 9 Insights

Data governance in the AI Regulation - in conflict with the GDPR?

Author: Detlef Klett, Partner

Three years after the entry into force of the General Data Protection Regulation (GDPR), the EU’s next prestige project is in the starting blocks: the Regulation laying down harmonised rules on artificial intelligence (AI Act), which aims to create a framework for the development and use of artificial intelligence (AI). Like the GDPR, the AI Act sets out requirements for the handling of data. However, while the GDPR focuses on the protection of personal data, the AI Act provides the framework for the handling of data in general (data governance).

Requirements for data processing by AI systems

Whether as a virtual assistant or as an expert system in medicine, the expectations of providers and society are high, especially for forward-looking AI systems that are supposed to learn and reason independently using methods such as deep learning. How successfully these systems actually operate in the real world depends largely on the amount, scope and precision of the data they are trained on. Yet even where these parameters are met as far as possible, a certain degree of unpredictability and partial autonomy is inherent in the behaviour of AI systems. The Commission aims to address this risk, in particular the risk of error, through regulation, focusing on AI systems that pose a high risk to the fundamental rights, health and safety of citizens (so-called high-risk AI systems [1]).
The central regulatory requirement is found in Art. 10 AI Act. Insofar as high-risk AI systems are trained with data, the training, validation and test datasets used must meet the quality criteria set out in paragraphs 2 to 5. For example, appropriate data governance and data management procedures must apply, covering, inter alia, data collection, relevant data preparation processes and a prior assessment of the availability, quantity and suitability of the required datasets (paragraph 2). Likewise, the datasets must be relevant, representative, free of errors and complete (paragraph 3) and, to the extent required for the intended purpose, reflect the characteristics or elements specific to the particular geographical, behavioural or functional context in which the high-risk AI system is intended to be used (paragraph 4). A brief, purely illustrative sketch of what such checks might look like in practice follows at the end of this section.
However, the Commission neither defines these characteristics nor explains its understanding of “data governance”. It therefore remains open which objective standards compliance is to be measured against, for example when datasets qualify as representative or when they reflect the typical characteristics within the meaning of paragraph 4.
This creates legal uncertainty, which can have serious consequences: in the event of a breach of the requirements under Article 10, the new AI Act provides for even higher fines [2] than the GDPR, namely up to EUR 30,000,000 or, in the case of companies, up to 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
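The AI Act does not prescribe how these quality criteria are to be verified. As a purely illustrative sketch, the following Python snippet shows how a provider might document simple completeness and representativeness checks on a training dataset; the column names, the 5% threshold and the pandas-based approach are assumptions chosen for illustration, not requirements of Art. 10.

```python
import pandas as pd

# Illustrative training data; "age_group" and "region" stand in for characteristics
# that may matter for representativeness in the intended context of use (Art. 10(4)).
data = pd.DataFrame({
    "age_group": ["18-30", "31-50", "31-50", "51+", "18-30", None],
    "region":    ["DE", "DE", "FR", "DE", "FR", "DE"],
    "label":     [1, 0, 1, 0, 1, 1],
})

# Completeness (Art. 10(3)): share of missing values per column.
missing_share = data.isna().mean()
print("Missing values per column:")
print(missing_share)

# Representativeness (Art. 10(3) and (4)): distribution of relevant characteristics,
# to be compared against the population in the intended context of use.
print("Age group distribution:")
print(data["age_group"].value_counts(normalize=True, dropna=False))

# A provider might flag datasets that exceed self-defined thresholds;
# the AI Act itself does not set numeric limits such as the 5% used here.
if (missing_share > 0.05).any():
    print("Warning: dataset may not satisfy the completeness criterion.")
```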

Relationship between the AI Act and the GDPR

In the explanatory memorandum to the draft, the Commission clarifies that the GDPR is not affected by the AI Act; both regulations therefore apply side by side. Providers that use personal data to develop their high-risk AI systems will be obliged to comply both with the data handling requirements of the AI Act and with the requirements of the GDPR for the processing of personal data.
Such a “double obligation” exists, for example, when AI systems are trained with personal data. On the one hand, under Art. 10 AI Act the provider must comply with the above-mentioned requirements for the datasets, which must be relevant, representative, free of errors and complete. On the other hand, checking the datasets against these criteria will generally itself constitute processing of personal data within the meaning of the GDPR. It follows that the provider, who thereby also becomes a data controller, will need a legal basis under Article 6 GDPR for the processing, which, as the Commission explains in recital 41, is not contained in the AI Act. Providers will therefore have to rely on other legal bases, such as consent. In addition, the remaining requirements of the GDPR must be complied with.
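To make this double obligation more tangible, the following sketch shows one possible way a provider could record, for each training dataset, both its data governance documentation under Art. 10 AI Act and the legal basis under Art. 6 GDPR. The class, its fields and the example values are hypothetical and not prescribed by either regulation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrainingDatasetRecord:
    """Hypothetical internal record combining AI Act and GDPR documentation."""
    name: str
    collection_process: str                                      # Art. 10(2) AI Act: data collection
    preparation_steps: List[str] = field(default_factory=list)   # Art. 10(2): data preparation
    quality_checks: List[str] = field(default_factory=list)      # Art. 10(3): e.g. completeness
    contains_personal_data: bool = False
    gdpr_legal_basis: Optional[str] = None                       # Art. 6 GDPR, e.g. consent

record = TrainingDatasetRecord(
    name="credit_scoring_training_v1",
    collection_process="export from customer relationship system",
    preparation_steps=["de-duplication", "labelling"],
    quality_checks=["completeness", "representativeness"],
    contains_personal_data=True,
    gdpr_legal_basis="Art. 6(1)(a) GDPR - consent",
)

# A dataset containing personal data without a documented legal basis points to a
# GDPR gap that exists independently of compliance with Art. 10 AI Act.
assert not record.contains_personal_data or record.gdpr_legal_basis, "GDPR legal basis missing"
```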

Data governance in AI regulatory sandboxes [3]

Criticism

The Commission’s basic approach of imposing obligations on providers based on a risk assessment of their AI system is also reflected in the handling of data. The additional requirements imposed on data processing are to be welcomed.

Conclusion and outlook

[1] Reference: high-risk AI systems
[2] Reference: fines
[3] Reference: AI regulatory sandboxes
