Overview

On July 12, 2024, the EU AI Act was published in the Official Journal of the European Union, establishing harmonized rules on artificial intelligence. The Act enters into force 20 days after publication, on August 1, 2024; its obligations will then apply in stages to providers, deployers, and other operators.

Purpose and Scope

The EU AI Act aims to create an ethical and legal framework grounded in the values of the Treaty on European Union (TEU) and the EU Charter of Fundamental Rights. It addresses the challenges posed by the growing use of AI, applying harmonized rules across sectors without affecting existing Union laws on data protection, consumer protection, product safety, and employment. The Act regulates AI systems placed on or used in the EU market, complementing rather than replacing that existing body of law.

Defining AI

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions, with some degree of autonomy, to achieve specific goals (see the Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, "Artificial Intelligence for Europe", COM/2018/237 final).

AI can be software-based, like facial recognition systems, or embedded in hardware, like drones and driverless cars.

Definitions in the EU AI Act

  • AI System: A machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.

  • Provider: An entity that develops an AI system or general-purpose AI model, or has one developed, and places it on the market or puts it into service under its own name or trademark.

  • Deployer: An entity using an AI system under its authority, except for personal non-professional use.

  • Operator: A broad category encompassing providers, authorized representatives, deployers, importers, and distributors.

Classification of AI According to Risk

The EU AI Act focuses on regulating high-risk AI systems, which include:

  • AI systems that are products, or safety components of products, covered by specific EU harmonisation legislation (e.g., civil aviation, vehicle safety).

  • AI systems listed in Annex III (e.g., biometric identification, AI in critical infrastructure, education, employment, credit scoring, law enforcement).

Guidelines for practical implementation, including examples of high-risk and non-high-risk use cases, will be provided within 18 months of the Act’s entry into force.

Prohibited AI Practices

The Act prohibits AI practices that:

  • Use subliminal or purposefully manipulative techniques that materially distort behaviour, causing significant harm.

  • Exploit vulnerabilities linked to age, disability, or social or economic situation, leading to significant harm.

  • Categorize individuals on the basis of biometric data to deduce sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation.

  • Implement social scoring systems.

  • Use real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions.

  • Engage in predictive policing based solely on profiling.

  • Create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

  • Infer emotions in workplaces or educational institutions, except for medical or safety reasons.

Exception from High-Risk Qualification

AI systems listed as high-risk in Annex III may be exempt if they do not pose a significant risk to health, safety, or fundamental rights, for example because they perform only a narrow procedural task, improve the result of a previously completed human activity, or detect decision-making patterns without replacing or influencing human assessment. Providers relying on this exception must document their assessment and register the AI system in the EU database before market placement or service initiation.

This exception is crucial, as many AI system providers will seek to avoid the regulatory burden of high-risk qualification. However, providers must substantiate their claims through proper documentation.

Extensive Obligations for High-Risk AI Systems

Requirements for Providers

Providers of high-risk AI systems must adhere to stringent requirements to ensure their AI systems are trustworthy, transparent, and accountable. Before these systems can be marketed, they must undergo several checks, including:

  • Risk Assessment and Mitigation: Implement adequate systems to identify and minimize risks.

  • High-Quality Datasets: Ensure the datasets used are of high quality to minimize risks and prevent discriminatory outcomes.

  • Activity Logging: Maintain logs to ensure traceability of results.

  • Detailed Documentation: Provide comprehensive information on the system and its purpose for regulatory compliance assessment.

  • Clear Information to Deployers: Offer clear and sufficient information to those deploying the AI system.

  • Human Oversight: Implement appropriate human oversight measures to minimize risks.

  • Robustness, Security, and Accuracy: Ensure the system is robust, secure, and accurate.

Providers must also carry out a conformity assessment before placing these systems on the market or putting them into service and register them in a publicly accessible EU database.

Obligations for Deployers (Users) of High-Risk AI Systems

Deployers (referred to as "users" in earlier drafts of the Act) must also follow a set of obligations when using high-risk AI systems:

  • Adherence to Instructions: Use the AI system according to the provider’s instructions. If damage occurs, providers may invoke a deployer’s failure to follow these instructions in their defence.

  • Human Oversight and Monitoring: Assign human oversight to competent persons and monitor both the input data and the operation of the system.

  • Log Maintenance: Keep automated logs for at least six months.

Right to Explanation of Individual Decision-Making

Affected individuals have the right to clear and meaningful explanations from deployers about the role of the AI system in decision-making and the main elements of the decision, especially if it significantly impacts their health, safety, or fundamental rights.

Though the GDPR already provides this right, the new EU AI Act provision raises concerns:

  • Plausible but Incorrect Explanations: Companies might provide plausible but incorrect explanations, potentially justifying unfair outcomes.

  • Incomplete Explanations: Incomplete explanations may lead to arbitrary decisions, which the right to explanation aims to prevent.

  • Complexity and Information Overload: Detailed explanations might be too complex, causing information overload and making it harder to identify and challenge incorrect predictions.

Thus, there is a risk that the right to explanation could increase automation bias instead of mitigating it.

Broad Right to Complain

Any natural person or legal entity can submit complaints regarding infringements of the EU AI Act to the relevant market surveillance authority. These complaints will be considered for market surveillance activities and handled according to established procedures.

This provision grants a broad scope for complaints, differing from instruments such as the GDPR, which requires a direct relation to the processing of the complainant’s personal data.

Key Dates for Organizations

February 2, 2025 – Ban on Certain AI Systems

  • AI systems with the highest risk under the AI Act, such as those using subliminal techniques to distort behaviour, exploiting individual vulnerabilities, or scraping facial images in an untargeted manner, will be prohibited.

  • Providers and deployers must ensure that their staff and other persons operating AI systems on their behalf have a sufficient level of AI literacy.

May 2, 2025 – Publication of Codes of Practice

  • The EU AI Office will publish codes of practice to help providers of general-purpose AI models demonstrate compliance with the AI Act.

August 2, 2025 – Obligations for General-Purpose AI Models

  • Providers of general-purpose AI models must comply with the EU AI Act’s obligations.

  • Providers who have already placed their AI models on the market have until August 2, 2027, to comply.

  • Penalties for non-compliance will be enforced, and EU member states must have rules and enforcement measures in place and appoint national competent authorities by this date.

February 2, 2026 – Guidance for High-Risk AI Systems

  • The European Commission will issue guidance on implementing requirements for high-risk AI systems, including practical examples and use cases.

August 2, 2026 – Compliance for a Subset of High-Risk AI Systems

  • Providers, importers, distributors, and other relevant parties must comply with AI Act obligations for the high-risk AI systems listed in Annex III (e.g., biometric identification, emotion recognition, education, employment).

  • Limited-risk AI systems that interact with humans or generate synthetic content must also comply with the relevant requirements.

August 2, 2027 – Compliance for Remaining High-Risk AI Systems

  • Providers of high-risk AI systems, particularly those used as safety components or requiring third-party conformity assessments (e.g., radio equipment, personal protective equipment, agricultural vehicles), must comply with the AI Act.

  • Providers of general-purpose AI models who placed their models on the market before August 2, 2025, must comply with the AI Act by this date.

August 2, 2029 – First Review of the EU AI Act

  • The European Commission will review the EU AI Act and will continue to do so every four years. This includes reviewing the lists of prohibited and high-risk AI systems.

December 31, 2030 – Final Compliance Deadline for Large-Scale IT AI Systems

  • Operators of AI systems that are components of large-scale IT systems established by EU law (such as the Schengen Information System), placed on the market or put into service before August 2, 2027, must comply with the AI Act by this date.

PETERKA & PARTNERS Romania remains at your disposal to provide more information and/or related legal assistance connected to this topic.

                                                                                                   ***

No information contained in this article should be considered or interpreted in any manner as legal advice and/or the provision of legal services. This article has been prepared for the purposes of general information only. PETERKA & PARTNERS does not accept any responsibility for any omission and/or action undertaken by you and/or by any third party on the basis of the information contained herein.