Regulation (EU) 2024/1689 · Adopted 13 June 2024 · OJ L, 12 July 2024 · 113 Articles · 13 Annexes

The world's first comprehensive law on artificial intelligence.

A risk-based framework that reshapes how AI is built, sold, and used across the European Union, and by extension anywhere AI touches European citizens.

4 tiers
Risk classification
8 domains
High-risk use cases
€35M / 7%
Maximum administrative fine
10²⁵
FLOPs systemic-risk threshold
§ 01 · The Architecture

Risk decides everything.

The Act doesn't regulate "AI" as a monolith. It sorts each system into one of four tiers, and the tier determines whether it's banned outright, heavily controlled, lightly disclosed, or left alone.

Unacceptable Risk
Article 5 · Banned outright
Banned
High Risk
Article 6 + Annex III
Controlled
Limited Risk
Article 50 · Transparency
Disclose
Minimal Risk
No mandatory obligations
Free
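The tier hierarchy above is, at heart, an ordered decision: prohibitions first, then high-risk uses, then transparency-only cases, then everything else. A minimal sketch in Python — the boolean inputs are hypothetical simplifications; actual classification turns on the wording of Articles 5, 6, 50 and Annex III:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned (Article 5)"
    HIGH = "controlled (Article 6 + Annex III)"
    LIMITED = "disclosure only (Article 50)"
    MINIMAL = "no mandatory obligations"

def classify(prohibited_practice: bool,
             annex_iii_use: bool,
             interacts_with_humans: bool) -> RiskTier:
    """Toy decision order mirroring the Act's tier hierarchy:
    check prohibitions first, then high-risk uses, then
    transparency-only cases; everything else is minimal risk."""
    if prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if annex_iii_use:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A CV-screening tool used in hiring is an Annex III use case:
print(classify(False, True, True).value)  # controlled (Article 6 + Annex III)
```

Note the ordering matters: a system can sit in several categories on paper, but the strictest applicable tier wins.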
§ 02 · Article 5

The eight red lines.

Practices the EU considers fundamentally incompatible with its values. Placing such a system on the market, putting it into service, or using it triggers the maximum penalty: up to €35M or 7% of global turnover.

5(1)(a)

Subliminal manipulation

Systems using subliminal, purposefully manipulative or deceptive techniques to materially distort behaviour and cause significant harm.

Recital 29 · Behavioural distortion
5(1)(b)

Exploiting vulnerabilities

Systems that exploit vulnerabilities of a person or group due to age, disability, or specific social or economic situation, causing significant harm.

Recital 29 · Protection of the vulnerable
5(1)(c)

Social scoring

Evaluation or classification of natural persons over time based on social behaviour or personality traits, leading to unjustified or disproportionate detrimental treatment.

Recital 31 · Public + private-sector ban
5(1)(d)

Predictive policing on individuals

Risk-assessment of natural persons to predict criminal offences based solely on profiling, unless supporting human assessment grounded in objective, verifiable facts.

Recital 42 · Presumption of innocence
5(1)(e)

Untargeted face-image scraping

Systems that create or expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Recital 43 · Mass-surveillance prevention
5(1)(f)

Emotion recognition at work and school

Inferring emotions of natural persons in workplaces and educational institutions, except for medical or safety reasons.

Recital 44 · Workplace + academic dignity
5(1)(g)

Biometric categorisation by sensitive traits

Categorising individuals via biometric data to deduce or infer race, political opinions, trade-union membership, religious beliefs, sex life, or sexual orientation.

Recital 30 · Anti-discrimination core
5(1)(h)

Real-time remote biometric ID in public

'Real-time' RBI in publicly accessible spaces for law enforcement, with narrow, judicially authorised exceptions.

Recitals 32–37 · Strict procedural safeguards
§ 03 · Annex III

The high-risk map.

Most regulated AI systems live here. Annex III lists eight domains; if your system is intended for any of these uses, you're high-risk by default, with conformity assessments, human oversight, data governance, technical documentation and registration in the EU database all required.

01
Biometrics
02
Critical Infrastructure
03
Education & Training
04
Employment & HR
05
Essential Services
06
Law Enforcement
07
Migration & Borders
08
Justice & Democracy
§ 04 · Chapter V

General-purpose AI & the systemic-risk threshold.

The Act introduces a parallel regime for general-purpose AI (GPAI) models: foundation models adaptable across many tasks. A subset, those posing "systemic risk", faces the heaviest obligations the Act imposes on any technology.

10²⁵
Floating-point operations · presumption threshold

Article 51(2): A general-purpose AI model is presumed to have "high impact capabilities", and is therefore classified as posing systemic risk, when the cumulative compute used for training exceeds 10²⁵ FLOPs.

Designated models must notify the Commission, conduct model evaluations and adversarial testing, assess and mitigate systemic risks, report serious incidents, and maintain cybersecurity protections (Article 55). The Commission may also designate models below the threshold based on qualified scientific alerts.
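To make the threshold concrete, here is a back-of-the-envelope sketch. The Act counts cumulative training compute in FLOPs; the `6 × parameters × tokens` estimate used below is a common community approximation for dense transformers, not something the Act prescribes, and the model sizes are illustrative:

```python
THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption threshold

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative-compute estimate for a dense transformer,
    using the common ~6 * parameters * tokens approximation."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(flops: float) -> bool:
    """Article 51(2): presumed high-impact capabilities when
    cumulative training compute exceeds 10^25 FLOPs."""
    return flops > THRESHOLD_FLOPS

# e.g. a hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")                  # 6.30e+24
print(presumed_systemic_risk(flops))   # False: below the 10^25 line
```

As the example shows, today's largest open models hover near the line, which is precisely why the Commission retains power to designate models below it.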

All GPAI models
Article 53
Technical documentation, copyright policy, training-data summary, downstream-provider information.
Systemic-risk GPAI
Article 55
Model evaluations, adversarial testing, risk assessment + mitigation, incident reporting, cybersecurity.
Codes of practice
Article 56
Industry-developed codes facilitated by the AI Office to demonstrate compliance until harmonised standards exist.
Fines
Article 101
Up to €15M or 3% of global annual turnover for GPAI providers. A separate regime from operator fines.
§ 05 · The Value Chain

Who must do what.

The Act defines distinct roles along the AI supply chain. Which role you occupy, sometimes more than one, determines the obligations that apply. A deployer who substantially modifies a system, for example, becomes a provider.

Provider

Develops the AI system or has it developed and places it on the market under its own name or trademark.
  • Risk-management system (Art. 9)
  • Data & data-governance (Art. 10)
  • Technical documentation (Art. 11)
  • Record-keeping & logs (Art. 12)
  • Transparency to deployers (Art. 13)
  • Human oversight design (Art. 14)
  • Conformity assessment + CE mark
  • EU database registration

Deployer

Uses an AI system under its authority, except for personal non-professional activity.
  • Use according to instructions (Art. 26)
  • Human oversight by competent staff
  • Input-data relevance & representativeness
  • Monitor & report serious incidents
  • Inform workers before deployment
  • Fundamental-rights impact assessment (Art. 27)
  • Inform affected persons of high-risk decisions

Importer

Established in the Union, places on the EU market a system bearing the name of an entity outside the Union.
  • Verify CE marking & documentation (Art. 23)
  • Confirm conformity assessment performed
  • Confirm authorised representative designated
  • Indicate name & contact on the system
  • Storage & transport must not impair conformity

Distributor

Any person in the supply chain, other than provider or importer, that makes an AI system available on the EU market.
  • Verify CE mark & documentation (Art. 24)
  • Storage conditions preserve conformity
  • Corrective action when non-compliance suspected
  • Cooperate with national authorities
⚡ Role-shift rule · Article 25
A distributor, importer, deployer or other third party becomes a provider, and inherits all provider obligations, if it puts its name on a high-risk system, substantially modifies it, or modifies its intended purpose so it becomes high-risk.
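The role-shift rule is a disjunction: any one trigger is enough. A minimal sketch, with hypothetical boolean inputs standing in for the legal tests of Article 25:

```python
def becomes_provider(puts_own_name_on: bool,
                     substantial_modification: bool,
                     repurposed_to_high_risk: bool) -> bool:
    """Article 25 role-shift sketch: a distributor, importer, deployer
    or other third party inherits the full set of provider obligations
    if ANY one of the three triggers applies."""
    return (puts_own_name_on
            or substantial_modification
            or repurposed_to_high_risk)

# A deployer that fine-tunes and rebrands a high-risk system:
print(becomes_provider(True, True, False))  # True
```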
§ 06 · Article 113

The phased switch-on.

The Act entered into force on 1 August 2024, but its obligations switch on in stages, giving industry, member states, and the Commission's new AI Office time to build the enforcement scaffolding.

01 AUG 2024
Entry into force
20 days after OJ publication. The legal clock starts.
02 FEB 2025
Prohibitions live
Chapters I & II apply. The eight banned practices are now enforceable. AI-literacy obligation kicks in.
02 AUG 2025
GPAI & governance
GPAI obligations, notifying authorities, AI Board, penalty framework, confidentiality.
02 AUG 2026
General application
Most provisions apply, including obligations for Annex III high-risk systems.
02 AUG 2027
Article 6(1) high-risk
High-risk systems that are safety components of regulated products under Annex I.
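The staged timeline above is easy to get wrong in compliance planning, so it helps to treat the milestones as data. A simplified sketch (labels are condensed summaries, not the Article 113 wording):

```python
from datetime import date

# Key application dates under Article 113 (simplified labels):
MILESTONES = [
    (date(2024, 8, 1), "entry into force"),
    (date(2025, 2, 2), "prohibitions (Article 5) + AI literacy"),
    (date(2025, 8, 2), "GPAI obligations + governance + penalties"),
    (date(2026, 8, 2), "general application incl. Annex III high-risk"),
    (date(2027, 8, 2), "Article 6(1) high-risk (Annex I safety components)"),
]

def in_force(on: date) -> list[str]:
    """Milestones whose obligations already apply on the given date."""
    return [label for d, label in MILESTONES if d <= on]

print(in_force(date(2025, 6, 1)))
# ['entry into force', 'prohibitions (Article 5) + AI literacy']
```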
§ 07 · Article 99

Three tiers of fine.

Penalties are calibrated to the severity of the breach. For undertakings, the higher of an absolute amount or a percentage of worldwide annual turnover applies, making the AI Act, like the GDPR, materially significant for global businesses.

Tier 01 · Most severe
€35M or 7% of worldwide annual turnover
Breach of prohibited-AI-practices rules (Article 5)
Whichever is higher applies. Reserved for the most fundamental violations: deployment of any of the eight banned practices.
Tier 02 · Substantial
€15M or 3% of worldwide annual turnover
Breach of operator or notified-body obligations
Includes provider obligations (Art. 16), authorised representatives, importers, distributors, deployers (Art. 26), notified bodies, and Article 50 transparency.
Tier 03 · Informational
€7.5M or 1% of worldwide annual turnover
Supplying incorrect, incomplete, or misleading information
To notified bodies or national competent authorities responding to a request.
◇ SME & start-up carve-out · Article 99(6)
For SMEs and start-ups, the fine is the lower of the absolute amount or the percentage, the inverse of the rule for large undertakings, which protects smaller players from disproportionate exposure.
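The higher-of / lower-of logic above fits in a few lines. A sketch using the Article 99 caps, with an illustrative turnover figure:

```python
def max_fine(tier_cap_eur: float, pct: float,
             turnover_eur: float, is_sme: bool = False) -> float:
    """Article 99 fine cap: large undertakings face the HIGHER of the
    absolute cap or the turnover percentage; Article 99(6) inverts
    this to the LOWER of the two for SMEs and start-ups."""
    pct_amount = pct * turnover_eur
    if is_sme:
        return min(tier_cap_eur, pct_amount)
    return max(tier_cap_eur, pct_amount)

# Tier 1 breach (Article 5) at a hypothetical €2bn global turnover:
print(max_fine(35e6, 0.07, 2e9))               # 140000000.0 (7% wins)
print(max_fine(35e6, 0.07, 2e9, is_sme=True))  # 35000000.0
```

The crossover sits where 7% of turnover equals €35M, i.e. at €500M turnover; above that, the percentage dominates for large undertakings.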
§ 08 · Self-Assessment

Where does your system land?

A guided walk through the Act's logic. Answer up to five questions to identify the tier and surface the next steps. This is a heuristic, not legal advice. The wording of Articles 5, 6, and Annex III governs the actual classification.