A risk-based framework that reshapes how AI is built, sold, and used across the European Union, and by extension anywhere AI touches European citizens.
The Act doesn't regulate "AI" as a monolith. It sorts each system into one of four tiers, and the tier determines whether it's banned outright, heavily controlled, lightly disclosed, or left alone.
Practices the EU considers fundamentally incompatible with its values. Placing such a system on the market, putting it into service, or using it triggers the maximum penalty: up to €35M or 7% of total worldwide annual turnover, whichever is higher.
Systems using subliminal, purposefully manipulative or deceptive techniques to materially distort behaviour and cause significant harm.
Systems that exploit vulnerabilities of a person or group due to age, disability, or specific social or economic situation, causing significant harm.
Evaluation or classification of natural persons over time based on social behaviour or personality traits, leading to unjustified or disproportionate detrimental treatment.
Risk-assessment of natural persons to predict criminal offences based solely on profiling, unless the system supports a human assessment grounded in objective, verifiable facts directly linked to criminal activity.
Systems that create or expand facial-recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Inferring emotions of natural persons in workplaces and educational institutions, except for medical or safety reasons.
Categorising individuals via biometric data to deduce or infer race, political opinions, trade-union membership, religious beliefs, sex life, or sexual orientation.
'Real-time' remote biometric identification (RBI) in publicly accessible spaces for law enforcement, with narrow, judicially authorised exceptions.
Most regulated AI systems live here. Annex III lists eight domains; if your system is intended for any of these uses, you're high-risk by default, with conformity assessments, human oversight, data governance, technical documentation and registration in the EU database all required.
The Act introduces a parallel regime for general-purpose AI (GPAI) models: foundation models adaptable across many tasks. A subset, those posing "systemic risk", face the heaviest obligations the Act imposes on any technology.
Article 51(2): A general-purpose AI model is presumed to have "high impact capabilities", and is therefore classified as posing systemic risk, when the cumulative compute used for training exceeds 10²⁵ FLOPs.
Designated models must notify the Commission, conduct model evaluations and adversarial testing, assess and mitigate systemic risks, report serious incidents, and maintain cybersecurity protections (Article 55). The Commission may also designate models below the threshold based on qualified scientific alerts.
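The Article 51(2) presumption is a simple compute threshold, which makes it easy to sketch. The check below uses the common 6ND rule of thumb (training compute ≈ 6 × parameters × tokens for dense transformer training) to estimate cumulative FLOPs; that heuristic is an assumption of this sketch, not part of the Act, and the function names are illustrative.

```python
# Article 51(2): a GPAI model is presumed to have high-impact capabilities
# (and hence systemic risk) when cumulative training compute exceeds 1e25 FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough cumulative training compute via the 6ND heuristic.

    This approximation (compute ≈ 6 * N parameters * D tokens) is a
    widely used rule of thumb, not a figure taken from the Act itself.
    """
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate crosses the Article 51(2) presumption threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens lands
# at roughly 6.3e24 FLOPs, below the presumption threshold.
below = presumed_systemic_risk(70e9, 15e12)
```

Note that crossing the threshold only creates a presumption; as the Act provides, the Commission can also designate models below it on the basis of qualified scientific alerts.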
The Act defines distinct roles along the AI supply chain. Knowing which role you occupy, sometimes more than one, determines the obligations that apply. A deployer who substantially modifies a system, for example, becomes a provider.
The Act entered into force on 1 August 2024, but its obligations switch on in stages: the prohibitions apply from 2 February 2025, the general-purpose AI rules from 2 August 2025, and most remaining obligations from 2 August 2026, giving industry, member states, and the Commission's new AI Office time to build the enforcement scaffolding.
Penalties are calibrated to the severity of the breach. For undertakings, the higher of an absolute amount or a percentage of worldwide annual turnover applies, making the AI Act, like the GDPR, materially significant for global businesses.
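The "higher of" mechanic can be shown in a few lines. This is a hedged sketch of the Article 99 fine ceilings for undertakings; the tier figures are those stated in the Act, but the dictionary keys and function names are illustrative.

```python
# Article 99 fine ceilings: (fixed amount in EUR, share of worldwide turnover).
# The applicable maximum is whichever of the two is HIGHER.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Article 5 violations
    "other_obligation": (15_000_000, 0.03),       # most other breaches
    "incorrect_information": (7_500_000, 0.01),   # misleading info to authorities
}

def max_fine(breach: str, worldwide_turnover_eur: float) -> float:
    """Ceiling for a given breach category: higher of fixed amount and turnover share."""
    fixed, pct = FINE_TIERS[breach]
    return max(fixed, pct * worldwide_turnover_eur)

# A firm with EUR 2bn turnover facing a prohibited-practice breach:
# 7% of turnover (EUR 140M) exceeds the EUR 35M floor, so 140M applies.
ceiling = max_fine("prohibited_practice", 2_000_000_000)
```

For smaller undertakings the fixed amount will often dominate, which is exactly why the Act expresses each ceiling as the higher of the two figures.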
A guided walk through the Act's logic. Answer up to five questions to identify the tier and surface the next steps. This is a heuristic, not legal advice. The wording of Articles 5, 6, and Annex III governs the actual classification.
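The walkthrough's logic can be sketched as a short decision cascade. This is a drastically simplified, illustrative sketch, not legal advice: the real classification turns on the wording of Articles 5 and 6 and Annex III, and the question names and tier labels below are assumptions of this sketch.

```python
def classify_tier(
    uses_prohibited_practice: bool,
    annex_iii_use_case: bool,
    safety_component_of_regulated_product: bool,
    interacts_with_humans_or_generates_content: bool,
) -> str:
    """Toy tier classifier mirroring the Act's cascade: check the bans first,
    then the high-risk triggers, then the transparency tier, else minimal risk."""
    if uses_prohibited_practice:
        return "unacceptable risk (banned under Article 5)"
    if annex_iii_use_case or safety_component_of_regulated_product:
        return "high risk (Article 6 obligations apply)"
    if interacts_with_humans_or_generates_content:
        return "limited risk (transparency duties)"
    return "minimal risk (no specific obligations)"

# Example: a CV-screening tool falls under the Annex III employment domain.
tier = classify_tier(False, True, False, True)
```

The ordering matters: a system caught by Article 5 is banned regardless of how useful or well-documented it is, so the prohibition check always comes first.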