Reference · EU · Regulation · Last reviewed April 23, 2026

EU AI Act Compliance Reference

The EU's comprehensive law governing AI systems. Risk-based and extraterritorial, with obligations phasing in from 2025 through 2027.


Free tool

Want to know where your AI systems stand against the Act? Take the free 7-minute readiness checklist.

Open the EU AI Act checklist →

Who it applies to

You are in scope if any of these are true:

  • You place an AI system on the EU market
  • You deploy an AI system in the EU
  • The output of your AI system is used in the EU, even if you are elsewhere

"We don't have EU customers" is not a defense if third parties deploy your AI in the EU.

Risk tiers

Prohibited (unacceptable risk)

Banned outright from February 2, 2025. Includes social scoring by governments, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), emotion recognition in workplaces and schools, untargeted scraping of facial images, and manipulative techniques that cause harm.

High risk

Heavily regulated but permitted. Covers AI in hiring, credit scoring, education, critical infrastructure, law enforcement, and AI embedded in regulated products like medical devices. Requires risk management, technical documentation, human oversight, conformity assessment, and post-market monitoring.

Limited risk

Transparency obligations only. Chatbots, deepfakes, AI-generated content, and emotion recognition systems must disclose that AI is involved: users must be told they are interacting with AI, and synthetic content must be labeled as such.

Minimal risk

No obligations. Spam filters, recommendation engines, game AI. Voluntary codes of conduct encouraged.
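The four tiers above can be sketched as a simple lookup. This is a hypothetical triage helper, not a legal classifier: the tier names and example use cases paraphrase this document, and real classification requires legal review of Art. 5 and Annex III.

```python
# Illustrative mapping of example use cases to the Act's four risk tiers.
# Examples paraphrase this reference; real scoping needs legal review.
RISK_TIERS = {
    "prohibited": {"social scoring", "untargeted facial scraping"},
    "high": {"hiring", "credit scoring", "education", "critical infrastructure"},
    "limited": {"chatbot", "deepfake", "ai-generated content"},
    "minimal": {"spam filter", "recommendation engine", "game ai"},
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known example use case."""
    needle = use_case.lower()
    for tier, examples in RISK_TIERS.items():
        if needle in examples:
            return tier
    return "unclassified"  # unknown cases need human/legal review

print(classify("hiring"))   # high
print(classify("chatbot"))  # limited
```

Note that the same product can sit in different tiers depending on use: a chatbot is limited-risk, but a chatbot that screens job applicants is high-risk.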

Enforcement timeline

| Date | What takes effect | Status |
| --- | --- | --- |
| Aug 1, 2024 | Regulation enters into force. | Enforced |
| Feb 2, 2025 | Prohibited practices and AI literacy obligations become enforceable. | Enforced |
| Aug 2, 2025 | General-purpose AI (GPAI) obligations, governance rules, and penalties. | Enforced |
| Aug 2, 2026 | Most remaining obligations, including high-risk AI under Annex III. | Upcoming |
| Aug 2, 2027 | High-risk AI embedded in regulated products (medical devices, machinery, toys). | Upcoming |
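The phase-in schedule above is easy to encode. A minimal sketch, using the dates from the table (the milestone labels are shortened paraphrases):

```python
from datetime import date

# Phase-in milestones from the enforcement timeline above.
MILESTONES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 2), "Prohibited practices and AI literacy obligations"),
    (date(2025, 8, 2), "GPAI obligations, governance, and penalties"),
    (date(2026, 8, 2), "Most remaining obligations (Annex III high-risk)"),
    (date(2027, 8, 2), "High-risk AI embedded in regulated products"),
]

def enforceable(as_of: date) -> list[str]:
    """List the milestones already enforceable on a given date."""
    return [label for d, label in MILESTONES if d <= as_of]

print(len(enforceable(date(2026, 1, 1))))  # 3 milestones enforced so far
```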

Penalties

| Tier | Maximum penalty | Applies to |
| --- | --- | --- |
| Prohibited practices | €35M or 7% of worldwide annual turnover, whichever is higher | Using banned AI practices (Art. 5). |
| Most other breaches | €15M or 3% of worldwide annual turnover, whichever is higher | High-risk obligations, GPAI provider duties, deployer duties. |
| Incorrect information | €7.5M or 1% of worldwide annual turnover, whichever is higher | Supplying incorrect, incomplete, or misleading information to authorities or notified bodies. |
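The caps work as the higher of a fixed amount or a share of worldwide annual turnover, so for large companies the percentage dominates. A small sketch of that arithmetic (tier keys are illustrative names, not terms from the Act):

```python
# Fine caps: (fixed amount in EUR, share of worldwide annual turnover).
# The applicable maximum is whichever of the two is HIGHER.
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Art. 5 violations
    "most_other_breaches": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the maximum possible fine in EUR for a breach tier."""
    fixed_cap, share = TIERS[tier]
    return max(fixed_cap, share * annual_turnover_eur)

# For a company with €2B turnover, 7% (€140M) exceeds the €35M floor.
print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0
```

For a company with €100M turnover, every percentage figure falls below the fixed amount, so the fixed amounts are the binding caps.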

What people get wrong

We're a US company, so it doesn't apply.

The Act catches you if your AI's output is used in the EU, wherever you're based. OpenAI and Anthropic are in scope.

My chatbot is high-risk.

Most chatbots are limited-risk (transparency only). High-risk triggers are specific: hiring decisions, credit scoring, access to essential services, etc.

We'll deal with it in 2026.

A real ISO 42001 implementation takes 4–8 months. Starting in 2026 means risking the August 2, 2026 deadline.

Related frameworks

| Framework | Relationship | Practical impact |
| --- | --- | --- |
| ISO 42001 | Operational layer | The AI management system standard. Implementing it maps directly to many Act obligations. |
| NIST AI RMF | Complementary | Voluntary US framework. Useful reference; does not grant a presumption of conformity under the Act. |
| GDPR | Often co-applies | Training data is often personal data. Both regimes apply when AI processes personal data. |
| Product safety | Stacked | AI in regulated products (medical devices, machinery) must satisfy both the sector law and the AI Act. |
