At a glance
Who it applies to
You are in scope if any of these are true:
- You place an AI system on the EU market
- You deploy an AI system in the EU
- The output of your AI system is used in the EU, even if you are elsewhere
"We don't have EU customers" is not a defense if third parties deploy your AI in the EU.
Risk tiers
Prohibited (unacceptable risk)
Banned outright since February 2, 2025. Includes social scoring, real-time remote biometric identification in publicly accessible spaces, emotion recognition in workplaces and schools, untargeted scraping of facial images, and manipulative techniques that cause harm.
High risk
Heavily regulated but permitted. Covers AI in hiring, credit scoring, education, critical infrastructure, law enforcement, and AI embedded in regulated products like medical devices. Requires risk management, technical documentation, human oversight, conformity assessment, and post-market monitoring.
Limited risk
Transparency only. Chatbots must disclose that users are interacting with AI; deepfakes and other AI-generated content must be labeled as such; emotion recognition systems (where not prohibited) must inform the people exposed to them.
Minimal risk
No obligations. Spam filters, recommendation engines, game AI. Voluntary codes of conduct encouraged.
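The four tiers reduce to a simple obligations lookup. The sketch below is illustrative only: the tier names and summaries paraphrase this article, and real classification requires analysis against Annex III, not a dictionary lookup.

```python
# Hypothetical tier → headline-obligation map, paraphrasing the four tiers above.
RISK_TIERS = {
    "prohibited": "banned outright",
    "high": "risk management, documentation, human oversight, "
            "conformity assessment, post-market monitoring",
    "limited": "transparency disclosures only",
    "minimal": "no obligations (voluntary codes encouraged)",
}

def obligations(tier: str) -> str:
    """Look up the headline obligation for a risk tier (case-insensitive)."""
    return RISK_TIERS[tier.lower()]
```

The point of the structure: obligations attach to the tier, not to the technology, which is why the same chatbot component can land in different tiers depending on what decisions it feeds.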
Enforcement timeline
| Date | What enforces | Status |
|---|---|---|
| Aug 1, 2024 | Regulation enters into force. | Enforced |
| Feb 2, 2025 | Prohibited practices and AI literacy obligations enforceable. | Enforced |
| Aug 2, 2025 | General-purpose AI (GPAI) obligations, governance, and penalties. | Enforced |
| Aug 2, 2026 | Most remaining obligations, including high-risk AI under Annex III. | Upcoming |
| Aug 2, 2027 | High-risk AI embedded in regulated products (medical devices, machinery, toys). | Upcoming |
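The staggered timeline lends itself to a simple date check. A minimal sketch, with the dates copied from the table above (the function name and milestone labels are illustrative):

```python
from datetime import date

# Key EU AI Act milestones, taken from the enforcement timeline above.
MILESTONES = {
    date(2024, 8, 1): "Regulation enters into force",
    date(2025, 2, 2): "Prohibited practices and AI literacy obligations",
    date(2025, 8, 2): "GPAI obligations, governance, and penalties",
    date(2026, 8, 2): "Most remaining obligations, incl. Annex III high-risk",
    date(2027, 8, 2): "High-risk AI embedded in regulated products",
}

def enforceable_as_of(today: date) -> list[str]:
    """Return the milestones whose enforcement date has already passed."""
    return [label for d, label in sorted(MILESTONES.items()) if d <= today]
```

Usage: `enforceable_as_of(date(2025, 9, 1))` returns the first three milestones, matching the three "Enforced" rows in the table.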
Penalties
| Tier | Maximum penalty (whichever is higher) | Applies to |
|---|---|---|
| Prohibited practices | €35M or 7% of turnover | Using banned AI practices (Art. 5). |
| Most other breaches | €15M or 3% of turnover | High-risk obligations, GPAI provider duties, deployer duties. |
| Incorrect information | €7.5M or 1% of turnover | Supplying incorrect, incomplete, or misleading info to authorities or notified bodies. |
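Each fine cap is the higher of the fixed amount and the percentage of worldwide annual turnover, so the ceiling scales with company size. A minimal sketch of that arithmetic (the function name and the €2B turnover figure are illustrative):

```python
def penalty_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Maximum fine: the higher of a fixed amount or a percentage of
    total worldwide annual turnover."""
    return max(fixed_eur, pct * turnover_eur)

# Prohibited-practice tier: €35M or 7% of turnover, whichever is higher.
cap = penalty_cap(turnover_eur=2_000_000_000, fixed_eur=35_000_000, pct=0.07)
# → 140_000_000.0 (7% of €2B exceeds the €35M floor)
```

For a company with €100M turnover the same call returns the €35M floor, since 7% of turnover is only €7M.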
What people get wrong
“We're a US company, so it doesn't apply.”
The Act catches you if your AI's output is used in the EU, wherever you're based. OpenAI and Anthropic are in scope.
“My chatbot is high-risk.”
Most chatbots are limited-risk (transparency only). High-risk triggers are specific: hiring decisions, credit scoring, access to essential services, etc.
“We'll deal with it in 2026.”
A real ISO 42001 implementation takes 4–8 months. Starting in 2026 means missing the August 2, 2026 deadline.
Related frameworks
| Framework | Relationship | Practical impact |
|---|---|---|
| ISO 42001 | Operational layer | The AI Management System standard. Implementing it maps directly to many Act obligations. |
| NIST AI RMF | Complementary | Voluntary US framework. Useful reference; does not grant presumption of conformity under the Act. |
| GDPR | Often co-applies | Training data is often personal data. Both regimes apply when AI processes personal data. |
| Product safety | Stacked | AI in regulated products (medical devices, machinery) must satisfy both the sector law and the AI Act. |