EU AI Act Article 6: The Distinction Most Companies Are Getting Wrong
There's a quiet problem in how the AI industry talks about EU AI Act compliance. Two dates keep getting conflated:

- August 2, 2026: when the bulk of the Act, including the Annex III high-risk obligations, becomes enforceable
- August 2, 2027: when Article 6(1) (AI embedded in already-regulated products) becomes enforceable
The difference matters. If you're deploying AI agents in hiring and HR, credit scoring, education, or any "essential service," you're almost certainly subject to the 2026 date, not 2027. Treating these as interchangeable is the most common compliance mistake we see.
This post breaks down Article 6 — the part of the Act that decides which date applies to your AI systems — and what enterprise teams actually need to do before August 2, 2026.
What Article 113 says about timing
The EU AI Act (Regulation (EU) 2024/1689) applies in phases. Article 113 governs the phasing:

- February 2, 2025: prohibited practices (Article 5) and AI literacy obligations apply
- August 2, 2025: rules for general-purpose AI models, governance, and penalties apply
- August 2, 2026: the remainder of the Act applies, except Article 6(1)
- August 2, 2027: Article 6(1) and the corresponding obligations apply
That single phrase — "the remainder of the Act, except Article 6(1)" — is where the confusion starts. It means the high-risk obligations under Article 6(2) and Annex III are enforceable in 2026, not 2027.
Article 6(1) vs Article 6(2): the actual distinction
Article 6 defines two paths by which an AI system becomes "high-risk." Both result in the same downstream obligations (audit trails, human oversight, FRIAs, etc.) — but they have different enforcement dates.
Article 6(1) — Annex I: embedded AI in regulated products
This applies to AI systems that are safety components of products already regulated under one of the EU's sectoral product-safety laws listed in Annex I. Examples:
- AI in medical devices (Regulation 2017/745, MDR)
- AI in machinery (Regulation 2023/1230)
- AI in toys (Directive 2009/48/EC)
- AI in radio equipment, civil aviation, watercraft, and other regulated product categories
If your product was already regulated under one of these laws before the AI Act, the AI components inside it fall under Article 6(1). These get the 2027 date.
Article 6(2) — Annex III: standalone high-risk AI systems
This applies to AI systems used in any of the eight high-risk areas listed in Annex III, regardless of whether the surrounding product was already regulated. Annex III covers:

- Biometrics (remote identification, categorisation, emotion recognition)
- Critical infrastructure
- Education and vocational training
- Employment, workers' management, and access to self-employment
- Access to essential private and public services (including credit scoring and insurance)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
Annex III gets the 2026 date.
Where most enterprise AI agents land
Almost every enterprise AI agent in production today touches at least one Annex III category:

- Resume screening or candidate ranking: employment (Annex III, point 4)
- Credit decisioning or loan pre-qualification: access to essential services (point 5)
- Admissions, grading, or proctoring: education (point 3)
- Gating access to benefits or insurance: access to essential services (point 5)
If your AI agent is doing any of these — and most enterprise agents do — your compliance deadline is August 2, 2026, not 2027.
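A first pass over an agent inventory can be sketched as a keyword screen against the Annex III categories above. Everything below (the category labels, the keyword sets, the `triage` helper) is illustrative and ours, not language from the Act, and it's a screening aid, not a legal determination:

```python
# Hypothetical first-pass triage of an agent inventory against Annex III.
# Category names are paraphrased from Annex III; keyword sets are examples.

ANNEX_III_KEYWORDS = {
    "education (Annex III, point 3)": {"admissions", "grading", "proctoring"},
    "employment (Annex III, point 4)": {"resume screening", "candidate ranking", "promotion"},
    "essential services (Annex III, point 5)": {"credit scoring", "insurance pricing", "benefits eligibility"},
}

def triage(agent_functions: set[str]) -> list[str]:
    """Return the Annex III categories an agent's functions appear to touch."""
    return [
        category
        for category, keywords in ANNEX_III_KEYWORDS.items()
        if agent_functions & keywords
    ]

# An agent that screens resumes lands in the employment category;
# a pure drafting agent matches nothing and needs human review anyway.
hits = triage({"resume screening", "email drafting"})
```

Any agent that matches zero categories still deserves a human look — a keyword screen only surfaces the obvious cases.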
What "compliant under Article 6(2)" actually requires
Treating an AI system as high-risk under Article 6(2) triggers a specific set of obligations. The big ones, with article references:
| Obligation | Article | What it means in practice |
|---|---|---|
| Risk management system | Article 9 | Documented, ongoing risk identification and mitigation across the agent lifecycle |
| Data and data governance | Article 10 | Quality, representativeness, and bias testing of training and operational data |
| Technical documentation | Article 11 | A specific dossier describing the system's design, capabilities, and limitations |
| Record-keeping (logs) | Article 12 | Automatic logging of agent activity; logs retained and accessible to auditors |
| Transparency and information | Article 13 | Clear instructions for deployers on use, limitations, and oversight |
| Human oversight | Article 14 | Effective HITL controls — not just an "approve" button, but real intervention capacity |
| Accuracy, robustness, cybersecurity | Article 15 | Tested resilience against drift, manipulation, and adversarial input |
| Fundamental rights impact assessment | Article 27 | Mandatory for public-sector deployers, private entities providing public services, and deployers of credit-scoring and insurance-pricing systems |
| Serious incident reporting | Article 73 | Notify market surveillance authorities within strict deadlines: 15 days in the general case, as short as 2 days for the most serious events |
There's no path to compliance that skips audit trails (Article 12) or human oversight (Article 14). And for many deployers, notably banks, insurers, and public-sector bodies, the FRIA under Article 27 is non-negotiable.
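Article 12's "automatic recording of events" doesn't prescribe an implementation, but one common way to make agent logs tamper-evident is a hash chain, where each entry commits to its predecessor. A minimal sketch, with field names like `agent_id` and `entry_hash` as our assumptions rather than terms from the Act:

```python
import hashlib
import json
import time

def append_log_entry(log: list[dict], agent_id: str, event: dict) -> dict:
    """Append a tamper-evident record: each entry hashes its predecessor,
    so any later edit or deletion breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    record = {
        "agent_id": agent_id,
        "timestamp": time.time(),
        "event": event,           # e.g. inputs, outputs, model used, outcome
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def chain_is_intact(log: list[dict]) -> bool:
    """Verify no entry was altered or removed mid-chain."""
    prev = "genesis"
    for record in log:
        if record["prev_hash"] != prev:
            return False
        body = {k: v for k, v in record.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["entry_hash"]:
            return False
        prev = record["entry_hash"]
    return True
```

In production you'd want durable, append-only storage behind this, but the chain itself is what lets an auditor confirm the record hasn't been rewritten after the fact.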
What "ready" looks like 99 days out
If you're an enterprise team running AI agents in scope of Article 6(2), here's a practical readiness check:
1. Inventory. Do you have a documented list of every AI agent your organization runs? Include shadow agents — the ones individual teams deployed without IT review. You can't classify what you can't see.
2. Classification. For each agent, can you cite which Annex III category (or none) it falls into? Without this, you can't even start the Article 9 risk management process.
3. Logging. Does each agent emit structured, immutable logs of its inputs, outputs, model used, and outcome? Article 12 requires "automatic recording of events," not whatever ad-hoc logging your dev team happened to bolt on.
4. Human oversight. Is there a human with authority to override or stop each high-risk agent in real time? Article 14 isn't just a UI requirement — it's an operational capacity requirement.
5. FRIA. For deployer obligations under Article 27, have you completed the fundamental rights impact assessment? This is often the longest single piece of compliance work.
6. Incident reporting. Do you have a process to notify market surveillance authorities within the Article 73 deadlines (15 days in the general case, as short as 2 days for the most serious incidents)? Most teams discover this requirement after an incident.
If any of these is "no" or "partially," you have ~99 days.
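The six checks above can be tracked as a simple per-agent scorecard. This is a hypothetical schema for illustration; the checklist itself, not this code, is the source of truth:

```python
from dataclasses import dataclass, fields

@dataclass
class AgentReadiness:
    """One row per agent; field names mirror the six checklist items."""
    inventoried: bool        # 1. on the documented agent inventory
    classified: bool         # 2. Annex III category (or "none") assigned
    structured_logging: bool # 3. Article 12-style automatic event logs
    human_override: bool     # 4. real-time stop/override capacity (Article 14)
    fria_completed: bool     # 5. Article 27 FRIA, where it applies
    incident_process: bool   # 6. Article 73 notification process in place

def gaps(agent: AgentReadiness) -> list[str]:
    """Names of checklist items still marked False for this agent."""
    return [f.name for f in fields(agent) if not getattr(agent, f.name)]

# Example: an HR screening agent with three items still open.
hr_agent = AgentReadiness(
    inventoried=True, classified=True, structured_logging=False,
    human_override=True, fria_completed=False, incident_process=False,
)
```

Running `gaps(hr_agent)` surfaces the open items (logging, FRIA, incident reporting), which is usually enough to scope the remaining work per agent.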
Why this matters now
The 2027 date applies to a relatively narrow slice of AI systems — those embedded in products that were already regulated under sectoral law. Most enterprise AI agents aren't in that slice.
If your compliance plan assumes you have until 2027, double-check Article 6 against your actual agent inventory. We've seen teams discover, three months into their compliance program, that they were planning for the wrong date.
MeshAI is the Agent Control Plane — discovery, monitoring, audit trails, human oversight, and FRIA workflows for organizations preparing for August 2, 2026. We're accepting pilot partners — free pilot, white-glove onboarding, direct founder support.