
EU AI Act Article 6: The Distinction Most Companies Are Getting Wrong

Henrique Veiga Curi · 2026-04-25 · 10 min read

There's a quiet problem in how the AI industry talks about EU AI Act compliance. Two dates keep getting conflated:

  • August 2, 2026 — when most of the Act applies to high-risk AI systems
  • August 2, 2027 — when the Act applies to AI embedded in products already regulated under EU sectoral laws
The difference matters. If you're deploying AI agents in HR, credit scoring, employment, education, or any "essential service," you're almost certainly subject to the 2026 date — not 2027. Treating these as interchangeable is the most common compliance mistake we see.

    This post breaks down Article 6 — the part of the Act that decides which date applies to your AI systems — and what enterprise teams actually need to do before August 2, 2026.

    What Article 113 says about timing

    The EU AI Act (Regulation (EU) 2024/1689) applies in phases. Article 113 governs the phasing:

  • February 2, 2025: prohibited AI practices (Article 5)
  • August 2, 2025: general-purpose AI model obligations
  • August 2, 2026: the remainder of the Act, except Article 6(1)
  • August 2, 2027: Article 6(1) — high-risk AI systems intended as safety components in products covered by EU sectoral law
That single phrase — "the remainder of the Act, except Article 6(1)" — is where the confusion starts. It means the high-risk obligations under Article 6(2) and Annex III are enforceable in 2026, not 2027.

    Article 6(1) vs Article 6(2): the actual distinction

Article 6 defines two paths by which an AI system becomes "high-risk." Both trigger the same downstream obligations (audit trails, human oversight, fundamental rights impact assessments, and more), but they have different enforcement dates.

    Article 6(1) — Annex I: embedded AI in regulated products

    This applies to AI systems that are safety components of products already regulated under one of the EU's sectoral product-safety laws listed in Annex I. Examples:

  • AI in medical devices (Regulation (EU) 2017/745, the MDR)
  • AI in machinery (Regulation (EU) 2023/1230)
  • AI in toys (Directive 2009/48/EC)
  • AI in radio equipment, civil aviation, watercraft, and other regulated product categories

    If your product was already regulated under one of these laws before the AI Act, the AI components inside it fall under Article 6(1). These get the 2027 date.

    Article 6(2) — Annex III: standalone high-risk AI systems

    This applies to AI systems used in any of the eight high-risk areas listed in Annex III, regardless of whether the surrounding product was already regulated. Annex III covers:

  • Biometrics — remote biometric ID, biometric categorization, emotion recognition
  • Critical infrastructure — safety components of digital infrastructure, traffic, water, gas, electricity
  • Education and vocational training — admissions, assessment, monitoring
  • Employment, workers' management, and access to self-employment — recruitment, performance evaluation, task allocation, monitoring
  • Access to essential services — credit scoring, insurance pricing, benefits, dispatching emergency services
  • Law enforcement — risk assessment, polygraph-like systems, evidence evaluation
  • Migration, asylum, border control — risk assessment, document verification, application examination
  • Administration of justice and democratic processes — judicial research, election influence
Annex III gets the 2026 date.
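To make the distinction concrete, here is a minimal sketch of the Article 6 decision logic in Python. The function and data model are ours, not the Act's; treat it as a reading aid, and get legal review before relying on any classification.

```python
from datetime import date

# Minimal sketch of the Article 6 decision logic described above.
# The data model is illustrative; real classification needs legal review.

def applicable_deadline(is_annex1_safety_component: bool,
                        annex3_categories: set[int]) -> date | None:
    """Return the high-risk compliance deadline, or None if out of scope."""
    if annex3_categories:               # Article 6(2): standalone high-risk (Annex III)
        return date(2026, 8, 2)         # checked first: Annex III owes the earlier date
    if is_annex1_safety_component:      # Article 6(1): safety component in an Annex I product
        return date(2027, 8, 2)
    return None                         # not high-risk under Article 6

# An HR screening agent touches Annex III category 4 (employment):
print(applicable_deadline(False, {4}))  # -> 2026-08-02
```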

    Where most enterprise AI agents land

    Almost every enterprise AI agent in production today touches at least one Annex III category:

  • HR and recruiting agents → category 4 (employment)
  • Customer service agents that route or prioritize → category 5 (access to services), depending on the service
  • Underwriting and pricing agents → category 5 (essential services)
  • Internal performance monitoring agents → category 4 (workers' management)
  • Education and training agents that score or assess → category 3 (education)
  • Compliance and fraud detection agents → may fall under category 5 or 6
If your AI agent is doing any of these — and most enterprise agents do — your compliance deadline is August 2, 2026, not 2027.
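As an illustration, the sketch below triages a toy agent inventory against Annex III. The inventory format and the function-to-category mapping are assumptions for the example, not a legal determination.

```python
# Sketch: flagging in-scope agents in an inventory. The mapping is an
# illustrative assumption; each entry needs case-by-case legal review.

FUNCTION_TO_CATEGORY = {
    "recruiting": 4,             # employment
    "performance_monitoring": 4,
    "credit_underwriting": 5,    # essential services
    "insurance_pricing": 5,
    "education_scoring": 3,      # education and vocational training
    "fraud_detection": 5,        # may also implicate category 6 (law enforcement)
}

inventory = [
    {"name": "resume-screener", "function": "recruiting"},
    {"name": "kb-search", "function": "internal_search"},  # likely out of scope
    {"name": "pricing-agent", "function": "insurance_pricing"},
]

for agent in inventory:
    category = FUNCTION_TO_CATEGORY.get(agent["function"])
    if category is not None:
        print(f'{agent["name"]}: Annex III category {category}, deadline 2026-08-02')
    else:
        print(f'{agent["name"]}: no Annex III match flagged; verify manually')
```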

    What "compliant under Article 6(2)" actually requires

    Treating an AI system as high-risk under Article 6(2) triggers a specific set of obligations. The big ones, with article references:

| Obligation | Article | What it means in practice |
| --- | --- | --- |
| Risk management system | Article 9 | Documented, ongoing risk identification and mitigation across the agent lifecycle |
| Data and data governance | Article 10 | Quality, representativeness, and bias testing of training and operational data |
| Technical documentation | Article 11 | A specific dossier describing the system's design, capabilities, and limitations |
| Record-keeping (logs) | Article 12 | Automatic logging of agent activity; logs retained and accessible to auditors |
| Transparency and information | Article 13 | Clear instructions for deployers on use, limitations, and oversight |
| Human oversight | Article 14 | Effective HITL controls — not just an "approve" button, but real intervention capacity |
| Accuracy, robustness, cybersecurity | Article 15 | Tested resilience against drift, manipulation, and adversarial input |
| Fundamental rights impact assessment | Article 27 | Mandatory for public bodies, public-service providers, and deployers of credit-scoring or insurance-pricing systems |
| Serious incident reporting | Article 73 | Report serious incidents to market surveillance authorities within statutory deadlines of two to fifteen days, depending on severity |

There's no path to compliance that skips audit trails (Article 12) or human oversight (Article 14). And for many deployers — public bodies, public-service providers, and organizations running credit-scoring or insurance-pricing systems — the FRIA under Article 27 is non-negotiable.

    What "ready" looks like 99 days out

    If you're an enterprise team running AI agents in scope of Article 6(2), here's a practical readiness check:

    1. Inventory. Do you have a documented list of every AI agent your organization runs? Include shadow agents — the ones individual teams deployed without IT review. You can't classify what you can't see.

    2. Classification. For each agent, can you cite which Annex III category (or none) it falls into? Without this, you can't even start the Article 9 risk management process.

3. Logging. Does each agent emit structured, immutable logs of its inputs, outputs, model used, and outcome? Article 12 requires "automatic recording of events," not whatever your dev team manually wrote. (See the log-record sketch after this list.)

    4. Human oversight. Is there a human with authority to override or stop each high-risk agent in real time? Article 14 isn't just a UI requirement — it's an operational capacity requirement.

    5. FRIA. For deployer obligations under Article 27, have you completed the fundamental rights impact assessment? This is often the longest single piece of compliance work.

6. Incident reporting. Do you have a process to notify market surveillance authorities within the deadlines set by Article 73, which run as short as two days for the most severe incidents? Most teams discover this requirement after an incident.
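As promised in item 3, here is a minimal sketch of what Article 12-style logging can look like in practice: structured, append-only records with hash chaining for tamper evidence. The field set is an assumed baseline, not a format the Act prescribes.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of an append-only, tamper-evident agent audit log: one JSON object
# per line, each record carrying the hash of the previous one so any edit
# breaks the chain. The fields are an assumed baseline, not a mandated format.

def append_event(log_path: str, event: dict, prev_hash: str) -> str:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
        **event,
    }
    line = json.dumps(record, sort_keys=True)
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return hashlib.sha256(line.encode()).hexdigest()  # hash for the next record

# Example: log one decision by a (hypothetical) screening agent.
h = append_event("agent_audit.jsonl", {
    "agent": "resume-screener",
    "model": "example-model-v1",
    "input_ref": "application-1234",   # a reference, not raw personal data
    "output": "shortlisted",
    "human_reviewer": "j.doe",
}, prev_hash="genesis")
```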

    If any of these is "no" or "partially," you have ~99 days.
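The countdown itself is just date arithmetic; from this post's publication date it works out to 99 days:

```python
from datetime import date

# Days remaining until the Article 6(2) / Annex III deadline.
# From this post's publication date (2026-04-25) this prints 99.
DEADLINE = date(2026, 8, 2)
print((DEADLINE - date(2026, 4, 25)).days, "days until August 2, 2026")
```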

    Why this matters now

    The 2027 date applies to a relatively narrow slice of AI systems — those embedded in products that were already regulated under sectoral law. Most enterprise AI agents aren't in that slice.

    If your compliance plan assumes you have until 2027, double-check Article 6 against your actual agent inventory. We've seen teams discover, three months into their compliance program, that they were planning for the wrong date.


    MeshAI is the Agent Control Plane — discovery, monitoring, audit trails, human oversight, and FRIA workflows for organizations preparing for August 2, 2026. We're accepting pilot partners — free pilot, white-glove onboarding, direct founder support.