EU agrees to delay key AI Act compliance deadlines

Overview

Businesses facing the onset of high-risk AI system obligations in August 2026 will now have significantly more time to prepare. On 7 May 2026, EU lawmakers reached political agreement on revisions to the AI Act, bringing some much-needed certainty after a fraught end to April, when negotiations on the Digital Omnibus on AI almost broke down. This briefing explains the revised deal, which is still subject to formal adoption, and its implications for compliance deadlines and other important aspects of the AI Act.

What are the revised compliance deadlines?

The timeline for high-risk systems is no longer looming quite so soon. Separate, fixed deadlines will apply to each of the two main high-risk categories, allowing time for regulatory guidance and technical standards to be finalised first:

2 December 2027
Deadline for "Annex III" systems

A 16-month postponement covers new or substantially modified high-risk AI systems listed in Annex III, such as:

  • certain biometrics use cases
  • critical infrastructure
  • access to educational and vocational training
  • employment-related uses (e.g. recruitment, selection, performance monitoring, promotion, termination)
  • credit checking and life and health insurance risk and pricing assessments
  • various public sector functions (e.g. benefit eligibility, border/asylum control, law enforcement, justice, elections)

2 August 2028
Deadline for "Annex I" systems

A 12-month postponement applies to AI systems that are products or safety components of products governed by EU product safety rules (medical devices, toys, lifts, radios, etc). The definition of “safety component” will also be narrowed: if an AI component merely assists users or optimises performance without creating health or safety risks, it will not be subject to high-risk obligations.

2 December 2026
Deadline for transparency for AI-generated content

The delay to the deadline to implement transparency solutions for artificially generated content (such as watermarking) has been reduced to three months (from six), with compliance due by 2 December 2026.

Reducing overlap with sectoral rules

To address concerns about regulatory duplication, the Commission is empowered (via implementing acts) to disapply overlapping AI Act requirements where sectoral rules already cover the same ground. Embedded AI subject to the Machinery Regulation will be removed from the AI Act's direct application; AI-related safety measures for machinery will be introduced by delegated acts under that regulation instead. The Commission will also be required to issue guidance to help industries comply with the AI Act in a way that minimises the compliance burden.

This compromise resolves a major friction point from April, but some tech and medical device companies may still be disappointed that full exemption from the AI Act was not achieved for AI embedded in their products.

Reinstatement of registration requirement for exempted high-risk AI systems

Providers seeking exemption from high-risk classification for certain systems (e.g. systems limited to performing a narrow procedural task or refining the results of tasks previously completed by a human) will still need to register them in the EU database for high-risk systems, albeit with reduced information requirements. This represents a reversal from earlier drafts, which proposed a carve-out from the registration requirement for these systems.

Roll-back on relaxations for bias screening

Processing special category data for the purpose of bias detection and correction returns to a strict necessity test, requiring providers to demonstrate that no less intrusive means exist to achieve the same objective. This reverses earlier proposals that provided for more flexibility.

Relief for small mid-cap companies (SMCs)

Relaxations previously extended only to SMEs, such as simplified technical documentation, proportionate penalties, and less prescriptive quality management systems, will be extended to SMCs, opening a more practicable compliance pathway to a larger group of EU enterprises.

Ban on non-consensual intimate content

There will be an explicit prohibition on AI systems that generate non-consensual sexually explicit or intimate content or child sexual abuse material (CSAM), such as so-called 'nudification' apps.

Governance

The AI Office will have a supervisory role over AI systems based on general-purpose AI models where both the model and the system are developed by the same provider, with national authorities retaining competence in areas such as law enforcement, border management and financial services.

The deadline for the establishment of national AI regulatory sandboxes is postponed to 2 August 2027, offering authorities and innovators additional time to develop safe, real-world experimentation environments. 

What's next?

While businesses now have a clearer way forward, the detail of these changes can only be fully assessed once the full text becomes available. For instance, it remains unclear whether the general AI literacy duty on providers and deployers (Article 4) will be retained or dropped – the recent press releases are silent on the landing position.

Moreover, the provisional agreement still requires formal approval by both the Council and the European Parliament before it is adopted and appears in the Official Journal – we will be monitoring developments closely and will share further updates as the position evolves. 

Key contacts

Louisa Chambers
Helen Reddish