Legal briefing

The EU AI Act: what you need to know

Originally published in September 2024 and last updated in September 2025

Overview

The AI Act has implications for businesses around the world, not just in the EU. For most businesses, the key date in their sights is 2 August 2026, when the majority of obligations begin to apply.  However, different obligations come into play at different stages – some apply already.  This briefing explains which obligations apply when and suggests some practical steps for businesses to take.

What and who is within the scope of the AI Act?

The obligations in the AI Act apply to providers (such as developers), deployers (users), importers, distributors, and product manufacturers of "AI systems" and providers of "general-purpose AI models".  The most onerous obligations are borne by providers. 

What counts as an "AI system"?

"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

The definition is broad, but its focus on autonomy and inference capabilities distinguishes AI from conventional software operating by reference to predetermined rules.  The Commission published guidelines on the application of this definition in February 2025.

In a similar way to the GDPR, the AI Act applies to entities established outside the EU, as well as to those within the EU, if they put AI systems on the market or into service in the EU or the output of the AI system is used in the EU. 

The obligations under the AI Act vary according to how the AI is categorised:

  • unacceptable risk – AI systems that pose an unacceptable risk and so are banned under the AI Act.

  • high risk – AI systems that create a high risk to the health and safety or fundamental rights of individuals and so are subject to some of the most onerous obligations.

  • limited risk – AI systems that pose a transparency risk.

There are separate sets of obligations for general-purpose AI models and systems ("GPAI") – models trained on large amounts of data that have a wide range of uses and are often integrated into other downstream systems and applications.  The two regimes are not mutually exclusive: a GPAI model can become part of a high-risk system and be subject to high-risk obligations in addition to GPAI obligations.  There are two tiers of obligations for GPAI: a set that applies to all GPAI models and an additional set that applies to the subset of GPAI models with "systemic risk".  Models with systemic risk are those which have high impact capabilities (presumed where training compute exceeds the designated threshold of 10^25 floating-point operations) or which are designated as such by the Commission.
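By way of illustration only – this is a conceptual sketch, not legal logic, and the function and its inputs are invented for the example – the two-tier GPAI regime amounts to a simple decision rule built around the 10^25 FLOP training-compute presumption:

    # Conceptual sketch of the two-tier GPAI regime; illustrative only.
    SYSTEMIC_RISK_TRAINING_FLOPS = 1e25  # compute above which "high impact
                                         # capabilities" are presumed

    def gpai_obligations(training_flops: float, designated_by_commission: bool) -> list[str]:
        """Which sets of GPAI obligations apply to a model (illustrative)."""
        obligations = ["baseline GPAI obligations"]  # apply to all GPAI models
        if training_flops > SYSTEMIC_RISK_TRAINING_FLOPS or designated_by_commission:
            obligations.append("systemic-risk obligations")  # additional tier
        return obligations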

Many AI systems will not fall into any of these categories and are considered low risk.  Organisations developing and using such systems are encouraged to adhere to voluntary codes and will be subject to an "AI literacy" obligation (in essence, to inform and train staff and supply-chain personnel engaging with AI on the risks and impacts of AI, dependent on the relevant context), but the AI Act will not otherwise apply to such systems.  Implementing AI "Responsible Use" policies will play an important role in helping to meet the "AI literacy" requirement.

Other carve-outs from the AI Act's scope

Certain AI systems are not in scope.  For example, the AI Act does not apply to AI systems used solely for scientific research and development, for purely personal use, or for research and testing before a system is put on the market, nor to those released under free and open-source licences (unless they are classified as prohibited or high-risk systems or fall within the transparency obligations).

What happens and when?

The AI Act has a staged application, with different timings for the different categories of AI. The following sets out the timings of key obligations.

2 February 2025
The ban on prohibited AI systems and the AI literacy obligation began to apply

Prohibited AI systems are banned completely.  In-scope organisations must ensure that no prohibited AI systems are being used, whether as a product offering or within the business.  The vast majority of organisations will have had no engagement with these banned activities, although one to watch for some is the ban on emotion recognition in the workplace and in education.  The EU Commission has published Guidelines on prohibited AI practices.

Which systems are banned?

In summary, AI systems are banned for:

  • manipulating or distorting people's behaviour in ways that cause significant harm.

  • exploiting people's vulnerabilities in ways reasonably likely to cause significant harm.

  • social scoring.

  • predictive policing based solely on personality traits and characteristics (this does not apply to individuals who have already been linked to criminal activity by objective and verifiable facts).

  • creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

  • emotion recognition in a workplace or educational setting, except where used for medical or safety reasons.

  • biometric categorisation to deduce an individual's race, sexual orientation, political opinions or other such sensitive data – with a carve-out for a law enforcement context.

  • 'real time' remote biometric systems in publicly accessible spaces for the purposes of law enforcement, unless certain exceptions are met. 

The AI literacy obligation also began to apply to providers and deployers of AI systems from 2 February 2025 – see our separate AI literacy briefing.

2 August 2025
Rules for new GPAI models and systems

The requirements on GPAI models that apply from 2 August 2025 only apply to new GPAI models put on the market on or after that date; GPAI models already on the market before then have a further two years to comply (i.e. until 2 August 2027).  GPAI models must meet additional transparency requirements: providers must maintain technical documentation recording the training and testing process of the underlying model, as well as documentation to supply to downstream providers which allows them to understand the capabilities and limitations of the model.  Systemic risk GPAI models are subject to a further layer of obligations – e.g. providers must perform adversarial testing to identify and mitigate systemic risks, document those risks, and track and report incidents to the EU AI Office.

The Code of Practice and Commission's Guidelines for GPAI models (to help providers demonstrate compliance) were published later than anticipated, only a few weeks prior to the August 2025 deadline.

2 August 2026
(Most of) the high-risk framework and the transparency risk obligations apply

The majority of the AI Act will apply from 2 August 2026. This includes the obligations on most high-risk AI systems. To help prepare for this deadline, watch out for the guidelines on the classification of AI systems as high risk to be published by the Commission by February 2026.

High-risk AI systems fall into two categories:

1. any AI system which is a safety component of, or is itself, a product subject to EU product safety legislation and required to undergo a third-party conformity assessment under that legislation. The full list of legislation is set out in Annex I of the AI Act, covering medical devices, toys, radio equipment, PPE and more; or

2. any AI system that is specifically designated in Annex III, which sets out areas such as:

  • certain biometrics use cases.
  • critical infrastructure.
  • access to educational and vocational training.
  • certain employment contexts, e.g. systems used in a recruitment or selection process, or to make decisions during the working relationship, including performance monitoring, work allocation, promotion or termination.
  • credit checking, and risk assessment and pricing for life and health insurance.
  • various public sector applications, such as assessing eligibility for benefits, border control and asylum, law enforcement and administration of justice and elections.

Even if a system falls into the Annex III areas, it may be possible to demonstrate that the system is not in fact high-risk by reference to certain criteria, for example, if the use case is limited to performing a narrow procedural task or involves refining the results of tasks previously completed by a human.

The 2 August 2026 enforcement date does not apply to all high-risk AI systems:

  • There will be no enforcement in respect of any high-risk AI system in the private sector on the market prior to that date, provided it is not significantly changed after that date;

  • Pre-existing high-risk AI systems intended for public authority use will have until 2 August 2030 to comply; and

  • High-risk AI systems under Annex I of the AI Act (products subject to product safety legislation) that are put on the market on or after 2 August 2026 will have until 2 August 2027 to comply. 

Providers of high-risk AI systems will be subject to a range of obligations, including:

  • conformity assessments.

  • registration of the system in a new EU AI database.

  • implementing detailed rules on areas such as human oversight, data quality and governance, transparency, accuracy and cybersecurity.

  • post-market monitoring system and incident reporting.

An organisation can also be deemed a provider if it puts its trade mark on, or substantially modifies, a high-risk AI system that is on the market, or alters the purpose of an AI system (including a general-purpose AI system) so that it becomes high-risk.

 

Users of high-risk AI systems will have fewer, but still considerable, obligations, e.g.:

  • using technical and organisational measures to comply with a provider's instructions for the use of the system.

  • assigning human oversight to competent, trained personnel.

  • ensuring relevant and representative input data for the AI system's intended purpose.

  • monitoring the operation of the system, keeping logs of its operation and reporting risks and incidents.

  • informing workers' representatives when using high-risk systems in the workplace.

The obligations relating to AI systems classified as a transparency risk will also apply from 2 August 2026.

Providers of AI systems intended to interact directly with people, or which generate synthetic audio, image, video or text content will be subject to transparency obligations. Users of emotion recognition or biometric categorisation systems or of AI systems to generate deep fakes will also be subject to transparency obligations.  Disclosing to end-users the fact that content has been generated or manipulated by AI and that they are interacting with an AI system will be key to compliance.

2 August 2027
Obligations in respect of Annex 1 high-risk AI systems

High-risk AI systems under Annex I of the AI Act (products subject to product safety legislation) put on the market on or after 2 August 2026 must comply with applicable obligations by 2 August 2027.

Enforcement

The European Commission has established an Artificial Intelligence Office to enforce the AI Act – it will have the exclusive power to monitor, supervise and enforce against providers of GPAI. The EU AI Board, made up of representatives from Member States, will advise and assist the Commission and help ensure consistent implementation of the AI Act. Member States were also required to designate national authorities to enforce the regulation.

There is a tiered approach to penalties, with maximum fines of up to €35 million or 7% of worldwide group turnover (whichever is the greater) for breaches of the provisions on banned AI, up to €15 million or 3% of worldwide group turnover for certain violations relating to other systems, and up to €7.5 million or 1% of worldwide group turnover for certain false reporting breaches.
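Purely by way of illustration (the tier figures come from the Act; the company turnover is invented), the cap in each tier is whichever is greater of the fixed amount and the percentage of worldwide group turnover:

    # Illustrative only: the maximum fine in each tier is the greater of a
    # fixed amount and a percentage of worldwide group turnover.
    TIERS = {
        "banned AI": (35_000_000, 0.07),         # EUR 35m or 7%
        "other violations": (15_000_000, 0.03),  # EUR 15m or 3%
        "false reporting": (7_500_000, 0.01),    # EUR 7.5m or 1%
    }

    def max_fine(tier: str, worldwide_turnover_eur: float) -> float:
        fixed_cap, pct = TIERS[tier]
        return max(fixed_cap, pct * worldwide_turnover_eur)

    # e.g. a group with EUR 1bn worldwide turnover: 7% = EUR 70m > EUR 35m,
    # so the maximum fine for a banned-AI breach is EUR 70m.
    print(max_fine("banned AI", 1_000_000_000))  # 70000000.0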

What's happening in the UK?

The previous UK government proposed relying on existing regulators and regulatory structures, rather than establishing broadly applicable AI-specific regulations or a dedicated AI regulator. The Labour government seems to have a greater appetite for legislating, on a targeted basis at least. Following the King's Speech in July 2024, it looked likely that a future AI Bill would target only the largest GPAI models. However, the AI Bill has been delayed – owing to the "Trump effect", concern about prejudicing the UK's attractiveness to AI companies, and wrangling over AI providers' use of copyright material. It looks as though an AI Bill – which could be more comprehensive than initially suggested, in order to address the copyright issue – will be delayed until at least the summer of 2026.

What practical steps can businesses take?

Businesses impacted by the AI Act are encouraged to start complying on a voluntary basis earlier than the deadlines. That said, many of the obligations are very high-level – in some cases, a product of uncomfortable compromises reached following the heavy negotiation of the text. Secondary legislation, codes of practice, guidelines, standards and templates should help to clarify what practical and technical steps organisations are expected to take, but these are sometimes slow to emerge.

Businesses using AI should be taking steps to:

  • train staff on the implications of the AI Act and AI's risks.

  • review and risk assess current and prospective AI products and use cases against the requirements of the EU AI Act.

  • develop governance processes, documentation and policies, including a Responsible AI Policy or similar.

  • update contracts with suppliers and terms of business to address AI requirements and risks.

  • monitor for secondary legislation and guidance.

Remember that readying your business for compliance with the EU AI Act is only part, albeit an important part, of managing risks associated with AI.   You should also:

  • check that you are complying with existing legislation – in particular, the GDPR (as we explain further in this briefing).

  • be clear, in staff policies and contracts with suppliers, about how your data can be used, and specify what can and cannot be input into AI systems, to prevent the leaking of confidential and proprietary information and of staff/customer personal data.

  • check who owns the intellectual property rights in the output produced by an AI system and whether your business is protected in respect of third-party intellectual property infringement claims (as we explain in this briefing).

  • appropriately allocate responsibility for AI in your contracts with suppliers.

The Technology & Commercial Transactions team at Travers Smith, alongside experts from around the firm, are helping clients to address the complex legal challenges that AI technology poses. For more information about risks associated with AI, please listen to Travers Smith's podcast series, AI Insights.

Key contacts

Louisa Chambers
Helen Reddish