UK's data protection reforms take effect - a new era for automated decision-making

Overview

The core data protection and e-privacy reforms introduced by the Data (Use and Access) Act 2025 (DUAA) were brought into force on 5 February 2026, with minimal notice. These changes cover everything from e-privacy and cookie rules to subject access requests, the introduction of "recognised legitimate interests," a new test for international data transfers, expanded ICO enforcement powers - and, from 19 June 2026, a new statutory regime for handling data protection complaints. This briefing focuses on one of the most transformative changes: the more flexible regime for automated decision-making (ADM).

For an overview of the other DUAA reforms, please see our previous briefing.

Why change the ADM rules?

ADM enables organisations to make faster and more consistent decisions, particularly when processing large volumes of data at speed. The UK government considered the previous ADM rules to be too complex. Its aim: remove unnecessary hurdles, stimulate innovation and business efficiency, and modernise UK data law for an AI-driven economy - while maintaining robust safeguards against unfairness and discrimination.

What counts as ADM?

"Automated individual decision-making" under Article 22 of the UK GDPR refers to making significant decisions about individuals based solely on automated processing - without meaningful human involvement - often using algorithms or artificial intelligence tools. It can include profiling, i.e. the automated processing of personal data to evaluate an individual's behaviour, preferences, or status.

In practice, many decisions regarded as automated involve human intervention at some stage. If a human reviews the decision and has the discretion to alter it, rather than simply rubber-stamping the machine's determination, the processing falls outside the ADM rules. We can expect the ICO's forthcoming guidance to expand further on what counts as meaningful human involvement.

The ADM rules apply only where the processing results in "significant decisions" - those with legal or similarly substantial effects, such as decisions that affect someone's employment opportunities, financial position, access to essential services, health, reputation, behaviour or choices. Typical examples include automatically shortlisting or rejecting job applicants in recruitment, approving or denying loan or credit applications in financial services, determining eligibility for welfare benefits, or dynamically adjusting insurance premiums based on risk scoring.

Routine business activities that do not have a material impact on individuals’ rights or circumstances - such as prioritising customer service tickets, displaying website content, or recommending TV programmes based on user preferences or viewing habits - generally fall outside the ADM rules. However, context is important: if, for instance, automated marketing specifically targets a vulnerable group (such as children), the ADM rules may apply because of the significant potential impact on their choices or behaviour.

The previous ADM rules

Previously, organisations could only make automated decisions if one of three narrow conditions was met: 

  • the individual had given explicit consent

  • the processing was necessary to perform a contract

  • the decision was specifically authorised by UK law, such as for the purposes of detecting fraud or tax evasion - subject to strict safeguards. 

A further layer of protection applied when processing special category data. In addition to meeting one of the above exceptions, organisations either needed the individual’s explicit consent, or the processing needed to satisfy specific substantial public interest conditions.

What's changed?

The prohibition on ADM has been lifted except where special category data are involved. For non-sensitive data, organisations can now rely on other lawful bases, such as legitimate interests (although not the newly introduced "recognised legitimate interests" basis), to carry out ADM.

Where ADM involves special category data, the stricter limitations survive. This is likely to arise in the context of health insurance underwriting, employee absence monitoring that reveals health data, and biometric access controls, for example.

Mandatory safeguards for ADM

There are still mandatory safeguards for all ADM. Organisations must:

  • inform individuals that an automated decision-making process is being used

  • provide individuals with an opportunity to make representations

  • offer a route to meaningful human intervention

  • allow individuals to contest the automated decision.

The Secretary of State can also specify, by regulation, which types of decisions are “significant decisions”, and may vary or expand the required safeguards.

Businesses still need to exercise caution

Although the data protection constraints around ADM have eased, the risks of unfairness, discrimination and reputational damage remain. Businesses should still consider whether it is appropriate to remove "the human from the loop" and must ensure diligent ADM governance, robust transparency, active bias monitoring, and clear justification for decision logic to mitigate these risks, especially in high-impact systems.

UK and EU approaches now diverge

The UK and the EU now have differing approaches to ADM, meaning a UK-compliant strategy will not automatically satisfy EU GDPR (or AI Act) requirements. While the EU's Digital Omnibus proposes some clarifications to the ADM rules, it is unlikely to result in alignment with the UK approach. Businesses operating in both the UK and the EU can either retain the stricter, pre-DUAA regime across both jurisdictions (a harmonised approach) or implement distinct UK and EU compliance frameworks for ADM.

What should businesses do now?

New ICO ADM guidance is expected imminently

The ICO has indicated that detailed ADM guidance will be consulted on this winter and published in spring 2026, so it is important to stay alert for the incoming guidance and be prepared to adapt processes when it lands.

In the meantime, organisations should:

  • audit and document all current ADM use cases and those in the pipeline, with a clear record of the data types involved (including whether special category data is affected), legal bases, an explanation of the logic, and safeguards

  • update privacy notices to describe ADM in plain language

  • have in place straightforward procedures that allow individuals to challenge ADM and obtain human review

  • ensure that all Data Protection Impact Assessments have been completed

  • build in regular reviews for bias, fairness, and accuracy.

KEY CONTACTS

Louisa Chambers
Helen Reddish