Context-specific
The UK government's approach seeks to regulate the use of AI rather than the technology itself - an approach it has dubbed "context-specific". This has some key advantages over the more rigid risk-level categories proposed in the EU's AI Act. Take, for example, an AI-powered chatbot. Used in a retail, customer-satisfaction context, the technology carries far lower risk than when used in a medical diagnostic context, and the latter use case arguably merits tighter regulation than the former. Moreover, a context-specific approach can adapt more quickly to new developments in AI and removes the need constantly to update AI regulations as the technology advances.
Suck it and see…
The Government's approach is iterative: it will see what works (and what doesn't) before intervening further.
There is a concern amongst UK regulators that, unless the framework is placed on some form of statutory footing, it will not be enforceable. The Government is therefore considering imposing a statutory duty on regulators to have "due regard" to the principles - but only after an initial implementation period, and only if it still considers such a duty necessary.
The Government also says that it's "too soon" to make decisions about the liability regime for AI "as it is a complex, rapidly evolving issue which must be handled properly to ensure the success of [the] wider AI ecosystem" and so does not propose to make changes at this stage. Nonetheless, the paper recognises that there are areas where the lack of clarity around liability may prove to be an issue - it provides a case study on automated healthcare triage systems, noting that there is "unclear liability" if such a system provides incorrect medical advice, which may affect the patient’s ability to seek redress. Some might say that this is a case of the Government "kicking the can down the road" and leaving the important task of allocating responsibility between actors in the supply chain to regulators. Once again, this contrasts with the EU's proposed new AI Liability Directive to address liabilities for harms that may arise from the use of AI, which sets out a rebuttable presumption of causality between a failed duty of care and the harm caused by the AI system.
There's also nervousness that the Government's iterative approach may take too long, at a time when the capability of the technology is advancing at a terrific pace.
Foundation Models
The Government has set up a £100m AI Foundation Model Taskforce charged with leading AI safety research and developing the responsible standards and governance needed to underpin the White Paper and the UK's approach to regulating foundation models. The taskforce is modelled on the Vaccine Taskforce launched at the start of the Covid-19 pandemic.
The European Parliament, on the other hand, has chosen to add specific requirements on generative AI systems to the proposed AI Act, such as an obligation to disclose that content was generated by AI, to design the AI system in a way that prevents it from generating illegal content, and to publish summaries of copyrighted material used for training.