The regulation of AI in international arbitration – restrictive red tape, or a necessary evil?

Overview

In 2024 the Commercial Court saw its quietest year in a decade, with only 470 claims issued (compared to 714 the previous year). Many factors contributed to this substantial decline, including a slowdown in new group actions and ongoing uncertainty regarding litigation funding. The ICC's Court of Arbitration, by contrast, registered 841 cases in 2024, only slightly fewer than the 890 registered in 2023, while the total value of its caseload at year-end reached a record $354 billion. This divergence in performance suggests a unique opportunity for international arbitration to cement its position as the leading choice for high-value, complex dispute resolution. To do so, however, it must exploit the benefits of operating in an environment increasingly reliant on artificial intelligence (AI), whilst adequately navigating the accompanying dangers.

How is AI used in international arbitration?

The benefits (both existing and potential) of AI use in arbitration are well known and widely discussed. AI can automate mundane administrative tasks, freeing parties to focus on more complex work. It can analyse and summarise large volumes of documents, untangle evidence and identify patterns. Machine translation is faster and more accurate than ever, which is especially helpful in cross-border disputes. Against a backdrop of spiralling costs and bloated procedural timelines, the introduction of accessible, user-friendly AI tools is a welcome development.

Set against the cost-saving benefits, however, are obvious risks of using AI in arbitration, most notably hallucinations and other limitations of the AI tools themselves. A less discussed issue is the extent to which arbitrators should be able to use AI in their analysis and decision-making (and if so, how such use can and should be regulated). Some will argue it is an inevitable development in international arbitration. Others will counter that the very nature of parties being able to choose their arbitrator(s) militates against such individuals outsourcing their judicial power to a machine that cannot replicate human judgement, empathy or contextual understanding. Indeed, in LaPaglia v Valve Corporation [No. 3:25-cv-00833 (S.D. Cal. Apr. 8, 2025)], a Californian claimant is contesting an adverse arbitral decision on the grounds that the arbitrator relied on AI to such an extent that he "outsourced his adjudicative role."

A robust regulatory framework (or equivalent) is clearly needed to address these concerns and to ensure that AI is integrated responsibly and fairly. This is crucial to ensure that the trust and confidence placed in arbitrators and the arbitral process – one of its major selling points – is maintained.

Is current AI regulation in international arbitration fit for purpose?

Various arbitration institutions and organisations have made recent progress in this field, albeit currently at guideline level only. The Chartered Institute of Arbitrators (CIArb) published guidelines specifically aimed at AI use in arbitration on 19 March 2025. The CIArb guidelines follow the publication of similar guidelines by the Silicon Valley Arbitration & Mediation Center in the USA on 30 April 2024 and, although CIArb is a global organisation, represent a crucial step towards a regulatory framework in the UK. Although non-mandatory, the CIArb guidelines provide clarity in that they explicitly address the risks associated with AI use in arbitration, for example the fact that third-party AI use entails "substantial risks in regard to confidentiality, which is a fundamental prerequisite in the realm of arbitration". On decision-making in particular, the guidelines warn that "questions related to bias take on a distinct character when contemplating the use of an AI tool by arbitrators" and advise that arbitrators must assume responsibility for all aspects of an arbitral award, irrespective of their use of AI. They also provide structured recommendations for how parties may use AI tools during arbitral proceedings, in areas such as procedural oversight, party autonomy, and admissibility and disclosure.

Other major arbitration institutions, such as the Stockholm Chamber of Commerce Arbitration Institute (SCC) and the Vienna International Arbitral Centre (VIAC), have also published short notes on the use of AI in arbitration proceedings. These notes are high-level, intended to facilitate discussions between parties rather than to protect them from unethical uses of AI. Interestingly, however, both notes explicitly identify decision-making as something over which arbitrators should retain full control, without delegating it to an AI tool.

In theory, the nature of arbitration allows tribunals broad discretionary power to manage the use of AI in proceedings. In practice, however, due process paranoia and party autonomy limit these powers. In addition, such powers do not give the parties clarity on how the tribunal itself can and should use AI in its decision-making.

Final thoughts

AI is developing fast, which means that any regulation or set of guidelines must be able to adapt in real time to new dangers and ethical dilemmas. Institutions have a duty to attempt standardisation and consistency in how AI is regulated. In reality, however, there is a limit to how effectively (and quickly) this can be done. Parties themselves can mitigate these limitations by proactively seeking to manage the use of AI in proceedings. For example, they can specify in a procedural order exactly how AI can be used, by whom it can be used (including by the tribunal itself), and (crucially) what should still be done the old-fashioned way. Early discussions and express duties to disclose how and when AI has been used will give comfort to parties, as well as providing protection against later challenges (a win for all participants, including the tribunal). Arbitrators can assist this process by encouraging the parties to address such issues proactively at the outset of an arbitration.

Commentators have suggested a near future in which parties can opt in to have simple disputes decided (in part or completely) by AI. Whilst this would bring obvious time and cost benefits, tribunal expertise will likely remain a key attraction for parties choosing arbitration in particular, which means that control (and transparency) over how decisions are reached will remain a real priority.
