Copyright: Re-publication of this article is authorised only in the following circumstances: the writer and Africa Legal are both recognised as the author, and the website address www.africa-legal.com and original article link are backlinked. Re-publication without both must be pre-authorised by contacting editor@africa-legal.com
New South African AI in Arbitration Guidelines balance innovation with ethics and risk management

Vanessa Jacklin-Levin and Rachel Potter of Bowmans’ Johannesburg office take a closer look at new AI Guidelines recently issued by the Association of Arbitrators (Southern Africa) that aim to balance responsible AI use with protecting the integrity of proceedings.
In the ongoing absence of a legislative framework in South Africa addressing the use of AI in alternative dispute resolution (ADR) proceedings, the Association of Arbitrators (Southern Africa) has adopted and issued its Guidelines on the Use of Artificial Intelligence (AI) in Arbitrations and Adjudications, offering much-needed direction for parties and tribunals integrating AI into ADR proceedings.
While these guidelines are neither exhaustive nor a substitute for legal advice, they provide a helpful framework to promote responsible AI use, protect the integrity of proceedings, and balance innovation with ethical awareness and risk management.
As a starting point, the AI Guidelines stress the importance of parties reaching agreement upfront on the use of AI, including whether the arbitrator or tribunal has the power to issue directives regarding the use of AI.
What is AI used for in ADR proceedings?
Commonly, AI tools assist with the following types of tasks in ADR proceedings:
collating and sequencing complex case facts and chronologies;
document management and expediting the review of large volumes of documents;
conducting legal research and sourcing precedents;
drafting text (for example, legal submissions or procedural documents); and
facilitating real-time translation or transcription during hearings.
So, what are the core principles governing the use of AI in ADR?
The AI Guidelines highlight several principles that should be upheld whenever AI is deployed, including accountability, confidentiality and security, transparency and disclosure, and fairness. In relation to accountability, tribunals and arbitrators must not cede their adjudicative responsibilities to software. Humans ultimately bear responsibility for the outcome of a dispute. Regarding confidentiality and security, public AI models sometimes use user inputs for further ‘training’, which raises the risk that sensitive information could inadvertently be exposed.
As for transparency and disclosure, parties and tribunals should consider whether AI usage needs to be disclosed to all participants. In terms of fairness in decision-making, the AI Guidelines note the risk that biases or inaccuracies in training data will carry through into AI-generated outputs. Human oversight of any AI-driven analysis is therefore indispensable to ensure just and equitable results.
What are the key risks introduced to ADR proceedings by the use of AI?
One of the key advantages of arbitration over public court proceedings is the confidentiality it offers. Irresponsible use of AI by the parties or the tribunal can threaten this advantage and expose the parties to confidentiality and data security risks.
AI tools can also produce flawed or ‘hallucinated’ results, especially in complex or novel fact patterns, leading to misleading outputs or fabricated references. In particular, AI tools are well known to fabricate case law citations when answering legal questions.
What is best practice?
The AI Guidelines advise tribunals to adopt a transparent approach to AI usage throughout proceedings, whether deployed by the tribunal itself or by the parties. Tribunals should consider obtaining explicit agreement on whether, and how, AI-based tools may be used and determine upfront if disclosure of the use of AI tools is required.
Safeguarding confidentiality should be considered upfront and throughout the proceedings, and agreement should be reached on what information may be shared with which AI tools so that the parties are protected.
During hearings, any AI-driven transcription or translation services should be thoroughly vetted to preserve both accuracy and confidentiality. Equal access to AI tools for all parties should be ensured so that no party is prejudiced.
Ultimately, the arbitrator’s or adjudicator’s independent professional judgement must determine the outcome of any proceeding, even if certain AI-generated analyses or texts help shape the final award.
As disputes become ever more data-intensive and as technological solutions proliferate, we recommend that parties, counsel and tribunals consider how best to incorporate AI tools into their processes while upholding the foundational values of ADR: speed, flexibility, independence, fairness and confidentiality.
Vanessa Jacklin-Levin is a dispute resolution partner in Bowmans’ Johannesburg office, and Rachel Potter is a senior associate in the Dispute Resolution Department. The full Guidelines on the Use of Artificial Intelligence (AI) in Arbitrations and Adjudications can be accessed here.