
The American Arbitration Association (AAA) announced last week that it is about to roll out an AI arbitrator. At first, the AI arbitrator will be used for documents-only construction disputes, but next year the AAA anticipates extending it to other types of disputes.
I had been expecting that, sooner or later, an AI arbitrator would arise, and I have written an article (titled “Stranger Disputes,” in honor of one of my favorite series, Stranger Things) explaining that an AI arbitral award can be enforceable under the Federal Arbitration Act’s framework.
I have concerns about the use of AI in arbitration. However, some features of the AAA’s system should help alleviate those concerns. For example, the AAA has significant experience (almost 100 years) in administering arbitration, and its AI arbitration system was trained on more than 1,500 of its construction awards. Also, continuous feedback from experts was used to refine the AI system, and human arbitrators will ultimately review the output and make any necessary changes before finalizing the award.
The AAA’s dataset of 1,500 prior construction awards used to train the AI system helps provide some assurance that the system will produce similar awards (and I’m assuming those awards satisfy some definition of fairness for the construction industry). But imagine that a start-up, without such a database of prior awards, creates its own AI arbitrator platform and markets it to companies or employers for arbitrating consumer or employment disputes. Could the platform be designed to be extremely critical of the claimant’s evidence? Or suppose the designer sets the platform to issue awards in favor of the claimant only if the AI system has a high degree of confidence (say 99%) that an award for the claimant is correct, as opposed to a lesser degree of confidence (say 75% or 65%)?

I’m not sure how a court will evaluate concerns of bias in connection with AI arbitration. However, with great transparency, input from all interested parties, a large dataset of awards broadly accepted as fair, and constant human review, oversight, and fine-tuning, I hope that AI systems can be developed that are fair and neutral. For example, if an AI system is being designed for employment disputes, I would hope that workers’ rights groups are involved in testing and evaluating the system and in selecting the awards for the dataset used to train the platform. I am concerned that financial incentives may drive some creators of AI arbitration systems to develop systems that tend to favor stronger parties, so that the stronger parties who draft non-negotiable arbitration clauses will designate such an AI system in their arbitration clauses.
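The threshold concern can be made concrete with a minimal sketch. Everything here is hypothetical (the scores, the function, and the thresholds are illustrative, not features of any real platform): the same set of claims produces very different outcomes depending solely on the confidence level the designer chooses.

```python
def claimant_wins(confidence_scores, threshold):
    """Count how many claims would be decided for the claimant,
    given a designer-chosen confidence threshold."""
    return sum(1 for score in confidence_scores if score >= threshold)

# Hypothetical confidence scores the AI assigns to ten claims
# (each score: the system's confidence the claimant should prevail).
scores = [0.55, 0.62, 0.68, 0.71, 0.76, 0.81, 0.87, 0.92, 0.96, 0.995]

for t in (0.65, 0.75, 0.99):
    wins = claimant_wins(scores, t)
    print(f"threshold {t:.2f}: {wins} of {len(scores)} claims for claimant")
# threshold 0.65: 8 of 10 claims for claimant
# threshold 0.75: 6 of 10 claims for claimant
# threshold 0.99: 1 of 10 claims for claimant
```

The point of the sketch is that the threshold is invisible in the award itself: each individual decision looks reasoned, yet the aggregate outcome was largely determined by a single design parameter set before any dispute arose.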
One way to address concerns about bias is for drafting parties to stop designating arbitration organizations in their non-negotiable, pre-dispute arbitration agreements, so that after a dispute arises, both parties would have to agree on an organization or AI arbitrator accepted as neutral and fair. In other words, if the selection of an AI arbitration system occurs only after a dispute arises, arbitral organizations or designers of AI platforms may have greater incentives to design a fair process attractive to all parties, instead of the potential incentive to cater to stronger parties who draft arbitration agreements.
Another possible safeguard to build trust in an AI arbitrator would be to publish or release every award that forms part of the dataset used to train the AI system, and, before the dataset is finalized, to solicit public comment from all interested parties.
I’m also concerned about access to the AI arbitrator. For example, suppose an employer drafts an arbitration clause requiring arbitration before the AI system of the AAA or another organization. I am guessing the AAA gave the company access ahead of time to convince it to designate the AAA’s AI system in its arbitration clauses. Will the company have continuous access to the platform to submit hypothetical facts and see hypothetical awards, almost like an advisory opinion? If only one side has continuous access to the system, that party could gain an unfair advantage in future disputes. However, if all parties have equal and early access to the system, I can see how the use of an advisory opinion could facilitate resolution of a dispute, prevent disputes from arising, or inform decision-making to avoid future problems.