

The Human in AI-Assisted Dispute Resolution


Integrating AI into dispute resolution work leaves plenty of room for human lawyers to deliver better value to clients without diminishing their role as advisors and advocates. This was the key takeaway from the ‘Reconciling Data Intelligence With Human Judgement in Legal Decision Making’ panel at London International Disputes Week 2024. The session addressed this urgent and sometimes controversial aspect of AI: what is the lawyer’s role in dispute resolution when AI is increasingly capable of augmenting or even replacing some of the manual work?

Lorraine Medcraft, Vice President of Epiq’s Court Reporting business, moderated a discussion with Peter Nussey, Chief Revenue Officer at Solomonic, a provider of litigation intelligence solutions, and Guy Pendell, Partner in CMS Cameron McKenna Nabarro Olswang LLP’s dispute resolution practice and founder of the online dispute platform pinqDR.

Accountability for Legal Outputs

AI is set to replace some of the dispute resolution work traditionally done by lawyers. That work includes summarising documents, drafting legal contracts and filings, producing arbitration submissions for oral hearings with generative AI and, in the not-too-distant future, ingesting hearing transcripts and comparing them to the documentary record to spot inconsistencies.

As Pendell put it, “There’s quite a bit of lawyering going on there.” So, what’s left for humans? 

The common feature in all those examples is that humans must make the judgement call. Lawyers won’t just turn over a first draft of an AI-generated contract or filing to another party or court. The driving factor is that law is still a regulated profession, and regulators will hold humans accountable. Pendell continued, “At the moment, I’m not aware of a regulator anywhere in the world that allows technology to produce regulated output in the legal space. It still involves humans because we’re the ones who are regulated, and we’re the ones who ultimately take responsibility for the outputs.” 

The idea that young lawyers must do routine, menial work as a rite of passage needs updating. Today’s AI tools put lawyers at the top of an accountability chain, allowing them to practise law with judgement and strategy as they supervise the work of AI.

Humans in Bias and Error Spotting — It Works Both Ways    

Bias in AI is a common concern. The human-created training data used by AI systems can embed various social or other biases. AI systems learning from such data might reproduce the biases, resulting in harmful AI-driven decisions. 

The relationship can also run the other way, however. AI can help identify and remediate human biases and errors in the “real” world.

Nussey described the challenges of detecting and remediating bias in AI systems. “When we think about AI bias and how we address that, we typically think about transparency, the ability to audit, some kind of regulation. But we may soon be in a realm where humans cannot manage the machine. We may be at a point where we’re looking at such large data sets that the AI returns things we cannot evaluate.”

But what if the AI evaluated and corrected human decisions, rather than the other way around? A lawyer might advise a client that there is a fifty percent likelihood of a positive trial outcome, but the true figure is rarely that. Different types of cases and fact patterns can have widely divergent likelihoods of success: in fraud, for example, only nineteen percent of claims have a positive outcome at trial. The problem is that lawyers tend to give estimates shaped by the biases inherent in their own experience. Litigation intelligence solutions, like Solomonic, can correct or balance those biases with data drawn from past cases.

AI systems can sometimes lead humans to question their own biases. Lawyers might have a particular view on something, begin interrogating a large language model, and receive a response that causes them to question whether that view was necessarily the right one in the first place.

Pendell noted the irony in this. “For lawyers who consider themselves driven by evidence and data, that’s an important learning. I have to check whether we are giving opinions or giving opinions based on full information.” AI won’t provide certainty about the outcome of a case, but it will give an indication, particularly where the settlement rate looks statistically significant. Lawyers often rely on anecdotal information that is not always accurate; data about past cases and AI-assisted decision support can serve as a check on those tendencies.

AI systems will continue to improve over the next five to ten years. Lawyers and legal technology providers should plan for a future in which the risks of bias and inaccuracy are eliminated.

The Challenge of AI Adoption in the Legal Space

Understandably, lawyers can be laggards on the technology adoption curve: they are trained to identify risk.

AI presents a further risk for the profession as new types of service providers move in. While a client might understand and appreciate a law firm’s conservative and traditional approach, they might also be offered a technology-based service turned around in half the time and at half the cost.

The Future of AI in Dispute Resolution

It’s essential to focus on the AI we will have in the future rather than just the AI we have today, as the technology is developing quickly. There are several future developments to look forward to in dispute resolution: 

  • Using AI for real-time cross-examinations. AI has been used in mock arbitration, where questions from human tribunals were run through a generative AI system to produce responses, which were then fact-checked by humans and passed back to the advocates in the hearing. A future version of this could compare witness statements against the documentary record to look for inconsistencies between the two in real time.

  • Prevention, not cure. Technology-based legal solutions could fundamentally turn the profession on its head. So much of current legal practice, particularly dispute resolution, is about curing existing problems. There is an alternative future in which AI introduces solutions much earlier, making legal services more about prevention than cure. Clients are already using technology to make better operational decisions so that problems don’t emerge in the first place. Lawyers need to ensure that legal risk prevention is part of those processes.

  • Better and broader access to justice. Clients of the world’s largest firms can afford their high level of service. However, AI-based legal tools will continue to make inroads in the Access to Justice and subject matter expert spaces, where legal needs are great and widespread, but traditional human-based legal services are too expensive. Technology can provide the scale necessary to serve those markets better. 

  • A cultural change toward data-driven decisions. One of the benefits of AI is the push to make the profession more data-driven, relying less on experience, hunches, and personal knowledge. 


There’s good reason to be bullish on AI and its positive impact on dispute resolution. As Medcraft summarised, “AI won’t replace humans, but humans who are using AI will eventually replace those who don’t.” AI is giving humans better tools that enhance their ability to provide the legal insight and judgement they do best.

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.
