US Supreme Court warns of dangers of AI in legal profession

The US Supreme Court has addressed the use of artificial intelligence (AI) in the legal system, acknowledging its potential while cautioning against “dehumanizing the law.” 

Published on Sunday, the 2023 Year-End Report on the Federal Judiciary offers a 13-page overview of the past year in the US legal system. This year, US Chief Justice John G. Roberts, Jr. continued the report’s tradition of addressing “a major issue relevant to the whole federal court system” by focusing on AI, comparing machine learning to past technological advances such as the personal computer.

“For those who cannot afford a lawyer, AI can help,” said Roberts. “It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge — all without leaving home.”

Though Roberts acknowledged the benefits AI may offer, he also noted that it comes with risks, particularly when inappropriately applied. Much of the decision-making in the judicial system, he observed, requires human assessment, discretion, and an understanding of nuance. Simply entrusting such power to an algorithm is likely to produce unsatisfactory and unjust results, especially since AI models often contain inadvertent bias.

“In criminal cases, the use of AI in assessing flight risk, recidivism, and other largely discretionary decisions that involve predictions has generated concerns about due process, reliability, and potential bias,” wrote Roberts. “At least at present, studies show a persistent public perception of a ‘human-AI fairness gap,’ reflecting the view that human adjudications, for all of their flaws, are fairer than whatever the machine spits out.”

Roberts did state that many applications of AI help the judicial system resolve cases in a “just, speedy, and inexpensive” manner. Still, he cautioned that AI isn’t necessarily suitable for every situation, and that “courts will need to consider its proper uses in litigation” as the technology evolves.

“I predict that human judges will be around for a while,” said Roberts. “But with equal confidence I predict that judicial work — particularly at the trial level — will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.”

AI is already impacting the US legal system

Unfortunately, in at least a few cases legal professionals’ understanding of AI has already lagged behind their eagerness to apply it, and machine learning technology has had a dubious impact on the US legal system thus far.

Last year, two lawyers were fined for citing non-existent cases in a legal filing after using OpenAI’s ChatGPT. The AI chatbot had fabricated six cases outright, which the lawyers then attempted to rely upon in their arguments. One of the pair said he had been “unaware of the possibility that its content could be false.”

Though this case was widely reported, not all lawyers seem to have gotten the memo about relying too heavily on AI. Another US lawyer was recently called out for citing fake cases as well, having failed to check them after his client generated them using Google Bard. Said client was disbarred former Trump attorney Michael Cohen, who stated last week that he had thought Bard was a “super-charged search engine” and hadn’t realized it could fabricate results.

Attempts have also been made to use AI chatbots to generate legal arguments. Early last year, online legal service DoNotPay canceled plans to have its AI chatbot represent a defendant in court after being warned it could be charged with the unauthorized practice of law. DoNotPay’s chatbot was developed using ChatGPT.

AI company Luminance also demonstrated its legal large language model Autopilot last November, automating a contract negotiation “without human intervention.” Lawmakers are even using artificial intelligence to write legislation, both within the US and internationally.

Anyone who has drafted or read a legal document knows it’s typically an arduous task, requiring one to parse long, dull pages of complicated, obfuscatory text. Simply asking an AI to check over a contract, evaluate a legal submission, or generate an affidavit may feel like a much quicker, less painful solution. Still, even using AI as an assistive tool carries dangers, as humans may subconsciously absorb its biases.

There may be a few carefully considered use cases for machine learning algorithms in the legal system. However, the technology should be approached with caution. Undue reliance on AI in law carries a real risk of further muting humanity in an already notoriously bureaucratic system.
