AI in the court: Are robot judges next?


The legal industry finds ways to leverage AI

As in many other industries, AI carries great promise as well as risks for the legal industry. In the court system, though, the stakes are unusually high. Using a predictive algorithm to determine your child custody terms isn’t quite the same as Netflix suggesting which movie you should watch next. 

Even so, AI and automation are already playing a large part in the US legal system. At a conference hosted by the Legal Services Corporation in Portland, Oregon, last week, legal professionals from around the country gathered to collectively pause and consider how they’re modernizing their systems — and to what extent they should be using AI. 

“We’ve got to stop doing things just because we can,” said Alan Carlson, a consultant and retired Court Executive Officer of the Orange County Superior Court in California. Before adopting new technology solutions, he said, it’s worth asking, “Do you really need to use AI? Could you just use a decision tree? Is the analytics enough?” 

To be sure, there are already examples of simple, effective ways to integrate automation into the courts, including some emerging AI use cases. Deploying automation can make the justice system more accessible for people who don’t have the money for a lawyer or the time for a court date. 

For instance, at the Superior Court of Los Angeles County in California — the world’s largest court — Gina the Avatar helps residents handle their traffic citations. Gina knows five languages and helps more than 5,000 customers a month. Gina’s not true AI — she’s programmed to work down predefined paths. Still, she’s laid the groundwork for more sophisticated automation. 
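That scripted approach is essentially a decision tree: every answer leads down a fixed, predefined branch rather than through a learned model. Here is a minimal Python sketch of that kind of flow; the steps and menu options are invented for illustration and are not the court’s actual dialog.

```python
# Illustrative sketch of a scripted, non-AI dialog: each answer follows a
# predefined path. The steps and options are invented, not the court's actual flow.

TRAFFIC_FLOW = {
    "start": {
        "prompt": "What would you like to do about your citation?",
        "options": {"pay": "pay_fine", "contest": "set_court_date", "extend": "request_extension"},
    },
    "pay_fine": {"prompt": "Enter your citation number to see the amount due.", "options": {}},
    "set_court_date": {"prompt": "Here are the available arraignment dates.", "options": {}},
    "request_extension": {"prompt": "Extensions of up to 30 days can be requested online.", "options": {}},
}

def next_step(current: str, choice: str) -> str:
    """Follow the predefined branch; an unrecognized choice simply re-prompts."""
    return TRAFFIC_FLOW[current]["options"].get(choice, current)

print(TRAFFIC_FLOW[next_step("start", "pay")]["prompt"])  # -> amount-due prompt
```

The appeal of this design is predictability: every path can be reviewed in advance, which is exactly the property that gets harder to guarantee once a system starts learning from data.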

Los Angeles is now working on a Jury Chat Bot project that will leverage true AI, Snorri Ogata, CIO of the LA Superior Court, told ZDNet. It’s being built on top of the Microsoft Cognitive Services platform, leveraging features like natural language understanding, QnA Maker (to build an “FAQ on steroids,” as Ogata put it) and translation services. The court is initially focusing its efforts on the jury summons process and will be narrowing the chat dialog universe to known outcomes (such as “retrieve my juror ID” or “request a postponement”). 

“This is key for two reasons,” Ogata said. “First, it helps us establish a clear return on investment framework. Second, it narrows the universe of what we need to teach the ‘bot.’ We will likely expand features over time, but we are very excited with what we are currently working on.”
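Narrowing a bot to known outcomes generally means mapping whatever a juror types onto a short list of supported intents and falling back to a scripted answer for everything else. The toy Python sketch below shows that pattern; the intent names and trigger phrases are hypothetical, and a production system like the court’s would use a trained language-understanding service rather than keyword matching.

```python
# Toy sketch of a chat bot narrowed to a fixed set of jury-summons intents.
# Intent names and trigger phrases are hypothetical; a real deployment would use
# a trained language-understanding model rather than keyword matching.

KNOWN_INTENTS = {
    "retrieve_juror_id": ["juror id", "lost my id", "find my juror number"],
    "request_postponement": ["postpone", "reschedule", "different date"],
}

def route(utterance: str) -> str:
    """Map a message to a known intent, or 'fallback' for anything out of scope."""
    text = utterance.lower()
    scores = {
        intent: sum(phrase in text for phrase in phrases)
        for intent, phrases in KNOWN_INTENTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(route("I need to postpone my jury service"))  # -> request_postponement
print(route("What's the meaning of life?"))          # -> fallback
```

Keeping the universe of intents small is what makes the return on investment easier to measure: each supported outcome maps to a task the bot demonstrably takes off a clerk’s desk.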

Meanwhile, other courts in the US are adopting online dispute resolution (ODR) initiatives to handle a range of conflicts. An ODR system can help make the final call when the opposing parties in a conflict have reached a stalemate, explained Colin Rule, VP of ODR at Tyler Technologies and co-founder of the ODR service provider Modria. It can help, for instance, in “night baseball” arbitration, in which the arbitrator hands down an award, which is then adjusted to conform to the closer of the two sides’ competing proposals. 

That kind of assistance may sound close to a “digital judge,” Rule acknowledged. However, he said, “It’s not about building a digital brain.”

To explain the difference, Rule compared an algorithm-powered justice system to IBM’s Watson. When Ken Jennings goes up against the AI-powered Watson on the game show Jeopardy!, “he’s not coming up with 10,000 answers and then ranking them and picking the one that scored the highest,” Rule said. “That’s what Watson does in real time.” 

It’s also how an ODR system would approach a dispute, weighing all of the information in the case alongside the final offers from both parties. An algorithm “can go out and look at 10 million similar cases, come up with its idea of what’s fair, look at the two offers and say, ‘I pick Party A’s offer.’

“That’s not the way judges work, but it’s a good way for computers to work,” Rule continued. “‘Judge’ means something very specific in a human context. We don’t want to just replicate that in technology, but we can meet the need through technology in different ways.”
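In code terms, the selection step Rule describes is simple: estimate a “fair” value from past outcomes, then adopt whichever party’s proposal lands closer to it. Here is a minimal Python sketch, with the historical awards and the two offers invented purely for illustration.

```python
# Minimal sketch of final-offer ("night baseball") style selection.
# The historical awards and the two offers below are invented for illustration.
from statistics import median

def pick_offer(similar_case_awards: list[float], offer_a: float, offer_b: float) -> str:
    """Estimate a fair value from past outcomes, then adopt the closer final offer."""
    fair_value = median(similar_case_awards)  # stand-in for a richer model of precedent
    return "Party A" if abs(offer_a - fair_value) <= abs(offer_b - fair_value) else "Party B"

# Past awards cluster around $1,200; Party A proposes $1,000, Party B proposes $2,000.
print(pick_offer([900, 1150, 1200, 1300, 1400], offer_a=1000, offer_b=2000))  # -> Party A
```

The hard part, of course, is everything hidden inside that one-line “fair value” estimate, which is where questions of data quality and explainability come in.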

To meet the court system’s needs, though, people have to trust the technology involved. Some AI use cases in the legal system have already come under serious scrutiny, such as its use in sentencing guidelines. In 2016, the Wisconsin Supreme Court ruled that it was all right for the Wisconsin Department of Corrections to use a private vendor’s proprietary algorithm to help determine how long someone should be in jail — even if the vendor doesn’t explain how the algorithm works. After a man named Eric Loomis was found guilty of a drive-by shooting, his sentence was informed in part by the “high risk” label a risk-assessment tool gave him. Loomis appealed to the US Supreme Court, but the high court refused to take up his case. 

Explainability — an issue already very top of mind for AI experts — can go a long way in establishing trust, officials at the conference said. At the same time, so can proven results. 



Rule served as director of ODR at eBay and PayPal for eight years, where he helped build a system that handled 60 million disputes a year. But the ODR system was rolled out gradually, he said, after a period during which the system was simply a silent observer, learning how to best resolve disputes. The gradual rollout helped build a quality system the company and customers could trust, he said. 

Courts deploying AI may also consider subjecting their systems to an assessment similar to an environmental impact report, suggested Carlson. Just as construction projects have to undergo an assessment to consider all of the possible impacts (on population, traffic, schools, etc.), AI systems could be assessed by third parties in terms of who they impact and how.  

With that kind of objective assessment built into the AI deployment process, “even though you can’t explain [how the algorithms work], you can have trust in the outcomes,” Carlson said. 

It may not be particularly hard to build an AI-based system that delivers better results than humans, panelists at the conference noted. There’s plenty of evidence of all kinds of human bias built into justice systems. In 2011, for instance, a study of an Israeli parole board showed that the board delivered harsher decisions in the hour before lunch and the hour before the end of the day. 

It’s possible, Rule said, that citizens in future decades could “look back on the human era of courts and say this was just a little better than flipping a coin.”

Still, Rule cautioned that it’s important to build AI systems that “work hand in glove” with people, rather than completely replacing them — in part to ensure that people using these services feel they’ve been heard. 

Judge Wendy Chang of the LA Superior Court also expressed reservations about trying to build fully automated judicial systems. 

“In my experience in judging, especially with a self-represented litigant, most of the time people don’t even know what to tell you,” she said. If an automated system builds its decision based on the information it receives, she continued, “how are you going to train it to look for other stuff? For me that’s a very subjective, in-the-moment thing.” 

For instance, Chang said, “if they’re fidgeting, I’ll start asking them questions, and it will come to a wholly different result.”

And in some cases, Chang said, the stakes are just too high to leave out humans. “Legal issues sometimes lead to irreversible consequences, especially in areas like immigration,” she said. “You have people sent to their home countries… and immediately murdered.”

If a court decides to move forward with an AI implementation, it should do so in a methodical way that considers all the risks, Carlson said. For one thing, the project needs to have defined goals. Administrators also need to implement rules that deter the use of AI for “off-label” goals, he said. For instance, a pre-trial tool that determines the likelihood of whether someone will show up to court shouldn’t be used to predict the likelihood of whether someone will commit a crime. 

Additionally, courts have to ensure they’re using quality data and get it organized. “I don’t think people have figured out yet how much data we have in courts,” Carlson said. “We haven’t organized it very well.”

Like Rule and Chang, Carlson stressed the limits of AI. 

“We need to figure out where is that boundary between technology and human,” he said. 


