Chatbot Disaster: Lawyer’s Reliance On AI Results In Fake Court Decisions

A lawyer used ChatGPT to do legal research for a case, and the chatbot supplied similar cases he could cite in court, a routine step in legal practice. There was just one problem: every case the AI cited was fake.

According to the New York Times, a passenger named Roberto Mata sued Avianca after a serving cart injured his knee during a flight. Avianca moved to have the case dismissed, but Mata’s lawyer objected, submitting a brief that cited more than half a dozen relevant court decisions: Varghese v. China Southern Airlines, Shaboon v. Egyptair, and Martinez v. Delta Air Lines, among others.

Unsurprisingly, neither the airline’s lawyers nor the judge himself could find the decisions or the quotations cited.

The lawyer who used ChatGPT, Steven Schwartz, has been practicing law for three decades. Schwartz said that he had no intention of deceiving the court and that he had never used the chatbot for legal research before. He even asked the bot whether the cases it cited were real, and the AI assured him they could be found in reputable legal databases. Nobody could find them.

ChatGPT, like other chatbots, can confidently provide wrong information. These AIs are not intelligent in any human sense; they are statistical models trained to imitate human language, and when asked a question they respond with something that is merely likely to be true, or that simply sounds true.

The judge has ordered a hearing next month to discuss potential sanctions against Schwartz for filing a legal brief built on fake court decisions.

Written by RA News staff.
