Lawyers blame ChatGPT for tricking them into citing bogus case law

A judge is weighing whether to punish two lawyers who blamed an online chatbot for their own mistakes.

NEW YORK (AP) - Responding to an angry judge, two lawyers blamed ChatGPT on Thursday for tricking them into including fictitious legal research in a court filing.

Attorneys Steven A. Schwartz and Peter LoDuca face possible punishment over a filing in a lawsuit against an airline that cited past court cases Schwartz believed were real but had in fact been invented by the artificial intelligence-powered chatbot.

Schwartz said he used the program while hunting for legal precedents in a case against the Colombian airline Avianca over an injury sustained on a 2019 flight.

The chatbot suggested several aviation-related cases that Schwartz had not been able to find through the usual methods used at his law firm.

Several of those cases turned out not to be real or involved airlines that did not exist.

Schwartz told Judge P. Kevin Castel that he had 'operated under a misperception... that the cases on this website were obtained from a source I didn't have access to.'

He admitted that he had 'failed miserably' in his efforts to verify the accuracy of the citations.

'I didn't understand that ChatGPT was capable of fabricating cases,' Schwartz said.

Microsoft has invested $1 billion in OpenAI, the software company behind ChatGPT.

ChatGPT's success in demonstrating how artificial intelligence could change the way humans learn and work has raised concerns for some. In May, hundreds of industry leaders signed a letter warning that 'mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.'

Judge Castel seemed both baffled and disturbed by the unusual occurrence, and he expressed disappointment that the lawyers had not acted quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca's lawyers and the court. Avianca's lawyers pointed out the bogus case law in a March filing.

Castel confronted Schwartz with one legal case invented by the program. It was initially described as a wrongful-death case brought by a woman against an airline, only to morph into a claim about a man who missed a flight from New York to Los Angeles and was forced to incur extra expenses.

'Can we agree that's legal gibberish?' Castel asked.

Schwartz said he mistakenly believed the confusing presentation resulted from excerpts being drawn from different parts of the case.

When Castel finished his questioning, he asked Schwartz whether he had anything else to say.

Schwartz apologized.

He said the mistake had caused him to suffer personally and professionally and left him feeling 'embarrassed' and 'humiliated'.

He said that he and the firm where he worked, Levidow, Levidow & Oberman, had put safeguards in place to prevent a similar incident from happening again.

LoDuca said he had trusted Schwartz and did not review the research Schwartz had provided.

LoDuca said he never realized the case was bogus.

He said that the result 'pains me no end'.

Ronald Minkoff, an attorney for the law firm, told the judge that the submission resulted from carelessness, not bad faith, and that sanctions should not be imposed.

He said that lawyers have historically struggled with technology, particularly new technology, 'and it doesn't get easier.'

Minkoff said that Schwartz, who barely does federal research, chose to use the new technology believing it was a standard search engine. What he was actually doing, Minkoff said, was playing with live ammunition.

Daniel Shin, an adjunct professor and assistant research director at William & Mary Law School's Center for Legal and Court Technology, introduced the Avianca case at a conference last week. The event drew dozens of participants, in person and online, from state and federal courts across the U.S., including Manhattan federal court.

He said that the topic caused shock and confusion at the conference.

Shin noted that the case arose in the Southern District of New York, the federal district that handles major financial crimes and the 9/11 cases, and that it was the first documented instance of potential professional misconduct by a lawyer using generative AI.

He said the case showed how the lawyers might not have understood how ChatGPT works, since it tends to hallucinate, describing fictional things in a way that sounds realistic but is not.

Shin said the case highlights the dangers of using promising AI technology without understanding the risks.

The judge said he would rule on sanctions at a later date.