Attorney Misused ChatGPT – Invented Case Law in Affidavit

Published by: Deepa Kruse
Reviewed by: Alistair Vigier
Last Modified: 2023-05-27

In a scenario straight out of a legal drama, a New York attorney is facing disciplinary action for using an artificial intelligence tool, ChatGPT, to draft an affidavit for a personal injury lawsuit against an airline. The twist? The court cases he referenced were entirely fictitious.
The attorney, Steven Schwartz of the firm Levidow, Levidow & Oberman, is scheduled for a disciplinary hearing on June 8. Schwartz’s admission that he had used ChatGPT to prepare the legal document came as a shock to the court and his peers. Another lawyer at the firm, Peter LoDuca, is also in the hot seat, though he denied any involvement in the research for the affidavit.

Authored with AI assistance
The document, drafted with AI assistance, was filed in a case involving a man who alleged he was injured by a serving cart on an Avianca flight. The affidavit, it turned out, was peppered with court cases that simply did not exist.
Judge Kevin Castel, overseeing the matter, described the situation as an “unprecedented circumstance.” He noted, “The submitted cases appear to be bogus judicial decisions with concocted quotes and citations.” Both the airline’s legal team and the judge himself were stumped by the nonexistent cases cited in the affidavit.
Bart Banino, a lawyer representing Avianca, told The New York Times that his team suspected early on that the cases were not genuine, though the idea that a chatbot might be involved did not occur to them at first.
Apology to Judge Kevin Castel
As the dust settled, Schwartz tendered an apology to Judge Castel, explaining that he was a novice user of the AI tool and had been unaware that it could generate false content. “ChatGPT was a source that proved itself to be unreliable,” he conceded. Neither Avianca, LoDuca, nor Schwartz responded to Insider’s requests for comment at the time.
ChatGPT, a tool that generates text on demand, carries an explicit caution that it may “produce inaccurate information.” In this instance, the injury lawsuit leaned on alleged precedent cases, cases that turned out not to exist.
Judge Castel demanded an explanation from the legal team for citing the phantom cases. The startling revelation was that Peter LoDuca, the plaintiff’s lawyer, had not prepared the research. It was, in fact, Schwartz who had done the work, using ChatGPT to find similar cases.
Attorney Misused ChatGPT
In a written clarification, Schwartz stressed that LoDuca was neither involved in the research nor aware of how it had been carried out. Schwartz expressed deep regret for relying on the AI tool and vowed never again to use AI for legal research “without absolute verification of its authenticity.”
Documentary evidence revealed an exchange between Schwartz and ChatGPT. When Schwartz asked whether one of the cited cases was authentic, ChatGPT confirmed that it existed, and it gave the same assurance when Schwartz asked it to verify its source.
The disciplinary hearing on June 8 will require both Schwartz and LoDuca to justify why they should not be penalized. Since its launch in November 2022, ChatGPT has been used by millions; it can respond in a human-like manner and imitate various writing styles, but its training data only extends to the internet as it existed in 2021.
This incident highlights the potential risks of AI, particularly in terms of misinformation spread and bias, stirring ongoing debates about the scope and limits of artificial intelligence in sensitive sectors like law.
We hope you found this case about the attorney who misused ChatGPT interesting.