
A lawyer may be punished after using ChatGPT for legal research that produced fake cases.

New York Attorney Admits to Using “Bogus” Legal Research Obtained Through ChatGPT

A New York attorney is in hot water after admitting that his firm used “bogus” legal research obtained through the artificial intelligence chatbot ChatGPT in a personal injury case. Attorney Steven Schwartz, who has practiced law for more than 30 years, submitted a brief containing several references to non-existent cases that his legal team gathered through the program.

“The Court is presented with an unprecedented circumstance,” U.S. District Judge Kevin Castel of the Southern District of New York wrote in an order.

The non-existent cases cited in the filing included Varghese v. China South Airlines, Martinez v. Delta Airlines, Shaboon v. EgyptAir, Petersen v. Iran Air, Miller v. United Airlines, and Estate of Durden v. KLM Royal Dutch Airlines. Judge Castel wrote that the submissions appeared to be “bogus judicial decisions with bogus quotes and bogus internal citations.”

ChatGPT’s Role in the Incident

In a written statement to Judge Castel, Schwartz attached screenshots of a conversation between himself and ChatGPT. The lawyer asked the chatbot whether Varghese v. China South Airlines was a real case, and ChatGPT responded that it was. When asked about the other cases, ChatGPT insisted that they, too, could be found in legal databases.

“I apologize for the confusion earlier,” ChatGPT replied. “Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”

Schwartz accepted responsibility for failing to confirm the sources, saying it was his first time using ChatGPT for legal research and that he “was unaware of the possibility that its content could be false.” ChatGPT has drawn heavy criticism for its impact on several industries, and the program itself warns users that it can produce inaccurate information.

Consequences for the Attorneys

While Schwartz tries to convince the judge that he does not deserve sanctions, his colleague Peter LoDuca must show cause at a June 8 hearing why the court should not sanction him “for the use of a false and fraudulent notarization.” Schwartz said LoDuca “had no reason to doubt the sincerity” of the research and had no direct knowledge of how the legal team obtained it.

Schwartz said he “greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity.”

Lessons Learned

This incident serves as a cautionary tale for attorneys and other professionals who rely on artificial intelligence chatbots for research. While these tools can be helpful, it’s important to verify the accuracy of the information they provide before using it in legal proceedings or other important matters.

  • Always double-check the sources of information obtained through chatbots or other AI tools.
  • Be aware of the limitations of these tools and the potential for inaccuracies.
  • Take responsibility for any mistakes made and learn from them to avoid similar incidents in the future.

By following these guidelines, attorneys and other professionals can avoid the pitfalls of relying too heavily on AI tools and ensure that their work is accurate and reliable.
