Many people fear that one day, tools like ChatGPT will put them out of work. But at least some professions may not need to worry about an AI takeover just yet. As reported by the New York Times, Steven Schwartz, a New York lawyer, recently used OpenAI’s chatbot to help write a legal brief, with disastrous results.
The chatbot’s output was riddled with fabricated case citations that could cost Schwartz dearly. The episode underscores the importance of human expertise and critical thinking in fields such as law, where a single mistake can have serious consequences.
Schwartz’s law firm is suing Avianca on behalf of Roberto Mata, who alleges he was injured on a flight to New York City. The airline recently asked a federal judge to dismiss the case, but Mata’s lawyers responded with a brief citing multiple decisions that supposedly supported their client’s position, including “Varghese v. China Southern Airlines,” “Martinez v. Delta Airlines,” and “Miller v. United Airlines.” There was just one issue: no one could find the court decisions cited in the brief, because ChatGPT had invented them all. The revelation casts serious doubt on the credibility of Mata’s legal team and calls into question any other citations the lawyers have presented, in this case and in prior ones.
On Thursday, Schwartz filed an affidavit stating that he had used ChatGPT to supplement his research for the case and claiming he had been unaware that the material he filed might be false. He also shared screenshots in which he asked the chatbot whether the cases it cited were real. The chatbot insisted they were, telling Schwartz that the decisions could be found in “reputable legal databases” like Westlaw and LexisNexis. In reality, the decisions did not exist anywhere: ChatGPT had simply fabricated them, a failure mode known as hallucination. The episode is a stark reminder to fact-check and verify sources before filing, or publishing, anything.
“I regret using ChatGPT in the past and will never do so in the future without absolute verification of its authenticity,” Schwartz told the court. A hearing is set for June 8 to discuss potential sanctions for the “unprecedented circumstance” Schwartz has created. It remains to be seen what the outcome of the hearing will be and what it will mean for his career.
In related news, Chinese regulators have reportedly ordered tech giants Tencent and Ant Group to restrict access to ChatGPT after the chatbot became entangled in political controversy, tightening the limits on its use in the country.
Social media users may also have noticed suspiciously fluent replies in their feeds: these are generated by ChatGPT-powered bots, which churn out human-sounding responses in an attempt to pass as real users.
Read more related articles:
A Guy Writes His Thesis in One Day Using Only ChatGPT
D-ID Launches Face-to-Face Conversational AI Chatbot Empowered by ChatGPT
Apple Restricts Employees From Using ChatGPT