“While we have safeguards in place, the system may occasionally generate incorrect or misleading information and produce offensive or biased content. It is not intended to give advice.”

– OpenAI’s opening disclaimer

And that brings us to the heart of our biggest fear – what happens when technology turns against us? 

What happens when technology is prematurely deployed without the proper testing and knowledge behind its capabilities?

Earlier this month, OpenAI, the world’s most talked-about artificial intelligence (AI) company, was served with its first-ever defamation lawsuit, one that further showcases the dangers of ChatGPT’s unchecked ability to generate results with no factual or legal backing.

Mark Walters, a nationally syndicated radio host in Georgia, filed his lawsuit against OpenAI on June 5 alleging that its AI-powered chatbot, ChatGPT, fabricated legal claims against him. 

The 13-page complaint references AmmoLand.com journalist Fred Riehl and his May 4 request that ChatGPT summarize Second Amendment Foundation v. Ferguson, a case filed in federal court in Washington accusing the state’s Attorney General, Bob Ferguson, of abusing his power by chilling the gun rights foundation’s activities. Riehl provided the OpenAI chatbot with a link to the lawsuit.

While Walters was not named in that original lawsuit, ChatGPT responded to Riehl’s request by stating that the case was:

“...a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF.”

But here’s where things get distorted and dangerous – none of ChatGPT’s statements concerning Walters are in the actual SAF complaint. 

This AI-generated “complaint” also alleged that Walters, whom it identified as the organization’s treasurer and chief financial officer, “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership.”

As a form of relief, the fabricated summary claimed that the plaintiff was seeking “the recovery of the misappropriated funds, damages for breach of fiduciary duty and fraud, and Walters’ removal from his position as a member of the SAF’s board of directors.”

However, herein lies the problem: according to Walters, “[e]very statement of fact in the [ChatGPT] summary pertaining to [him] is false,” and OpenAI’s chatbot went so far as to create “an erroneous case number.”

“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walters’ reputation and exposing him to public hatred, contempt, or ridicule,” the lawsuit states. “By sending the allegations to Riehl, [OpenAI] published libelous matter regarding Walters.”

If you were to ask ChatGPT to provide a summary of the SAF lawsuit cited in Walters’ complaint, you may get a response similar to this:

“I apologize, but as an AI language model, my responses are based on pre-existing knowledge up until September 2021. Therefore, I cannot access or browse the internet or view specific documents or links that were published after my knowledge cutoff. Consequently, I’m unable to provide you with a summary of the accusations in the lawsuit you mentioned…[t]o get information about the lawsuit and its accusations, I recommend reviewing the document yourself or referring to trusted news sources or legal websites that may have covered the case. They can provide you with accurate and up-to-date information regarding the specific lawsuit you mentioned.”

While OpenAI has not publicly commented on Walters’ ongoing defamation lawsuit, it raises the question of why the AI company isn’t pressing harder on these arguably foreseeable consequences of a system that was, in retrospect, negligently deployed without proper testing.

The case is Mark Walters v. OpenAI, LLC, cv-23-A-04860-2.

  • You can read Walters’ June 5 complaint here. 

