On January 11, 2024, a Georgia county court judge denied OpenAI's motion to dismiss a libel action based on the hallucinations of ChatGPT. In denying the motion, the judge agreed that the Plaintiff's claim was "plausible on its face." Several of the legal issues in the case are familiar elements of libel; the one issue that's new is the inclusion of an AI author.
Background
Plaintiff Mark Walters is “the host of Armed America Radio and has a reputation as the ‘Loudest Voice in America Fighting For Gun Rights.’”
Fred Riehl is a journalist for a media outlet and a subscriber to ChatGPT.
On May 4, 2023, Riehl interacted with ChatGPT about a lawsuit on which Riehl was reporting.
In that interaction, ChatGPT fabricated an entire legal complaint with an erroneous case number. It definitively identified Mark Walters, falsely claimed that Walters had been accused of defrauding and embezzling funds from the Second Amendment Foundation (SAF), and asserted positively that "Walters has breached [his] duties and responsibilities by, among other things, embezzling and misappropriating SAF's funds and assets for his own benefit, and manipulating SAF's financial records and bank statements to conceal his activities."
The Plaintiff cites the applicable law:
O.C.G.A. § 51-5-1(a) provides, “A libel is a false and malicious defamation of another, expressed in print, writing, pictures, or signs, tending to injure the reputation of the person and exposing him to hatred, contempt, or ridicule.”
Riehl confirmed the ChatGPT output was false and then warned Walters.
The Facts in Reverse
Fred Riehl was the first human in the loop, and he didn't defame Mark Walters; he warned Walters of a falsehood.
OpenAI auto-published a falsehood and auto-assigned it to Fred Riehl before any human agency was involved. It's hard to interpret that as anything but reckless disregard for the content of that message.
And then, at the beginning, ChatGPT, like a child at play, innocently created a falsehood in an attempt to accurately autocomplete a sentence according to its limited internal model of the world.
Terms of Service
While not definitive, OpenAI's Terms of Service (TOS) place both liability for and ownership of ChatGPT's output on OpenAI's users. Interestingly, the TOS also appear to at least partly presume that OpenAI is the original 'owner' of ChatGPT output and that ownership must be assigned to the users: "We hereby assign to you all our right, title, and interest, if any, in and to Output." The TOS also served to warn Fred Riehl of the potential for falsehoods as a condition of using the product, which may ultimately exacerbate or mitigate the judgment.
Publication is Key
The Plaintiff routinely conflates OpenAI and ChatGPT but ultimately hinges this part of his case on the fact that, wherever they originated, the lies were "published" by OpenAI:
OpenAI claims that its statements to Riehl were not published … arguing, that its Terms of Use made clear that Riehl was the “owner” of the libelous material and that if he republished the material, he should inform his readers that he is responsible for the content of what he publishes.
It is true that a re-publisher of libel can be responsible for what he re-publishes. But that responsibility does not have the effect of negating the responsibility of the original publisher of the material (in this case, OpenAI).
All that is required for publication is communication of the libelous material to someone other than the subject. OpenAI's statements to Riehl were communication to someone other than Walters, so the statements were "published" for the purposes of Georgia law.
The sexy questions of AI authorial intent, maliciousness, creativity, or the personhood of ChatGPT just aren’t in question in this case.
Very interesting. So ChatGPT clearly produced and published libelous material under Georgia law, and IIUC the case boils down to whether the TOS can negate OpenAI's responsibility. Is there relevant precedent for TOS absolving responsibility for tortious publications? Should we expect the future ruling to be very Georgia-specific?
What do you think would happen in practice if OpenAI were actually found liable for libel? Would they have to take down ChatGPT until it was incapable of producing libelous statements? Or would they appeal to a higher court, which, in the interests of ChatGPT's users, might redefine libel not to apply in this case?