{"id":50849,"date":"2026-02-25T19:34:05","date_gmt":"2026-02-25T19:34:05","guid":{"rendered":"https:\/\/eduzim.co.zw\/news\/?p=50849"},"modified":"2026-02-25T19:34:05","modified_gmt":"2026-02-25T19:34:05","slug":"who-is-to-blame-when-ai-defames","status":"publish","type":"post","link":"https:\/\/eduzim.co.zw\/news\/2026\/02\/25\/who-is-to-blame-when-ai-defames\/","title":{"rendered":"Who is to blame when AI defames?"},"content":{"rendered":"<p>\n<\/p>\n<div>\n<p>The rise of generative AI has also given rise to an increase in litigation based on false results which cause harm to a person\u2019s reputation. Defamatory output is sometimes caused by generative AI, and at other times, AI can be used to create false impressions about a person, as in the case of deepfakes. If a person\u2019s reputation or dignity is harmed, a cause of action arises in South African law.<\/p>\n<p>AI-related defamation lawsuits are being brought with increasing regularity. One of the first, against ChatGPT, was introduced in 2023 in Australia. The Hepburn Shire Council Mayor, Brian Hood, launched a defamation lawsuit against OpenAI, the owner of ChatGPT. The lawsuit concerned a false result generated by ChatGPT that claimed the mayor had served time in prison for a bribery charge in relation to a matter where he was, in fact, the whistleblower. The lawsuit was resolved in early 2024 after corrections were made to the ChatGPT outputs.<\/p>\n<p>Another interesting case\u2013this time in the United States of America (USA)\u2013involved Robert Starbuck, an American filmmaker, journalist, and activist. 
His complaint, filed on 29\u00a0April\u00a02025, set the scene:<\/p>\n<p><em>\u201c<\/em>Imagine waking up one day and learning that a multi-billion-dollar corporation was telling whoever asked that you had been an active participant in one of the most stigmatised events in American history\u2013the Capitol riot on January 6th, 2021\u2013and that you were arrested for and charged with a misdemeanour in connection with your involvement in that event.<\/p>\n<p>Further imagine that these accusations were completely false\u2026<\/p>\n<p>\u2026Finally, imagine that the technology company continued to publish these and other lies about you for nine months after you first asked them to stop.<em>\u201c<\/em><\/p>\n<p>This was the basis on which Starbuck brought a defamation lawsuit against Meta Platforms, Inc.\u2013the owner of the Meta AI chatbot. In August 2024, Starbuck discovered the chatbot included these false and damaging statements about him in its outputs. According to his complaint, Starbuck \u201cdid everything within his power to alert Meta about the error and enlist its help to address the problem.\u201d However, despite his attempts to bring this to the company\u2019s attention, the defamatory outputs reportedly continued. 
It seems that while all information relating to Starbuck was eventually removed from text outputs, additional misinformation was added via the Meta AI voice feature, including claims that Starbuck had \u201cpled guilty over disorderly conduct\u201d relating to the Capitol riot and that he had \u201cadvanced Holocaust denialism.\u201d<\/p>\n<p>The question \u201cwho is to blame when AI defames?\u201d might have been answered by the Delaware Superior Court in this case, but a public apology by Meta\u2019s Joel Kaplan indicated that the \u201cparties [had] resolved this matter\u201d and were collaborating to mitigate the risks of hallucinations.<\/p>\n<p>Another case\u2013also in the USA\u2013involved Mark Walters, a media personality, radio talk show host, and Second Amendment (right to bear arms) advocate, who launched a defamation lawsuit against OpenAI in 2023. He claimed that Frederick Riehl\u2013a journalist and editor of a news site focusing on Second Amendment rights\u2013used ChatGPT, which produced statements about Walters being involved in embezzlement. Walters sued OpenAI (the owner of ChatGPT). However, the Superior Court of Gwinnett County in the State of Georgia ruled in favour of OpenAI in May 2025, on various grounds, one of which was that, as a public figure, Walters had to demonstrate actual malice (knowledge of falsity) on the part of ChatGPT. The court held that OpenAI could not be held liable; the key basis for the decision appears to be that the inclusion by ChatGPT of a disclaimer below the prompt bar meant that reasonable readers would know that ChatGPT makes mistakes. When considering whether the disputed output communicated a defamatory meaning as a matter of law, the court applied the \u201chypothetical reasonable reader\u201d test. The court noted that \u201c[d]isclaimer or cautionary language weighs in the determination of whether this objective, \u2018reasonable reader\u2019 standard is met\u201d. 
Given the recurring disclaimers that applied, users of ChatGPT in Riehl\u2019s position could not have believed that the output consisted of \u201cactual facts\u201d without attempting to verify the information. The order referred to Riehl\u2019s testimony that he was \u201csceptical\u201d of the output; knew that it \u201cwas not true\u201d and consisted of \u201cthe wrong information\u201d; and was cognisant of ChatGPT\u2019s capacity to produce hallucinations. Because Riehl did not believe the output, the court concluded that it could not have communicated a defamatory meaning as a matter of law. The court confirmed that this alone would have been sufficient to find in favour of OpenAI and grant summary judgment.<\/p>\n<p>In South Africa, while no cases have yet been decided, AI platforms may not be as lucky as ChatGPT was in the Walters case. In South African law, the publication would likely be regarded as defamatory despite the disclaimer. Disclaimers are not \u201cmagic wands\u201d that cure defamatory speech. And if, as we believe likely, AI platforms are required to show that they acted without negligence, then a court will need to take a very close look at the systems and processes the platform has adopted. At the very least, it is likely that such platforms will have a duty to act reasonably once notified of the defamatory or unlawful content. 
As AI platforms operating in South Africa will soon see, there is nothing artificial about a defamation lawsuit.<\/p>\n<p><em>*Dario is a partner at Webber Wentzel and a member of the firm\u2019s AI specialist team in dispute resolution, advising clients on emerging AI-related disputes, legal issues and potential risks.<\/em><\/p>\n<p><em><strong>By Dario Milo, Partner &#038; Lia Wheeler, Candidate Attorney from Webber Wentzel<\/strong><\/em><\/p>\n<\/p><\/div>\n","protected":false},"excerpt":{"rendered":"<p>The rise of generative AI has also given rise to an increase in litigation based on false results which 
cause&hellip;<\/p>\n","protected":false},"author":1,"featured_media":50850,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32,11],"tags":[2561,9727],"class_list":["post-50849","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-mzansi","category-world","tag-blame","tag-defames"],"_links":{"self":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts\/50849","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/comments?post=50849"}],"version-history":[{"count":1,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts\/50849\/revisions"}],"predecessor-version":[{"id":50851,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts\/50849\/revisions\/50851"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/media\/50850"}],"wp:attachment":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/media?parent=50849"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/categories?post=50849"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/tags?post=50849"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}