{"id":55566,"date":"2026-04-10T00:41:06","date_gmt":"2026-04-10T00:41:06","guid":{"rendered":"https:\/\/eduzim.co.zw\/news\/?p=55566"},"modified":"2026-04-10T00:41:06","modified_gmt":"2026-04-10T00:41:06","slug":"openai-backs-bill-exempt-ai-firms-model-harm-lawsuits","status":"publish","type":"post","link":"https:\/\/eduzim.co.zw\/news\/2026\/04\/10\/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits\/","title":{"rendered":"OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters"},"content":{"rendered":"<p> <script data-jetpack-boost=\"ignore\" async src=\"https:\/\/pagead2.googlesyndication.com\/pagead\/js\/adsbygoogle.js?client=ca-pub-1669381584671856\"\r\n     crossorigin=\"anonymous\"><\/script>\r\n<!-- Africa tv video display -->\r\n<ins class=\"adsbygoogle\"\r\n     style=\"display:block\"\r\n     data-ad-client=\"ca-pub-1669381584671856\"\r\n     data-ad-slot=\"3579572842\"\r\n     data-ad-format=\"auto\"\r\n     data-full-width-responsive=\"true\"><\/ins>\r\n<script data-jetpack-boost=\"ignore\">\r\n     (adsbygoogle = window.adsbygoogle || []).push({});\r\n<\/script><br \/>\n<\/p>\n<div>\n<p><span class=\"lead-in-text-callout\">OpenAI is throwing<\/span> its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as death or serious injury of 100 or more people or at least $1 billion in property damage.<\/p>\n<p class=\"paywall\">The effort seems to mark a shift in OpenAI\u2019s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology\u2019s harms. Several AI policy experts tell WIRED that SB 3444\u2014which could set a new standard for the industry\u2014is a more extreme measure than bills OpenAI has supported in the past.<\/p>\n<p class=\"paywall\">The bill would shield frontier AI developers from liability for \u201ccritical harms\u201d caused by their frontier models as long as they did not intentionally or recklessly cause such an incident, and have published safety, security, and transparency reports on their website. It defines a frontier model as any AI model trained using more than $100 million in computational costs, which likely could apply to America\u2019s largest AI labs, like OpenAI, Google, xAI, Anthropic, and Meta.<\/p>\n<p class=\"paywall\">\u201cWe support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses\u2014small and big\u2014of Illinois,\u201d said OpenAI spokesperson Jamie Radice in an emailed statement. \u201cThey also help avoid a patchwork of state-by-state rules and move toward clearer, more consistent national standards.\u201d<\/p>\n<p class=\"paywall\">Under its definition of critical harms, the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense and leads to those extreme outcomes, that would also be a critical harm. 
If an AI model were to commit any of these actions, then under SB 3444 the lab behind the model could not be held liable, so long as the harm was not intentional and the lab had published its reports.

Federal and state legislatures in the US have yet to pass any laws that specifically determine whether AI model developers such as OpenAI could be held liable for these kinds of harms caused by their technology. But as AI labs continue to release more powerful models that raise novel safety and cybersecurity challenges, such as Anthropic's Claude Mythos, these questions feel increasingly pressing.

In her testimony supporting SB 3444, Caitlin Niedermeyer, a member of OpenAI's Global Affairs team, also argued in favor of a federal framework for AI regulation. Niedermeyer struck a note consistent with the Trump administration's crackdown on state AI safety laws, arguing that it is important to avoid "a patchwork of inconsistent state requirements that could create friction without meaningfully improving safety." This also echoes the broader view in Silicon Valley in recent years, which has generally held that it is paramount for AI legislation not to hamper America's position in the global AI race. While SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they "reinforce a path toward harmonization with federal systems."

"At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation," Niedermeyer said.

Scott Wisor, policy director for the Secure AI project, tells WIRED he believes the bill has a slim chance of passing, given Illinois' reputation for aggressively regulating technology. "We polled people in Illinois, asking whether they think AI companies should be exempt from liability, and 90 percent of people oppose it.
There's no reason existing AI companies should be facing reduced liability," Wisor says.