{"id":43605,"date":"2025-12-01T02:15:44","date_gmt":"2025-12-01T02:15:44","guid":{"rendered":"https:\/\/eduzim.co.zw\/news\/?p=43605"},"modified":"2025-12-01T02:15:44","modified_gmt":"2025-12-01T02:15:44","slug":"how-openai-reacted-when-some-chatgpt-users-lost-touch-with-realityutm_sourcerss1-0mainlinkanonutm_mediumfeed","status":"publish","type":"post","link":"https:\/\/eduzim.co.zw\/news\/2025\/12\/01\/how-openai-reacted-when-some-chatgpt-users-lost-touch-with-realityutm_sourcerss1-0mainlinkanonutm_mediumfeed\/","title":{"rendered":"How OpenAI Reacted When Some ChatGPT Users Lost Touch with Reality"},"content":{"rendered":"<div id=\"text-180247039\">\n<p>\t\t\t\tSome AI experts were reportedly shocked ChatGPT wasn&#8217;t fully tested for sycophancy by last spring. 
&#8220;OpenAI did not see the scale at which disturbing conversations were happening,&#8221; writes the New York Times, sharing what it learned after interviewing more than 40 current and former OpenAI employees, including safety engineers, executives, and researchers.<\/p>\n<p>The team responsible for ChatGPT&#8217;s tone had raised concerns about last spring&#8217;s model (which the Times describes as &#8220;too eager to keep the conversation going and to validate the user with over-the-top language&#8221;). But they were overruled when A\/B testing showed users kept coming back:<\/p>\n<p><i>Now, a company built around the concept of safe, beneficial AI faces five wrongful death lawsuits&#8230; OpenAI is now seeking the optimal setting that will attract more users without sending them spiraling.<br \/>\nThroughout this spring and summer, ChatGPT acted as a yes-man echo chamber for some people. They came back daily, for many hours a day, with devastating consequences&#8230; The Times has uncovered nearly 50 cases of people having mental health crises during conversations with ChatGPT. Nine were hospitalised; three died&#8230; One conclusion that OpenAI came to, as Altman put it on X, was that &#8220;for a very small percentage of users in mentally fragile states there can be serious problems.&#8221; But mental health professionals interviewed by the Times say OpenAI may be understating the risk. Some of the people most vulnerable to the chatbot&#8217;s unceasing validation, they say, were those prone to delusional thinking, which studies have suggested could include 5% to 15% of the population&#8230;<\/i><\/p>\n<p>In August, OpenAI released a new default model, called GPT-5, that was less validating and pushed back against delusional thinking. Another update in October, the company said, helped the model better identify users in distress and de-escalate the conversations. Experts agree that the new model, GPT-5, is safer&#8230; 
Teams from across OpenAI worked on other new safety features: the chatbot now encourages users to take breaks during long sessions, the company now scans for discussions of suicide and self-harm, and parents can get alerts if their children indicate plans to harm themselves. The company says age verification is coming in December, with plans to provide a more restrictive model to teenagers.<\/p>\n<p>After the release of GPT-5 in August, [OpenAI safety systems chief Johannes] Heidecke&#8217;s team analysed a statistical sample of conversations and found that 0.07% of users, which would be equivalent to 560,000 people, showed possible signs of psychosis or mania, and 0.15% showed &#8220;potentially heightened levels of emotional attachment to ChatGPT,&#8221; according to a company blog post. But some users were unhappy with this new, safer model. They said it was colder, and they felt as if they had lost a friend. By mid-October, Altman was ready to accommodate them. In a social media post, he said that the company had been able to &#8220;mitigate the serious mental health issues.&#8221; That meant ChatGPT could be a friend again. Customers can now choose its personality, including &#8220;candid,&#8221; &#8220;quirky,&#8221; or &#8220;friendly.&#8221; Adult users will soon be able to have erotic conversations, lifting the ban on adult content. (How erotica might affect users&#8217; well-being, the company said, is a question that will be posed to a newly formed council of outside experts on mental health and human-computer interaction.)<\/p>\n<p>OpenAI is letting users take control of the dial and hopes that will keep them coming back. That metric still matters, maybe more than ever. In October, [30-year-old &#8220;Head of ChatGPT&#8221; Nick] Turley made an urgent announcement to all employees. 
He declared a &#8220;Code Orange.&#8221; OpenAI was facing &#8220;the greatest competitive pressure we&#8217;ve ever seen,&#8221; he wrote, according to four employees with access to OpenAI&#8217;s Slack. The new, safer version of the chatbot wasn&#8217;t connecting with users, he said. <\/p>\n<p>The message linked to a memo with goals. One of them was to increase daily active users by 5% by the end of the year.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Some AI experts were reportedly shocked ChatGPT wasn&#8217;t fully tested for sycophancy by last spring. 
&#8220;OpenAI did not see the&hellip;<\/p>\n","protected":false},"author":1,"featured_media":30632,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[32],"tags":[],"class_list":["post-43605","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-mzansi"],"_links":{"self":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts\/43605","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/comments?post=43605"}],"version-history":[{"count":1,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts\/43605\/revisions"}],"predecessor-version":[{"id":43606,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/posts\/43605\/revisions\/43606"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/media\/30632"}],"wp:attachment":[{"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/media?parent=43605"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/categories?post=43605"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/eduzim.co.zw\/news\/wp-json\/wp\/v2\/tags?post=43605"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}