OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly

Earlier this month, the company unveiled a wellness council to address these concerns, though critics noted the council didn't include a suicide prevention expert. OpenAI also recently rolled out controls for parents of children who use ChatGPT. The company says it's building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of age-related safeguards.

Rare but impactful conversations

The data shared on Monday appears to be part of the company's effort to demonstrate progress on these issues, though it also shines a spotlight on just how deeply AI chatbots may be affecting the health of the public at large.

In a blog post on the recently released data, OpenAI says these kinds of conversations in ChatGPT that may trigger concerns about "psychosis, mania, or suicidal thinking" are "extremely rare," and thus difficult to measure. The company estimates that around 0.07 percent of users active in a given week and 0.01 percent of messages indicate possible signs of mental health emergencies related to psychosis or mania. For emotional attachment, the company estimates around 0.15 percent of users active in a given week and 0.03 percent of messages indicate potentially heightened levels of emotional attachment to ChatGPT.
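To put those percentages in perspective, here is a rough back-of-envelope calculation, as a minimal sketch. It assumes ChatGPT has roughly 800 million weekly active users, a figure OpenAI has cited publicly but which does not appear in the post itself:

```python
# Back-of-envelope: translate OpenAI's weekly-user percentages into
# rough absolute counts. The 800 million weekly active user figure is
# an assumption based on OpenAI's public statements, not from this post.
WEEKLY_ACTIVE_USERS = 800_000_000

rates = {
    "psychosis/mania signals": 0.0007,         # 0.07 percent of weekly users
    "heightened emotional attachment": 0.0015,  # 0.15 percent of weekly users
}

for label, rate in rates.items():
    print(f"{label}: ~{rate * WEEKLY_ACTIVE_USERS:,.0f} users per week")

# Output:
# psychosis/mania signals: ~560,000 users per week
# heightened emotional attachment: ~1,200,000 users per week
```

At that scale, even fractions of a percent correspond to hundreds of thousands of people each week, the same order of magnitude as the headline figure.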

OpenAI also claims that in an evaluation of over 1,000 difficult mental health-related conversations, the new GPT-5 model was 92 percent compliant with its desired behaviors, compared with 27 percent for a previous GPT-5 model released on August 15. The company also says the latest version of GPT-5 holds up to OpenAI's safeguards better in long conversations. OpenAI has previously admitted that its safeguards are less effective during prolonged conversations.

In addition, OpenAI says it's adding new evaluations to try to measure some of the most serious mental health issues facing ChatGPT users. The company says its baseline safety testing for its AI language models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

Despite the continued mental health concerns, OpenAI CEO Sam Altman announced on October 14 that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The company had loosened ChatGPT content restrictions in February but then dramatically tightened them after the August lawsuit. Altman explained that OpenAI had made ChatGPT "pretty restrictive to be sure we were being careful with mental health issues" but acknowledged this approach made the chatbot "less useful/enjoyable to many users who had no mental health problems."

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.
