Tech
OpenAI Explains How It Assesses Mental Health Concerns of ChatGPT Users, Sparks Backlash
by aweeincm1

OpenAI has detailed its new safety evaluation system to detect signs of mental health distress, suicidal thoughts, and emotional dependence on ChatGPT. The company says the mechanism, built with input from clinicians and psychologists, helps its AI de-escalate sensitive chats and guide users to professional help. Critics, however, argue this approach risks moral policing and infringes on user autonomy.
