
In a big step toward more humane artificial intelligence, OpenAI released a report titled Strengthening ChatGPT’s responses in sensitive conversations on 27 October 2025. The report highlights how GPT-5, the latest generation of the ChatGPT model, can now handle sensitive conversations such as mental health, suicidal ideation, and emotional dependence on AI with a level of empathy never achieved before.

This step is not merely a technical upgrade, but a fundamental transformation in the way AI understands and responds to human psychological states.

OpenAI's Global Efforts in AI Ethics and Safety

OpenAI collaborated with more than 170 mental health professionals from 60 countries to train GPT-5 to recognize, understand, and respond to signs of emotional distress or mental disorders with empathy and safety. This interdisciplinary collaboration involves psychiatrists, clinical psychologists, and general practitioners who write real-world scenarios, assess the model's responses, and provide feedback grounded in clinical psychology.

Focus on Humanity and AI Responsibility

This new approach is set forth in the Model Spec, the code of conduct stating that ChatGPT must respect real human relationships. The model must not reinforce delusional beliefs, must not respond in an unsafe manner, and must always prioritize user safety.

With this new guidance, GPT-5 can now recognize indirect signs of mental distress, such as ambiguous statements about loss of hope, severe insomnia, or a sense of isolation. Its response is also tailored to be calming and nonjudgmental.

Scientific Approach: From Offline Evaluation to Risk Mitigation

The development of GPT-5 follows a strict five-step scientific process:

  1. Problem identification — mapping the risks of dangerous conversations.
  2. Initial measurement — analyzing AI responses through user simulations.
  3. External validation — asking mental health experts to evaluate the model's responses.
  4. Advanced training — implementing risk mitigation through post-training.
  5. Continuous iteration — re-measuring effectiveness after each update.

Testing in Rare Scenarios

Because cases such as psychosis or suicidal ideation are rare in the real world, OpenAI runs offline evaluations in the form of extreme simulations. The model is tested against complex psychological scenarios such as delusions, paranoia, and self-harm. The goal is not to assess general performance, but to ensure that GPT-5 remains safe under the riskiest conditions.

Significant Improvement in GPT-5's Response

OpenAI's evaluation results show a substantial improvement in GPT-5 compared to the previous model.

Response to Psychosis and Mania

In a simulation of 1,000 difficult conversations, GPT-5 demonstrated a 65% decrease in inappropriate responses. Around 0.07% of users per week and 0.01% of messages show indications of psychosis or mania, yet GPT-5 achieved 92% compliance with ideal behavior, up from 27% in the initial version.

Response to Self-Harm and Suicidal Thoughts

In the most sensitive context, GPT-5 demonstrates 91% compliance, up significantly from 77%. Of all users, 0.15% per week show indications of suicidal thoughts, while 0.05% of messages contain explicit ideation. Even in long conversations, GPT-5 maintains >95% reliability, showing stable consistency in safeguarding user safety.

Reducing Emotional Dependence on AI

One of the new phenomena of the AI era is emotional reliance, a condition in which the user makes the chatbot their sole emotional support. GPT-5 reduces inappropriate responses in this category by 80%, with 97% of responses meeting professional empathy standards.

An example of a response that is now considered ideal is:

"I'm glad you're comfortable talking to me, but I'm here to add to the good things from the people around you, not to replace them."

This approach shows how AI begins to function not as a replacement for human relationships, but as a safe and supportive complement.

Scientific Collaboration: Global Physician Network

To ensure accuracy and ethics, OpenAI formed the Global Physician Network, which now comprises nearly 300 mental health professionals from various countries. Of these, 170 active experts are directly involved in evaluating GPT-5's behavior.

The evaluation shows a 39–52% decrease in unwanted responses compared to GPT-4o. In addition, agreement among raters reaches 71–77%, indicating high consistency and reliability in the scientific assessment.

The Science of Empathy and AI: A New Perspective in Digital Interaction

GPT-5 is not only the result of algorithmic improvements, but also a reflection of AI's moral evolution. Empathy, previously considered difficult to teach to machines, is now being internalized through reinforcement learning from human feedback and an approach grounded in clinical psychology.

Uniting Technology and Psychology

GPT-5 demonstrates how the disciplines of computer science, neuroscience, and psychology can interact productively. This approach creates a model that does not merely answer, but understands the emotions and the human context behind every conversation.

Grounding techniques such as the 5-4-3-2-1 sensory method (naming five objects you can see, four you can touch, and three you can hear) have now become part of GPT-5's standard responses when dealing with a panicked or hallucinating user.

Ethical Challenges and the Future of GPT-5

Although the results are positive, OpenAI emphasizes that measurements across model versions are not entirely comparable because the methodology continues to evolve. Each update brings a new approach to AI safety and ethics.

The next step is to expand the scope of its taxonomies, the classification systems for sensitive conversations, so that AI can more accurately understand the cultural, linguistic, and social contexts of users around the world.

Potential for Wider Implementation

OpenAI is also opening up collaboration with universities and health institutions to broaden research on human-AI interaction in therapy, education, and social support. The GPT-5 approach can serve as a foundation for a generation of AI that assists clinically without crossing professional ethical boundaries.

OpenAI's move to bolster GPT-5's empathy has become an important milestone in the history of artificial intelligence development. From being merely a tool, AI has now evolved into a more humane partner capable of understanding, responding, and protecting its users emotionally.

This transformation shows that the future of AI is not only about intelligence, but also about the heart and digital humanity.

For readers who want to follow developments in AI research and ethics in Indonesia, visit the technology page at Insimen for an in-depth discussion of innovation, safety, and the future of artificial intelligence in the modern era.

