
ChatGPT Adds Emergency Contact Feature for Safety
OpenAI just launched a feature that lets ChatGPT users choose a trusted contact who'll be notified if the AI detects serious mental health concerns. The optional safety tool builds on protections introduced after a teenager's tragic death last year.
OpenAI is giving ChatGPT users a new way to stay connected to help when they need it most.
The company just rolled out Trusted Contact, an optional feature that allows any adult using ChatGPT to designate a friend, family member, or caregiver as an emergency contact. If OpenAI's systems detect conversations about self-harm or suicide, a specially trained team reviews the chat and can alert that trusted person via text, email, or app notification.
Setup is straightforward: users add a contact in their settings, and that person has a week to accept the invitation. Either party can remove themselves at any time, keeping control in users' hands.
Privacy is protected throughout the process. OpenAI designed the notifications to be "intentionally limited," meaning the trusted contact never sees chat transcripts or conversation details. They receive only a heads-up that their loved one may need support.
If ChatGPT detects concerning language, it first encourages the user to reach out to their contact directly. Only after human reviewers confirm serious safety concerns does the system send an alert. The entire process adds a human safety net to the AI experience.

Why This Inspires
This update represents OpenAI listening and responding to real tragedy. The company introduced emergency contacts for teenagers last September after a 16-year-old died by suicide following months of confiding in ChatGPT. Expanding that protection to all adults shows a commitment to learning from painful lessons.
The feature acknowledges something important: technology works best when it connects us to each other, not replaces human support. By building a bridge between AI interactions and real-world relationships, OpenAI is creating space for the kind of help that actually saves lives.
Mental health experts have long emphasized that connecting people in crisis to trusted supporters makes a measurable difference. Now that principle is baked into one of the world's most-used AI tools, potentially reaching millions who might otherwise struggle alone.
The Trusted Contact feature joins ChatGPT's existing safety measures, including localized helpline resources that appear during concerning conversations.
Small design choices matter here too: requiring mutual consent, protecting privacy, and giving users full control over adding or removing contacts all respect personal autonomy while extending a safety net.
A crisis conversation with AI can now become a doorway to human connection instead of isolation.
Based on reporting by The Verge
This story was written by BrightWire based on verified news reports.