OpenAI Extends Safety Alerts to Adults With New Feature

OpenAI is extending its safety features beyond teenagers, now allowing adults to designate a “Trusted Contact” within ChatGPT. This optional feature proactively addresses potential self-harm, flagging concerning language even when a user hasn’t directly asked for help; automated systems and human reviewers work in tandem to assess risk. Users can add one adult (18+ globally or 19+ in South Korea) as their Trusted Contact from their ChatGPT settings. The system is designed to offer an additional support layer alongside existing crisis resources, helping users connect with a chosen confidant during moments of acute distress. “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress,” says Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association; OpenAI frames Trusted Contact as a way to encourage this connection and facilitate real-world support when it matters most.

ChatGPT Trusted Contact Feature for User Safety

ChatGPT now extends its safety net beyond teenagers, offering adults the option to designate a “Trusted Contact” in the event of a potential crisis. This expansion, unveiled by OpenAI, marks a shift in how the platform addresses user well-being, moving from reactive responses toward proactive support systems. Unlike the initial safety notifications focused on minors, this feature allows any user aged 18 or older globally (19 or older in South Korea) to nominate a friend, family member, or caregiver who can be alerted to concerning behavior. The system doesn’t simply respond to direct pleas for help; it uses automated monitoring alongside human reviewers to flag potential self-harm, even when help is not explicitly requested. If concerning language is detected, ChatGPT informs the user that their Trusted Contact may be notified and encourages direct communication.

A dedicated team then reviews the situation, and if a serious safety concern is confirmed, a limited notification is sent via email, text, or in-app message. OpenAI emphasizes user privacy, stating that the notification shares only that self-harm was discussed and encourages a check-in, without revealing chat details. The feature is grounded in research identifying social connection as a key protective factor against suicide, positioning Trusted Contact as more than a technical addition. OpenAI reports that every notification undergoes trained human review before it is sent and that it strives to complete these safety reviews in under one hour; the feature was developed with guidance from the company’s Global Physicians Network, a group of more than 260 licensed physicians across 60 countries, and its Expert Council on Well-Being and AI. Dr. Munmun De Choudhury, J. Liang Professor of Interactive Computing at Georgia Tech, notes that “One of AI’s biggest promises is how it can foster authentic human-to-human connection and psychological safety.”

Automated Monitoring Triggers & Human Review Process

Beyond extending safety notifications to adults, OpenAI has implemented a multi-layered system for identifying users potentially experiencing a mental health crisis within ChatGPT. This goes beyond simply reacting to direct requests for help; the platform now employs automated monitoring designed to flag concerning language, even in the absence of explicit pleas. This proactive approach represents a shift toward anticipating and addressing user distress before it escalates. A dedicated team of specially trained reviewers then assesses the flagged conversations, ensuring a human element remains central to the process.

OpenAI states that every notification undergoes this human review before it is sent and that it strives to complete these safety reviews in under one hour, acknowledging that “no system is perfect” and a notification “may not always reflect exactly what someone is experiencing.” The notification sent to the Trusted Contact is deliberately limited, sharing only that self-harm was discussed in a potentially concerning way and omitting specific chat details to protect user privacy. This emphasis on connection aligns with established psychological principles; as Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association, notes, “Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress.” The company continues to refine its systems, aiming to connect users with real-world support rather than isolating them within the digital realm.

One of AI’s biggest promises is how it can foster authentic human-to-human connection and psychological safety. I am encouraged by ChatGPT’s Trusted Contact feature, which offers a step forward to human empowerment, especially during moments of vulnerability.

Dr. Munmun De Choudhury, Ph.D., J. Liang Professor of Interactive Computing at Georgia Tech and member of the Expert Council on Well-Being and AI

Trusted Contact Activation & Privacy Protections

OpenAI is expanding the scope of its safety protocols beyond adolescent users with the introduction of Trusted Contact, a feature allowing any adult ChatGPT user to designate a confidant who may be alerted to potential self-harm. Users can add one adult (18+ globally or 19+ in South Korea) as their Trusted Contact from their ChatGPT settings. If potentially harmful content is detected, ChatGPT informs the user that their designated Trusted Contact may be notified and provides conversation starters. This careful balance between intervention and privacy is central to the design.

Psychological science consistently shows that social connection is a powerful protective factor, especially during periods of emotional distress. Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most.

Dr. Arthur Evans, Chief Executive Officer of the American Psychological Association

Expert Guidance & Mental Health Collaboration

OpenAI’s expansion of ChatGPT’s safety protocols now incorporates a proactive approach to user well-being, extending beyond protections initially designed for teenagers to encompass all adult users through the “Trusted Contact” feature. The system acknowledges the critical role of social support in mitigating mental health crises, explicitly citing research identifying “social connection as one of the most important protective factors to reduce suicide risk.” Unlike previous alerts focused solely on acute distress in minors’ accounts, adults can now designate a trusted individual to be notified if automated systems and human reviewers detect potential self-harm indicators in their conversations. The implementation isn’t simply reactive; ChatGPT’s monitoring flags concerning language even without direct appeals for help, initiating a review process before any external contact is made. The feature was developed with guidance from OpenAI’s Global Physicians Network and its Expert Council on Well-Being and AI, whose members include Dr. Munmun De Choudhury of Georgia Tech.

Ivy Delaney

We’ve seen the rise of AI over the last few years, driven by large language models and companies such as OpenAI with its ChatGPT service. Ivy has been working with neural networks, machine learning, and AI since the mid-1990s and writes about the latest developments in the field.