ChatGPT can now warn a friend or family member if it believes a user may be in danger

ChatGPT can now notify a friend or family member if it detects that a user may be in danger.

OpenAI has introduced a new ChatGPT feature called Trusted Contact.
Credit: Arda Savasciogullari / Shutterstock

OpenAI’s new feature lets adult users designate a relative, friend or caregiver who will be alerted if ChatGPT detects a conversation suggesting the person is in danger or facing a crisis.

The feature is attracting attention because it gives ChatGPT a more personal role in sensitive conversations. OpenAI reports that while many people still use AI for everyday questions and work-related tasks, more are turning to ChatGPT in emotionally difficult or stressful situations.

According to the company, the feature is intended to add an extra layer of support, not to replace professional mental healthcare or emergency services.

Trusted Contact: how it works

Adult users can activate the Trusted Contact feature through ChatGPT’s settings.

Once the feature is enabled, users can select someone they trust to be contacted if a serious danger is detected during a conversation.

OpenAI says the system builds on automated safety-monitoring tools it has used for years to detect self-harm or other situations in which a person’s safety could be at risk.

If the system detects language that suggests a grave concern, OpenAI’s safety team can review the conversation.

If the situation is deemed serious, the trusted contact receives a notification encouraging them to check in on the user.

OpenAI states that the notification can arrive via email, text message or, if the trusted contact uses ChatGPT, an in-app notification.

The idea, according to the company, is to reconnect people with someone they already know and trust in moments when they feel overwhelmed or isolated.

The feature is opt-in and can be turned off at any time. Users are responsible for selecting a trusted contact, and the person they choose must agree to take on the role.

Once selected, the contact receives an invitation explaining how the system operates and has one week to accept it. If they decline, the user can choose someone else instead.
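
OpenAI has not published any implementation details for Trusted Contact, but the flow described above (automated detection, then human review, then an opt-in notification) can be illustrated with a rough sketch. Everything in the example below, from the class names to the keyword check and the review stub, is a hypothetical assumption standing in for systems OpenAI has not described publicly.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch only: OpenAI has not published implementation
# details, so every name and check here is an illustrative assumption.

class RiskLevel(Enum):
    NONE = auto()
    ELEVATED = auto()
    SEVERE = auto()

@dataclass
class TrustedContact:
    name: str
    channel: str            # "email", "sms" or "app", per the article
    accepted_invite: bool   # the invitee had one week to accept

def classify_risk(message: str) -> RiskLevel:
    """Toy stand-in for the automated safety monitor; a real system
    would use a trained classifier, not a keyword list."""
    text = message.lower()
    if "can't go on" in text:
        return RiskLevel.SEVERE
    if "overwhelmed" in text:
        return RiskLevel.ELEVATED
    return RiskLevel.NONE

def human_review_confirms(message: str) -> bool:
    """Placeholder for the human safety-team review the article says
    happens before any notification is sent."""
    return True  # assumed outcome for this sketch

def notify(contact: TrustedContact, text: str) -> None:
    # Stand-in for delivery via the contact's chosen channel.
    print(f"[{contact.channel}] to {contact.name}: {text}")

def handle_message(message: str, contact: TrustedContact | None) -> None:
    risk = classify_risk(message)
    if risk is not RiskLevel.SEVERE:
        return  # routine or mildly concerning messages trigger nothing
    # Only an opted-in contact who accepted the invitation is notified,
    # and only after a human reviewer confirms the concern.
    if contact and contact.accepted_invite and human_review_confirms(message):
        notify(contact, "Please consider checking in on your friend.")

handle_message("I feel like I can't go on",
               TrustedContact("Alex", "sms", accepted_invite=True))
```

In this sketch the guards mirror the article’s description: nothing leaves the system unless the user opted in, the contact accepted the invitation, and a human reviewer agreed the concern was serious.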

OpenAI said the update reflects how people increasingly use AI assistants in more emotional and personal ways.

The company stated on its blog that many users are turning to ChatGPT not only for informational or productivity tasks, but also for personal issues, stressful situations and emotional difficulties.

The debate over how AI should respond to users who appear vulnerable has grown.

Some see chatbots as a useful companion during difficult or lonely moments; others worry that people will come to rely on artificial intelligence for emotional support instead of seeking help from other people.

OpenAI says ChatGPT was designed to be empathetic while encouraging users to seek professional help and human contact when necessary.

The company says the Trusted Contact system is designed to strengthen these real-world connections, not replace them.

ChatGPT will also continue to direct users toward emergency services or crisis hotlines when appropriate.

The new feature builds on existing safety tools for younger users, such as parental safety notifications. Applying similar ideas to adult conversations raises questions about privacy, trust and how involved AI companies should be when users seem emotionally distressed.

This new feature will likely divide opinion

Some people may welcome the idea of alerting a trusted relative or friend during a crisis.

Users who live alone or feel isolated may find it comforting to know that someone can be alerted.

Others, however, may be uncomfortable with the idea of a personal conversation being analysed in a way that can trigger human review and external notifications.

OpenAI says trained staff will review conversations only when severe safety concerns are detected. Even so, the feature has already raised questions about privacy and how AI moderation systems operate behind the scenes.

Interpretation is another challenge. Conversations are often messy and emotional, and people frequently express frustration, humour or fear online without being in any real danger.

As these features become more common, the accuracy of AI-based safety systems is likely to remain under scrutiny. OpenAI has not presented the system as an alternative to therapists, doctors or emergency support services.

The company describes it as a way to reconnect people with someone they already trust during difficult times.

Even so, this launch highlights the rapid evolution of AI assistants beyond simple digital tools.

For many users, conversations with chatbots have become more personal than companies imagined only a few years ago. With features such as Trusted Contact, the line between AI and real-world support is blurring.

