OpenAI launches new ‘Trusted Contact’ safeguard for cases of possible self-harm
Compiled by KHAO Editorial
On Thursday, OpenAI announced a new feature called Trusted Contact, designed to alert a trusted third party when a user expresses thoughts of self-harm in a conversation.
Key facts
“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company wrote in its announcement post.
Crucially, Trusted Contact is optional, and even if the protection is activated on a particular account, a user can still maintain other ChatGPT accounts where it is not.
In cases where a conversation may be turning toward self-harm, ChatGPT will now encourage the user to reach out to their designated contact.
Summary
OpenAI has faced a wave of lawsuits from the families of people who died by suicide after talking with its chatbot. The company currently uses a combination of automation and human review to handle potentially harmful incidents. If OpenAI’s internal team decides that a situation represents a serious safety risk, ChatGPT sends the trusted contact an alert by email, text message, or in-app notification. The Trusted Contact feature follows the safeguards the company introduced last September, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert a parent when OpenAI’s systems believe their child is facing a “serious safety risk.”