OpenAI · ChatGPT

OpenAI launches new ‘Trusted Contact’ safeguard for cases of possible self-harm

2 min read

Compiled by KHAO Editorial — aggregated from 1 outlet + 6 references discovered via search. See llms.txt for citation guidance.

◌ Single Source

On Thursday, OpenAI announced a new feature called Trusted Contact, designed to alert a designated third party when signs of possible self-harm appear in a conversation.

Summary

OpenAI has faced a wave of lawsuits from the families of people who died by suicide after conversations with its chatbot. The company currently uses a combination of automation and human review to handle potentially harmful incidents. If its internal team determines that a situation poses a serious safety risk, ChatGPT sends the trusted contact an alert by email, text message, or in-app notification.

The Trusted Contact feature builds on safeguards the company introduced last September, which gave parents some oversight of their teens’ accounts, including safety notifications designed to alert a parent if OpenAI’s systems believe their child is facing a “serious safety risk.”

Read full article at TechCrunch AI →

#OpenAI #ChatGPT