ChatGPT developing age-verification system to identify under-18 users after teen death

OpenAI will restrict how ChatGPT responds to a user it suspects is under 18, unless that user is identified as an adult by the company’s age-estimation technology or provides ID, after legal action from the family of a 16-year-old who killed himself in April following months of conversations with the chatbot, News.Az reports citing Reuters.

OpenAI was prioritising “safety ahead of privacy and freedom for teens”, chief executive Sam Altman said in a blog post on Tuesday, stating “minors need significant protection”.

The company said that the way ChatGPT responds to a 15-year-old should look different to the way it responds to an adult.

Altman said OpenAI plans to build an age-prediction system to estimate age based on how people use ChatGPT, and if there is doubt, the system will default to the under-18 experience. He said some users “in some cases or countries” may also be asked to provide ID to verify their age.

“We know this is a privacy compromise for adults but believe it is a worthy tradeoff.”

How ChatGPT responds to accounts identified as being under 18 will change, Altman said. Graphic sexual content will be blocked, and the chatbot will be trained not to flirt with under-18 users, even if asked, and not to engage in discussions about suicide or self-harm, even in a creative writing setting.

“And if an under-18 user is having suicidal ideation, we will attempt to contact the user’s parents and if unable, will contact the authorities in the case of imminent harm.

“These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent with our intentions,” Altman said.

OpenAI admitted in August that its systems could fall short, and said it would install stronger guardrails around sensitive content, after the family of 16-year-old Californian Adam Raine sued the company over the teen’s death.

The family’s lawyer said Adam killed himself after “months of encouragement from ChatGPT”, and the family alleges that GPT-4o was “rushed to market … despite clear safety issues”.

According to US court filings, ChatGPT allegedly guided Adam on whether his method of taking his own life would work, and also offered to help write a suicide note to his parents.

OpenAI previously said it was examining the court filing. The Guardian approached OpenAI for comment.

Adam exchanged up to 650 messages a day with ChatGPT, the court filing claims. In a blog post after the lawsuit, OpenAI admitted that its safeguards work more reliably in short exchanges, and that after many messages over a long period of time, ChatGPT may offer an answer “that goes against our safeguards”.

The company announced on Tuesday it was also developing security features to ensure data shared with ChatGPT remains private even from OpenAI employees. Altman also said adult users who wanted “flirtatious talk” with ChatGPT would be able to have it. Adult users would not be able to ask for instructions on how to kill themselves, but could ask for help writing a fictional story that depicts suicide.

“Treat adults like adults,” Altman said of the company’s principle.



News.Az