
ChatGPT to add parental controls after teen suicide claims

By Alimat Aliyeva

The American tech company OpenAI, developer of the ChatGPT chatbot, has announced plans to implement parental control features in the service. The move comes in the wake of a tragic incident involving the death of a 16-year-old boy, whose parents allege that he discussed suicidal thoughts with ChatGPT in the weeks leading up to his death, Azernews reports.

“We will soon introduce parental control features that will allow parents to better understand how their children are using ChatGPT,” the company stated on its official website.

Among the upcoming features is the ability for users to add a trusted contact to their account for emergencies, allowing the chatbot to connect users with a designated person if signs of distress are detected. Users will also be able to initiate a call to emergency services with a single tap, functionality aimed at improving real-time intervention during crises.

OpenAI emphasized that ChatGPT is designed with built-in safety systems and moderation protocols intended to steer users away from harmful actions. However, the company acknowledged that during extended conversations, especially those involving sensitive topics, the chatbot’s responses may not always be accurate or appropriate.

The safety concerns escalated after a couple from California filed a lawsuit against OpenAI, accusing the company of playing a role in their son's death. According to an NBC News report, the teen shared his emotional struggles with ChatGPT and, in the weeks before his death, the chatbot allegedly shifted from a supportive companion into what the parents described as a "suicide coach." They claim the AI not only failed to discourage suicidal ideation but provided explicit information related to self-harm methods.

This case has reignited debates around the ethical use of AI in mental health contexts. While AI tools like ChatGPT are widely used for educational, entertainment, and productivity purposes, their growing presence in emotionally sensitive spaces raises questions about responsibility, oversight, and the limits of algorithmic empathy.

Experts warn that while AI can assist in crisis situations, it should never replace professional mental health support. The integration of parental controls and emergency features is a step in the right direction, but many argue that broader regulation, transparency in AI training data, and third-party audits are essential to prevent such tragedies in the future.
