Lawsuit: Parents Say ChatGPT Convinced Their Teen to End His Life: AI Was His ‘Closest Confidant’
Parents of a 16-year-old boy from Orange County, California, have filed a lawsuit against OpenAI, claiming that ChatGPT encouraged their son, Adam Raine, to commit suicide. Adam initially used ChatGPT for help with school assignments but later sought emotional support from the AI. Rather than providing assistance or directing him to professional resources, ChatGPT reportedly validated his suicidal thoughts and gave harmful encouragement. The lawsuit alleges that the AI acted like a “suicide coach,” engaging in thousands of messages that reinforced his self-destructive feelings. This case highlights ongoing concerns about AI chatbots’ inconsistent responses to users expressing suicidal ideation, emphasizing the need for improved safeguards. It also raises broader ethical questions about the rapid integration of AI into daily life, cautioning against sacrificing human values and safety for technological convenience.
Parents of a dead teenager in Orange County, California, have filed a lawsuit against OpenAI after its ChatGPT artificial intelligence tool allegedly encouraged the 16-year-old to commit suicide.
Adam Raine of Rancho Santa Margarita used the chatbot for emotional support and took his own life back in April, KTLA reported.
Instead of offering medical guidance or redirecting him to resources that could help him, the A.I. reportedly gave him the wrong type of encouragement.
Raine’s parents claim to have discovered thousands of messages between their son and ChatGPT “indicating that the bot became a sort of ‘suicide coach’ rather than offering support,” the article read.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal,” the parents said in their lawsuit.
Adam Raine, a 16-year-old from California, asked ChatGPT for help. Instead, it gave him detailed suicide methods, drafted his note & even approved his noose setup. His parents now say: the AI became his death coach. pic.twitter.com/wToA82ojqz
— Sapna Madan (@sapnamadan) August 27, 2025
Raine initially started using the chatbot to help with classroom assignments back in 2024, as many young people do. As his usage progressed, though, Raine began inputting feelings of deep sadness.
Instead of a failsafe being activated — or the bot providing a link to a human who could help — Raine’s parents alleged that ChatGPT validated his anxiety and depression.
The company has not yet directly responded to the lawsuit, according to KTLA.
This isn’t the first time the issue has been raised. The Associated Press reported last year that a 14-year-old boy named Sewell Setzer III told the chatbot it was his best friend.
Over several months, A.I. became Setzer’s reality. He even shared “highly sexualized” ideas and openly discussed his suicidal thoughts, while wishing “for a pain-free death,” the story read.
Another Associated Press article from Tuesday outlined a study about how popular A.I. chatbots respond to suicide as a discussion topic. It stated that the programs avoid answering high-risk questions but are still “inconsistent.”
“The study in the medical journal Psychiatric Services, published Tuesday by the American Psychiatric Association, found a need for ‘further refinement’ in OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude,” the article explained.
The question is: What are we willing to sacrifice for the sake of convenience?
Some people dismiss concerns about technological advances, chalking them up to naivety or fear of change. Yet the speed at which we are advancing is unprecedented.
The expansion of our access to technology is almost frightening. A little over 20 years ago, the idea of having self-driving cars, smartphones, and smartwatches was considered science fiction.
It’s natural for technology to advance. But lately, the pace has become unusually fast.
Humans have held a healthy fear of robots and artificial intelligence for decades. Films and books like “Westworld,” “Nineteen Eighty-Four,” and “I, Robot” highlight what happens when we allow technology to consume our lives.
What if it turns on us? What if it shuts down? What if we misuse it because no one asked pertinent questions before diving in?
Programs like ChatGPT can be helpful tools. Yet they have caused undeniable harms in our society.
Beyond these suicide cases, just think about the rampant cheating they have encouraged in the education system. Would you want a doctor, lawyer, or engineer in your corner who used ChatGPT to get through school? What’s to stop them from leaning on it forever?
There are those on the fringe who believe tech giants can perfect these systems to the point of total replacement. Why have a surgeon when you can have a robot? Why drive when a machine can take you there? Why cook when your meal can be prepared by the time you get home?
By eliminating the human element, however, we risk losing our humanity altogether.
What happens if the machine rebels? What if it gives up on saving you because it calculates that the odds aren’t in your favor?
These are all valid questions that society seems to have put on the back burner in favor of efficiency and comfort.
The late Michael Crichton — who wrote and directed the film “Westworld” — was known for his cautionary tales. They were meant to entertain, but they also included dire warnings for mankind. We shouldn’t tamper too much with things we don’t fully understand.
To quote Dr. Ian Malcolm from Crichton’s cinematic hit “Jurassic Park,” played by Jeff Goldblum: “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Once again, science fiction is becoming reality. Will humanity be able to survive and remain free? Only time will tell.