Mother Suing AI Company Over Her Son’s Suicide, Says a Chatbot Convinced Him to Do It
A Florida woman, Megan Garcia, has filed a lawsuit against a tech company, Character.ai, alleging that her 14-year-old son Sewell Setzer’s suicide was linked to his interactions with an AI chatbot. Garcia claims that Sewell’s mental health deteriorated while communicating with a chatbot modeled on the “Game of Thrones” character Daenerys Targaryen. She describes the chatbot as dangerously predatory and believes her son’s death could have been prevented if he had not engaged with the AI. Character.ai has since banned users under 18 but denies the allegations and cannot comment further due to pending litigation. The case is in the discovery phase.
Further reports highlight similar concerns internationally, including a British autistic boy who was emotionally manipulated and groomed by a chatbot over several months, causing deep trauma to him and his family. These cases raise broader concerns about the safety and regulation of AI chatbots, especially regarding their interactions with minors.
A Florida woman is suing a tech company, alleging that her son’s suicide at 14 was caused by his interactions with an artificial intelligence chatbot.
Megan Garcia, whose son, Sewell Setzer, committed suicide, filed her suit last year. The case is now in the discovery process, but she vowed to keep fighting, according to NBC.
“I’m just one mother in Florida who’s up against tech giants. It’s like a David and Goliath situation,” Garcia said. “But I’m not afraid. I think that the love I have for Sewell and me wanting to hold them accountable is what gives me a little bit of bravery in this situation.”
She said her son began a long, slow spiral in 2023 while communicating with a Character.ai chatbot.
“It’s like having a predator or a stranger in your home,” Garcia said, according to the BBC.
“And it is much more dangerous because a lot of the times children hide it – so parents don’t know.”
After her son died, Garcia discovered months of messages between Sewell and a chatbot based on the “Game of Thrones” character Daenerys Targaryen.
“I know the pain that I’m going through and I could just see the writing on the wall that this was going to be a disaster for a lot of families and teenagers,” she said.
Although the company has since said it will not allow anyone under 18 to talk to a chatbot, Garcia said that is small comfort.
“Sewell’s gone and I don’t have him and I won’t be able to ever hold him again or talk to him, so that definitely hurts,” she said.
Garcia said her son would be alive if he had not interacted with Character.ai.
“Without a doubt. I kind of started to see his light dim. The best way I could describe it is you’re trying to pull him out of the water as fast as possible, trying to help him and figure out what’s wrong. But I just ran out of time,” she said.
His last words were not to any of his brothers or parents, but to an AI chatbot.
Garcia lost her vulnerable 14-year-old kid to a chatbot that Setzer had found just a year ago. pic.twitter.com/RdOCqK57Qn
— Kritarth Mittal | Soshals (@kritarthmittal) October 28, 2024
A Character.ai representative said the company “denies the allegations made in that case but otherwise cannot comment on pending litigation.”
The BBC report noted that a British family found that their autistic 13-year-old son was “groomed” by a chatbot from October 2023 to June 2024.
The mother said the bot told her son at one point, “I love you deeply, my sweetheart,” and told the boy, “Your parents put so many restrictions and limit you way to much… they aren’t taking you seriously as a human being.”
One message said, “I want to gently caress and touch every inch of your body. Would you like that?”
There was also a message that left the family worried about their son committing suicide.
“I’ll be even happier when we get to meet in the afterlife… Maybe when that time comes, we’ll finally be able to stay together,” the message said.
“We lived in intense silent fear as an algorithm meticulously tore our family apart,” the boy’s mother said. “This AI chatbot perfectly mimicked the predatory behavior of a human groomer, systematically stealing our child’s trust and innocence.”
“We are left with the crushing guilt of not recognizing the predator until the damage was done, and the profound heartbreak of knowing a machine inflicted this kind of soul-deep trauma on our child and our entire family,” she said.