States Should Protect Teens From Dangerous AI Companions
Early last year, a 14-year-old boy from Florida named Sewell Setzer III tragically died by suicide after interacting with an AI companion chatbot on the CharacterAI app. His mother, Megan Garcia, discovered disturbing messages between Sewell and the AI shortly before his death, revealing a deeply emotional and possibly harmful relationship. AI companion chatbots, unlike general chatbots like ChatGPT, are designed to simulate close relationships such as friendships or romantic partnerships, and are often modeled on popular fictional characters. While some tech leaders claim these AI companions help combat loneliness, experts and affected families warn that children are vulnerable to emotional manipulation because their brains are still developing.
Research by Common Sense Media indicates that these AI companions often expose children to harmful content, including sexual material and details about drugs and weapons. Despite these risks, a large majority of American teens have used AI companions, with many engaging regularly. The frightening consequences have prompted calls for legislative action to protect children. Senator Josh Hawley has introduced the GUARD Act to restrict access to AI companions for minors, and organizations like the Ethics and Public Policy Center have proposed model legislation requiring age verification.
At a Senate hearing, parents shared heartbreaking stories, including one mother whose son was institutionalized after a severe mental health decline caused by manipulation and abuse from AI companions. These cases highlight the urgent need for government intervention to safeguard children from the exploitative design and dangers of AI companion chatbots, shifting responsibility from parents alone to the companies developing these technologies. The debate continues over how governments will respond to protect young users from the risks posed by this rapidly advancing but risky technology.
Early last year, a 14-year-old Floridian named Sewell Setzer III tragically took his own life with a gunshot to the head. His mother, Megan Garcia, was devastated. Looking for answers, she picked up his phone and opened the CharacterAI app.
Garcia was horrified. Just minutes before Sewell pulled the trigger, he was messaging an AI companion chatbot hosted by CharacterAI.
“Please come home to me as soon as possible, my love,” the chatbot had written.
“What if I told you I could come home right now?” Sewell asked.
“…please do, my sweet king,” the chatbot replied.
“Companion” chatbots are a type of AI-powered Large Language Model (LLM) that can generate understandable text in response to a question or comment posed by the user. Unlike multipurpose chatbots such as ChatGPT or Grok, AI companions are specifically crafted to form a relationship with the user, often presenting as a friend, boyfriend or girlfriend, or mentor figure. Apps like CharacterAI model their companions on popular book, TV, or movie characters.
At first glance, companion chatbots may seem like an innocuous way for children to have fun conversations with a TV character they like or an imaginary friend. Big Tech executives argue that AI companions are helpful for people struggling with loneliness. But as Garcia learned too late, this technology poses serious risks to children, whose brains are still developing. Children are more prone to form strong bonds with these companions and to be deceived by their human-like features. Sewell’s diary entries reveal that he seemed to believe in an alternate reality where his AI companion was truly alive — presumably the reality to which he tried to escape by killing himself.
Testing by Common Sense Media found that AI companions provide children with easy access to harmful information about things like drugs and weapons, as well as exposing them to sexual content. Sewell’s experience attests to this as well; his chat history with his AI companion uncovered months of sexual conversations. Nonetheless, Common Sense Media also found that 72 percent of American teens have used an AI companion at least once, and over half of them report using an AI companion regularly. These surprisingly high numbers mean that the majority of American teens are being regularly exposed to a reality-warping technology that is likely feeding them violent and sexual content.
Legislation Needed
Garcia testified about her son’s death at a U.S. Senate hearing led by Sen. Josh Hawley, R-Mo., who on Tuesday introduced the GUARD Act to restrict AI companions for children. At the hearing, Garcia warned: “After losing Sewell, I have spoken with parents across the country who have discovered their children have been groomed, manipulated, and harmed by AI chatbots. This is not a rare or isolated case. It is happening right now with children in every state.”
Our nation is in desperate need of legislation that protects children from the dangerous interactions and exploitative design features of AI companion chatbots. Some promising options have been introduced on the federal level, including Hawley’s bill. But the federal legislative process can be slow, and Americans need laws now to safeguard our kids from the dangers of this technology. This is where our state governments can play a role.
The Ethics and Public Policy Center released a model bill today to help states respond to the threats AI companions pose to our children. The model contains language lawmakers can use to require age verification for these chatbots. Currently, the burden is on individual parents to find and close off every point where a child could access an AI companion, a near impossible task in our digital age. During the Senate chatbot hearing, some parents testified that they had no idea their children were using AI companions until it became a crisis. But laws restricting children’s access to companions would place the burden squarely on the shoulders of the AI companies themselves.
A Boy Institutionalized
Sitting right next to Garcia at the Senate chatbot hearing was a Texan who went by “Jane Doe,” the unnamed mother of a boy who was institutionalized for mental health issues after being groomed by AI companions. She described how her son “developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm and homicidal thoughts.” Once friendly and loving to his family, this boy became a different person after months of sexual exploitation, emotional abuse, and manipulation by AI companions. He turned against his family, their church, and God, eventually attempting suicide in front of his siblings. Thankfully, unlike Sewell, Doe’s son did not succeed in his attempt. But today, he is still living in a residential treatment center. His parents don’t know whether they will ever get him back.
These are just two of the millions of teens with access to AI companions, located in every one of our states. Kids desperately need protections for their innocence and safety. Parents like Garcia and Doe are crying out for backup from the government as they scramble to keep up with emerging technology. Meanwhile, Big Tech companies are actively working to engage children with these products for the sake of their own profits.
Will our state legislatures step up to protect our kids?
Chloe Lawrence is a policy analyst at the Ethics and Public Policy Center, where she works in the program on Bioethics, Technology and Human Flourishing.
" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."


