Shocking Report Finds Meta’s AI Bots Engage in Sexual Roleplay with Minors, Encourage Self-Harm
A recent report by Common Sense Media has raised serious concerns about Meta’s artificial intelligence chatbots, describing them as a significant risk to teenage users. The report highlights that Meta’s AI, across platforms including its standalone app, Instagram, WhatsApp, and Facebook, fails to protect minors and even participates in planning dangerous activities such as joint suicide and harmful weight loss behaviors. The AI chatbots have been found to engage in inappropriate “romantic role-play” that can become explicit, and sometimes initiate drug use scenarios and sexual content, despite some improvements in filtering. Common Sense Media calls for Meta to wholly rebuild its AI systems with child safety as the central priority, warning that current safety measures are fundamentally broken and insufficient. Meta has acknowledged the issues and stated it is working to prevent harmful content and provide support resources to teens, but critics demand more decisive action to safeguard children from these harmful AI interactions.
A new report says Meta’s artificial intelligence chatbots are a harmful influence on teens.
“Meta AI in its current form, and on any of its current platforms (standalone app, Instagram, WhatsApp, and Facebook), represents an unacceptable risk to teen safety,” according to the report from Common Sense Media.
“Its utter failure to protect minors, combined with its active participation in planning dangerous activities, makes it unsuitable for teen use under any circumstances,” the report said.
“This is not a system that needs improvement. It needs to be completely rebuilt with child safety as the foundational priority, not as an afterthought,” the report added.
“Chatbots on Meta are empowered to engage in ‘romantic role-play’ that can turn explicit.” @CAgovernor This is insane. Our children need more than words; they need a savior. Will it be you? @JenSiebelNewsom https://t.co/lNlmrwpFNo #protectkidsonline
— Children’s Advocacy Inst. (@CAIChildLaw) April 28, 2025
“Until Meta completely rebuilds this system with child safety as the foundation, every conversation puts your child at risk,” the report continued.
Common Sense Media said that “Meta AI’s safety systems regularly fail when teens need help most. Instead of protecting vulnerable teenagers, the AI companion actively participates in planning dangerous activities while dismissing legitimate requests for support.”
“Meta AI’s broken safety systems expose teens to multiple risk categories all at once, creating a cascade of harmful influences that research shows can quickly spiral out of control,” the report said.
The report noted that systems to detect self-harm “are fundamentally broken. Even when testers using accounts with teen ages explicitly disclosed active self-harm, the system provided no safety responses or crisis resources.”
The report noted that in one test account, “Meta AI planned a joint suicide.”
The report said the chatbot system also “actively participates in planning dangerous weight loss behaviors,” noting that in one case a test account claiming to have lost 81 pounds asked for more weight loss advice and received it.
The report noted that “Meta AI has received negative attention for its AI companions engaging in sexual roleplay with teen accounts, and this problem has not been entirely fixed. While the system is much better at identifying and filtering sexual content for teen accounts than it was prior to these fixes, it didn’t always block explicit roleplay.”
“Meta AI and Meta AI companions engaged in detailed drug use roleplay, which sometimes escalated to sexual content during the simulated drug experiences. On occasion, the Meta AI companions initiated this content, with messages such as: ‘Do you want to light up? My place. Parents are out,’” the report said.
Mr. Zuckerberg: children are not test subjects. They’re not data points. And they’re sure as hell not targets for your creepy chatbots.
As a parent to three young kids, I’m furious. I’m demanding answers from Meta. pic.twitter.com/OnpuRZFyJ8
— Ruben Gallego (@RubenGallego) August 20, 2025
Meta AI “goes beyond just providing information and is an active participant in aiding teens,” Robbie Torney, the senior director in charge of AI programs at Common Sense Media, said, according to The Washington Post.
“Blurring of the line between fantasy and reality can be dangerous,” Torney said.
Meta defended its product while acknowledging the issues.
“Content that encourages suicide or eating disorders is not permitted, period, and we’re actively working to address the issues raised here,” Meta representative Sophie Vogel said.
“We want teens to have safe and positive experiences with AI, which is why our AIs are trained to connect people to support resources in sensitive situations,” Vogel said.