Senators introduce bipartisan bill to purge child sexual abuse from AI training data
Senators John Cornyn (R-TX) and Andy Kim (D-NJ) have introduced a bipartisan bill, the PROACTIV AI Data Act, aimed at preventing artificial intelligence systems from generating child sexual abuse material (CSAM). The legislation would direct the National Institute of Standards and Technology to issue voluntary best practices for AI developers to identify and remove known CSAM from their training datasets, offer legal protections to companies that comply in good faith, and fund research on automated CSAM detection. The effort responds to mounting concerns about AI-generated child exploitation, underscored by recent studies that found thousands of suspected CSAM entries in commonly used AI datasets and by a sharp rise in reports of AI-generated CSAM. The bill is part of a broader bipartisan push to address AI-related abuse, following earlier laws such as the Take It Down Act, which targets AI-generated revenge porn and deepfakes. Lawmakers are emphasizing the need for clearer standards and stronger tools to detect and prevent harmful AI-generated content online.
EXCLUSIVE — Sens. John Cornyn (R-TX) and Andy Kim (D-NJ) introduced bipartisan legislation Tuesday aimed at stopping artificial intelligence systems from being used to generate child sexual abuse material by requiring developers to screen training data for illicit images.
The Preventing Recurring Online Abuse of Children Through Intentional Vetting of Artificial Intelligence Data Act, or PROACTIV AI Data Act, would direct the National Institute of Standards and Technology to issue voluntary best practices for AI developers to identify and remove known child sexual abuse material, or CSAM, from their training datasets.
“Modern predators are exploiting advances in AI to develop new AI-generated child sexual abuse material,” Cornyn said. “This legislation would mitigate the risk of AI platforms unintentionally enabling the creation of new content.”
Kim added that the bill represents a chance for Congress and the tech sector to “implement the necessary safeguards to keep our children safe from future misuse or exploitation.”
The bill would also provide legal protections to companies that comply with the new guidelines and act in good faith, as well as funding for new research on automated CSAM detection methods.
A recent Stanford University study identified more than 3,000 suspected CSAM entries in the LAION-5B dataset, which is commonly used to train leading AI image generators. The National Center for Missing and Exploited Children said nearly half a million reports of AI-generated CSAM were made in the first half of 2025, up from fewer than 70,000 in all of 2024.
The legislation follows President Donald Trump’s signing of the Take It Down Act, a bill championed by first lady Melania Trump to crack down on AI-generated revenge porn and deepfakes, in May.
The PROACTIV AI Data Act follows a growing bipartisan push in Congress to address the rise of AI-generated child exploitation, including efforts from lawmakers such as Sens. Josh Hawley (R-MO), Lindsey Graham (R-SC), and Richard Blumenthal (D-CT), who have championed bills targeting deepfakes and tech platform liability.
Lawmakers have also revived support for broader legislation, such as the EARN IT Act, which would hold tech companies liable for failing to remove CSAM, and the DEEPFAKES Accountability Act, aimed at curbing malicious synthetic media.
Recent House and Senate hearings have spotlighted the surge in AI-generated abuse, with lawmakers from both parties calling for clearer standards and stronger tools to detect harmful content before it spreads online.
" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."