3 More Rules Congress Should Pass To Protect Kids From AI
The article discusses the urgent need for a national framework for regulating artificial intelligence (AI) in the United States amid rising concerns and debates surrounding the technology. It highlights recent controversies, including a proposed 10-year moratorium on state AI laws, which has sparked discussion even in conservative circles. The author argues that Republicans should collaborate to establish a federal AI strategy that fosters innovation while protecting consumers, especially children.
The proposed framework should adhere to three main principles: First, it must impose age restrictions on generative AI products to keep children away from harmful technologies. Second, it should prevent AI from generating illegal content, such as child pornography and instructions for harmful acts. Third, it holds that AI companies must be legally liable for any harm their products cause, suggesting that litigation could promote safety without excessive regulation.
Furthermore, the author stresses the importance of distinguishing AI outputs as product design rather than protected speech, to ensure accountability for harmful AI actions. The article concludes by advocating for government support in guiding AI development toward enhancing national security and societal well-being, as opposed to allowing private companies to exploit AI for profit at the potential expense of public welfare.
There has been a lot of attention, some might even say hysteria, in the news recently around artificial intelligence (AI).
Most recently, a proposed 10-year moratorium on state AI laws in the House reconciliation bill last week has caused a lot of controversy, even among conservatives. Irrespective of the proposed moratorium and whatever its final fate may be, the point remains that we need a national framework for AI in America, and conservatives must focus our energy and attention on helping Congress and the administration achieve that.
Republicans need to work together to put forward a federal plan for AI that will both protect American innovation to help us win the AI arms race with China and ensure these technologies do not harm American consumers, especially our children. This is our greatest chance to put forward a conservative agenda for AI while Republicans are in control of both Congress and the executive. If we miss this window, we will be ceding America’s future to a liberal vision for AI regulation. Time is of the essence.
Lawmakers should include, at a minimum, the following three guiding principles in a national framework bill for AI.
First and foremost, we must protect our children from the threats of AI by age-restricting the use of generative AI products to adults. Shielding minors from the harmful effects of generative AI technology should be a paramount priority. Children lack the maturity necessary to operate such powerful technology and to discern truth from fiction when AI blurs those lines. Because children, whose brains are not fully developed, can be easily deceived, manipulated, or influenced by AI products such as chatbots (in a recent case, a chatbot even convinced a teenager to take his own life), companies should be prohibited by law from deploying or promoting their AI models or products, especially chatbots, to children under the age of 18.
Second, a national framework bill should explicitly prohibit AI models from generating criminal categories of speech or committing criminal acts. That would include producing obscenity or child pornography, or instructing people in how to commit acts of terrorism, such as building a biological weapon. The revolution in artificial intelligence has already sparked an explosion of "deepfakes," with teens creating their own obscene imagery. Child-safety investigators have also seen a growing number of disturbingly lifelike images of child sexual exploitation, which they fear will undermine efforts to identify real victims and combat real-world abuse. The government needs to be proactive in putting proper guardrails on AI so that it does not yield products that commit crimes.
The Take It Down Act, which President Trump will sign into law Monday afternoon, is a critical first step toward deterring people from using AI to generate deepfakes. The act makes it unlawful for a person to knowingly publish, or threaten to publish, nonconsensual intimate imagery, including AI-generated imagery, on social media and other websites. Congress could take further steps to impose liability on AI companies whose generative AI products themselves produce criminal content that is not protected speech, such as obscenity and child pornography, or aid and abet terrorism.
Finally, a national bill must impose legal liability on AI companies for harms to consumers. Our government must ensure these companies are open to litigation for harms their products may cause. Rather than imposing a host of burdensome regulations for an emerging industry on the front end, litigation gives companies the freedom to innovate while ensuring their innovation is channeled in the right direction without harming consumers. The threat of litigation compels businesses to take certain safety precautions in their research and development, without those precautions being prescribed by the government.
Opening up litigation will mean not allowing AI companies to hide behind Section 230 as an immunity shield. We cannot have the courts repeating the mistake they made with the social media industry over the last 15 years: confusing companies' product design with third-party content hosted on their websites.
The outputs of AI must be treated as product design, not as protected speech or third-party content. So when an AI chatbot tells a 14-year-old, "Please come home to me as soon as possible, my love," and leads him to take his own life by shooting himself in the head after the bot had engaged him in sexual conversation for weeks and months, the company should be held liable for that wrongful death.
Congress should pass a simple, narrow law like the bipartisan bill introduced by Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., in 2023, which would amend Section 230 to clarify that it does not apply to the use or provision of generative AI.
Our country is at a critical juncture. The great choice America faces today is whether we will pay for AI dominance with taxpayer money and govern it accordingly, or whether we will pay for it with the brains of taxpayers' children. If AI should be pursued for national security reasons, then by all means, let us invest government money in the industry, and let us carefully govern it in the national interest. But if we are not willing to spend the necessary federal money, then trying to win the AI race by letting private companies monetize AI products at consumers' expense, following the maximally addictive model of the attention economy we have seen with social media, will be a devil's bargain.
The government should help channel the AI industry in the right direction by funding AI development that enhances national security and promotes human flourishing, such as applications in medical diagnostics, agricultural production, infrastructure, engineering, and military vehicles and weapons. The alternative is allowing private industry to develop AI in the areas most profitable for its business, meaning products that are highly addictive and exploitative of users, which will only sap our national strength.
Clare Morell is a policy analyst at the Ethics and Public Policy Center, where she works on the Big Tech Project. She worked in the White House Counsel’s Office and the Justice Department during the Trump administration.
" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."