Washington Examiner

AI poses political danger in 2024 by deceiving voters.

AI-Generated Political Disinformation: A Threat to Democracy

The Rise of Deepfakes

Computer engineers and tech-inclined political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election. The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so inexpensive and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away. No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The Implications for the 2024 Campaigns and Elections

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen. “We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”

Alarming Scenarios

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media for the purposes of confusing voters, slandering a candidate or even inciting violence. Here are a few:

  • Automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date
  • Audio recordings of a candidate supposedly confessing to a crime or expressing racist views
  • Video footage showing someone giving a speech or interview they never gave
  • Fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, distorting Cooper’s reaction to the network’s town hall with Trump this past week, was created using an AI voice-cloning tool.

The Need for Guardrails

Legislation that would require candidates to label campaign advertisements created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating the fact. Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her greatest fear is that generative AI could be used before the 2024 election to create a video or audio that incites violence and turns Americans against each other. “It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, it could be extremely disruptive.”

The Rise of AI in Political Campaigning

Other forms of artificial intelligence have for years been a feature of political campaigning, using data and algorithms to automate tasks such as targeting voters on social media or tracking down donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his staff to use it, too, as long as any content drafted with the tool is reviewed by human eyes afterward. Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails, all typically tedious tasks on campaigns. “The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” he said.



" Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer."
