Rubio impostor reflects the growing threat of AI voice impersonation
The article discusses a recent deepfake scam involving AI-generated impersonations of U.S. Secretary of State Marco Rubio’s voice. The convincing voice clones were used to contact foreign officials and U.S. lawmakers in an apparent attempt to extract sensitive information or gain unauthorized access. The incident highlights the rapid increase in AI-driven voice fraud, with experts and lawmakers urging stronger safeguards and regulations to combat synthetic audio threats.
The scam used platforms like Signal, exploiting AI technology to create highly convincing and personalized interactions. This follows earlier incidents involving impersonations of other officials, such as White House Chief of Staff Susie Wiles. Officials warn that these AI deepfake attacks pose a serious national security threat and are becoming increasingly frequent and sophisticated.
Experts emphasize the urgent need for advanced detection technologies and clearer labeling of AI-generated content by messaging services and social media platforms. Despite some regulatory measures, such as the FCC banning AI-generated robocalls, personalized AI impersonation scams remain tough to prevent. Lawmakers also debate the challenge of balancing AI innovation with intellectual property rights and privacy protections, noting that comprehensive federal AI regulations are unlikely to be enacted soon.
While high-profile scams receive much attention, experts caution that everyday consumers face growing risks from AI-driven and conventional fraud tactics alike, contributing to a considerable rise in financial losses nationwide.
Marco Rubio deepfake scam underscores the ‘explosion’ of AI voice impersonation
A series of AI-generated attempts to mimic Secretary of State Marco Rubio’s voice has experts and lawmakers pushing for more safeguards against the rise of synthetic audio threats.
Officials are on high alert after an impostor, using AI to replicate Rubio’s voice and writing style, contacted foreign ministers, a United States lawmaker, and a governor in what appeared to be a scheme to extract sensitive information or gain access, according to the Washington Post.
The ruse involved a Signal account displaying the name “Marco.Rubio@state.gov,” as detailed in a State Department cable cited by multiple outlets. The revelation comes just weeks after the FBI issued a bulletin in May about a coordinated effort to impersonate senior U.S. officials through AI-generated voice and text messages, targeting a broad network of current and former government figures.
A month later, the Canadian Centre for Cyber Security and the Canadian Anti-Fraud Centre issued a warning about a scam involving text messages and AI-generated voice calls, where attackers posed as senior officials and public figures to extract money and personal data.
Sen. Tim Kaine (D-VA) warned that the incident poses a severe national security threat, calling it “a crazy new world out there. You gotta really worry about it.”
Rubio is the second senior Trump administration official in recent months to be impersonated, following a separate May incident when someone gained access to White House Chief of Staff Susie Wiles’ phone and used it to call and message senators, governors, and business leaders. Sen. Rick Scott (R-FL) said he was targeted a few months ago after receiving a phone call from someone who sounded like Wiles.
“It took me a second to realize it wasn’t her, but I figured it out,” Scott said, speaking to the Washington Examiner. “I, of course, let Susie know right away.”
Several other lawmakers told the Washington Examiner they’ve received a growing number of phishing calls within the last year. Sen. Mike Rounds (R-SD), an Intelligence panel member, emphasized the need for the U.S. to lead in setting artificial intelligence standards, calling it an “ongoing battle” for the next decade.
“We’ve got to have better AI for detection of other AI deep fakes. This is going to be an ongoing battle for a decade, until such time as the guys that are doing it realize that we have better equipment than they do,” Rounds said, speaking to the Washington Examiner. “This is another reason why we have to improve our capabilities for the detection of these deepfakes and make it readily available.”
Vijay Balasubramaniyan, CEO of voice fraud prevention firm Pindrop, warned that AI-driven voice scams are accelerating at a staggering rate, describing a “1,300% explosion” in activity over the past year.
“Back in 2023, we’d see one deepfake attack per customer per month. By the end of last year, it jumped to five and a half per day per customer,” he said.
Balasubramaniyan described how today’s AI bots are disturbingly lifelike, capable of mimicking voices, human emotion, and empathy. In one case his team analyzed, a bot reassured someone on the line by saying, “I know it’s afternoon your time, it must be a long day. Please take your time.”
He explained how easy it is to clone someone’s voice and personalize interactions using publicly available data.
“With just a LinkedIn profile and a public audio clip, I can create a bot that speaks just like you,” he said.
To address the growing threat of AI-generated voice fraud, he is calling for urgent action from both lawmakers and tech platforms. In his view, communication services like Signal and social media platforms should take the lead in flagging and labeling AI-generated content to help users distinguish between real and fake.
“Even if it is AI-generated content that is for fun, knowing that is super important for the consumer,” he said, warning that without such transparency, people will either lose trust in everything or fall for sophisticated scams.
The calls are only growing more convincing. Last year, thousands of New Hampshire voters received deceptive robocalls impersonating former President Joe Biden, urging them to skip the January primary because “voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.” Later in the year, a deepfake caller posing as a senior Ukrainian official used AI in an attempt to extract sensitive election information from then–Senate Foreign Relations Chair Ben Cardin (D-MD).
As synthetic voice technology spreads, regulators are beginning to respond. Last year, following a surge in ultra-realistic robocalls, the Federal Communications Commission unanimously ruled that AI-generated voices in robocalls are illegal, categorizing them outright as artificial under the Telephone Consumer Protection Act, effective immediately. The move was widely seen as a major step in reining in synthetic audio abuse. But in practice, the rule does little to stop targeted impersonation scams now hitting lawmakers. It applies primarily to mass call campaigns, not personalized deepfake messages sent through encrypted apps like Signal or one-off calls designed to fool high-level officials.
Even after President Donald Trump signed a bill into law targeting AI-generated sexual content posted without permission, lawmakers pointed to the Rubio impersonation as evidence that additional action is still needed.
“Part of it requires us to actually get an agreement between the AI community and the intellectual properties community about how we move forward in the United States, where we can utilize intellectual property assets but actually make sure they are compensated for that use, this is essential,” Rounds said.
Still, new federal regulations on AI appear unlikely in the current Congress. GOP lawmakers recently tried to block states from imposing their own AI rules for the next decade, but were unsuccessful in the Senate’s final version of the One Big Beautiful Bill Act.
While high-level impersonation scams are grabbing headlines, experts caution that the broader threat to everyday consumers is just as serious and growing. Most cybercriminals don’t need advanced AI to do damage; many victims still fall for old tactics such as phishing emails or scam texts.
“This isn’t just a problem for senators,” Balasubramaniyan said. “Anyone with a phone number and a digital footprint is a potential target.”
Last year alone, Americans lost more than $12.5 billion to fraud, a 25% jump from 2023, according to the Federal Trade Commission.
Conservative News Daily does not always share or support the views and opinions expressed here; they are just those of the writer.