Reader Forum: A cohesive mitigation approach is required to curb political robocalls ahead of November elections

Americans received more than 16 billion political robocalls in Q1 2024

Over the first three months of 2024, Americans received more than 16 billion political robocalls. These unwanted calls – ranging from AI deepfake election disinformation campaigns to financially motivated scams and more traditional nuisance calls – besieged voters throughout the primaries.

Despite party nominations being determined early in the primary process, robocall activity remained high throughout the year’s first quarter. Iowa residents experienced a significant surge in spam calls ahead of the January 15 caucuses, with robocall volume up more than 90x that week (January 8-14) compared to the previous week. New Hampshire voters likewise experienced a surge in political robocalls, up 40x in the week leading up to the primary (January 15-21) compared to the previous week.

The primary election data is an ominous sign for stakeholders seeking to protect Americans from nefarious robocalls and robotexts in the months ahead. While the highly charged 2024 presidential election environment suggests unprecedented risk levels for voters, there are actions carriers, policymakers, regulators and industry leaders can take to mitigate these threats. In some cases, these efforts are already underway.

Recognize the Evolving Generative AI Threat

Given the rise of generative AI and its ability to clone voices, bad actors’ political robocall methods have become more sophisticated and convincing.

New Hampshire voters experienced this threat firsthand. Ahead of the state primary in January, residents were targeted by an AI deepfake impersonation of President Biden telling them to refrain from voting.

“Your vote makes a difference in November, not this Tuesday,” claimed the artificially generated voice of President Biden.

Of course, this was not the first instance of a political AI deepfake.

Last June, Florida Governor Ron DeSantis’ campaign reportedly created images of President Trump embracing Dr. Anthony Fauci. USA Today reporting, based on interviews with forensic experts, indicated that AI almost certainly created those images. A month earlier, Sen. Richard Blumenthal of Connecticut had opened a Senate Judiciary Committee hearing on the dangers of deepfakes by playing an AI-generated recording of his own cloned voice.

Generative AI makes it increasingly difficult for Americans to discern legitimate calls from high-risk political robocalls. While carriers are on the front lines in neutralizing this threat, the burden needs to be a shared one in the interest of protecting voters.

Political Robocalls Require Coordinated Stakeholder Approach

On the heels of the New Hampshire deepfake, the FCC earlier this year unanimously ruled that robocall scams using AI voice-cloning technology are illegal. If and when AI-generated political calls resurface, state attorneys general nationwide are empowered to investigate and punish the bad actors behind them.

The FTC has also weighed in with a new rule prohibiting the impersonation of government, businesses and their officials or agents in interstate commerce. To protect individuals against AI fraud attacks, the Commission is also reviewing whether the rule should make it unlawful for AI platforms to provide services when they know their products are being used to harm consumers through impersonation.

In October 2023, President Biden issued an Executive Order to harness the power of AI while managing the risks associated with the technology, including measures intended to improve the safety and security of AI and to protect American consumers and workers.

Telcos are matching these policy and regulatory efforts with a commitment to call authentication, spoof protection and branded calling. Together, these capabilities help more legitimate political call traffic get through to voters while flagging unwanted and potentially fraudulent calls with greater precision.

Finally, telco-industry collaboration is poised to drive further innovation and AI development capable of staying a step ahead of even the most sophisticated threats. Early efforts focus on applying AI across the telco business, including voice biometrics, predictive call analytics and AI-driven SMS detection for robocall mitigation.
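To make “predictive call analytics” concrete, here is a toy sketch – not any carrier’s or TNS’ actual model – of scoring an originating number’s traffic pattern for robocall risk. The feature set and training examples are entirely hypothetical.

```python
# Toy illustration of predictive call analytics for robocall scoring.
# The features and training examples are hypothetical; a real system
# would train on large volumes of labeled network traffic.
from sklearn.linear_model import LogisticRegression

# Per-number features: calls placed per hour, average call duration
# in seconds, and the fraction of calls that are answered.
X_train = [
    [2, 95.0, 0.80],    # low volume, long calls, mostly answered
    [400, 8.0, 0.05],   # high volume, short calls, rarely answered
    [5, 180.0, 0.90],
    [900, 4.0, 0.02],
]
y_train = [0, 1, 0, 1]  # 1 = robocall-like pattern, 0 = normal caller

model = LogisticRegression().fit(X_train, y_train)

# Score a new originating number before deciding whether to flag the call.
risk = model.predict_proba([[350, 6.0, 0.04]])[0][1]
print(f"robocall risk: {risk:.2f}")
```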

Avoid Tunnel Vision on Blocking Calls

Aside from the rise of AI-generated scams, the other two most common political robocall scams throughout Q1 involved campaign donations and election surveys.

The FCC has clear rules for campaign fundraising, one of which prohibits political campaign-related autodialed or prerecorded voice calls to cell phones, pagers or other mobile devices without the recipient’s prior express consent. Illegitimate campaigns routinely flout these rules, posing as legitimate entities like the DNC or RNC or deploying spoofing tactics.

Similarly, bad actors frequently exploit the guise of conducting surveys to gauge voting trends. They may call and offer a prize or compensation, attempting to extract personal information from unsuspecting respondents. These scams succeed against the American public because they mimic tactics that legitimate political campaigns actually use.

But telcos and other stakeholders have an obligation to ensure the pendulum doesn’t swing so far that legitimate organizations seeking to communicate with voters the right way – and the legal way – end up as collateral damage. Call authentication and branded calling technology let telcos and legitimate organizations ease the burden on voters, so they are not left to guess whether an incoming call is harmful or harmless, wanted or unwanted, nuisance or helpful.

Branded calling presents rich brand information on incoming call screens so voters can more easily recognize who is calling. Call authentication is another critical piece; it verifies that a branded call really comes from the number and organization it claims, providing more confidence that if a branded call does get through, it’s from a legitimate brand.
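For readers curious about the plumbing, U.S. call authentication is built on the STIR/SHAKEN framework: the originating carrier signs a PASSporT token (a JWT per RFC 8225) that terminating carriers verify before trusting the caller ID. Below is a minimal Python sketch using the PyJWT library, under simplifying assumptions – the signing key is supplied directly rather than fetched from the certificate URL in the token’s x5u header, and the 60-second freshness window is illustrative.

```python
# Minimal sketch of verifying a SHAKEN PASSporT (RFC 8225/8588).
# Simplifying assumption: the signer's public key is supplied directly;
# a production verifier would fetch and validate the certificate named
# in the token's x5u header field.
import time
import jwt  # PyJWT

def verify_passport(token: str, signing_key, called_number: str) -> str:
    """Return the attestation level ('A', 'B' or 'C') if the token verifies."""
    # PASSporTs are ES256-signed JWTs; a bad signature raises
    # jwt.InvalidSignatureError here.
    claims = jwt.decode(token, signing_key, algorithms=["ES256"])

    # Reject stale tokens (the 60s window is illustrative).
    if abs(time.time() - claims["iat"]) > 60:
        raise ValueError("stale PASSporT")

    # The destination number in the token must match the number actually
    # dialed, or the token may be a replay against a different callee.
    if called_number not in claims["dest"]["tn"]:
        raise ValueError("destination mismatch")

    # 'A' = full attestation: the carrier vouches for the caller's right
    # to use the originating number. Branded display typically requires 'A'.
    return claims["attest"]
```

A branded-calling service would then layer the brand lookup and on-screen display on top of calls that pass this kind of check.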

These technologies would make it easier for Americans to distinguish political robocalls from legitimate campaign communications – a key ingredient in the battle against misinformation.

John Haraburda is Product Lead for TNS Call Guardian with specific responsibility for TNS’ Communications Market solutions.
