Anti-robocall efforts are holding the line, but GenAI presents new challenges

FCC, New Hampshire AG issue warning letters on AI-generated political calls

In the wake of the 2023 deadline for implementing the STIR/SHAKEN anti-robocall call authentication framework, more U.S. calls than ever are being authenticated for legitimacy—that is, verified as actually coming from the number they claim to come from.
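Under STIR/SHAKEN, the originating carrier attaches a signed token (a "PASSporT," a JWT carrying the calling and called numbers plus an attestation level) that the terminating carrier verifies before the phone rings. As a rough illustration of that sign-then-verify idea only, the sketch below uses Python's standard library, with an HMAC standing in for the certificate-based ES256 signatures the real spec requires; the key and phone numbers are invented for the example.

```python
import base64
import hashlib
import hmac
import json

# Toy stand-in for a carrier's signing credential. Real STIR/SHAKEN uses
# ES256 with certificates issued under the SHAKEN governance framework.
SECRET = b"hypothetical-carrier-signing-key"

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs do."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_call(orig_tn: str, dest_tn: str, attest: str) -> str:
    """Build a simplified PASSporT-style token for one outbound call.

    attest "A" means full attestation: the carrier knows the customer
    and confirms they are entitled to use this calling number.
    """
    header = {"typ": "passport", "alg": "HS256"}  # real spec: ES256
    claims = {"orig": {"tn": orig_tn}, "dest": {"tn": [dest_tn]}, "attest": attest}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).hexdigest()
    return signing_input + "." + sig

def verify_call(token: str) -> bool:
    """Recompute the signature; a match means the caller ID was vouched for."""
    signing_input, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = sign_call("14045551234", "16035559876", "A")
print(verify_call(token))       # True: signature intact, caller ID attested
tampered = token[:-1] + ("0" if token[-1] != "0" else "1")
print(verify_call(tampered))    # False: any modification breaks verification
```

In deployment the token rides in a SIP Identity header between carriers, which is why the signing rates the TNS report tracks depend on IP interconnection between networks.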

According to the newest robocall report from Transaction Network Services, “virtually all” of the traffic that originates and terminates between U.S. Tier-1 carriers is now authenticated in compliance with STIR/SHAKEN. When Tier-1 traffic terminates with a smaller provider, TNS said that there has been a 45% improvement in call signing since 2022. Toll-free and Voice over IP providers have also seen an improvement of nearly 30% in call signing since 2022, TNS added. “However, only about 20% of calls between non-top networks are getting signed. The wide disparity between Tier-1 carriers and the rest of the field remains a serious problem. Bad actors are moving faster to adapt and improvise the latest scam tactics with machine learning tools and AI tricks.”

As a percentage of total traffic, spam calls ticked up 3% in the second half of last year, TNS said, though unwanted calls have fallen 14% over the past two years.

Even as anti-robocall efforts gain ground, from STIR/SHAKEN to Federal Communications Commission and Federal Trade Commission enforcement actions, TNS said, “consumer trust is arguably eroding at a faster rate. Lost trust translates into lower answer rates, and suspicious consumers are losing patience and quickly getting in the habit of not answering unknown calls.” The telecom ecosystem is trying to work around this and help businesses that place legitimate robocalls—for appointment notifications, for school delays or closures, and so on—get end users to answer the calls they want. “Advanced authentication and validation solutions, along with branded calling, are legitimizing enterprise calls and delivering them effectively with measurable ROI,” TNS added. AT&T and TransUnion recently announced a branded calling solution for outgoing business calls that displays a brand name and logo to the recipient, confirming the call’s legitimacy and encouraging them to answer. T-Mobile US also offers a branded calling service.

“Carrier progress with call signing, IP connectivity and broader digital transformation will be critical as the year unfolds,” said Denny Randolph, president of TNS’ communications market segment. “Rising threats from election disinformation robocalls ahead of the Presidential election, the growing accessibility of AI voice cloning tools, and the familiar cadence of seasonal tax, insurance, healthcare and holiday scams pose risks across multiple fronts to consumers.”

Timeliness plays into how successful a scam robocall will be, and TNS noted that more than half of respondents in its research received at least one tax scam robocall in the lead-up to last year’s tax deadline. More than 60% said that they got at least one health insurance scam call during open enrollment. And robocall scams are getting smarter, as well, with generative artificial intelligence in play. “Mimicking the voice of someone else has become easier and more lucrative for robocall criminals in the last year,” TNS said in its new report. “The darker side of AI is on full display here, where voice clips and voice-to-text messages can be cloned from the cloud to fake the voice of a child, family member or friend.” Typically, seniors are the target of such scams, but genAI-fueled political calls are expected to proliferate in the run-up to the 2024 election—and in fact are already on the rise.

According to TNS’ “robocall scam of the month” tracking for January 2024, while the top scam overall continues to be the ubiquitous auto warranty calls, in three states—New Hampshire, Iowa and South Carolina, which all had recent primary elections—the top scam was political calls.

The Federal Communications Commission yesterday issued a warning letter to Texas-based Lingo Telecom, which it alleges originated political misinformation robocalls from a company identified as Life Inc., a firm with a history of illegal robocalling, according to the FCC’s letter. The calls to New Hampshire residents used AI-generated voice cloning to mimic President Joe Biden’s voice and asked people not to vote in the New Hampshire primary.

The FCC also issued an order to “strongly [encourage] other providers to refrain from carrying suspicious traffic from Lingo” and is working with the New Hampshire State Attorney General’s office on its action.

“AI-generated recordings used to deceive voters have the potential to have devastating effects on the democratic election process. All voters should be on the lookout for suspicious messages and misinformation and report it as soon as they see it,” said New Hampshire Attorney General John M. Formella. “The FCC’s partnership and fast action in this matter sends a clear message that law enforcement and regulatory agencies are staying vigilant and are working closely together to monitor and investigate any signs of AI being used maliciously to threaten our democratic process.”

“The increasing reliance on AI-generated voices to deceive the public, including as part of election disinformation campaigns, is a rapidly growing problem,” said Loyaan A. Egal, chief of the Enforcement Bureau. “We will utilize every tool available to ensure that U.S. communications networks are not used to facilitate the harmful misuse of AI technologies. I thank our partners for their cooperation in this investigation and their ongoing efforts to stop and punish these illegal robocallers.”
