The AI clone wars: five ways artificial intelligence will impact telcos and consumers in 2024 (Reader Forum)

“The shroud of the dark side has fallen. Begun the Clone War has.” 

Rest assured, the future for telcos in 2024 is far less dire than this line from Star Wars – Attack of the Clones. But you wouldn’t know it based on growing industry concerns surrounding how bad actors are using Artificial Intelligence (AI) to, among other things, clone voices of unwitting consumers to engineer sophisticated robocall scams. 

The ease with which bad actors can now access and deploy sophisticated AI tools presents real challenges for the telecom market. However, innovation is also afoot; telcos and technology partners are collaborating on new AI solutions to protect subscribers and improve the customer experience; regulators are moving swiftly to create an environment where AI innovation can flourish with appropriate guardrails; and enterprises recognize they too have a role in protecting customers – and their own brand – from scammers and spoofers. 

These dynamics set up a compelling year ahead for telcos, as AI will present unprecedented challenges and opportunities. Below are five predictions for how AI will impact the telecom market in 2024. 

1. Bad actors’ use of generative AI set to evolve and expand  

Many consumers have received an introduction to the dark side of generative AI over the past several months, through reports of kidnapping scams where Americans hear the convincing AI-cloned voices of family members in distress. Often, the targets of the ‘imposter grandchild’ scam were older adults confronted with the synthetic voice of a grandchild distressed and asking for money to bail them out of perilous situations. Concerned about the risk of generative AI to older Americans, Congress called on the Federal Trade Commission (FTC) to investigate AI-related senior scams.

Consumers will remain in the sights of scammers, as they continue to evolve tactics and targets. As generative AI becomes more advanced, bad actors will seek bigger and quicker potential paydays. In the coming months, we expect more AI voice cloning scams targeting executives and financial services firms. There have been several recent examples of scams aimed at tricking bank employees into releasing customer funds. 

Americans will also have to guard against increasingly convincing generative AI robocalls – for example, a scammer posing as a family member with a flat tire and asking for money. With generative AI, bad actors will soon be able to include the sound of cars passing on the road to make such scams even more realistic; many current voice cloning kidnapping scams have been limited by their inability to replicate convincing background noise.

2. More concrete AI regulatory guardrails will be set 

On October 30, the Biden Administration issued an executive order on AI to promote the technology’s safe, secure and trustworthy development and use. The FCC, for its part, voted to pursue an inquiry into the impact of artificial intelligence on robocalls and robotexts in a recent open meeting but it is only just entering the information-gathering process on the topic.

In 2024, regulators will seek to identify specific actions that foster AI innovation while creating the guardrails needed to protect Americans from bad actors. As policymakers and regulators continue to monitor emerging AI technologies, expect fewer broad-brush regulations that would stifle innovation and more surgical attention to the areas presenting the greatest risk to citizens. 

The AI burden does not fall solely on regulators. Telecom providers will continue to recognize the need to educate stakeholders on the AI tools fraudsters are using. With a better sense of how bad actors deploy these technologies, lawmakers can begin to regulate and sanction scammers’ use of AI while creating avenues for the industry to use it to enhance the subscriber experience.

3. Telcos will tap AI to restore trust in the voice channel 

We believe bad actors will remain busy in the coming year using AI with the explicit goal of gaining victims’ trust to obtain money and personal information. When AI is used to trick consumers into trusting scammers, consumers can also lose trust in the entities that are supposed to protect them – carriers, technology providers, policymakers, regulators and enterprises. 

Our own work on AI-powered solutions alongside broader industry efforts will be used by carriers and legitimate entities to restore that trust in several ways: 

AI-powered voice biometrics. Voice biometrics uses real-time AI to determine whether the voice on the other end of an incoming call is synthetic (a robocall) or not (a legitimate call). As telcos leverage voice biometrics, it will act as a call screener by analyzing the voice, tone and diction of callers to determine whether the voice is real or a bad actor deploying AI for nefarious intent. This will deter low-volume AI voice spammers who move quickly from one phone number to another. 

Predictive AI-powered call analytics. Carriers will benefit from the use of predictive algorithms churning across billions of daily call events to better understand bad actors’ robocall patterns. As a result, carriers will be more strongly positioned to protect subscribers. 
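The kind of pattern analysis described above can be illustrated with a toy sketch: flagging originating numbers whose call volume within a recent time window exceeds a threshold. The function name, window size and threshold below are hypothetical illustrations for this article, not any carrier’s actual model, which would weigh far more signals across billions of events.

```python
from collections import Counter

# Hypothetical call detail records: (originating_number, epoch_seconds).
def flag_suspect_numbers(call_events, window_secs=3600, threshold=100):
    """Flag originating numbers whose call volume within the most recent
    time window exceeds a threshold -- a crude stand-in for the pattern
    analysis predictive models perform across daily call events."""
    latest = max(ts for _, ts in call_events)
    recent = [num for num, ts in call_events if latest - ts <= window_secs]
    counts = Counter(recent)
    return {num for num, n in counts.items() if n >= threshold}
```

In practice, a predictive model would also consider call duration, answer rates and number reputation rather than raw volume alone.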

Greater understanding of generative AI. Key to combating the ease with which scammers can clone voices using AI is understanding how voice replication works. Carriers will have more opportunities to see first-hand how bad actors use generative AI, and will thus be better prepared to combat this challenge. 

4. AI can ease the pain of customer support lines  

The customer is always right. But when it comes to interacting with brands’ customer support lines, the customer, it seems, is always frustrated – especially when forced to navigate endless prompts to reach a human, only to learn the person (or bot) is not the right one to address their specific needs and questions.

We have already addressed how scammers are using generative AI for nefarious purposes, but the technology can be an impactful force for good. The ability of generative AI to produce new content from massive volumes of data through machine learning can unlock numerous applications benefiting the consumer – including a better customer service experience. 

Research firm Metrigy’s global Customer Experience Optimization enterprise study finds that 46% of companies planned to use generative AI for customer-related activities in the contact center in 2023. Overall, Metrigy reveals that generative AI is involved in 43.6% of customer interactions.

While the role of generative AI in the contact center has been relatively modest to date, expect more involvement in the year to come. For example, generative AI can help customers escape the support-line black hole described above: instead of navigating dozens of issue prompts, customers could simply describe their problem and be routed directly to the right person or content. 

Finally, telcos can improve their own customer service with generative AI: subscribers could describe a technical problem to a chatbot, which would automatically run diagnostic tools and propose a solution.

5. AI may be weaponized for 2024 presidential election   

With each subsequent election, disinformation and scam campaigns have become more sophisticated, complex and high-risk for voters. 

Voters will have to be vigilant before, during and after the 2024 election; AI may be used by scammers to clone candidates’ voices to solicit money, make it sound like a candidate said something they haven’t or seek to direct voters to incorrect polling places to suppress turnout. 

The risk of AI will also likely extend to a key campaign tool used by candidates and party infrastructure: robotexts. In some ways, AI-generated texting is easier to discern, as keywords and phrases can be picked up and blocked. But the sheer volume of texts far exceeds that of voice calls, and that scale is where AI-powered screening becomes valuable. 
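As a toy illustration of the keyword screening mentioned above, a minimal filter might flag texts containing known scam phrases. The phrase list and function name here are hypothetical; production filters combine many more signals than keyword matching.

```python
import re

# Illustrative phrase list only; real filters are far broader and updated often.
SPAM_PHRASES = ["act now", "claim your", "urgent donation", "verify your account"]

def looks_like_spam(message, phrases=SPAM_PHRASES):
    """Return True if the message contains any flagged phrase.
    Keyword matching is one reason AI-generated texts can be easier
    to screen than cloned voices on a live call."""
    text = message.lower()
    return any(re.search(re.escape(p), text) for p in phrases)
```

A carrier-grade system would layer this with sender reputation, volume analysis and machine-learned classifiers.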

Look for advances in telcos’ use of AI to detect machine-generated political text messages, which plague voters’ messaging services today, and prevent them from reaching their intended recipients. These capabilities will increasingly be incorporated into spam detection offerings.

2024 will feature an evolving set of challenges from those using AI for nefarious purposes, but it will also be defined by new opportunities to leverage the technology for Americans’ benefit. 
