Able to closely imitate the voices of loved ones, AI-powered robocalls could pose a significant threat to unwary consumers

The Federal Communications Commission (FCC) has long recognised the growing impact of robocalls and texts on consumers throughout the US. As a result, various measures to reduce the impact of such calls, including the widespread implementation of Caller ID, have been gradually introduced over the past decade, with varying degrees of success.

In 2019, the FCC released its first-ever report on illegal robocalls, with then-chairman Ajit Pai calling them “a waste of time at best and a scam at worst”. The report showed that, since 2010, the FCC had proposed or imposed monetary forfeitures totalling almost $246 million against violators or apparent violators of either the Truth in Caller ID Act or the Telephone Consumer Protection Act, most of which took place between 2017 and 2019.

Since then, however, despite enforcement efforts, the scale of this challenge has only increased, with US consumers estimated to have received 50.3 billion robocalls last year.

Now, as we near the end of 2023, the challenge of robocalls is beginning to take on a new dimension: the incorporation of AI.

As AI technology continues to evolve rapidly, generative AI is growing increasingly capable of mimicking the voices of phone users worldwide. This, in turn, is creating a new and more effective vector for scam artists, who use these AI doppelgangers to dupe consumers into sending money or revealing sensitive information.

Identifying the use of deepfake audio in these telefraud attempts is no small feat. Customers, naturally, can be educated on the nature of these scams, but as AI impersonators grow more advanced, such measures are likely to become less effective.

This growing threat is not lost on the FCC, with Chairwoman Jessica Rosenworcel revealing at an event this week that she will be proposing an inquiry to take a closer look at AI’s impact on robocalls and texts. If adopted, the proposal suggests fighting fire with fire, using AI technology to protect consumers from these unwanted calls.

“AI is a real opportunity for communications to become more efficient, more impactful, and more resilient,” said Rosenworcel. “While we are aware of the challenges AI can present, there is also significant potential to use this technology to benefit communications networks and their customers—including in the fight against junk robocalls and robotexts. We need to address these opportunities and risks thoughtfully, and the effort we are launching today will help us gain more insight on both fronts.”

More specifically, the proposed inquiry would seek public comment on the following:

  • How AI technologies fit into the FCC’s statutory responsibilities under the Telephone Consumer Protection Act (TCPA).
  • If and when future AI technologies fall under the TCPA.
  • How AI affects existing regulation and the creation of future policy.
  • If the commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources.
  • And what any next steps might be.

An AI arms race between scammers and security companies seems inevitable, so it makes sense for the FCC to better equip itself to provide regulatory guidance in this environment. While these steps are largely preliminary for now, they could lay the foundations for effective regulation in a world inundated with AI – both malevolent and benign.

How is AI transforming the US telecoms sector? Join the operators in discussion at Connected America 2024

Also in the news:
The power of hybrid workforces: The imperative of cloud communications and a mobile-first approach
Shared Rural Network facing two-year delay
Ice Norway doubles down on Mavenir with cloud-native IMS deal