FCC Prohibits AI-Generated Voices in Robocalls

The Federal Communications Commission (FCC) has announced a new ruling that, effective immediately, outlaws the voice cloning technology often used in robocall scams that prey on consumers. The people behind these harmful robocalls can now face legal action from State Attorneys General across the nation.

FCC Chairwoman Jessica Rosenworcel said that threat actors are using AI-generated voices in unwanted robocalls to take advantage of vulnerable relatives, mimic famous people, and spread misinformation to voters. These calls have become more common recently because the technology has become easier for the average computer user to set up and use.

AI can mimic voices by using deep learning algorithms to analyze and replicate the unique characteristics of a person's voice, such as their pitch, tone, and speaking style. This technology is known as voice cloning or voice synthesis, and it can create realistic voice recordings that are difficult to distinguish from the real thing. 
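To make the idea of a voice's "unique characteristics" more concrete, the sketch below uses the open-source librosa audio library (in Python) to pull out simple pitch and timbre statistics from a short recording. It is only an illustration of the kind of features cloning models learn to replicate; the file name is hypothetical and no real cloning system is this simple.

```python
# Illustrative only: summarize the acoustic traits (pitch, timbre, speaking style)
# that voice-cloning models analyze. Assumes librosa and numpy are installed and
# that "voice_sample.wav" (a hypothetical file) holds a short voice recording.
import librosa
import numpy as np

def voice_profile(path: str) -> dict:
    """Return coarse pitch and timbre statistics for a voice recording."""
    y, sr = librosa.load(path, sr=None)  # waveform and its native sample rate

    # Fundamental frequency (pitch) track, limited to a typical speech range.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )

    # MFCCs roughly capture timbre ("tone"); their averages act as a crude voiceprint.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    return {
        "mean_pitch_hz": float(np.nanmean(f0)),       # average pitch
        "pitch_variability": float(np.nanstd(f0)),    # how much intonation varies
        "voiced_ratio": float(np.mean(voiced_flag)),  # rough speaking-style cue
        "timbre_profile": mfcc.mean(axis=1).round(2).tolist(),
    }

if __name__ == "__main__":
    print(voice_profile("voice_sample.wav"))  # hypothetical sample file
```

A cloning model goes much further, training a neural network to reproduce these traits well enough to generate new speech in the same voice from arbitrary text.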

Threat actors can use this technology to create fake voice recordings for scams, such as impersonating a victim's relative or a government official to trick them into revealing personal information or sending money. They can also use it to spread misinformation by producing fake recordings of public figures or celebrities.

Previously, State Attorneys General could only go after the underlying scam or fraud a perpetrator tried to commit. Now, using AI to generate the voice in these robocalls is itself unlawful, giving state law enforcement agencies additional legal tools to hold violators accountable.

The FCC launched an inquiry in November 2023 to build a record on how to combat unlawful robocalls and the role AI plays in them.

The agency asked how AI could be used in fraudulent schemes that begin with unwanted calls, such as by imitating the voices of familiar people, and whether this technology should be regulated under the Telephone Consumer Protection Act (TCPA). The FCC also sought input on how AI-driven pattern recognition could serve as a positive tool, detecting unlawful robocalls before they ever reach phone users.
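To illustrate the pattern-recognition idea, here is a minimal, hypothetical sketch of how a carrier or analytics provider might train a classifier on call-metadata features (calls per hour, average call duration, answer rate) to flag likely robocall campaigns. The features and synthetic training data are assumptions made for illustration, not the FCC's or any carrier's actual method.

```python
# Hypothetical sketch: flag likely robocall sources from call metadata.
# The synthetic data below stands in for labeled traffic a real system would use.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Features per originating number: [calls_per_hour, avg_duration_sec, answer_rate]
legit = np.column_stack([
    rng.poisson(3, 500),            # a handful of calls per hour
    rng.normal(180, 60, 500),       # multi-minute conversations
    rng.uniform(0.6, 0.95, 500),    # most calls get answered
])
robocall = np.column_stack([
    rng.poisson(400, 500),          # bursts of automated calls
    rng.normal(25, 10, 500),        # very short calls
    rng.uniform(0.05, 0.3, 500),    # mostly unanswered or declined
])

X = np.vstack([legit, robocall])
y = np.array([0] * 500 + [1] * 500)  # 1 = suspected robocall source

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score a new, hypothetical originating number before its calls are delivered.
print("Suspicious?", bool(model.predict([[350, 20.0, 0.1]])[0]))
```

In practice such systems run inside carrier and analytics-provider networks on far richer signals, but the basic approach of learning the call patterns that separate legitimate traffic from robocall campaigns is the same.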

A coalition of 26 State Attorneys General, representing more than half of the nation’s AGs, recently wrote to the FCC expressing their support for this approach. With this step, the FCC is building on its efforts to establish partnerships with state law enforcement agencies across the country to identify and eliminate illegal robocalls.