On Monday, the office of the New Hampshire Attorney General began investigating reports of deepfake Biden calls, which reportedly used advanced artificial intelligence to replicate President Joe Biden’s voice.
These calls aimed to dissuade voters in New Hampshire from participating in Tuesday’s primary election. Attorney General John Formella identified the message, sent to several voters on Sunday, as a likely illegal effort to hinder and suppress voter turnout. He said voters “should disregard the contents of this message entirely.”
What do deepfake Biden calls say?
According to an Associated Press article, one of the recordings echoes Biden’s voice, including his characteristic phrase, “What a bunch of malarkey.” The message urges voters to “save your vote for the November election.”
It falsely claims, “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”
Importantly, participating in Tuesday’s primary does not restrict voters from voting in the November general election. While Biden is not actively campaigning in New Hampshire and his name isn’t on the primary ballot—after prioritizing South Carolina for the Democratic primaries—his supporters are orchestrating a write-in campaign for him in the state.
The origin of these calls, which deceitfully appeared to be from the personal cellphone of Kathy Sullivan, a past chair of the state’s Democratic Party and an organizer for Granite for America (a super-PAC backing Biden’s write-in campaign), remains unknown. After receiving reports of these calls on Sunday night, Sullivan promptly informed the authorities and filed a complaint with the attorney general.
“This call links back to my personal cell phone number without my permission. It is outright election interference, and clearly an attempt to harass me and other New Hampshire voters who are planning to write-in Joe Biden on Tuesday,” she stated.
The scope of the deepfake Biden calls incident remains uncertain, but a spokesperson for Sullivan mentioned hearing from at least a dozen individuals who received the call. In response, the New Hampshire Attorney General’s office is urging recipients of this call to contact the state Justice Department’s election law unit via email.
Gail Huntley, a 73-year-old Democrat from Hancock, New Hampshire, intends to write in Biden’s name in Tuesday’s election. She recounted receiving the call around 6:25 p.m. on Sunday. Huntley initially thought the voice was Biden’s and assumed his statements had been taken out of context, but she soon realized the call was deceptive because its content made no sense.
“I didn’t think about it at the time that it wasn’t his real voice. That’s how convincing it was,” Huntley remarked. She expressed both shock and a lack of surprise over the spread of such AI-generated fakes in her state.
White House Press Secretary Karine Jean-Pierre confirmed on Monday that the call “was indeed fake and not recorded by the president.” Furthermore, Biden’s campaign manager, Julie Chavez Rodriguez, declared in a statement that the campaign is “actively discussing additional actions to take immediately.”
Rodriguez strongly emphasized the campaign’s stance against election interference: “Spreading disinformation to suppress voting and deliberately undermine free and fair elections will not stand, and fighting back against any attempt to undermine our democracy will continue to be a top priority for this campaign,” she affirmed.
The recent incident involving the use of advanced generative AI technology for voter suppression in New Hampshire signals a concerning trend. Experts are raising alarms that the year 2024 could witness an unprecedented surge in election-related disinformation globally, fueled by such technology.
The deepfake issue
Generative AI deepfakes have already surfaced in campaign advertisements for the 2024 presidential race. Over the last year, this technology has also been exploited to disseminate misinformation in elections around the world, in countries such as Slovakia, Indonesia, and Taiwan. The deepfake Biden calls are not the first instance of this technology being misused.
Hany Farid, a renowned digital forensics expert at the University of California, Berkeley, who analyzed the call recording, confirmed its artificial nature, describing it as a low-quality deepfake. Farid expressed concern about the potential weaponization of generative AI in the upcoming election, seeing this incident as a precursor to future challenges.
How do AI voice generators work?
AI voice generators, or text-to-speech (TTS) systems, are a cornerstone technology in the creation of realistic synthetic voices, such as those used in deepfake Biden calls. These systems leverage artificial intelligence algorithms to convert written text into spoken words. Utilizing advanced deep learning models and natural language processing techniques, they are capable of generating human-like voices with varied tones, accents, and emotional inflections. The sophistication of AI voice generators has reached a point where they can produce highly realistic and expressive speech, making it challenging to distinguish from real human voices.
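The pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the phoneme table, pitch values, and function names are toy stand-ins, and real TTS systems replace each stage with deep neural networks (and a neural vocoder for the final waveform) rather than fixed tables and sine oscillators.

```python
import math

# Toy grapheme-to-phoneme table (assumption, not real linguistic data).
PHONEMES = {
    "vote": ["V", "OW", "T"],
    "now":  ["N", "AW"],
}

# Toy "acoustic model": one fixed pitch per phoneme (assumption).
PITCH_HZ = {"V": 110.0, "OW": 130.0, "T": 150.0, "N": 120.0, "AW": 140.0}

def normalize(text: str) -> list[str]:
    """Stage 1: text normalization -- lowercase and tokenize."""
    return text.lower().split()

def to_phonemes(words: list[str]) -> list[str]:
    """Stage 2: grapheme-to-phoneme conversion via table lookup."""
    return [p for w in words for p in PHONEMES.get(w, [])]

def synthesize(phonemes: list[str], sample_rate: int = 8000,
               dur_s: float = 0.1) -> list[float]:
    """Stage 3: waveform synthesis -- one sine burst per phoneme.
    Modern systems use learned vocoders here instead of oscillators."""
    samples = []
    for p in phonemes:
        freq = PITCH_HZ[p]
        n = int(sample_rate * dur_s)
        samples.extend(math.sin(2 * math.pi * freq * i / sample_rate)
                       for i in range(n))
    return samples

audio = synthesize(to_phonemes(normalize("Vote now")))
print(len(audio))  # 5 phonemes x 800 samples each -> 4000
```

The point of the sketch is the three-stage structure (normalize, convert to phonemes, synthesize audio); the realism of a modern deepfake comes entirely from replacing the toy tables with models trained on recordings of the target voice.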
In the context of deepfakes, AI voice generators have played a pivotal role. This is evident in scenarios like the deepfake Biden calls, where AI algorithms were used to mimic the voice of a public figure. The emergence of AI celebrity voice generator tools, such as Bland Turbo, exemplifies the strides made in this field. These tools not only facilitate AI phone calling but also contribute to the evolving landscape of voice-based media; a survey of the top voice-to-video AI tools available can provide further insight into its potential applications.
Featured image credit: The White House