Over the weekend of January 20-21, citizens in New Hampshire were subjected to an unusual political tactic. Robocalls, featuring a voice resembling that of President Joe Biden, instructed them not to participate in the January 23 primary.
These automated messages were created using an AI deepfake tool, seemingly with the intention of disrupting the 2024 presidential election. The content of the calls, as recorded by NBC, advised residents against voting in the primary, framing it as counterproductive to the Democratic cause.
The New Hampshire Attorney General's office quickly condemned the calls as misinformation and urged voters to disregard the message entirely. Additionally, a representative for former President Donald Trump denied any involvement by the GOP candidate or his campaign.
Investigators have yet to pinpoint the origin of the robocalls, and the inquiry remains active.
Simultaneously, another political controversy involving deepfake audio unfolded, this time affecting Manhattan Democratic leader Keith Wright. An AI-generated audio clip imitating Wright was released, in which the fabricated voice disparaged Democratic Assembly member Inez Dickens.
While some immediately recognized the audio as a fake, others, including Manhattan Democrat and former City Council Speaker Melissa Mark-Viverito, initially believed it was genuine, expressing shock over its content.
Experts note a growing trend of audio deepfakes, which are harder for the public to scrutinize than visual ones. The widespread awareness of image-editing tools like Photoshop does not yet extend to audio manipulation, making these fakes more deceptive.
As of now, there is no universally reliable method to identify or counter deepfakes. Experts advise caution when interacting with media from unverified or suspicious sources, particularly when it contains extraordinary claims.