Source: ALJAZEERA
Though at least 20 states have enacted regulations against election deepfakes, federal measures are yet to be established.
On January 21, Patricia Gingrich was about to have dinner when her landline phone rang. The New Hampshire voter answered to a voice advising her not to cast her vote in the upcoming presidential primary.
"As I listened, I thought it sounded like Joe Biden," Gingrich told Al Jazeera. "But when the voice told me to save my vote, I knew Joe Biden wouldn't say that."
In reality, the voice wasn't Biden's. It was a deepfake created using artificial intelligence (AI).
Experts caution that deepfakes, which are AI-generated audio, video, or images meant to deceive, present significant risks to U.S. voters ahead of the November general election. They not only inject false content but also undermine public trust.
Although Gingrich didn't fall for the Biden deepfake, she worries it might have decreased voter turnout. The message reached nearly 5,000 New Hampshire voters just days before the state's primary.
"This could harm those not closely following Democratic activities," said Gingrich, chair of the Barrington Democratic Committee in Barrington, New Hampshire. "If they believed Joe Biden suggested not voting, perhaps they would abstain."
The Biden call wasn't the only deepfake in this election cycle. Florida Governor Ron DeSantis’s campaign shared a video with AI-generated images of Donald Trump embracing immunologist Anthony Fauci, two public figures who clashed during the COVID-19 pandemic.
In September, another AI-generated robocall targeted 300 voters expected to participate in South Carolina’s Republican primary. This time, an AI voice imitated Senator Lindsey Graham, inquiring about their voting choices.
The practice of falsifying content for political gain has always been part of U.S. politics. Even George Washington faced “spurious letters” that falsely suggested he doubted U.S. independence.
AI tools can now convincingly imitate people quickly and inexpensively, raising the threat of disinformation.
A study from George Washington University predicted that by mid-2024, daily “AI attacks” would increase, endangering the November general election.
The study’s lead author Neil Johnson said the greatest danger isn't from recent, obviously fake robocalls but from more believable deepfakes.
The trust built up within online communities enables bad actors to inject manipulated media directly into mainstream spaces.
Communities in swing states, and groups such as parenting forums on platforms like Facebook, are particularly vulnerable.
Johnson anticipates a surge in distorted content: material that, while not entirely false, misrepresents the truth.
Deepfakes target not only voters but also election officials. Larry Norden, senior director of the Elections and Government Program at the Brennan Center for Justice, works with election officials to identify fake content.
For instance, bad actors could use AI to imitate a supervisor’s voice, instructing election workers to close a polling station early.
Norden emphasizes that AI tools simplify the large-scale creation of misleading content. Last year, Norden made a deepfake of himself to demonstrate AI’s risks in a presentation. “It didn't take long,” he said. “I just fed past TV interviews into an app.”
While his avatar wasn’t perfect, the technology has since become more sophisticated, making it harder to detect AI-generated content.
As deepfakes become more common, the public may become more suspicious of all media, potentially eroding trust. Politicians could exploit this disbelief for personal gain, a phenomenon termed the “liar’s dividend.”
For example, during the 2016 election, damaging Access Hollywood audio involving Donald Trump surfaced. If a similar recording emerged today, a candidate could more easily dismiss it as fake.
"There’s a current lack of trust," Norden said. "This could worsen the situation."
Federal regulations on deepfakes are limited. The Federal Election Commission (FEC) has yet to address deepfakes in elections, and related congressional bills remain inactive.
Individual states are filling the regulatory gap. Public Citizen, a consumer advocacy group, reported that 20 state laws addressing deepfakes in elections have been enacted. More bills in Hawaii, Louisiana, and New Hampshire await a governor's approval.
Norden highlighted that states are acting before Congress because it’s challenging to pass legislation at the federal level.
Voters are also taking action. After receiving the fake Biden call, Gingrich joined a League of Women Voters lawsuit seeking accountability.
The call was traced to Steve Kramer, a political consultant who aimed to highlight the need for AI regulations by commissioning a magician to create the deepfake. Kramer later admitted to being behind the South Carolina robocall mimicking Senator Graham.
Kramer claims his intention was to make a difference, stating he gained “$5 million worth of exposure” for his efforts, hoping for AI regulations to take effect.
Kramer’s case demonstrates that existing laws can counter deepfakes. The Federal Communications Commission (FCC) ruled that voice-mimicking software falls under the 1991 Telephone Consumer Protection Act, making it illegal in most cases. The commission proposed a $6 million penalty against Kramer for the fraudulent robocall.
The New Hampshire Department of Justice also charged Kramer with felony voter suppression and impersonating a candidate, which could result in up to seven years in prison. Kramer pleaded not guilty and did not comment to Al Jazeera.
Norden noted that the criminal charges against Kramer are unrelated to AI. “Those laws exist independently of the technology,” he said.
Existing laws are less effective against parties who cannot be identified or who operate outside the U.S. Intelligence agencies report that China and Russia are already experimenting with these tools and expect them to be used in future elections.
Both Norden and Johnson suggest that a lack of regulation increases the importance of voter education about deepfakes and how to find credible information.
Gingrich believes voters must educate themselves about deepfake risks. Her advice to voters: “Ensure you are informed and know you can vote.”