Ahead of the New Hampshire primary in January, residents of the Granite State got a call from a voice that purported to belong to President Joe Biden, urging them not to vote in the primary.
"It’s important that you save your vote for the November election," the voice said. “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday." The call even used one of the president's favored terms: "malarkey."
As it turned out, the call was created by a magician from New Orleans using artificial intelligence software and audio recordings of President Biden's voice. A political operative working for Democratic presidential candidate Dean Phillips admitted this week to being behind the fake robocall. (Phillips’ campaign cut ties with the consultant and said it had no knowledge of the robocall scheme.)
The magician, Paul Carpenter, told The Associated Press in an interview that he was hired by the consultant, Steve Kramer. Carpenter said he spent less than ten minutes creating the audio, but didn't know how the file would be used.
"I created the gun, I didn't shoot it," Carpenter told the AP. "I'm not about to be blamed for somebody being stupid."
The incident underscores how complicated the issue of AI has become in politics, particularly as the country gears up for November's presidential election. FBI Director Christopher Wray said at a conference this week that the U.S. is bracing for complex and fast-moving threats to elections, particularly noting advancements in generative AI, which he said makes it "easier for both more and less-sophisticated foreign adversaries to engage in malign influence, while making foreign-influence efforts by players old and new more realistic and difficult to detect."
"The U.S. has confronted foreign malign influence threats in the past," Wray said. "But this election cycle, the U.S. will face more adversaries, moving at a faster pace, and enabled by new technology."
In an interview with Spectrum News, Alexandra Reeve Givens, the president and CEO of the Center for Democracy and Technology, expressed concern about incidents like the New Hampshire robocall, warning that voters can be "easily manipulated" about crucial information.
"Voters could be easily manipulated," she said. "Either it's the time, place or manner of their vote, what they're voting on, and whether they need to show up at all."
A poll from The Associated Press-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy from last year found that nearly 60% of Americans think that AI tools will increase the spread of false or misleading information in November's election.
Givens said that while AI-generated video and audio is sometimes easy to spot, the technology is improving, and it can be difficult to tell when video or audio has been manipulated.
“It’s not fair to put that burden on individual consumers of information,” she said, adding that while candidates can take steps to correct misinformation, such falsehoods can still undermine people’s confidence in the democratic system.
“This is the time where we have to think seriously, both about how we counter those trends and how we boost authoritative information, so that voters do know where to turn to really understand the facts about whether they're going to vote, how they can vote and what a candidate stands for,” Givens said.
But while AI can be used to deceive, the technology also has potential for good: Artificial intelligence programs can help level the playing field for underfunded candidates.
“One of the real advantages of AI tools is that they can more affordably allow people to produce materials in a lot of different languages or in a higher quality video or audio format than they would be able to on a low budget otherwise,” Givens said.
OpenAI, the company behind ChatGPT, says it won’t allow politicians to build applications for political campaigns. But such bans can be difficult to enforce.
Lawmakers are also considering new legislation for AI programs: Congress has explored bills that would require a watermark to identify content made by AI. The Federal Communications Commission last month made the use of AI-generated voices in robocalls illegal. And a proposal in Florida would change state defamation laws to include false claims in political ads made with artificial intelligence.
While Givens said these measures are a good start, she pointed out that some legislative remedies might be limited by the First Amendment.
“Policymakers have to focus on that deception, on the manipulation piece of this, and then companies need to step up to the plate as well,” Givens said.
The Associated Press contributed to this report.