WASHINGTON, D.C. — Amid growing concern that deepfake technology can be misused in politics, one Ohio lawmaker is leading an effort to prevent artificial intelligence-generated misinformation and disinformation from influencing voters.
What You Need To Know

  • The Securing Elections from AI Deception Act seeks to limit the spread of election-related misinformation online

  • Several high-profile cases of AI-generated deepfakes of political candidates have occurred

  • Republicans have opposed similar measures in the past, saying they would give government regulators too much power

Rep. Shontel Brown, D-Ohio, introduced the Securing Elections from AI Deception Act to require labels on election-related AI content and punish those who create and spread election-related misinformation with AI content.

Brown said several incidents of both regular election misinformation and AI-generated election misinformation in recent years inspired her to introduce the bill.

In 2020, right-wing operatives sent out thousands of robocalls in the Cleveland area, falsely telling people that they could be arrested or forced to get vaccines if they voted by mail. Two men pleaded guilty to a felony telecommunications fraud charge in 2022. During the trial, prosecutors noted the calls targeted minority voters in heavily Democratic areas.

In January, New Hampshire residents received a phone call with a voice that sounded like President Joe Biden advising them not to vote in the primary election. But the voice was a deepfake. The New Orleans-based political operative alleged to be behind the call now faces more than two dozen criminal charges and a fine.

“This type of deception is already taking place. And I think that when you couple it with the 21st century technology, then you are getting into very dangerous territory,” Brown said. “People are already starting to use [AI-generated misinformation]. This is not science fiction or something that we are contemplating in our mind, it's actually happening.”

The bill has 46 cosponsors, including Reps. Joyce Beatty, D-Ohio, Emilia Sykes, D-Ohio, and Greg Landsman, D-Ohio.

All cosponsors are Democrats.

“I hope that, ultimately, it does get bipartisan support,” Landsman said. “The country wants there to be a serious focus on social media in general, but in particular that there has to be guardrails. And it's the only way we're going to keep people safe.”

Brown said she was working on getting more support from across the aisle, arguing the issue should not be partisan.

“Clearly this is something that impacts both Democrats and Republicans. If you think back to the DeSantis campaign, they put in an artificial image out there that generated Trump hugging Dr. Fauci,” Brown said in reference to images posted by the presidential campaign of Ron DeSantis in June 2023. “So this isn't exclusive to Democrats.”

House Republicans, though, have opposed previous efforts by the Biden administration to control the spread of political misinformation.

Similar bills that passed along party lines in the Senate Rules Committee have also faced opposition from Republicans, who said the measures would give too much power to government regulators over speech.

“These bills increase burdens on speech. They rely on difficult to define terms like ‘reasonable person’ and ‘materially deceptive,’” Sen. Deb Fischer, R-Neb., said at a May 15 hearing. “Those vague terms create uncertainty about what speech is regulated and about whether a speaker could be subject to litigation or penalties.”

Brown said even if the bill does not pass by the November elections, she hoped it would raise awareness about AI-generated election misinformation.