In November, Rep. Mike Collins’ campaign released a video showing his opponent, Senator Jon Ossoff, mocking farmers and defending a government shutdown. The audio was entirely fabricated - Ossoff never said any of it. Collins’ campaign plans to keep using the tactic.
The deepfake included a small on-screen disclaimer noting the AI generation. Under current law, that may be enough.
Welcome to the 2026 midterms, where political campaigns have discovered they can put words in their opponents’ mouths and face little meaningful consequence.
A Patchwork of Protection
Twenty-eight states have now passed laws targeting deepfakes in elections, according to Public Citizen’s tracker. The legislation has received bipartisan support in every state where it’s passed - a rare consensus in American politics that lying with AI crosses a line.
But the laws vary wildly in scope and teeth:
Disclosure requirements: California, Washington, and Missouri mandate that political ads using AI include clear disclaimers. The theory: if voters know content is synthetic, they can evaluate it accordingly.
Time-limited bans: Texas and Minnesota restrict deepfake videos within 30 to 90 days of elections - creating a curious loophole where the same deceptive content is legal in July but criminal in October.
Criminal penalties: Maryland’s SB0141 criminalizes the use of AI deepfakes to spread election misinformation, punishable by fines and potential prison time.
Seventeen more states have pending legislation, while five have taken no action at all.
The Georgia Problem
The Collins-Ossoff deepfake exposes the weakness of disclosure-based regulation. Yes, the video carried a disclaimer. But research shows viewers often miss or ignore such labels. And once fabricated audio of a candidate “saying” something spreads on social media, the disclaimer rarely follows.
“The only reason a candidate would need to use a deepfake to make up an opponent’s words is if they didn’t think they could win on their own,” Ossoff’s campaign responded. Collins’ camp called it embracing “new tactics and strategies.”
Georgia’s SB9, which would criminalize knowingly publishing certain AI-generated campaign materials without disclosure within 90 days of an election, advanced in 2025 but hasn’t become law. Even if it had, the Collins video would likely have complied - it carried a disclosure that was technically sufficient and practically meaningless.
Federal Paralysis
The Federal Election Commission remains deadlocked along partisan lines, split 3-3 between Democratic and Republican commissioners. The agency has issued guidance that existing fraud prohibitions apply “regardless of technology used” but has declined to create AI-specific regulations, handling complaints case by case.
The FCC has moved on one front: in February 2024, it ruled that AI-generated voices in robocalls qualify as “artificial or prerecorded voice” under the Telephone Consumer Protection Act. That’s why the Biden deepfake robocall in New Hampshire led to enforcement action. But the rule doesn’t extend to digital ads, television, or social media - where most political content actually lives.
Congress has passed two relevant laws, but neither addresses election deepfakes directly:
- TAKE IT DOWN Act (signed May 2025): Criminalizes non-consensual intimate deepfakes, with penalties of up to two years’ imprisonment. Platforms must remove such content within 48 hours of a valid takedown notice; the notice-and-removal requirements take effect May 19, 2026.
- DEFIANCE Act (passed the Senate January 2026): Allows victims of non-consensual sexually explicit deepfakes to sue for a minimum of $150,000. It now awaits House action.
Both target intimate imagery. Neither covers political speech.
What’s Coming
According to the American Association of Political Consultants, 59% of political consultants now use AI tools weekly - primarily for marketing materials. Kevin Harley, a Republican consultant, expects super PACs to “experiment more often with deepfake attack ads in 2026,” particularly since they operate independently from campaigns and face even less accountability.
Political ad spending for 2026 is projected to hit $10.8 billion, a 20%+ increase over 2022. Some portion of that will fund synthetic content.
The danger may extend beyond individual deception. As the Brennan Center notes, the real threat is “the gradual erosion of public trust in all political information.” When any video could be fake, even authentic footage becomes deniable. The “liar’s dividend” grows: politicians can dismiss genuine recordings as AI fabrications.
The Constitutional Tightrope
Regulators face a genuine tension. Political speech - including attack ads and hyperbole - receives the strongest First Amendment protection. Courts may scrutinize regulations that:
- Sweep too broadly into ordinary political discourse
- Restrict parody or satire without clear exemptions
- Burden protected artistic or commentary expression
The Brennan Center recommends narrow prohibitions focused on high-harm content (voter suppression messages, false fraud allegations) combined with broader disclosure requirements for other synthetic content. Mandatory exemptions for news coverage, parody, and satire would preserve legitimate expression.
But even well-crafted regulations face the enforcement problem. Deepfakes can be created anywhere, spread instantly across platforms, and reach millions before any regulator responds. The EU AI Act’s transparency requirements don’t take effect until August 2026 - and cross-border enforcement remains uncertain.
What This Means
The 2026 midterms are becoming a live experiment in what happens when AI outpaces regulation. States are acting while Washington deadlocks. Campaigns are testing boundaries while courts haven’t ruled. Voters are left to distinguish real from synthetic in an environment designed to make that impossible.
Some candidates will use deepfakes responsibly - with clear disclosure, for obvious satire. Others will do what Mike Collins’ campaign did: fabricate opponent statements with a pro-forma disclaimer that does little to prevent deception.
The question isn’t whether AI will influence the 2026 elections. It’s whether, by 2028, we’ll have built systems that preserve any meaningful distinction between what politicians actually said and what campaigns claimed they said.
The Bottom Line
Twenty-eight states have deepfake election laws. Congress hasn’t passed one. The FEC is paralyzed. And a Georgia Senate campaign just demonstrated that you can put fabricated words in your opponent’s mouth as long as you include fine print admitting you made it up. That’s the regulatory landscape eight months before the midterms.