U.S. Election Security and Deepfake Audio Fraud: Heightened Risk for November 2024

Date published:

Oct 30, 2024

Jon Marler

Manager, Cybersecurity Evangelist


As the U.S. enters the final weeks before the 2024 general election, a growing concern looms over the integrity of election security: deepfake audio fraud and disinformation. With advances in artificial intelligence (AI) making it possible to manipulate voices and video convincingly, the 48 hours before and after November 5th, 2024, are expected to be the most vulnerable window for attempts to sway public opinion, according to experts who testified before the U.S. Senate Intelligence Committee in May 2024. Deepfakes pose a significant risk to voter confidence and the outcome of the upcoming U.S. election, and a similar risk is expected to affect elections worldwide.

The Risks: Deepfakes and Disinformation

Deepfakes have the potential to undermine democracy by spreading misinformation, especially during critical election windows. During the Senate hearing in May, prominent tech executives from Google, Microsoft, and Meta warned that bad actors could use deepfake technology to produce fake audio or video recordings of political figures or falsify public statements that might alter the course of the 2024 election. This could include deepfake audio messages that discourage voters from casting ballots or manipulated content that damages candidates' reputations.

Already, Americans are seeing publicly shared fraudulent videos of Joe Biden and Kamala Harris.

Lawmakers expressed concerns that, unlike traditional misinformation, deepfakes can be nearly impossible to detect with the naked eye or ear, creating confusion about what is real and what is fake.

This erosion of public trust could directly impact voter turnout and election outcomes.

Platforms like Google and Meta have announced measures to label deceptive content, but it remains unclear whether those efforts will be enough to contain the spread of sophisticated deepfakes.

Pre-Election Security Measures

California Governor Gavin Newsom recently signed a law (AB 2839) that makes it illegal to distribute false election-related information during the critical pre-election window, which begins 120 days before the election. The goal is to prevent false audio or video content aimed at misleading voters from spreading unchecked, part of a broader attempt to mitigate deepfake risks and the harm they can cause in the lead-up to Election Day and its immediate aftermath. However, as of October 3rd, 2024, U.S. District Judge John A. Mendez has put a hold on AB 2839 over First Amendment concerns.

With the line between free speech and election interference being argued at the federal court level, Americans must rely on other measures for factual information, including collaboration between tech companies and the government. Major platforms are already gearing up to combat disinformation by enhancing their detection technologies and content moderation practices. Microsoft, through its Defending Democracy Program, and Meta, through its Oversight Board, are employing machine learning algorithms to identify and label misleading content. However, the sheer volume of potential deepfake content may still present a significant challenge.
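As a simplified illustration of the kind of classification these moderation pipelines build on, here is a minimal sketch of a text classifier that scores content for human review. It uses scikit-learn with invented training examples; the platforms' production models are proprietary and far more sophisticated than this toy version.

```python
# Minimal sketch: scoring election-related text for human review.
# Training data here is invented for illustration -- real platform
# moderation models are proprietary and far more sophisticated.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = likely misleading, 0 = benign.
texts = [
    "Breaking: polls closed early, do not go vote today",
    "Leaked audio proves the election is already decided",
    "Polling places are open 7am to 8pm on Election Day",
    "Remember to bring a valid ID to your polling place",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score new content; high scores would be routed to human reviewers.
candidate = "Audio recording: officials admit ballots were destroyed"
score = model.predict_proba([candidate])[0][1]
print(f"Misleading-content score: {score:.2f}")
```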

Post-Election Risks and the 48-Hour Window

The 48 hours after Election Day present another critical window. In 2020, a surge of mail-in ballots and the extended time required to count them delayed election results. According to a report by CISA, that delay left ample time to expose vulnerable audiences to fraudulent information. If the same were to happen in 2024, at a time when AI is far more advanced, the spread of deepfakes and misinformation could intensify.

Such deepfakes could further erode trust in the electoral process, especially if they are used to challenge the legitimacy of results or to fuel unrest among supporters of losing candidates.

Defending Against Deepfake Audio Fraud

To protect the election's integrity, government agencies and businesses must be prepared for this next wave of AI-enabled fraud. Strategies include:

1. Election-Specific Deepfake Detection Tools: Develop tools specifically designed to detect deepfake audio and video content around elections. Companies like DeepMedia and Reality Defender are already creating tools to detect AI-generated content in real time; a simplified detection sketch follows this list.

2. Public Awareness Campaigns: Educate the public about the dangers of deepfake technology and encourage voters to verify information with trusted sources before reacting to audio or video content online.

3. Multi-Layer Verification of Election-Related Information: Government entities and news organizations should promote multi-layer verification processes for election-related content. This may include using official communication channels to clarify potential misinformation in real time and to reassure voters of a message's authenticity; a signature-verification sketch also follows this list.
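To make item 1 concrete, below is a minimal sketch of one common approach to audio deepfake detection: extract spectral features (MFCCs) from a clip and score it with a pre-trained classifier. This is a simplified illustration assuming the librosa and scikit-learn/joblib libraries and a hypothetical pre-trained model file; commercial tools such as those from DeepMedia and Reality Defender use far more advanced techniques.

```python
# Minimal sketch of MFCC-based audio deepfake scoring.
# Assumes librosa/joblib and a hypothetical pre-trained classifier
# saved as "deepfake_clf.joblib" -- not a real product's API.
import joblib
import librosa
import numpy as np

def extract_features(path: str) -> np.ndarray:
    """Load an audio clip and summarize it as mean MFCC coefficients."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    # Average each coefficient over time to get a fixed-length vector.
    return mfcc.mean(axis=1)

def score_clip(path: str, model_path: str = "deepfake_clf.joblib") -> float:
    """Return the classifier's probability that the clip is synthetic."""
    clf = joblib.load(model_path)  # hypothetical pre-trained model
    features = extract_features(path).reshape(1, -1)
    return clf.predict_proba(features)[0][1]

if __name__ == "__main__":
    print(f"Synthetic-speech probability: {score_clip('robocall.wav'):.2f}")
```

For item 3, one practical verification layer is cryptographic: an official channel signs its statements so anyone can confirm a message was neither altered nor fabricated. Here is a minimal sketch using Ed25519 signatures from the Python cryptography library; the key handling and message format are illustrative assumptions, not an established election-communication standard.

```python
# Minimal sketch: signing and verifying an official statement with Ed25519.
# Key management and message format here are illustrative assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuing office generates a keypair once; the public key is
# published out-of-band (e.g., on an official .gov website).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

statement = b"Official notice: polls in County X remain open until 8 p.m."
signature = private_key.sign(statement)

# Anyone holding the public key can verify the statement's authenticity.
try:
    public_key.verify(signature, statement)
    print("Signature valid: statement is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: treat this statement as untrusted.")
```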
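A design note on the signature sketch: the scheme only helps if the public key is distributed through a channel attackers cannot spoof, which is why the comment suggests publishing it out-of-band on an official website.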

Securing the November Election: Combating Deepfake Audio Threats to Democracy

As the November 5th election approaches, deepfake audio fraud and misinformation will be critical threats to election security. Businesses, tech platforms, and government agencies must collaborate closely to mitigate the impact of fake content and preserve voter confidence.

While new laws, like California’s, are helping to deter the distribution of false information, vigilance will be required before and after Election Day to protect the democratic process from bad actors using AI for nefarious purposes.

For a deeper dive into the security implications of deepfake voice technology, our recently released webinar “Good vs Bad Actors in AI-Deepfake Voice Technology for Fun and Fraud” can help.
