
Shedding Light on Election Deepfakes

Contrary to popular belief, deepfakes are not all intrinsically malicious. The term, a portmanteau of “deep learning” and “fake,” refers to AI-crafted audio files, images, or videos that depict events and statements that never occurred.

These pieces of synthetic media can be used to create immersive and realistic learning experiences, democratize visual effects and filmmaking technologies, and help train employees on negotiations and high-stakes communications.

However, most news coverage of deepfakes focuses on their dangers, and understandably so. This revolutionary technology can be weaponized for phishing, business email compromise (BEC) attacks, and even nonconsensual deepfake pornography. The fact that deepfake technology can be abused to spread misinformation should also be heavily underscored, especially during an election cycle.

Let’s take a look at the state of deepfakes during the 2020 elections, how they are making waves in the 2024 election cycle, and how voters can tell truth from digital deception.


A Look Back: The Early Years of Election Deepfakes

In 2018, deepfake technology entered the mainstream when Jordan Peele’s deepfake video of former US President Barack Obama went viral as a warning to voters about fake news.

Now, deepfakes are even more powerful and can fuel fake news to a far greater degree than in 2018, making voters believe that candidates said or did things they never did and destroying those candidates’ credibility. During the 2020 election cycle, however, the threat of deepfake disinformation was far greater than the actual weaponization of the technology.

Reports from that period indicate that simpler, more traditional disinformation methods remained effective and more accessible. One such example is the edited video of Nancy Pelosi that was digitally slowed down to make her appear drunk and incoherent, a manipulation that required no AI at all.

Although deepfake technology was still in its early stages in 2020, alarm bells were already ringing over its potential for malicious misuse.


Election Deepfakes Today: What Are We Up Against?

It didn’t take long for deepfake technology to become more advanced and accessible, making its abuse in influence operations more widespread, impactful, and potentially dangerous.

In January 2024, deepfake audio of President Joe Biden was used in robocalls that discouraged New Hampshire voters from participating in the state’s primary election. Steve Kramer, a political consultant, was reportedly behind the deepfake; he was indicted in New Hampshire and fined US$6 million by the Federal Communications Commission (FCC). In August, the FCC also penalized Lingo Telecom, the voice service provider that helped distribute the fake Biden audio, with a US$1 million fine.

Three months before the 2024 presidential election, Republican presidential nominee Donald Trump shared AI-altered images of Taylor Swift fans appearing to wear “Swifties for Trump” shirts. Deepfake images implying an endorsement from Taylor Swift, who held the highest-grossing concert tour of all time in 2023, could deceive voters into thinking the artist’s large fanbase supports the Republican candidate. Although Trump later admitted that the images were fake and that AI can be dangerous, he repeatedly reshared AI-generated content created by his supporters.

Tesla CEO and X (formerly Twitter) Chairman Elon Musk also shared on X a fake campaign video featuring deepfake audio of Kamala Harris making statements she never made, including calling herself a diversity hire who doesn’t know the first thing about running a country. The tech mogul, who has publicly endorsed Trump, did not initially label the video as parody, but after a couple of days he clarified that it was intended as satire.

Because these AI-generated videos attain massive reach and engagement on social media, they have ignited discussions on whether such synthetic content should be treated as harmless satire or dangerous disinformation.

Since the advent of new generative AI tools, anyone can easily craft political deepfakes. A report from Google’s DeepMind shows that AI tools are being used to create deepfakes of politicians and celebrities to shape public opinion more often than they are being used to aid cybercrime.

New research from the Center for Countering Digital Hate (CCDH) also found that Musk’s generative AI tool, Grok, had safety failures that allowed users to create deepfake images to spread political misinformation. Researchers observed that Grok failed to reject any of the 60 text prompts they submitted relating to disinformation about the 2024 presidential election.

Grok was also recently tweaked after election officials from five states called out the platform for producing false information about state ballot deadlines.


Unmasking Deepfakes: Telling Truth from Deception

While US lawmakers are pushing measures to prohibit the use of AI for nonconsensual impersonation and to regulate deepfake creation in election-related communications, voters must stay on the lookout for AI-generated content that could be used for misinformation.

As the 2024 US presidential election draws near, we’ve rounded up some helpful tips on how voters can tell authentic content from synthetic content:

  • Investigate the Source: Make sure the content is reported or published by a reputable news source, such as the Associated Press, Reuters, or The New York Times.
  • Check Photos’ Small Details: Although deepfake images are made to look realistic, there are still several tell-tale signs that they are fake. Examine small or background details, such as hair, hands, limbs, fingers, logos, and lettering, for unnatural or skewed rendering.
  • Use Deepfake Detection Tools: Voters can access AI-detection tools that identify political deepfakes, such as those used by TrueMedia.org, to determine whether audio or video content shared on social media is synthetic. Big tech companies, including YouTube, are also actively working on detecting synthetic media and letting users flag and manage content that features their voices and faces, which could also help reduce election-related misinformation. For the technically inclined, one simple metadata heuristic is sketched after this list.
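
As promised above, here is a minimal Python sketch of one quick check a curious reader can automate, assuming the Pillow imaging library is installed: looking for camera EXIF metadata in a downloaded image. AI image generators typically produce files without camera tags such as Make and Model, so their absence is one weak hint of synthetic origin. The filename and the helper name are hypothetical, and this is an illustrative heuristic, not a Trustwave tool or a substitute for the dedicated detectors mentioned above.

    # A minimal heuristic sketch (assumes "pip install Pillow"): AI-generated
    # images usually lack camera EXIF tags, while photos straight from a camera
    # usually carry them. Social platforms often strip EXIF on upload, so treat
    # a missing tag as a weak hint, never as proof of a deepfake.
    from PIL import Image
    from PIL.ExifTags import TAGS

    # Base-IFD tags commonly written by real cameras.
    CAMERA_TAGS = {"Make", "Model", "DateTime", "Software"}

    def camera_metadata(path: str) -> dict:
        """Return any camera-related EXIF tags present in the image file."""
        exif = Image.open(path).getexif()
        return {
            TAGS.get(tag_id, str(tag_id)): value
            for tag_id, value in exif.items()
            if TAGS.get(tag_id) in CAMERA_TAGS
        }

    hints = camera_metadata("downloaded_post.jpg")  # hypothetical filename
    if hints:
        print("Camera metadata found:", hints)
    else:
        print("No camera metadata; scrutinize this image more closely.")

Purpose-built detectors go much further by analyzing the pixels themselves, but even this trivial check reinforces the habit of questioning where a piece of media actually came from.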

About the Author

Pauline Bolaños is a Security Content Researcher at Trustwave SpiderLabs. Pauline has seven years of experience as a cybersecurity writer, covering diverse security topics including malware, vulnerabilities, AI, and the cloud. Follow Pauline on LinkedIn.
