Trustwave Blog

How Deepfakes May Impact Upcoming Elections Worldwide

Written by Jose Luis Riveros | Jun 18, 2024

The common fear regarding election interference is that a threat actor will gain access to either ballot machines or the networks that tally votes. However, there is a much easier method available to anyone interested in interfering with a specific election.

The deepfake.

As I noted in my previous blog, "Is This Blog Real or a Deepfake?", creating a believable deepfake video or audio sample is relatively simple and can be done with apps freely available on the Internet.

Why is a deepfake such a powerful tool? Primarily because many factors can affect an election. As candidates face off and vie for voters over issues such as healthcare, the economy, education, etc., there is a strong possibility that those attempting to sway an election can introduce misinformation in the form of fake videos that could influence voters' decisions on any of these topics.

An actor with the proper resources could create a single, well-made, realistic deepfake to sway people against a candidate or make them think twice about their choice. A person looking to influence voters can design a video to target specific groups, using their age, race, and orientation as a starting point and piling on misinformation specifically crafted to attack the target's deepest fears.

For example, in today's hypersensitive, social media-focused environment, a malicious actor could create and post a fake video showing one candidate saying something off-color to upset that candidate's supporters and possibly convince them to vote differently.

A quick side note: this scenario can play out in any election anywhere in the world, and deepfake damage is not limited to elections. Organizations, businesses, and even celebrities are also at risk. A deepfake could move a company's stock, or even the broader economy, with a video of the CEO saying the business is doing badly when, in fact, it is doing well, or that it is being acquired when, in reality, it is not for sale.

 

Why Deepfakes Are So Dangerous

It is particularly difficult for the average person to spot a deepfake. That said, everyone, from the average person on up, should be on guard and try to stop them when possible.

To succeed in this endeavor, we need everyone to play a part: voters, campaign workers, the big social media companies, and, in particular, the news media, which should take the lead. The news media is traditionally tasked with verifying the veracity of information both before and after publication.

At the same time, campaign organizations can raise awareness by pressing the voting public and the technology companies hosting these videos to review and filter any video that has not been validated as coming from the source it claims to represent.

Unfortunately, the average person will bear the brunt of vetting campaign ads and videos. In much the same way people must be alert to phishing and telephone scams, everyone will have to question what they see and ask whether it really makes sense.

For example, a video showing President Joe Biden making multiple racist comments would be totally out of character and easy to discern as fake.

But what if someone posted a video showing President Biden declaring an end to aid for Ukraine? In this case, a viewer would have to go to multiple news sources to discover if this is, in fact, true.

Likewise, the companies now flooding the Internet with thousands of AI applications that create virtual assistants, transform text into speech, and generate videos using your own voice should put more controls and restrictions in place, so that these tools (free or not) are used correctly and there is accountability when malicious actors use them to create deepfakes.

 

Legislation to Limit AI

Multiple nations are developing or rolling out legislation designed to protect people from the insidious use of AI. In the US, the proposed Federal Artificial Intelligence Risk Management Act of 2023 would direct federal agencies to follow guidelines developed by the National Institute of Standards and Technology (NIST) for managing the risks associated with AI use. US states are also taking action: California's AB 302 (2023) requires analyzing existing AI systems to identify potential risks and unintended consequences, California's SB 1001 (2023) ensures individuals are informed when an AI system is being used, and New York's A8195 (2023), still in the works, proposes licensing high-risk AI systems and establishing an ethical code of conduct.

Internationally, there is the EU AI Act. This act is the world's first comprehensive AI law, passed by the European Parliament in April 2024. It sets varying regulations based on the risk level of AI systems.

 

Deepfake Detection Tools

Beyond legislation, organizations are developing practical tools that can help the average organization differentiate between what is real and what is a deepfake. These tools use machine learning to analyze videos for signs of manipulation.

Some popular options include:

  • Intel's FakeCatcher
  • Microsoft Video AI Authenticator
  • Deepware

Of course, we must keep in mind that while these tools can be effective, deepfakes are constantly evolving and may outpace a detector's ability to catch them.

The final line of defense against a deepfake is good old human expertise. Trained professionals can analyze videos for inconsistencies in lighting, skin texture, blinking patterns, and other subtle signs of manipulation.
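To make one of these signals concrete, here is a minimal sketch of a blink-rate heuristic. Early deepfake generators often produced faces that blinked far less than real people, so an unnaturally low blink rate can be one (weak) red flag. This is purely illustrative and not how any of the tools named above actually work: the eye-aspect-ratio (EAR) values, the 0.2 threshold, and the 8-40 blinks-per-minute range are all assumptions for the example; in practice the EAR series would come from a facial-landmark detector run on each video frame.

```python
def count_blinks(ear_values, threshold=0.2):
    """Count blinks as transitions of the eye aspect ratio (EAR)
    from open (>= threshold) to closed (< threshold)."""
    blinks = 0
    previously_open = True
    for ear in ear_values:
        if previously_open and ear < threshold:
            blinks += 1
        previously_open = ear >= threshold
    return blinks

def blink_rate_suspicious(ear_values, fps=30, low=8.0, high=40.0):
    """Flag a clip whose blinks-per-minute falls outside a roughly
    human-typical range (the 8-40 bounds here are assumptions)."""
    minutes = len(ear_values) / fps / 60
    if minutes <= 0:
        return True
    rate = count_blinks(ear_values) / minutes
    return rate < low or rate > high

# Synthetic 10-second clip at 30 fps: eyes open (EAR 0.3) with three
# short blinks (EAR dips to 0.1), i.e., about 18 blinks per minute.
clip = [0.3] * 300
for start in (50, 150, 250):
    for i in range(start, start + 4):
        clip[i] = 0.1

print(blink_rate_suspicious(clip))          # plausible rate -> False
print(blink_rate_suspicious([0.3] * 300))   # zero blinks -> True
```

A real detector combines many such signals with learned models rather than relying on any single hand-tuned rule, which is exactly why the human-expertise backstop described above still matters.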