
How Threat Actors Conduct Election Interference Operations: An Overview

The major headlines arising from the three most recent US presidential election cycles have illuminated the fragility of American election infrastructure and systems.

In 2016 and 2020, malicious actors launched a bevy of election-related attacks, ranging from hacking government officials’ email accounts to spreading disinformation on social media platforms, all with the purpose of hijacking democracy by stealing sensitive information, undermining confidence in the election, and influencing election results.

And unfortunately, as technologies progress and become more widely available, malicious actors have more ammunition for political propaganda and cyberattacks. This is exactly what we’re seeing in the 2024 US presidential election.

This blog entry explores how malicious actors launched cybercriminal attacks, abused social media platforms, and misused advanced technologies such as machine learning (ML) and artificial intelligence (AI) for election interference campaigns in the 2016, 2020, and 2024 US presidential election cycles.


2016: Widespread Hacking, Fake News, and the Cambridge Analytica Scandal

Russian Influence

News of Russian interference erupted ahead of the 2016 presidential election. The Senate Intelligence Committee later reported that Russian actors “directed extensive activity, beginning in at least 2014 and carrying into at least 2017, against US election infrastructure at the state and local level.”

Intelligence professionals and US officials believe that the Russian actors, who targeted all 50 states, performed reconnaissance to better understand how the electoral systems worked so they could launch attacks later.

According to Senate Select Committee on Intelligence Chairman Richard Burr (R-NC), the US was not prepared for coordinated and persistent attacks against its election infrastructure during the 2016 election. Fortunately, the Senate panel’s report found no evidence of votes being changed or voting machines being manipulated.

The unauthorized infiltration of election systems led a grand jury in the District of Columbia to indict 11 Russian nationals on July 13, 2018, for a computer hacking conspiracy. The individuals were charged with “gaining unauthorized access into the computers of U.S. persons and entities involved in the 2016 U.S. presidential election, stealing documents from those computers, and staging releases of the stolen documents to interfere with the 2016 U.S. presidential election.”

Additional charges include aggravated identity theft, false registration of a domain name, and conspiracy to commit money laundering.

More information continued to emerge after the 2016 election. In 2017, the US government reached out to election officials in 21 states to inform them that they had been targeted (although not necessarily breached) by Russian actors. An NBC report revealed that the targeted states included swing states such as Arizona, Pennsylvania, and Wisconsin.

Time Magazine published a comprehensive report detailing the extent of Russia’s interference during the 2016 elections, including the hacking of emails belonging to presidential candidate Hillary Clinton’s staff, the Democratic Congressional Campaign Committee (DCCC), the Democratic National Committee (DNC), Sen. Lindsey Graham (R-SC), Sen. Marco Rubio (R-FL), and the Republican National Committee (RNC).

Fake News on Social Media, the Cambridge Analytica Conundrum

Through the years, social media has become a ubiquitous part of our lives, changing how people communicate, spend leisure time, and consume news and information.

In 2016, six in 10 US adults got their news from social media. A Pew Research study found that 66% of Facebook users got their news on the platform, while 59% of Twitter (now X) users did the same. Malicious actors have capitalized on this, abusing social media to launch influence operations against Americans.

Election-related fake news started gaining traction and notoriety on social media platforms in 2016, with a bias heavily favoring then-presidential nominee Trump over his opponent Hillary Clinton. Based on a 2017 Journal of Economic Perspectives report, 115 pro-Trump fake stories were shared on Facebook a staggering 30 million times, while 41 pro-Clinton fake stories were circulated 7.6 million times.

It was also reported that on X, more than 50,000 accounts were found to be connected to Russian bots that shared 3.8 million tweets during the 2016 election. Hashtags associated with about 80% of these Russian bots suggested they were programmed to support Trump. Meanwhile, on Facebook, 120 Russian-backed pages published 80,000 organic posts that an estimated 126 million people saw. These pages, which collectively built a network of over 3.3 million followers, have since been removed by Facebook.

Russian actors also potentially abused Facebook’s advertising feature to spread disinformation. Facebook said an operation possibly based in Russia, which controlled 470 fake accounts and pages, spent $100,000 on 3,000 ads that pushed polarizing views on immigration, race, and gay rights. Another $50,000 was spent on ads that were possibly political in nature. However, Facebook did not find a connection between the ads possibly purchased from Russia and any specific presidential campaign.

What Facebook did find and confirm, however, was that the private information of 87 million of its users “had been harvested on an unprecedented scale” by late 2015 to target them with personalized political advertisements. This was done by Cambridge Analytica, a voter-profiling company that worked on Trump’s 2016 presidential campaign and counted Steve Bannon, Trump’s then-key adviser, among its leadership.

Former Cambridge Analytica employee and data scientist Christopher Wylie divulged how the company “played with the psychology of an entire nation in the context of the democratic process” by building psychological profiles of voters in the US via Facebook data and an app that allowed them to harvest the data of tens of millions of Facebook users.

The New York Times reported that Cambridge Analytica designed target audiences for Trump’s digital ads and fund-raising appeals, modeled voter turnout, and identified the ideal states for campaign visits to win voter support.

Following the massive Facebook data exploitation, Cambridge Analytica ceased operations in 2018.

In the aftermath of the Cambridge Analytica scandal, Meta, Facebook’s parent company, agreed to pay US$725 million to settle a class action lawsuit that accused the company of allowing third-party companies to access private user information.

This scandal also affected how other social media and tech companies approach political ads: The Bipartisan Policy Center rounded up some significant political campaigning changes, including Twitter banning political ads, Google reducing political targeting options, and Facebook giving users the ability to opt out of such ads.

2020: Foreign Influence Ops and Fake News Persist

Foreign Influence: Russia, Iran, and Other Countries

Russia continued to launch influence operations during the 2020 elections, but it wasn’t the only nation to do so. The National Intelligence Council reported that several countries launched election interference campaigns in the 2020 election cycle.

  • Russia: Russia launched influence operations that supported Trump’s re-election, attacked his then-political opponent and current President Joe Biden, and subverted people’s confidence and trust in the electoral process. Unlike in 2016, however, Russia veered away from waging attacks against the US election infrastructure in 2020.
    Russian actors remained pro-Trump in 2020, using proxies linked to Russian intelligence to feed false or unsubstantiated claims about President Biden to US media companies, US officials, and other prominent individuals associated with the Trump administration.
  • Iran: Iran, on the other hand, surreptitiously launched a complex influence campaign that aimed to sabotage Trump’s presidential campaign without supporting his political rivals. Like Russia’s operations, Iran’s campaign was designed to sow discord, worsen the political divide in the US, and deepen the public’s distrust in the electoral process.
    In one attempt to incite social unrest, Iranian actors were reportedly behind emails sent to Florida Democratic voters in the name of the so-called Proud Boys, a far-right gang that claimed to have accessed the entire US voting infrastructure. The emails threatened recipients into voting for Trump, claiming the senders would know whom each voter chose and would go after those who refused.
    In 2020, Iranian actors also gained access to a US municipal government system in an attempt to publish unofficial election results to a public site. Fortunately, cybersecurity professionals foiled the attempt, and Iranian threat actors were never able to access or compromise the ballot-counting process.
    It was also reported that Iranian hackers breached the network of an unnamed media company that provides content management systems (CMSs) for newspapers and publications, compromising its system ahead of the 2020 election in an attempt to modify or create content. On Nov. 4, 2020, just one day after the presidential election, the hackers tried to access the company’s system again; the FBI informed the company of the attempt, and the company promptly blocked their access.
  • Others: Other actors also attempted to influence the 2020 US election, albeit through relatively smaller campaigns. These included Lebanese Hizballah, Cuba, and Venezuela.


Social Media Platforms Slowly Combat Misinformation and Conspiracy Theories

After the previous election cycle, disinformation campaigns on social media entered mainstream consciousness.

The proliferation of fake news prompted social media platforms to take stricter action to limit the spread of misinformation. According to one report, engagement with fake news content on Facebook dropped by more than 50% after the 2016 election.

Interestingly, Americans also clicked on unreliable websites less often before the 2020 election. According to Stanford researchers, the share of Americans who visited websites publishing misleading or false election-related information dropped from 44.3% in 2016 to 26.2%.

Social media companies actively removed accounts engaged in coordinated inauthentic behavior (CIB). In 2019, Facebook reportedly took down influence operations originating from Russia and Iran on its platform. One of these campaigns was allegedly connected to Russia’s Internet Research Agency (IRA) and was linked to 50 Instagram accounts and one Facebook account that had almost 250,000 followers and published almost 75,000 posts.

In 2020, Facebook and Twitter also removed hundreds of accounts belonging to “Russian military intelligence and other Kremlin-backed actors” involved in the 2016 presidential election influence operations. Across the three networks of taken-down Facebook accounts, the majority were found to have created fake personas of journalists or editors and shared links to fake websites posing as independent media outlets.

Despite these massive takedowns, malicious actors continued to weaponize social media platforms and their paid features for disinformation.

Political Facebook ads that falsely described Joe Biden as a communist targeted Latino and Asian American voters in 2020. The Trump campaign also ran a Spanish-language ad on YouTube falsely claiming that Venezuelan President Nicolás Maduro backed Biden’s campaign. This false advertisement, which was shown more than 100,000 times eight days before the 2020 election, targeted Latino voters in Florida.


2024: Tried-and-Tested Strategies Meet Advanced Technologies

Russian, Iranian Actors Continue to Meddle with US Presidential Election

In 2024, Iranian and Russian actors continue to engage in US presidential election interference operations.

Microsoft reported in August that it tracked four separate Iranian groups targeting the 2024 US election, highlighting the following key findings:

  • One Iranian group has been crafting fake news sites that cater to Democrats and Republicans alike and may be using AI services to copy some of its published content from legitimate US publications.
  • Microsoft believes that another Iranian group is responsible for potentially extreme political activity, such as intimidation or inciting violence, intended to cause chaos.
  • Another Iranian group, allegedly linked to the Islamic Revolutionary Guard Corps (IRGC), launched a spear-phishing attack against a high-ranking official on a presidential campaign using the compromised email account of a former senior advisor.
  • Another Iranian group launched a password spray operation that compromised a county-level government employee’s account. Notably, the employee is based in a swing state. (A defender-side detection sketch follows this list.)
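
A password spray inverts classic brute force: instead of hammering one account with many guesses, the attacker tries a few common passwords against many accounts to stay under per-account lockout thresholds. As a hedged illustration, and not Trustwave tooling, here is a minimal Python sketch of how a defender might flag that pattern in failed-login records, assuming a simplified log format:

```python
from collections import defaultdict

# Hypothetical parsed authentication events: (source_ip, username, success).
# Real deployments would read these from SIEM or identity-provider logs.
events = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", False),
    ("198.51.100.2", "alice", False),
    ("198.51.100.2", "alice", False),
]

def flag_password_spray(events, min_users=3, max_tries_per_user=2):
    """Flag source IPs that fail logins against many distinct accounts with
    only a few attempts each -- the signature of a spray, which evades the
    per-account lockouts that catch classic brute force."""
    failures = defaultdict(lambda: defaultdict(int))
    for ip, user, success in events:
        if not success:
            failures[ip][user] += 1
    return [
        ip for ip, per_user in failures.items()
        if len(per_user) >= min_users
        and max(per_user.values()) <= max_tries_per_user
    ]

print(flag_password_spray(events))  # ['203.0.113.7']
```

The thresholds here are illustrative; production detections would also window by time and account for shared egress IPs.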

In the same month as Microsoft’s report, the Trump campaign disclosed that Iranian actors had hacked, stolen, and distributed sensitive internal documents to news outlets such as Politico, The New York Times, and The Washington Post.

Meanwhile, in early September, the US justice, state, and treasury departments accused Russian state broadcaster RT of hiring a Tennessee-based firm to make and distribute media content with hidden Russian messaging to US audiences. US Attorney General Merrick Garland said RT paid the firm a whopping US$10 million for its services.


Fake News, Cheap Fakes, and AI-Powered Content Make Waves on Social Media

Malicious actors continued to capitalize on social media’s wide reach and easy accessibility to spread disinformation ahead of the 2024 presidential election.

A recent Microsoft report discussed how Russian influence operations have set their sights on the Democratic ticket of Kamala Harris and Tim Walz, spreading fake videos to “discredit Harris and stoke controversy around her campaign.” Two Russian actors that Microsoft dubbed Storm-1516 and Storm-1679 were behind the dissemination of fake videos on social media platforms, including X and Telegram.

Based on a report by The Guardian, in one of its most recent disinformation operations, Storm-1516 performed the following actions:

  • Paid an actor to pose as a hit-and-run victim who was supposedly left paralyzed in the fake video
  • Created an inauthentic website for a fictional San Francisco-based news outlet it called “KBSF-TV”
  • Spread the fake video, which claimed Harris was involved in a hit-and-run incident, on social media platforms, where it garnered an estimated 2.7 million views

Simple and straightforward tactics such as Storm-1516’s fake video remain a go-to for swaying public opinion. In another example, unidentified actors with no ties to the Republican campaign stole publicly available photos of 17 European fashion and beauty influencers to promote Trump and JD Vance on X.

However, malicious actors have also started relying on AI technologies to create synthetic content that furthers their disinformation efforts. With AI tools, it has become easy to craft fake news stories and imagery and translate them into various languages to target, and potentially sway the votes of, specific communities.

In May 2024, OpenAI, the company behind the popular large language model (LLM)-powered tool ChatGPT, published a report disclosing the countries that used its product for disinformation campaigns distributed on social media: Russia, China, Israel, and Iran. According to OpenAI’s blog entry, the company successfully disrupted five influence operations, none of which appeared to significantly increase their engagement or reach by using OpenAI’s LLM tool.

A Chinese influence operation dubbed “Spamouflage” reportedly incorporated AI tools to create content on controversial topics, including reproductive rights and US support for Ukraine. The Chinese actors, who pretended to be American voters to worsen the political divide in the US, shared social media content that blasted both the Democratic and Republican parties’ nominees.

Additionally, NPR reported that Russian actors have created AI-generated political content covering diverse topics that support Trump’s presidential campaign and undermine Vice President Harris’s. The same report discussed how Iranian actors abused AI tools to create English- and Spanish-language political social media content aimed at damaging Trump’s campaign.

Aside from making and translating fake content, AI is also being misused to create fake social media profiles that act as supersharers of fake news online.

According to the US Department of Justice, it thwarted a bot farm-powered Russian campaign that used AI to create fake yet realistic-looking X accounts to spread misinformation and promote pro-Kremlin stories in the US and other countries.

The AI software and the bot farm were linked to an unnamed editor at RT. RT, formerly known as Russia Today, is a state-controlled news outlet that Meta recently banned from its social media platforms, including Facebook, Instagram, WhatsApp, and Threads, over allegations of using deceptive tactics to carry out surreptitious influence operations.

Conclusion

For as long as there are election activities, there will be concerted and covert efforts to disrupt them to shape opinions and sway votes. This means everyone must do their part to ensure that democracy is protected against influence operations.

Tech companies must come together and work with government and media organizations to establish guidelines and standards for using these technologies responsibly and ethically, and to effectively protect users against AI-powered disinformation efforts.

Though AI is often misused by malicious actors, security professionals can also use it to fight fake news and malicious AI-generated content: it can detect AI bots, screen content for possible disinformation, and identify AI-generated material.
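
As a hedged illustration of that screening idea, here is a minimal Python sketch using the Hugging Face transformers text-classification pipeline. The model identifier and its AI_GENERATED label are placeholders, assumptions standing in for whatever vetted detector a team actually deploys, not a specific product or recommendation:

```python
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="example-org/ai-text-detector",  # hypothetical model identifier
)

posts = [
    "BREAKING: leaked memo PROVES the vote count was rigged!!!",
    "Polls close at 8 p.m. local time; check your county site for locations.",
]

for post in posts:
    result = detector(post)[0]  # e.g., {'label': 'AI_GENERATED', 'score': 0.91}
    # Route high-confidence hits to a human analyst rather than auto-removing;
    # classifiers of this kind produce false positives.
    if result["label"] == "AI_GENERATED" and result["score"] >= 0.8:
        print(f"Flag for human review: {post!r}")
```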

Users must also do their part and view election-related media with a conscious and critical approach. Here are a few helpful ways to spot fake news and fake news sharers online:

  • Check whether the source is credible.
  • Do your own research on the topic. Check trustworthy news sources to verify if a news item is factual or fictional.
  • Look out for sensational wording and news that excessively appeals to a reader’s emotions.
  • Scrutinize a social media user’s username (does it match the profile URL?), profile picture, friends/followers/subscribers, and posts; the sketch after this list turns these checks into simple heuristics.
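
To make the account checks above concrete, here is a minimal, hypothetical Python sketch. The profile fields and thresholds are illustrative assumptions, not any platform’s real API or a vetted detection rule:

```python
from datetime import datetime, timedelta, timezone

def suspicious_account_score(profile: dict) -> int:
    """Score a profile against simple red-flag heuristics; higher scores mean
    more signals worth a closer manual look. Field names are illustrative."""
    score = 0
    # Username that doesn't match the handle in the profile URL
    if profile["username"].lower() not in profile["profile_url"].lower():
        score += 1
    # Very new account posting at high volume (typical of bot farms)
    age_days = (datetime.now(timezone.utc) - profile["created_at"]).days
    if age_days < 30 and profile["post_count"] > 500:
        score += 2
    # Follows far more accounts than follow it back
    if profile["following"] > 10 * max(profile["followers"], 1):
        score += 1
    # Default or missing profile picture
    if not profile.get("has_custom_avatar", False):
        score += 1
    return score

example = {
    "username": "fashionista_usa",
    "profile_url": "https://example.social/u/xk92bq17",  # handle mismatch
    "created_at": datetime.now(timezone.utc) - timedelta(days=10),
    "post_count": 1200,
    "followers": 15,
    "following": 900,
    "has_custom_avatar": False,
}
print(suspicious_account_score(example))  # 5: every heuristic fires
```

No single signal is conclusive; real accounts sometimes trip one or two of these checks, which is why the sketch accumulates a score for human judgment instead of issuing a verdict.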

About the Author

Pauline Bolaños is a Security Content Researcher at Trustwave SpiderLabs. Pauline has seven years of experience as a cybersecurity writer, covering diverse security topics including malware, vulnerabilities, AI, and the cloud. Follow Pauline on LinkedIn.
