Harris-Trump Presidential Election: Looking at the Threats and Cybersecurity Challenges
With less than three months to go until the 2024 US presidential election, and with possible cyberattacks and data leaks already impacting campaign activities, the US Office of the Director of National Intelligence (ODNI) and Microsoft have issued separate reports on the tactics the nation's primary adversaries are using to influence the electoral process.
The ODNI and Microsoft reports focus on Russian, Chinese, and Iranian efforts to undermine the upcoming election through influence campaigns built around trending election-related topics and other activities. Still, we should be aware that deepfakes and election interference can come from anywhere.
Trustwave SpiderLabs has been tracking activities that could impact the election. In June, I wrote a blog post detailing the potential impact deepfake videos, photos, and audio could have on the elections.
Previously, just prior to the 2020 elections, SpiderLabs discovered databases containing US voter information for sale on the Dark Web. Those databases included a shocking level of detail about citizens, including their political affiliation. The sellers claimed the US voter database held 186 million records; if that is correct, it covers nearly every voter in the US. Information of this kind can be used to conduct effective social engineering scams and spread disinformation that could impact the elections, particularly in swing states.
More recently, we have seen deepfakes that target not only general topics but also the election process, such as a video earlier this year that depicted Taylor Swift supporting US presidential candidate Donald Trump. However, it's also important to note that deepfakes are only one tool in a much larger toolset foreign organizations use to influence people's behavior and decisions, as detailed in the ODNI and Microsoft reports.
Although neither the US government nor Microsoft specifically called out deepfakes in their reports, Trustwave SpiderLabs does believe they can play a powerful role.
As we noted, a threat actor with the proper resources could create a single, well-made, realistic deepfake to sway voters against a candidate or make them think twice about their choice. A person looking to influence voters can find the tools to design a video targeting a variety of people, using their age, race, and orientation as a starting point and piling on misinformation specifically designed to attack the target's deepest fears.
For example, in today's hypersensitive, social media-focused environment, a malicious actor could create and post a fake video showing one candidate saying something off-color to upset that candidate's supporters and possibly convince them to vote differently.
It's also important to note that any country, group, or individual can create a deepfake. Borders do not matter. The creator might have an issue with a candidate or how their country is reacting to international events and create a deepfake or false propaganda to counter that candidate or issue.
The Threats Facing the US Election
The intelligence community remains focused on Russia, the People's Republic of China (PRC), and Iran, whose influence activities this election cycle seek to shape US politics and policies to benefit their own interests, undermine US democracy, and erode Washington's standing in the world, the ODNI said.
The ODNI noted that Russia continues to pose a significant challenge to the integrity of US elections. Utilizing a diverse array of proxies and strategies, Moscow is becoming more adept at concealing its involvement while expanding its influence and crafting messages that better align with the perspectives of American audiences. These entities aim to support a particular presidential contender, sway the results of congressional races, erode trust in the electoral system, and intensify societal and political rifts.
According to the ODNI, Moscow is leveraging Russia-based influence-for-hire firms to shape public opinion in the US, including with election-related operations. These firms have created influence platforms, directly and discreetly engaged Americans, and used improved tools to tailor content for US audiences while hiding Russia's hand. Additionally, Russian influence actors are working outside of Russia's domestic commercial firms and are attempting to build and use networks of US and other Western personalities to create and disseminate Russian-friendly narratives. These personalities post content on social media, write for various websites with overt and covert ties to the Russian Government, and conduct other media efforts.
The ODNI did not name any of the personalities being leveraged.
Russian Activities
Russia's use of readily available talent to do its bidding is similar to what I noted in my blog about how threat actors can use free, off-the-shelf apps to generate high-quality deepfakes. Outside firms are chosen, according to the ODNI, because they frequently have skillsets and operational capabilities not available to government entities and can often operate more nimbly and with fewer bureaucratic hurdles. This mirrors why some actors use free deepfake apps: they allow a threat actor to quickly and efficiently create a viable product.
Microsoft's report drills down into specific groups and their actions to impact the US election cycle. The Microsoft Threat Analysis Center (MTAC) has observed three Russian influence actors, each of which has produced election influence campaigns with varying degrees of effectiveness:
- Ruza Flood (a.k.a. Doppelganger)
- Storm-1516
- Storm-1841 (a.k.a. Rybar)
Microsoft reported that Storm-1516 has had the greatest impact as of late June 2024. This group pivoted in late April from Ukraine-focused operations to targeting the US election with its distinctive video forgeries. Storm-1516 consistently launders narratives through videos, seeding scandalous claims from fake journalists and nonexistent whistleblowers, and amplifying that disinformation via inauthentic news sites. Since April, Storm-1516 has attempted to drive headlines with fake scandals claiming that the US Central Intelligence Agency (CIA) directed a Ukrainian troll farm to disrupt the upcoming US election, the Federal Bureau of Investigation (FBI) wiretapped former US President Donald Trump's residence, and Ukrainian soldiers burned an effigy of Trump.
Chinese Activities
The PRC is also active but likely not attempting to sway the US presidential election, the ODNI said. Instead, China might pursue a broader range of influence campaigns. The Intelligence Community (IC) remains alert to the possibility that Beijing-affiliated entities might attempt to disparage certain lower-tier candidates perceived as hostile to China's fundamental interests, a tactic previously employed in select 2022 midterm contests involving both major US parties, according to the ODNI. Additionally, the IC recognizes that Chinese influence agents are actively fostering discord within the US through social media and depicting democratic systems as inherently disorderly.
Further, the ODNI said PRC government entities have collaborated with a China-based technology company to enhance the PRC's covert online influence operations, including by creating content more efficiently and making it connect with local audiences. Foreign actors continue to rely on witting and unwitting Americans to seed, promote, and add credibility to narratives that serve the foreign actors' interests, seeking to take advantage of these Americans to spread messaging through their channels and engagements.
According to MTAC, the most prolific PRC threat actor from late April through late May was Taizi Flood (aka Spamouflage). This group leveraged hundreds of accounts to stoke outrage around pro-Palestinian protests at US universities. Microsoft said that Taizi Flood assets appeared to mimic students involved in the protests, reacting in real-time as students clashed with law enforcement across campuses and lifted text from authentic accounts with directions to demonstration locations.
To increase the level of conflict, some of these accounts seeded left-leaning messages into right-wing groups, aiming to further agitate disputes about the protests or create misunderstanding within the US audience most receptive to their intended message.
Iranian Activities
MTAC has tracked significant influence activity by Iranian actors, which is not new as Iranian cyber-enabled influence operations have been a consistent feature of at least the last three US election cycles. Iran's operations differ from Russian campaigns by appearing later in the election season and employing cyberattacks more geared toward election conduct than swaying voters. Iranian actors are expected to employ cyberattacks against institutions and candidates, MTAC reported, while simultaneously intensifying their efforts to amplify existing divisive issues within the US, like racial tensions, economic disparities, and gender-related issues.
The ODNI reported that Iran persists in its endeavors to undermine confidence in American political frameworks and amplify societal unrest. The IC has spotted Iran actively attempting to shape the US presidential election, likely motivated by a desire to prevent an outcome that could escalate tensions with the US. Tehran relies on vast webs of online personas and propaganda mills to spread disinformation and has been particularly active in exacerbating tensions over the Israel-Gaza conflict.
Separating the Truth from the Lies
Whether it's deepfakes or foreign-spread propaganda, it is very difficult to determine what is real or fake.
We now rely almost entirely on the efforts of news companies and big Internet players like Google and Meta to verify that content is real and legitimate before it reaches the public. If they fail, the impact of deepfakes and misinformation in general could be devastating.
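Automated checks can help at the margins. As one illustrative example, and not a technique named in the ODNI or Microsoft reports, the minimal sketch below uses perceptual hashing via the open-source Pillow and ImageHash Python libraries to compare a suspect image against a folder of known-authentic reference images. A near-but-not-exact match can suggest that genuine footage has been altered or recontextualized. The file paths and distance threshold here are hypothetical placeholders.

```python
# Minimal, illustrative sketch (not Trustwave tooling): flag a suspect image
# that closely resembles, but does not exactly match, known-authentic media.
# Assumes the third-party packages Pillow and ImageHash (pip install Pillow ImageHash).
from pathlib import Path

import imagehash
from PIL import Image


def nearest_reference(suspect_path: str, reference_dir: str, threshold: int = 10):
    """Compare a suspect image against known-authentic reference images.

    Returns (reference_name, hamming_distance) for the closest reference if it
    falls within the threshold, otherwise None. A small, nonzero distance can
    indicate an edited or recontextualized copy of real footage; a large
    distance simply means no known reference resembles the image.
    """
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    best = None
    for ref in Path(reference_dir).glob("*.jpg"):
        distance = suspect_hash - imagehash.phash(Image.open(ref))
        if best is None or distance < best[1]:
            best = (ref.name, distance)
    if best and best[1] <= threshold:
        return best  # close enough to warrant manual review
    return None


if __name__ == "__main__":
    # Hypothetical paths for illustration only.
    match = nearest_reference("suspect.jpg", "known_authentic/")
    if match:
        print(f"Possible altered copy of {match[0]} (distance {match[1]})")
    else:
        print("No close match among known-authentic references")
```

A check like this only catches derivatives of media you already trust; it does nothing against wholly synthetic content, which is why provenance standards and platform-level verification remain the heavier lift.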
About the Author
Jose Luis Riveros is Senior Consultant, SpiderLabs at Trustwave, where he is responsible for helping customers improve their security posture and delivering high-quality penetration testing including network, web app, wireless, and social engineering engagements. Follow Jose on LinkedIn.
ABOUT TRUSTWAVE
Trustwave is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.