
Combating Misinformation and Cyber Threats to Secure the 2024 US Election

As we near the 2024 election, safeguarding the integrity of our democratic process is of paramount importance.

While the protection of ballot machines has been a primary concern, the actual threats extend beyond physical infrastructure. Misinformation, cyberattacks, and the emergence of generative AI technologies like deepfakes present substantial challenges.

Between June 18 and July 12, the Trustwave SpiderLabs team meticulously examined over 5,000 politically themed emails sourced from secure email gateway cloud submissions and spam trap collections. These emails, originating from individuals affiliated with both Democratic and Republican parties, encompassed a wide spectrum of content, ranging from supportive to critical. Topics included candidate promotion, campaign updates, disparaging remarks, and conspiracy theories.

Despite the diversity of opinions expressed, two recurring themes were evident: appeals for financial contributions and the utilization of propaganda techniques. Adversaries exploit various channels to influence public sentiment. Understanding these tactics, mitigating associated risks, and implementing proactive measures are crucial for voters, campaign operatives, and media professionals alike.
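
To make that kind of triage concrete, the short Python sketch below shows one simple way a corpus of politically themed emails could be tagged for recurring themes such as donation appeals and urgency-driven propaganda cues. It is a minimal illustration only; the theme categories, keyword lists, and sample messages are assumptions for demonstration and do not reflect the actual SpiderLabs methodology or data.

# Minimal, hypothetical keyword-based theme tagging for a small email corpus.
# The categories and keywords below are illustrative assumptions only.
from collections import Counter

THEME_KEYWORDS = {
    "donation_appeal": ["donate", "chip in", "contribution", "matching gift"],
    "urgency_propaganda": ["last chance", "act now", "they don't want you to know"],
    "candidate_promotion": ["endorse", "rally", "vote for"],
}

def tag_email(body: str) -> set:
    """Return the set of themes whose keywords appear in the email body."""
    text = body.lower()
    return {
        theme
        for theme, keywords in THEME_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    }

def theme_counts(emails: list) -> Counter:
    """Count how often each theme appears across the corpus."""
    counts = Counter()
    for body in emails:
        counts.update(tag_email(body))
    return counts

if __name__ == "__main__":
    sample = [
        "Chip in $5 before tonight's deadline, act now!",
        "Join us at the rally this Saturday and vote for change.",
    ]
    print(theme_counts(sample))

Keyword tagging of this kind only surfaces candidates for closer review; it is no substitute for the manual examination of each message described above.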

 

Misinformation: The Invisible Enemy

Misinformation has become a pervasive threat in our digital age. With the organic nature of social media, biased algorithms, and the rapid spread of fake news, misinformation can easily influence public opinion. Social media platforms, despite their efforts to combat false information, remain a primary vehicle for the spread of misleading content. As we head into the heart of the election season, the potential for misinformation to shape voter perceptions and decisions is at an all-time high.

Key issues like healthcare, the economy, and education are particularly vulnerable to manipulation. Misleading narratives can be crafted to exploit voter fears and biases, swaying public opinion and potentially altering the outcome of the election. Imagine, for example, scrolling past a headline or video of a presidential candidate proclaiming their intent to end a widely endorsed healthcare policy. Even if the claim contradicts the candidate's actual platform, it may still be believable enough that unsuspecting viewers never give it a second thought. Without proper verification, such a claim can spread rapidly across platforms, and its simultaneous appearance on Facebook, Instagram, and X only lends it further credibility, whether it is legitimate or not.

It is more crucial than ever for voters to critically evaluate the information they encounter and rely on reputable sources for their news. Training them to cross-check claims against multiple credible sources can greatly reduce the spread of false information, and anything seen on social media should be vetted against reporting from established media outlets.

 

The Digital Battlefield

In addition to misinformation, cyberattacks pose a significant threat to election security. The introduction of generative AI has only inflamed this threat.

State-sponsored actors and independent hackers alike have demonstrated their ability to disrupt electoral processes through various means. From hacking into voter databases to launching denial-of-service (DoS) attacks on critical infrastructure, the tactics used in cyber warfare are diverse and constantly evolving.

Recent years have seen a rise in ransomware attacks targeting local government systems, including those responsible for managing elections. These attacks can lead to the theft of sensitive voter information, disruptions in the voting process, and a general erosion of public trust in the electoral system. Not only do these attacks have the potential to spread fake news, but they also enable blackmail and can be leveraged as a tactic in advanced phishing campaigns. For example, a spoofed campaign email could urge voters to click a malicious link to view a candidate's recent speech. Especially if the threat actor is leveraging AI, that link or any accompanying image could be realistic enough for an ordinary citizen to click on it, exposing them to malware.

Mitigating phishing or malware threats may sometimes be left to the individual on the receiving end, but strengthening cybersecurity measures at all levels of government is also essential to reducing these risks, particularly those arising from the proliferation of AI. To combat the misuse of AI and the threat of automated cyberattacks, several nations are developing or rolling out protective legislation. In the US, the Federal Artificial Intelligence Risk Management Act of 2023 directs federal agencies to follow guidelines for managing AI-related risks. States like California and New York are also enacting laws to regulate AI systems and ensure ethical conduct.
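
As one illustration of the individual-level mitigation mentioned above, the sketch below flags links whose domains closely imitate a known campaign domain, a common lookalike trick in phishing emails. The trusted domain list, similarity threshold, and example URLs are hypothetical assumptions, and this rough heuristic is not a substitute for a proper email security gateway.

# A minimal sketch that flags URLs whose hostnames imitate a trusted campaign
# domain. Domain list, threshold, and example URLs are hypothetical.
from difflib import SequenceMatcher
from urllib.parse import urlparse

KNOWN_CAMPAIGN_DOMAINS = ["examplecampaign2024.org", "votesampleparty.org"]  # hypothetical

def is_suspicious_link(url: str, threshold: float = 0.8) -> bool:
    """Flag URLs whose hostname looks like, but is not, a known campaign domain."""
    host = (urlparse(url).hostname or "").lower()
    if host in KNOWN_CAMPAIGN_DOMAINS:
        return False  # exact match to a trusted domain
    for domain in KNOWN_CAMPAIGN_DOMAINS:
        similarity = SequenceMatcher(None, host, domain).ratio()
        if similarity >= threshold:
            return True  # close imitation of a trusted domain, likely a lookalike
    return False

if __name__ == "__main__":
    print(is_suspicious_link("https://examplecampa1gn2024.org/speech"))  # True: lookalike
    print(is_suspicious_link("https://examplecampaign2024.org/speech"))  # False: trusted

In practice, an allowlist plus a similarity check like this catches only the crudest lookalikes; homograph and subdomain tricks require more careful handling.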


Deepfakes and the New Frontier of Deception

Among the many threats to election security, deepfakes represent a particularly concerning development. These AI-generated videos can depict individuals saying or doing things they never did, creating highly realistic but entirely false narratives. As technology advances, deepfakes become increasingly difficult to detect, posing a significant challenge for both the public and media professionals.

The ease of creating deepfakes has lowered the barriers for malicious actors. Freely available apps and user-friendly software mean that virtually anyone can generate a convincing deepfake. This democratization of the technology makes widespread misinformation easier to produce than ever before. Malicious actors can produce and disseminate deepfakes quickly and in large volumes, flooding social media with fake content designed to influence voter decisions on key issues.

Deepfakes can even be tailored to exploit the fears and biases of specific demographic groups, potentially swaying public opinion against a candidate. Because deepfakes are so difficult to spot and often play on voters’ deepest fears, it's essential for everyone to stay vigilant. The news media plays a crucial role in verifying information, and campaign organizations can also create awareness by urging the public and tech companies to review and filter unverified videos.

The average person must also bear a certain amount of responsibility for vetting campaign ads, videos, and other media they encounter. Just as everyone in traditional cybersecurity shares responsibility for spotting phishing scams, every voter should question the authenticity of the photos and videos they see.

 

Detection and Prevention

Despite the sophisticated nature of these threats, there are measures that can be taken to combat them. For misinformation and fake news, media literacy campaigns and public awareness initiatives are crucial. Voters need to be educated on how to identify false information and encouraged to verify the credibility of their news sources. Social media platforms must also continue to improve their algorithms to detect and remove misleading content more effectively.

In the realm of cybersecurity, government agencies and private organizations must collaborate to enhance the security of election infrastructure. Regular security audits, robust encryption methods, and comprehensive incident response plans are vital components of a resilient electoral system. Additionally, investing in advanced threat detection technologies can help identify and mitigate cyber threats before they cause significant damage.

When it comes to deepfakes, the development of sophisticated detection tools is paramount. AI-driven solutions can analyze videos for signs of manipulation, such as inconsistencies in lighting, shadows, and facial movements; practical tools in this space include Intel's FakeCatcher, Microsoft's Video Authenticator, and Deepware. Public awareness campaigns should also be launched to inform voters about the existence of deepfakes and provide guidance on how to recognize them.
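
For a sense of what the simplest frame-level checks can look like, the sketch below (using the OpenCV and NumPy libraries) flags abrupt brightness jumps between consecutive video frames, one crude proxy for the lighting inconsistencies detectors look for. It is an illustrative heuristic only, not how FakeCatcher, Video Authenticator, or Deepware work, and the outlier threshold and file name are assumptions.

# Naive frame-to-frame inconsistency check; requires opencv-python and numpy.
# This is an illustrative heuristic, not a production deepfake detector.
import cv2
import numpy as np

def frame_jump_scores(video_path: str) -> list:
    """Return mean absolute pixel differences between consecutive grayscale frames."""
    cap = cv2.VideoCapture(video_path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append(float(cv2.absdiff(gray, prev).mean()))
        prev = gray
    cap.release()
    return scores

def looks_inconsistent(video_path: str, z_threshold: float = 4.0) -> bool:
    """Flag a video if any frame transition is a strong statistical outlier."""
    scores = frame_jump_scores(video_path)
    if len(scores) < 2:
        return False
    mean, std = np.mean(scores), np.std(scores)
    return bool(std > 0 and max(scores) > mean + z_threshold * std)

if __name__ == "__main__":
    print(looks_inconsistent("suspect_clip.mp4"))  # hypothetical file path

Real detectors combine many such signals with trained models; a single pixel-difference heuristic like this will flag ordinary scene cuts as well as manipulation.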

 

This Election Year

As we move through the 2024 election year, the integrity of our democratic process is under unprecedented threat. Security leaders should continue to advocate for and support legislation that regulates the use of AI and imposes penalties for the creation and distribution of malicious deepfakes and misinformation. Encouraging international cooperation on AI regulation and on countering targeted, politicized cyber threats can also help create a unified approach, and shared rules of thumb, for shoring up election security.

It is imperative that voters, campaign workers, and media professionals remain vigilant and informed about these threats. By doing so, we can collectively work towards a more secure and transparent electoral process, ensuring that the voice of the people is accurately represented in the outcome of the 2024 election.

A version of this article originally appeared in Cyber Defense Magazine.

About the Author

Karl Sigler is Security Research Manager, SpiderLabs Threat Intelligence at Trustwave. Karl is a 20-year infosec veteran responsible for research and analysis of current vulnerabilities, malware, and threat trends at Trustwave. Follow Karl on LinkedIn.

