
Distributed Denial of Truth (DDoT): The Mechanics of Influence Operations and The Weaponization of Social Media

With the US election on the horizon, it’s a good time to explore the concept of social media weaponization and its use in asymmetrically manipulating public opinion through bots, automation, AI, and shady new tools in what Trustwave SpiderLabs has dubbed the Distributed Denial of Truth (DDoT).

Influence operations and Coordinated Inauthentic Behavior (CIB), which involves using many inauthentic online assets, including fake social media profiles and impersonated news outlets, in a coordinated manner, are increasingly used to sway perceptions, behaviors, and decisions. These operations can be either legal or illegal and may include various forms of disinformation or manipulation. We'll also delve into the new kinds of risks these tactics pose to the upcoming US elections and examine how rogue actors and legitimate entities are heavily investing in online assets that they can use to undermine democratic values.

According to the Cybersecurity and Infrastructure Security Agency (CISA), information manipulation is considered a threat and can be broken down into Misinformation, Disinformation, and Malinformation (MDM).

  • Misinformation: False or inaccurate information that is shared, ostensibly without the intent to deceive. It often arises from misunderstandings or misinterpretations of facts.
  • Malinformation: Factual information used out of context to mislead, harm, or manipulate. An example of malinformation is the edited video of Nancy Pelosi that was slowed down to make her appear drunk or incapacitated. The widely shared video used genuine footage but altered it to distort the perception of her speech and behavior. This manipulation of real content aimed to harm the target's reputation.
  • Disinformation: False information deliberately created and disseminated with the intent to deceive and manipulate public opinion. A prominent example of disinformation was the "Pizzagate" conspiracy theory. This false narrative claimed that a child trafficking ring involving high-profile politicians was being run out of a Washington, D.C. pizzeria. Despite being thoroughly debunked, the disinformation campaign led to real-world consequences, including a shooting incident at the pizzeria by an individual who believed the conspiracy.

 

Resource Mining and Digital Trenches

Delivering these fabricated narratives requires infrastructure to be put in place beforehand. Deployment usually involves registering new domain names and email accounts, acquiring SIM cards and burner phones, renting servers and VPNs, and creating new social media accounts. Unfortunately, the common countermeasures companies implement, including CAPTCHA and extra verification steps, have been bypassed consistently over time, sometimes by paying real people to complete the challenges manually without triggering anti-bot protections.

Even with these preliminary moves in place, social media accounts need a certain level of credibility and authenticity, such as content, followers, and connections, to achieve better reach and establish legitimacy. The following are some of the most commonly observed techniques used to build it:

Engagement Farming

Engagement farming is a technique used to increase visibility and reach within the platform's algorithm to grow an online presence. Engagement farmers may post content about polarizing subjects designed to spark debate, ensuring a higher number of comments and interactions. Posts often use sensational, misleading, or emotionally charged headlines designed to provoke curiosity, anger, or excitement, prompting users to click or engage with the content. They also frequently repost or slightly modify previously viral content to capitalize on its proven ability to generate engagement.

Image 1: Paid engagement services have become an established business model

Follower Farming

Follower farming involves increasing the number of followers on a social media account, often through artificial means like bots, purchased followers, or follow-unfollow strategies. The accounts may also host fake or real giveaways, requiring users to follow them as a condition to participate. These events often attract large numbers of followers quickly.

Image 2: Follower farming

 

Hate Farming

Hate farming is a strategy used on social media platforms where individuals or groups intentionally spread hate speech, inflammatory content, or divisive rhetoric to provoke strong emotional reactions, particularly anger or outrage. Topics often include race, religion, gender, sexuality, or political views, aiming to create an emotional response.

Once a bot farm is deployed and the payload delivery methods are ready, the accounts start posting and interacting around specific topics, altering the perception of the unaware online public. This coordination ultimately leads to a snowball effect in which the manipulated narrative gains traction and spreads widely, often appearing as though it has broad support or organic origins.

Image 3: Possible astroturfing targeting Japanese males.

It is not uncommon for these accounts to use profile pictures that are either AI-generated or stolen from real profiles, usually of attractive women, often accompanied by sexually suggestive messages to attract attention and increase engagement. Additionally, techniques like astroturfing are employed, where these accounts later shapeshift to target specific categories of individuals, such as farm workers, minorities, or other demographic groups, to create the illusion of grassroots support or widespread sentiment. This tactic helps the narrative blend in more effectively with legitimate content, making it harder to detect and counteract.

Image 4: Engagement for hire is a common business.

 

How AI is Tipping the Balance

Generative models, such as advanced AI-based language tools, can create messages that are more credible and persuasive. These models significantly enhance the quality of propaganda compared to traditional methods, particularly when propagandists lack linguistic or cultural knowledge of their target audience. By generating content that is linguistically and culturally varied, the artificial nature of the message will be much harder to detect, and the end result will be more effective and convincing.

Most social networks offer API services that facilitate post automation. This enables users to schedule content, manage multiple accounts, and integrate with various tools for analytics and content management, automating many aspects of social media management at scale, from scheduling posts to collecting data and managing interactions. These services are designed to help businesses and individuals optimize their online presence, but they can also be misused for coordinated disinformation campaigns, and there is a large amount of code available on GitHub that can easily be modified for this purpose.

Image 5: A paid tool to manage social media posting.
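To illustrate how little effort this takes, below is a minimal, hypothetical sketch of API-driven posting. The endpoint, tokens, and payload fields are placeholders rather than any specific platform's real API, but the pattern of pushing the same message through many accounts on a schedule is the same one abused in coordinated campaigns.

```python
"""Hypothetical sketch of automated posting through a social platform's HTTP API.
The endpoint, tokens, and JSON fields below are placeholders, not a real API."""

import time
import requests

API_URL = "https://social.example/api/v1/statuses"  # hypothetical endpoint
ACCOUNTS = [
    {"name": "persona_01", "token": "TOKEN_01"},     # placeholder credentials
    {"name": "persona_02", "token": "TOKEN_02"},
]
MESSAGES = [
    "Scheduled talking point #1",
    "Scheduled talking point #2",
]


def publish(token: str, text: str) -> int:
    """Post one message on behalf of one account and return the HTTP status code."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        json={"status": text},
        timeout=10,
    )
    return resp.status_code


if __name__ == "__main__":
    # A single loop is enough to push an identical narrative through many
    # accounts, which is why API-driven posting scales so easily for both
    # legitimate management and coordinated abuse.
    for message in MESSAGES:
        for account in ACCOUNTS:
            status = publish(account["token"], message)
            print(f'{account["name"]}: HTTP {status}')
            time.sleep(5)  # crude pacing to mimic scheduled, human-like posting
```

Legitimate scheduling suites follow the same basic pattern; what distinguishes abuse is the number of coordinated personas and the inauthentic nature of the content they amplify.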

 

The Upcoming US Election and Coordinated Inauthentic Behavior (CIB)

According to Meta’s Q2 2024 report, one of the six CIB networks it disrupted was a covert influence operation leveraging social media to spread political propaganda under the guise of the Patriots Run Project (PRP). The operation created 96 Facebook accounts, 16 pages, 12 groups, and three Instagram accounts, along with several domains, including "patriotsrunproject[.]com," and an X presence under "PRPNational." The accounts originated in Bangladesh and were used to craft a false narrative of widespread support for PRP, which claimed to be a political advocacy group with chapters in several US states.

To enhance the legitimacy of these accounts, operators used AI to create profile pictures, which were later replaced with more personalized images, effectively employing the astroturfing technique.

These fictitious personas pretended to reside in key US states such as Arizona and Michigan, sharing content that blended local interests, like sports and restaurant check-ins, with political memes. The PRP campaign also utilized these fake accounts to amplify content, spending about $50,000 on Facebook ads and attracting thousands of followers and group members across its assets.

By copying authentic social media posts and maintaining operational security through proxy IPs, the campaign effectively evaded detection while spreading negative content about specific individuals and institutions.

In May 2024, TikTok reported taking down 350 inauthentic accounts operating from China that were engaged in a campaign to artificially amplify criticism of the U.S. The same report identified nine accounts operating from Russia, posing as fictitious journalists and news outlets, which aimed to spread narratives intended to increase social division and criticize the current U.S. presidential administration.

Image 6: TikTok reporting that it took down accounts.

As part of a broader effort to deter and disrupt election interference, the U.S. Department of State is currently offering a substantial $10 million bounty for information on foreign individuals or entities involved in influence operations that interfere with U.S. elections. This initiative aims to safeguard the electoral process by targeting those who seek to undermine it through disinformation, cyberattacks, and other manipulative tactics.

 

The Ongoing Battle Against Coordinated Inauthentic Behavior (CIB)

In its ongoing efforts to counteract influence operations, Google's Threat Analysis Group (TAG) has intensified its focus on a widespread CIB network linked to the People’s Republic of China (PRC), known as DRAGONBRIDGE (aka Spamouflage). In 2023 alone, Google took down over 65,000 instances of DRAGONBRIDGE activity, including disabling a staggering 900,000 videos and 57,000 YouTube channels. This trend continued into 2024, with over 10,000 instances disrupted in the first quarter alone.

OpenAI has also disrupted several CIB influence operations leveraging its models. Notably:

  • Bad Grammar: Targeted Ukraine, Moldova, the Baltic States, and the US via Telegram, using AI to debug Telegram bot code and create political comments in Russian and English.
  • Doppelganger: Generated multilingual content (English, French, German, Italian and Polish) posted on X and 9GAG, translated and edited articles for websites, and converted news articles into Facebook posts.
  • DRAGONBRIDGE (aka Spamouflage): Used AI to research social media activity, generate content in multiple languages, and debug code for websites like revealscum[.]com.
  • IUVM: Generated and translated long-form articles and headlines for the website iuvmpress[.]co.
  • Zero Zeno: Created articles and comments posted across platforms like Instagram, Facebook, and X.

Further insights from Meta’s Q2 2024 report put Russia as the number one source of influence operations using CIB networks, with 39 campaigns tracked since 2017, followed by Iran with 30 and China with 11. These operations are now often driven by commercial contractors rather than state agencies, leading to an influx of low-quality, high-volume initiatives.

Recently, the U.S. Department of Justice indicted a company run by U.S. nationals operating on U.S. soil for allegedly engaging in contracts to produce content intended for Russian influence campaigns. This indictment highlights a concerning trend where domestic companies are being drawn into foreign disinformation efforts, blurring the boundaries of influence operations. It underscores the ease with which homegrown entities can be co-opted or manipulated into participating in foreign propaganda schemes, posing significant challenges to national security and the integrity of the public discourse.

 

Coordinated Inauthentic Behavior as a Service (CIBaaS)

These operations targeting geopolitical issues, such as the Ukraine conflict, Gaza conflict, Indian elections, and broader global narratives, illustrate the increasingly organized and commercialized nature of influence campaigns. The rise of CIBaaS shows how manipulation of online discourse is becoming more accessible, structured, and widespread. This commercial approach to disinformation paves the way for actors of all kinds to engage in these practices, with specialized groups offering services that range from social media manipulation to full-scale influence operations.

Meliorator

In July 2024, a Joint Task Force combining US, Dutch, and Canadian agencies discovered a tool dubbed Meliorator, an advanced AI-powered software tool developed and used by Russian state-sponsored actors to create and manage disinformation bot farms.

The tool was primarily designed to generate authentic-appearing social media personas en masse, which could then be used to disseminate disinformation and sway public opinion. Meliorator has been notably used by actors affiliated with RT (formerly Russia Today) to target countries with influence operations. These operations included creating thousands of fake accounts on social media platforms to spread pro-Russian narratives intended to undermine geopolitical adversaries.

Image 7: Meliorator tool diagram (Source: Joint Report)

 

Team Jorge

In 2023, Team Jorge's core tool, Advanced Impact Media Solutions (AIMS), a sophisticated AI-based platform, surfaced. AIMS creates and manages fake social media profiles, enabling the generation and dissemination of viral content with minimal input. The platform automates the creation of thousands of avatars, each with detailed backstories, to engage in coordinated inauthentic behavior across various social media platforms.

Image 8: Diagram showing the flow of a CIB paid service.

The operations conducted by Team Jorge and the operators of Meliorator, among others, reveal the industrialization of CIB, where disinformation campaigns are executed with precision and at scale. These entities provide services to manipulate online discourse, amplifying divisive content and crafting deceptive narratives across various platforms. Their operations, which target critical geopolitical issues like the Russia-Ukraine conflict, the Indian elections, and events in Brazil and the UK, are representative of the broader trend of influence operation campaigns using CIB networks.

 

Responding to Campaigns Using Coordinated Inauthentic Behavior (CIB)

Since this is a relatively new trend, efforts to understand and address CIB networks are still catching up. Currently, there are two notable approaches gaining traction. The first is the Breakout Scale, which is used to measure the reach and impact of these operations. The second is the DISARM framework, which provides a structured method for identifying, disrupting, and mitigating the effects of CIBs.

Image 9: The Breakout Scale (Source: Brookings.edu)
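As a rough illustration only, the sketch below encodes an ordinal "breakout" score in the spirit of that scale: higher categories mean an operation has spread further beyond its original assets. The signal names and ordering are a loose paraphrase for demonstration, not the authoritative category definitions, which are set out in the Brookings publication.

```python
"""Toy illustration of ordinal reach scoring in the spirit of the Breakout Scale.
Signal names and thresholds are assumptions for demonstration, not the official
category definitions."""

from dataclasses import dataclass


@dataclass
class OperationSignals:
    multiple_platforms: bool          # content observed on more than one platform
    organic_pickup: bool              # shared by real, unaffiliated users
    mainstream_media_coverage: bool   # reported as genuine by news outlets
    high_profile_amplification: bool  # repeated by celebrities or politicians
    policy_or_violence_outcome: bool  # prompted a policy response or call to violence


def breakout_category(s: OperationSignals) -> int:
    """Return an approximate 1-6 category; higher means wider breakout."""
    if s.policy_or_violence_outcome:
        return 6
    if s.high_profile_amplification:
        return 5
    if s.mainstream_media_coverage:
        return 4
    if s.multiple_platforms and s.organic_pickup:
        return 3
    if s.multiple_platforms or s.organic_pickup:
        return 2
    return 1


# Example: an operation seen on several platforms and echoed by real users,
# but not yet covered by mainstream media, would land around category 3.
print(breakout_category(OperationSignals(True, True, False, False, False)))
```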

The DISARM framework, which is grounded in cybersecurity principles, serves as a critical tool for defending against such threats. By adapting the established MITRE ATT&CK framework, DISARM provides defenders with strategies to understand and mitigate the sophisticated methods employed by these CIB-generating actors, helping ensure that the information environment remains resilient against manipulation.

Image 10: DISARM Framework Objects (https://disarmframework.herokuapp.com/)

 

Conclusions

It’s clear that the mechanisms to implement CIB and influence operations have grown increasingly sophisticated. Groups now leverage AI to generate convincing disinformation and orchestrate widespread social media manipulations, posing significant threats to democratic processes and public trust. The commercialization of influence operations underscores the alarming scale at which these manipulations occur.

This shift toward industrialized CIB reflects a disturbing trend in which the manipulation of public discourse becomes not only more accessible but also more prone to abuse. As disinformation, misinformation, and malinformation proliferate, they demand heightened awareness and proactive measures from all stakeholders to safeguard the truth in an increasingly complex information environment.


About the Author

Jose Tozo is a Senior Security Researcher with expertise in vulnerability management, automation, and incident response. With a proven track record of developing innovative solutions and mitigating threats in large environments, for over 20 years Jose has been dedicated to protecting critical assets in the digital world. Follow Jose on LinkedIn.

ABOUT TRUSTWAVE

Trustwave is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.
