Criminals have historically been quick to embrace cutting-edge technology for financial gain. For instance, the notorious bank robbers Bonnie and Clyde used Ford cars equipped with high-powered V-8 engines to outpace local law enforcement. Other criminal groups leveraged telephones to coordinate their activities, while some recognized the advantage of wielding Thompson submachine guns to outgun security personnel and police.
In a similar vein, it's unsurprising that threat actors have now turned to artificial intelligence (AI), particularly for email-based attacks. These actors target, infiltrate, manipulate, and exfiltrate data from the very organizations that develop the technology underpinning global business operations. Trustwave SpiderLabs' 2024 Technology Threat Landscape: Trustwave Threat Intelligence Briefing and Mitigation Strategies sheds light on this concerning trend.
The consequences of attacks in this industry can be severe. Attackers are highly motivated by financial gain and political advocacy, and they continually adapt their methods to outpace defenses. The technology sector also faces unique challenges stemming from the nature of the industry itself.
The report notes that AI's ability to quickly and accurately generate text has made it a key weapon for attackers, greatly complicating a security team's twin jobs of flagging malicious emails prior to delivery and educating staffers on how to spot the ones that get through.
AI's role in email-based attacks was not the only development spotted by Trustwave SpiderLabs. The research team also revealed how attackers find and exploit various vulnerabilities to gain access. Gaining access is made even easier by the more than 12 million Internet-exposed devices found unpatched against several known vulnerabilities, a preferred avenue of attack for many adversaries.
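At its core, the unpatched-device problem is a version check: a device is exposed when the software version it reports predates the release that fixed a known flaw. The sketch below is purely illustrative; the version strings and the `fixed_in` threshold are invented, not tied to any real CVE.

```python
def parse_version(v):
    """Split a dotted version string into a comparable integer tuple."""
    return tuple(int(part) for part in v.split("."))

def is_unpatched(reported, fixed_in):
    """True when the reported version predates the release containing the
    fix -- the pattern behind 'exposed and unpatched' scan findings."""
    return parse_version(reported) < parse_version(fixed_in)

# Illustrative values only: not a real vulnerable/fixed version pair.
print(is_unpatched("2.4.9", "2.4.10"))   # True  -> still vulnerable
print(is_unpatched("2.4.10", "2.4.10"))  # False -> patched
```

Comparing integer tuples rather than raw strings avoids the classic mistake where `"2.4.9" > "2.4.10"` lexicographically.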
The report also points out the special relationship technology companies have with their customers. Technology companies frequently act as third-party suppliers, which places them at the root of many supply chain attacks. Additionally, certain technology subsectors, such as software companies and infrastructure providers, have complex supply chains, making it difficult to ensure the security of all components and services. This issue came to light in the MOVEit, SolarWinds, and Kaseya attacks.
Generative AI, a form of artificial intelligence capable of generating new text, media, and source code, enjoyed a breakout year in 2023, becoming widely popular in the business, consumer, and threat actor communities. Tools like ChatGPT, DALL-E, Synthesia, and others experienced explosive growth in both creative and malicious applications.
The concern is over Gen AI's ability to craft sophisticated email attacks, highlighted by the emergence of WormGPT and FraudGPT, Large Language Models (LLMs) similar to ChatGPT but lacking security constraints, which have proven to be favorites among adversaries. For example, Trustwave SpiderLabs researchers have observed a growing frequency of potentially AI-generated business email compromise (BEC) emails appearing in our clients' inboxes. To verify this, our researchers ran some of these emails through multiple AI text content detectors (GPTZero, Copyleaks, ZeroGPT, Quillbot) to identify any AI-generated content in the message.
In some cases, these tools have shown almost the entire BEC message is most likely AI-generated.
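A multi-detector check like the one our researchers performed can be sketched as a simple aggregation of per-detector scores. The detector names below are the real tools mentioned above, but the scores and the `aggregate_verdict` helper are hypothetical illustrations; each tool exposes its own interface, and these numbers are not actual measurements.

```python
def aggregate_verdict(scores, threshold=0.5):
    """Flag a message as likely AI-generated when a majority of
    detector scores exceed the threshold (simple majority vote)."""
    flags = [s >= threshold for s in scores.values()]
    return sum(flags) > len(flags) / 2

# Probability-of-AI scores -- illustrative placeholders, not real output.
detector_scores = {
    "GPTZero": 0.97,
    "Copyleaks": 0.91,
    "ZeroGPT": 0.88,
    "Quillbot": 0.42,
}
print(aggregate_verdict(detector_scores))  # True -> likely AI-generated
```

A majority vote across independent detectors reduces the chance that a single tool's false positive or false negative drives the verdict.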
Tech-savvy personnel, especially those in the technology sector, have become more cognizant of the traditional indicators of phishing attempts, such as grammatical and spelling mistakes. The truly dangerous aspect of AI-generated text is that it eliminates the basic language and grammatical errors that proliferated in older phishing attempts, significantly enhancing the effectiveness of phishing campaigns.
Aside from AI-generated phishing text, our researchers also observed the increasingly frequent use of AI services as lures, along with deepfakes, another newcomer to the threat actor's toolkit.
In one campaign, SpiderLabs found an email scam offering recipients the opportunity to make easy money through "Quantum AI," an alleged stock trading platform associated with billionaire Elon Musk. The scam extends beyond email, circulating a deepfake video of Musk on social media that promotes the platform, falsely claiming high returns with minimal risk. These fabricated emails and videos attempt to trick individuals into investing in the financial scam.
Finally, Trustwave SpiderLabs researchers noted the increasing use of AI-powered software-as-a-service (SaaS) marketing platforms for sending unsolicited marketing emails. One example our team observed uses Kalendar AI, a SaaS platform that can write personalized invitations to prospective customers and automatically send pitches on behalf of a specific company.
We should note that this methodology is not necessarily malicious. However, given how easily AI-driven services like these create and distribute personalized email campaigns, unsolicited marketing could quickly progress into full-blown malicious email campaigns.
The technology sector isn't alone in facing an elevated threat landscape, as SpiderLabs has pointed out in previous reports. As a result, preventative measures remain the most effective defense against all types of cyberattacks, and the report lists them in full.
Please take the time to download Trustwave SpiderLabs' 2024 Technology Threat Landscape: Trustwave Threat Intelligence Briefing and Mitigation Strategies to learn all about how threat actors plan, launch, and benefit from attacking the technology sector.