Trustwave Blog

ChatGPT: A Tool for Attackers and Defenders

Written by Jason Whyte | May 30, 2024

ChatGPT impresses everyone with its writing capabilities; however, its proficiency in understanding and generating human-like text has inadvertently empowered threat actors to produce realistic and error-free phishing emails, which can be challenging to detect.

The use of ChatGPT in cyberattacks poses a significant threat, particularly in the hands of attackers whose first language isn’t English. The tool helps them overcome language barriers, enabling the creation of more convincing phishing content.

According to the Information Systems Audit and Control Association (ISACA), cybercriminals are drawing on their expertise, their specific targets, and their intended outcomes to frame prompts for ChatGPT. This approach amplifies the effectiveness of their already sophisticated deceptive tools, underscoring the need for heightened cybersecurity measures.

Cybercriminals have adapted ChatGPT’s advanced text-generation capabilities to refine exploit code. Despite built-in guardrails to prevent misuse, skilled threat actors can craft prompts to bypass these limitations, letting them refine or check exploit code for effectiveness. The accessibility and adaptability of ChatGPT pose a significant challenge for cybersecurity, as they lower the barrier to conducting sophisticated cyberattacks.

On the other hand, ChatGPT and similar generative artificial intelligence (AI) tools offer significant advantages to cybersecurity teams. These tools can automate and expedite various processes, freeing cybersecurity professionals to concentrate on more intricate tasks that require human judgment and experience. Some of the key advantages include:

  • Increased efficiency: AI and machine learning algorithms can process and analyze vast amounts of data at speeds unattainable by humans. This capability significantly enhances the speed of detecting and responding to security breaches.
  • Enhanced threat detection: AI can identify patterns and anomalies that human analysts might miss, leading to more accurate and comprehensive threat detection. This improvement reduces the likelihood of false positives and false negatives in threat identification.
  • Automated responses: Repetitive tasks can be automated, allowing cybersecurity teams to engage in strategic and high-level tasks.
  • Continuous monitoring: AI facilitates real-time threat detection through constant monitoring of networks and systems, which is crucial for timely responses to security breaches.

While ChatGPT and similar tools are making it easier for threat actors to deceive even the most vigilant employees, organizations can effectively utilize ChatGPT for cybersecurity with the right level of expertise. However, many companies lack their own security operations center (SOC), and finding and retaining skilled security professionals is a real challenge. 

In such cases, seeking professional assistance, such as partnering with a managed detection and response (MDR) service provider, can be a more effective strategy. These services, especially when combined with tools like ChatGPT, are adept at proactively detecting and responding to cyber threats, providing a reliable defense.

AI’s capabilities in processing and analyzing vast data sets, such as logs and security events, let it rapidly identify potential threats, often uncovering blind spots that might elude traditional detection methods. For example, when a security information and event management (SIEM) system flags suspicious activities, ChatGPT can quickly analyze and prioritize these events, providing a summarized view.

Automating this triage significantly reduces the time and effort cybersecurity teams would otherwise spend on manual analysis. In addition, ChatGPT’s ability to interpret and analyze scripts targeting specific vulnerabilities can help generate intrusion detection system (IDS) signatures, further enhancing the protective capabilities of MDR services.
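As a rough illustration of how such LLM-assisted triage might be wired up, the sketch below sorts hypothetical flagged SIEM events by severity and formats them into a prompt asking a model for a prioritized summary. The event fields, the 1–10 severity scale, and the prompt wording are all assumptions for illustration, not any particular SIEM platform’s export format or any vendor’s API.

```python
# Illustrative sketch only: preparing flagged SIEM events for LLM-based triage.
# The event schema and severity scale below are hypothetical; a real SIEM
# export (e.g., JSON from your platform) will look different.
from dataclasses import dataclass

@dataclass
class SiemEvent:
    event_id: str
    source_ip: str
    rule_name: str
    severity: int  # hypothetical scale: 1 (low) to 10 (critical)

def build_triage_prompt(events: list[SiemEvent], max_events: int = 20) -> str:
    """Sort flagged events by severity (highest first) and format them
    into a prompt asking the model for a prioritized summary."""
    top = sorted(events, key=lambda e: e.severity, reverse=True)[:max_events]
    lines = [
        f"- [{e.severity}/10] {e.rule_name} (id={e.event_id}, src={e.source_ip})"
        for e in top
    ]
    return (
        "You are assisting a SOC analyst. Summarize and prioritize the "
        "following flagged SIEM events, noting likely false positives:\n"
        + "\n".join(lines)
    )

events = [
    SiemEvent("e1", "10.0.0.5", "Multiple failed logins", 4),
    SiemEvent("e2", "203.0.113.9", "Possible data exfiltration", 9),
]
prompt = build_triage_prompt(events)
```

The resulting prompt would then be sent to a model through whatever API client the organization uses; keeping the sorting and truncation on the analyst’s side, rather than in the prompt, bounds token usage and ensures the highest-severity events are always presented to the model first.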


A version of this article originally appeared on ITWire.