ChatGPT is proving to be something of a double-edged sword when it comes to cybersecurity.
Threat actors employ it to craft realistic phishing emails more quickly, while white hats use large language models (LLMs) like ChatGPT to help gather intelligence, sift through logs, and more.
The trouble is that it takes significant know-how for a security team to use ChatGPT to good effect, while even semi-knowledgeable hackers can use it to craft ever more realistic phishing emails.
One way to level the playing field is to bring more white hat defenders onto your side in the form of a managed detection and response (MDR) service provider backed by a team of security researchers who know how to effectively add ChatGPT and other generative AI models to the arsenal of tools they already use.
The two sides of the ChatGPT issue were illustrated nicely in a recent Trustwave webinar, “ChatGPT: The Risks and Myths of Chatbot AIs.” Karl Sigler, Senior Security Research Manager at Trustwave, went through a number of ways threat actors are employing LLMs such as ChatGPT and how the good guys, including members of the Trustwave SpiderLabs research team, are employing the technology to help in their work.
As Sigler explains, one of the things ChatGPT was trained to understand well is the English language, including writing and editing.
That is a big help to threat actors who create phishing emails but whose native tongue is not English. Bogus emails filled with spelling or grammatical errors are obvious clues that tip off vigilant employees. Now attackers can use ChatGPT to craft error-free emails that read well, making them harder for unsuspecting end users to detect.
The ability to write more realistic emails gives threat actors a powerful weapon to use in their business email compromise (BEC) efforts, including honing complex social engineering pretexting attacks that can help them fool even the most vigilant among us.
Additionally, threat actors are using ChatGPT to help refine their exploit code to make it more effective. ChatGPT does have guard rails intended to prevent its use for nefarious purposes, Sigler said. If you ask it to “write me some code to exploit this vulnerability,” it’ll tell you it’s not intended for that purpose. But it’s not difficult to craft prompts that get around those guard rails, such as asking ChatGPT to identify errors in code you’ve already written.
“Criminals definitely have a new tool they can use to craft phishing emails, maybe check exploit code or modify code,” Sigler said.
Security teams can put ChatGPT and other generative AI tools to work as well; for example, the technology can help gather insights from data to aid incident investigation and research.
This can entail log analysis and curating events from security information and event management (SIEM) systems. Given proper direction and input, ChatGPT can help threat hunters find relevant data far more quickly than they can on their own.
For instance, say a SIEM identifies 10 log entries that appear to be related to suspicious activity. Usually, a security analyst has to manually examine those entries to figure out what’s going on. Now they can dump this information into ChatGPT with instructions to prioritize each event and summarize what the entries mean collectively – a huge time-saver.
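As a rough illustration of that workflow, the sketch below feeds a few SIEM-flagged log entries to an LLM and asks for a priority and a collective summary. It assumes the openai Python package and an OPENAI_API_KEY environment variable; the model name and log entries are placeholders, not a description of Trustwave's actual tooling.

```python
# Hypothetical sketch: ask an LLM to triage a handful of SIEM log entries.
# Assumes the openai package is installed and OPENAI_API_KEY is set;
# model name and log entries below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

log_entries = [
    "2024-05-01T10:02:11Z firewall DENY tcp 203.0.113.7:4444 -> 10.0.0.5:445",
    "2024-05-01T10:02:15Z auth FAILED login user=admin src=203.0.113.7",
    "2024-05-01T10:03:02Z endpoint powershell.exe spawned by winword.exe",
    # ... remaining entries flagged by the SIEM
]

prompt = (
    "You are assisting a SOC analyst. For each log entry below, assign a "
    "priority (high/medium/low) with a one-line reason, then summarize what "
    "the entries collectively suggest is happening.\n\n" + "\n".join(log_entries)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The value here is not the model's verdict, which still needs analyst review, but the speed of the first pass over noisy event data.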
Similarly, if a security team identifies a script that targets a given vulnerability, it can drop it into ChatGPT and ask it to identify what HTTP requests the script generates. From that, the team can likely garner information to create intrusion detection system (IDS) signatures. Here again, that saves a significant amount of manual effort.
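A similar hedged sketch for that second use case is below: handing a captured script to an LLM and asking it to enumerate the HTTP requests the script would generate, so the output can feed IDS signature writing. The file path, prompt, and model name are assumptions for illustration only.

```python
# Hypothetical sketch: ask an LLM what HTTP requests a captured exploit
# script would generate, as raw material for IDS signatures.
# The file path and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

with open("suspicious_script.py") as f:  # path is illustrative
    script = f.read()

prompt = (
    "Without executing it, list every HTTP request this script would send "
    "(method, path, notable headers, and body patterns), one per line, so "
    "the output can be turned into Snort/Suricata rules:\n\n" + script
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```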
However, it’s not hard to determine who benefits more from ChatGPT in these examples. Threat actors need little technical know-how to take advantage of ChatGPT. All they’re doing is asking it to write better emails, subject lines, and the like. Writing exploit scripts takes a bit more experience, but those are already widely available for sale.
The good guys, however, need significant cybersecurity chops to effectively use ChatGPT.
Sigler’s tips in the webinar offer valuable insight to any security operations center (SOC) team. But most companies don’t have a SOC, and most struggle to hire and retain knowledgeable security professionals.
For them, a better defense is to hire professional help, such as from an MDR service provider staffed with professionals like those on the Trustwave SpiderLabs team. These folks do indeed know how to make effective use of ChatGPT, among many other tools, to help them consistently identify and respond to threats in an organization’s environment.
To learn more about how to defend your organization and get Sigler and the Trustwave SpiderLabs team working for you, visit our Managed Detection and Response page.