Trustwave Blog

The Impact of Artificial Intelligence on Cybersecurity: Opportunities and Threats

Written by Ed Williams | Mar 7, 2024

The integration of artificial intelligence (AI) is driving a significant transformation in business operations. Through automation, data analysis, and predictive capabilities, AI is reshaping how companies operate as they look to spur productivity.

However, while businesses stand to gain myriad benefits from using AI, organizations will require consistent education to remain prepared for the expected wave of AI-supported cyber threats, which could have an untold impact if left unchecked and unguarded.

 

AI's Role in Cybersecurity

Ongoing research suggests a rapid rise in the adoption of AI for cybersecurity, which makes perfect sense given the numerous benefits the sector stands to gain from AI integration.

A recent study conducted by BlackBerry shows how seriously cybersecurity professionals are taking AI. It found that 82% of IT decision-makers surveyed plan to allocate budget for AI-driven security by 2025, and almost half intended to do so by the end of last year.

AI is helping to bolster cyber defenses with predictive capabilities and rapid pattern recognition. These automated systems and measures are making incredibly quick threat detection and response the norm in cybersecurity. With progress in AI showing no sign of abating, this positive impact on the cybersecurity industry should continue.

 

Revealing the Unseen Risks of AI

As highlighted above, there is a sustained push to make AI services that benefit the wider public and keep them safe. However, bad actors will always be in the shadows, ready to manipulate any new technology, and AI is no exception.

The elite Trustwave SpiderLabs team has reported on this growing threat landscape, including the wide distribution of AI-backed tools such as WormGPT and FraudGPT, which let inexperienced hackers effortlessly acquire software that generates malicious code and supports other cybercriminal activities.

Likewise, the already advanced GPT-4 model has proven adept at impersonating customers, raising several concerns about authenticity for the foreseeable future. With discussions ongoing about the development of GPT-5, touted as a potentially life-changing product, this apprehension will likely continue until comprehensive safeguards and AI regulation are in place.

From what we know, bad actors primarily use AI's natural language processing to create hyper-realistic and highly personalized phishing emails. This has been highlighted in Trustwave's most recent report on cybersecurity threats in the hospitality sector, as well as in a number of other industries such as healthcare, financial services, and manufacturing.

In the report, we found that threat actors use large language models (LLMs) to develop more sophisticated social engineering attacks, since LLMs can produce highly personalized and targeted correspondence such as instant messages and emails.

This AI-generated content predominantly contains malicious links or attachments, primarily HTML attachments, which threat actors employ for HTML smuggling, credential phishing, and redirection.

Trustwave also discovered in recent research that around 33% of these HTML files employ obfuscation as a means of defense evasion. We fully expect an uptick in the frequency of phishing attacks, and detection will become more challenging as AI capabilities advance. Another disconcerting trend is the increasing prevalence of deepfake technology, which enables the creation of counterfeit audio or video content that deceives customers by mimicking authenticity.
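To make the obfuscation point concrete, the sketch below shows the kind of simple heuristic a defender might apply to an HTML attachment before deeper analysis. This is a hypothetical illustration, not Trustwave's detection logic: the patterns (in-browser base64 decoding, character-code assembly, very long base64-like blobs) are common hallmarks of HTML smuggling, and a real product would combine many more signals.

```python
import re

# Hypothetical heuristics for obfuscated HTML attachments. Each pattern is a
# common hallmark of HTML smuggling, where the payload is assembled or
# decoded in the victim's browser rather than delivered directly.
SUSPICIOUS_PATTERNS = [
    re.compile(r"atob\s*\("),             # in-browser base64 decoding
    re.compile(r"unescape\s*\("),         # legacy URL-escape decoding
    re.compile(r"fromCharCode"),          # payload assembled from char codes
    re.compile(r"[A-Za-z0-9+/=]{500,}"),  # very long base64-like blob
]

def looks_obfuscated(html: str) -> bool:
    """Return True if the HTML matches any simple obfuscation heuristic."""
    return any(p.search(html) for p in SUSPICIOUS_PATTERNS)
```

A flagged attachment would then be routed to sandboxing or manual review; the heuristic alone is far too coarse to block on.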

 

Being Intelligent for the Year Ahead

In the face of AI's latest advancements, cybersecurity must progress toward a more proactive, intelligent, and efficient framework. By optimizing security processes such as threat response, threat hunting, and large-scale data analysis, AI presents significant potential for enhancing cyber defenses.

AI can help consultants work through time-intensive tasks more productively, scouring extensive datasets in real time, identifying patterns, and detecting anomalies that could indicate threats. The emergence of AI-supported products has enabled cybersecurity professionals to foresee and mitigate risks before they inflict substantial damage.
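The anomaly detection described above can be illustrated with a deliberately tiny statistical sketch. This is a toy stand-in for what an AI-assisted platform does at scale, assuming (hypothetically) that we are watching hourly login counts and flagging values that sit far from the mean.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return indices of values more than `threshold` standard
    deviations from the mean.

    A minimal stand-in for statistical anomaly detection over an
    event series, e.g. hourly login counts for one account.
    """
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:  # perfectly flat series: nothing stands out
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]
```

For example, `flag_anomalies([10, 12, 11, 9, 10, 11, 120])` flags only the final spike. Production systems replace this z-score with learned models, but the workflow (baseline, deviation, alert) is the same.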

Nevertheless, cybersecurity consultants must remain mindful that hackers can exploit these same capabilities. Maintaining a robust defense means frequently conducting thorough vulnerability scans and addressing any gaps identified by AI systems.

Moreover, consultants must explore how AI can simulate cyberattacks, providing valuable insights for a resilient incident response strategy.

This activity ensures that, in the event of an organization falling victim to an AI-powered cyberattack, resources are in place to mitigate the threat and minimize the fallout from the breach.

Without a doubt, the ongoing surge in artificial intelligence technology is reshaping the cybersecurity landscape as well as a plethora of other sectors.

While AI empowers organizations to detect threats and streamline security processes more efficiently, it simultaneously equips cybercriminals with new capabilities to inflict serious damage on a brand and its customers. AI and cybersecurity experts must work hand in hand to ensure that the products of the future are well regulated and do not fall into the hands of bad actors.

A version of this article originally appeared on Computing.com