
Trustwave Answers 11 Important Questions on ChatGPT

ChatGPT can arguably be called the breakout software introduction of the last 12 months, generating both amazement at its potential and concerns that threat actors will weaponize it as an attack platform.

Karl Sigler, Senior Security Research Manager, SpiderLabs Threat Intelligence, along with Trustwave SpiderLabs researchers, has been tracking ChatGPT’s development over the last several months. So, we decided to sit down with him to answer some of the more pressing questions regarding this very powerful and influential software.

For some quick background on ChatGPT, please check out these SpiderLabs blogs:

And for a very in-depth look at ChatGPT, please watch Sigler’s recent webinar.

  1. What are your primary thoughts and concerns right now when it comes to ChatGPT?

Sigler: Now is the time to prepare for ChatGPT. It hasn't even been out for a year, yet it is taking up all the oxygen in the room. And for good reason. ChatGPT is very powerful. It does have its flaws, but if this is where we're starting from, then just imagine where we will be in another five years.

Start looking at ChatGPT to see how you might use it. We're already seeing exponential growth and we’re eventually going to see a progress line that goes straight up. How will this then look when we start using AI to train AI? Maybe humans aren't the best at figuring out how to train AI. Maybe AIs will start training each other in a much better fashion than we can even imagine.

So, it's going to be interesting, but there's a lot of opportunities on both sides.

While the criminals definitely have a new tool that they can use to improve phishing emails and check or modify malware, there are also plenty of use cases for the good guys. Security staffers will have AI to process massive log datasets and to get a good idea of what an exploit might do without having to run the code in a sandbox. In certain situations, they can even use ChatGPT to identify whether threat actors are using ChatGPT against their organization.

Right now is the time to experiment with ChatGPT. Type in a question and see the response. Since basic access to ChatGPT is free, I highly recommend you set up an account on openai.com and play around with it yourself.
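As a concrete illustration of the defender use cases Sigler mentions, the sketch below asks a chat model to explain what a suspicious command appears to do, without running it. This is a minimal example under assumptions, not Trustwave tooling: it assumes the pre-1.0 openai Python package, an OPENAI_API_KEY environment variable, and a truncated, illustrative PowerShell one-liner.

```python
# Minimal sketch: ask a chat model what a suspicious command appears to do,
# without executing it. Assumes the pre-1.0 `openai` package and an
# OPENAI_API_KEY environment variable; model name and prompt are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Truncated, illustrative example of an encoded PowerShell command.
SUSPICIOUS_SNIPPET = "powershell -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0A..."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a security analyst. Explain what the given command "
                "appears to do and whether it looks malicious."
            ),
        },
        {"role": "user", "content": SUSPICIOUS_SNIPPET},
    ],
    temperature=0,  # keep the analysis as deterministic as possible
)

print(response["choices"][0]["message"]["content"])
```

The model’s answer is a starting point for an analyst, not a verdict; anything flagged this way would still need to be confirmed with normal sandboxing and reverse engineering.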

  2. What happens with the information that is uploaded to ChatGPT? Is it a good idea to upload confidential information?

Sigler: Privacy is a big question with ChatGPT, whether in the data sets used for training, in how people interact with it, or in how confidential those conversations really are.

Users must remember that when you're using ChatGPT, you're also training ChatGPT. Anything you type in goes back to ChatGPT to help fine-tune the software, so it's a feedback loop. Don't type anything confidential into ChatGPT, and don't ask it specific questions about your family or your job unless you expect that information to become public.

  3. How are attackers weaponizing this platform?

Sigler: In every possible way. Threat actors are using it across the board. As we know, many of the larger cybercriminal groups have people dedicated to performing very specific tasks within their organizations. They have a manager who runs everything, with tasks like ensuring malware is undetectable and campaigns are well organized; there are also coders and translators. Other attack groups don't have a lot of people to handle these jobs, so ChatGPT will help them fill this skills gap. Perhaps their coder isn't the best coder but knows enough to ask ChatGPT the best way to write a function. This will likely lead to threat actors writing better code and producing better translations for communications and social engineering emails.

Luckily, they can't use ChatGPT live in an attack (at least not directly). It's still isolated in a bubble, so it's just going to be there as back-end support.

  4. What if the person teaching the AI has an agenda or provides biased or false information? Won't the AI then have a false education and thus be wrong?

Sigler: Absolutely, 100%. The AI will only be as good as its training data set. AI researchers and experts have already demonstrated this: by feeding a model bad data, they've created AIs with a completely misconceived view of reality, including some that are really hateful, horrible, and violent. An AI will only be as good as its dataset, and it's very easy to get bad data.

  5. Since ChatGPT is relatively new and security is unsettled, are companies implementing policies banning its use on corporate networks?

Sigler: You do see this more and more, for instance on college campuses, where ChatGPT bans have already been put in place to limit potential cheating. The entire country of Italy briefly banned its use over privacy concerns, and we're seeing companies thinking about banning it for internal use.

From a security perspective, I think privacy would be the primary concern. An organization may not want its customer service team using the generally available version of ChatGPT to handle specific customer requests and issues.

Recently, a compromise of OpenAI caused a public leak of individual prompts, which showed Samsung developers accidentally leaking intellectual property on the platform. There are going to be all kinds of issues like these.

  6. What changes do you envisage from a SOC perspective regarding detecting issues, say signatures, etc.?

Sigler: The changes are going to be massive, because your SOC and SIEM are dealing with huge data sets: going through just one customer's data set means working across their network detections versus their host-based detections, and then across their Windows systems versus their Linux systems. It's potentially a big help.

Where ChatGPT can come into play is by helping companies that are dealing with big data and have complex questions to answer that are technical, but not necessarily opinion-based. This is where ChatGPT shines. ChatGPT can help by digging through thousands of lines of incoming alerts and deciding whether the security team needs to be concerned. I can even see ChatGPT running in the background as a Tier One support analyst.
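As a rough sketch of that "Tier One in the background" idea, the example below batches a few SIEM-style alerts and asks the model which ones deserve escalation. The alert fields, model name, and use of the pre-1.0 openai package are assumptions for illustration, not a description of any production SOC integration.

```python
# Hedged sketch: batch a handful of alerts and ask the model for a first-pass
# triage. Alert fields are invented; assumes the pre-1.0 `openai` package.
import json
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

alerts = [
    {"id": 101, "rule": "Multiple failed logons", "src": "10.0.4.17", "count": 240},
    {"id": 102, "rule": "New scheduled task created", "host": "FIN-WS-22"},
    {"id": 103, "rule": "Outbound traffic to known-bad IP", "dst": "203.0.113.9"},
]

prompt = (
    "Act as a Tier One SOC analyst. For each alert below, return the alert id, "
    "a severity (low/medium/high), and one sentence on whether it should be "
    "escalated to a human analyst.\n\n" + json.dumps(alerts, indent=2)
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])
```

Output like this would still need a human in the loop; the value is in cutting down the volume an analyst has to read before deciding what matters.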

  7. Can ChatGPT generate test data?

Sigler: Absolutely, you can generate test data, and that's really a tremendous feature. We've asked for samples of specific Windows events, and we’ve even had it test exploits against a vulnerable (but fully virtual) victim system and post the results.
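For a sense of what that can look like in practice, here is a minimal sketch of asking the model for synthetic Windows Security events to exercise SIEM parsing rules. Event ID 4625 (failed logon) is just an illustrative choice; the model name and pre-1.0 openai package usage are assumptions.

```python
# Minimal sketch: generate fictional Windows Security events as test data.
# Assumes the pre-1.0 `openai` package; the prompt and model are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": (
                "Generate five realistic but entirely fictional Windows Security "
                "event log entries for Event ID 4625 (failed logon) as a JSON "
                "array, with varied usernames, source IPs, and timestamps."
            ),
        }
    ],
    temperature=0.7,  # a little variety is desirable for test data
)

print(response["choices"][0]["message"]["content"])
```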

  8. Can ChatGPT bypass paywalls by entering the link to synthesize the information?

Sigler: No, it cannot. ChatGPT can't directly access the Internet, so it cannot bypass paywalls that way.

  9. Can this technology be used in-house so nonpublic info isn't exposed to the Internet?

Sigler: Not at this point, although there are several for-pay services that claim to isolate and privatize your interactions. Microsoft and OpenAI are both promising such a service in the near future.

  10. Can you jailbreak ChatGPT?

Sigler: It depends on what you mean by jailbreak. Do you mean breaking out of ChatGPT entirely in order to gain access to the operating system and data behind it? Then yes. A vulnerability in ChatGPT allowed a malicious actor to gain access to other users’ prompts on the backend. This vulnerability was quickly patched.

In ChatGPT parlance, however, “jailbreak” generally means bypassing safety protocols by forcing ChatGPT to answer questions it would normally be prohibited from answering (e.g., how to build a bomb or how to make methamphetamine). If that’s what you mean by “jailbreak,” then the answer is also yes. Safety control bypasses are commonplace.

  11. Will there be a time when ChatGPT will be limited or confined within a single organization?

Sigler: Absolutely, and I expect this probably sooner rather than later. It'll be a paid service. There'll be a whole new business providing very specific APIs for the AI. We will start getting isolated instances of ChatGPT that can pull from an organization's larger data set but also have isolated access to its private data.

This will create security concerns, but we're going to see this happen. I think a lot of the larger Fortune 500 companies that are already involved with the data programs for the upcoming ChatGPT-4 are already using it in an isolated fashion.
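One speculative way to read the "isolated instance with access to private data" idea is an application that keeps internal data out of training entirely and only injects it into the prompt at request time. The sketch below assumes the pre-1.0 openai package; the internal policy snippet, model name, and prompt are hypothetical.

```python
# Speculative sketch: keep private data out of training and supply it only as
# request-time context. The "internal_context" string is hypothetical; assumes
# the pre-1.0 `openai` package and an OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# In a real deployment this would come from an internal, access-controlled store.
internal_context = (
    "Policy excerpt: production database credentials rotate every 30 days; "
    "rotation is owned by the platform engineering team."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the provided internal context. If the answer "
                "is not in the context, say you do not know.\n\n" + internal_context
            ),
        },
        {"role": "user", "content": "How often do production database credentials rotate?"},
    ],
    temperature=0,
)

print(response["choices"][0]["message"]["content"])
```

Note that this pattern only addresses architecture; whether prompt data is retained or used for training still depends on the provider's terms, which is exactly the privacy question raised earlier.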

 

WEBINAR

ChatGPT: The Risks and Myths of Chatbot AIs

Artificial intelligence (AI) has rapidly become a key component of many organizations' digital transformation strategies. However, with the rise of AI comes a new set of security challenges and threats.

