The world is just beginning to understand how the intersection of artificial intelligence (AI) and data privacy will impact organizations, their employees, and those who use their services. As of June 2024, there are simply too many unknowns.
AI use and capability are growing at a near-exponential rate, and this growth is making it difficult for governments and organizations to develop frameworks that ensure AI systems and large language models (LLMs) are used properly and comply with the privacy regulations already in place.
An International Association of Privacy Professionals (IAPP) study clearly shows public concern. The 2023 IAPP Privacy and Consumer Trust Report found that 68% of consumers globally are either somewhat or very concerned about their privacy online, and most find it difficult to understand what types of data about them are being collected and used. The diffusion of AI is one of the newest factors driving these concerns, with 57% of consumers globally agreeing that AI poses a significant threat to their privacy. Additionally, a recent Pew Research Center survey found that 81% of consumers believe the information AI companies collect will be used in ways that make people uncomfortable and that the collectors did not originally intend.
Barry O'Connell, General Manager EMEA, Trustwave, noted that a broader discussion needs to be had, and decisions made, regarding privacy, but both must be approached intelligently and with a great deal of nuance.
"As AI becomes more pervasive, an organization needs to understand how this technology is managed. Having governments provide frameworks on data privacy, employment legislation, etc., makes sense," O'Connell said. "Having governments make decisions on specific functionality or technology is sub-optimal, not to mention nearly always reactionary. Challenges like the ability for a system to create an explorable timeline of a PC's past usage will be coming thick and fast."
What organizations need, O'Connell said, is an AI adoption framework and a related governance structure to assess how this technology should and should not be used within the context of providing a safe and secure environment for the employees, data, IP, customers, partners, etc. Legislation should form the parameters, but organizations must take responsibility for the situational use of AI within their business.
Ed Williams, Vice President EMEA, Trustwave SpiderLabs, agreed, saying he does not want to see governments take a heavy-handed approach, as AI is a transformational technology and he would not want a nation to fall behind because of unfounded or short-sighted concerns.
Williams would like to see new data collection features turned "off" by default, allowing organizations and users to enable them as they see fit, maximizing privacy while ensuring necessary information is still collected. It's important, he said, to have a level of granularity that lets the organization and user differentiate between what is required and what isn't.
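As a rough illustration of that opt-in model, the sketch below shows how granular, off-by-default collection settings might look in practice. The feature names and the TelemetrySettings class are hypothetical, not a real product configuration:

```python
# A minimal sketch of the "off by default" idea: every telemetry
# feature starts disabled, and an organization or user must opt in
# at a granular, per-feature level. Feature names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TelemetrySettings:
    # All collection features default to off (privacy-preserving baseline).
    features: dict[str, bool] = field(default_factory=lambda: {
        "activity_timeline": False,   # e.g., an explorable PC usage history
        "usage_analytics": False,
        "crash_reports": False,
    })

    def enable(self, feature: str) -> None:
        """Opt in to a single feature; unknown names are rejected."""
        if feature not in self.features:
            raise KeyError(f"Unknown telemetry feature: {feature}")
        self.features[feature] = True

    def is_collected(self, feature: str) -> bool:
        return self.features.get(feature, False)

# An organization enables only what it deems necessary:
settings = TelemetrySettings()
settings.enable("crash_reports")                        # needed for support
assert not settings.is_collected("activity_timeline")   # stays off
```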
There is also a historical backdrop to these privacy concerns, as collecting data on workers is not a new idea.
"As a counterbalance to this, the context of where and when this is happening is key. It is worth remembering that ‘employee activity monitoring’ has been around for a while and is there to ensure appropriate behavior; think of the finance sector and insider trading as an example," Williams said. "The health service has monitoring to ensure that health care practitioners only view the records they are meant to review as part of their role.”
There are also security issues at play that governments must keep in mind when considering interfering with AI development.
"From a cybersecurity perspective, these tools are, and will continue to be, used by adversaries. This means that in addition to gaining an understanding of the legitimate use of these tools, organizations need to factor in their malicious use and how to protect against it. Bad actors don't play by the rules," O'Connell said. "No amount of legislation will stop cybercriminals and malicious nation-states from maximizing the use of this technology to achieve their goals."
The evolving landscape of AI necessitates a smarter and more responsive approach to cybersecurity. AI offers immense potential to fortify defenses by streamlining threat response, enabling proactive threat hunting, and supporting in-depth data analysis. Security professionals can leverage AI to automate time-consuming tasks such as sifting through vast datasets in real time, pinpointing patterns, and uncovering anomalies indicative of potential threats. These AI-powered tools empower consultants to anticipate and neutralize risks before they escalate.
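To make the anomaly-hunting idea concrete, the minimal sketch below uses scikit-learn's IsolationForest to flag an outlier in toy login telemetry; the features, values, and contamination setting are illustrative assumptions rather than production guidance:

```python
# A minimal sketch of AI-assisted anomaly detection on login telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login event: [hour_of_day, bytes_transferred_mb]
events = np.array([
    [9, 12], [10, 15], [11, 9], [14, 20], [15, 14],  # typical workday logins
    [3, 950],                                        # 3 a.m., ~1 GB moved
])

# contamination is the expected share of anomalies; tuned per environment.
model = IsolationForest(contamination=0.15, random_state=42).fit(events)
labels = model.predict(events)  # -1 marks an anomaly, 1 marks normal

for event, label in zip(events, labels):
    if label == -1:
        print(f"Anomalous event flagged for review: {event}")
```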
Trustwave is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience.