The U.S. Department of Homeland Security's (DHS) Cybersecurity and Infrastructure Security Agency (CISA) and the United Kingdom's National Cyber Security Centre (NCSC) today jointly released Guidelines for Secure AI System Development in partnership with 21 additional international partners.
"As more organizations begin adopting AI-based software to run their day-to-day business operations, it will become imperative that these solutions are designed from the ground up to be secure," said Bill Rucker, President of Trustwave Government Solutions. "Threat actors will attempt to exploit any security vulnerabilities, so we support these guidelines, as they will help give developers guidance on how to create secure products."
The increasing use of Artificial Intelligence (AI) spurred this international effort to create guidelines to help those creating systems that use AI make informed cybersecurity decisions at every stage of the development process.
"As nations and organizations embrace the transformative power of AI, this international collaboration, led by CISA and NCSC, underscores the global dedication to fostering transparency, accountability, and secure practices," CISA Director Jen Easterly. "The domestic and international unity in advancing secure by design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology revolution."
The document does not encompass all AI; it refers specifically to machine learning (ML) applications, with all types of ML falling within the guide's scope. For the purposes of the guide, the agencies define ML applications as those that involve software components (models) that allow computers to recognize and bring context to patterns in data without the rules being explicitly programmed by a human, and that generate predictions, recommendations, or decisions based on statistical reasoning.
"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," said NCSC CEO Lindy Cameron. "These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout."
The elite Trustwave SpiderLabs team has been at the forefront of tracking the development of legitimate AIs such as ChatGPT and Google's Bard, along with malicious versions such as WormGPT and FraudGPT.
The general takeaway is that the world has only begun to see what these still-young technologies will accomplish, both good and bad, and even that distinction is sometimes blurry.
For example, while ChatGPT can't directly access the Internet, it retains any private, confidential, or proprietary information entered into it. Every piece of data submitted to ChatGPT feeds a loop that trains the software, which means any private information used on the platform should no longer be considered private.
Additionally, even "good" AIs like ChatGPT are proving to be a helpful tool for threat actors who use them to create hard-to-spot phishing emails.
CISA, the NCSC, and their partners believe a structured approach is needed to help AI developers keep these systems as secure and useful as possible.
The document is aimed primarily at providers of AI systems, whether based on models hosted by an organization or using external application programming interfaces (APIs). However, all stakeholders, including data scientists, developers, managers, decision-makers, and risk owners, should be aware of these recommendations.
The government agencies focus on four key areas of the AI system development life cycle that they believe must be addressed to boost security: secure design, secure development, secure deployment, and secure operation and maintenance.
The key aspect is prioritizing security awareness among development staff. This includes giving users guidance on the unique security risks facing AI systems, which can be folded into standard InfoSec training, and training developers in secure coding techniques and secure, responsible AI practices.
Developers must also apply a risk management process that accounts for the potential impacts on the system, users, organizations, and society if an AI component is compromised or behaves unexpectedly.
Finally, as with all software, systems must be designed from the beginning for security, functionality, and performance.
In one respect, securing AI is no different from general cybersecurity practice: it requires vetting third-party suppliers, understanding where your assets are stored and who has access to them, and maintaining proper documentation of the processes being conducted.
If an organization relies on outside sources, it must assess and monitor the security of its AI supply chain across a system's life cycle and require suppliers to adhere to the same standards the organization applies to other software.
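As a simple illustration of what that kind of supply chain control can look like in practice, the sketch below verifies the checksum of an externally sourced model artifact against a pinned value before it is loaded. The file name, digest, and registry shown here are hypothetical placeholders, not anything specified in the guidelines.

```python
import hashlib
from pathlib import Path

# Hypothetical registry of approved model artifacts and their pinned SHA-256 digests.
# In practice this would live alongside your other dependency manifests.
PINNED_HASHES = {
    "sentiment-model-v3.onnx": "replace-with-the-supplier-published-sha256-digest",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> None:
    """Refuse to use a model artifact whose digest does not match the pinned value."""
    expected = PINNED_HASHES.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name} is not an approved artifact")
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"hash mismatch for {path.name}: got {actual}")

# verify_artifact(Path("models/sentiment-model-v3.onnx"))  # call before loading the model
```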
The document also recommends putting processes and controls in place to govern what data AI systems can access and to handle AI-generated content according to its sensitivity.
All AI models must have security baked in from the beginning. Developers can accomplish this by following standard cybersecurity best practices and by placing controls on the query interface to detect and prevent attempts to access, modify, or exfiltrate confidential information.
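As a rough sketch of what a query-interface control might look like, the example below screens prompts and responses for patterns that suggest sensitive data is being requested or leaked. The patterns and the `guarded_query` wrapper are assumptions made for illustration; a production deployment would lean on a proper data loss prevention engine rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real controls would be far more thorough.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),          # US SSN-like strings
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),   # credential-style key/value pairs
    re.compile(r"(?i)\b(internal|confidential)\b"),
]

def contains_sensitive(text: str) -> bool:
    """Return True if the text matches any pattern treated as sensitive."""
    return any(p.search(text) for p in SENSITIVE_PATTERNS)

def guarded_query(prompt: str, model_call) -> str:
    """Wrap a model call with simple input and output screening.

    `model_call` is any callable that takes a prompt string and returns the
    model's response string (a hypothetical stand-in for your API client).
    """
    if contains_sensitive(prompt):
        return "Request blocked: prompt appears to reference sensitive data."
    response = model_call(prompt)
    if contains_sensitive(response):
        return "Response withheld: output appears to contain sensitive data."
    return response
```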
Because no security program is perfect, developers must also create incident management procedures. These plans should cover different scenarios and be reassessed regularly as the system grows and evolves.
Companies should store critical digital resources in offline backups and train responders to assess and address AI-related incidents. High-quality audit logs and other security features or information must be provided to customers and users at no extra charge to enable their incident response processes.
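One way to make audit logs genuinely useful to responders is to emit a structured record for every model interaction. The minimal sketch below uses Python's standard logging module; the field names and the choice to hash prompts rather than store them verbatim are assumptions for the example, not requirements from the guidelines.

```python
import hashlib
import json
import logging
import time

audit_logger = logging.getLogger("ai_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.FileHandler("ai_audit.log"))

def log_interaction(user_id: str, prompt: str, decision: str) -> None:
    """Write one structured audit record per model interaction.

    The prompt is hashed rather than stored verbatim so the log can be
    shared with incident responders without re-exposing sensitive input.
    """
    record = {
        "timestamp": time.time(),
        "user_id": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "decision": decision,  # e.g. "allowed", "blocked", "withheld"
    }
    audit_logger.info(json.dumps(record))
```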
Over the long haul, AI developers and operators must take several steps to ensure the software operates properly and safely. These include monitoring the system's behavior by measuring the outputs and performance of both the model and the system, so that sudden and gradual changes in behavior affecting security can be observed.
Such scrutiny will help account for and identify potential intrusions and compromises, as well as natural data drift.
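A very simple form of this monitoring is to track a summary statistic of the model's outputs over time and flag when it moves outside an expected band. The sketch below assumes the model exposes a numeric confidence score per prediction; the window size, baseline, and tolerance are arbitrary examples, and the alerting hook is hypothetical.

```python
from collections import deque
from statistics import mean

class DriftMonitor:
    """Flag sudden or gradual shifts in a model's average confidence score."""

    def __init__(self, window: int = 500, baseline: float = 0.85, tolerance: float = 0.10):
        self.scores = deque(maxlen=window)   # rolling window of recent scores
        self.baseline = baseline             # expected average confidence (assumed)
        self.tolerance = tolerance           # acceptable deviation before alerting

    def record(self, confidence: float) -> bool:
        """Record one prediction's confidence; return True if drift is suspected."""
        self.scores.append(confidence)
        if len(self.scores) < self.scores.maxlen:
            return False  # not enough data yet to judge drift
        return abs(mean(self.scores) - self.baseline) > self.tolerance

# monitor = DriftMonitor()
# if monitor.record(prediction_confidence):
#     notify_security_team()  # hypothetical alerting hook
```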
Operators must also monitor the information the AI receives from outside sources. By examining the incoming data, the operators can ensure that privacy and data protection requirements are being met. Additionally, continuous data monitoring will likely spot adversaries' attempts to input malware or data designed to alter the AI's functionality.
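As an example of the kind of incoming-data check this implies, the short sketch below validates new training or fine-tuning records against an expected schema before they reach the model, quarantining anything unexpected for review. The schema and field names are assumptions made purely for illustration.

```python
# A minimal pre-ingestion check: reject records that do not match the expected
# schema, so unexpected fields or payloads never reach training or fine-tuning.
EXPECTED_SCHEMA = {"text": str, "label": str}   # assumed schema for this example

def validate_record(record: dict) -> bool:
    """Return True only if the record has exactly the expected fields and types."""
    if set(record) != set(EXPECTED_SCHEMA):
        return False
    return all(isinstance(record[field], kind) for field, kind in EXPECTED_SCHEMA.items())

def filter_incoming(records: list[dict]) -> list[dict]:
    """Keep only records that pass validation; quarantine the rest for review."""
    accepted, quarantined = [], []
    for record in records:
        (accepted if validate_record(record) else quarantined).append(record)
    if quarantined:
        print(f"{len(quarantined)} record(s) quarantined for manual review")
    return accepted
```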
Trustwave is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.