Trustwave and Cybereason Merge to Form Global MDR Powerhouse for Unparalleled Cybersecurity Value. Learn More
Get access to immediate incident response assistance.
Get access to immediate incident response assistance.
Trustwave and Cybereason Merge to Form Global MDR Powerhouse for Unparalleled Cybersecurity Value. Learn More
The introduction, adoption, and quick evolution of generative AI has raised multiple questions about implementing effective security architecture and the specific requirements for protecting all aspects of an AI environment as more and more organizations begin using this technology.
Recent security reports on vulnerabilities that expose Large Language Model (LLM) components and jailbreaks for bypassing prompting restrictions have further shown the need for AI defenses. Luckily, while there are some unique challenges to protecting AI architectures, they still require the same security protections as any other enterprise application.
Let’s discuss AI Defenses here in terms of:
Gartner has described Generative AI risks as shown in the image below. You may notice there’s not much difference in this example compared to a typical enterprise application containing traditional data and application layers. However, the image points out specific risks with this type of application. For example, the input and output from a user’s prompts can present unique risks such as data exfiltration and disinformation.
Image 1: Gartner Generative AI Security Risks
The use of AI doesn't change the foundations of security architecture best practices. Using frameworks like Microsoft’s Well Architected Framework (WAF) still apply. For example, an AI architecture may consist of a SaaS backend or a virtual machine running an operating system with a front-end web UI. This means a security architect can still design protections for SaaS and VMs in their traditional manner. Familiarity with the following list of security topics is relevant to AI security:
Security benchmarks such as Microsoft’s Cloud Security Benchmark or NIST CSF still apply to AI protections. Cloud vendors and compliance organizations are also developing AI-specific benchmarks. Benchmarks are built into many cloud provider services. For example, Microsoft’s Azure cloud provides security benchmarks in its Defender for Cloud service, and Purview provides AI-specific benchmarks related to data protection.
MITRE has created a new knowledge base of 'attack techniques' that should be considered and tested against your AI infrastructure. This is a great tool for understanding and planning defenses for AI-related threats.
Image 2: Mitre’s ATLAS Matrix for AI Tactics and Techniques
Security Providers may offer an AI Readiness Review to help a company prepare for AI deployments. When choosing a vendor for evaluating your AI designs, consider a multilayered approach that includes architecture, auditing, and implementation checks, so the result is a practical, guided solution that is tailored to your implementation.
Based on the topics above, here’s a table presenting examples of AI risks and potential solutions. Consider creating your own reference table as a validation checklist for AI deployments.
AI Risks |
Possible Security Solutions |
Data Exfiltration |
Use of EDR, DLP, and CASB systems. Implement robust data protection policies. |
Disinformation from User Prompts |
Implement identity management to verify user authenticity. Use SIEM/SOAR for monitoring and response. |
Vulnerabilities in LLM Components |
Regular vulnerability testing and validation with tools like Mitre's ATLAS Knowledge Base. |
Jailbreaks Bypassing Prompting Restrictions |
Apply Zero Trust principles to all AI deployments. Use CASB. |
AI Model Theft |
Encryption of AI models both at rest and in transit. EASM for external attack surface management. |
Misuse of AI Architectures |
Governance, compliance, and control measures. Security operations processes and procedures. |
Injection of Malicious Data or Code |
DevSecOps and CI/CD pipeline security measures. CSPM (Cloud Security Posture Management). |
Unauthorized Access to AI Systems |
Identity and Access Management solutions. Incident Response planning and implementation. |
Insufficient Logging and Monitoring |
Implement logging and monitoring as per Azure OpenAI model recommendations. |
Non-compliance with Regulatory Requirements |
Compliance auditing and AI Audit tools like Microsoft Purview AI Audit. |
Table 1: AI Risks and recommended security solutions for each
Although there are some unique challenges to protecting AI architectures, they still require the same security protections as any other enterprise application. Follow security best practices and then build a security framework specific to your application’s needs.
References
David Broggy is Senior Solutions Architect, Implementation Services at Trustwave with over 21 years of experience. He holds multiple security certifications and won Microsoft's Most Valuable Professional (MVP) Award for Azure Security. Follow David on LinkedIn.
Trustwave is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.
Copyright © 2024 Trustwave Holdings, Inc. All rights reserved.