SpiderLabs Blog

Sentinels of Ex Machina: Defending AI Architectures

Written by David Broggy | Aug 5, 2024 1:00:00 PM

The rapid introduction, adoption, and evolution of generative AI has raised many questions about how to implement effective security architecture and what it takes to protect every aspect of an AI environment as more organizations adopt the technology. Recent security reports on vulnerabilities that expose Large Language Model (LLM) components, along with jailbreaks that bypass prompting restrictions, have further demonstrated the need for AI defenses. Fortunately, while protecting AI architectures presents some unique challenges, these systems still require the same security protections as any other enterprise application.

Let’s discuss AI defenses here in terms of:

  • Risks – What unique risks can be associated with AI architectures?
  • Frameworks – What is different about the architecture frameworks used for AI models?
  • Benchmarks – Are there any benchmarks for validating best practices?
  • Vulnerability Testing and Validation – What are attackers looking for when targeting AI deployments?


AI Security Risks

Gartner has described generative AI risks as shown in the image below. At first glance, this example differs little from a typical enterprise application with traditional data and application layers. However, the image calls out risks specific to this type of application: for example, the input and output of a user’s prompts can introduce unique risks such as data exfiltration and disinformation.

Image 1: Gartner Generative AI Security Risks
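
To make the prompt input/output risk concrete, here is a minimal sketch of a DLP-style output filter that scans model responses for sensitive patterns before returning them to the user. The patterns and function names are illustrative only; a production deployment would rely on the detection rules of a real DLP or CASB product.

```python
import re

# Hypothetical DLP-style patterns; a real deployment would use the
# detection rules of its DLP/CASB product rather than ad-hoc regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|key)-[A-Za-z0-9]{16,}\b"),
}

def screen_llm_output(text: str) -> str:
    """Redact sensitive matches from an LLM response before it reaches the user."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

if __name__ == "__main__":
    demo = "Contact jane.doe@example.com, SSN 123-45-6789."
    print(screen_llm_output(demo))
```

The same screening can be applied symmetrically to inbound prompts, which helps with the disinformation and data-exfiltration paths shown in the Gartner diagram.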


Security Best Practices Frameworks

The use of AI doesn't change the foundations of security architecture best practices; frameworks like Microsoft’s Well-Architected Framework (WAF) still apply. For example, an AI architecture may consist of a SaaS backend or a virtual machine running an operating system with a front-end web UI, so a security architect can still design protections for SaaS and VMs in the traditional manner. Familiarity with the following security topics is relevant to AI security (a brief example follows the list):

  • EDR – Endpoint Detection and Response
  • Identity Management
  • Data Protection and Data Loss Prevention (DLP)
  • CSPM – Cloud Security Posture Management
  • CASB – Cloud Access Security Broker
  • SIEM – Security Information and Event Management
  • DevSecOps – security for DevOps
  • Encryption
  • EASM – External Attack Surface Management
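
As a brief example of how one of these controls maps onto an AI workload, the sketch below shows how an AI application front end might emit structured security events for a SIEM to ingest. The field names and event shape are illustrative assumptions, not any vendor's schema.

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_app.security")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def log_prompt_event(user_id: str, prompt: str, verdict: str) -> None:
    """Emit a JSON security event for each prompt so a SIEM can alert on it."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "llm_prompt",
        "user_id": user_id,
        "prompt_length": len(prompt),  # log metadata, not raw prompt text
        "verdict": verdict,            # e.g. "allowed", "blocked_by_dlp"
    }
    logger.info(json.dumps(event))

log_prompt_event("user-42", "Summarize our Q3 earnings draft", "allowed")
```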


Security Best Practices Benchmarks

Security benchmarks such as Microsoft’s Cloud Security Benchmark or NIST CSF still apply to AI protections. Cloud vendors and compliance organizations are also developing AI-specific benchmarks. Benchmarks are built into many cloud provider services. For example, Microsoft’s Azure cloud provides security benchmarks in its Defender for Cloud service, and Purview provides AI-specific benchmarks related to data protection.
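
For teams that want to track benchmark posture programmatically, here is a minimal sketch that reads the Defender for Cloud secure score through the Azure Resource Manager REST API (the Microsoft.Security/secureScores resource type). The subscription ID is a placeholder, and the API version should be verified against current Microsoft documentation.

```python
import requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
API_VERSION = "2020-01-01"  # verify against current Microsoft.Security docs

def get_secure_scores() -> list[dict]:
    """Fetch Defender for Cloud secure scores for one subscription."""
    token = DefaultAzureCredential().get_token("https://management.azure.com/.default")
    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        f"/providers/Microsoft.Security/secureScores?api-version={API_VERSION}"
    )
    resp = requests.get(url, headers={"Authorization": f"Bearer {token.token}"})
    resp.raise_for_status()
    return resp.json().get("value", [])

for score in get_secure_scores():
    print(score.get("name"), score.get("properties", {}).get("score"))
```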


Vulnerability Testing and Validation - MITRE's ATLAS Knowledge Base

MITRE has created ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), a knowledge base of attack tactics and techniques that should be considered and tested against your AI infrastructure. It is a great tool for understanding and planning defenses against AI-related threats.

Image 2: MITRE’s ATLAS Matrix for AI Tactics and Techniques
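
One way to operationalize ATLAS is to map the techniques most relevant to your deployment to the tests and mitigations that cover them. The sketch below encodes a few such mappings; the technique IDs and names should be verified against atlas.mitre.org, as the matrix evolves.

```python
# Minimal sketch: map ATLAS techniques to the tests and controls that cover them.
# Verify IDs and names against https://atlas.mitre.org before use.
ATLAS_COVERAGE = {
    "AML.T0051": {  # LLM Prompt Injection
        "test": "Red-team prompts that try to override system instructions",
        "mitigation": "Input validation and prompt isolation",
    },
    "AML.T0054": {  # LLM Jailbreak
        "test": "Replay known jailbreak corpora against the model",
        "mitigation": "Output filtering and guardrail policies",
    },
    "AML.T0024": {  # Exfiltration via ML Inference API
        "test": "Rate-limit and query-pattern abuse testing",
        "mitigation": "Throttling, anomaly detection, CASB monitoring",
    },
}

def untested_techniques(results: dict[str, bool]) -> list[str]:
    """Return technique IDs with no passing test result recorded."""
    return [tid for tid in ATLAS_COVERAGE if not results.get(tid, False)]

print(untested_techniques({"AML.T0051": True}))
```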


Vendor AI Readiness Evaluations

Security providers may offer an AI readiness review to help a company prepare for AI deployments. When choosing a vendor to evaluate your AI designs, consider a multilayered approach that includes architecture, auditing, and implementation checks, so the result is a practical, guided solution tailored to your implementation.


AI Problem/Solution Matrix

Based on the topics above, here’s a table presenting examples of AI risks and potential solutions. Consider creating your own reference table as a validation checklist for AI deployments.

  • Data Exfiltration – Use EDR, DLP, and CASB systems; implement robust data protection policies.
  • Disinformation from User Prompts – Implement identity management to verify user authenticity; use SIEM/SOAR for monitoring and response.
  • Vulnerabilities in LLM Components – Perform regular vulnerability testing and validation with tools like MITRE's ATLAS knowledge base.
  • Jailbreaks Bypassing Prompting Restrictions – Apply Zero Trust principles to all AI deployments; use CASB.
  • AI Model Theft – Encrypt AI models both at rest and in transit; use EASM to manage the external attack surface.
  • Misuse of AI Architectures – Apply governance, compliance, and control measures, along with security operations processes and procedures.
  • Injection of Malicious Data or Code – Apply DevSecOps and CI/CD pipeline security measures; use CSPM.
  • Unauthorized Access to AI Systems – Use identity and access management solutions; plan and implement incident response.
  • Insufficient Logging and Monitoring – Implement logging and monitoring following Azure OpenAI recommendations.
  • Non-compliance with Regulatory Requirements – Use compliance auditing and AI audit tools like Microsoft Purview AI Audit.

Table 1: AI risks and recommended security solutions for each
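
A reference table like this can also double as a machine-checkable gate in your deployment process. Below is a minimal sketch that encodes a few rows of Table 1 as a checklist and reports missing controls; the risk and control names simply mirror the table and are otherwise arbitrary.

```python
# Minimal sketch of Table 1 as a machine-checkable deployment checklist.
CHECKLIST = {
    "Data Exfiltration": ["EDR", "DLP", "CASB"],
    "Jailbreaks Bypassing Prompting Restrictions": ["Zero Trust", "CASB"],
    "AI Model Theft": ["Encryption at rest", "Encryption in transit", "EASM"],
    "Insufficient Logging and Monitoring": ["SIEM logging"],
}

def validate_deployment(implemented: set[str]) -> dict[str, list[str]]:
    """Return the controls still missing for each risk in the checklist."""
    return {
        risk: [c for c in controls if c not in implemented]
        for risk, controls in CHECKLIST.items()
        if any(c not in implemented for c in controls)
    }

gaps = validate_deployment({"EDR", "DLP", "CASB", "Zero Trust", "SIEM logging"})
for risk, missing in gaps.items():
    print(f"{risk}: missing {', '.join(missing)}")
```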


Summary

Although there are some unique challenges to protecting AI architectures, they still require the same security protections as any other enterprise application. Follow security best practices and then build a security framework specific to your application’s needs.

About the Author

David Broggy, Trustwave’s Senior Solutions Architect, Implementation Services, was selected last year for Microsoft's Most Valuable Professional (MVP) Award.