Sentinels of Ex Machina: Defending AI Architectures

The rapid introduction, adoption, and evolution of generative AI has raised many questions about how to implement effective security architecture and what is required to protect every aspect of an AI environment as more organizations adopt this technology. Recent security reports on vulnerabilities that expose Large Language Model (LLM) components, and on jailbreaks that bypass prompting restrictions, further underscore the need for AI defenses. Fortunately, while protecting AI architectures presents some unique challenges, these systems still require the same security protections as any other enterprise application.

Let’s discuss AI defenses here in terms of:

  • Risks – What unique risks can be associated with AI architectures?
  • Frameworks – What is different about the architecture frameworks used for AI models?
  • Benchmarks – Are there any benchmarks for validating best practices?
  • Vulnerability Testing and Validation – What are the attackers looking for to exploit AI deployments?

 

AI Security Risks

Gartner has described generative AI risks as shown in the image below. Notice that this example differs little from a typical enterprise application with traditional data and application layers. However, the image calls out risks specific to this type of application. For example, the input and output of a user’s prompts present unique risks such as data exfiltration and disinformation.

Image 1: Gartner Generative AI Security Risks
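
These prompt input/output risks are often the first place to add controls. As a toy illustration (not any particular vendor's DLP product), a response filter might scan model output for sensitive patterns before it reaches the user. The patterns and function names below are illustrative assumptions:

```python
import re

# Illustrative patterns only -- a production DLP policy would be far broader
# and is typically enforced by a dedicated DLP/CASB service, not inline regexes.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_model_output(text: str) -> list:
    """Return the names of sensitive-data patterns found in an LLM response."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def redact(text: str) -> str:
    """Mask any matches before the response is shown to the user."""
    for pat in SENSITIVE_PATTERNS.values():
        text = pat.sub("[REDACTED]", text)
    return text
```

A real deployment would also log each match to the SIEM so exfiltration attempts can be investigated, not just blocked.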

 

Security Best Practices Frameworks

The use of AI doesn't change the foundations of security architecture best practices. Frameworks like Microsoft’s Well-Architected Framework still apply. For example, an AI architecture may consist of a SaaS backend or a virtual machine running an operating system with a front-end web UI, which means a security architect can still design protections for SaaS and VMs in the traditional manner. Familiarity with the following security topics is relevant to AI security:

  • Endpoint Detection and Response (EDR)
  • Identity Management
  • Data Protection and Data Loss Prevention (DLP)
  • Cloud Security Posture Management (CSPM)
  • Cloud Access Security Broker (CASB)
  • Security Information and Event Management (SIEM)
  • DevSecOps (security for DevOps)
  • Encryption
  • External Attack Surface Management (EASM)
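
One lightweight way to use this list is as a coverage checklist during an AI deployment review. A minimal sketch, where the deployed-control names are hypothetical:

```python
# The security domains listed above, treated as a coverage checklist.
REQUIRED_DOMAINS = {
    "EDR", "Identity Management", "Data Protection/DLP", "CSPM",
    "CASB", "SIEM", "DevSecOps", "Encryption", "EASM",
}

def coverage_gaps(deployed_controls: set) -> set:
    """Return the security domains an AI deployment has not yet covered."""
    return REQUIRED_DOMAINS - deployed_controls

# Example: a new LLM app that so far has only endpoint, log, and crypto coverage.
gaps = coverage_gaps({"EDR", "SIEM", "Encryption"})
```

The same structure extends naturally to per-domain maturity scores rather than a simple present/absent check.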

 

Security Best Practices Benchmarks

Security benchmarks such as the Microsoft Cloud Security Benchmark or the NIST Cybersecurity Framework (CSF) still apply to AI protections. Cloud vendors and compliance organizations are also developing AI-specific benchmarks, and benchmarks are built into many cloud provider services. For example, Microsoft Azure surfaces security benchmarks in its Defender for Cloud service, and Microsoft Purview provides AI-specific benchmarks related to data protection.

 

Vulnerability Testing and Validation - MITRE's ATLAS Knowledge Base

MITRE has created ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), a knowledge base of attack techniques that should be considered and tested against your AI infrastructure. It is a great tool for understanding and planning defenses against AI-related threats.

Image 2: MITRE’s ATLAS Matrix for AI Tactics and Techniques

 

Vendor AI Readiness Evaluations

Security providers may offer an AI readiness review to help a company prepare for AI deployments. When choosing a vendor to evaluate your AI designs, consider a multilayered approach that includes architecture, auditing, and implementation checks, so the result is a practical, guided solution tailored to your implementation.

 

AI Problem/Solution Matrix

Based on the topics above, here’s a table presenting examples of AI risks and potential solutions. Consider creating your own reference table as a validation checklist for AI deployments.

| AI Risks | Possible Security Solutions |
| --- | --- |
| Data Exfiltration | Use EDR, DLP, and CASB systems. Implement robust data protection policies. |
| Disinformation from User Prompts | Implement identity management to verify user authenticity. Use SIEM/SOAR for monitoring and response. |
| Vulnerabilities in LLM Components | Perform regular vulnerability testing and validation with tools like MITRE's ATLAS knowledge base. |
| Jailbreaks Bypassing Prompting Restrictions | Apply Zero Trust principles to all AI deployments. Use a CASB. |
| AI Model Theft | Encrypt AI models both at rest and in transit. Use EASM. |
| Misuse of AI Architectures | Governance, compliance, and control measures. Security operations processes and procedures. |
| Injection of Malicious Data or Code | DevSecOps and CI/CD pipeline security measures. CSPM. |
| Unauthorized Access to AI Systems | Identity and Access Management solutions. Incident response planning and implementation. |
| Insufficient Logging and Monitoring | Implement logging and monitoring per Azure OpenAI model recommendations. |
| Non-compliance with Regulatory Requirements | Compliance auditing and AI audit tools like Microsoft Purview AI Audit. |

Table 1: AI Risks and recommended security solutions for each
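
A reference table like this can also be kept as machine-readable data, so a deployment review can automatically flag risks with no corresponding control in place. A minimal sketch, abbreviated to three rows of Table 1 and using hypothetical control names:

```python
# Abbreviated version of Table 1 as a machine-readable checklist.
RISK_SOLUTIONS = {
    "Data Exfiltration": ["EDR", "DLP", "CASB"],
    "AI Model Theft": ["Encryption", "EASM"],
    "Unauthorized Access to AI Systems": ["IAM", "Incident Response"],
}

def unaddressed_risks(controls_in_place: set) -> list:
    """Return risks for which none of the recommended controls is deployed."""
    return [risk for risk, controls in RISK_SOLUTIONS.items()
            if not any(c in controls_in_place for c in controls)]
```

Running `unaddressed_risks({"DLP"})` would flag model theft and unauthorized access as open risks, since only the exfiltration row has a deployed control.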

 

Summary

Although protecting AI architectures presents some unique challenges, these systems still require the same security protections as any other enterprise application. Follow security best practices, then build a security framework specific to your application’s needs.

About the Author

David Broggy, Trustwave’s Senior Solutions Architect, Implementation Services, was selected last year for Microsoft's Most Valuable Professional (MVP) Award.

 
