Fort Knox for Your Data: How Elasticsearch X-Pack Locks Down Your Cluster – Part 2
In Part 1 of Fort Knox for Your Data: How Elasticsearch X-Pack Locks Down Your Cluster, we uncovered the dangers of running Elasticsearch with X-Pack disabled, highlighting the ease with which attackers can exploit unauthenticated endpoints.
Now, in Part 2, we will explore the other security features of X-Pack beyond authentication. We’ll dive into its auditing mechanisms, role-based access control (RBAC), IP filtering controls, field and document level security, and FIPS 140-2 compliance features. As a finishing touch, we’ll examine a real-world CTF challenge (HTB: HayStack), demonstrating how misconfigurations in X-Pack can leave Elasticsearch open to exploitation.
By the end of this article, you’ll have a comprehensive understanding of Elasticsearch’s X-Pack security features (both its strengths when enabled and its weaknesses when disabled), arming you with the knowledge to secure your deployments effectively.
Auditing Features of X-Pack
As shown in Figure 1, to enable the X-Pack auditing feature of Elasticsearch, set `xpack.security.audit.enabled` to `true` in the `elasticsearch.yml` file. Configuring traditional auditing with log files in an Elasticsearch Docker instance is not as straightforward as in a standard installation, primarily due to Docker’s containerized nature and logging mechanisms. However, Docker’s built-in logging driver simplifies the process by capturing Elasticsearch logs and directing them to `docker logs`, providing a structured but also constrained approach to auditing. While this method reduces configuration flexibility, it ensures audit visibility with a minimal setup that aligns with containerized best practices.
The aftermath of a Hydra brute-force attack is a perfect scenario to check the audit logs. As shown in Figure 2, the Docker logs reveal multiple failed authentication attempts targeting the Elasticsearch target with X-Pack security enabled. The audit logs specifically indicate repeated failures (within milliseconds of each attempt) to authenticate the "elastic" user, which confirms that a brute-force attack tool (such as Hydra) was used in an attempt to systematically guess valid credentials.
Figure 1. An `elasticsearch.yml` file with the X-Pack audit feature enabled.
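For reference, the relevant portion of the `elasticsearch.yml` shown in Figure 1 can be sketched as follows; the actual file may contain additional settings:

```yaml
# Enable X-Pack security and its audit trail
xpack.security.enabled: true
xpack.security.audit.enabled: true
```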
Figure 2. Audit Docker logs of the Hydra authentication brute-force attack simulation.
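The brute-force aftermath described above can be inspected through Docker's logging driver; a minimal sketch, assuming the container is named `elasticsearch` (the container name and the exact audit field names may vary by Elasticsearch version):

```shell
# Filter the container's captured output for audit records of failed logins
docker logs elasticsearch 2>&1 | grep 'authentication_failed'
```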
Authorization (RBAC) Features of X-Pack
Now that we have explored authentication issues and protections of a disabled and enabled X-Pack, the next concept to dive into is authorization. Essentially, authorization is the process of determining which actions a user or system entity is allowed to perform after authentication. It ensures that authenticated users can only access the resources and perform operations that align with their assigned permissions. Role-based access control (RBAC) is a key authorization mechanism that assigns permissions based on user roles rather than individual identities. With RBAC, administrators define roles with specific privileges, such as read-only access, write permissions, or administrative control, and then assign these roles to users or groups. This approach enhances security by enforcing the principle of least privilege (PoLP), reducing the risk of unauthorized access and privilege escalation. In Elasticsearch X-Pack, RBAC helps secure clusters by restricting access to indices, API endpoints, and administrative functions based on predefined roles, ensuring a structured and controlled authorization model.
The following procedures demonstrate RBAC in action: first creating a new read-only security role, then creating a user that adopts this role, creating the index readable by the role, and adding document content to that index. Finally, the role is tested by performing the allowed actions (read operations) as well as the disallowed actions (write operations), which succeed and fail, respectively, as expected.
As shown in Figure 3, executing the following `curl` command on the Elasticsearch API configures a new security role named "read_role", which is designed to provide read-only access to the "demo_index" index. To elaborate, the `/_security/role` API endpoint is part of the X-Pack Security module, which manages RBAC in Elasticsearch. Furthermore, the role is explicitly configured to allow access only to the "demo_index" index, and the `"privileges": ["read"]` setting ensures that users with this role can only perform read operations such as searching and retrieving documents.
Figure 3. Security role creation (`read_role`).
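The role-creation request in Figure 3 can be reconstructed roughly as follows; the host, port, and `elastic` password are assumptions for a local test cluster:

```shell
curl -u elastic:changeme -X POST "http://localhost:9200/_security/role/read_role" \
  -H 'Content-Type: application/json' -d'
{
  "indices": [
    {
      "names": ["demo_index"],
      "privileges": ["read"]
    }
  ]
}'
```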
As shown in Figure 4, the execution of the following `curl` command on the Elasticsearch API creates a new user account named "read_user" with a simple password configured and then assigns it to the predefined read-only role ("read_role"), which restricts the user's access to only reading data from permitted indices. To elaborate, the `/_security/user` API endpoint is part of the X-Pack Security module, which adds and updates users in the native realm. Moreover, the `full_name` and `email` details of the user creation are user metadata, which were added to fabricate realism.
Figure 4. New user creation (adopts `read_role` permission).
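The user-creation request likely resembles the following sketch; the password and metadata values here are illustrative placeholders, not the ones used in the simulation:

```shell
curl -u elastic:changeme -X POST "http://localhost:9200/_security/user/read_user" \
  -H 'Content-Type: application/json' -d'
{
  "password": "readonly123",
  "roles": ["read_role"],
  "full_name": "Read Only User",
  "email": "read_user@example.com"
}'
```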
As shown in Figure 5, the execution of the following `curl` command on the Elasticsearch API creates a new index named "demo_index" and then defines a mapping structure that specifies how documents will be stored and searched within this index. To elaborate, the request defines field mappings to ensure that the "message" field is stored as "text", which means that Elasticsearch will tokenize and analyze the field’s content for full-text search capabilities.
Figure 5. Sample index creation (`demo_index`).
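The index-creation request with its mapping can be sketched as follows, again assuming a local test cluster:

```shell
curl -u elastic:changeme -X PUT "http://localhost:9200/demo_index" \
  -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "message": { "type": "text" }
    }
  }
}'
```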
As shown in Figure 6, the execution of the following `curl` command on the Elasticsearch API performs a document indexing operation that adds a new document with a unique ID (1) into the "demo_index" index. To elaborate, the request stores a JSON object containing a "message" field with the value "This is a test document" and since "message" was previously defined in the index mapping as a "text" field, Elasticsearch will tokenize the text for full-text search as well as analyze and index each word separately for efficient querying.
Figure 6. Sample document creation (`_doc/1`).
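The document-indexing request with the explicit ID of 1 can be sketched as:

```shell
curl -u elastic:changeme -X PUT "http://localhost:9200/demo_index/_doc/1" \
  -H 'Content-Type: application/json' -d'
{ "message": "This is a test document" }'
```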
As shown in Figure 7, executing the following `curl` command on the target Elasticsearch API tests user authorization. Specifically, it verifies whether the "read_user" account has the necessary permissions to query and read data from the "demo_index" index. Since "read_user" does indeed have the proper read permissions ("read_role" assigned to the index), the response contains all documents stored in "demo_index", formatted as JSON.
Figure 7. RBAC `read-allowed` permission confirmation.
As shown in Figure 8, the execution of the following `curl` command on the target Elasticsearch API is the crucial test to verify access control enforcement, which ensures that a user with read-only permissions ("read_user") is not able to perform unauthorized write operations, such as attempting to create a new document in the "demo_index" index. This action simulates a potential unauthorized modification attempt, confirming that RBAC is functioning correctly within the Elasticsearch target.
Figure 8. RBAC `write-blocked` permission confirmation.
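Both authorization tests can be sketched together as follows; the `read_user` password is an assumption, and the blocked write would typically fail with a 403 `security_exception`:

```shell
# Allowed: read_user searches the index and receives the stored documents
curl -u read_user:readonly123 "http://localhost:9200/demo_index/_search?pretty"

# Blocked: the same user attempts a write, which RBAC rejects
curl -u read_user:readonly123 -X PUT "http://localhost:9200/demo_index/_doc/2" \
  -H 'Content-Type: application/json' -d'
{ "message": "unauthorized write attempt" }'
```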
IP Filtering (NACL) Features of X-Pack
User access controls such as authentication and authorization are great tools to secure a system, but there are other means to control access, such as network access control lists (NACLs). In general, NACLs are security mechanisms that define rules specifying which IP addresses or systems can access specific resources and what actions they are permitted to perform. ACLs operate at various levels, including network access (IP-based) and resource access (user-based permissions). In Elasticsearch X-Pack, the IP filtering feature functions as a network-level ACL that allows administrators to define allow-lists and deny-lists for incoming connections based on IP addresses or subnets. This ensures that only trusted sources can access the Elasticsearch cluster while preventing unauthorized access attempts before authentication even takes place. Basically, the X-Pack IP filtering feature serves as the first line of defense for protecting Elasticsearch data.
As shown in Figures 9 and 10, the allowed-list subnet named `es-network` and the disallowed-list subnet named `non-es-network` were created, respectively. This will be essential for this simulation to contrast the outcomes of each network access attempt.
Figure 9. Docker `es-network` network creation (`allowed-list`).
Figure 10. Docker `non-es-network` network creation (`blocked-list`).
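The two network-creation commands can be sketched as follows; the subnet ranges are illustrative assumptions and should be adjusted so they do not collide with existing Docker networks:

```shell
# Allowed-list subnet for containers permitted to reach Elasticsearch
docker network create --subnet=172.18.0.0/16 es-network

# Disallowed-list subnet for containers that should be rejected
docker network create --subnet=172.19.0.0/16 non-es-network
```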
As shown in Figure 11, the `elasticsearch.yml` file was modified to configure the IP filtering configuration for this simulation. Essentially, the IP filtering feature was enabled and both the allow and deny subnets were configured.
Figure 11. An `elasticsearch.yml` file with X-Pack IP filtering feature enabled and configured.
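The IP filtering portion of `elasticsearch.yml` likely resembles the following; the subnet values are assumptions matching the illustrative Docker networks above:

```yaml
# Filter incoming HTTP connections by source subnet
xpack.security.http.filter.enabled: true
xpack.security.http.filter.allow: "172.18.0.0/16"
xpack.security.http.filter.deny: "172.19.0.0/16"
```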
To adopt the new subnet and IP filtering configuration, the Elasticsearch Docker instance needs to be restarted. As shown in Figure 12, the Docker command to restart the Elasticsearch instance was executed successfully.
Figure 12. The `docker restart` command execution to refresh and implement new configuration.
The previously created allowed-list subnet will now be used on a regular Ubuntu Docker instance.
As shown in Figure 13, this machine was accessed via bash for further configuration. All the minimum required tools such as `curl` and `iproute2` were installed as shown in Figure 14.
To confirm that this Docker instance has the correct IP address range, the `ip a` command may be executed on the Docker instance or the `docker network inspect` command may be executed on the host, as shown in Figures 15 and 16, respectively. Finally, as shown in Figure 17, this allowed-container instance was used to successfully access the Elasticsearch API, demonstrating that the allow-list subnet performed its duty as expected.
Figure 13. Running and accessing the `allowed` container machine.
Figure 14. Installation of all required tools on the `allowed` container machine.
Figure 15. IP range confirmation of the `allowed` container machine via the `ip a` command.
Figure 16. IP range confirmation of the `allowed` container via the `docker network` command.
Figure 17. Confirmation of the `allowed` IP filtering configuration.
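The allowed-container walkthrough above can be sketched end-to-end as follows; the Elasticsearch hostname and credentials are assumptions for this Docker setup:

```shell
# Launch an Ubuntu container attached to the allowed subnet and open a shell
docker run -it --network es-network ubuntu bash

# Inside the container: install the minimum required tools
apt-get update && apt-get install -y curl iproute2

# Confirm the container's address falls within the allowed range
ip a

# This request should succeed because the source IP is on the allow-list
curl -u elastic:changeme http://elasticsearch:9200/
```

Repeating the same steps from a container on `non-es-network` would produce the rejected connection demonstrated in the blocked-container figures.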
The previously created disallowed-list subnet will now be used on a regular Ubuntu Docker instance. As shown in Figure 18, this machine was accessed via bash for further configuration. All the minimum required tools, such as `curl` and `iproute2`, were installed, as shown in Figure 19.
To confirm that this Docker instance has the correct IP address range, the execution of the `ip a` command on the Docker instance or execution of the `docker network inspect` command may be done as shown in Figures 20 and 21, respectively.
Finally, as shown in Figure 22, this blocked-container instance was used to attempt to access the Elasticsearch API, which ultimately failed and therefore demonstrated that the disallowed-list subnet configured in the IP filtering performed its duty as expected.
Figure 18. Running and accessing the `blocked` container machine.
Figure 19. Installation of all required tools on the `blocked` container machine.
Figure 20. IP range confirmation of the `blocked` container machine via the `ip a` command.
Figure 21. IP range confirmation of the `blocked` container via the `docker network` command.
Figure 22. Confirmation of the `blocked` IP filtering configuration.
Field and Document Level Security (FLS and DLS) Features of X-Pack
Field-level security and document-level security (FLS and DLS) in Elasticsearch X-Pack provide fine-grained access control by restricting users from viewing specific fields or documents within an index, even if they have general access to the index itself. FLS allows administrators to hide or exclude sensitive fields (e.g., passwords or financial data), while DLS restricts access to specific documents based on predefined query rules.
A common misconception is that FLS and DLS are the same as RBAC, but this is false. They differ from RBAC, which controls access at a broader level by assigning users to roles with specific permissions over entire indices or clusters. While RBAC determines who can access which indices and perform what operations, FLS and DLS dictate what specific data within an index a user can see, adding an additional layer of security for protecting sensitive information.
As shown in Figure 23, the `elasticsearch.yml` file was modified to enable the DLS and FLS configuration for this simulation. Essentially, all it takes is setting `xpack.security.dls_fls.enabled` to `true`.
Figure 23. An `elasticsearch.yml` file with X-Pack DLS and FLS enabled.
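The corresponding `elasticsearch.yml` fragment is a single line; note that recent Elasticsearch versions enable this setting by default, so declaring it explicitly mainly documents the intent:

```yaml
# Enable document- and field-level security
xpack.security.dls_fls.enabled: true
```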
To adopt the new DLS and FLS configuration, the Elasticsearch Docker instance needs to be restarted. As shown in Figure 24, the Docker command to restart the Elasticsearch instance was executed successfully.
Figure 24. The `docker restart` command execution to refresh and implement a new configuration.
As shown in Figure 25, the executed `curl` command creates a new Elasticsearch index named `customer_orders` and defines a mapping for its structure. The mappings section specifies the data types for each field, ensuring that `order_id` and `customer` are stored as keywords (exact-match values), `amount` is stored as a double (floating-point number), and `credit_card_number` is also a keyword. The response confirms successful index creation with `acknowledged: true`, indicating Elasticsearch has registered the request, and `shards_acknowledged: true`, which ensures that the index is ready for use.
Figure 25. Sample index and index mapping creation (`customer_orders`).
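The index-creation request can be reconstructed approximately as follows, assuming a local test cluster:

```shell
curl -u elastic:changeme -X PUT "http://localhost:9200/customer_orders" \
  -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "order_id":           { "type": "keyword" },
      "customer":           { "type": "keyword" },
      "amount":             { "type": "double" },
      "credit_card_number": { "type": "keyword" }
    }
  }
}'
```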
As shown in Figure 26, the executed `curl` command utilizes Elasticsearch's `_bulk` API endpoint to insert multiple documents into the `customer_orders` index in a single request. Each document represents an order, containing fields such as `order_id`, `customer`, `amount`, and `credit_card_number`. Assigning an explicit `_id` to each document is a common database practice that uniquely identifies it, ensuring that duplicate entries are not created unintentionally.
As shown in Figure 27, the response confirms all documents were successfully indexed, based on the multiple `result: created` outputs and the absence of errors, showing that Elasticsearch efficiently processed the bulk operation.
Figure 26. Sample index data bulk upload command execution.
Figure 27. Sample index data bulk upload output.
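The bulk upload likely resembles the following sketch; the order values and card numbers are fabricated placeholders, and `_bulk` expects newline-delimited JSON with a trailing newline:

```shell
curl -u elastic:changeme -X POST "http://localhost:9200/_bulk" \
  -H 'Content-Type: application/x-ndjson' -d'
{ "index": { "_index": "customer_orders", "_id": "1" } }
{ "order_id": "A-100", "customer": "karl", "amount": 25.50, "credit_card_number": "4111111111111111" }
{ "index": { "_index": "customer_orders", "_id": "2" } }
{ "order_id": "A-101", "customer": "vincent", "amount": 99.99, "credit_card_number": "5500000000000004" }
{ "index": { "_index": "customer_orders", "_id": "3" } }
{ "order_id": "A-102", "customer": "karl", "amount": 12.00, "credit_card_number": "4111111111111111" }
'
```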
As shown in Figure 28, the `curl` command executes a `_search` API request on the `customer_orders` index to retrieve all stored documents, confirming the index and documents were uploaded successfully. The query results show three orders, each containing details such as `order_id`, `customer`, `amount`, and `credit_card_number`. The response confirms that the query completed successfully (`successful: 1, failed: 0`) and returned three matching records.
Figure 28. Confirmation of sample index content.
As shown in Figure 29, the `curl` command creates a custom security role named `customer_role` in Elasticsearch using the X-Pack Security API.
This role is configured to grant restricted access to the `customer_orders` index. The privileges assigned to this role enforce two key security measures. First, the `customer_role` includes a query filter, `"query": { "term": { "customer": "karl" } }`, which ensures that users assigned this role can only access documents (DLS) where the customer field is `karl`, preventing unauthorized users from viewing orders belonging to other customers.
Second, the `field_security` configuration restricts access to only `order_id` and `amount`, preventing users from retrieving sensitive fields (FLS), such as `credit_card_number` and `customer` details.
Finally, the response `"created": true` confirms the successful creation of the role. This setup enhances data security by ensuring that users only access specific records and only the fields they are authorized to see, reducing the risk of data leaks.
Figure 29. Security role creation with privilege and field grant assignments.
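The role definition combining DLS and FLS can be sketched as follows, again assuming a local test cluster:

```shell
curl -u elastic:changeme -X POST "http://localhost:9200/_security/role/customer_role" \
  -H 'Content-Type: application/json' -d'
{
  "indices": [
    {
      "names": ["customer_orders"],
      "privileges": ["read"],
      "query": { "term": { "customer": "karl" } },
      "field_security": { "grant": ["order_id", "amount"] }
    }
  ]
}'
```

The `query` clause implements DLS (which documents are visible) while `field_security.grant` implements FLS (which fields within those documents are visible); the two operate independently and can be used separately.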
As shown in Figure 30, the `curl` command creates a new Elasticsearch user named `karl_user` using the X-Pack Security API. This user is assigned the `customer_role` role, which enforces DLS and FLS for the `customer_orders` index. As a result, `karl_user` can only view documents where the customer field is `karl` and is restricted to accessing only the `order_id` and `amount` fields while excluding sensitive details such as credit card numbers. The `"created": true` response confirms the user was successfully added.
Figure 30. New user creation (adopts `customer_role` privilege).
As shown in Figure 31, the `curl` command queries the `customer_orders` index while authenticated as `karl_user`, whose access is restricted by DLS and FLS. The output shows that only two documents are returned, specifically those where the customer is "karl", as enforced by the DLS query `{ "term": { "customer": "karl" } }`. In addition, the documents only contain the `order_id` and `amount` fields, while the `credit_card_number` and `customer` fields are omitted due to FLS restrictions. This confirms that DLS ensures the user can only access their own data, while FLS enforces field-level privacy, preventing exposure of sensitive information. In contrast, a privileged user like `elastic` would see all documents and fields without restrictions.
Figure 31. DLS and FLS permissions confirmation with `karl_user` (`customer_role`).
As shown in Figure 32, the `curl` command retrieves all documents from the `customer_orders` index using the Elasticsearch `_search` API endpoint while authenticated as the superuser (`elastic`). The output displays all three documents, including those belonging to both `karl` and `vincent`, along with all fields, including the sensitive `credit_card_number` field. This response differs from what was returned when the user `karl_user` (who is restricted by DLS and FLS) executed the same query. When `karl_user` performed this search, the response only included documents where customer is `karl` and omitted the `credit_card_number` field, reflecting the applied security restrictions. This output highlights how privileged users bypass DLS and FLS, whereas restricted users see only sanitized, role-specific results based on their assigned permissions.
Figure 32. DLS and FLS permissions demonstration with `elastic` (admin user).
In conclusion, enabling `xpack.security.dls_fls.enabled: true` is crucial for enforcing fine-grained access control in Elasticsearch. With DLS enabled, users like Karl can only access their own data, preventing unauthorized access to other users' records. Similarly, FLS ensures that sensitive fields, such as credit card numbers, remain hidden from unauthorized users. However, when DLS and FLS are disabled, all users with index access can view all documents and fields, potentially exposing confidential data. This highlights the importance of keeping DLS and FLS enabled to maintain data privacy and security in multi-user environments.
FIPS 140-2 Compliance Features of X-Pack
The Federal Information Processing Standard Publication 140-2 (FIPS 140-2) is a U.S. government security standard that establishes the requirements for cryptographic modules used to protect sensitive information. Compliance with FIPS 140-2 is mandatory for U.S. federal agencies, military organizations, and contractors handling government data. Furthermore, industries such as healthcare, finance, and critical infrastructure often adopt FIPS 140-2 as a best practice to meet regulatory compliance requirements, including HIPAA and FedRAMP.
For Elasticsearch, ensuring FIPS 140-2 compliance is crucial when operating in federally regulated environments or handling sensitive enterprise data. As shown in Figure 33, to enable the X-Pack FIPS 140-2 feature of Elasticsearch, set `xpack.security.fips_mode.enabled` to `true` in the `elasticsearch.yml` file.
To confirm that FIPS mode has indeed been configured, a `curl` command on the `_nodes` API endpoint is executed and piped into a `grep` command, as shown in Figure 34. This verifies the system adheres to government-mandated security policies, protecting Elasticsearch clusters from potential cyber threats, data breaches, and compliance violations.
Unlike traditional security features that produce immediate, tangible effects, FIPS mode operates at a cryptographic level, enforcing the use of validated algorithms without altering the system's outward behavior. While no simulation can explicitly showcase its presence in action, its impact is nevertheless critical and ensures that encryption, hashing, and authentication mechanisms adhere to strict federal guidelines. Though unseen, FIPS mode silently strengthens Elasticsearch’s security posture, providing an essential layer of trust and compliance in regulated environments.
Figure 33. An `elasticsearch.yml` file with X-Pack FIPS 140-2 compliance feature enabled.
Figure 34. Confirmation of the enabled X-Pack FIPS Mode.
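The verification command can be sketched as follows; the credentials are assumptions for a local test cluster:

```shell
# Pull node settings and search for the FIPS flag
curl -s -u elastic:changeme "http://localhost:9200/_nodes/settings?pretty" | grep -i fips
```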
Elasticsearch X-Pack Misconfiguration in CTF Box (HTB: HayStack)
A perfect opportunity to exploit an Elasticsearch instance with X-Pack disabled is in capture-the-flag (CTF) platforms that host machines with a vulnerable and/or misconfigured Elasticsearch database. A specific example would be the CTF machine Haystack, which centers its challenge around the exploitation of an ELK stack (Elasticsearch, Logstash, and Kibana).
In particular, the Elasticsearch component of this target was used to obtain an initial user foothold by enumerating unprotected indices, which eventually led to the discovery of valid SSH credentials by decoding a Base64 string buried within the “haystack” of Elasticsearch data.
To preserve the thrill that comes with the enumeration routine and the satisfaction of obtaining crucial information (e.g., an SSH password) that furthers one's position in an exploitation process, critical data within the following screenshots will be redacted with a red box.
As with any penetration testing or CTF engagement, it all starts with a port scan. As shown in Figures 35 and 36, port scans can be done in a variety of ways depending on one’s preference, such as using Nmap (CLI) or Zenmap (GUI), respectively. Both scans were able to detect ports 22 (SSH), 80 (HTTP), and 9200 (Elasticsearch).
Figure 35. NMAP port scan of the Haystack target (CLI).
Figure 36. Zenmap port scan of the Haystack target (GUI).
The HTTP port is probed further using a good old and trusted web browser (Firefox in this case). The web service shows a single photo of a needle in a haystack. As shown in Figure 37, the link to the image is copied for further investigation. Then, the `wget` command was used to download the image into the local attack machine to inspect it further, as shown in Figure 38.
Figure 37. Copy the image link from the HTTP probe.
Figure 38. Execute `wget` on the image link to download.
Let’s keep the old adage “A picture is worth a thousand words” in mind and apply the command `strings` on the downloaded image, which exposes a Base64 encoded message as shown in Figure 39.
To elaborate, the `strings` command is a powerful forensic and reverse engineering tool used to extract human-readable text from binary files, executables, memory dumps, or digital images. It helps analysts identify embedded credentials, file paths, error messages, hidden malware indicators, or encoded messages (e.g., Base64) within compiled programs or suspicious files.
Figure 39. Exposing a Base64 encoded message from the `strings` execution output.
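The decoding step can be reproduced locally; since the exact bytes in the screenshot are redacted, the Base64 string below was re-encoded from the reported Spanish phrase (with straight quotes) purely for illustration:

```shell
# Decode a Base64 string of the reported phrase (re-encoded here for illustration)
echo 'bGEgYWd1amEgZW4gZWwgcGFqYXIgZXMgImNsYXZlIg==' | base64 -d
```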
As shown in Figure 40, the Base64 encoded string was decoded to reveal a message in what appears to be Spanish. Using Google Translate, the Spanish phrase `la aguja en el pajar es “clave”` was translated to English as `the needle in the haystack is “key”`, as shown in Figure 41.
Figure 40. Decoding the discovered Base64 encoding.
Figure 41. English translation of the Spanish encoded message.
Since no credentials have been obtained thus far, exploring SSH will not be an option at this time. Therefore, port 9200 was probed using `curl` to expose the fact that this target is running Elasticsearch without any need for credentials, as shown in Figure 42. This indicates that X-Pack may be disabled in this Elasticsearch target.
Figure 42. A `curl` command execution on port 9200 to discover an Elasticsearch instance.
As shown in Figure 43, the executed command, `curl -X GET "http://10.10.10.115:9200/_cat"`, queries the `_cat` API endpoint of the Elasticsearch target. The `_cat` API provides a human-readable, tabular-style output of various cluster-level and index-level statistics, aiding in monitoring and troubleshooting. The command output lists available `_cat` endpoints, including `_cat/indices` for viewing index details, `_cat/health` for cluster health status, `_cat/shards` for shard allocation, `_cat/nodes` for node information, and many others. The `_cat/indices` endpoint is particularly intriguing as it might lead to further information disclosure of critical data such as valid login credentials.
Figure 43. A query on the Elasticsearch `_cat` API endpoint.
As shown in Figure 44, the executed `curl` command queries the `_cat/indices` API endpoint of the Elasticsearch target, which provides an overview of all indices in the cluster, including their health status, document count, storage size, and replication settings. To save time, the `quotes` index will be explored further while the `bank` index will be ignored, as it was found to be a rabbit hole that leads nowhere.
Figure 44. Exposing the `quotes` and `bank` indices.
As shown in Figure 45, the executed `curl` command queries the Elasticsearch index named `quotes` to retrieve a single document (`size=1`) from the dataset.
The JSON-formatted response shows the `hits` section, which indicates that the index contains 253 documents. This is crucial information from an attacker’s point of view, as it reveals the minimum size they will have to query to exhaustively acquire all the data this index has to offer.
Figure 45. Initial investigation of the `quotes` index data.
The previous clue was the Spanish phrase `la aguja en el pajar es “clave”`. Notice the emphasis placed on the word `clave`, which may be a clue.
It is safe to assume that all 253 quotes in the `quotes` index are in Spanish. Instead of translating all these quotes into English and filtering keywords (e.g., `password`, `key`, etc.), it is simpler to work with the Spanish quotes as-is.
As shown in Figure 46, the `curl` command was used to extract all 253 quote sections and was ultimately piped with a `grep` command to search for a case-insensitive match of the word `clave` (`key` in English). Finally, two Base64 encoded strings were exposed from all the Spanish quotes.
Figure 46. Targeted text search within `quotes` index data via the `grep` command.
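The search pipeline can be reconstructed roughly as follows; the target IP is the one from the engagement, and `grep -i` performs the case-insensitive match:

```shell
# Pull every document from the quotes index and search for the keyword "clave"
curl -s "http://10.10.10.115:9200/quotes/_search?size=253&pretty" | grep -i clave
```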
As shown in Figure 47, the two quotes containing the two Base64 encodings were then translated into English. It gives the impression that these Base64-encoded messages may be used as credentials to access a system or a service (e.g., SSH). Therefore, both Base64 strings were decoded to indeed expose a username and a password as shown in Figures 48 and 49, respectively.
Figure 47. The English translation of another Spanish-encoded message.
Figure 48. Decoding a Base64 encoding to expose a username.
Figure 49. Decoding a Base64 encoding to expose a password.
As shown in Figure 50, the credentials were then used to successfully access the SSH service. An initial foothold to the target machine has now been obtained. With that, the user flag was swiftly located to reveal its MD5 contents as shown in Figure 51.
Figure 50. Obtain SSH access using the Base64-decoded credentials.
Figure 51. Locate and expose the user flag content (`user.txt`).
Conclusion
Elasticsearch X-Pack security features exemplify the continuous advancement of database security and data protection in modern distributed systems. By seamlessly integrating authentication, authorization, and encryption mechanisms, X-Pack fortifies Elasticsearch clusters against unauthorized access and data breaches. With support for robust security configurations such as RBAC, DLS, and FLS, it ensures that users only interact with the data they are explicitly permitted to access. In addition, X-Pack provides FIPS 140-2 compliance, enabling specific organizations to meet stringent cryptographic security standards required for government and regulated industries.
Trustwave's DbProtect and AppDetectivePRO products include comprehensive coverage for many of the misconfigurations and vulnerabilities discussed in this article. Specifically, many of the X-Pack security configurations, such as X-Pack authentication, authorization, audit, DLS and FLS, IP filtering, and FIPS 140-2 compliance, are all validated to be enabled. Moreover, the specific weaknesses (e.g., default account credentials, anonymous access, user privilege verification, password hashing algorithm validation, disabled security, missing patches, etc.) and vulnerabilities (e.g., Improper Privilege: [CVE-2020-7009, CVE-2020-7014, CVE-2020-7019], Information Disclosure: [CVE-2019-7619], Race Condition: [CVE-2019-7614], etc.) not demonstrated in the simulations are also addressed through our products’ comprehensive scanning, detection capabilities, and extensive database activity monitoring.
About the Author
Karl Biron is a Security Researcher, SpiderLabs Database Security at Trustwave, with nine years of technical experience. He holds multiple certifications and brings global expertise from his work across Singapore, the UAE, and the Philippines. Karl has also contributed to the field with two IEEE peer-reviewed publications, both as the lead author. Follow Karl on LinkedIn.
ABOUT TRUSTWAVE
Trustwave is a globally recognized cybersecurity leader that reduces cyber risk and fortifies organizations against disruptive and damaging cyber threats. Our comprehensive offensive and defensive cybersecurity portfolio detects what others cannot, responds with greater speed and effectiveness, optimizes client investment, and improves security resilience. Learn more about us.