Evaluating Vendors for Zero Trust System Integration
CAVEAT: This post is not meant to criticize any vendor or process; it is only meant to help future customers understand how to evaluate vendors offering Zero Trust system integration.
This is the second in a series of Zero Trust posts. ICYMI, "Getting to Ground Zero (Trust)" was the first one. You can read it here.

In this installment of our Zero Trust series, we'll discuss how to evaluate vendors for Zero Trust integration. While an exhaustive evaluation plan would exceed the scope of a blog post, these methods and ideas provide a solid foundation for evaluating vendors for use within a Zero Trust environment. It is common, when onboarding a new solution or even an upgraded version of an old one, to identify features or behaviors that can positively or negatively affect the platform. Unfortunately, these discoveries often come too late in the development or integration process to mitigate the negative issues or replace the solution. At the speed of business, such delays exacerbate already tight timelines and are compounded by technology with multiple integration points.
As with any milestone and timeline, a balance must be struck: encourage technological growth and increase value while reducing undue risk to the organization, all while still accomplishing the planned tasks. Unknown features, unclear integration methods, and leading or trailing versions of integration technology are just some of the items that can cause delays or rework. While our partner and supplier entities are commonly referred to as "vendors," they are ultimately still companies, and as such they are beholden to their own timelines, milestones, and business goals. As with all IT decisions, money drives the solution, so understanding a solution's limitations can make the difference between a useful product and a headache. Certifications such as those from the International Organization for Standardization (ISO), the Cybersecurity Maturity Model Certification (CMMC), and others will not be covered in this post due to scope; however, much of what will be discussed here is covered or assessed by those types of certifications.
To reiterate, this post is not meant to disparage any vendor or question any vendor's integrity. It is meant to provide additional options for gathering useful information, allowing for a clearer picture of the system's posture after the solution is deployed. Anyone who has been in the IT community for several decades knows that developing solutions requires time and money. Since most solutions are complex to develop and typically ingest third-party packages, it is rarely feasible to continuously update to the latest libraries or packages from dependencies. This is an important aspect of vendor and supplier constraints that organizations should keep in mind.
Development constraints should not give a pass to poorly written or insufficiently secure software. Ensuring well-developed, properly written, secure code is part of the cost of being a development shop. To keep this post focused: understanding a vendor's constraints needs to be part of your evaluation. Weighing a vendor's development guardrails and business goals for their product is key to understanding the risks and impact on your Zero Trust system. For example, the vendor may have settled on a slightly older, yet still supported, operating system (OS). This older OS may be supported for another one to two years but reach End of Life (EOL) before your acceptable timeframe (i.e., your organization may be required to upgrade to a later OS version in 12 months, but the vendor cannot upgrade for 18 months). In many cases, OS upgrades introduce version compatibility issues, so you may be required to change course or request a Plan of Action and Milestones (POAM) as a stopgap or workaround.
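The timeline mismatch above comes down to simple date arithmetic, and it can be worth scripting when tracking many products at once. The following is a minimal sketch; the function name, the example dates, and the 30-day month approximation are our own assumptions, not any vendor's data:

```python
from datetime import date, timedelta

def upgrade_gap_months(org_deadline: date, vendor_ready: date) -> int:
    """Months the vendor lags behind the org's mandated upgrade date
    (0 if the vendor is ready in time)."""
    if vendor_ready <= org_deadline:
        return 0
    days = (vendor_ready - org_deadline).days
    return -(-days // 30)  # ceiling division on ~30-day months

# Hypothetical dates matching the scenario above: the organization must be
# on the new OS in ~12 months, but the vendor cannot upgrade for ~18 months.
today = date(2024, 1, 1)
org_deadline = today + timedelta(days=365)   # ~12 months out
vendor_ready = today + timedelta(days=548)   # ~18 months out
gap = upgrade_gap_months(org_deadline, vendor_ready)
print(gap)  # a nonzero gap signals the need for a POAM or workaround
```

A nonzero result is an early prompt to start the POAM conversation rather than discovering the gap mid-integration.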
Version compatibility has been a long-term concern for most vendors. This applies not just to the internals of the software; it can be something as simple as how a browser operates. Again, understanding a solution's behaviors (positive or negative) will better equip staff to make an informed decision. While compatibility is a significant part of any software package or integration, it is just one of many aspects of vendor evaluation. Some nuances of an evaluation may reveal a feature of a solution that is normally not well documented or discussed. That may be because the feature is operationally functional but deficient in some way with respect to your system. Items such as libraries, components with no Long-Term Support (LTS), or outdated methods and practices (that function as prescribed) may present issues across the environment if the broader IT community is going in a different direction. This is common where outdated, but still technically operational, packages or protocols are retained so as not to force a vendor to rewrite code.
Reworking or rewriting code is typically time-consuming and, in many cases, requires separate environments for regression and functional testing. Again, this is part of the cost of operating a development shop, but not all vendors have this capability due to resource limitations. This is where small, cutting-edge vendors may differ from larger ones: vendors providing more cutting-edge technology tend to focus on developing or enhancing features rather than implementing a Continuous Integration/Continuous Delivery (CI/CD) pipeline for automated testing. Often it is simply a function of having deeper pockets.
CI/CD and automated testing enable functional testing of components like Public Key Infrastructure (PKI). While we could go into great detail on this and the next topic, we'll keep it brief and highlight a growing trend among vendors, particularly in the containerization space. PKI can be configured in various ways, but not all configurations are secure, and some can create or obscure additional security risks. Insider threats are a serious and ongoing concern for IT professionals, especially when protecting high-value targets—not just government-related, but across the broader IT community. If PKI is poorly controlled or managed manually, Advanced Persistent Threats (APTs) can exploit this to hide long-term compromises, allowing undetected data exfiltration or misuse. Since PKI is a core element of Zero Trust, short-lived or untracked certificates can pose significant risks.
We'll cover short-lived and untracked certificates in a future blog post, but this type of certificate issuance should be limited and, ideally, not used at all. If it must be used, all certificate operations, and the certificates themselves, should be scrutinized programmatically via Artificial Intelligence (AI) or other analytics. In short, a vendor may rely on an automated PKI system (in part or in whole) to secure their solution. It's crucial to have detailed discussions about how their components operate and integrate. For example, does the vendor use an automated Certificate Authority, third-party images, or outdated protocols with limited future use or support?
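As a rough illustration of the kind of programmatic scrutiny we mean, a minimal sketch might flag unusually short validity windows for extra review. The seven-day threshold and the function name here are assumptions for illustration, not any vendor's actual policy:

```python
from datetime import datetime, timedelta

# Hypothetical policy: any certificate valid for less than 7 days gets
# routed to additional review (analytics pipeline, SOC queue, etc.).
SHORT_LIVED = timedelta(days=7)

def flag_short_lived(not_before: datetime, not_after: datetime) -> bool:
    """Return True when a certificate's validity window is short enough
    to warrant extra scrutiny."""
    return (not_after - not_before) < SHORT_LIVED

issued = datetime(2024, 6, 1, 0, 0)
# A 24-hour certificate, typical of some automated issuance systems.
print(flag_short_lived(issued, issued + timedelta(hours=24)))  # True
# A conventional one-year certificate.
print(flag_short_lived(issued, issued + timedelta(days=365)))  # False
```

In a real deployment, the notBefore/notAfter values would come from parsing the certificates themselves (e.g., via a certificate library or CA audit logs); the point is that the check is trivial to automate once issuance is tracked at all.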
The use of container images has grown significantly in popularity over the last decade, and it is common for vendors to utilize third-party images when deploying their product. In many cases the vendor treats those third-party images as a dependency and may not directly supply them; they may simply be pulled in the background when the solution is deployed. For example, when using locally hosted Kubernetes deployments, it is typical for image names and registry locations to be written into YAML files and deployed via a command-line tool. If the system owner is unaware of those third-party images and the system has internet access, the images will simply be pulled to the node running the application.
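One way to make those background pulls visible is to scan the vendor's manifests for image references before anything is deployed. Below is a minimal sketch, assuming plain-text YAML manifests; the registry and image names are invented for illustration:

```python
import re

# Match "image:" lines in a Kubernetes manifest, including list items
# ("- image: ...") and quoted values.
IMAGE_RE = re.compile(r'^\s*(?:-\s*)?image:\s*["\']?([^"\'\s]+)', re.MULTILINE)

def list_images(manifest: str) -> list[str]:
    """Return every container image referenced in a manifest, so
    third-party registries are visible before deployment."""
    return IMAGE_RE.findall(manifest)

manifest = """
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
      - name: app
        image: registry.vendor.example/app:2.4.1
      - name: sidecar-db
        image: docker.io/library/postgres:15
"""
for img in list_images(manifest):
    print(img)  # surfaces the third-party postgres image alongside the vendor's
```

Anything pulled from a registry the organization does not control is then a deliberate conversation with the vendor rather than a surprise on the node.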
Using third-party software, especially open source, can have legal, business, or operational implications, which can be even more significant in high-value environments. Ideally, and at a minimum, the vendor would fork (if necessary), support, test, and provide all images needed for a containerized deployment. In the past, virtualized or hardware-based deployments required users to download and install artifacts manually, and this process should be handled similarly today. Consider this: each database deployed as a container is equivalent to deploying the same database application on hardware. While image pedigree is important, it's also crucial to thoroughly understand the system's analytics capabilities (which we will discuss in a future post), as proper analytics can be integrated with the environment's containerization solution. Although analytics can be helpful in a containerization platform, it is used to a much broader degree in the base Zero Trust environment.
Data collection is crucial today, but it brings the challenges of storing and analyzing that data. This section will be brief, but it should spark a discussion on evaluating solutions for data storage and analytics. Storing and analyzing data is costly and resource intensive. Hybrid options do not necessarily reduce the cost, but they may give an organization the ability to store larger amounts of data and analyze it off-premises. Even though many vendors offer analytics and data storage, it may benefit your organization to dig into aspects such as how data is deleted or how data is pulled by analytics platforms. For example, depending on your storage solution, writes may be fast while curation tasks, such as deletion, are prohibitively slow. Another example is how data is pulled back from the source. In many cases it is not helpful, or even possible, to pull back all the data that needs to be operated on. Some querying packages operate on a first-come, first-served basis: the package takes the first n records and then stops the incoming flow of queried data. With this model (no judgment of right or wrong here), a query for, say, the last 24 hours may return all of the requested data, but that is not guaranteed. While this is another area for deep discussion, the intent is to point out that asking the right questions, such as "Do I actually get all records for the last 24 hours?", gathers important information and allows for better decision making.
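The first-come, first-served behavior described above can be sketched with a toy query client. Everything here (the function names, the 100-record page cap) is hypothetical, but it shows why a single call may silently truncate the requested window and how paging recovers the rest:

```python
def fetch_page(records, after_id, page_size):
    """Stand-in for a backend query: returns up to page_size records
    with id greater than after_id, in id order (the 'first n' behavior)."""
    return [r for r in records if r["id"] > after_id][:page_size]

def fetch_all(records, page_size=100):
    """Keep paging until the backend returns a short batch, so the
    caller actually receives the full requested timeframe."""
    out, last_id = [], -1
    while True:
        batch = fetch_page(records, last_id, page_size)
        out.extend(batch)
        if len(batch) < page_size:
            return out
        last_id = batch[-1]["id"]

store = [{"id": i} for i in range(250)]        # 250 records in the 24-hour window
first_call = fetch_page(store, -1, 100)        # naive single query: truncated
everything = fetch_all(store, page_size=100)   # paginated retrieval: complete
print(len(first_call), len(everything))        # 100 vs 250
```

A vendor whose query API behaves like `fetch_page` is not wrong, but you need to know whether their tooling (or yours) does the `fetch_all` loop before trusting any "last 24 hours" report.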
Drilling down into more of these evaluation aspects would bloat this post, so we've assembled a recommended list of key evaluation points to help assess a vendor. This list isn't exhaustive, but it's a good starting point. The first few are (of course) pertinent to the anchors of a Zero Trust environment, and many can be adapted to whichever technology is being assessed.
- How does the PKI solution integrate?
- For many of these questions, it will be necessary to ensure your targets can also utilize the same integration processes and methods; in this case, PKI.
- Does the PKI solution utilize current or future-looking standards?
- Does your DNS solution provide for secure DNS, such as DNSSEC?
- Do you offer options for best-practice, secure hardening and architecture for your DNS solution?
- Do you have an air-gapped NTP option? If so, how would I integrate into the current data center?
- Is your solution currently going through any validation process such as IAT or ATO?
- What limitations, if any, exist by utilizing a containerized version of the solution or vice-versa?
- Does your object-based storage use a wrapper for all types of incoming connections (e.g., file, block, object)? If so, is the interface object-based?
- Does your solution federate? What options are available (e.g., OAuth, SAML, OpenID)?
As with many things in business, asking the right questions is of the utmost importance. Performing a well-structured, thorough evaluation of a vendor's product as early in the process as possible, and ideally before it's required, reduces rework and downtime. An essential component of a successful evaluation is having the right people to ask the right questions. An evaluation is the best time to bring in personnel with real-world experience and knowledge in each area. Having the right people involved from the onset will not only bring the right questions to the table, but will also likely lead to further engagement to determine the worthiness and fitness of the product. CyberPoint can provide the right people with the expertise and experience for your Zero Trust solution.
With CyberPoint's NXZT Team by your side, rest assured we will work with you to attain your Zero Trust goals. Whatever your Zero Trust aspirations may be, trust CyberPoint's NXZT Team to turn them into reality!
Contact us about how we can support your organization today.
About CyberPoint's NXZT Team: Your Premier Partner for Zero Trust Security
Embracing ZT within your IT infrastructure demands a seasoned and proficient partner, and CyberPoint's NXZT Team stands at the forefront of ZT expertise. With a wealth of experience in the ZT domain, we specialize in conducting comprehensive assessments, meticulously evaluating your organization's existing network and creating detailed reports outlining the necessary adaptations to achieve ZT certification.
Contact NXZT