Compliance or Engineering: Cybersecurity’s Chicken & the Egg

Compliance-First or Engineering-First Approach?

What came first, the chicken or the egg?  This classic dilemma has long occupied philosophers, and it now confronts cybersecurity professionals as well.  The question they are asking is: should a system be compliant with governmental regulations and standards before it is tested on the network?  Although this question may draw a resounding ‘yes’ from the crowd, how can a system’s configuration be assessed without it being configured on the network?  And how long does it take before a non-compliant control can be mitigated?  With cybersecurity trailing behind what seems like an ever-evolving, automation-centric technological landscape, how long before the compliance and engineering efforts behind securing these innovations begin running in tandem?  Will advances in automation lead to an approach that allows these two efforts to work alongside each other in a more real-time manner?  Let’s take a “peep” inside as we crack the conundrum of cybersecurity’s chicken or the egg. 

The Chicken:  The Compliance-First Approach

Due to the legal implications and financial penalties associated with non-compliant systems, organizations must ensure that only secured, compliant systems are allowed on their networks.  Compliance standards come from multiple governing bodies, depending on your industry, to ensure that an authoritative body reviews any potential vulnerabilities or risks before granting an information system the authority to operate.  The golden rule for cybersecurity compliance is NIST Special Publication 800-53, which, for a moderate-impact system, outlines more than 200 controls an information system must meet to achieve FISMA compliance.  NIST 800-53 spans 18 control families covering items including, but not limited to, Access Control (AC), Configuration Management (CM), Physical and Environmental Protection (PE), and Identification and Authentication (IA).  With the increased adoption of cloud computing, the US Government has created initiatives, most notably FedRAMP, to reduce the time and cost associated with vetting a Cloud Service Provider (CSP) such as AWS, Google Cloud, or Azure.  By its own definition, FedRAMP’s goal is “to utilize the methodology of a do once, use many times framework that saves cost, time, and staff required to conduct redundant Agency security assessments.”*

The primary motivation behind government initiatives like FedRAMP is to allow for “rapidly” adaptable transitions into the cloud… but how does this come to fruition?  Once a governmental agency decides to utilize a FedRAMP-authorized cloud service provider, it can “inherit” a subset of security controls. As long as the CSP has undergone a standardized governmental assessment and received approval before the implementation of the cloud solution, the agency is no longer fully responsible for those inherited controls. An easy example is the PE control family in NIST 800-53: if a cloud provider supplies all of the infrastructure for a service, then the responsibility for securing the physical hardware lies with the CSP, not the agency or company utilizing the cloud solution.  Control inheritance may not seem like a big deal, but it allows the consumer of the cloud services to focus on implementing the controls the CSP is not responsible for, as outlined in its Customer Responsibility Matrix (CRM).   One huge potential benefit of the CRM concept is that infrastructure services in the cloud can be configured with a layered security posture.  Essentially, security configurations could be “set” before application developers enter the environment and begin writing their code, allowing the CI/CD pipeline model currently in use to continue uninterrupted.

A CRM is a document the CSP provides to the consumer of a cloud solution, outlining the controls the consumer is responsible for implementing.  The consumer of the cloud service can see what’s taken care of by the CSP and what they’ll be responsible for configuring and monitoring.  In short, rather than spending time ensuring the physical machines and information systems that process and store their data are secured, the organization can focus on the remaining controls for which it is responsible.
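The responsibility split a CRM describes can be pictured as a simple lookup structure. The sketch below is a toy model only: the control IDs are real NIST 800-53 identifiers, but the responsibility assignments and the dictionary format are illustrative assumptions, not an actual CRM schema.

```python
# Toy model of a Customer Responsibility Matrix (CRM).
# Ownership values here are hypothetical examples, not a real CSP's CRM.
crm = {
    "PE-3": "provider",  # Physical Access Control -- CSP owns the hardware
    "AC-2": "customer",  # Account Management -- consumer configures accounts
    "CM-6": "customer",  # Configuration Settings
    "IA-2": "shared",    # Identification and Authentication -- both parties
}

def customer_controls(matrix):
    """Return the controls the cloud consumer must implement or share."""
    return sorted(cid for cid, owner in matrix.items() if owner != "provider")

print(customer_controls(crm))  # ['AC-2', 'CM-6', 'IA-2']
```

In practice a CRM is a narrative document, but reducing it to data like this is what lets teams filter their assessment scope down to only the controls they actually own.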

These regulatory bodies provide and enforce security standards to ensure information systems are evaluated and assessed before implementation.  Compliance assessments are the “checklists” for ensuring that an information system is secured.  However, if a lapse occurs between completing an assessment and implementing or remediating its findings, many systems can remain unpatched and vulnerable for extended periods of time. Although this is a compliance issue, many developers aren’t concerned about being compliant prior to launch.  The thinking is that developing to be perfectly compliant from the beginning carries a growing opportunity cost, since development takes much longer when all of the security compliance controls are considered at build time.  This is especially true when the development project is not marketing directly to the government as its sole customer, leaving compliance efforts an afterthought to the actual development and marketing of the application. Another time-related obstacle in the overall cybersecurity process comes from gaps in the continuous monitoring of these packages: the system runs the risk of remaining unpatched for a duration of time after findings are discovered in the compliance assessments. The hurdles created by a “compliance first, engineering second” approach illustrate a big area of concern: the time it takes to move from discovering vulnerabilities in governance assessments to the technical implementation that remediates the system’s security posture.  

The Egg:  Engineering-First Approach

When a system must be assessed for compliance before it is even technically connected, why not start by engineering the system to be secure AND compliant before taking the time to assess and build out each security control’s implementation details?  Assessing a system at face value from a compliance perspective, before its final configuration in the production environment, seems backwards. The dilemma is that non-compliant systems should NOT be connected to the organization’s network until the risk identified in the assessment has been properly evaluated and approved.  This Catch-22 poses a major risk.  If a system connects to a company or agency’s network before becoming compliant with standards and policies, the entire network could be exposed to whatever vulnerabilities exist within that system. However, a full compliance assessment cannot be completed if the system is not fully set up and configured.  Compliance aims to ensure the most “locked-down” version of the system is configured before it is placed on the organization’s network. Still, “early” assessment of governance standards often means some controls cannot be assessed until they are put into production and made live.  A few caveats might push organizations to allow connectivity prior to a compliance assessment, if the attack surface is minimal and the company or agency accepts the risk that connectivity might entail.  This may be the case for systems or technologies that don’t have access rights to modify configuration settings or production data.  A vast majority of organizations are adopting CI/CD pipelines to provide a deployment checklist and process that integrates security before launch.  
If the end goal is to shorten the cycle time that compliance checks and remediation actions take on production systems, the CI/CD process could be utilized in Managed Cloud Environments (MCEs) where security is baked into the environment.  Layered security compliance overlays on pre-secured technology stacks would let developers worry only about deploying secure code, supported by automation tools.

After an area of a system is found to be non-compliant, a plan of action is created to begin tracking and mitigating the risk.  But this now calls for a technical implementation.  A great example can be illustrated with a government-approved, open-source tool such as OpenSCAP.  OpenSCAP scans the configuration baseline of a server or system to identify any areas of non-compliance in the server’s configuration, excluding security controls related to policies and procedures.  Combined with a simple SSH command, the tool will report all failed configuration baseline items against an organization’s approved baseline, as long as the organization has created a compatible OVAL or tailoring file, or against other industry standards such as the DISA STIGs and CIS Benchmarks.  However, what really opens the door for compliance and engineering to work better together is OpenSCAP’s built-in ability to map each failed check to the relevant NIST 800-53 control or other compliance standards frameworks. Although OpenSCAP does not currently support Windows servers, environments with Linux servers can benefit greatly from it.  Even with the large number of restrictions and areas OpenSCAP is not able to scan for, a large amount of assessment and auditing time may be saved by utilizing this open-source tool.
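OpenSCAP writes its scan results as XCCDF XML, which is what makes the automated mapping back to controls possible: each rule in the benchmark carries references to frameworks like NIST 800-53, and each scan result records pass or fail per rule. The sketch below parses a deliberately simplified result excerpt to pull out failed rule IDs; real OpenSCAP output is far larger, and the sample XML here is a hand-written stand-in, not actual tool output.

```python
import xml.etree.ElementTree as ET

# XCCDF 1.2 results use this XML namespace.
XCCDF_NS = "{http://checklists.nist.gov/xccdf/1.2}"

# Simplified, hand-written excerpt mimicking an OpenSCAP results file
# (e.g. the file written by `oscap xccdf eval --results results.xml ...`).
sample = """<TestResult xmlns="http://checklists.nist.gov/xccdf/1.2">
  <rule-result idref="xccdf_org.ssgproject.content_rule_sshd_disable_root_login">
    <result>fail</result>
  </rule-result>
  <rule-result idref="xccdf_org.ssgproject.content_rule_package_telnet_removed">
    <result>pass</result>
  </rule-result>
</TestResult>"""

def failed_rules(xccdf_xml):
    """Return the idref of every rule whose recorded result is 'fail'."""
    root = ET.fromstring(xccdf_xml)
    return [rr.get("idref")
            for rr in root.iter(f"{XCCDF_NS}rule-result")
            if rr.findtext(f"{XCCDF_NS}result") == "fail"]

print(failed_rules(sample))
```

From a list like this, the benchmark’s per-rule references are what let OpenSCAP’s reports group failures by NIST 800-53 control, as shown in the screenshots below.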

Attached below are some screenshots and examples of how OpenSCAP can be used to look for vulnerabilities and non-compliance with various standards:

OpenSCAP:  Example of a Check Run on a Linux-Based Server

OpenSCAP:  Different Grouping and Mapping Options Available After Scan Completion

OpenSCAP:  Example of All Technical Checks That Map to NIST 800-53 (Control AC-03)

The Circle of Life and the Tools That Make This Possible

Although not a complete game-changer, tools like OpenSCAP can map a technical area of non-compliance directly back to its corresponding NIST 800-53 control after scanning a server’s configuration and security posture. This capability could help shorten the amount of time it takes to correctly assess and evaluate systems or servers for areas of non-compliance. The need for System Security Plans still exists, but such tooling helps evaluate systems already stood up for vulnerabilities without needing to request information from application developers and owners. Automation toolsets may also reframe the “compliance first versus engineering first” argument, and with it the future of cybersecurity compliance and engineering, by allowing these steps to run concurrently. Imagine being able to scan and test a network in a secluded environment, made possible by virtualization and advancements in automated security tools, scanners, central logging systems, and Cloud Access Security Brokers (CASBs). This “circle of life” approach can steer the cybersecurity Governance, Risk, and Compliance (GRC) field away from a strict compliance-first approach toward one where security tools work in tandem with NIST controls to combat the vulnerabilities of connecting a non-compliant system to the network.  The “chicken-or-egg” metaphor expresses a scenario of infinite regress, such as deciding whether compliance should be addressed first or secure engineering prioritized. Most can agree that, regardless of which comes first, compliance sets the framework that must be met, while engineering adapts to that framework.  One component is undeniably just as important as the other. 
The integration of CI/CD pipelines and security automation tools could potentially bring us closer to an approach that would allow for the integration of these two efforts to reduce the time and burden of compliance assessments.

One way we’re helping to bring this idea to fruition is through Lockdown Enterprise, which automates STIG and CIS baselines.  Learn more at lockdownenterprise.com and check out our engineering and automation consulting services.

* https://www.fedramp.gov/faqs/

Contributor: Derek Green - MindPoint Group
