
MindPoint Group Blog


Even with automation, security baselines like STIG or CIS remain a challenge to manage. But there is hope.


Ever ask a sysadmin what they find most tedious about their job? If they’re being honest, keeping up with security patching and compliance causes the most headaches. Surprised? You shouldn’t be. Patching continues to be a labor-intensive job with dire consequences for misconfigurations that could down a system or expose parts of the environment to attack. The process itself frequently takes critical systems offline and disrupts the business, exposing weaknesses and incompatibilities that create rework for other areas of your department (making you VERY unpopular). Unfortunately, this happens because security policies are full of vague and seemingly conflicting requirements that run counter to the broader business objectives of shipping new revenue-generating features to end users and customers as quickly as possible.


Automation has become a problem-solving buzzword in IT operations, yet despite the near-ubiquitous use of automated system patching tools for daily IT operations, automating complex security hardening policies has remained largely an unsolved problem. The gap between what is “good enough” and the recognized industry hardening standard is so wide that it’s become a major contributor to the uptick of security breaches across data-sensitive industries.

Patching perils

Consider, for instance, patching a RHEL (or CentOS, or Ubuntu, or Arch Linux… you get the point) system. How do you ensure the patch has not been tampered with and originates from a trusted vendor repository? On RHEL-family distributions, this requires setting gpgcheck=1 in the /etc/yum.conf file; otherwise the server will allow installation from any repo without a valid signature. But what about that one repo you need that doesn’t have signed packages? Can you (and your organization) afford to make an exception for that repo while ensuring that everything else is locked down with appropriate signatures? Can this policy be translated into effective automation? My point is that there are hundreds of controls and exceptions that need to be implemented for proper security compliance, and writing these security policies into automation is rarely done well, if at all. Often controls like this are applied painstakingly by hand in production environments after a system or application has been deployed, then manually justified when they cannot be universally applied. It’s a serious problem that has plagued our industry for years.
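In Ansible terms, a control like this is only a few lines of code. Here’s a minimal sketch (the exception repo name is hypothetical, and the ini_file module comes from the community.general collection):

```yaml
# Sketch: enforce GPG signature checking fleet-wide, with one auditable exception.
- name: Enforce package signature verification
  hosts: rhel_servers
  become: true
  tasks:
    - name: Set gpgcheck=1 globally in /etc/yum.conf
      community.general.ini_file:
        path: /etc/yum.conf
        section: main
        option: gpgcheck
        value: "1"

    # Hypothetical repo that ships unsigned packages; the exception is
    # explicit in code instead of a hand-edit someone forgets about.
    - name: Permit the one unsigned repo as a documented exception
      community.general.ini_file:
        path: /etc/yum.repos.d/vendor-unsigned.repo
        section: vendor-unsigned
        option: gpgcheck
        value: "0"
```

The win isn’t the two tasks themselves; it’s that the exception now lives in version control, where it can be reviewed and justified instead of discovered during an audit.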

Applying security policies to complex, bespoke system infrastructure takes more than automation—it takes human ingenuity and logical compromise. Someone (or more likely a team) with the proper expertise, experience, and authority must translate exacting security standards into executable policy that abides by an industry standard, yet is implementable and (ideally) automatable. Arriving at the compromise between policy and procedure is tough, but once you’re there, then you can automate as usual for the productivity and security gains every organization needs.

Real-world ramifications

Imagine how these underlying problems play out in the real world. A developer makes some feature changes to an application. Those new features, once locally tested and approved in pre-production, are pushed to testing and eventually to production. At this point comes the rub, and most operations teams usually have three paths they can take:

  • Deploy the application as-is into the secured production environment. Promptly brick said application since system, app, and network configurations are typically different from that of the pre-production development and test environments. Prepare for a fight.
  • Punt the application back to development with a security assessment report rendering the application unimplementable without some amount of rewrite. Prepare for a fight.
  • Relax the security controls of the production environment to accommodate the application. Prepare for a breach, then a fight.

In this scenario, there’s no path forward that doesn’t end in an acrimonious discussion between the security, development, and operations teams. Faced with this decision, most organizations press the easy button for option 3 again and again, which carries long-term security implications for all systems and leaves configuration-drift headaches to deal with later on.

There is a better way

At MindPoint Group (MPG), we’ve seen these problems first hand, and although it’s no simple task to apply complex policies to a complex environment, MPG’s expertise in security and engineering is key to the value we provide for our clients. NASA, for instance, has partnered with MPG for over six years in order to modernize and secure their many environments. One of the key accomplishments we’ve helped NASA achieve is a continuous application of custom-made STIG and CIS baselines across a cloud environment. This includes over 300 unique controls across differing versions of 4 major Linux variants – RHEL, AWS Linux, Ubuntu, and CentOS – and hardening rules for over 120 applications. Using Ansible & Packer, MPG organized the sprawl of golden images, secured them according to NASA’s requirements, then wrote policy to accommodate new architecture and cloud services. What once took them 3+ hours per system now takes them 7 minutes.
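Applying a baseline like that at image-build time looks roughly like the following. This is a hedged sketch, not NASA’s actual configuration: the role name follows the convention of the open-source ansible-lockdown hardening roles MPG maintains, and the variable toggle is illustrative:

```yaml
# Sketch: run a STIG hardening role during the Packer provisioning step
# so every golden image is baked hardened. Role and var names are illustrative.
- name: Harden golden image to the STIG baseline
  hosts: all
  become: true
  vars:
    # Hypothetical org-specific exception, documented in code
    harden_gui_controls: false
  roles:
    - role: rhel7-stig   # e.g. an ansible-lockdown style hardening role
```

Because Packer runs this playbook at build time, the hardening happens once per image instead of once per system, which is where the “3+ hours down to 7 minutes” kind of gain comes from.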

With regulatory fines being levied against private corporations in Europe, and public awareness of lax data policy affecting brand reputation, the private sector can no longer afford to take system security lightly. Need proof? Just look at all the trouble Facebook is in right now for shipping fast and cutting corners. So if you’d like to fully take advantage of Ansible’s security capabilities, then we’d love to talk. If you’re not an Ansible user (yet) but need assistance ramping up several aspects of your security strategy, then we’re happy to help with that too.

Categories: Automation, Compliance



Unconventional Automation: Ansible for FedRAMP


Ansible today is more powerful than it has ever been. Over the past few years it has taken the IT automation world by storm. For sure there are other automation technologies that are ‘better’ or more ‘performant’ within certain niches. But as a general-purpose, one-size-fits-most automation solution, Ansible is the dominant technology.

One area where Ansible is underrated is in the world of compliance. Many controls within the various regulatory and compliance bodies such as HIPAA, PCI, SOC2, FedRAMP, and others demand certain ‘things’ to be true in a technical sense. These technical controls can be mostly or entirely resolved by Ansible depending on the nuances of a particular environment.

I’m going to teach you how to figure out where Ansible can fit when it comes to satisfying controls within FedRAMP. This is the first part of a two-part series. The second part, to be released at AnsibleFest, will provide more concrete technical examples as well as some extra resources to leverage on your compliance automation journey.

What is FedRAMP?

For the uninitiated, FedRAMP is a compliance standard that applies to cloud service providers (CSPs), think *aaS, that wish to solicit business from Federal agencies. To give an example, suppose you make a really spectacular To-Do list application that is provided as a SaaS. Now imagine that you want the fine folks at NASA to be able to use your application…all you have to do is get through a FedRAMP audit. Keep in mind, FedRAMP is probably the most challenging (and expensive) compliance standard to adhere to with over 400 unique controls and a multi-step process to make sure all of your ducks are in a row. If you really want to learn more about FedRAMP you can do so on the official website.

How does Ansible Fit?

Before we get into a detailed example, let’s talk about how to identify places where Ansible can fit into your solution for a given control. The entire list of controls for what is called a ‘Moderate’ system can be found in the FedRAMP System Security Plan Template (direct link to DOCX). For many controls there are keywords that are very strong indicators that Ansible will be able to help, in part or in whole, to satisfy the control. For example, words like ‘automatically’ or ‘configuration’ are strong indicators that Ansible would be a good fit. Let’s take a look at the literal text of one control.

CM-2, Enhancement 2


The organization employs automated mechanisms to maintain an up-to-date, complete, accurate, and readily available baseline configuration of the information system.

Supplemental Guidance: Automated mechanisms that help organizations maintain consistent baseline configurations for information systems include, for example, hardware and software inventory tools, configuration management tools, and network management tools. Such tools can be deployed and/or allocated as common controls, at the information system level, or at the operating system or component level (e.g., on workstations, servers, notebook computers, network components, or mobile devices). Tools can be used, for example, to track version numbers on operating system applications, types of software installed, and current patch levels. This control enhancement can be satisfied by the implementation of CM-8 (2) for organizations that choose to combine information system component inventory and baseline configuration activities.

Related to: CM-7, RA-5

For those already well grounded in Ansible-land, the mapping is obvious. The control, in layman’s terms, mandates that the baseline for information systems (operating systems, network devices, laptops, etc.) be applied and maintained in an automated fashion. Distilling this further into Ansible terms: by having Ansible content (playbooks, roles, vars) that strictly defines the baseline configuration for all of the information systems to be audited, you have effectively satisfied this control (sans documentation).
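Concretely, “Ansible content that strictly defines the baseline” can be as plain as this (the package names and kernel parameter below are illustrative, not a real baseline):

```yaml
# Sketch: the vars ARE the documented baseline; the tasks enforce it.
- name: Enforce the documented baseline configuration
  hosts: information_systems
  become: true
  vars:
    baseline_packages: [audit, aide, chrony]
    baseline_sysctl:
      net.ipv4.ip_forward: "0"
  tasks:
    - name: Ensure baseline packages are present
      ansible.builtin.package:
        name: "{{ baseline_packages }}"
        state: present

    - name: Enforce baseline kernel parameters
      ansible.posix.sysctl:
        name: "{{ item.key }}"
        value: "{{ item.value }}"
        state: present
      loop: "{{ baseline_sysctl | dict2items }}"
```

Run on a schedule (or in check mode), the same playbook that applies the baseline also demonstrates that it is being maintained, which is exactly what the “automated mechanisms” language asks for.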

The best part is that since Ansible can effectively also be the tool that tracks and manages your inventory, you are already in the running to at least partially satisfy control CM-8 (2) which deals with system inventory management.
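For example, a YAML inventory can carry audit-relevant metadata right next to the hosts it describes (all names below are made up):

```yaml
# Sketch: inventory/production.yml -- the Ansible inventory doubles as
# the component inventory CM-8 (2) is concerned with. Hosts are illustrative.
all:
  children:
    web:
      hosts:
        web01.example.com:
        web02.example.com:
    db:
      hosts:
        db01.example.com:
          system_owner: dba-team   # arbitrary host vars useful at audit time
```

Running `ansible-inventory -i inventory/production.yml --list` renders the whole thing as JSON, a handy machine-readable inventory artifact to hand an auditor.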

This isn’t magic; it’s simply mapping what the text of the compliance body says to the capabilities already available to you via Ansible. This is a topic that can and will go deeper. If you are fortunate enough to be attending AnsibleFest this year (in Austin), I’ll be presenting on this very topic and going quite a bit deeper. If you’re unable to attend, no worries: the second part of this blog post will be made available after AnsibleFest, along with a video of my presentation.

Click this link to learn more about Jonathan’s presentation at AnsibleFest.

Interested in learning more about our Security Through Automation Services?  Click this link


Categories: Cloud, Configuration Management, Cyber Security, FedRAMP, ISP Blog, Open Source



VMware Provisioning and Automation with Ansible


All, in just a week I am going to be at AnsibleFest in Austin, TX to give a talk and see what others are doing. As part of Fest this year, Ansible wants people to share their automation stories. I wanted to give a quick look at mine as a way of introducing the VMware Provisioning and Automation with Ansible talk I will be co-presenting with Abhijeet Kasurde.

About 6 years ago I was working on a project for the Federal government in which we were providing security for the largest cloud migration at the time. The team had to migrate an entire datacenter (more than 100 applications) to AWS in the span of about 13 weeks. Ansible was still pretty early in its development at the time, but was mature enough that some of the application developers on the team started using it to automate and orchestrate the work being done to build environments in AWS, deploy services, and migrate data.

As the lead for the security team, I was learning what AWS was, and figuring out how to apply traditional government security requirements to cloud systems and services. I was getting a crash course in what “cloud native” meant, and was getting familiar with new toolsets as well. The value of Ansible was apparent almost from the moment I was introduced to it. From a security perspective it meant being able to enforce configuration management and avoid wild west style system administration. From an operational perspective, it meant being able to do things faster and more reliably.

Fast forward to my next role, leading the transformation of a government Tier 2 Security Operations Center (SOC). The environment was drastically different. There was nothing deployed to the cloud, nor would there be in the near future. But the ability to deploy and manage tools reliably and quickly, to make tools already in operation more reliable and resilient, and to enable users on the front lines of a constant battle with Advanced Persistent Threats (APTs) made bringing that same automation power to bear just as relevant, if not more so.

So, with the backdrop of several years of getting to know and being a casual user of Ansible in a cloud-only environment, now I had to be the one leading the implementation in an environment where:

  • We were 100% deployed on-prem;
  • We used VMware as our virtualization platform; and
  • We were building new tooling completely from scratch.

We had a lot of great success in doing this, and Ansible was the catalyst that allowed us to overhaul several enterprise security systems in a short time, demonstrate measurable improvements in both performance and reliability, and bring transparency to what we built and how we built it. A few things from this effort have led to the talk I’ll be co-presenting.

  1. Using and managing a VMware farm/environment can get expensive. We obviously had some base licensing we needed just to get our farm going, but there are a lot of add-ons like Operations Manager and vRealize Automation that many folks consider “must-haves.” If you are constrained by budget or just want to get the most out of your investment in Ansible, how much is possible?
  2. With any environment, cloud or on-prem virtualization farm, you will have machine templates. Guess what? Now you have to take care of them. The most common thing I have seen is that there are a lot of VM templates in vSphere (one for RHEL6 base, one for RHEL7 base, one for RHEL7 w/ MySQL, and so on). Being a responsible admin, or just one who gets audited regularly, you are going to have to dedicate a bunch of time to maintaining those templates. Once a month, you have to boot up a VM from each one, patch it, and then regenerate a new template. This can quickly become many hours of work every month. How can we use Ansible to optimize this process?
  3. In a cloud environment we never had to care about basic stuff like storage size, the amount of RAM, CPU cores, etc. when we provisioned new machines. We just picked the right-sized instance type off a menu, and at any time we could expand disks magically. In a VMware environment this can be somewhere between that cloud magic and having physical hardware that needs significant downtime to reconfigure. How can you make your platform more closely resemble the cloud by building Ansible playbooks that give you the hooks you need?
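To make point 2 concrete, the monthly template chore can be sketched as a playbook built on the community.vmware collection. This is a simplified outline, not a drop-in solution: the template, datacenter, and credential variable names are illustrative, and a real version would add_host the clone and wait for it to come up before patching:

```yaml
# Sketch: clone the template, patch the clone, convert the clone back
# into the template. Names and vars here are illustrative.
- name: Refresh the rhel7-base template
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Clone a working VM from the current template
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: DC1
        name: rhel7-base-refresh
        template: rhel7-base
        state: poweredon

    # ...add_host + wait_for_connection, then patch the clone here...

    - name: Convert the patched, powered-off clone into the new template
      community.vmware.vmware_guest:
        hostname: "{{ vcenter_hostname }}"
        username: "{{ vcenter_username }}"
        password: "{{ vcenter_password }}"
        datacenter: DC1
        name: rhel7-base-refresh
        state: poweredoff
        is_template: true
```

Even as a sketch, this turns “boot, patch, re-template, repeat” from hours of clicking into a job you can schedule.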

In any case, on that project I learned a lot about using Ansible with VMware. Throughout that time I felt like most of the “cool technology” glory went to those working in the cloud. However, having spent most of my career working for the Federal government, I know that there are still a lot of VMware-centric shops out there, and based on my experience transforming an enterprise SOC, I hope to show that there are still major benefits to bringing new tooling and concepts to these “legacy” virtualization environments.

Innovation is still possible, even in our “traditional ways” of doing things.

I’m looking forward to seeing plenty of people out there, and if any of you are Ansible-ers working in a VMware environment, I hope you’ll come to my talk.

Click this link to learn more about Matt’s presentation at AnsibleFest.



Categories: Cloud, Configuration Management, Engineering and Architecture, ISP Blog, Open Source, Security Operations Center, SOC



Improving Provisioning in VMware with Ansible


This coming Thursday, September 7th, MindPoint Group VP for Information Security and Privacy Matt Shepherd will be presenting “Where to Start with Automation” at Ansible Fest San Francisco. Here is a brief synopsis of his talk:

AWS and OpenStack get a lot of attention as the cool platforms to build on, and with good reason. They are truly cloud platforms that provide abstractions to deliver elastic compute and storage. However, there are still substantial enterprise investments in VMware. While there are a number of add-ons and third-party tools available to deliver orchestration and advanced management capabilities, these are additional investments and increase the sprawl of loosely-integrated tooling in IT organizations. This causes a problem in most VMware organizations:

  • Either the platform costs too much; or
  • Teams do most of the management manually or through a patchwork of scripts/tools.

In either case, the terms cloud, automation, and orchestration mean different things for teams supporting VMware than they do for other groups.

In this session, the Lead Engineer for the Department of Justice Security Operations Center will talk about how VMware support in Ansible has changed over the last 12 months, and how to close the gap so you can consider your VMware infrastructure a real cloud platform: fully automated provisioning, elastic storage features, and even your base infrastructure managed as code. You’ll see how he’s used just four Ansible modules to implement this capability at the JSOC, allowing the team to provide more stable and reliable infrastructure while increasing the speed at which new services are delivered to the front-line troops defending the enterprise. This presentation will step through the caveats to consider, the Ansible content you need, and how to structure host_vars files.

In this session you will learn:

  • What recent development has happened around the VMware modules in Ansible;
  • How to think about elastic compute & automated provisioning in a VMware world;
  • How to stop caring about disk sizing/partitioning and learn to love hacking elastic storage for VMware VMs; and
  • Where you can go next from this starting point.
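As a taste of the host_vars idea, a per-VM file might declare the machine’s desired shape as data, so provisioning playbooks can treat VMware like a menu of instance sizes. Everything here is a made-up example, not content from the talk:

```yaml
# Sketch: host_vars/web01.yml -- the VM's shape declared as data.
vm_template: rhel7-base
vm_cpu: 2
vm_memory_mb: 4096
vm_disks:
  - size_gb: 40    # root disk
  - size_gb: 100   # data disk, grown later via community.vmware.vmware_guest_disk
vm_network: prod-vlan-120
```

A provisioning playbook then reads these vars per host, which is how “infrastructure as code” stays reviewable: resizing a VM becomes a one-line pull request instead of a console session.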

If you are in the San Francisco area and are interested in hearing Matt speak at Ansible Fest, registration can be found here. The details of Matt’s talk “Where to Start with Automation” are as follows:


Ansible Fest: San Francisco

When: Thursday, September 7th at 11AM

If you are unable to attend in person, stay tuned for additional content regarding Matt’s talk.



Categories: Cloud, Configuration Management, Engineering and Architecture, Open Source