Ever ask a sysadmin what they find most tedious about their job? If they’re being honest, keeping up with security patching and compliance causes the most headaches. Surprised? You shouldn’t be. Patching remains a labor-intensive job in which a single misconfiguration can take down a system or expose parts of the environment to attack. The process itself frequently takes critical systems offline and disrupts the business, exposing weaknesses and incompatibilities that create rework for other areas of your department (making you VERY unpopular). Unfortunately, this happens because security policies are full of vague and seemingly conflicting requirements that run counter to the broader business objective of shipping new revenue-generating features to end users and customers as quickly as possible.
Automation has become a problem-solving buzzword in IT operations, yet despite the near-ubiquitous use of automated system patching tools for daily IT operations, automating complex security hardening policies has remained a largely unsolved problem. The gap between what is "good enough" versus the recognized industry hardening standard is so wide that it has become a major contributor to the uptick in security breaches across data-sensitive industries.
Consider, for instance, patching a RHEL (or CentOS, or Ubuntu, or Arch Linux… you get the point) system. How do you ensure the patch has not been tampered with, and is originating from a trusted vendor repository? On RHEL-family distributions, this requires setting gpgcheck=1 in /etc/yum.conf; otherwise the server will allow installation from any repo without a valid signature. But what about that one repo you need that doesn't have signed packages? Can you (and your organization) afford to make an exception for that repo while ensuring that everything else is locked down with appropriate signatures? Can this policy be translated into effective automation? My point is that there are hundreds of controls and exceptions that need to be implemented for proper security compliance, and that writing these security policies into automation is rarely done well—if at all. Often controls like this are applied painstakingly by hand in production environments after a system or application has been deployed, then manually justified when they cannot be universally applied. It’s a serious problem which has plagued our industry for years.
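Concretely, the signature policy plus its one documented exception might look like this in yum configuration (the repo name and URL below are illustrative, not from any real environment):

```ini
# /etc/yum.conf -- global default: require a valid GPG signature
# on every package, from every repository.
[main]
gpgcheck=1

# /etc/yum.repos.d/internal-tools.repo -- the single approved exception.
# The repo name and baseurl are hypothetical; an exception like this
# should be written down and justified, not quietly left in place.
[internal-tools]
name=Internal unsigned tools repo
baseurl=https://repo.example.com/internal-tools
enabled=1
gpgcheck=0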
Applying security policies to complex, bespoke system infrastructure takes more than automation—it takes human ingenuity and logical compromise. Someone (or more likely a team) with the proper expertise, experience, and authority must translate exacting security standards into executable policy that abides by an industry standard, yet is implementable and (ideally) automatable. Arriving at the compromise between policy and procedure is tough, but once you’re there, then you can automate as usual for the productivity and security gains every organization needs.
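Once that compromise is written down, it can be encoded directly. As a minimal sketch of what "policy as automation" looks like in Ansible (the `approved_unsigned_repos` variable is a hypothetical name for the documented exception list):

```yaml
# Sketch: enforce signature checking everywhere, then re-open only
# the repos the security team has explicitly approved.
- name: Require GPG signature checks for all yum operations
  ansible.builtin.ini_file:
    path: /etc/yum.conf
    section: main
    option: gpgcheck
    value: "1"

- name: Allow only explicitly approved repos to skip signature checks
  ansible.builtin.ini_file:
    path: "/etc/yum.repos.d/{{ item.file }}"
    section: "{{ item.repo }}"
    option: gpgcheck
    value: "0"
  loop: "{{ approved_unsigned_repos | default([]) }}"
```

The point of the structure is that the exception is data, not a hand edit: an empty `approved_unsigned_repos` list means the policy applies universally, and every deviation is visible in version control.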
Imagine how these underlying problems play out in the real world. A developer makes some feature changes to an application. Those new features, once locally tested and approved in pre-production, are pushed to testing and eventually to production. At this point comes the rub, and most operations teams usually have three paths they can take:
- Deploy the application as-is into the secured production environment. Promptly brick said application, since system, app, and network configurations are typically different from those of the pre-production development and test environments. Prepare for a fight.
- Punt the application back to development with a security assessment report rendering the application unimplementable without some amount of rewrite. Prepare for a fight.
- Relax the security controls of the production environment to accommodate the application. Prepare for a breach, then a fight.
In this scenario, there’s no path forward that doesn’t end in an acrimonious discussion between the security, development, and operations teams. Faced with this decision, most organizations press the easy button for option 3 again and again, which has long-term security implications for all systems and creates configuration drift headaches to deal with later.
There is a better way
At MindPoint Group (MPG), we’ve seen these problems firsthand, and although it’s no simple task to apply complex policies to a complex environment, MPG’s expertise in security and engineering is key to the value we provide for our clients. NASA, for instance, has partnered with MPG for over six years to modernize and secure their many environments. One of the key accomplishments we’ve helped NASA achieve is the continuous application of custom-made STIG and CIS baselines across a cloud environment. This includes over 300 unique controls across differing versions of 4 major Linux variants – RHEL, AWS Linux, Ubuntu, and CentOS – and hardening rules for over 120 applications. Using Ansible & Packer, MPG organized the sprawl of golden images, secured them according to NASA’s requirements, then wrote policy to accommodate new architecture and cloud services. What once took them 3+ hours per system now takes them 7 minutes.
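A golden-image pipeline along these lines typically pairs Packer with its Ansible provisioner, so the hardening baseline is baked into the image before anything is deployed. A minimal sketch (the region, base AMI, and playbook name are placeholders, not NASA's actual configuration):

```hcl
# Sketch of a Packer build that applies a hardening playbook to a base
# image and publishes the result as a new golden AMI.
source "amazon-ebs" "rhel" {
  region        = "us-east-1"     # placeholder region
  source_ami    = "ami-xxxxxxxx"  # placeholder base image
  instance_type = "t3.small"
  ssh_username  = "ec2-user"
  ami_name      = "rhel-hardened-{{timestamp}}"
}

build {
  sources = ["source.amazon-ebs.rhel"]

  provisioner "ansible" {
    # Hypothetical playbook that applies the STIG/CIS baseline roles.
    playbook_file = "harden.yml"
  }
}
```

Because the baseline runs at image-build time, every instance launched from the AMI starts compliant, and rebuilding the image re-applies the full policy instead of patching drifted systems by hand.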
With regulatory fines being levied against private corporations in Europe, and public awareness of lax data policy affecting brand reputation, the private sector can no longer afford to take system security lightly. Need proof? Just look at all the trouble Facebook is in right now for shipping fast and cutting corners. So if you’d like to fully take advantage of Ansible’s security capabilities, then we’d love to talk. If you’re not an Ansible user (yet) but need assistance ramping up several aspects of your security strategy, then we’re happy to help with that too.
Learn more: lockdownenterprise.com