So, I've rambled a bit over the past several weeks about the current state of FISMA. You'd think that somewhere in there I'd stop complaining and offer at least some idea of what a FISMA-compliant security program should look like. Well, this post is intended to be my rough sketch of it, at a high level, without going on for three hours.

We started this series by looking at Jerry West's bold step out onto the end of the limb, and referencing OMB's M-10-15 memo. Honestly, I have to say the OMB memo is a step in the wrong direction. People have stated that it lays out an effective continuous monitoring program (often referring to some new era of continuous C&A), but in reality the main points of the memo are:
1. You must be able to feed security metrics directly into CyberScope, either via an automated feed or by uploading an XML file.
2. Agencies will need to answer a bunch of benchmarking questions in CyberScope. (i.e., "Is your agency... umm, like, secure and stuff?")
3. Some team of (I'm guessing) incompetent wonks will come around asking questions to follow up on the surveys referenced in #2.
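For what it's worth, the XML-upload option in item #1 is the kind of thing that's mechanically trivial. Here's a toy sketch of producing such a file; the element names and metrics below are ones I made up for illustration, not the actual CyberScope schema.

```python
# Hypothetical sketch of building an XML metrics file for upload.
# Element names and metric values are invented for illustration;
# the real CyberScope schema is defined by the reporting program.
import xml.etree.ElementTree as ET

def build_metrics_xml(agency, metrics):
    """Serialize a dict of metric name -> value into a simple XML document."""
    root = ET.Element("agencyReport", attrib={"agency": agency})
    for name, value in metrics.items():
        m = ET.SubElement(root, "metric", attrib={"name": name})
        m.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = build_metrics_xml("ExampleAgency", {
    "assets_inventoried": 1204,
    "percent_baseline_compliant": 87.5,
})
```

The point being: shipping the XML is easy. Whether the numbers inside it mean anything is the hard part.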
Wow, I have looked into the face of continuous monitoring... and been completely underwhelmed. Can we go back to using the Excel spreadsheets, please? Here is the problem: OMB wants to be able to assess, on its own, how secure each agency is. But OMB has not yet accepted that, regardless of its role in the implementation of FISMA, it has neither the time, the resources, nor the know-how to perform what would amount to an accurate and useful assessment of every agency in the federal government.
Seriously, what purpose do items #2 and #3 serve? You have the agency doing self-reporting. You have the OIG at the agency performing an independent audit. Now OMB has to ask some questions too. Oh, and then they will send someone out to make (in a matter of, probably, three weeks) a comprehensive and accurate assessment of whether the survey questions were honestly answered. If anyone actually believes that's possible (the comprehensive and accurate parts), I have some magic beans to sell you.
I won't beat a dead horse; Richard at taosecurity.com has already come to the same conclusions. Let's move on.
So, if you are really trying to monitor continuously, I think Richard brings up an important question: what are we trying to continuously monitor? Most of us in the federal government are mainly concerned with monitoring the effectiveness of security controls. Richard, however, indicates he'd rather see us monitoring threats. You could pretty easily make a case for monitoring both.
In less than 1000 words, here is what you need to do.
First, you have to establish and use secure configuration baselines. No, not because that guarantees security, but because you need to establish a baseline: what is the baseline security posture of the supporting infrastructure? Through the implementation of these standards you will implement least functionality and reduce avoidable vulnerabilities due to configuration issues. This is your starting point. Here we are dealing with controls.
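To make the baseline idea concrete, here's a minimal sketch of diffing a host's actual settings against a secure configuration baseline. The setting names and values are hypothetical stand-ins; in practice your baseline would come from an established benchmark.

```python
# Minimal sketch: diff a host's actual settings against a secure baseline.
# Setting names/values are hypothetical stand-ins for real benchmark checks.

BASELINE = {
    "password_min_length": 12,
    "telnet_enabled": False,
    "audit_logging": True,
}

def check_compliance(actual):
    """Return a list of (setting, expected, actual) tuples for deviations."""
    return [
        (key, expected, actual.get(key))
        for key, expected in BASELINE.items()
        if actual.get(key) != expected
    ]

host = {"password_min_length": 8, "telnet_enabled": False, "audit_logging": True}
deviations = check_compliance(host)
# One deviation: password_min_length is 8, baseline requires 12
```

Without the baseline dictionary, there's nothing to diff against, which is exactly why this is the starting point.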
Next, ensure that you monitor this infrastructure of baselined equipment. By this I mean you need a comprehensive set of security tools that provide real-time, continuous information about what is happening on your network and systems: firewalls, switches, routers, IDS/IPS, DLP, netflow sensors, audit logging tools, etc., all feeding into some sort of centralized tool that can be customized to identify potential events. Here we are focused on threats.
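A toy example of what "feeding into a centralized tool" means in practice: normalize events from your various sources and apply a correlation rule. Real SIEMs do far more than this, and the event format and threshold here are invented for illustration.

```python
# Toy correlation sketch: count failed logins per source IP across feeds
# and flag sources that exceed a threshold. Event fields are hypothetical;
# a real SIEM normalizes many formats and applies far richer rules.
from collections import Counter

def flag_brute_force(events, threshold=5):
    """Flag source IPs with more than `threshold` failed-login events."""
    failures = Counter(
        e["src_ip"] for e in events if e["type"] == "auth_failure"
    )
    return [ip for ip, count in failures.items() if count > threshold]

events = (
    [{"src_ip": "10.0.0.5", "type": "auth_failure"}] * 7
    + [{"src_ip": "10.0.0.9", "type": "auth_failure"}] * 2
    + [{"src_ip": "10.0.0.5", "type": "auth_success"}]
)
suspects = flag_brute_force(events)  # ["10.0.0.5"]
```

The value isn't the rule itself; it's that all the feeds land in one place where rules like this can be applied at all.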
Next, perform a set of ongoing assessments, including:
- Scans for compliance with your security baselines. (Continuous means something.)
- Scans for vulnerabilities.
- Testing of subsets of management, operational, and technical controls.

Technical controls can change more quickly and therefore require monitoring at more frequent intervals. Your technical testing should include:
- Tests that are targeted to, and appropriate for, the technology in place.
- Penetration testing where appropriate.
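One way to operationalize "technical controls get tested more often" is to assign each control class a review interval and compute what's overdue. The interval values below are arbitrary examples, not guidance, and the control IDs are just illustrative.

```python
# Sketch: schedule control assessments at class-appropriate intervals.
# Interval values are arbitrary examples, not policy recommendations.
from datetime import date, timedelta

INTERVALS = {
    "technical": timedelta(days=30),     # changes fast, test often
    "operational": timedelta(days=90),
    "management": timedelta(days=180),
}

def controls_due(controls, today):
    """Return control IDs whose last assessment is older than their interval."""
    return [
        c["id"] for c in controls
        if today - c["last_assessed"] > INTERVALS[c["class"]]
    ]

controls = [
    {"id": "AC-2", "class": "technical", "last_assessed": date(2010, 1, 1)},
    {"id": "PL-2", "class": "management", "last_assessed": date(2010, 5, 1)},
]
due = controls_due(controls, date(2010, 6, 1))  # ["AC-2"]
```

However you pick the intervals, the point is that they exist and are enforced, rather than everything getting looked at once a year during the C&A scramble.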
The important part of these first few points is that we create a better understanding of the security controls and vulnerabilities in our deployed technologies, and we better understand the threats actively targeting the organization. This allows for a better assessment of risk and, as a result, the ability to better prioritize improvements. Of course, as you get all this going, you also need a very effective remediation function in your organization. All the vulnerabilities identified in testing obviously need to be triaged and dealt with, but as emerging threats are identified there should (hopefully) be strategies for dealing with those as well.
On top of all this, you need a defined set of metrics and a tool that tracks those metrics for you. These should span from the technical to the non-technical. This may actually be the hardest part to get right. The metrics need to present an accurate picture of the state of security in the enterprise, but should remain "dashboard-level" because upper management is the intended audience. Your SOC analysts will be using the SIEM as their "dashboard"; upper management doesn't want to see daily potential-intrusion alerts.
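The trick is that dashboard-level numbers should be rolled up from the raw operational data, not reported as a separate exercise. A hypothetical sketch, with made-up field names and metrics:

```python
# Sketch: roll raw per-host scan results up into dashboard-level metrics.
# Field names and the chosen metrics are hypothetical illustrations.

def dashboard_metrics(hosts):
    """Summarize per-host scan data into enterprise-level figures."""
    total = len(hosts)
    compliant = sum(1 for h in hosts if h["baseline_compliant"])
    critical = sum(h["critical_vulns"] for h in hosts)
    return {
        "pct_baseline_compliant": round(100.0 * compliant / total, 1),
        "critical_vulns_open": critical,
    }

hosts = [
    {"baseline_compliant": True, "critical_vulns": 0},
    {"baseline_compliant": True, "critical_vulns": 2},
    {"baseline_compliant": False, "critical_vulns": 5},
]
summary = dashboard_metrics(hosts)
# {"pct_baseline_compliant": 66.7, "critical_vulns_open": 7}
```

Two numbers derived from the same data the SOC already works with: management sees the rollup, the analysts see the detail, and nobody is hand-assembling a second set of books.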
The part I think OMB is struggling with is that it is trying to create the set of metrics but has no real control over the data that goes into its reports. As a result, its dashboard is next to useless. The approach laid out in memo M-10-15, however, does nothing to improve that situation. There needs to be some acceptance of the fact that certain data is not going to be shared. Nor should it be! Why would we want every government agency submitting monthly vulnerability scan reports (as an example) to OMB? Sure, OMB would have more reliable and timely data, but it would also know a whole lot more than it needs to. Then, if an attacker wanted current vulnerability information, they would need only get access to OMB's reporting system.
On some level, OMB needs to simply take the data that agencies and OIGs provide and trust that it is accurate. OIGs, for their part, need to do a better job of auditing security programs. They are often too "results-driven": they search for problems instead of trying to understand the security posture of the subject enterprise, and (I know I'm generalizing, but this is just my experience) often lack the technical chops to really understand how operational security should function. In essence, it can sometimes seem that no matter how good or bad your security program is, the OIG will find something wrong, because if they don't, they feel they've failed. Always finding a problem is not the mark of a good audit team. The really sad part, though, is that when an agency does have a bad security program, the OIG audit team often identifies only superficial issues that fail to capture the real problems.
Unfortunately, OMB's M-10-15 memo does not add any sort of continuous monitoring to anyone's program. It does, however, create a useless set of additional reporting requirements and processes. The result will be more money spent on the same amount of security.