A Follow-up on Quantitative Risk Analysis

The other day Terry wrote a piece on risk assessment, focused primarily on quantitative assessment.  It brought to mind this piece from earlier this year by Richard Bejtlich at taosecurity.blogspot.com.  In it, Richard rightly points out the flaws in Craig Wright’s formula for risk, but in constructing the piece he also throws the entire notion of quantitative risk assessment out with the bathwater, so to speak.

First, one of the commenters seems to have dissected the issue most accurately.  The problem with Craig and his derivative formula for risk in an SMS banking application is that Craig is “bombastic” and comes off as a know-it-all who actually knows very little about security in the real world.  He asserts his formula, and the notion that risk can be measured to six digits of accuracy, as fact, and treats anyone who does not agree as a simpleton.

In his words: “This is the risk of your vulnerability.”

So, in that respect Richard is correct.  Risk cannot be measured with such a degree of accuracy.  And anyone who sells a risk model as some sort of crystal ball does their clients and colleagues a disservice.  Why?  Because, above all else, nine times out of ten risk analysis is used as the basis for making a decision of some sort.  When the decision maker is convinced that the information feeding their decision is 100% accurate, the outcome can be very different than if they understand that there is some degree of estimation or potential error inherent in the risk information.

Where I have to disagree with Richard is in saying that quantitative risk “model = clown.”  He asserts that modeling led to the economic collapse, and of course, who could argue that models were not involved there?  If we stop for a second, though, we realize that those models failed probably due much more to:

  • the fact that they were based on obviously flawed assumptions (for example, that housing prices would continue to rise in perpetuity); and
  • the fact that there was a very real separation of duties problem at play.

You had people trying to sell a widget using a model to prove that the widget was worth $X.  No reason to question that, right?  No problem, there are “independent” rating firms that will validate that these things really are worth what the model says, right?  Yes, but the people selling the widgets pay them to do the ratings... so how does that independence thing work again?  Oh right, we all lose our shirts while they make a killing.

This issue might be true for security risk analysis to some extent, but generally security people understand that the person assessing the risk in a system cannot be the person who built the system and sold it to you.  I doubt anyone in the field puts much stock in any sort of self-certification, and so the shell game the financial industry used to give the illusion of “independence” is not, in my opinion, an example that translates well to the field of information security.

However, Richard is still right that Craig’s claim to have a “means of calculating risk with a high degree of accuracy” is enough basis to classify him as a clown.  He may have a pretty reliable model.  But claiming that you can plug some numbers into a formula and have the formula make a decision about the efficacy of a banking authentication mechanism, without having to put any thought into it, is ridiculous.

A risk calculation, whether qualitative or quantitative, serves as informational input to a decision.  It is up to an intelligent human to understand that the number or qualitative value is not 100% accurate, but rather puts them within some range of accuracy.  More often than not, a risk assessment scoped as narrowly as the one Craig puts forth also serves almost no purpose.  If I am developing the application, what I should really be asking is for someone to take a holistic look at the application, provide me with an assessment of the various risks in all parts of the application, and then let me view the complete set.  What I care more about is whether the output of the formula rated the risk in the authentication module as more or less serious than the one identified in the authorization controls.  Further, on a scale from low to high, do these fall toward one end or the other?  And lastly, but probably most important: what are the recommended mitigations for each of the risks?
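To make that concrete, here is a minimal sketch in Python of how I actually want to consume that kind of output.  It is emphatically not Craig’s formula; the modules, likelihood figures, impact figures, and mitigations are all invented placeholders.  The point is that the useful products are the relative ranking, the coarse low/medium/high rating, and the attached mitigation, not the raw numbers.

    # A hypothetical set of findings for one application.  Every number below is
    # a rough, made-up estimate: annual likelihood of exploitation and dollar impact.
    findings = {
        "authentication": {"likelihood": 0.30, "impact": 250_000,
                           "mitigation": "add a second factor to the SMS channel"},
        "authorization":  {"likelihood": 0.10, "impact": 400_000,
                           "mitigation": "enforce server-side role checks"},
        "audit logging":  {"likelihood": 0.05, "impact": 20_000,
                           "mitigation": "centralize and monitor audit logs"},
    }

    def ballpark(expected_loss):
        """Collapse a rough expected-loss figure into a coarse rating."""
        if expected_loss >= 50_000:
            return "high"
        if expected_loss >= 10_000:
            return "medium"
        return "low"

    # Rank findings relative to one another; the ordering and the coarse rating
    # are the useful outputs here, not the precision of the numbers themselves.
    ranked = sorted(findings.items(),
                    key=lambda item: item[1]["likelihood"] * item[1]["impact"],
                    reverse=True)

    for name, finding in ranked:
        expected_loss = finding["likelihood"] * finding["impact"]
        print(f"{name:15s} ~${expected_loss:>9,.0f}/yr "
              f"[{ballpark(expected_loss)}] -> {finding['mitigation']}")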

The output of Craig’s formula serves no use whatsoever, whether I believe him to be the René Descartes of risk or not.  In order to decide whether there is some corrective action or mitigating control to implement for the authentication module, I need to know what the recommended or possible corrections are, and whether they are within reason given the seriousness of the identified risk and my overall goals for the security of the program.  So, I don’t need to know the risk as a number to six digits of accuracy.  I just need to know the ballpark it is in to determine if a fix should be applied.  If I’m unsure because they are close, an intelligent person will probably seek more explanatory information (not more numbers) regarding what the issue is, how an attacker would exploit it, and the damage that could be done.
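The decision rule that ballpark feeds can be just as rough.  Another hedged sketch, again with invented figures: compare a low/high expected-loss range against the cost of the proposed fix, and only dig for more explanatory detail when the two are close.

    def decide(loss_low, loss_high, mitigation_cost):
        """Turn ballpark expected-loss figures into a coarse recommendation."""
        if loss_low > mitigation_cost:
            return "apply the fix"            # even the low estimate justifies it
        if loss_high < mitigation_cost:
            return "accept the risk for now"  # even the high estimate does not
        return "too close to call: gather more explanatory detail"

    print(decide(40_000, 120_000, mitigation_cost=25_000))  # apply the fix
    print(decide(2_000, 8_000, mitigation_cost=50_000))     # accept the risk for now
    print(decide(20_000, 60_000, mitigation_cost=45_000))   # too close to call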

So, I think Terry was right to point out that there is a place for quantitative analysis, and that as time advances we should be able to collect enough historical data to make well-informed decisions.  I also think Richard was right that people like Craig are selling snake oil.  My point, though, is that those two statements are not contradictory.  As long as we understand what the output of the quantitative model really amounts to, we can use it to make effective decisions.