The Value of Quantitative Risk Analysis

Greetings blog followers and welcome to my inaugural blog post. I have recently joined the MindPoint Group family and look forward to becoming a regular contributor. For my first post I would like to present some of my opinions regarding risk assessment, particularly as it pertains to the employment of qualitative vs. quantitative risk assessment.

I recently reviewed an article that appeared in the ISACA Journal. The article, Looking at IT Risk Differently, discussed a method of quantifying IT risk. The author identifies two types of risk: the risk that is common to all participants, and the risk that is specific to a single entity (person, company, agency, etc.). Rameshkumar calls the common risk systematic and attributes it to external factors that would necessarily affect all other participants in an activity. Unsystematic risk is the risk that is unique to an organization. “Systematic risk is also known as generic risk, and unsystematic risk is also known as specific risk” (Rameshkumar, 2010).

It is generally accepted that risk is a function of likelihood and impact. In mathematical terms the statement would look something like Risk = Likelihood × Impact, or R = L × I. Because likelihood involves a degree of uncertainty, Rameshkumar suggests that it is appropriate to convert likelihood to probability, thereby converting a qualitative measure, likelihood, into a quantitative measure, probability. Probability ratings are then assigned based on the trend data available.
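The quantitative view above can be sketched in a few lines of code: once likelihood is expressed as a probability, risk becomes an expected loss. The scenarios, probabilities, and dollar impacts below are hypothetical illustrations of mine, not figures from the article:

```python
def expected_loss(probability: float, impact: float) -> float:
    """Risk = Probability x Impact, read as an annualized expected loss."""
    return probability * impact

# Hypothetical annual probabilities and dollar impacts (illustration only)
scenarios = {
    "server room fire": (0.01, 500_000),
    "laptop theft": (0.15, 8_000),
}

for name, (p, impact) in scenarios.items():
    print(f"{name}: expected annual loss = ${expected_loss(p, impact):,.0f}")
```

The hard part, of course, is not the multiplication but defending the probability you plug in, which is exactly the point of the debate that follows.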

Rameshkumar then provides a hypothetical risk analysis for loss due to fire. It is here that the author delves into the mathematical world of statistics and applies calculations of the mean, variance, and standard deviation of a probability distribution, all of which have highlighted the need for me to take a statistics course. However, I have enough information to pose my personal critique.
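For readers who, like me, could use the refresher: the statistics the author applies can be illustrated with a made-up loss distribution. Every outcome and probability below is an assumption of mine for demonstration, not data from the article:

```python
# Hypothetical distribution of annual fire-loss outcomes (dollars)
losses = [0, 10_000, 100_000, 1_000_000]
probs = [0.90, 0.07, 0.025, 0.005]   # assumed probabilities; they sum to 1

# Mean (expected loss), variance, and standard deviation of the distribution
mean = sum(p * x for p, x in zip(probs, losses))
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, losses))
std_dev = variance ** 0.5

print(f"mean (expected loss): ${mean:,.0f}")
print(f"standard deviation:   ${std_dev:,.0f}")
```

A large standard deviation relative to the mean, as in this sketch, is the statistician's way of saying the rare catastrophic outcome dominates the uncertainty.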

If you’re not familiar with it, there has been a long-running debate within the field of information assurance over the use of qualitative versus quantitative risk analysis. Arguably, quantitative measures are more favorable in almost every case. I say ‘almost’ not because I know of any specific exceptions (I don’t), but because I never say never or always.

The problem with quantitative analysis of risk in IT security is that, by its very definition, it requires real historical data to back up the numeric values assigned. Providing probability values for something like a fire or a flood, and possibly even a natural disaster, isn’t that difficult because insurance companies have been tracking those trends for decades. Calculating the probability that someone in accounting will successfully exploit a vulnerability in a software package is not so cut and dried.

One of the reasons insurance companies provide coverage for driving a vehicle is that they have a wealth of actuarial data on the probability that your teenage son will be involved in a collision during his first year or two of driving. This probability is fairly static, and though trends do shift, in most cases it isn’t a large shift over a short period of time. The same cannot be said for IT security, however. Trends in this industry experience much sharper changes over relatively short periods.

Consider that the World Wide Web has only been in existence for the last 20 years or so and began with a very limited user base. A list of ‘reasonably reliable’ web servers, all 26 of them, was provided on W3’s site in November 1993. It wasn’t until around 1995, just 15 years ago, that web browsers, and the PCs to run them, became widely available to the general public. Netcraft, a group that collects and reports on WWW statistics, publishes a monthly web server survey. Its most recent survey, from August 2010, puts the number of active web sites at a little under 96 million.

Security trends have changed dramatically since the beginning. Initially, no one even considered the security implications of an interconnected society like the one we have now, and it has only been within the last 10 years that organizations really started evaluating risk in their IT operations. That’s not to say that security organizations didn’t exist, or that IT security requirements haven’t existed; the NSA published the Orange Book back in 1985, years before the Web went public. The problem is that few organizations saw IT and IT security as a benefit; security has always been relegated to a cost center rather than treated as a strategic investment.

It is my belief, however, that reliable data will eventually become available to support quantitative risk assessment. Insurance companies will slowly increase their presence in the field of risk transference in the IT security world, and as they do, those statistics will become more available and risk profiles can be generated. In the beginning they won’t be truly accurate, and maybe they never will be, but they will be usable.

I’m sure that this subject will come up again here and I welcome all your comments. If there is something in particular you would like to see, please let me know.

-Terry Grogan

Reference: Rameshkumar, A. (2010). Looking at IT Risk Differently. ISACA Journal Vol 1.
