The other day I read this article over at TaoSecurity, which was a follow-up to a post from a couple of weeks ago. Richard has been talking about some pending SEC regulations and how they will impact security. Specifically, the issues he has written about deal with protecting whistleblowers from retribution if they come forward regarding data breaches at a publicly traded company.
On the face of it, this seems to be great for transparency. Although data breaches involving PII and other customer data have been a big issue for more than a decade now, there has been little traction in terms of putting mechanisms in place to protect customer information. Companies are missing the fundamentals and putting customer data at risk as a result. It may not be the main issue, but there are certainly a lot of conversations about the age-old problem of whether a security investment is cost-effective. People are generally still very bad at quantifying how much an exposure and/or the corresponding security initiative might be worth.
It's simply not easy to estimate how much revenue will be lost due to a breach. Quite often the same breach would have different financial impacts for different companies. People may have been outraged about the Sony PSN breach(es) last year, but they were not likely to return their PlayStations as a result. Lawsuits can have a real financial impact, of course, but how many people curtailed their use of PSN after the breach, and how many people who did not have a PlayStation at the time opted to buy an Xbox instead once the news broke? Even now that's not a clear number, and predicting it in advance is even more difficult.
These regulations involving whistleblowers and regulatory fines can be very important in bringing some consistency to the equation. The fines will likely be easier to predict than other aspects of the fallout from a breach. More importantly, protections for whistleblowers raise the likelihood that if a breach occurs, the public will know about it.
The most interesting question to me is how these rules will play out in the real world. Obviously, the assumption is that security personnel with knowledge of a breach will be inclined to disclose it even if their employer is not. Richard mentions that they can stand to earn bounties for their disclosures, incentivizing them to come forward. However, what if this security person had identified the "missing fundamentals," requested funding for corrective measures, received it, and then the breach still occurred?
Of course, in some cases the breach occurs because the attacker has the advantage and is sufficiently skilled, and the cards are stacked against defenders working with limited time and technology. However, as I said, some breaches stem from a failure of a fundamental nature. Here, we have a security group that was provided with the requested resources, so the fundamental failure is probably on their part. Maybe they focused on the wrong thing ("that vendor said if we bought the silver bullet model it would fix everything . . . "), or they implemented the right tools and then never actually spent the time to configure and use them properly. Whatever the reason, the breach and the fact that the company made investments in security place some or all of the blame for the breach back on the shoulders of the person or group most likely to be in a position to blow the whistle. I'd be surprised if this didn't disincentivize some of the potential whistleblowers out there.
I'm not saying the rules aren't necessary and good. In general, customers and investors are at a great disadvantage here, so the rules are a necessary effort. My question is more a curious musing about the decisions people in some of these organizations may face when considering whether to blow the whistle.