Cyber Warfare, the 0-Day Exploit Market, and the Rest of Us: Part 1
So, aside from learning that there are top government officials with no regard for the commitments they’ve made to protect information critical to national security, this NY Times article has also taught us quite a lot about where we stand in terms of engaging in cyber warfare. Most obvious is the fact that the US is engaged in real efforts that can accurately be called cyber warfare in the first place. Until now, I had only read speculation about what cyber warfare is, how it might be viewed legally, what technical problems might be encountered, and if or when the country might ever engage in it.
The first interesting piece of information from the report is that the Olympic Games project was unique in some regards in that it was able to translate errors in 1s and 0s into real physical damage. This will not always be the case with a cyber attack. However, it doesn’t need to be. Damage to the integrity of financial sector data is likely to be just as damaging as any physical damage that could be inflicted via a computer. Also, while real physical damage was done to centrifuges at the Natanz plant, delays to Iran’s uranium enrichment could likely have been achieved by causing the devices supporting the plant to malfunction intermittently, without actually inflicting physical damage on them.
The second interesting piece of information is perhaps the most obvious one- mistakes were made which allowed Stuxnet to escape into the wild. It is doubtful that any piece of software ever written has been bug-free. Even tightly-controlled software which implements only a few well-understood and documented features, such as those associated with Programmable Logic Controllers (PLCs), is not bug-free. In fact, there are a number of known bugs and resulting vulnerabilities associated with this type of equipment.
The problem here is that a PLC usually controls a single, specific function for a piece of associated hardware. These are often found in industrial control systems, where an error in the amount of water released by a PLC-controlled dam gate can have devastating effects. Such impacts are exactly what project Olympic Games sought to exploit: inflicting physical damage on the centrifuges by taking control of the PLCs driving them. I have to assume that errors in a cyber weapon carry similarly severe potential impacts. What would have happened if the error which occurred had been compounded? The error let the malware out into the wild. If the error had also impeded the malware’s ability to accurately identify the PLCs it was intended to take control of, who is to say it would not have eventually found a similar PLC controlling those dam gates? Or sewage control valves. Maybe it would have migrated to traffic control systems.
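The failure mode described above comes down to how strictly a piece of malware fingerprints its target before firing. The sketch below is purely illustrative, not Stuxnet’s actual logic; every field name, value, and threshold in it is invented for the example.

```python
# Hypothetical sketch of target fingerprinting in a cyber weapon.
# A payload that fires only when the environment matches a strict
# fingerprint stays dormant everywhere else -- but if a bug loosens
# the check, the payload could fire on similar, unintended PLCs.

def matches_target(plc):
    """Return True only if this PLC looks like the intended target.

    All field names and values here are invented for illustration.
    """
    return (
        plc.get("vendor") == "ExampleVendor"        # specific manufacturer
        and plc.get("model") == "EX-315"            # specific controller model
        and plc.get("attached_drives", 0) >= 33     # expected hardware layout
        and 800 <= plc.get("drive_frequency_hz", 0) <= 1200  # process signature
    )

def maybe_deploy_payload(plc, deploy):
    # Deliberately fail closed: any missing or mismatched field
    # means the payload is NOT deployed.
    if matches_target(plc):
        deploy(plc)
        return True
    return False
```

The point of the sketch is the risk the text worries about: remove or corrupt any one of these comparisons (say, the model check) and a superficially similar controller driving a dam gate or a sewage valve could suddenly satisfy the fingerprint.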
These types of errors may be somewhat far-fetched given the code and structure of the program, but the point is that an error was introduced into a rather complex piece of software being worked on by a collaborative development team from two different countries. Such errors should therefore be expected, not treated as a surprise, and I would actually be surprised if there is not a report analyzing and documenting those risks floating around somewhere. Officials presumably evaluated the possibility of such unexpected incidents, and their impacts if they were to occur, and weighed these against the benefits of moving ahead with the program. If they did not, then I’d say there was a fundamental flaw in the government’s approach to the project.
So, the point here is that as we as a society move toward an environment where we engage in these types of acts with our adversaries, we need to understand the risks. It’s easy to understand that when a bomb falls on you, you will die. It is less obvious to the vast number of computer users (in other words, all of us) what the impacts would be if a piece of malware were unleashed by, say, the Chinese government. What if it infected portions of the energy sector such that none of the northeastern states could receive natural gas to heat their homes for the winter? Does the US have contingency plans in place to deal with such incidents?
Third, the sources for the article state that Stuxnet was the only real product of the US’s engagement in cyber warfare. However, we recently discovered the Flame virus, which appears to be similar to Stuxnet in terms of what it targets and the complexity of the measures used to obfuscate it. Unlike Stuxnet, which was designed to carry out active attacks, Flame appears to be designed primarily for espionage. Perhaps the US was not involved in the development of Flame, but it certainly seems to be related. Perhaps it is a variant developed solely by Israel based on their collaboration with the US on Stuxnet.
In any case, the fact that Israel and the US are confirmed to be engaged in cyber warfare should be a strong indication that major adversaries such as China and Iran are, or soon will be, developing this type of capacity as well. It also indicates that there is a strong possibility that non-state actors such as terrorist groups are or will be engaging in these types of activities as well.
Fourth, the article confirms something that everyone in the security industry already knows- the human element is perhaps the most dangerous one in any computer system. Stuxnet did not compromise the computers at the Natanz facility through some sort of advanced hacking technique. It gained entry because people carelessly brought USB drives into the facility, carrying the Stuxnet malware with them. From there, the attackers were only a single USB insertion event away from severely impeding a country’s nuclear capability. Even (perhaps) the most advanced malware threat the world has discovered required a fundamental fault in the human-computer interface in order to succeed.
That’s it for now. In Part 2 we’ll discuss how the 0-day exploit market has evolved over the past few years.