Mike Andrews here. Painting with a very broad brush, the vulnerabilities we see can be split into two categories -- flaws and bugs. Flaws are inherent problems with the design of a system or application -- Dan Kaminsky's DNS vulnerability would be a good example. Bugs, on the other hand, are issues with the implementation of the software; the classic example would be a buffer overflow. It's not a perfect taxonomy, and we can start splitting hairs in some instances, like the CSS attacks Gareth Heyes, David Lindsay, and Eduardo Vela Nava showed at BlueHat (is it a flaw in the spec that allows such abuse of functionality, or a bug in that browsers shouldn't be executing such code?), but generally one can place a vulnerability in one of these categories quite easily.

Another distinction is the one between vulnerabilities and exploits, and this one is much easier to make. A vulnerability is an issue (a flaw or a bug) that allows a bad thing to happen (e.g., a buffer overflow), while an exploit is what can be done with it (e.g., crash a machine, escalate privilege, execute foreign code). It's usually a one-to-many relationship -- a single vulnerability can be exploited in numerous ways.
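A toy sketch (my own, not from any real codebase) makes the one-to-many relationship concrete. The single vulnerability below is building SQL by string concatenation; the two different attacker inputs turn that one bug into two different exploits. The function and table names are hypothetical:

```python
def build_query(username):
    # VULNERABLE: user input is concatenated straight into the SQL string.
    # This single bug is the vulnerability.
    return "SELECT * FROM users WHERE name = '" + username + "'"

# Exploit 1: authentication bypass -- the tautology makes the WHERE clause
# match every row.
bypass = build_query("' OR '1'='1")
print(bypass)

# Exploit 2: data destruction -- a second statement is smuggled in and the
# rest of the original query is commented out.
destroy = build_query("'; DROP TABLE users; --")
print(destroy)
```

One flawed line of code, many distinct outcomes -- which is why fixing the vulnerability (parameterized queries, in this case) beats chasing individual exploit payloads.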

2008 was an interesting year for security, although, I posit, a good number of the vulnerabilities that were disclosed, talked about, and garnered a lot of press in the past year were in fact simply more creative exploits. While the underlying vector of many vulnerabilities hasn't changed (injecting foreign code into a webpage, for example), what can be done with them has advanced (CSS history stealing, JavaScript-based internal network scanning, clickjacking, etc.).

The thing is, however, that although a better exploit advances the field in some ways -- showing, say, that a particular mitigation, update, or architecture doesn't really protect us -- it seldom adds anything to the conversation. What was once broken, then fixed, is now broken again; we scratch our heads and go back to square one, learning little from our mistakes. Nate McFeters' previous post about old issues becoming new again couldn't be more spot on -- the past appears to be fruitful ground, and many "old" vulnerabilities are finding a new lease on life with better exploits. The recently disclosed research on creating a fake CA certificate from MD5 collisions proves this point. We knew this specific attack was possible back in 2007 (and thus most CAs changed to SHA-1) -- I guess sometimes it really takes an exploit to force some into action. [Ed: For Microsoft's recommendations on this issue, see these MSRC and SVRD blog posts.]
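The mechanics behind that forgery can be sketched in a few lines. This is a deliberately simplified toy (all names are hypothetical, and the "hash" here is trivially weak so the collision can be computed in one step; the real attack required MD5 chosen-prefix collisions and enormously more work), but the structure is the same: a CA signs only the hash of a certificate, so a signature over a benign request also validates any colliding rogue certificate.

```python
def weak_hash(data: bytes) -> int:
    # Deliberately weak stand-in for MD5: sum of bytes mod 256,
    # so collisions are easy to construct.
    return sum(data) % 256

def ca_sign(cert: bytes) -> int:
    # The CA "signs" only the hash of the certificate, as real CAs do
    # (stand-in for an RSA signature over the digest).
    return weak_hash(cert)

def verify(cert: bytes, signature: int) -> bool:
    return weak_hash(cert) == signature

benign = b"CN=innocent.example"      # what the CA is actually asked to sign
rogue_base = b"CN=rogue-CA,CA=true;" # what the attacker wants signed

# The attacker appends a padding byte chosen so the two hashes collide
# (the real attack appended carefully computed collision blocks).
pad = (weak_hash(benign) - weak_hash(rogue_base)) % 256
rogue = rogue_base + bytes([pad])

sig = ca_sign(benign)                # CA signs the benign request...
print(verify(rogue, sig))            # ...but the signature also validates
                                     # the rogue certificate: prints True
```

The fix in 2007-2008 was not to block any particular rogue certificate but to remove the vulnerability itself by moving CA signatures off MD5 -- exactly the root-cause thinking this post is arguing for.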

We're never going to get rid of bugs or flaws: software development is a human process, and we make mistakes or overlook things. Software is also getting more and more complex, which means there is more opportunity for vulnerabilities and more ways of exploiting systems. However, I believe we are missing a huge trick here; other than those of us who are really into software and security, how much does the wider community actually learn from an exploit versus understanding its root cause? An exploit causes panic and a rush to create or find a patch, and then, once we are "safe" again, it's often forgotten. If you don't already, subscribe to the Microsoft Security Vulnerability Research & Defense blog (at http://blogs.technet.com/swi/default.aspx) for an in-depth look at the details behind vulnerabilities, what they affect, and why they came about in the first place. I can't recommend studying this kind of information enough.

So, my call for 2009 is for us not to focus as much on the exploit, even though that is difficult to do (who doesn't like a good magic trick, a clever hack, an example of why software is flawed?). Focusing on the why, rather than the what, is what I believe makes software better, and there's (still) so much to learn from previous mistakes.

-Mike Andrews

Principal, Foundstone Professional Services