UPDATE: You may want to also read Apples, Oranges and Vulnerability Metrics for a discussion of issues related to comparing OSes having different sets of applications.
6 Month Vulnerability Summary Chart
Well, the first half of 2006 is behind us and I've completed an initial vulnerability comparison between Windows XP Service Pack 2 (winxpsp2) and the two enterprise versions of Red Hat Desktop that have shipped - Red Hat Desktop 3 and 4. I'm going to start with the basic data and then update this blog entry as clarification or further analysis is necessary.
Whether you look at the all-up total of vulnerabilities, at high-severity vulnerabilities only, or at the weighted Workload Vulnerability Index, it is hard to argue against the conclusion that Red Hat Desktop 3 required less vulnerability-driven work than Red Hat Desktop 4, and that Windows XP SP2 required less than either. Read this FAQ if you have questions about the analysis or data. If you still have questions, ask away and I'll update as needed. First, let's start with the basic vulnerability counts. I've charted the totals for the six-month period in chart 1, and will also lay out various details in a later section.
So, of course, if a Microsoft executive were to do a "red hot candy" demo, you can see the results are pretty drastically in favor of Microsoft. However, as Mark Cox likes to point out, that's not the whole story: lots of low-severity issues are not equivalent to lots of high-severity issues. Let's apply a weighted formula to normalize the chart.
Next, I also calculated and charted a monthly Workload Vulnerability Index as defined by NIST, similar to the one Mark Cox introduced in the Red Hat RHEL4 Risk Report. I used the NIST ratings rather than vendor ratings in order to draw on a more objective, common source of severity data; read the FAQ for more on why. Below is the charted WVI, a more normalized comparison of the severity-rated issues affecting all three platforms over the past 6 months. You can see the full details by month in a later section.
What does this metric mean, though? It basically establishes that a High-severity vuln is 5x as bad as a Medium and 20x as bad as a Low. By weighting the counts accordingly and summing them, we get the equivalent number of High-severity vulns one would have. Then, by dividing by the days passed, the WVI value essentially represents the equivalent number of High-rated vulnerabilities per day the product had over a given period.
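To make the weighting concrete, here is a minimal sketch of the calculation as described above. The 1 : 1/5 : 1/20 weights and the per-day normalization come from the text; the function and variable names are my own illustration, not NIST's or the original analysis's notation.

```python
def wvi(highs, mediums, lows, days):
    """Workload Vulnerability Index, per the weighting described above:
    a Medium counts as 1/5 of a High, a Low as 1/20 of a High,
    normalized to weighted High-severity vulns per day."""
    weighted_highs = highs + mediums / 5 + lows / 20
    return weighted_highs / days

# Hypothetical example: 10 Highs, 15 Mediums, 8 Lows over a 30-day month
# gives 10 + 3 + 0.4 = 13.4 weighted Highs, or about 0.447 per day.
print(round(wvi(10, 15, 8, 30), 3))
```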
Unique Vulnerability Count versus Vulnerability Fix Events
In the course of my analysis, calculating Red Hat days-of-risk surfaced an issue that hasn't been publicized much.
In general, Microsoft tries very hard to follow a policy of patching an issue on all versions in all languages at the same time. Why? If you don't follow this policy, then the first fix raises awareness of a vulnerability to a much higher level, increasing risk for any of the versions that don't yet have a patch. Imagine that you have a patch for your US-English product, but all of the localized versions are still vulnerable - not good for a global company. So, the 39 vulnerabilities fixed by Microsoft were all fixed in all supported products at the same time, as far as I can tell. (This is not always true, but it is most of the time.)
The same cannot be said for the Red Hat products. For rhd3, there were 96 fix events for the 92 unique vulnerabilities. What is a fix event? When a vulnerability is fixed in a package on a given day, that is a fix event. So if cups and tetex both share a vuln and are fixed on the same day, there is only one fix event; if they are fixed on different days, representing increased days-of-risk for one or the other, then there are two fix events. On rhd4, it was much more drastic. The 135 unique vulnerabilities had 173 fix events, indicating that 38 times a vulnerability was not addressed in all packages on the same day.
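The distinction above can be sketched in a few lines: a unique vulnerability is counted once per CVE, while a fix event is counted once per (CVE, fix date) pair. The records below are invented for illustration (the CVE ids, packages, and dates are not from the actual data set).

```python
# Hypothetical fix records: (CVE id, package, fix date) - invented data.
fixes = [
    ("CVE-2006-0001", "firefox",     "2006-02-01"),
    ("CVE-2006-0001", "mozilla",     "2006-02-05"),  # same vuln, 4 days later
    ("CVE-2006-0001", "thunderbird", "2006-02-09"),  # same vuln, later again
    ("CVE-2006-0002", "cups",        "2006-03-10"),
    ("CVE-2006-0002", "tetex",       "2006-03-10"),  # same vuln, same day
]

# One entry per CVE, regardless of how many packages were patched.
unique_vulns = {cve for cve, _pkg, _day in fixes}

# One entry per (CVE, day): same-day fixes collapse into one event,
# different-day fixes for the same CVE count separately.
fix_events = {(cve, day) for cve, _pkg, day in fixes}

print(len(unique_vulns), len(fix_events))  # 2 unique vulns, 4 fix events
```

With real data, the gap between the two counts (173 - 135 = 38 for rhd4) is exactly the number of times a shared vulnerability was fixed on an extra, later day in some package.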
Because of the nature of the Open Source model, there seems to be a higher tendency (unscientifically speaking) to just copy a piece of code and reuse it in other components. This means that if a piece of code turns out to be flawed, not only must it be fixed, but maintainers must also find every place that blob of code might've been reused. A visual inspection showed me that many of these were the multiple vulnerabilities affecting firefox, mozilla and thunderbird. In a typical example, the firefox packages were fixed, then the mozilla packages were fixed 4 days later, then thunderbird was fixed 4 days after that.
Vulnerability Summary Details by Product
The following tables detail how many vulnerabilities were patched each month, by NIST severity, including totals and days-of-risk for each breakout. The WVI metric is also calculated for each month, along with the total for the period.
One other interesting stat I looked at was how many of the vulnerabilities fixed were publicly disclosed at the time the product became generally available. Windows XP SP2 had zero. Red Hat Desktop 3 fixed 4 issues that had been disclosed prior to product ship (10.23.2003). Red Hat Desktop 4 fixed 7 issues that had been disclosed prior to product ship (2.15.2005).