It seems that I am not yet gone :-) Eric Bidstrup, a colleague of mine, wrote a great blog post about Common Criteria, examining where it does a pretty good job and where it fails. Basically he claims – and I could not agree more – that the customer "only" wants to know whether the operating system "is safe". I quote:

In terms of software security, all of the following most people would think of as being "bad": Viruses, worms, malware, hackers, criminals, and espionage. These items listed have one thing in common – all of those bad things require a weakness (a "vulnerability") in the software used, and finding a way to exploit that vulnerability for a nefarious purpose.

I slightly disagree, as we have seen a lot of attacks on perfectly patched systems that exploit not a software vulnerability but the user. However, as we will never be able to "Common Criteria certify" the user, the definition definitely works for the Common Criteria discussion. He makes another pretty remarkable statement:

It has been our experience that customers typically don't care whether they are exposed to risk from a design vulnerability or an implementation vulnerability, they care that they are exposed to risk. Period. When customers ask "Is it Safe?" they expect software that can be deployed and maintained to operate securely in the face of adversarial activity.

Well, this is clearly true. His final conclusion is:

If customers expect a real-world answer to the question "Is it Safe?" to be answered by Common Criteria, then Common Criteria must change.

You can read the full post here: Common Criteria and answering the question 'Is it Safe'

Now, I expect this will reignite the same discussion I had on my post The Value of Operating System Comparisons. In that post I commented on several posts about the value of using vulnerability counts to compare operating systems. One of those posts was called Operating systems aren't any more secure than the idiot using it, and in a comment there, dre suggested a five-star rating system for software. It is an interesting concept but, in my opinion, it does not scale. And this is part of the CC problem as well: it takes much too long to certify a piece of software.

However, I think we need a public debate about software certification, and about what it takes to achieve the best possible security while still ensuring the necessary level of backward compatibility.