The Guardian is reporting that "A three-week wave of massive cyber-attacks on the small Baltic country of Estonia, the first known incidence of such an assault on a state, is causing alarm across the western alliance, with Nato urgently examining the offensive and its implications." Yahoo is also carrying a similar story from Agence France-Presse (AFP). The accusation is that this is a state-run cyber attack originating from Russia. What makes this especially interesting is that Estonia has been a member of NATO since March 2004. The articles point out that NATO does not currently define a "cyber attack" as a military action, so such attacks would not trigger Article V, which is basically the "an attack on one is an attack on all" doctrine. Leaving aside the politics of the Estonia/Russia dispute, which I know nothing about, the situation does raise interesting questions: should a state-backed cyber attack be considered a military event, and how should countries and alliances respond in such a case?

In working with the customers I do, which include government agencies you would expect to be targets of this type of activity, it is clear that most people, and even policy makers in some cases, do not realize how much of this activity occurs every day in terms of organized network reconnaissance and penetration efforts. My view is that over the last three years in particular, information security has grown markedly more difficult and complicated as the threat has moved from the proverbial "teen in the basement" writing viruses and looking for fame to a much more dangerous environment where attacks come from organized crime rings with theft and fraud as their goal, and from governments and terrorist organizations actively working to find weaknesses in critical military or economic systems. These actors are not going to publish their exploits, or use them in a large-scale, easily detected manner.

That is the main reason it is so foolish when you hear users of the various operating systems saying "my OS is much more secure" or "there are no remotely exploitable vulnerabilities in X, Y, or Z." The bottom line is that it is impossible to know with certainty. All code has flaws, and over time new means of exploitation are found that were not knowable previously. That is a key point: even if you had the resources to exhaustively analyze a piece of code or a complete OS today, and could find every single flaw with perfect knowledge of all existing computer science, there is still a high likelihood that within the useful life of that code new, previously unknown techniques will be developed to exploit it. This is why military and intelligence agencies put so much emphasis on defense in depth and other layering techniques. If lives are on the line, you don't assume that any component is completely secure, whether it is Vista, OS X, Linux, or anything else. In evaluating system components, those that themselves implement defense in depth should be given preference.
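The layering idea can be sketched in a few lines of Python. This is purely an illustrative toy, not any particular system's design: the layer names and checks (`check_network_acl`, the token string, the privilege name) are assumptions invented for the example. The point it demonstrates is that a request must independently pass every layer, so a flaw in any single check does not by itself grant access.

```python
def check_network_acl(request):
    # Layer 1: only allow requests from an approved source network (toy check).
    return request.get("source_ip", "").startswith("10.0.")

def check_authentication(request):
    # Layer 2: require a valid session token (stub comparison for illustration).
    return request.get("token") == "valid-session-token"

def check_authorization(request):
    # Layer 3: the authenticated caller must hold the needed privilege.
    return "read:records" in request.get("privileges", [])

def check_input(request):
    # Layer 4: validate the payload even after the caller is trusted.
    return isinstance(request.get("record_id"), int)

LAYERS = [check_network_acl, check_authentication,
          check_authorization, check_input]

def handle(request):
    # Every layer must pass; failure at any one layer denies the request,
    # so compromising a single layer is not enough to get through.
    for layer in LAYERS:
        if not layer(request):
            return "denied"
    return "allowed"

good = {"source_ip": "10.0.1.5", "token": "valid-session-token",
        "privileges": ["read:records"], "record_id": 42}
bad = dict(good, token="stolen-but-wrong")
print(handle(good))  # allowed
print(handle(bad))   # denied: auth layer stops it despite valid network/input
```

Real systems layer much heavier machinery (network segmentation, host hardening, auditing), but the structural idea is the same: no single control is trusted to be flawless.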