From this press release:
"Juniper Networks Inc. (NASDAQ: JNPR) and Microsoft Corp. (NASDAQ: MSFT) today announced the companies are working together to provide customers and partners with open standards-based interoperability between Juniper Networks Unified Access Control (UAC) and Microsoft Network Access Protection (NAP). "
This is similar to an announcement last year that we made with Cisco around interoperability with their Network Admission Control system. What's exciting about these technologies is the integration of the network infrastructure with the server and client infrastructure. To date, these infrastructures have mostly not been well integrated: the network infrastructure has had little or no awareness of the type and state of the clients and servers attached to it, and the computing infrastructure has had little or no awareness of the underlying network it relies on, other than its up or down status. With these interoperability announcements, decisions at the network level will be able to leverage data originating from higher in the stack. The most frequently cited example is an unpatched client machine attaching to the network: with the integration of the network UAC/NAC with NAP, the client machine will be prevented from connecting to the main network until the appropriate patches are applied. While extremely valuable, this example just scratches the surface of what is possible. NAP is a platform that partners can build on and extend in many ways, such as creating custom health models. The integration of the network and computing infrastructure will enable all kinds of advanced scenarios in the future, such as dynamic quality of service or dynamic secure VLANs based on the authentication method used.
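To make the unpatched-client example concrete, here is a minimal sketch of a health-based admission decision. This is illustrative only, assuming a simplified health model; the class and field names are hypothetical and do not reflect the actual NAP or UAC APIs.

```python
# Illustrative sketch of health-based network admission, in the spirit of
# the NAP/UAC scenario described above. All names here are hypothetical,
# not any vendor's API.
from dataclasses import dataclass

@dataclass
class ClientHealth:
    patches_current: bool     # OS patches up to date
    antivirus_running: bool   # AV agent present and running
    firewall_enabled: bool    # host firewall on

def admission_decision(health: ClientHealth) -> str:
    """Return the network segment a connecting client should be placed on."""
    if (health.patches_current
            and health.antivirus_running
            and health.firewall_enabled):
        return "main"          # healthy client gets full network access
    return "remediation"       # quarantined until brought into compliance

print(admission_decision(ClientHealth(True, True, True)))   # main
print(admission_decision(ClientHealth(False, True, True)))  # remediation
```

In a real deployment the health statements come from agents on the client and the enforcement happens in the network gear; the point of the sketch is simply that the network-level decision consumes data from higher in the stack.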
The Guardian is reporting that "A three-week wave of massive cyber-attacks on the small Baltic country of Estonia, the first known incidence of such an assault on a state, is causing alarm across the western alliance, with Nato urgently examining the offensive and its implications." Yahoo is also carrying a similar story from Agence France-Presse (AFP). The accusation is that this is a state-run cyber attack originating from Russia. What's also interesting is that Estonia has been a member of NATO since March 2004. The articles point out that NATO does not currently define a "cyber attack" as a military action, so these attacks would not trigger Article V, which is basically the "an attack on one is an attack on all" doctrine. Leaving aside the politics of the Estonia/Russia dispute, which I know nothing about, the situation does raise interesting questions, such as whether a state-backed cyber attack should be considered a military event and how countries and alliances should respond in such a case.
In working with the customers I do, which include government agencies you would expect to be targets of this type of activity, it is clear that most people, and in some cases even policy makers, do not realize how much of this activity occurs every day in terms of organized network reconnaissance, penetration efforts, etc. My view is that over the last 3 years in particular, the issue of information security has become considerably more difficult and complicated as it has moved from the proverbial "teen in the basement" writing viruses and looking for fame to a much more dangerous environment, where the threats come from organized crime rings with theft/fraud as their goal and from governments and terrorist organizations actively looking for weaknesses in critical military or economic systems. These folks are not going to publish their exploits or use them in a large-scale, easily detected manner. That is the main reason it is so foolish when you hear users of the various operating systems saying "my OS is much more secure" or "there are no remotely exploitable vulnerabilities in X, Y, or Z". The bottom line is that it is impossible to know with certainty. All code has flaws, and over time new means of exploit are found that were not knowable previously. That is a key point. Even if you had the resources to exhaustively analyze a piece of code or a complete OS today, and even if you could find every single flaw with perfect knowledge of all existing computer science, there is still a high likelihood that within the useful life of that code new, previously unknown techniques will be developed to exploit it. This is why military and intelligence agencies put so much emphasis on defense in depth and other layering techniques. If lives are on the line, you don't assume that any component is completely secure, whether it is Vista, OS X, Linux, etc. In evaluating system components, those that themselves implement defense in depth should be given preference.
The Microsoft Security Response Center (MSRC) announced some changes to the Advanced Notification Service (ANS), a service anyone can subscribe to that provides notice, on the Thursday of the week before each month's Tuesday security bulletin releases, of the number, severity, and affected products for that month's security bulletins. The changes add detail that will be provided for each individual bulletin.
The changes are a good idea. The ANS as it is today is somewhat valuable in that it gives you some idea of what is coming on patch Tuesday, but really only enough information to make high-level staffing decisions, i.e., having ops staff primed for testing and deployment. With the additional information in the new service, you should be able to be better prepared, since you'll know more of the specifics of each bulletin, how to detect whether you are vulnerable, etc. It should help ops staff have a more complete test and deployment plan ready to roll by each Tuesday.
The MSRC also announced changes to the security bulletins themselves to make them more readable and quicker to scan for the important parts, like deciding applicability and finding direct links to the hotfix downloads. They've posted a sample of what the new bulletin format looks like here.
By now most readers of this blog will have heard that some of the planned Windows Server Virtualization (Viridian) features have been deferred. The details were posted here. Dan Kusnetzky over on ZDNet has a blog post with good analysis of the options. This was certainly a disappointment for those of us who are excited about the technology, and clearly some work needs to be done on the planning and expectation-setting process. The bottom line in a situation like this is that the technology is complicated; sometimes issues crop up that were not possible to anticipate and take longer than expected to solve, leaving you behind schedule. At that point, as both the original post and Kusnetzky's post point out, in software development you have three choices: reduce quality and testing to ship on time, push out the release date for the full product, or cut features and ship on schedule. You then need to decide which course to take and let your customers know as soon as possible. Given the state of the market, my view is that the team made the right decision.
The decision-making process for what to virtualize should revolve around the resources required by the workload to be virtualized (processor, RAM, I/O, etc.). Currently, Microsoft virtual machines are limited to a single processor core, just under 4 GB of RAM, and 4 NICs. Consequently, no single-server workload that requires more than this can be virtualized without changes such as scaling it out to multiple VMs. Windows Server Virtualization is being designed to remove those limitations as a top priority. There are additional priorities such as high availability and dynamic data center scenarios. The identification and prioritization of these scenarios is critical and customer-driven, as it should be.
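The sizing check described above can be sketched in a few lines. This is purely illustrative: the function name is made up, and the RAM figure is an assumption standing in for "just under 4 GB" (taken here as 3.6 GB).

```python
# Hypothetical helper illustrating the "does this workload fit in one VM?"
# check described above. The limits mirror the figures cited in the post
# (one processor core, just under 4 GB of RAM, four NICs); the exact RAM
# number (3.6 GB) is an assumption for illustration.
VM_LIMITS = {"cores": 1, "ram_gb": 3.6, "nics": 4}

def fits_current_vm(cores: int, ram_gb: float, nics: int) -> bool:
    """True if a single-server workload fits within one current VM."""
    return (cores <= VM_LIMITS["cores"]
            and ram_gb <= VM_LIMITS["ram_gb"]
            and nics <= VM_LIMITS["nics"])

print(fits_current_vm(1, 2.0, 2))   # True: light workload virtualizes as-is
print(fits_current_vm(4, 16.0, 2))  # False: scale out, or wait for WSV
```

Anything that fails this check today has to be scaled out across multiple VMs or left on physical hardware, which is exactly the limitation Windows Server Virtualization is meant to remove.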
Those priorities were planned to be accomplished in WSV via the following major features.
As you can see from the announcement, Live Migration and Hot Add are being deferred. That still leaves several major features (in addition to a lot of other features) in the initial release. The features that remain address the biggest limitation of our current offerings, namely the size of the single-server workload that can be virtualized. The team, in my view, is correctly evaluating the market, our offerings, competitors' offerings, and customer needs. I certainly wish all five features would be in the first release, but I would also rather have the first three on schedule than wait 3, 6, or more months to get any of them. Put simply, the cup is 3/5ths full...