Good news from the virtualization team: over on their blog is a new post about the import/export functionality in Hyper-V. R2 introduces major enhancements, described well on the team blog, that address several scenarios that were either difficult or impossible with Hyper-V RTM, such as exporting a VM and importing it to a different location, or importing the same exported VM multiple times.
This is the one area where I was disappointed with Hyper-V RTM, as it felt like a step back from Virtual Server rather than a step forward: in Virtual Server, moving VMs was a simple matter of copying the VHD and VMC files to a new location. While Hyper-V felt like a small step backwards here, the purpose was to build much more programmability into the entire import/export process, laying the foundation for the much more advanced approach being delivered now. These new features will be significant enablers for the backup, storage, and lab management suites being developed by Microsoft and others.
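To give a rough feel for that programmability, here is a minimal sketch that scripts a VM export through the Hyper-V WMI provider from Python. It assumes a Windows host, the third-party wmi package, the v1 root\virtualization namespace and its ExportVirtualSystem method, and a VM named "TestVM" (a placeholder); treat it as a sketch of the approach, not a definitive implementation.

```python
# Sketch: scripting a Hyper-V VM export through the WMI provider.
# Assumes a Windows host, the third-party "wmi" package, the v1
# root\virtualization namespace, and a VM named "TestVM" (placeholder).
import wmi

conn = wmi.WMI(namespace=r"root\virtualization")

# Look up the VM and the Hyper-V management service.
vm = conn.Msvm_ComputerSystem(ElementName="TestVM")[0]
svc = conn.Msvm_VirtualSystemManagementService()[0]

# Kick off the export; CopyVmState=True copies the VHDs and saved state.
# The call returns the method's out-parameters: a job reference and a
# return code (0 = completed, 4096 = job started asynchronously).
result = svc.ExportVirtualSystem(ComputerSystem=vm.path_(),
                                 CopyVmState=True,
                                 ExportDirectory=r"D:\Exports")
print(result)
```

The same management service exposes the corresponding import method, which is what makes the export/import round trip scriptable end to end.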
Today Microsoft made available the Microsoft SDL Process Template. This was announced over on the SDL Blog. Roger has some comments on it as well.
The SDL Process Template is a free, downloadable template for Visual Studio Team System that integrates the SDL directly into a customer's software development environment, putting SDL requirements and guidance into the tools development teams already use.
Continuing the foray into social networking I started a couple of weeks ago, today's topic is social bookmarking. For those who are even later to the game than I am, social bookmarking is basically a site or service that lets you tag and store your website bookmarks in a central location accessible from any browser, while also publishing your bookmarks for others to see. Advanced services let you publish your bookmarks as an RSS feed so that others can subscribe and be notified when you bookmark something. Delicious.com is the best known of these and provides even more features, such as the ability to subscribe to feeds based on tags so that you get a stream of all new bookmarks using one or more tags. For example, you could create a subscription for the tag "Hyper-V" and see a list of all bookmarks created where someone added the Hyper-V tag. You can also use these services to see and follow what others are bookmarking, a good way to learn what influential people in your area find interesting.
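To illustrate tag-based subscriptions, the sketch below polls a tag feed with Python's feedparser library and prints each new bookmark. The feed URL follows Delicious's historical tag-feed pattern and should be treated as an assumption; substitute whatever feed URL your bookmarking service actually exposes.

```python
# Sketch: following a social-bookmarking tag feed (e.g. "Hyper-V").
# Requires the third-party feedparser package (pip install feedparser).
# The URL follows Delicious's historical tag-feed pattern and is an
# assumption; substitute your service's actual feed URL.
import feedparser

feed = feedparser.parse("http://feeds.delicious.com/v2/rss/tag/hyper-v")

# Each entry is one bookmark someone tagged "hyper-v".
for entry in feed.entries:
    print(entry.title, "->", entry.link)
```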
Lesser known than Delicious, but utilized by nearly two million people, are the TechNet and MSDN social bookmarking sites. Like nearly all Microsoft web properties, they received a burst of coverage at launch and minimal follow-up since (hopefully that changes this summer…), so unless you caught the original announcements you may not be aware of these sites.
For a thorough introduction and steps to get started, check out this post over on Technically Speaking.
For my purposes, I will be using both Delicious and TechNet for social bookmarking. I'll keep the TechNet list focused on deeper, more informational bookmarks on technical topics. As with everything else I'm doing online, I will be bringing these bookmark feeds into FriendFeed, which I'm using as a hub for all of my online activities.
For the bookmarks only, you can find me at these locations (RSS feeds available there as well):
TechNet Social Bookmarks: http://social.technet.microsoft.com/Profile/en-US/?user=davidzi
Between MMS and TechEd there have been a lot of announcements on the virtualization and cloud computing front. First, over on the Virtualization Team Blog, Jeff provided the announcement and details around some new capabilities coming in Hyper-V with Windows Server 2008 R2:
64 logical processor support. This is a 4x improvement over Hyper-V R1 (which supported 16 logical processors) and means that Hyper-V can take advantage of larger scale-up systems with a greater amount of compute resources.
Support for up to 384 concurrently running virtual machines and 512 virtual processors per server. The maximum number of concurrently running virtual machines increases to 384 per server, and the maximum number of virtual processors to 512, for the highest virtual machine density on the market (see the quick arithmetic below).
Processor compatibility. This allows you to move a virtual machine up and down multiple processor generations from the same vendor. It does not mean you can live migrate between Intel and AMD nodes, just between different processor generations from a single vendor.
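To put those density numbers in perspective, the quick arithmetic below derives the consolidation ratios implied by the announced limits; the limits come from the announcement, while the ratios are simply division.

```python
# Headline Hyper-V R2 limits from the announcement above.
logical_processors = 64   # max logical processors per host (4x the 16 in R1)
max_running_vms = 384     # max concurrently running VMs per host
max_virtual_procs = 512   # max virtual processors per host

# Derived consolidation ratios (illustrative arithmetic only).
print(max_virtual_procs / logical_processors)  # 8.0 virtual processors per logical processor
print(max_running_vms / logical_processors)    # 6.0 running VMs per logical processor
```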
Not to be outdone, the VMM team announced a number of new features that will be in their Release Candidate coming out in a few weeks.
Combined, these new features from both teams enable key scenarios at both the entry level and the high end of the spectrum. One major advantage of our stack is that it is very approachable at the entry level, since it leverages so much of what your administrators already know, and, beginning with the Microsoft Hyper-V Server 2008 R2 SKU, all of the high-end features (clustering, live migration, etc.) are available for free. Within a couple of hours, a Windows admin can become proficient with the basics of Hyper-V and be up and running (for free!); within a few days at most, basic clustering, HA, and quick/live migration can be implemented. At the high end, very advanced architectures can be built, including VMM, OpsMgr, and deep SAN integration. This is where our technical guidance, solution accelerators, and service offerings come into play.
For an example of how this stack is being leveraged by commercial providers, as well as a use case for enterprises wishing to use the cloud as reserve capacity, check out the video below demoing a future version of VMM and how it will seamlessly integrate private and public cloud capacity:
Brian Madden has an excellent post up today called The hidden costs of VDI. I've been working nearly full time for the last two months helping to put together a Microsoft Services offering around desktop virtualization in general, and VDI in particular, so I have spent a lot of time looking into both the technical and business considerations that must be taken into account. I'd summarize his post as follows:
As a well-known fan of and expert on Server Based Computing (SBC), i.e. Terminal Services or Citrix Presentation Server/XenApp, Brian prefaced the article by saying that he likes VDI "where it makes sense". He correctly points out that nearly all vendors and TCO models show that Server Based Computing still provides the lowest TCO due to its high user density, but that there are limitations which make other approaches such as VDI relevant.
That is where I'll jump in with my thoughts, because I completely agree with those statements, and they have been the foundation of the offering I have been working on. It starts with the notion of flexible desktop computing and desktop optimization that Microsoft has been talking about for some time now; an overview of this approach is presented in this whitepaper. To summarize, there are a variety of ways that a desktop computing environment can be delivered to users, ranging from traditional desktops, to server based computing, to VDI, with a multitude of variations in between enabled by adding virtualization at the layers illustrated below:
Rather than selecting a one-size-fits-all solution, virtualization gives architects a new, more flexible set of choices that can be combined to optimize the cost and user experience of the desktop infrastructure. The following four steps lead to an optimized solution:
Define User Types: Analyze your user base and define categories such as Mobile Workers, Information Workers, Task Workers, etc. and the percent distribution of users among them. The requirements of these user types will be utilized to select the appropriate mix of enabling technologies.
Define Desktop Architecture Patterns: Each architecture pattern should consist of a device type (thin client, PC, etc.) and a choice of delivery and virtualization technologies at each layer (for example, presentation virtualization, application virtualization, or a locally installed desktop).
For each pattern, determine which user types it can be applied to. For example, for mobile or potentially disconnected users, presentation virtualization alone would not be applicable, as it requires a network connection. Power users may require a full workstation environment for resource-intensive applications but may be able to leverage application virtualization for other apps. These are just a few examples of how different user groups have different requirements.
Determine TCO for each Architecture Pattern: Use a recognized TCO model to determine the TCO for each pattern. Minor adjustments to these models can be made to account for specific technology differences, but most include TCO values for PCs, PCs with virtualized apps, VDI, and TS/Citrix thin client scenarios. Be wary of vendor-provided TCO models; to Brian's points, be sure to gain a full understanding of the chosen TCO model and what it does and does not include. Consistent application of the model across the different architecture patterns is critical for relevant comparisons.
Model Desktop Optimization Scenarios: With the above data, appropriate architecture patterns can be selected for each user type by choosing the lowest-TCO architecture pattern that still meets user requirements. By varying the user distribution and selected architecture patterns, an optimized mix can be determined (a simple worked sketch follows below). It is tempting to simply choose the lowest-TCO architecture pattern for all users, but this can be dangerous: it will typically impact your high-value power users the most if their requirements are not accounted for.
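To make the modeling step concrete, here is a minimal sketch of the weighted-TCO calculation the four steps describe. Every user-type share, pattern assignment, and dollar figure below is a hypothetical placeholder, not a value from any published TCO model.

```python
# Sketch: weighted average TCO across user types and architecture patterns.
# All figures and assignments below are hypothetical placeholders.

# Annual per-user TCO for each architecture pattern (step 3, hypothetical).
pattern_tco = {
    "traditional PC": 1200,
    "PC + app virtualization": 1050,
    "VDI": 950,
    "server based computing": 700,
}

# Percent distribution of users, plus the lowest-TCO pattern that still
# meets each user type's requirements (steps 1 and 2).
user_types = {
    "Mobile Workers":      (0.20, "PC + app virtualization"),  # need offline use
    "Information Workers": (0.50, "VDI"),
    "Task Workers":        (0.30, "server based computing"),
}

# Weighted average TCO for this optimization scenario (step 4).
weighted_tco = sum(share * pattern_tco[pattern]
                   for share, pattern in user_types.values())
print(f"Average per-user TCO: ${weighted_tco:,.2f}")
```

Rerunning the calculation with different user distributions and pattern assignments is exactly the "varying" described above; the optimized mix is the lowest weighted average that still satisfies every user type's requirements.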
A one-size-fits-all approach would result in either a large number of PCs if not using virtualization, a large number of servers if virtualizing everything, or failure to meet power user needs if using only server based computing. An optimized solution is one which utilizes the right mix of technologies to provide the required functionality for each user type at the lowest average TCO. Combined with a unified management system that handles physical and virtual resources across devices, operating systems, and applications, substantial cost savings can be realized.
As I mentioned at the top, many of these concepts, along with very detailed architecture and implementation guidance, are part of the Microsoft Services Core IO offerings. For the last two years, in addition to my customer work, I have been deeply involved in the creation of the Server Virtualization with Advanced Management (SVAM) offering. The work I mentioned above around VDI architecture will complement that and be available later this summer. Finally, specific to desktop imaging, deployment, and optimization, there is also the Desktop Optimization using Windows Vista and 2007 Microsoft Office System (DOVO) offering. Taken together with the underlying product suites, these illustrate Microsoft's "desktop to datacenter" solutions and how to plan, design, and implement them.