There’s a good technical post over on Chris Adam’s blog about how to dynamically provision customized virtual machines using System Center Virtual Machine Manager and unattend.xml. The unattend.xml file is used in combination with a sysprep’d image and applies customizations (things like the computer name and installed roles) that are specified in the XML file. Chris’s post explains how easily this can be done in VMM.
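As a rough illustration, a minimal unattend.xml might look something like the sketch below. The component attributes follow the standard Windows Setup answer-file schema, but the computer name, organization, and processor architecture are placeholder values for illustration, not anything from Chris’s post:

```xml
<?xml version="1.0" encoding="utf-8"?>
<unattend xmlns="urn:schemas-microsoft-com:unattend">
  <!-- The "specialize" pass runs when the sysprep'd image first boots -->
  <settings pass="specialize">
    <component name="Microsoft-Windows-Shell-Setup"
               processorArchitecture="amd64"
               publicKeyToken="31bf3856ad364e35"
               language="neutral" versionScope="nonSxS">
      <!-- Placeholder values; in practice these are set per-VM at deploy time -->
      <ComputerName>WEB01</ComputerName>
      <RegisteredOrganization>Contoso</RegisteredOrganization>
    </component>
  </settings>
</unattend>
```

The interesting part is that a single sysprep’d image plus a per-VM answer file like this is all the “customization engine” you need; VMM just automates getting the right file to the right VM.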
This post was timely as I have been working on some unattended installations and other automation for a customer. With all the focus on the back and forth with competitors at the virtualization layer, it almost seems like the workload and configuration inside the VM is “getting no respect”.
In any event, the unattended installation realm can be intimidating at first: there are multiple ways of accomplishing most tasks, and there is an enormous number of things in Windows that can be customized. Microsoft makes a large number of resources available, such as the Windows Automated Installation Kit and the Microsoft Deployment Toolkit, and beta updates to these for Windows 7 and Windows Server 2008 R2 can be found with a quick search on Bing.com.
For a very detailed treatment of all of these topics, check out the Deploying Vista series over on WindowsNetworking.com. Most of the content applies to Windows Server 2008 as well. This article on TechNet is a quick, direct step-by-step guide to a basic automated installation. Between the info Chris provided and some of these resources, you’ll be well on your way to dynamic VM provisioning.
Here’s an interesting and slightly amusing mock debate between Brandon Shell and Jason Conger on Citrix’s Workflow Studio vs PowerShell for automation. If you aren’t familiar with it, here is the description of what Workflow Studio is:
“Citrix Workflow Studio™ is an infrastructure process automation platform that enables you to transform your datacenter into a dynamic delivery center.”
“Built on top of Windows PowerShell™ and Windows Workflow Foundation, Workflow Studio provides an easy-to-use, graphical interface for workflow composition that virtually eliminates scripting. Workflow Studio acts as the glue across the IT infrastructure allowing administrators to easily tie technology components together via workflows.”
The debate is amusing because in reality both guys understand that each has its place, one is a foundational component of the other, and the combination of the two can be extremely powerful. The core of the “debate” is one’s definition of automation: execution of atomic tasks with as little effort/code as possible (basic PowerShell) or event/workflow driven execution of multiple tasks with associated logic (advanced PowerShell and/or Workflow Studio). The first is an enabler for the latter.
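To make that distinction concrete, here’s a hedged sketch in PowerShell. The cmdlet names are from the VMM 2008 snap-in (Get-VMMServer, Get-VM, Start-VM), but the server name, VM name, and the status value checked are placeholder assumptions worth verifying against your environment; the second half is only meant to suggest the “multiple tasks plus logic” end of the spectrum, which is where a tool like Workflow Studio earns its keep:

```powershell
# Atomic task: one short pipeline, minimal code -- "basic PowerShell" automation.
# Server and VM names below are placeholders.
Get-VMMServer -ComputerName "vmm01.contoso.com" | Out-Null
Get-VM -Name "WEB01" | Start-VM

# Workflow-driven: several tasks tied together with logic -- the kind of
# orchestration Workflow Studio composes graphically on top of PowerShell.
foreach ($vm in Get-VM | Where-Object { $_.Status -eq "Stopped" }) {
    Start-VM -VM $vm
    Write-Host "Started $($vm.Name)"
}
```

The first form is the enabler; the second form is where event handling, error logic, and cross-product glue start to matter.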
Since Exchange 2007 and Virtual Machine Manager 2007 committed entirely to PowerShell, and given the PowerShell team’s continued focus on simplicity and consistency, it has been my opinion that this was the tipping point that would enable real automation and orchestration of IT infrastructures. Now, with partners (Citrix) and competitors (VMware) alike building on and/or leveraging PowerShell, we’re going to see significant advancements in the state of the art this year.
Over on the Solution Accelerators Security Blog is a post and link to the IT Infrastructure Threat Modeling Guide.
From the guide:
The IT Infrastructure Threat Modeling Guide provides an easy-to-understand method for developing threat models that can help prioritize investments in IT infrastructure security. This guide describes and considers the extensive methodology that exists for Microsoft Security Development Lifecycle (SDL) threat modeling and uses it to establish a threat modeling process for IT infrastructure.
This is one example of what I think will be a growing trend where the lines between infrastructure and development will be blurred. This is a positive as there are a substantial number of best practices in both disciplines that can be shared. A structured approach to threat modeling is a prime example.
The Hypervisor Functional Specification v2.0 for Windows Server 2008 R2 has been posted to the web and can be found here. The original v1.0 version for Windows Server 2008 RTM was described in this post.
Here is the overview of the v2.0 version:
This document is the top-level functional specification (TLFS) of the second-generation Microsoft hypervisor. It specifies the externally visible behavior of the Microsoft hypervisor, a component of Microsoft Windows Server 2008 R2 Windows Server virtualization. The document assumes familiarity with the goals of the project and the high-level hypervisor architecture. This specification is provided under the Microsoft Open Specification Promise. For further details on the Microsoft Open Specification Promise, please refer to: http://www.microsoft.com/interop/osp/default.mspx. The specifications can be used to understand the functions of the hypervisor and implement a compatible solution.
I was able to get a small commentary on desktop virtualization and VDI published in the Microsoft Architecture Journal. It’s based on the work I’ve been doing around creating a VDI offering to augment Microsoft’s server virtualization offerings. For a slightly expanded version of my thoughts on this topic, see this post. As with server virtualization, desktop virtualization makes sense in a lot of cases but not all. I outline a simple framework for choosing the optimum mix of solutions for your user base.