In the traditional desktop computing model, as shown in Fig. 1, the operating system, applications, and user data and settings are all bound to a single computer. A computer is typically acquired either with the OS and some applications pre-installed, or by applying a hard disk image containing the target OS and selected applications to the hardware. Once the computer is deployed, a user can log in to the system, customize the environment, run applications, change settings, and create data and files. This model is straightforward and easy to understand. With respect to desktop deployment, it means that the OS, application execution/presentation, and user data are all self-contained within a single device. The model has the advantage of simplicity because it leverages well-understood technologies that ship with Windows. In addition, because a PC configured this way is completely self-sufficient, the model is well suited to mobile use. However, the tight binding between the layers is not desirable in every scenario, and the model has its limitations.
The tight coupling between the layers provides efficiency, but it also introduces dependencies, and hence complexity. That complexity makes it difficult for users to move applications, settings, and files from one PC to another when upgrading, or when a laptop is lost or stolen. Multiplied across thousands of desktops and laptops, as in many enterprises, managing these machines becomes a major concern. As the mobile workforce and the number of branch offices continue to grow with the proliferation of the Internet and advances in networking technology, the work environments and data access patterns of information workers have become dynamic and are rapidly evolving. The long-term maintenance of computing resources based on the traditional model is becoming cost-prohibitive for many companies, while impairing IT's ability to quickly prepare for or respond to business opportunities.
Desktop virtualization is the process of separating, or more precisely isolating, these individual components and managing each one separately. Fig. 2 shows that by isolating the components, we can abstract and virtualize the computing resources. Each layer can then reference a resource in another layer across the abstraction or virtualization boundary, without knowing the specifics of how the referenced resource is configured within its host layer. Overall, this reduces complexity and improves PC and application management.
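The idea of referencing a resource across an abstraction boundary without knowing its host-layer specifics can be illustrated with a minimal sketch. All class and method names below (ProfileStore, render_desktop, and so on) are hypothetical, chosen only to model the layering described above; they are not part of any Microsoft product or API.

```python
from abc import ABC, abstractmethod


class ProfileStore(ABC):
    """Abstraction boundary: the application layer sees only this
    interface, never the specifics of where the profile lives."""

    @abstractmethod
    def load_setting(self, key: str) -> str: ...


class LocalProfileStore(ProfileStore):
    """User settings bound to the local machine (traditional model)."""

    def __init__(self) -> None:
        self._settings = {"wallpaper": "default.jpg"}

    def load_setting(self, key: str) -> str:
        return self._settings[key]


class RoamingProfileStore(ProfileStore):
    """User settings hosted on a server and delivered over the network."""

    def __init__(self, server_url: str) -> None:
        self.server_url = server_url
        # In a real system this would be fetched from server_url;
        # here it is a stand-in cache for illustration.
        self._cache = {"wallpaper": "corporate.jpg"}

    def load_setting(self, key: str) -> str:
        return self._cache[key]


def render_desktop(profile: ProfileStore) -> str:
    # The application layer references the user-settings layer only
    # through the abstraction boundary; swapping the host layer
    # (local vs. roaming) requires no change here.
    return f"wallpaper={profile.load_setting('wallpaper')}"
```

Because `render_desktop` depends only on the `ProfileStore` interface, the user-settings layer can be relocated from the local disk to a server without touching the application layer, which is precisely the decoupling benefit the virtualized model provides.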
When it comes to virtualization, not all solutions are equal. Microsoft has developed a number of virtualization solutions to address specific issues, as depicted in Fig. 3. A solution that offers deployment flexibility may nonetheless not be cost-effective in a particular scenario. It is crucial to recognize this and to architect a virtualization solution accordingly in order to produce the maximal business benefit.