Virtualization has received a great deal of attention from the press during the past couple of years. We have seen many products promoting storage virtualization, network virtualization and computational virtualization. Furthermore, with the promotion of service-oriented architecture for application solutions, we are seeing even more discussion of service virtualization.
While many vendors (including Microsoft) have presented a multitude of tools to virtualize different aspects of a data center, many data center teams are quickly asking whether virtualization tools really optimize costs for a data center.
The promise of Virtualization
Reducing physical components (reducing physical component real estate)
Virtualization specifically promises data centers the opportunity to run fewer physical storage, network and computational components. While this rarely decreases software cost, it can usually reduce the initial real estate cost of hardware. However, hardware vendors usually quote significantly larger, more advanced (and more expensive) machines when consolidating environments, so one must weigh the cost of a larger number of smaller machines against that of fewer, larger machines.
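The trade-off above can be made concrete with a back-of-the-envelope comparison. All prices and server counts below are hypothetical placeholders, not vendor pricing:

```python
# Hypothetical acquisition-cost comparison: many small commodity
# servers vs. fewer large consolidated hosts. All numbers are
# illustrative placeholders only.

def total_hardware_cost(unit_price, unit_count):
    """Initial acquisition cost for a fleet of identical servers."""
    return unit_price * unit_count

# Option A: 40 small commodity servers.
small_fleet = total_hardware_cost(unit_price=5_000, unit_count=40)

# Option B: 4 large hosts, each consolidating ~10 workloads, but
# priced at a premium for capacity, redundancy and management features.
large_fleet = total_hardware_cost(unit_price=60_000, unit_count=4)

print(f"Small-server fleet:    ${small_fleet:,}")               # $200,000
print(f"Consolidated fleet:    ${large_fleet:,}")               # $240,000
print(f"Consolidation premium: ${large_fleet - small_fleet:,}") # $40,000
```

Even before software licensing and support contracts, the consolidated option can cost more up front; the real estate savings must outweigh that premium.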
If data center real estate is one of the most important concerns, then virtualization might be considered for consolidation. However, it should be viewed as one of many consolidation options, each of which should be reviewed carefully.
The risk of Virtualization
A Complexity Quagmire (increasing operational cost)
While designing a virtualized environment might look reasonably easy on paper, it can become quite complex in real-world operations. Although we consolidate operating systems onto one server, we still have the same number of servers and networks to manage: every virtualized server must still be instrumented, networked, monitored, provisioned, and patched. Moreover, instead of each system managing its own memory, it now shares memory with other processes and virtualized systems. Since most virtualized environments share a common physical memory space, address corruption can have cascading effects on other virtual systems; instead of crashing a server, one can crash a data center. This becomes worse when some vendors place the virtual system address space in privileged/kernel mode, where it shares memory with host operating system services. A fault there can quickly corrupt transactions and crash entire data center environments, including virtualized networks and storage systems.
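The point that consolidation does not shrink the management surface can be sketched with a simple inventory count. The numbers below are hypothetical:

```python
# Hypothetical inventory before and after consolidating 30 physical
# servers onto 3 virtualization hosts. Every guest OS instance still
# needs instrumentation, monitoring, provisioning and patching, so the
# managed-endpoint count grows rather than shrinks.

before = {"physical_servers": 30, "guest_os_instances": 0}
after = {"physical_servers": 3, "guest_os_instances": 30}

def managed_endpoints(inventory):
    """Each physical host and each guest OS is a separately managed,
    patched and monitored endpoint."""
    return inventory["physical_servers"] + inventory["guest_os_instances"]

print(managed_endpoints(before))  # 30
print(managed_endpoints(after))   # 33 -- more endpoints than before
```

The physical footprint drops by a factor of ten, but the number of systems the operations team must care for actually increases, because the hypervisor hosts are added on top of the existing operating system instances.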
Brittle hardware architecture (hardware acquisition and support cost)
Large virtualized production environments are often deployed on expensive, complex hardware architectures. This makes the initial acquisition cost of a few complex systems higher than that of many smaller, less complex servers. Because specialized, hardware-focused management and monitoring software is often required to keep such a system operational, operational support costs rise, and more expensive specialized skills are needed to run it.
Software Cost might be higher (software acquisition and operational cost)
Specialized hardware management and monitoring software is often needed for these complex production server environments, which raises both software acquisition and ongoing operational costs.
What is needed?
Common taxonomy and modeling environment
In order for virtualization to be designed efficiently, architects must have a taxonomy and modeling design system that ensures consistency and predictability across tiers, layers and services. Furthermore, a consistent modeling environment enables consistent business value mappings across services, reducing acquisition costs and promoting operational standards in a uniform fashion.
A Common Management Environment
In order for virtualized environments to operate effectively, there must be a common management, provisioning, monitoring and patch management system that works across layers and tiers (as well as across storage, network and computational areas). This keeps operational cost to a minimum and supports an organic knowledge lifecycle, ensuring that best practices and valuable insights are captured and capitalized on by development and architecture teams.
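A minimal sketch of such a cross-tier inventory might look like the following; every system name, tier and patch level here is hypothetical:

```python
# Minimal sketch of a common management inventory spanning compute,
# network and storage tiers, so that one patch/monitoring process
# covers every layer. All names and versions are hypothetical.

inventory = [
    {"name": "vmhost-01",   "tier": "compute", "patch_level": "2024.03"},
    {"name": "guest-db-01", "tier": "compute", "patch_level": "2024.01"},
    {"name": "san-array-01", "tier": "storage", "patch_level": "2024.03"},
    {"name": "core-sw-01",  "tier": "network", "patch_level": "2023.11"},
]

def out_of_date(inventory, baseline):
    """Return systems below the common patch baseline, regardless of tier."""
    return [s["name"] for s in inventory if s["patch_level"] < baseline]

print(out_of_date(inventory, "2024.03"))  # ['guest-db-01', 'core-sw-01']
```

The value of the common system is that one query answers the patch-status question for guests, hosts, storage and network devices alike, instead of four separate per-tier tools.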
A Common Process
In order for virtualized environments to operate more predictably without incrementally increasing support costs over time, there must be a common, effective process environment that everyone follows and supports. This is especially true for configuration control and change control processes. For companies that begin to consolidate complexity (but not necessarily reduce it), it is critical that effective processes are respected and optimized within the data center environment.
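One way to picture why change control matters more after consolidation is as a small state machine: on a shared virtualization host, a single unreviewed change can affect many workloads at once, so every change must pass through review before deployment. The workflow states below are a hypothetical sketch, not a specific product's process:

```python
# Hypothetical change-control workflow for a shared virtualization
# host. A change request may only advance along the approved path;
# skipping review is rejected.

ALLOWED = {
    "submitted": {"reviewed"},
    "reviewed": {"approved", "rejected"},
    "approved": {"deployed"},
}

def advance(state, new_state):
    """Move a change request forward only along an allowed transition."""
    if new_state not in ALLOWED.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

state = "submitted"
for step in ("reviewed", "approved", "deployed"):
    state = advance(state, step)
print(state)  # deployed
```

Attempting to jump straight from "submitted" to "deployed" raises an error, which is exactly the discipline a consolidated environment needs: the process, not individual habit, gates what reaches the shared hosts.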
There is no magic product that will solve all virtualization challenges. One must look at the entire environment and develop good virtualization architecture and engineering criteria for the enterprise data center, giving data center teams a good chance of keeping operational complexity and cost to a minimum.