Virtualization has received a great deal of attention from the press during the past couple of years. We have seen many products promoting storage virtualization, network virtualization, and computational virtualization. Furthermore, with the promotion of service-oriented architecture for application solutions, we are seeing even more discussion of service virtualization.

 

While many vendors (including Microsoft) have presented a multitude of tools to virtualize different aspects of a data center, many data center teams are asking whether virtualization tools really optimize costs for a data center.

 

The Promise of Virtualization

 

Reducing physical components (reducing hardware real estate)

Virtualization promises data centers the opportunity to run fewer physical storage, network, and computational components. While this rarely decreases software cost, it can usually help reduce the initial real estate cost of hardware. However, hardware vendors usually size significantly larger, more advanced (and more expensive) machines when consolidating environments, so one must compare the cost of a larger number of smaller machines with that of fewer larger machines.
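The comparison above can be sketched as a back-of-the-envelope calculation. All prices, counts, and rack-space charges below are illustrative assumptions, not vendor figures; the point is only that acquisition cost and real estate cost pull in different directions.

```python
# Hypothetical comparison of hardware real estate costs:
# many small servers vs. fewer, larger consolidation hosts.
# Every number here is an assumption for illustration.

def total_cost(unit_price, count, rack_units_each, cost_per_rack_unit_year, years):
    """Acquisition cost plus a simple rack-space (real estate) charge."""
    acquisition = unit_price * count
    real_estate = rack_units_each * count * cost_per_rack_unit_year * years
    return acquisition + real_estate

# Option A: 40 small commodity servers (2U each).
small = total_cost(unit_price=5_000, count=40, rack_units_each=2,
                   cost_per_rack_unit_year=300, years=3)

# Option B: 4 larger consolidation hosts (8U each), each far more expensive.
large = total_cost(unit_price=60_000, count=4, rack_units_each=8,
                   cost_per_rack_unit_year=300, years=3)

print(f"Many small servers: ${small:,}")   # $272,000
print(f"Few large hosts:    ${large:,}")   # $268,800
```

With these assumed numbers the two options land within a few percent of each other, which is exactly why the comparison has to be run with real quotes rather than decided by rule of thumb.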

 

If data center real estate is one of the most important concerns, then virtualization might be considered for consolidation. However, it should be viewed as one of many consolidation options. Some questions should be reviewed carefully:

 

  1. How much operational complexity and cost will the option add to or remove from the data center?

  2. How architecturally brittle is the design?  How easily can it be adapted to new business needs, extended, and/or architecturally decomposed?

 

Consider three consolidation options:

  1. Consolidating applications with like performance, management, and service characteristics
    1. Provides the most efficient use of physical resources (smaller machine needs)
    2. Has historically required the most initial time and money.  However, this is changing quickly as .NET services allow decomposed services to reduce solution complexity and consolidate common application services across applications
  2. Application isolation techniques on a common operating system
    1. Use tools like WSRM (Windows System Resource Manager) to place like applications and services on a single operating system
    2. Quick time to market; box costs are higher than application-consolidation box costs, but lower than virtualization box costs
    3. Management, network, backup/recovery, and DR strategies must be physically and logically identical
    4. Developing process/automation tools that reallocate resources to a specific application or service for specific business needs is presently challenging for everyone
    5. Data/transaction corruption risk can be higher
  3. Application server virtualization images on a common physical machine
    1. Moderate time to market: not as fast as application isolation techniques, but faster than application consolidation techniques
    2. Virtual network and storage environment relationships are more complex
      1. Virtual subnet management issues and storage repository issues
    3. Management, provisioning, and patch-management issues are usually more complex
    4. Developing process/automation tools that reallocate resources to a specific application or service for specific business needs is presently challenging for everyone
    5. Data/transaction corruption risk can be higher
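One way to make a comparison like the one above actionable is a simple weighted-criteria score. The scores (1 = poor, 5 = good) and weights below are assumptions invented for this example; a real assessment would substitute your own criteria and data.

```python
# Illustrative weighted scoring of the three consolidation options.
# All scores and weights are hypothetical, for the sake of example.

options = {
    "application consolidation": {
        "time_to_market": 2, "box_cost": 5,
        "operational_complexity": 4, "corruption_risk": 4},
    "application isolation (e.g. WSRM)": {
        "time_to_market": 5, "box_cost": 4,
        "operational_complexity": 3, "corruption_risk": 2},
    "server virtualization": {
        "time_to_market": 4, "box_cost": 2,
        "operational_complexity": 2, "corruption_risk": 2},
}

# Assumed weights reflecting one organization's priorities; they must sum to 1.
weights = {"time_to_market": 0.2, "box_cost": 0.3,
           "operational_complexity": 0.3, "corruption_risk": 0.2}

for name, scores in options.items():
    total = sum(weights[c] * s for c, s in scores.items())
    print(f"{name}: {total:.2f}")
```

The value of the exercise is less the final ranking than the forced conversation about which criteria matter and how much.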

 

 

 

The Risk of Virtualization

 

A Complexity Quagmire (increasing operational cost)

While designing a virtualized environment might look reasonably easy on paper, it can become quite complex in a real-world operational environment. The problem is operations. While we consolidate operating systems onto one server, we still have the same number of networks and servers to manage. All of these virtualized servers must be instrumented, networked, monitored, provisioned, and patched. And instead of each virtualized system managing its own memory, it now shares memory with other processes and virtualized systems. Since most virtualized environments share a common physical memory space, address corruption can have cascading effects on other virtual systems. Instead of crashing a server, one can crash a data center. This becomes worse when some vendors place the virtual system address space in privileged/kernel mode (sharing memory with the host operating system services). Such corruption can quickly corrupt transactions and crash entire data center environments, virtualized networks, and storage systems.
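The blast-radius part of the argument above can be put in numbers. The service count and consolidation ratios here are illustrative assumptions, not measurements; the point is that per-incident impact scales with packing density.

```python
# Back-of-the-envelope sketch: the denser the consolidation, the more
# services one host incident takes down at once. The service count and
# guests-per-host ratios are assumptions for illustration.

total_services = 120  # assumed number of services in the data center

for guests_per_host in (1, 10, 30):
    hosts = total_services // guests_per_host
    # A crash (or shared-memory corruption) on one host takes down every
    # guest running on it, so per-incident impact equals the density.
    print(f"{guests_per_host:>2} guests/host -> {hosts:>3} hosts, "
          f"one host incident takes down {guests_per_host} services")
```

The expected total downtime across a year may be similar either way; what changes is how correlated the failures are, which is precisely the "crash a data center, not a server" risk.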

 

Brittle hardware architecture (hardware acquisition and support cost)

Large virtualized production environments are often deployed on expensive, larger, more complex hardware architectures. This makes the initial acquisition cost of a few complex systems higher than that of many smaller, less complex servers. Because specialized, hardware-focused management and monitoring software is often required to keep these systems operational, they can increase operational support cost and demand more specialized, more expensive operational skills.

 

Software Cost might be higher (software acquisition and operational cost)

Specialized hardware management and monitoring software is often needed for complex, production-grade server environments, which raises software acquisition and operational costs.

 

 

What is needed?

 

Common taxonomy and modeling environment

In order for virtualization to be designed efficiently, architects must have a taxonomy and modeling design system to ensure consistency and predictability across tiers, layers, and services. Furthermore, a consistent modeling environment enables consistent business value mappings across services. This reduces acquisition costs and promotes operational standards in a uniform fashion.
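As a minimal sketch of what such a taxonomy might look like in practice: every service is described with the same vocabulary of tiers, layers, and business-value mappings, so designs can be compared, queried, and costed consistently. The tier and layer names and the example services below are hypothetical.

```python
# Minimal sketch of a common service taxonomy. The tier/layer vocabulary
# and the example services are assumptions for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    name: str
    tier: str            # e.g. "web", "application", "data" (assumed taxonomy)
    layer: str           # e.g. "presentation", "business", "persistence"
    business_value: str  # mapping used for cost and priority decisions

catalog = [
    Service("order-entry", tier="application", layer="business",
            business_value="revenue"),
    Service("order-db", tier="data", layer="persistence",
            business_value="revenue"),
    Service("intranet-portal", tier="web", layer="presentation",
            business_value="internal"),
]

# With a uniform model, questions like "what runs in the data tier?"
# become simple queries instead of archaeology:
data_tier = [s.name for s in catalog if s.tier == "data"]
print(data_tier)  # ['order-db']
```

The specific fields matter less than the discipline: once every team describes services in the same terms, consolidation candidates and business-value mappings fall out of the model rather than out of tribal knowledge.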

 

Common management

In order for virtualized environments to operate effectively, there must be a common management, provisioning, monitoring, and patch management system that can manage across layers and tiers (as well as across the storage, network, and computational areas). This keeps operational cost to a minimum and promotes an organic knowledge lifecycle so that best practices and valuable insights are captured and capitalized on by development and architecture teams.

 

A Common Process

In order for virtualized environments to operate more predictably without incrementally increasing support costs over time, there must be a common, effective process environment that everyone follows and supports. This is especially true for configuration control and change control processes. For companies that begin to consolidate complexity (but not necessarily reduce it), it is critical that effective processes are respected and optimized within the data center environment.

 

 

There is no magic product that will solve every virtualization challenge. One must look at the entire data center environment and develop sound virtualization architecture and engineering criteria for the enterprise data center, to give data center teams a good chance of keeping operational complexity and cost to a minimum.