Server virtualization for commodity systems has become a pervasive answer in the enterprise datacenter for good reason: it is an opportunity to consolidate systems, thereby reducing the number of servers in the datacenter, increasing efficiency on each physical server, reducing energy consumption, and, hopefully, shortening operational deployment time for new systems.

There is no doubt that the initial virtualization effort can deliver all of these promises and many more. However, there is an assumption that the savings will continue into the future with the simple adoption of virtualization. That assumption would be wrong.

A customer has to ask: why would IT vendors promote virtualization if it meant fewer licenses or fewer servers to sell?

The real answer: for many companies, virtualization can promote greater server growth, not less…

The logic is this:

You just consolidated and reduced the number of servers you have. Congratulations. You just proved you can reduce servers and increase efficiency while reducing GHG (greenhouse gases) in the datacenter.

However, a virtualized datacenter often starts another dynamic in an IT organization: Virtual Server Sprawl. It’s easy to do. The friction of buying and configuring another server is significantly reduced, so business units (to be on the safe side) ask for more virtual servers than they need. Virtualization also encourages greater functional decomposition in the architecture, which can increase the number of servers and add unnecessary system utilization. Overall system utilization goes up, but real architectural efficiency for energy resources goes down.


An important law of Virtual Server Sprawl:

Virtual Server Sprawl eventually leads to faster Physical Server Sprawl


This is not to say virtualization is bad.

Far from it. Virtualization is an essential element of an environmentally sustainable consolidation strategy. What I am arguing is that server virtualization alone can lead to some dramatic problems: if the velocity of virtual server deployments increases too much, it eventually drives physical server deployment velocity to unacceptable levels.

What is needed to stop Virtual Server Sprawl (all three are needed):

· Virtualization Systems Management

· IT Process Leadership Strategy

· Preemptive Infrastructure Architectural Guidance


Systems Management

Part One: Visibility

Understand which solutions your virtual guests are part of and what internal and external dependencies they need on a day-to-day basis. Furthermore, like all systemic quality models, understand the peak and valley forecasts for the solution over time. Also, what is the value of the solution being managed compared to other solutions in the datacenter? We need to know this in order to make good discriminating decisions with resources.
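As a rough sketch of what that visibility looks like, the snippet below models an inventory that maps each virtual guest to its solution, its dependencies, and its observed peak and valley utilization. The class and field names are purely illustrative, not a real management API:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualGuest:
    name: str
    solution: str                                    # business solution this guest serves
    depends_on: list = field(default_factory=list)   # internal/external dependencies
    hourly_cpu: list = field(default_factory=list)   # observed utilization samples (0.0-1.0)

def visibility_report(guests):
    """Summarize, per solution, the guests involved, their combined
    dependencies, and the peak/valley utilization over the sample window."""
    report = {}
    for g in guests:
        entry = report.setdefault(g.solution, {"guests": [], "deps": set(),
                                               "peak": 0.0, "valley": 1.0})
        entry["guests"].append(g.name)
        entry["deps"].update(g.depends_on)
        if g.hourly_cpu:
            entry["peak"] = max(entry["peak"], max(g.hourly_cpu))
            entry["valley"] = min(entry["valley"], min(g.hourly_cpu))
    return report
```

A report like this, kept current, is what lets you compare the value and resource appetite of one solution against another before reallocating anything.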

Part Two: Control

If you cannot act on that knowledge efficiently, then the management environment is pretty much useless. It’s important to power up virtual guests as needed and then spin them down as demand fluctuates. For that to work, the application architect must be able to design loosely coupled systems that can be horizontally scaled up and down without disrupting service.
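The power-up/spin-down decision can be sketched as a simple capacity-planning rule. This is a minimal illustration, assuming a hypothetical per-guest capacity figure and headroom policy; a real control loop would hook these decisions into the hypervisor's management interface:

```python
import math

def plan_capacity(demand, active_guests, capacity_per_guest=100,
                  headroom=0.2, min_guests=1):
    """Decide how many virtual guests should be running for the current
    demand (e.g. requests/sec), keeping a safety headroom above demand.
    Returns (target_guest_count, action)."""
    needed = demand * (1 + headroom) / capacity_per_guest
    target = max(min_guests, math.ceil(needed))
    if target > active_guests:
        return target, "power_up"
    if target < active_guests:
        return target, "spin_down"
    return target, "hold"
```

The key point in the text stands regardless of the policy details: this only works if the solution is loosely coupled enough that adding or removing a guest does not disrupt service.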

Part Three: Measurement

What was the impact of virtualization management decisions on the ecosystem? What have we learned, and how do we improve the decision-making process for moving resource allocations to specific solution components?

Process Leadership Strategy

Virtual Server Sprawl is a very real problem for many enterprises. Executives must actively adapt IT change and configuration management processes to account for the significantly lower friction of service deployment demand. Most configuration and change management processes are not prepared for this change. IT executives should also review the velocity of virtual server growth quarterly and review management standards to encourage an acceptable growth model.

Preemptive Infrastructure Architectural Guidance

Can we supply architects with guidance models to limit virtual server sprawl in the datacenter? I believe we can. It’s natural to functionally decompose to the nth degree when server allocation is cheap. Academics usually promote these kinds of models without concern for the impact on the rest of the IT ecosystem (for example, energy constraints in the datacenter). Holding architects accountable for their energy allocation demands, and expecting them to answer questions based on patterns of IT impact, is a start.

The big question, then, is how do we advise architects to refactor their solutions to optimize resource consumption in a productive way without impacting TTM (time to market) or functional expectations?

This year, I know many will be diving into these questions in more detail. Improperly managed, virtualization can be a Trojan horse. But it doesn’t have to be that way. A little planning and leadership will go a long way toward making this pervasive capability successful in the datacenter.