I have been investigating some more in the area of Green IT and S+S. Some ideas and a lot of questions have come to mind. Please read on and let me know if they make any sense.

By the way, Part 1 is here :-)

1. How do you understand the status quo?

This may prove to be the most difficult part of the job. There aren’t many tools available. System Center Operations Manager, plus a few OEM management packs, is a starting point. Alas, you must build your own model to correlate measured power utilization with application activity over time; from those correlations you can derive a measure of efficiency. A few third-party applications (e.g. Verdiem’s Surveyor, Avocent and APC InfrastruXure) do a better job of establishing the baseline, although again they do little for the correlation analysis.
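To make the correlation idea concrete, here is a minimal sketch in Python (purely illustrative, not a feature of any of the tools above): it joins power readings exported for a rack with an application’s transaction counts over the same intervals and derives a rough “transactions per watt-hour” figure. The file names and column names are assumptions made for the sake of the example.

```python
# Minimal sketch: correlate exported power readings with an application's
# transaction counts per interval and derive a crude efficiency figure.
# The CSV layout and column names are illustrative assumptions only.
import csv

def load_series(path, key_field, value_field):
    """Read a CSV of (timestamp, value) samples into {timestamp: float}."""
    series = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            series[row[key_field]] = float(row[value_field])
    return series

def efficiency(power_watts, transactions, interval_hours=1.0):
    """Transactions per watt-hour for every interval present in both series."""
    result = {}
    for ts, watts in power_watts.items():
        if ts in transactions and watts > 0:
            result[ts] = transactions[ts] / (watts * interval_hours)
        # intervals missing from either export are simply skipped
    return result

if __name__ == "__main__":
    power = load_series("rack_power.csv", "timestamp", "avg_watts")
    tx = load_series("app_load.csv", "timestamp", "transactions")
    for ts, eff in sorted(efficiency(power, tx).items()):
        print(f"{ts}: {eff:.2f} transactions per watt-hour")
```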

IBM’s Active Energy Manager goes a step further (on IBM hardware) by allowing you to study trends and to take action on specific energy-related conditions. Again, it is not a complete “IT intelligence” tool.

2. How do you design your infrastructure and applications to optimize consumption?

Once you understand what type of load consumes what power (no small feat):

1. Can you reduce the physical tiers of your architecture? For instance, if you have a memory-intensive application and a CPU-intensive one, you may want to co-host them, using all the available cores and saving a few machines’ worth of power. This will only work from a performance point of view if you manage resource allocation tightly to avoid contention: in our example, you would run a thread belonging to the memory-intensive application on one core and a thread of the CPU-intensive one on the other core of the same CPU socket. Before embarking on such a consolidation exercise, you will want to estimate the costs and the savings, in terms of both power and money (a back-of-the-envelope sketch of that estimate follows this list). Also keep in mind that, as a consequence of the changed workload, you may require different hardware (e.g. “whole machines” rather than just blades) to optimize your power consumption profile over time.

2. Can you reduce the logical tiers of your architecture? Here’s an example: your application may use SharePoint as a front-end, Windows Workflow to manage business logic and SQL Server for data processing, all running on separate hardware. SharePoint can host workflows; SQL Server handles workflows in Integration Services and can host the CLR in-process. With some clever re-architecting of your application, you may be able to get rid of the middle tier by using some combination of the two workflow services. The whole area of “power-conscious” applications is yet to be explored. We’re investigating.

3. Can you offload a tier of your architecture? Here’s where Software + Services comes into play. For instance, you may consider using an online storage service (e.g. SQL Server Data Services, aka CloudDB or Sitka) instead of hosting your own SQL Server. If you have a compute-intensive application, you may want to farm it out to an HPC provider and pay by CPU cycles utilized (Microsoft will offer such a service, now in pilot stage with a few ISVs). If your provider is able to consolidate several customers’ workloads on its servers and charge for capacity consumed, the overall carbon footprint may be reduced, along with your costs.

4. If you do offload a function, how do you measure its performance against SLAs? This is actually the most difficult point. Technology is available to do all of the above (although not necessarily on Windows): capacity-on-demand, for instance, has been a feature of certain mainframes and Unix systems for years, and hosted service offerings are widely available. However, different security boundaries and political pressures make it difficult to build tools that monitor SLA compliance across companies, let alone countries.

5. Can you offer or trade computing capacity? If you know how much you need, when and where, why not “sell” spare capacity? Again, S+S comes into play here. Grid computing is possibly the best example of this concept implemented today.
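As for the cost and savings estimate mentioned in point 1, here is a back-of-the-envelope sketch. The server counts, wattage, PUE and electricity price are made-up inputs you would replace with your own measurements.

```python
# Rough yearly energy and cost savings from removing physical servers
# through consolidation. All figures below are hypothetical placeholders.
def consolidation_savings(servers_before, servers_after,
                          avg_watts_per_server=350.0,
                          pue=2.0,                 # datacentre power usage effectiveness
                          price_per_kwh=0.10,      # currency units per kWh
                          hours_per_year=24 * 365):
    """Estimate kWh and money saved per year by retiring servers."""
    servers_removed = servers_before - servers_after
    # Facility power = IT power * PUE (cooling, power distribution losses, etc.)
    watts_saved = servers_removed * avg_watts_per_server * pue
    kwh_saved = watts_saved * hours_per_year / 1000.0
    return kwh_saved, kwh_saved * price_per_kwh

if __name__ == "__main__":
    kwh, cost = consolidation_savings(servers_before=20, servers_after=12)
    print(f"~{kwh:,.0f} kWh/year, ~{cost:,.0f} currency units/year saved")
```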

3. What tools & techniques are available?

Hyper-V sounds like an obvious answer, but we are at risk of sounding like the proverbial person with just one hammer in the toolbox, to whom everything looks like a nail.

Virtualization is one powerful tool, but it must be used appropriately. One must carefully choose which workloads to virtualize, then which of those virtualized workloads can be combined on a single physical tier. Again, given a workload profile, that physical tier may look entirely different from your current one. Also, most often we speak only of host virtualization. For a complete solution, we must find the best combination of host, storage and network virtualization.
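To illustrate the placement question, here is a deliberately crude sketch that packs measured CPU and memory profiles onto as few hosts as possible using first-fit-decreasing. Real tools such as Virtual Machine Manager use much richer models; the workloads and capacity figures below are assumptions for illustration only.

```python
# Crude consolidation plan: pack workloads onto hosts by first-fit-decreasing,
# respecting CPU and memory headroom. Illustrative only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Workload:
    name: str
    cpu: float   # average cores consumed
    mem: float   # average GB consumed

@dataclass
class Host:
    cpu_capacity: float
    mem_capacity: float
    placed: List[Workload] = field(default_factory=list)

    def fits(self, w: Workload) -> bool:
        used_cpu = sum(x.cpu for x in self.placed)
        used_mem = sum(x.mem for x in self.placed)
        return (used_cpu + w.cpu <= self.cpu_capacity
                and used_mem + w.mem <= self.mem_capacity)

def place(workloads, cpu_capacity=8.0, mem_capacity=32.0):
    """Sort by CPU demand, then place each workload on the first host with room."""
    hosts = []
    for w in sorted(workloads, key=lambda x: x.cpu, reverse=True):
        target = next((h for h in hosts if h.fits(w)), None)
        if target is None:
            target = Host(cpu_capacity, mem_capacity)
            hosts.append(target)
        target.placed.append(w)
    return hosts

if __name__ == "__main__":
    demo = [Workload("web", 2.0, 4), Workload("sql", 4.0, 16),
            Workload("batch", 3.0, 2), Workload("wf", 1.0, 8)]
    for i, h in enumerate(place(demo), 1):
        print(f"host {i}: {[w.name for w in h.placed]}")
```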

A caveat to keep in mind is that virtualization may be self-defeating without proper management practices. The ease of deploying virtual machines may lead operators to spawn far more than necessary. I have seen a few examples of this in large deployments.

Regulation (in the form of prescriptive guidance) may address some of the problem, but charging money is more effective. The idea of trading computing power may become useful in this scenario: imagine that you planned and budgeted for 200 VMs, but find out that you’re running just 150. You could sell the capacity for the remaining 50 to another part of your organization that requires it. They wouldn’t even need to buy or host servers. Who said that market economy principles cannot be applied to IT governance?
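A trivial illustration of that internal trading idea, using the VM counts above; the monthly rate per VM is a made-up figure standing in for whatever your chargeback model uses.

```python
# Spare capacity you could offer to another business unit, and its notional value.
# The rate per VM is a hypothetical placeholder.
def reclaimable_capacity(budgeted_vms, running_vms, monthly_rate_per_vm=50.0):
    spare = max(budgeted_vms - running_vms, 0)
    return spare, spare * monthly_rate_per_vm

if __name__ == "__main__":
    spare, value = reclaimable_capacity(budgeted_vms=200, running_vms=150)
    print(f"{spare} spare VMs, worth ~{value:.0f}/month internally")
```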

Co-hosting is another technique to optimize resource consumption, often neglected on Windows. If you can virtualize two workloads and run them together without significant performance impact, you may be able to gain even more by running them on the same OS instance, eliminating the overhead of virtualization. The applications must of course be compatible (able to coexist). Tools like WSRM (Windows System Resource Manager) allow you to change resource allocation dynamically, adapting to workload requirements. Unix and mainframes have been doing this for decades, along with virtualization.

IIS 6 and 7, for instance, are classic examples where co-hosting several websites works very well. SQL Server 2005 and 2008 are good examples too: you can co-host several databases in one instance and several instances on one machine.

As for capacity optimization tools, I could not find a silver bullet. I mentioned a few so far; here’s a quick summary:

- System Center Virtual Machine Manager, with its workload analysis and placement functions, is instrumental in devising the best resource allocation.

- The Microsoft Assessment and Planning Toolkit is a useful, free instrument to plan for virtualization (amongst other things).

- System Center Capacity Planner is also very useful in designing the target architecture for certain workloads (Exchange, SharePoint, Operations Manager).

- For a far more sophisticated (and expensive) capacity management and planning suite, you may want to look at tools like SAS.

- System Center Operations Manager, plus management packs provided by OEMs, is useful to obtain a baseline of resource utilization.

- IBM’s Active Energy Manager is a great example of what we can do with the data.

4. Further reading

Here are a few pointers that may help inform a discussion:

- Lewis Curtis’s blog: http://blogs.technet.com/lcurtis/

- Little Miss Enviro-Geek: http://blogs.technet.com/lmeg/default.aspx

- The Green Datacenter Blog: http://www.greenm3.com/2008/07/new-coal-electr.html

- MAP Toolkit

- Microsoft’s Environment web page

- IBM Green Datacenter paper

- IBM Active Energy Manager

- Windows Server 2008 Power Savings

- Green Computing Paper

- The Green Grid

- Infrastructure Planning and Design