Dileep Bhandarkar, Ph.D., Distinguished Engineer
Global Foundation Services

Data center operators are touting metrics like PUE to demonstrate energy efficiency leadership, but there is more to consider. Data centers exist because they can more efficiently and economically scale to house servers that host online services used by enterprises and consumers. However, we need a more holistic approach to ensure that we are minimizing the energy consumed to run these services. Performance matters, but so does the energy consumed to deliver that performance. During my past 40 years as a computer architect, I have seen processor power grow faster than processor performance.

During the last decade, processor power reached extremely high levels. Previously, as a senior server platform architect at Intel, I led the effort to use low-power mobile cores in server processors. In my role as a cloud server infrastructure designer at Microsoft for the last five years, I have focused on driving our performance per watt higher with each new generation. Microsoft works closely with our processor suppliers to specify and test low-power processors. We have moved away from the highest-performance processors because their performance per watt has been worse than that of the low-power parts. Yes, Every Watt Matters!

Microsoft remains focused on performance per dollar per watt as an additional means of achieving higher energy efficiency. The chart below illustrates how low power processors can indeed provide a better value proposition.

The performance, price, and power numbers are based upon public information that was available to us in 2009. In a truly distributed computing environment, where each server does not need to deliver the maximum possible performance, the servers with the best performance per dollar per watt are the best choice. We continue to favor multi-core processors that are optimized for performance per watt.
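As a back-of-the-envelope illustration of the metric (with made-up numbers, not the 2009 data behind the chart), performance per dollar per watt can be computed like this:

```python
# Illustrative only: the server names, performance units, prices, and wattages
# below are hypothetical, not the figures from the 2009 chart.
def perf_per_dollar_per_watt(perf, price_usd, power_w):
    """Composite value metric: higher is better."""
    return perf / (price_usd * power_w)

servers = {
    # name: (relative performance, price in USD, wall power in watts)
    "high-power part": (100, 2000, 130),
    "low-power part": (80, 1200, 60),
}

for name, (perf, price, power) in servers.items():
    metric = perf_per_dollar_per_watt(perf, price, power)
    print(f"{name}: {metric * 1e6:.1f} perf per $ per W (x1e6)")
```

In this sketch the low-power part wins despite its lower absolute performance, which is exactly the point: when each server does not need to deliver maximum performance, the composite metric favors the more efficient design.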

Microsoft maintains some of the most monitored and measured data centers in the industry. The data from this monitoring informs a thoughtful, holistic approach to designing our infrastructure and servers together, which has led to significant year-over-year improvements in energy efficiency. Our measurements of server performance under load, and workload analysis at huge scale, have also enabled us to right-size our server platforms. We have eliminated unnecessary components along with the power they consume, used Gold-rated or better high-efficiency power supplies and voltage converters, and bounded the expandability of our server platforms to achieve significant power savings.

More recently, we started using solid state drives in some of our server platforms, where we saw a 30 percent increase in performance for a five percent increase in power. Additionally, we have taken advantage of Moore's Law to increase the number of cores from two per processor in 2007 to eight per processor in 2012, while maintaining or decreasing platform power. Our holistic approach to designing data centers and servers together has allowed us to drive additional improvements in energy efficiency.
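A quick sanity check on the SSD numbers above: a 30 percent performance gain for a 5 percent power increase works out to roughly a 24 percent improvement in performance per watt.

```python
# Effect of the SSD change on performance per watt, using the figures from
# the post (+30% performance, +5% power); the baseline values are arbitrary.
base_perf, base_power = 1.0, 1.0
ssd_perf, ssd_power = base_perf * 1.30, base_power * 1.05

gain = (ssd_perf / ssd_power) / (base_perf / base_power) - 1.0
print(f"performance-per-watt gain: {gain:.1%}")  # ~23.8%
```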

Our team of data center designers continues to drive our Power Usage Effectiveness (PUE) lower, and we run our servers at higher temperatures, which allows us to use free air cooling in many of our locations today. Climate data from The Green Grid shows that free air cooling can be used exclusively in many places. Because many systems can run at higher temperatures without problems, using outside air to cool computing equipment is not only viable, but also sustainable and cost efficient. The addition of evaporative cooling also allows our servers to keep operating when the outside temperature rises above the 90° to 95° Fahrenheit (F) inlet temperature the equipment needs. Evaporative cooling is especially well suited to climates where the air is hot and the humidity is low.
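For readers unfamiliar with the metric, PUE is simply total facility power divided by the power delivered to the IT equipment. The overhead figures below are hypothetical, chosen only to show why replacing a chiller plant with free air cooling pushes PUE toward its ideal value of 1.0:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT power (>= 1.0)."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility with a 1 MW IT load; overhead numbers are illustrative.
it_load = 1000.0  # kW of server load
chilled = pue(it_load + 500.0, it_load)   # chillers, CRAC units, lighting
free_air = pue(it_load + 150.0, it_load)  # fans and evaporative cooling only
print(f"chiller plant PUE: {chilled:.2f}, free air PUE: {free_air:.2f}")
```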

While the standard maximum recommended temperature for data centers under guidelines from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is 80.6° F, there is growing evidence that facilities can operate at significantly higher temperatures without damaging the IT equipment (ASHRAE now defines an Allowable A2 range of up to 95° F). Most server manufacturers warranty their equipment up to 95° F. Raising the operating temperature of facilities yields significant energy savings by reducing or eliminating the use of chillers. Since 2007, we have been helping to drive the industry to open up temperature ranges beyond those recommended, and we shared our best practices and R&D results from our "data center in a tent" project in 2008. We have been running our Dublin data center with free air cooling since 2009 and continue to make improvements with each new phase of construction.

The free air cooling approach pioneered in our Dublin facility is also being used at our new sites in Boydton, VA and Des Moines, IA, with an expanded normal operating temperature range up to 85° F and occasional excursions to 90° F.

Last year, we started deploying ITPACs (IT Pre-Assembled Components) in our Quincy, Washington data center. We took 480V directly from the facility transformers to the server racks, eliminating the losses of another level of voltage conversion. We also removed the fans from the servers and relied on the air handlers in the ITPAC to pull the hot air out. Strictly speaking, this meant that our PUE increased because the fan-less servers consumed less power! But we were more energy efficient because it took less total energy to run the same workload!
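The arithmetic behind that apparent paradox is worth spelling out. Server fans count as IT load, so removing them shrinks the denominator of PUE even when total facility energy goes down. The numbers below are hypothetical, chosen only to illustrate the effect:

```python
def pue(total_kw, it_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_kw / it_kw

# Hypothetical numbers (kW), not measured data. Before: server fans count as
# IT load. After: fans removed from the servers; the facility air handlers
# work slightly harder to move the same air, but more efficiently overall.
it_before, overhead_before = 1000.0, 120.0
it_after, overhead_after = 950.0, 140.0

pue_before = pue(it_before + overhead_before, it_before)
pue_after = pue(it_after + overhead_after, it_after)
total_before = it_before + overhead_before
total_after = it_after + overhead_after

print(f"PUE: {pue_before:.3f} -> {pue_after:.3f} (worse on paper)")
print(f"Total: {total_before:.0f} kW -> {total_after:.0f} kW (better in reality)")
```

This is why we treat PUE as one input among several rather than the whole story: the right objective is minimizing the total energy needed to run the workload.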

Our cloud infrastructure server engineering team continues to work with our technology providers to help them define and design more energy-efficient products with and for us. It is all about performance per watt. Since 2008, we have been sharing our best practices and ideas for the future with the industry at large. We also continue to participate in industry conferences and share ideas with other cloud service companies, including through Facebook's Open Compute Project.

We are confident that this direction meets the needs of our data center environment, and we hope that these best practices help to set the stage for continued healthy and dynamic dialog and sharing in this industry. We are delighted to see eBay's announcement of their plans to use about six million watts of power generated on-site by fuel cells, and we look forward to seeing additional information on the analysis that led to this bold move. Fuel cells and other innovative energy technologies have been on our research agenda. We will follow the eBay project with great interest, along with other members of the Data Center Pulse group, and we commend eBay for pushing the envelope with this revolutionary approach and look forward to future progress reports.

 //db