By Dileep Bhandarkar, Ph.D., Distinguished Engineer, Global Foundation Services, Microsoft
In late 2007 we were intrigued by the opportunity of deploying servers in ISO (shipping) containers. Our analysis showed that we could pack as many as 2500 servers into a single 40-foot container, reduce our deployment times, and improve efficiency. We started creating space for stacked ISO containers in our Chicago datacenter and developed a functional specification for the containers that allowed our OEMs to innovate while still meeting our physical and functional requirements. Our container infrastructure specification defined the physical space (length, width, and height), weight, power budget and voltage, inlet water flow rate and temperature, and return water temperature. Beyond this the OEMs were free to design the internals of the containers as they preferred, but they had to avoid stranding power. We specified the server configurations and network architecture. Our first deployment in 2009 put up to 2500 servers, pre-installed and networked, in a 40-foot ISO container. The containers were lifted off a truck, rolled on air skids into their parking spots, and connected to power, network, and water in a matter of a few hours.
Moving a container into the Chicago datacenter
The server configuration changed for the next purchase and we moved to a stacked solution with about 1800 servers in the bottom container and all electrical and mechanical systems in a second container on top, which is slightly shorter to fit within the datacenter’s height limits.
Double stacked container in the Chicago datacenter
The next evolutions of containers were not restricted to ISO height. The increased height allowed the same number of servers to be housed in a single taller container. Our floor space constraints kept the width and length within ISO limits, but we had enough headroom for a taller container. We did, however, have to factor in highway overhead clearance for the trucks shipping these containers to our facility.
The increased height of containers allows the same number of servers to be housed in a single unit
Through this experience we learned several lessons. First, containers allowed us to bring in large amounts of pre-assembled, pre-installed “contained” computing capacity and connect it into our infrastructure within hours of hooking up power, network fiber, and water. The energy efficiency came from containing the air flow within a restricted volume. The ISO dimensions were convenient for transportation and for using readily available lifting apparatus, but they placed constraints on the container design, and we moved away from strict adherence to them over time. The 8-foot width of ISO containers limited how the server racks could be positioned and serviced.
Our experiments with free air cooling showed that we could run our servers at higher inlet temperatures and use outside air in most locations instead of water-side economization. With the addition of adiabatic cooling, the usable temperature range could be extended by 10 to 15 degrees. Our Generation 4 design deviated from ISO dimensions to allow more design flexibility; for example, the floor space was expanded to allow a larger footprint in our Quincy datacenter. Our free-air-cooled Information Technology Pre-Assembled Components (ITPACs) came in four units that were assembled on site (some assembly required!) before they could be connected to power, water, and network.
Modular PAC units being prepared for final connections in Quincy, WA datacenter
Modularity was a design goal for us, but modularity does not imply containers, and containers do not necessarily imply ISO compliance! We stopped using the term container and coined ITPACs to denote our new IT Pre-Assembled Components.
In recent discussions at several datacenter conferences, there has still been some confusion about what modularity means. To some it means containers. The dictionary defines it as “the use of individually distinct functional units” or “constructed with standardized units or dimensions for flexibility”. A modular datacenter consists of smaller building blocks that can be replicated to deliver additional capacity. The form factor and size (power and floor space) can vary greatly. Our Gen 4 datacenter designs feature modular buildings that function as “colocation rooms” that can be populated with servers as needed, while the ITPACs are shipped fully populated with servers. Both are free air cooled in selected facility locations. This is the next step in our pursuit of higher energy efficiency. The physical form factor and technologies will evolve over time. ISO containers served a good purpose at first, but they are not as relevant today.
Containers and ITPACs are simply another form factor for housing IT infrastructure. They are not tied to any specific hardware or software configuration, and they should not be misconstrued as cloud services appliances! Any configuration that resides inside a container or ITPAC can be installed and operated inside a colocation room, but the cost parameters will be different.
To “coin” an old car commercial phrase from the 1980s, “this is not your father’s datacenter”!
Microsoft’s Quincy, WA datacenters: Gen 2 datacenter on left with Gen 4 modular facility on bottom-right side
You can access more of our best practices around datacenter efficiencies, rightsizing servers for production environments, cloud security, etc. in published papers and blogs available on our web site at http://www.globalfoundationservices.com.
So why did Microsoft move from containers that held 2500 servers to containers that held only 1800?
As our workloads evolved, we increased the memory and storage per server to expand each server's capability. The additional disk drives required more space, and the new configuration left room for only 1800 servers per container.