We have finished version 3 of Windows HPC Server! The verbose, official, and approved name is Windows HPC Server 2008 R2 Suite, signaling that we are building on Windows Server 2008 R2 as our core operating system. But make no mistake: this is the big v3 release, our most ambitious release. It’s like the third stage of a rocket firing. What makes it a big deal? We have continued to improve performance while adding new features that will grow the HPC community.


Sometimes people think supercomputing is all about the Top500 list, the ranking of the 500 most powerful supercomputers in the world, but HPC is more than that. It’s about enabling the next generation of complex simulations in biology, chemistry, physics, finance, weather, and more. Microsoft’s ambition is not limited to the Top500; we also want to make sure the next 500,000 clusters are just as capable, increasing the number of people and applications that can use the power of cluster-based supercomputing.


We’re doing a bunch of things to enable more people and organizations to use HPC. First, we’re improving the tools developers use to write multi-core applications with Visual Studio 2010. My favorite feature is the Parallel Performance Analyzer, a tool that lets developers visualize thread behavior, such as context switching, in their applications. We’ve also partnered with Intel to make sure their parallel performance tools run well in Visual Studio. Customers asked for a more resilient programming model for service-oriented applications, so with this release we’ve created a new asynchronous programming model that allows applications to submit millions of calls to the cluster, disconnect from the cluster, and collect the results later.
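To make the submit-disconnect-collect idea concrete, here is a toy sketch in plain Python. This is not the actual HPC Server SOA API; the on-disk SQLite database is a hypothetical stand-in for the cluster's durable broker, and all the function names are made up. The point it illustrates is durability: requests survive the client going away, and results can be gathered in a later session.

```python
import sqlite3

DB = "durable_session.db"  # hypothetical stand-in for the cluster's durable broker


def submit(requests):
    """Session 1: persist requests durably, then disconnect."""
    con = sqlite3.connect(DB)
    con.execute(
        "CREATE TABLE IF NOT EXISTS calls (id INTEGER PRIMARY KEY, arg REAL, result REAL)"
    )
    con.executemany("INSERT INTO calls (arg) VALUES (?)", [(r,) for r in requests])
    con.commit()
    con.close()  # the client can now go away; the work is safely queued


def process_pending():
    """Stand-in for the cluster: compute results for queued calls (here, squaring)."""
    con = sqlite3.connect(DB)
    for call_id, arg in con.execute("SELECT id, arg FROM calls WHERE result IS NULL").fetchall():
        con.execute("UPDATE calls SET result = ? WHERE id = ?", (arg * arg, call_id))
    con.commit()
    con.close()


def collect():
    """Session 2: reconnect later and fetch the completed results."""
    con = sqlite3.connect(DB)
    results = [row[0] for row in con.execute("SELECT result FROM calls ORDER BY id")]
    con.close()
    return results
```

A client would call `submit([1.0, 2.0, 3.0])` and exit; after the cluster runs `process_pending`, a later session calling `collect()` gets back `[1.0, 4.0, 9.0]`.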


Enabling more people to use HPC also means making it easier to set up and manage. First, you have to get the cluster up and running; to help with that we’ve added support for network boot as well as the ability to dual boot with Linux. Next, we’ve improved day-to-day use of the cluster with new job scheduling policies as well as SharePoint integration. Finally, it is the nature of distributed systems to fail. To help with failures we’ve improved our diagnostics, making it easier to identify hardware failures, network failures, or failures in applications. ISVs shipping applications with the Windows HPC logo include HPC Server diagnostics with their applications, making it easier for administrators to fix their clusters.


Another way to enable more people to use HPC is ensuring that commonly used applications like Mathematica or MATLAB run well on the cluster. With this release we now support running Microsoft Excel 2010 on the cluster, increasing the size and complexity of models that can be computed in Excel. There are an estimated 300 million Excel users worldwide, and Excel is often cited as a modeling and simulation tool used by engineers, scientists, and financial quants. Some of our customers run thousands of embarrassingly parallel simulations in Excel; the ability to offload them to a cluster reduces their simulation runs from days to hours.
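The days-to-hours claim follows directly from the shape of the workload: each simulation run is independent, so throughput scales with the number of workers. Here is a minimal sketch of that pattern in plain Python (not the Excel offloading feature itself; the payoff function is a made-up stand-in):

```python
import random
from multiprocessing import Pool


def simulate(seed):
    """One independent simulation run, e.g. one portfolio scenario.
    Averaging draws from a standard normal is a made-up stand-in payoff."""
    rng = random.Random(seed)  # seeded per run, so each run is reproducible
    n = 2_000
    return sum(rng.gauss(0, 1) for _ in range(n)) / n


if __name__ == "__main__":
    # Because no run depends on any other, the runs parallelize perfectly:
    # N workers give close to an N-fold reduction in wall-clock time.
    with Pool() as pool:
        results = pool.map(simulate, range(200))
    print(f"mean estimate over {len(results)} runs: {sum(results) / len(results):.4f}")
```

A cluster does the same thing at larger scale: the scheduler farms the independent runs out to compute nodes instead of local cores.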


There are two major categories of supercomputing applications: tightly coupled applications and embarrassingly parallel applications, and our performance improvements benefit both. First, we have continued turning the performance crank for tightly coupled applications that use the Message Passing Interface (MPI) and high-speed RDMA networking. The result is performance that equals Linux on both open-source and application-specific benchmarks. And we continue to contribute our MPI improvements to Argonne’s open-source MPI project, making these contributions some of Microsoft’s largest open-source contributions.
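To see why interconnect performance matters so much for tightly coupled codes, consider a toy sketch in plain Python (no real message passing; the function name and the two-cell chunks are illustrative). In a 1D diffusion solver partitioned across ranks, every time step requires a "halo exchange" with neighboring ranks before any cell can be updated, so communication latency is paid on every single iteration:

```python
def diffuse_step(chunks):
    """One step of 1D diffusion, with the domain split into per-rank chunks.
    The halo exchange below is what MPI performs over the network each step;
    this per-iteration communication is why tightly coupled codes are so
    sensitive to interconnect latency (and why RDMA helps)."""
    new_chunks = []
    for i, chunk in enumerate(chunks):
        # Halo exchange: fetch the edge cells of the neighboring "ranks"
        # (boundary chunks just reuse their own edge value).
        left = chunks[i - 1][-1] if i > 0 else chunk[0]
        right = chunks[i + 1][0] if i < len(chunks) - 1 else chunk[-1]
        padded = [left] + chunk + [right]
        # Local update: each cell becomes the average of its two neighbors.
        new_chunks.append(
            [(padded[j - 1] + padded[j + 1]) / 2 for j in range(1, len(padded) - 1)]
        )
    return new_chunks
```

An embarrassingly parallel job skips the exchange entirely, which is exactly the distinction between the two categories above.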


Second, we’ve made improvements to take advantage of the multi-core revolution, improving performance on the latest multi-core chips from Intel and AMD and adding support for general-purpose GPUs, the latest revolution in parallel processing.


Finally, with this release we support using Windows 7 workstations as part of the cluster environment through our new Desktop Compute Cloud (DCC) feature. With DCC, administrators can specify particular hours of the day when workstations join the cluster, for example, every night after 7 PM. Of course, if users are logged in and still working, we won’t use their workstations for computation. With DCC we further expand the compute fabric for HPC.


So, wrapping up, we have finally put the finishing touches on Windows HPC Server 2008 R2, our third release - a release that will expand the HPC community. As for the future of Windows HPC Server: today at the High Performance Computing in Financial Markets conference we demonstrated integration with Azure, which will ship as a product update in the fall, allowing HPC Server users to mix traditional compute nodes in a data center, desktop workstations, and instances running in Azure.



Ryan Waite, general manager