Microsoft Enterprise Platforms Support: Windows Server Core Team
As you may already know, with the release of Windows Compute Cluster Server 2003 (CCS) we included the Microsoft Message Passing Interface (MS-MPI), an implementation based on and compatible with the reference MPICH2. CCS also integrates with Active Directory, enabling role-based security for administrators and users, and it uses the Microsoft Management Console (MMC) to provide a familiar administrative and scheduling interface.
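Because MS-MPI is compatible with MPICH2 at the source level, standard MPI code builds and runs on either implementation. As a quick illustration, here is a minimal MPI "hello world" in C; nothing in it is CCS-specific, and the same source compiles unchanged against MS-MPI or reference MPICH2:

/* hello_mpi.c - standard MPI "hello world"; builds against
 * MS-MPI on CCS or against reference MPICH2 elsewhere. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, name_len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);                    /* start up the MPI runtime        */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's rank in the job  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total number of ranks           */
    MPI_Get_processor_name(name, &name_len);   /* the node this rank landed on    */

    printf("Hello from rank %d of %d on node %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}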
Microsoft CCS can use Gigabit Ethernet (GbE), InfiniBand (IB), Myrinet, Quadrics, or other high-speed fabrics as its interconnect for high performance computing. The majority of HPC clusters use GbE, but more and more customers these days prefer the high speed and low latency of interconnects such as InfiniBand or other specialty hardware. CCS supports any fabric that is compatible with Winsock Direct (WSD).
This is one of those things where you wake up some days wondering, “How does this actually work?” It seems like a simple question, but after a couple of discussions with the developers you realize, “Hmm, actually it is not very clear,” or you conclude that some magic must be happening somewhere. If you have ever wondered the same thing, then listen up…
What’s more interesting is that when we checked our test clusters with IB cards, we found that DNS and default gateway settings were not configured on the IB network interface cards (NICs). There was no name resolution mechanism on the MPI network at all. So how does MS-MPI force MPI traffic onto that network, using only the MPICH_NETMASK subnet mask, without name resolution?
After a thorough discussion with Mr. MPI, here is a brief summary of how the magic happens…
Myth: the subnet manager running on the IB switch does name resolution on the MPI network. Wrong! The IB subnet manager initializes and routes the fabric itself; it plays no part in IP name resolution. What actually happens is that name resolution is only needed at job startup, and it happens over the private or public network, where DNS is configured. Once the ranks can reach each other there, each node consults the MPICH_NETMASK environment variable, picks its own local address that falls within that subnet mask (the address on its IB NIC), and advertises that address to the other processes in the job, so all subsequent MPI traffic flows over the IB fabric.
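To make that concrete, here is a minimal C sketch of the selection step, assuming a node’s addresses are already known from name resolution on the Ethernet side. The candidate addresses, the helper name on_mpi_network, and the sample 10.0.0.0/255.255.0.0 values are all hypothetical; this illustrates the netmask-matching idea only, not MS-MPI’s actual source code.

/* netmask_pick.c - illustrative sketch (NOT MS-MPI source): given the
 * addresses a node owns, pick the one on the MPI network described by
 * a "net/mask" pair such as "10.0.0.0/255.255.0.0". */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* inet_addr(); on Windows use winsock2.h */

/* Return 1 if addr falls inside the network defined by net/mask.
 * All values are in network byte order, so masking works directly. */
static int on_mpi_network(uint32_t addr, uint32_t net, uint32_t mask)
{
    return (addr & mask) == (net & mask);
}

int main(void)
{
    /* Hypothetical addresses for one node: its Ethernet NIC (with DNS
     * and gateway configured) and its IB NIC (no DNS, no gateway). */
    const char *candidates[] = { "192.168.1.12", "10.0.3.12" };

    /* Parsed from an MPICH_NETMASK-style setting, e.g. "10.0.0.0/255.255.0.0". */
    uint32_t net  = inet_addr("10.0.0.0");
    uint32_t mask = inet_addr("255.255.0.0");

    for (size_t i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
        uint32_t addr = inet_addr(candidates[i]);
        if (on_mpi_network(addr, net, mask))
            /* This is the address MPI traffic gets steered to. */
            printf("MPI traffic goes to %s\n", candidates[i]);
    }
    return 0;
}

In the real cluster the net/mask pair comes from the MPICH_NETMASK environment variable, which the CCS job scheduler sets for you based on the cluster’s network configuration (you could also pass it by hand, with something like mpiexec -env MPICH_NETMASK 10.0.0.0/255.255.0.0), so MPI traffic lands on the IB subnet while name resolution stays on the Ethernet side.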
So the bottom line is that we do not need name resolution on the MPI network at all, as long as the nodes can resolve each other’s names through the private or public network.
For additional information regarding Microsoft Compute Cluster Server, please visit our Windows HPC (High Performance Computing) Community forums.
RELATED RESOURCES:
· Message Passing Interface (MPI) Documentation
· Using Microsoft Message Passing Interface (MS-MPI) Documentation
Mike Rosado
Senior Support Engineer
Microsoft Enterprise Platforms Support