by Mark Stone on March 16, 2009 02:05pm


I’m going to tell a story that starts in Indiana, but really it’s about Brazil.

Once upon a time, “scientific computing” was nearly synonymous with “Fortran”. Today, though, just about any high-level language can be used to write High Performance Computing (HPC) applications. These days that language choice also includes C#.

At Indiana University, the Open Systems Lab has pioneered work to implement Message Passing Interface (MPI) support for .NET, so that MPI applications can be written in C#. The project is MPI.NET, and you can find it on CodePlex. It is open source, about three years old, has reached a 1.0 release, and is compatible with two other important open source projects, Open MPI and Mono. The principal developers behind the project are Andrew Lumsdaine at Indiana University and his former student, Douglas Gregor, who is now on the faculty of Rensselaer Polytechnic Institute.
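To give a feel for what this looks like in practice, here is a minimal “hello world” sketch in the style of the MPI.NET tutorial — the class name is my own, and the program assumes you compile against the MPI.NET assembly and launch with mpiexec:

```csharp
using MPI;

class Hello
{
    static void Main(string[] args)
    {
        // Initialize the MPI environment for the lifetime of the using block;
        // it is finalized automatically when the block exits.
        using (new MPI.Environment(ref args))
        {
            Intracommunicator comm = Communicator.world;
            // Each process reports its rank within the world communicator.
            System.Console.WriteLine("Hello from rank " + comm.Rank
                                     + " of " + comm.Size);
        }
    }
}
```

Launched with something like `mpiexec -n 4 Hello.exe`, each of the four processes prints its own rank — ordinary C#, but running as a parallel MPI job.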

This is the kind of open source work that’s really exciting to see because of the way it expands choices for the developer and the end user. A C# developer should not be closed off from writing HPC applications if that’s what they want to do. And a research scientist should not have to think about whether their lab is running Linux or Windows Server. Both of these individuals are working enough layers above the operating system that somebody else’s operating system choice should not be a constraint.

So I was very excited to learn that students in Brazil at the Federal University of Rio Grande do Sul were doing work on MPI, and excited to talk with them about their work. One of their projects is MPI#, also open source and also hosted on CodePlex.

MPI# builds on top of the work of MPI.NET, adding some functionality not yet present in MPI.NET. Specifically, quoting from the project description:

The goals of this project would be to build upon MPI.NET in order to complement it with the features that are missing, mainly regarding collective communication. Either these features could benefit from C#'s native support for such communication, or they could be programmed on top of the provided MPISend/MPIRecv encapsulations. C# and .NET features such as fault tolerance or dynamicity support would be studied, in order to make the MPI# implementation robust in large, dynamic, and heterogeneous platforms.
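To make the “on top of Send/Recv encapsulations” option concrete, here is a hypothetical sketch of a broadcast built purely from point-to-point calls, using MPI.NET's Send and Receive — the name NaiveBroadcast and the structure are my illustration of the general technique, not code from MPI#:

```csharp
using MPI;

static class NaiveCollectives
{
    // A broadcast expressed with point-to-point operations only:
    // the root sends the value to every other rank in turn, and
    // every non-root rank receives it from the root.
    public static T NaiveBroadcast<T>(Intracommunicator comm, T value, int root)
    {
        if (comm.Rank == root)
        {
            for (int dest = 0; dest < comm.Size; ++dest)
                if (dest != root)
                    comm.Send(value, dest, 0);
            return value;
        }
        T received;
        comm.Receive(root, 0, out received);
        return received;
    }
}
```

A production collective would replace the linear loop with something like a binomial tree so the work scales logarithmically with the number of processes — which is precisely the kind of collective-communication engineering the project description is pointing at.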

Two of the students working on MPI# are Ismael Stangherlini and Fernando Afonso. They are graduate students in computer science, working on projects affiliated with the Brazilian Interoperability and Open Source Software Development Nucleus. When I talked to them about their work on MPI#, I was curious what their communication with Indiana University had been like. Their response: they had never been in contact with Indiana University; they simply downloaded the code for MPI.NET and started working on their own.

That’s the magic of open source: that they can, in fact, just download the code on their own and start coding against it. They may make an important contribution to MPI.NET. Or their code may be entirely disregarded. Or they may move on to other projects, and somebody else may or may not pick up where they left off. At this stage it’s too early to tell. But the fact that all of these scenarios are possible demonstrates why, as a methodology, open source is so nimble and adaptive. A top-down product development process, or a top-down standards development process, can only execute on the innovations envisioned by the few at the top, and at the speed of the slowest decision-makers in the process. But a bottom-up open source process enables every innovation that anyone at the grassroots level can see.