by anandeep on February 05, 2007 09:25pm
I am an avid reader of Joel On Software – I find his insights great and very revealing.
I was reading a recent blog post by Joel entitled “The Big Picture”, in which he has this to say about open source:
“Open source doesn’t quite work like that. It’s really good at implementing copycat features, because there’s a spec to work from: the implementation you’re copying. It’s really good at Itch Scratching features. I need a command line argument for EBCDIC, so I’ll add it and send in the code. But when you have an app that doesn’t do anything yet, nobody finds it itchy. They’re not using it. So you don’t get volunteers.”
This was in the context of a review of the book “Dreaming in Code” by Scott Rosenberg. Scott was on campus at Redmond a few days ago, talking about his book. The book is a “Soul of a New Machine”-type look into an open source startup that was trying to build a PIM (Personal Information Manager) code-named Chandler, and the travails that startup went through. Scott begins his introduction by saying “… the art of creating it (software) continues to be a dark mystery”, so you can guess the whole thing didn’t go very well!
This got me thinking: what is open source really good for? Is it only good for copycat or scratch-an-itch software? Could there be a limit to what the open source process can achieve in terms of software artifacts?
First of all, being “copycat” or “scratch-an-itch” software is not bad at all. It can be argued that Firefox falls into the former category, being based on closed source browsers, but while gaining feature parity with other browsers it also added new features. I think this benefited the general public: not only did Firefox get better, but in the spirit of competition other browsers (including Internet Explorer) got better too. As for the latter category, what good is software if it isn’t working for its users or “scratching an itch”?
I think the point Joel was trying to make was that bootstrapping an open source project requires either a user need or the need for an alternative. A community is more easily built when there is a shared need for functionality or alternatives.
But something about that bothered me. After all, isn’t Open Source all about the “love of the game”? Why wouldn’t a community want to do something experimental that didn’t have any immediate payoff? Coming from a university research environment, I knew there were people out there releasing experimental code as open source, everything from robotics to the ALICE Educational Software Authoring System (I knew the person behind ALICE, Randy Pausch from Carnegie Mellon). ALICE has a pretty vibrant open source community behind it.
That said, all the top open source projects (based on a poll by O’Reilly) fall within Joel’s characterization.
So can futuristic experimental projects be developed using the open source process?
I think the answer is yes. But these kinds of projects cannot be developed in a pure open source community process like that of Linux. An institution like a university or a company has to bring critical mass to them. The US government paid for a lot of ALICE before it could be put out there in a true community process.
BTW, I just looked at the ALICE website, and Microsoft has also supported ALICE financially. That wasn’t the case back when I was at Carnegie Mellon. I remember thinking, “How is an educational software package that is all the rage with art students going to help the US defense department?” Almost all its funding at the time came from DARPA (the Defense Advanced Research Projects Agency)!
A lot of money, resources, and people have to go into a software project to bring it to the stage where a community sees that an itch is going to be scratched, and then gets on board.
I was chatting with Hank Janssen and Kishi Malhotra about the “top” open source projects and said that the top open source project I wanted to see was a “Cloud OS”, which didn’t yet exist. I was waiting for the day when a system call made on my laptop would kernel-trap on a machine in a data center in India, without my knowing or caring which data center or which machine. Ruminating on this, I postulated that some of the early components were already there in the Google File System and the Google Cluster Architecture. Then I realized that even though those were Linux based, they were by no stretch of the imagination open source!
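The core idea of that “Cloud OS” dream, a call issued on one machine being serviced transparently by another, can be sketched in user space as a tiny remote-call shim. Everything below is invented for illustration: the framing format, the `RemoteSyscallProxy` name, and the pluggable transport are my assumptions, not anyone’s real API, and a real Cloud OS would of course do this at the kernel level rather than in Python.

```python
import pickle

# Hypothetical sketch of transparent remote-call forwarding. The wire
# format (4-byte big-endian length prefix + pickled tuple) is invented
# here purely to make the idea concrete.

def encode_call(name, args):
    """Frame a call as: 4-byte length prefix + pickled (name, args)."""
    payload = pickle.dumps((name, tuple(args)))
    return len(payload).to_bytes(4, "big") + payload

def decode_call(frame):
    """Inverse of encode_call: recover (name, args) from a frame."""
    size = int.from_bytes(frame[:4], "big")
    return pickle.loads(frame[4:4 + size])

class RemoteSyscallProxy:
    """Forward a named call to whatever node the transport points at.

    The caller neither knows nor cares which machine (or which data
    center) actually services the call -- that is the transport's job.
    """

    def __init__(self, transport):
        # transport: any callable taking a request frame and returning
        # a reply frame (e.g. a socket round trip in a real system).
        self.transport = transport

    def call(self, name, *args):
        reply = self.transport(encode_call(name, args))
        return decode_call(reply)
```

For testing, the transport can be a local loopback function; in a real deployment it would ship the frame over the network to a cluster node, which is exactly the part (scheduling, locating data, failover) that systems like the Google File System handle behind closed doors.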