by admin on May 08, 2006 08:51pm

Feedback loops are critical.  One of my favorite experiences working with open source software came in 1999, at the company I was working for at the time.  We used Apache and mod_perl heavily, and we also did some interesting proxy/caching work with mod_proxy.  Since we were one of very few high-volume web sites doing this type of work with Apache and mod_proxy back then, we found a variety of issues.  One was a bug (in an unusual code path) that allowed a user’s session ID to be cached in the Apache server and then served to a different user.  Unintentional sharing of sessions on an ecommerce site is typically not a good thing.  So we wrote a very simple, short bit of code to fix it and submitted it to the mod_proxy maintainer.  It was accepted, we got our first ‘+1’ open source commit, and we felt like minor champions that day.

However, the developer discussion email list soon started percolating with complaints that the change broke a few other Apache+mod_proxy configurations on platforms other than the ones I had been testing and running on (we were largely Linux based).  Within minutes we saw emails on the list, and some sent directly to me, saying the patch busted their Apache configurations on Solaris, HP-UX, or BeOS, or whatever else it was I wasn’t testing on.  I was a little surprised, but also amazed at the immediacy of the feedback.  Then I had a sinking feeling: what else might I have broken that I hadn’t yet received feedback on?  Knowing that I was not that good a programmer, worry led to fret and fret led to genuine stress.  In discussions with the package maintainer, we agreed to back the patch out.  Although I was a little disappointed, the episode made me realize the power of the community feedback loop, but also the ad hoc nature of how open source software was tested: basically, by other developers on whatever they happened to be running.
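To illustrate the bug class (this is a hypothetical sketch in Python, not the actual mod_proxy code or its fix): a shared proxy cache has to refuse to store any response that carries per-user state, such as a session cookie, otherwise one user’s session can be replayed to another. The function name and rules below are my own illustration.

```python
def cacheable_by_shared_proxy(status, headers):
    """Return True only if a shared cache may safely store this response.

    Hypothetical decision logic: refuse anything marked private or
    no-store, and anything that sets per-user state via Set-Cookie.
    """
    h = {k.lower(): v for k, v in headers.items()}
    cc = h.get("cache-control", "").lower()
    if "private" in cc or "no-store" in cc:
        return False  # explicitly uncacheable for shared caches
    if "set-cookie" in h:
        return False  # response hands out per-user session state
    return status == 200  # only cache plain successful responses

# A response that assigns a session ID must never be shared between users:
login_resp = {"Content-Type": "text/html",
              "Set-Cookie": "SESSIONID=abc123; Path=/"}
print(cacheable_by_shared_proxy(200, login_resp))                     # False
print(cacheable_by_shared_proxy(200, {"Content-Type": "text/html"}))  # True
```

The unusual code path in the real bug amounted to skipping a check like this, so the cached copy of one user’s response, session ID and all, was handed to the next requester.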

In some ways this is good and helps drive the Darwinian nature of the bigger OSS projects.  In other ways it is scary, because it is a really inconsistent testing model; Andrew Morton recently discussed this very issue with respect to Linux kernel testing.  So how do you tell when the community feedback loop is significant enough (in community size, developer talent, frequency, methodology, tools, etc.) to represent the critical mass at which your OSS project is used and vetted broadly enough to give some satisfactory degree of testing coverage?  I have some ideas, but I want to open this one up for discussion: what tactics, strategies, or even instincts do you use to assess the quality of an OSS project?*


* Also, I’m quite familiar with the various OSS maturity model projects out there.  However, I’m skeptical, as most of the ones I’ve seen are driven by consultants who use them as tools to sell you a service for assessing a given piece of OSS.  That’s fine, but it isn’t the question I’m asking (consultants have been doing this for years).  The Carnegie Mellon West-led OpenBRR project does look interesting, but I know many of you have your own ways of doing this, so in short, I’m interested in your experiences.