I gave a presentation to our MVPs yesterday on how the Exchange team builds software. Assuming the participants aren't just stroking my ego, it went pretty well. Customers are often surprised that their pet DCR (design change request) isn't implemented in the same release in which they requested it; I think there's a perception that A) there's a lot of time in the release cycle to write new code, and B) the smaller a change is, the less risky it is ("It's just changing one default setting!").

In a development cycle, a relatively small portion of the time is spent writing new code - the majority is spent refining that code, making sure it interoperates with other new and existing code, and finding and fixing bugs, over and over. For example, in a six-month milestone, only six weeks might be spent writing new code. As in many other industries, there are three main variables in software development: Features, Quality, and Time. F + Q = T... and you can only control two of them. If you add features and maintain the same level of quality, it's going to take more time. If you add features and hold the same ship date, quality will suffer.

During the presentation, I parroted a line I found on Joel Spolsky's site last week: "If we add 8 more women to the project, can we have the baby in 1 month?" It's a more vivid way of expressing the "too many cooks" aphorism and how it also applies to software development - The Mythical Man-Month and all that. Every time you add a developer, you add code... and that adds risk. Whether or not that risk is acceptable depends on many things:

  • The skill of the developer as well as the other developer(s) performing code reviews
  • The nature of the change being made and how many code paths it affects - or whether it affects a code path that's exercised many, many times in normal operation
  • The testing necessary to ensure that the change is a valid one and didn't introduce any regressions
  • The time necessary to perform those tests - and not only for this change, but if new tests need to be introduced, they will need to be run several more times through the testing process before the product is finished
  • The phase of the software project you're in - it's much easier to make changes in the beginning when you know you have more time to test them and ferret out any problems
And it's not just risk - there's additional work that needs to happen after the fix is made. New tests may need to be written and then run manually or automated repeatedly. String changes may trigger re-localization (for Exchange server-side UI, that's 9 server languages - but for Outlook Web Access it's over 25 UI languages, and a different person may be in charge of localizing that string for each language). A build may need to be re-deployed in a massive lab environment to get scalability coverage on the fix. And so on.
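The man-month quip above is often illustrated with simple combinatorics (the observation goes back to Brooks): with n people on a project, there are n(n-1)/2 pairwise communication channels, so coordination overhead grows quadratically while the work added grows only linearly. A minimal sketch of that arithmetic:

```python
# Pairwise communication paths among n developers: n * (n - 1) / 2.
# Each path is a place where a misunderstanding (and a bug) can creep in.
def communication_paths(n: int) -> int:
    return n * (n - 1) // 2

for team_size in (2, 4, 8, 16):
    print(team_size, communication_paths(team_size))
# 2 -> 1, 4 -> 6, 8 -> 28, 16 -> 120
```

Doubling the team from 8 to 16 people more than quadruples the number of channels - which is one way to see why adding people late rarely pulls a date in.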

There's also a perception that there is such a thing as a "minor change". "It's just a string change, why couldn't you just add it?" I am reminded of a fix we made in the Exchange 2000 SP1 timeframe: we were checking for a condition, and added a string to Outlook Web Access if that condition was true. It happened late in the cycle, but we thought it was a low-risk change, and it was an important scenario that we wanted to address before releasing SP1. We tested it and checked it in, but shortly thereafter a tester uncovered a regression caused by that seemingly minor fix - if that condition was true and another, separate condition was also true, we ended up duplicating a different string in the UI, which was really ugly for the end user.

Even "minor" changes, if there is such a thing, have risk. From time to time, we'll get a bug that can be fixed by a one-character change - but that one character could be really important... :-)
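To make the one-character point concrete - this is a made-up illustration, not the actual Exchange bug - consider a boundary check where `<` and `<=` differ by a single character:

```python
# Hypothetical example: a one-character bug in a range check.
# The function is meant to test whether x falls in the closed range [low, high].
def in_closed_range_buggy(x, low, high):
    return low <= x and x < high   # `<` silently excludes `high` itself

def in_closed_range_fixed(x, low, high):
    return low <= x and x <= high  # one extra character: correct

print(in_closed_range_buggy(10, 0, 10))   # False - the regression
print(in_closed_range_fixed(10, 0, 10))   # True
```

A diff of this fix is literally one character, yet it changes behavior for every caller that passes the boundary value - which is exactly why "it's a one-line fix" tells you nothing about the testing it needs.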