by jcannon on July 18, 2006 02:15pm
There is a buzzword floating around out there – “business readiness”. It seems everyone (including people here at Microsoft) is trying to capture something important to organizations and people responsible for selecting, deploying, and maintaining software for businesses. What does it really mean, though?
Does it mean that a software package, distribution or application meets a benchmark? Does it mean that it is supportable without getting the Big Three consulting companies involved? Does it mean that all its functionality has been tested using regression test cases? Does it mean performance and scalability of the software meets needs? Does it mean that the software will be kept alive into the future by a vibrant community?
In my opinion it means all of the above.
So, what is the problem?
The problem is that business readiness is in the eye of the beholder! (Definition of beholder – the dude who happens to be holding the software when the music stops!)
I think this is a complex problem for two reasons.
I will concentrate on the first – how do you objectively measure business readiness? – and suggest a way to look at it. This is not a recipe, just a few thoughts on what we should pay attention to. Hopefully you can dive into the suggested links and find material that helps you evaluate the business readiness of software you are considering.
There are many levels at which software must be evaluated – I assume here that the functionality of the software is not the issue. Of course this is a big assumption, but evaluating software “features” is a better-understood art than evaluating the non-functional aspects of software. (There is even a term, “non-functional requirements”, used when writing requirements and specifications – I never quite got my head around how something that didn’t function could be a requirement!)
What is the state of the art? This is a very hard question to answer. For any piece of software, the best most people can do is compare it to its competitors in the marketplace. Most organizations that use open source don’t have the luxury of having the commercial software to compare against; they have to rely on word of mouth or other such imperfect evaluations. Even for most commercial software, it is hard to get a good grasp of how it compares with other software.
One such organization is the ISBSG (International Software Benchmarking Standards Group), a non-commercial body that collects data about software projects and quality. This data is submitted voluntarily by software organizations all across the world, in many different areas of software. The software for which such data is submitted is largely proprietary and commercial.
A good use of ISBSG data would be to compare the defect density of an open source project against the benchmark for that kind of application in the ISBSG data. This would serve as an indicator of the quality of the open source software.
Other available data includes “cost per function point” for a project – this can help you evaluate whether the cost of the project/product to your organization is close to the “standard” price for good-quality projects in the chosen application area.
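As a rough sketch of what this kind of benchmark comparison looks like, here is a minimal calculation. All of the numbers – the project figures and the benchmark values – are placeholders I made up for illustration, not actual ISBSG data.

```python
# Sketch: compare a project's defect density and cost per function point
# against benchmark values for its application domain.
# All figures below are hypothetical, NOT real ISBSG numbers.

def defect_density(defects: int, function_points: int) -> float:
    """Delivered defects per function point."""
    return defects / function_points

def cost_per_fp(total_cost: float, function_points: int) -> float:
    """Total project cost divided by delivered function points."""
    return total_cost / function_points

# Hypothetical project under evaluation
project = {"defects": 120, "function_points": 800, "cost": 400_000.0}

# Placeholder benchmarks for the application domain
benchmark = {"defect_density": 0.20, "cost_per_fp": 550.0}

dd = defect_density(project["defects"], project["function_points"])
cpfp = cost_per_fp(project["cost"], project["function_points"])

print(f"defect density: {dd:.3f} per FP (benchmark {benchmark['defect_density']:.3f})")
print(f"cost per FP:    ${cpfp:.2f} (benchmark ${benchmark['cost_per_fp']:.2f})")
```

If both numbers come in at or below the benchmark, that is one objective data point in favor of the software being “business ready” – though, as argued above, it is only one of several.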
Evaluating the software

Once the gold standard is known, other evaluation criteria can be applied to the software at hand. The gold standard provides a quantitative upper bound in terms of number of defects and cost. But IT departments do not run on cost alone….
For open source software there are a number of evaluation benchmarks/certifications becoming available. However, the criteria used to evaluate open source don’t exist in a vacuum – they are based on hard-earned lessons in software development in general. I think these criteria apply to all software, whether open source or commercial.
Some of the standards bodies out there include:
There is nothing stopping you from considering criteria from each of those models to evaluate the “business readiness” of the software you are concerned with. I suspect that any good model will show comparable results, or the discordant models will fall by the wayside!
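Mixing criteria from several models can be as simple as a weighted scorecard. The sketch below shows the idea; the criteria, weights, and 1–5 scores are all invented for illustration and would come from whichever models you actually adopt.

```python
# Sketch: combine criteria drawn from several maturity models into one
# weighted "business readiness" score. Criteria, weights, and scores
# are hypothetical examples, not taken from any specific model.

criteria = {
    # criterion: (weight, score on a 1-5 scale)
    "community activity": (0.25, 4),
    "documentation":      (0.15, 3),
    "commercial support": (0.20, 2),
    "defect history":     (0.25, 4),
    "release cadence":    (0.15, 5),
}

total_weight = sum(w for w, _ in criteria.values())
readiness = sum(w * s for w, s in criteria.values()) / total_weight

print(f"business readiness score: {readiness:.2f} / 5")
```

The output is only as good as the weights, of course – which is exactly the “eye of the beholder” problem again, made explicit and arguable.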
Show me the money

In their “Expert Letter”, Capgemini – developers of the OMMM model – try to make the point (somewhat unconvincingly, in my opinion) that because commercial software is developed differently from open source, it has to be evaluated differently.
In my opinion, it’s all about the value the software provides. If that value can be distilled down to dollars, that may be the best way of convincing people.
Khaled El Emam has a cool ROI process that starts with software metrics such as the number of bugs and ends up with a dollar figure for how much a software product/project will cost its users in terms of “cha-ching”. Maybe every product needs to be put through this “business readiness” measurement!
I am now thinking about visualizing the business readiness using some cool graphic tools – “be the software, be the soooooooftware” (apologies to “Caddyshack”!)