by admin on June 28, 2006 06:13pm
Consider this scenario – the CIO’s office just called. They have decided to follow your recommendation and use JBoss (substitute your favorite open source software package here) as their application server (substitute database server/web server/whatever server here). They have also decided to make it the standard on both your Windows servers and your Linux/NetBSD servers (substitute your favorite flavor of open source operating system here). Now they ask you – will it handle the loads on both operating systems? They don’t want any of the servers sitting idle after all that investment they made.
You have a reputation as a “data driven” dude, i.e. you look for empirical evidence when making decisions like this. No hand-wavy “it will work …. maybe” for you. You know your workloads cold, and you know you will run test scripts to simulate those workloads.
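A workload simulation script doesn’t have to be fancy to be data driven: fire requests from a few threads, record per-request latency, and report percentiles. Here is a minimal sketch in Java; `doRequest()` is a stand-in for the real call to the server under test, and the thread and request counts are placeholders, not recommendations:

```java
import java.util.Arrays;
import java.util.Random;

// Minimal load-test sketch: N worker threads each issue R "requests"
// and record per-request latency; afterwards we report percentiles.
public class LoadSketch {
    static final int THREADS = 4, REQUESTS = 100;  // placeholder sizes
    static final Random rng = new Random(42);

    // Stand-in for the real call to the server under test.
    static void doRequest() throws InterruptedException {
        Thread.sleep(1 + rng.nextInt(5));  // simulated service time
    }

    // Nearest-rank percentile; p in (0, 100], input must be sorted.
    static long percentile(long[] sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.length) - 1;
        return sorted[Math.max(idx, 0)];
    }

    public static void main(String[] args) throws Exception {
        long[] latencies = new long[THREADS * REQUESTS];
        Thread[] workers = new Thread[THREADS];
        for (int t = 0; t < THREADS; t++) {
            final int base = t * REQUESTS;
            workers[t] = new Thread(() -> {
                for (int r = 0; r < REQUESTS; r++) {
                    long start = System.nanoTime();
                    try { doRequest(); } catch (InterruptedException ignored) { }
                    latencies[base + r] = (System.nanoTime() - start) / 1_000_000; // ms
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        Arrays.sort(latencies);
        System.out.println("median ms: " + percentile(latencies, 50));
        System.out.println("p95 ms:    " + percentile(latencies, 95));
    }
}
```

Because it uses nothing beyond the standard library, the same harness runs unchanged on Windows and on Linux/NetBSD.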
The million dollar questions are (drum roll please): Do you standardize on a single testing tool and use it on every platform? Or do you standardize on a testing methodology and pick the best tool for each platform?
Why are these questions important? The objective is to have consistent and comparable tests on each platform – and it is important to make sure that the testing process itself does not introduce any bias one way or the other.
So which approach – trusting the tools or trusting your methodology – allows for consistent and comparable tests on each platform?
Let’s examine some pros and cons of each approach:
Damn! The laws of physics strike again. There is no easy answer, because the observer effect is always present: observing software by means of testing software impacts the software being observed, so a perfectly precise observation can’t be guaranteed.
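You can’t eliminate the observer effect, but you can at least estimate what the measurement machinery itself costs by timing an empty probe. A rough sketch, assuming nothing beyond the standard library (the class and method names here are mine, not from any testing tool):

```java
// Estimate the fixed cost of the timing harness by timing nothing at all.
public class ProbeOverhead {
    static double avgOverheadNanos(int samples) {
        long total = 0;
        for (int i = 0; i < samples; i++) {
            long start = System.nanoTime();
            // (no work here -- we are measuring the probe itself)
            total += System.nanoTime() - start;
        }
        return (double) total / samples;
    }

    public static void main(String[] args) {
        // Report this alongside results so readers can judge how much
        // of each measured latency is the harness rather than the server.
        System.out.printf("avg probe overhead: %.1f ns%n",
                avgOverheadNanos(1_000_000));
    }
}
```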
So what do you do in such a case?
My friend (and I think he works with me too) Hank Janssen had the following to say (I will not point you to his handsome visage again like I did in my last blog, or people will start talking!):
I am not so married to the tool as I am to a consistent and approved process. There will be many different tools for different languages/applications/infrastructures/protocols and the like.
I wanted to stay away from testing methodologies that are so different between platforms that I would have a hard time interpreting the results – for example, using LoadRunner or WinRunner on Windows and Grinder on Linux (in the case of Java loads). That would result in an apples-to-not-so-apples comparison. Most OSS tools (JMeter and Grinder) are available on both platforms.
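One small habit that supports the consistent-process view: have the harness stamp each run with the platform it executed on, so results files from different operating systems carry their own provenance and never get mixed up. A minimal sketch (the output format is a made-up example, not any tool’s convention):

```java
// Tag each result line with the platform, so cross-OS runs stay comparable.
public class RunTag {
    static String tag() {
        return System.getProperty("os.name") + "/"
             + System.getProperty("os.arch") + "/java"
             + System.getProperty("java.version");
    }

    public static void main(String[] args) {
        // "p95_ms=..." would come from the actual measurement run.
        System.out.println("platform=" + tag());
    }
}
```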
Advice from the testers at Microsoft
I spoke to a few people who do testing for a living within Microsoft, and here is some advice they had to offer:
But enough about us, what approach do YOU take, dear readers? Please do let us know - we are looking for war stories, dispatches from the data centers, tales from the trenches and any other feedback on testing disparate platforms that you have to share!