by admin on June 28, 2006 06:13pm


Consider this scenario – the CIO’s office just called. They have decided to follow your recommendation and are going to use JBoss (substitute your favorite open source software package here) as their application server (substitute database server/web server/whatever server here). They have also decided to make it the standard on both your Windows servers and your Linux/NetBSD servers (substitute your favorite flavor of open source operating system here). Now they ask you – will it handle the loads on both operating systems? They don’t want any of the servers sitting idle after all the investment they have made.

You have a reputation as a “data-driven” dude, i.e. you look for empirical evidence when making decisions like this. No hand-wavy “it will work… maybe” for you. You know your workloads cold, and you know you will run test scripts to simulate those workloads.

The million-dollar questions are (drum roll, please):

  1. Will you use a standard tool or testing package (such as Grinder with JBoss) that works on BOTH Windows and Linux?
  2. Will you trust your test scripting skills and your methodologies and write your own scripts, using whatever tools make sense on Windows and Linux? (A bare-bones sketch of this approach follows the list.)
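
If you go the roll-your-own route, the script itself can stay platform-neutral. Below is a minimal sketch in Python of what such a homegrown load script might look like – the URL, thread count and request count are made-up placeholders, and a real workload would be driven by your own test scripts rather than a single GET. The same file runs unchanged on Windows and Linux, which keeps the measurement side of the comparison identical.

    #!/usr/bin/env python
    # Minimal, platform-neutral load sketch (illustrative only). The target
    # URL, thread count and request count below are hypothetical placeholders.
    import threading
    import time
    import urllib.request

    TARGET_URL = "http://appserver.example.com:8080/"   # hypothetical JBoss box
    THREADS = 10
    REQUESTS_PER_THREAD = 100

    latencies = []
    lock = threading.Lock()

    def worker():
        for _ in range(REQUESTS_PER_THREAD):
            start = time.time()
            try:
                urllib.request.urlopen(TARGET_URL, timeout=30).read()
            except Exception:
                continue                      # count successful requests only
            elapsed = time.time() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=worker) for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    if latencies:
        latencies.sort()
        print("requests: %d" % len(latencies))
        print("mean: %.3fs" % (sum(latencies) / len(latencies)))
        print("p95:  %.3fs" % latencies[int(0.95 * len(latencies))])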


Why are these questions important? The objective is to run consistent and comparable tests on each platform – and it is important to make sure that the testing process itself does not introduce any bias one way or the other.

So which approach – trusting the tools or trusting your methodology – allows for consistent and comparable tests on each platform?

Let’s examine some pros and cons of each approach:

[Pros and cons table]

Damn! The laws of physics strike again. It looks like there is no easy answer, since the observer effect will always be present: observing the software with testing software affects the software being observed, so a precise observation can’t be guaranteed.

So what do you do in such a case?

My friend (and I think he works with me too) Hank Janssen had the following to say (I will not point you again to his handsome visage like I did in my last blog, or people will start talking!):

    • If the same tools are available on both platforms, then use the same tools.
    • Stay with industry-standard and/or OSS-accepted methods of testing.
    • Have a list of ‘acceptable’ tools to pick from.
    • Publish a list of the methodologies/guidelines you are using (so that people can make their own judgments).

I am not so married to the tool as I am married to a consistent and approved process. There will be many different tools for different languages/applications/infrastructures/protocols and the like.

I wanted to stay away from testing methodologies that are so different between platforms that I would have a hard time interpreting the results. For example, using LoadRunner or WinRunner on Windows and then Grinder on Linux (in the case of Java loads) would result in an apples-to-not-so-apples comparison. Most OSS tools (JMeter and Grinder, for instance) are available on both platforms.
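
For what it is worth, one attraction of a cross-platform tool like Grinder is that the test script itself is identical on both operating systems. The sketch below is roughly what a Grinder 3 Jython script looks like – the test number, description and URL are hypothetical placeholders, and you would check the details against the documentation for the Grinder version you actually deploy.

    # Rough sketch of a Grinder 3 Jython test script; the same file is handed
    # to the Grinder agents on Windows and on Linux. Test id, description and
    # target URL are hypothetical placeholders.
    from net.grinder.script import Test
    from net.grinder.plugin.http import HTTPRequest

    frontPageTest = Test(1, "JBoss front page")
    request = frontPageTest.wrap(HTTPRequest())   # Grinder times calls made via the wrapper

    class TestRunner:
        # Grinder creates one TestRunner instance per simulated worker thread
        def __call__(self):
            request.GET("http://appserver.example.com:8080/")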

Advice from the testers at Microsoft

I spoke to a few people who do testing for a living within Microsoft, and here is some advice they had to offer:

  1. Do think about configuration parity of hardware. For instance, you may need more RAM for Windows to run the application, or a higher-capacity network card for Linux. Consider whether your goal is to establish configuration parity or to find out which OS scales better under a given configuration.
  2. If you do end up trusting the tool, define a sample case. This is a test case for which you roughly know the performance parameters on each platform. Run it under the testing tool on both platforms to make sure there are no significant differences in performance due to the tool itself (see the calibration sketch after this list). Defining a sample case may not be a trivial task, but use one that exercises the most critical parameter for your application – memory, CPU or I/O, for instance.
  3. Consider running the software being tested on its own server, without any additional testing software on that server. Provision a remote machine with the testing tool of your choice, install that tool’s agents on each server running the software under test, measure network latency and I/O latency, and run the tests simultaneously from the remote machine against each server (a bare-bones sketch of this setup also follows the list). This works well if the software being tested requires only simple parametric commands or is driven entirely from a UI; it doesn’t work very well if complex scripting has to run on each server.
  4. Do understand the effects of running tests through the entire system. There is at least one war story I can vouch for: it involved testing a web retailer’s production systems under peak-load conditions, using test scripts on client systems. Unfortunately, the test team forgot to turn off the connection from the web systems to the warehouses that carried the inventory, because someone missed it in the end-to-end testing plan! Some poor soul whose name was written into a test case got umpteen gizmos from the retailer. The system worked as advertised – which, at least for this test, was not what the test team intended!
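
To make the second point a bit more concrete, here is one way to sanity-check the tool itself, sketched in Python. The baseline figures and the tolerance are invented for illustration; the idea is simply that the tool-reported numbers for the sample case should land close to the numbers you already trust on each platform.

    # Illustrative tool-bias check for the sample case described in point 2.
    # The baseline figures and the 15% tolerance are made-up placeholders.
    BASELINE_MS = {"windows": 120.0, "linux": 110.0}   # numbers you already know
    TOLERANCE = 0.15                                   # allowable relative slack

    def tool_looks_unbiased(platform, tool_reported_ms):
        # True if the tool's measurement of the sample case is within
        # tolerance of the figure you trust for that platform.
        baseline = BASELINE_MS[platform]
        return abs(tool_reported_ms - baseline) / baseline <= TOLERANCE

    # Feed in the mean response times the tool reports on each box:
    for platform, measured in [("windows", 131.0), ("linux", 112.5)]:
        verdict = "ok" if tool_looks_unbiased(platform, measured) else "suspect"
        print("%s: %s" % (platform, verdict))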

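And for the third point, the shape of that setup is roughly this: one remote driver machine fires the same workload at both servers at the same time and records latency per target, so nothing test-related runs on the machines under test. A bare-bones sketch with hypothetical host names:

    # Bare-bones sketch of point 3: drive both servers simultaneously from one
    # remote machine, so no test software runs on the systems under test.
    # Host names and the request count are hypothetical placeholders.
    import threading
    import time
    import urllib.request

    TARGETS = {
        "windows-box": "http://win-app.example.com:8080/",
        "linux-box":   "http://lin-app.example.com:8080/",
    }
    REQUESTS = 200

    def drive(name, url, results):
        timings = []
        for _ in range(REQUESTS):
            start = time.time()
            try:
                urllib.request.urlopen(url, timeout=30).read()
            except Exception:
                continue
            timings.append(time.time() - start)
        results[name] = timings

    results = {}
    threads = [threading.Thread(target=drive, args=(name, url, results))
               for name, url in TARGETS.items()]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    for name, timings in results.items():
        if timings:
            print("%s: %d requests, mean %.3fs" %
                  (name, len(timings), sum(timings) / len(timings)))

In practice you would use the agents of whatever tool you picked rather than a hand-rolled driver, but the topology is the same.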

But enough about us – what approach do YOU take, dear readers? Please do let us know – we are looking for war stories, dispatches from the data centers, tales from the trenches, and any other feedback you have to share on testing disparate platforms!