by jcannon on July 11, 2006 06:17pm


Part 3 – Adaptation and simulation of heterogeneous environments under lab conditions

A simple question that has always perplexed me is how software and hardware OEMs across the world simulate heterogeneous environments under lab conditions. I have witnessed several different approaches, practices and stages of this adaptation, and each one of them is unique and correct in its own right. That still leaves the “big” question unanswered: how do you take a “real-life” scenario and manifest it under lab conditions? This is even more challenging because the average test lab for a medium to large organization is no match for the size and complexity of its elder sibling, the Enterprise Data Center, running its production systems, applications and operations. So why squeeze all that complexity into a smaller scale? Is there one perfect method? Of course not – it depends on what heterogeneity means to you and your business. Let’s look at why it’s necessary, and share some techniques that may be helpful.

Start with why it’s necessary to represent, if not an equivalent amount of heterogeneity in the lab, then at least a comparable one. Start with simple logic – why do we need a lab in the first place? In most cases it’s an environment we can turn to and run processes, tests and simulations which we dare not try in a production environment. The caveat here is that if we want to test a tool or an app that we’re about to roll into production, our best bet is to test it in the lab under conditions mirroring the production environment as closely as possible. It’s also a place where we can develop workarounds, fixes, documentation, implementation practices and as many supplementary support mechanisms as we’d like before we bite the bullet and push the tool or app into production. The expectation is that the results from the lab and the production rollout should bear a resemblance like that of the “Partridge Family” and hopefully not the “Manson Family”. Okay, bad joke, but you get the point.

Now on to “Tips and Tricks” to help with the process of adaptation and simulation of a lab environment that mimics your production one. Here’s what I found useful:

  1. Deployment Methods: Using the same deployment tools, techniques and methods in the lab that are already in use in production makes you aware of the “delivery mechanisms” and the path the deployment cycle will take when released
  2. Configuration Management: Deep familiarity with the configuration options of not just the delivery mechanism/s but also of the tool/s or app/s is as valuable as having that Swiss Army knife in your pocket – you just never know when you’re going to need it
  3. What Business Scale?: Never hesitate to walk out of the lab and have a conversation with the decision makers who chose the tool/app. Find out what their expectations for the application are (by now I know some of you may be cringing in your chairs, but I am dead serious on this one). This is the best way to learn whether the application should be tuned toward business priorities such as reliability, TCO, scalability, performance, high availability or whatever
  4. Manageability: My personal favorite – always have a lifeboat handy, i.e. when the fit hits the shan, will you still be able to recover the system, do a rollback, connect remotely and, most importantly, keep the service/s up and available (there’s a small sketch of this idea just after the list)
  5. Driving Efficiencies: Most IT departments have to squeeze every efficiency they can out of their budgets, and labs are a luxury when they have to deliver results to CTOs. So what’s the best way to accomplish testing, or simulation, on a budget? How does someone with no extra money support such an effort? There’s some creative resource utilization that can be implemented, such as:
    • Rotation of production hardware coming up for decommissioning and reallocating such resources to the lab
    • Making use of evaluation copies and licenses, since most lab testing scenarios only run for short periods
    • Using down-time to allocate personnel to testing efforts i.e. if there’s lag time between two projects, using that time and headcount effectively to drive testing
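
To make item 4 concrete, here’s a minimal sketch of the “lifeboat” idea: wrap a lab deployment in a health check and roll back automatically if the service doesn’t come back up. This is just an illustration in Python; the install script, package names and health-check URL are all hypothetical placeholders for whatever your actual delivery mechanism uses.

    # Minimal "lifeboat" sketch: deploy, verify the service is still up, roll back if not.
    # install.sh, the package names and the health-check URL are made-up placeholders.
    import subprocess
    import urllib.request


    def deploy(package: str) -> None:
        # Stand-in for your real delivery mechanism (packaging tool, imaging, scripts, etc.)
        subprocess.run(["./install.sh", package], check=True)


    def service_is_up(url: str, timeout: float = 10.0) -> bool:
        # The question that matters most: is the service still up and reachable?
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False


    def deploy_with_lifeboat(new_pkg: str, last_good_pkg: str, health_url: str) -> None:
        deploy(new_pkg)
        if not service_is_up(health_url):
            print("Health check failed, rolling back to the last known-good package")
            deploy(last_good_pkg)  # the lifeboat: reinstall the previous version


    if __name__ == "__main__":
        deploy_with_lifeboat("app-2.0.pkg", "app-1.9.pkg", "http://lab-app.example/health")

The point isn’t the specific script – it’s that the rollback path gets exercised in the lab as part of every deployment, not invented under pressure after something breaks in production.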


And finally, a small anecdote to help put things in perspective. In a past life, several years ago when I was still on the east coast, I worked on implementing an asset tracking tool for desktops spread throughout the environment. We tested the tool on individual desktops and did not bother running the entire scenario over the network as part of the simulation. We were told by the vendor that the tool uses less than 1% of CPU and a negligible amount of memory. After random tests, we rolled out the tool; its purpose was to run a script and send the results back across the network. However, due to ACLs in place, which we forgot to account for, and a lack of validation of packet delivery, the desktops stopped responding. This was an expensive lesson in why we should test the waters to the best possible extent before setting sail.
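In hindsight, a simple pre-flight reachability check from a pilot desktop would have flagged the problem before the rollout. Here’s a small sketch of that idea; the host name and port are made-up examples, and an ACL block typically shows up as a timed-out or refused connection.

    # Pre-flight check: can a pilot desktop actually reach the collection server
    # the agent will report to? Host and port below are hypothetical examples.
    import socket


    def can_reach(host: str, port: int, timeout: float = 5.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False


    if __name__ == "__main__":
        if can_reach("assets.example.corp", 8443):
            print("Collection server reachable, safe to proceed with the pilot")
        else:
            print("Cannot reach collection server, check ACLs/firewalls before rollout")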

Just a few thoughts, and I hope they trigger some more for everyone out there. As always, please do let me know if this has been useful and/or if you have a specific topic in mind you’d like us to write about.

-Kishi