by jcannon on July 11, 2006 06:17pm
Part 3 – Adaptation and Simulation of Heterogeneous Environments Under Lab Conditions
A simple question that has always perplexed me is how software and hardware OEMs across the world simulate heterogeneous environments under lab conditions. I have witnessed several different approaches, practices and stages of this adaptation, and each one of them is unique and correct in its own right. That leaves the "big" question unanswered: how do you take a "real-life" scenario and manifest it under lab conditions? This is all the more challenging because the average test lab for a medium to large organization is no match for the size and complexity of its elder sibling, the Enterprise Data Center, running its production systems, applications and operations. So why squeeze all that complexity into a smaller scale? Is there one perfect method? Of course not; it depends on what heterogeneity means to you and your business. Let's look at why it's necessary, and also share some techniques that may be helpful.
Let's start with why it's necessary to represent, if not an equivalent amount of heterogeneity within the lab, then at least a comparable one. Start with simple logic: why do we need a lab in the first place? In most cases it's an environment we can turn to for running processes, tests and simulations that we dare not try in a production environment. The caveat, however, is that if we want to test a tool or an app that we're about to roll into production, our best bet is to test it in a lab whose conditions mirror the production environment as closely as possible. The lab is also the place where we can develop workarounds, fixes, documentation, implementation practices and as many supplementary support mechanisms as we'd like before we bite the bullet and push the tool or app into production. The expectation we keep in mind is that the lab results and the production rollout should bear a resemblance like that of the "Partridge Family" and hopefully not the "Manson Family". Okay, bad joke, but you get the point.
Now on to “Tips and Tricks” to help with the process of adaptation and simulation of a lab environment that mimics your production one. Here’s what I found useful:
And finally, a small anecdote to help put things in perspective. In a past life, several years ago when I was still on the east coast, I worked on implementing an asset tracking tool for desktops spread throughout the environment. We tested the tool on individual desktops but did not run the entire scenario with network connectivity across the simulation. The vendor told us the tool used less than 1% of CPU and a negligible amount of memory. After a few random tests, we rolled out the tool, whose purpose was to run a script and send the results back across the network. However, due to ACLs we forgot to account for, and the lack of any validation of packet delivery, the desktops stopped responding. It was an expensive lesson in why we should test the waters to the best possible extent before setting sail.
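The moral of that story can be reduced to a pre-flight check: before an agent tries to phone results home across the network, verify that the path is actually open, so a blocking ACL shows up as a clean, logged failure rather than a hung desktop. Here's a minimal sketch of such a check; the host, port and timeout values are purely hypothetical placeholders, not details from the rollout described above:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds.

    A rollout script can call this before attempting to send results
    back, so an ACL that silently drops traffic surfaces as a quick,
    explicit failure instead of an indefinite hang.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Hypothetical collection server; substitute your own endpoint.
    if can_reach("collector.example.com", 443):
        print("path open -- safe to send results")
    else:
        print("path blocked -- abort and log, do not hang")
```

The key design point is the timeout: without one, a dropped packet means the client waits on the OS default (often minutes), which is exactly the failure mode that froze those desktops.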
Just a few thoughts, and I hope they trigger some more for everyone out there. As always, please do let me know if this has been useful and/or if you have a specific topic in mind you'd like us to write about.