So this post is about testing, as mentioned in Part 1 of this series:

5. Testing has a new significance (especially for mixed-workload / co-hosted consolidation); relying on manual testing alone may not be sufficient.

This is a big topic and an area often overlooked in the Windows world. As we all know, Windows applications often arrive in the data centre or computer room by unorthodox methods: systems that evolved from the desktop (or 'under the desk') are common, and systems that were 'thrown over the fence' to operations to manage are rife. It's also common for an application to be promoted from the development box straight through to production (on the same hardware). This is often the difference in approach from organisations that run mainframe and midrange systems – there the disciplines are in place (and used). MOF and ITIL can assist you; take a look at the MOF and ITIL links to understand more.
Testing, if it is carried out at all, is often done by the developer or by a small subsection of users; it seems to be rare that test tools are used (unless the application is intended for Internet-facing customers, in which case only the foolish or very brave don't test extensively). Consolidation (especially co-hosting) brings its own set of challenges, as repeat testing is often required; this is also true of virtualisation, though more from a capacity-testing point of view (which co-hosting also needs).
So what problematic scenario am I talking about? Think of it like this: for the first application you place on your consolidated platform (hopefully in a consolidation test or UAT setup), someone will need to function-test and/or load-test it – that may mean, for instance, some developers and users; let's say 10 people are needed. Nothing wrong with that, you might say, but when you come to load the next application you should test both the first application and the second, to prove a) coexistence and b) stability under load. You may be lucky and the 10 people know both applications, but your luck will run out sometime, and you could need an additional 10 people for every added service. This is a very simplistic example, but I hope it illustrates the point. The same requirement exists for virtualisation, as you may need to stress the virtual machines to prove stability of the whole system under load.

So what can you do? Well, there are many good commercial products on the market and many free stress tools (sometimes just applying CPU, memory or I/O stress can assist you, and these tools are typically free or can easily be scripted). Another cheap, if not free, method of testing is to select tests that stress all parts of an application; for example, you may be able to enter a web query that places load on the web tier, the application tier and the database concurrently (one effective query can do a lot of testing – or you could argue that one very bad query could do the same for you).
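To show how easily a basic stress can be scripted, here is a minimal sketch (not from the original post, and deliberately simple compared with the commercial tools mentioned above): it pegs every CPU core with busy-loop worker processes while holding a block of memory, for a configurable duration. The duration and memory size are illustrative values you would tune for a real run.

```python
import multiprocessing
import os
import time


def burn_cpu(seconds):
    """Busy-loop to keep one core busy for roughly `seconds`."""
    end = time.monotonic() + seconds
    count = 0
    while time.monotonic() < end:
        count += 1  # trivial work; the loop itself generates the CPU load
    return count


def hold_memory(mb, seconds):
    """Allocate roughly `mb` megabytes, hold it for `seconds`, return bytes held."""
    block = bytearray(mb * 1024 * 1024)  # bytearray is written, so pages are committed
    time.sleep(seconds)
    return len(block)


if __name__ == "__main__":
    duration = 5  # seconds of stress; raise this for a real test run
    workers = os.cpu_count() or 1
    procs = [multiprocessing.Process(target=burn_cpu, args=(duration,))
             for _ in range(workers)]
    for p in procs:
        p.start()
    hold_memory(64, duration)  # hold ~64 MB in the parent while workers burn CPU
    for p in procs:
        p.join()
    print("stress run complete")
```

This only proves the platform survives raw load; it says nothing about application correctness, so it complements rather than replaces the functional testing discussed above.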
I've often seen people try to test each architectural layer independently, when in real life you must test the 'whole'. Free web stress tools can assist you there. Developers often have tools; perhaps you can utilise theirs? Also look out for products that ship their own test tools (Microsoft Exchange and SQL Server do), or perhaps make it a selection criterion for a new application that a stress tool is available or can be integrated with your own.
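In the same spirit, a crude web stress run can be scripted in a few lines. The sketch below (my own illustration, not from the post) fires a batch of concurrent GET requests at a URL and reports successes, failures and elapsed time; the endpoint shown is hypothetical, standing in for the kind of query that touches the web, application and database tiers together.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor


def fire_requests(url, total=100, concurrency=10, timeout=10):
    """Send `total` GET requests at `url`, `concurrency` at a time.

    Returns (successes, failures, elapsed_seconds)."""
    def one_request(_):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except Exception:
            return False  # any network/HTTP error counts as a failure

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(one_request, range(total)))
    elapsed = time.monotonic() - start
    ok = sum(results)
    return ok, total - ok, elapsed


if __name__ == "__main__":
    # Hypothetical endpoint: point this at a query that exercises the
    # web, application and database tiers at once.
    ok, failed, secs = fire_requests(
        "http://testserver.example/report?year=2006", total=50, concurrency=5)
    print(f"{ok} ok, {failed} failed in {secs:.1f}s")
```

A script like this is no substitute for a proper load-testing product, but it is enough to repeat the same concurrent 'whole stack' query every time you add an application to the consolidated platform.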