There are some things we need to make clear to those who have participated in the IT Virtualization Boot Camps in Mississauga, Vancouver, and Ottawa, or who will be participating in the upcoming sessions in Montreal, Calgary, Edmonton (registration link coming soon), and Saskatoon.

When we developed the IT Virtualization Boot Camps we had several discussions about what hardware we should use to deliver the sessions. Ideally we wanted server-grade hardware, but we couldn’t get anyone to donate it… and frankly the idea of carrying a populated half-rack around did not appeal to me or Mitch. We briefly discussed building the environment in a remote datacentre but decided against it because of potential Internet connectivity issues. We ended up building the environment on laptops. In fact, we built a special case to ship the 20 computers we use around the country (as well as a second one for the machines we use to deliver the sessions). It is not an ideal solution, but it allows us to do everything we want to do when we land in a city.

After the first couple of IT Virtualization Boot Camps, both Mitch and I started getting questions from attendees that we had not expected… requests for support on the most ridiculous scenarios, to which we would usually respond, ‘Why would you ever want to do that in a production environment?’ The answer kept coming back: ‘Well, isn’t that how you told us to do it?’ Of course it wasn’t, but as we both thought about it, we began to understand where some of the miscommunication came from. Based on that, Mitch compiled a list of what NOT to do in your production environments!

1. Your laptop is NOT a server!

2. Your desktop is NOT a server!

I have met people over the years – especially in the SMB space – who feel that because a computer is based on x86 hardware and the specs are similar, they can run their production servers on any hardware. This is WRONG! Just as there is a difference between corporate-grade and consumer-grade hardware, servers should only be run on server-grade hardware – whether you prefer HP, Dell, or Intel OEM machines.

3. You should have multiple domain controllers!

4. If you have only ONE domain controller, and it is virtualized, there are risks in joining the virtualization host to that domain: the host depends on the domain controller to authenticate, but the domain controller cannot come online until the host is up, so a cold start can leave you in a chicken-and-egg situation. I am not saying that it will not work – it will – as long as you are careful about it. Remember, do it carelessly at your peril!

5. When using a Storage Area Network (SAN), which is highly recommended for virtualization environments, use a proper physical SAN device. Trying to do things ‘on the cheap’ with software SAN solutions may work… but use them as a last resort. Remember, they will not have the flexibility or power of a physical SAN, nor the management tools.

6. If you do decide to use a Software SAN (such as Microsoft iSCSI Software Target 3.3), DO NOT UNDER ANY CIRCUMSTANCES BUILD IT IN A VIRTUAL MACHINE.

To ensure that the volume is not shared, a software SAN creates a fixed-size VHD for each LUN. If you create a 100GB LUN (Logical Unit Number), a 100GB VHD is created on the volume. Running that VHD inside another VHD not only slows things down, it also has the potential to… well, make things go bad.
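To put some rough numbers on that pre-allocation cost, here is a minimal sketch (not anything from our labs; the drive letter and sizes are hypothetical) of checking that the physical host volume can actually hold a fixed-size LUN before you create it:

```python
"""Minimal sketch: a fixed-size LUN backed by a VHD consumes its full capacity
up front on the host volume. The path and sizes below are hypothetical examples."""
import shutil

GiB = 1024 ** 3

def can_provision_fixed_lun(host_volume: str, lun_size_gib: int, headroom_gib: int = 10) -> bool:
    """Return True if the host volume can hold the fully pre-allocated LUN file.

    A fixed-size VHD is allocated at creation time, so a 100 GiB LUN costs
    100 GiB immediately -- before a single guest byte is written.
    """
    free_gib = shutil.disk_usage(host_volume).free / GiB
    needed_gib = lun_size_gib + headroom_gib  # leave room for the host OS and logs
    print(f"Free on {host_volume}: {free_gib:.1f} GiB, needed: {needed_gib} GiB")
    return free_gib >= needed_gib

if __name__ == "__main__":
    # Hypothetical example: a 100 GiB iSCSI LUN on the D: volume of the target host.
    if not can_provision_fixed_lun("D:\\", 100):
        print("Not enough space to pre-allocate the LUN on physical storage.")
```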

7. Don’t (on a daily basis… or EVER!) turn your Hyper-V hosts off, disconnect them and all of your networking components, put them into a roller-board suitcase, and travel with them. Your servers should only move if your company sells your building and moves to a new one. Otherwise they should stay put and always stay on! In fact, there should be careful planning for UPS requirements and generators in the event of power outages. Remember… when I am finished at your site at the end of the day… I ‘destroy’ the demo environment and rebuild it before going to my next session!
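On the UPS planning side, a back-of-the-envelope runtime estimate is a good starting point. The sketch below uses made-up battery and load figures and a simple linear approximation – your vendor’s sizing tools will be more accurate:

```python
"""Back-of-the-envelope UPS runtime estimate (illustrative only; the battery
and load figures below are made-up examples, not recommendations)."""

def estimated_runtime_minutes(battery_voltage: float, battery_amp_hours: float,
                              inverter_efficiency: float, load_watts: float) -> float:
    """Rough runtime: usable battery energy (Wh) divided by the load (W), in minutes.

    Real-world runtime at higher loads is usually shorter than this linear estimate.
    """
    usable_wh = battery_voltage * battery_amp_hours * inverter_efficiency
    return usable_wh / load_watts * 60

if __name__ == "__main__":
    # Hypothetical: 24 V / 18 Ah battery pack, ~90% inverter efficiency, 600 W of hosts.
    minutes = estimated_runtime_minutes(24, 18, 0.9, 600)
    print(f"Approximate runtime on battery: {minutes:.0f} minutes")
```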

8. YOU NEED MORE THAN ONE NETWORK CARD RUNNING ON A CHEAP D-LINK SWITCH TO MAKE YOUR VIRTUALIZATION ENVIRONMENT WORK!!! This is not a commentary on D-Link hardware… for home and SMBs they probably work pretty well (I use them for some things). When planning the network architecture of your virtualization environment you should do some serious planning around networking requirements: how many NICs for production, how many for iSCSI, how many for clustering, and whether your Production vNetwork will be shared with your Management vNetwork. The answer to all of these questions depends on your requirements… but it is ALWAYS more than one. Remember: More NICs=More Better!
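If it helps to make that planning concrete, here is a tiny sketch of the kind of NIC-count arithmetic we mean. The role names and sharing rules are assumptions for the example, not a sizing guide:

```python
"""Minimal NIC-planning sketch (illustrative; the traffic roles and sharing
rules below are assumptions for the example, not a sizing guide)."""

# Hypothetical traffic roles on a clustered Hyper-V host.
ROLES = ("management", "production_vm_traffic", "iscsi_storage", "cluster_and_live_migration")

def minimum_nics(shared_roles: frozenset[str] = frozenset()) -> int:
    """Count NICs needed when some roles are (reluctantly) allowed to share one NIC."""
    dedicated = [r for r in ROLES if r not in shared_roles]
    # All shared roles collapse onto a single NIC; each remaining role gets its own.
    return len(dedicated) + (1 if shared_roles else 0)

if __name__ == "__main__":
    print("Fully separated:", minimum_nics())  # 4 NICs
    print("Management shared with production:",
          minimum_nics(frozenset({"management", "production_vm_traffic"})))  # 3 NICs
```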

9. Your iSCSI (Storage) network should not be on the same wire as your Production network, and if it is out of necessity then you should at the very least implement vLAN tags to segregate the traffic. Remember, the only protection built into iSCSI itself (and few people seem to enable it…) is CHAP authentication – it proves who is connecting, but it does not encrypt the traffic, so it is not very good on its own.
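For those who have not looked at how CHAP actually works, here is a short sketch of the challenge-response defined in RFC 1994 (the MD5 variant iSCSI uses). The identifier, secret, and challenge below are made up – the point is that CHAP proves identity but never touches the payload:

```python
"""Sketch of the CHAP challenge-response used by iSCSI (RFC 1994, MD5 variant).
Illustration only; the identifier, secret, and challenge are made up."""
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response = MD5(identifier || shared secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

if __name__ == "__main__":
    secret = b"example-shared-secret"   # hypothetical; known to initiator and target
    identifier = 1
    challenge = os.urandom(16)          # sent in the clear by the target

    # The initiator proves it knows the secret without ever sending it...
    response = chap_response(identifier, secret, challenge)
    print("Response:", response.hex())
    # ...but nothing here encrypts the SCSI payload itself: CHAP is authentication,
    # not encryption, which is why storage traffic should live on its own wire
    # or at least its own vLAN.
```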

10. YOUR LAPTOP AND DESKTOP ARE NOT SERVERS! Of course this is the same as Points 1 & 2, but it is an important enough message that it warrants repeating.

11. VM Snapshots are great for labs and testing, but they are not recommended for your production environment, and they are NEVER a long-term solution. In fact this is STRONGLY discouraged by Microsoft, VMware, AND SWMI Consulting Group alike. They should be used in production sparingly and carefully, and only with very careful planning and monitoring. Remember, when you delete a snapshot… NOTHING HAPPENS right away. The VHD and AVHD files only merge when you shut down the virtual machine, and that merge can take a long time!
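One practical habit that helps: periodically scan your VM storage for differencing disks that have been hanging around too long. The sketch below is illustrative only – the storage path and the one-week threshold are hypothetical:

```python
"""Minimal sketch: flag long-lived Hyper-V snapshot differencing disks (.avhd)
before they quietly become a 'long-term solution'. The path and age threshold
are hypothetical examples."""
from datetime import datetime, timedelta
from pathlib import Path

def stale_snapshots(vm_storage_root: str, max_age_days: int = 7) -> list[Path]:
    """Return .avhd files older than the threshold under the VM storage folder."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    stale = []
    for avhd in Path(vm_storage_root).rglob("*.avhd"):
        modified = datetime.fromtimestamp(avhd.stat().st_mtime)
        if modified < cutoff:
            stale.append(avhd)
    return stale

if __name__ == "__main__":
    # Hypothetical storage path for the VMs on this host.
    for path in stale_snapshots(r"D:\Hyper-V\Virtual Hard Disks", max_age_days=7):
        print(f"Snapshot chain older than a week: {path}")
```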

12. Breaking any of these rules in a production environment is not just a bad idea; it could well result in an RGE (Resume Generating Event). In other words, some of these mistakes can be serious enough for you to lose your job, lose customers, and possibly even get you sued. Follow the best practices, though, and you should be fine!

PLEASE take away these lessons as well as the ones we conveyed to you in the IT Virtualization Boot Camp.