Jeff Alexander's Weblog

Technical Evangelist - Windows Infrastructure

Building a New Hyper-V Cluster System on a Budget!


About 4 years ago Kleefy and I were frustrated at how slow it was to run all the demos we do on laptop computers.  Sure, laptops are fast enough these days and you can do some things to trick them out and make them faster.  Back in 2006, though, we decided to build our own Shuttle PCs, and to be honest they served us well for many years.  However, as demos get bigger and require more resources and memory, it was time for an upgrade to my systems.  Check out the link above for the details of the systems I ran for 4 years.  So I went to my manager with a rough costing of how much it would take to bring these systems up to date with current hardware.  I estimated around $10k, which was within the budget we had to spend.  I got the green light and it was time to go shopping.  But before I get to the components, let’s start with the goals of this project.

  1. First of all I needed a faster system that could support more memory and had a faster CPU.
  2. Secondly I needed way more storage than I currently had and needed it to be faster.
  3. And finally I needed better networking in the system to support iSCSI clustering and Live Migration.

So with these 3 things in mind it was time to go shopping.  I normally use AUSPC Market, but they didn’t have all the components I needed.  Nothing against them at all; they just didn’t have the right parts.  So I ended up buying from a local online retailer called Techbuy.  They had everything I needed and their prices were very competitive.  Essentially what I was doing was replacing all the parts in my 3 servers with new ones.  So how did it all pan out?  Well, check out the list below along with some photos of the build process.  All prices are in Aussie $$$.

Component              Brand / Model                                            Qty   Unit Cost   Total Cost
Case                   Fractal Design Define R2                                  3     $169.80     $531.45
Motherboard            Gigabyte GA-X58A-UD3R                                     3     $312.95     $967.55
CPU                    Intel Core i7 930 Quad Core                               3     $413.05     $1,273.85
Memory                 Corsair 12GB 1600MHz DDR3 Dominator Series                5     $790.00     $4,038.90
Hard Drives – System   Western Digital 450GB VelociRaptor SATA III 10,000 RPM    3     $394.45     $1,183.35
Hard Drives – Storage  Western Digital 600GB VelociRaptor SATA III 10,000 RPM    4     $440.45     $1,761.80
Network Cards          Intel Pro/1000 PT Dual Port Server Gigabit Adapter        6     $218.80     $1,349.00
Video Cards            XFX Radeon HD 5770 1GB                                    1     $250.45     $250.45
Video Cards            XFX Radeon HD 5570 1GB                                    2     $130.90     $261.80
Power Supplies         XFX 650W XXX Edition Modular Power Supply                 3     $161.90     $485.70

Total                                                                                              $12,103.85
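If you want to sanity-check the parts list yourself, a quick sketch like the one below adds up the per-line totals (note that some line totals don't exactly equal quantity × unit cost, presumably due to freight or retailer rounding, so it sums the listed totals as-is):

```python
# Sum the per-line totals from the parts table above (all amounts in AUD).
line_totals = [
    531.45,    # Cases
    967.55,    # Motherboards
    1273.85,   # CPUs
    4038.90,   # Memory
    1183.35,   # System hard drives
    1761.80,   # Storage hard drives
    1349.00,   # Network cards
    250.45,    # Radeon HD 5770
    261.80,    # Radeon HD 5570s
    485.70,    # Power supplies
]

grand_total = round(sum(line_totals), 2)
budget = 10_000

print(f"Grand total:   ${grand_total:,.2f}")      # Grand total:   $12,103.85
print(f"Over budget by ${grand_total - budget:,.2f}")
```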

So there you have it.  I went a little over budget, but I guess these things happen when you are trying to price systems! (Sorry, Sarah!)  One mistake I did make: I didn’t realize that the motherboards did not have on-board video, which meant I had to buy 2 extra video cards.  Not a problem, since I can use them for RemoteFX demos anyway, so they won’t go to waste.

Build Process

Below are some photos of the build process:

The old systems being taken apart..

The motherboard and CPU go in..

Hard Drives go in..

Pretty much done..

Cables hidden behind the motherboard…

So what did I end up with in the end?

  • I have a 2-node Windows Server 2008 R2 cluster, with each node running 24GB of RAM.  This is great for Hyper-V as I can run many more of my demo scenarios.  And with the new Intel network cards and the fast hard drives, I've found that Live Migration works really well.
  • All my storage is running on the 3rd server, which is running Windows Server 2008 R2 and has the 4 600GB VelociRaptor disks in it.  I’ve set up the 2-node cluster and enabled Cluster Shared Volumes, which works great in this scenario.
  • It’s just a much faster system that came in at a reasonable price for what I got.  I don’t carry these around anymore like I did the Shuttles; after doing that a few times it got old very fast.  These servers now sit in a proper server room next to the main datacentre at the Microsoft office in North Ryde, so cooling is not an issue.
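To give a feel for why gigabit networking matters here, this is a rough back-of-envelope sketch (my own illustrative numbers, not Jeff's measurements) of how long it takes just to stream a VM's memory over a single gigabit link during a Live Migration.  The 80% efficiency figure is an assumption, and real migrations also re-copy pages the VM dirties before the final switchover:

```python
# Back-of-envelope: time to stream a VM's RAM over one gigabit NIC.
# Assumptions (illustrative only): ~80% effective throughput on the link,
# and we ignore the re-copy passes for dirtied memory pages.

GIGABIT_BPS = 1_000_000_000   # raw line rate: 1 Gb/s
EFFICIENCY = 0.8              # assumed effective utilisation of the link
BYTES_PER_GB = 1024 ** 3

def migration_seconds(vm_memory_gb: float) -> float:
    """Seconds to copy vm_memory_gb of RAM over one gigabit link."""
    effective_bytes_per_sec = GIGABIT_BPS * EFFICIENCY / 8
    return vm_memory_gb * BYTES_PER_GB / effective_bytes_per_sec

for gb in (2, 4, 8):
    print(f"{gb} GB VM: ~{migration_seconds(gb):.0f} s")
```

Even under these generous assumptions a 4GB VM takes the better part of a minute on one link, which is why dedicating NIC ports to the Live Migration and iSCSI networks (rather than sharing one port for everything) pays off.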

So as I head to TechEd Australia 2010 this week I now have some pretty quick demo machines to work with.  I’ll be using a combination of my own hardware above and what’s onsite at the event.  I’m doing a session on Hyper-V Networking and BranchCache so I hope to see you there!

All the best folks!


Jeffa

Comments
  • Great post, Jeff. Are you using Windows Storage Server as your iSCSI target? Can you elaborate on your storage and network design?

    Josh

  • This is excellent info, Jeff! But I also have a question regarding the storage solution.

    What RAID level are you using on those 4 disks where all of the VHDs are stored? I am looking at doing a similar thing and I just don't know what RAID level would be best for this kind of setup. I'm torn between 1+0 or 5, and it would be really handy if you could provide some more information on how you have the storage server configured and what kind of performance you get out of it.

  • Hi guys,

    I've had a couple of requests to provide more details on my iSCSI solution.  I will post a second part to this post in a few days so please stay tuned!

    regards

    Jeffa
