Windows 7 is coming fast. The beta has been available for a few weeks now, and newsgroups, forums and blogs are popping up all over the place, discussing features and functionality, and providing feedback.
What about Partners?
Readiness content has already started to appear on the Partner Portal:
Training links? Check. Deployment information? Check. Sales & Marketing? Check.
There are already whitepapers, reference cards and presentations for small, midsize and enterprise customers. There are even audio presentations and marketing brochures, ensuring your business is able to start riding the wave as early as possible.
Download all the info here.
Using Virtual Service Clients and Virtual Service Providers, Microsoft’s Hyper-V creates a low-overhead environment for virtual machines that can provide a high degree of scalability on a state-of-the-art SAN.
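If the VSC/VSP terminology is new to you: the guest runs a Virtual Service Client which passes I/O requests over a shared channel (VMBus) to a Virtual Service Provider in the parent partition, which owns the real hardware – so there’s no emulated device in the hot path. Here’s a toy Python sketch of that split; the class and method names are mine for illustration, not anything from the actual Hyper-V stack:

```python
class VirtualServiceProvider:
    """Parent partition side: owns the real device and services requests."""
    def handle(self, request):
        # In real Hyper-V this would drive the physical storage or NIC stack.
        return f"completed: {request}"

class VMBusChannel:
    """Toy stand-in for the shared channel between partitions."""
    def __init__(self, provider):
        self.provider = provider
    def send(self, request):
        # Synchronous for simplicity; the real channel is asynchronous.
        return self.provider.handle(request)

class VirtualServiceClient:
    """Guest side: looks like a device driver, but just forwards requests."""
    def __init__(self, channel):
        self.channel = channel
    def read_block(self, lba):
        return self.channel.send(f"READ lba={lba}")

vsc = VirtualServiceClient(VMBusChannel(VirtualServiceProvider()))
print(vsc.read_block(2048))  # completed: READ lba=2048
```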
A colleague here at MS UK forwarded me a link to a great benchmark test involving Hyper-V, specifically targeted at Storage I/O.
Key findings and quotes include:
“In these tests, our principle concern centered on the number of IOPS that could be sustained, which provides a critical I/O health measure for a VOE (Virtual Operating Environment). Our secondary concern was the measurement of I/O throughput, which provides the best insight into SAN fabric infrastructure bottlenecks.”
“The number of IOPS sustained in all of our tests clearly indicates that a Hyper-V VOE based on 8Gbps QLogic FC SAN infrastructure is able to scale and support a high number of VMs, which will easily provide for a high consolidation ratio. Equally important, the scalability that this infrastructure provides a VM enables the hosting of the most I/O-intense applications.”
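To make the IOPS-versus-throughput distinction concrete: throughput is roughly IOPS multiplied by I/O size, so a SAN can sustain a huge number of IOPS on small blocks without ever stressing the fabric, and vice versa. A quick back-of-the-envelope sketch (the numbers here are illustrative, not taken from the report):

```python
def throughput_mbps(iops, io_size_kb):
    """Approximate throughput in MB/s given sustained IOPS and I/O size."""
    return iops * io_size_kb / 1024

# Small random I/O: lots of IOPS, modest bandwidth.
print(throughput_mbps(50_000, 4))   # 4 KB I/Os  -> ~195 MB/s
# Large sequential I/O: far fewer IOPS will saturate an 8Gbps FC link
# (roughly 800 MB/s of usable bandwidth).
print(throughput_mbps(3_200, 256))  # 256 KB I/Os -> ~800 MB/s
```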
Definitely worth a read.
Great content on the TechNet site:
I’d say the checklist is the pick of the bunch – follow that, and you shouldn’t go far wrong!
I found this via Patrick Lownds’ MVUG blog – looks like a beauty! (The book, not Patrick ;-))
Understanding Microsoft Virtualization Solutions
The above eBook is available for free if you register at http://csna01.libredigital.com/?urmvs17u33. Be patient once you’ve clicked to download – it takes a few minutes to render in the browser.
This guide will teach you about the benefits of the latest virtualization technologies and how to plan, implement, and manage virtual infrastructure solutions. The technologies covered include: Windows Server 2008 Hyper-V, System Center Virtual Machine Manager 2008, Microsoft Application Virtualization 4.5, Microsoft Enterprise Desktop Virtualization, and Microsoft Virtual Desktop Infrastructure.
I’ve just had a quick skim through the book, and it looks pretty comprehensive. It even covers areas such as Microsoft Enterprise Desktop Virtualisation and VDI, both pretty new technologies to be writing about, so it’s definitely a worthwhile read from that perspective.
The book also focuses on the importance of Core Infrastructure Optimisation, and how you can take an infrastructure from a very reactive, firefighting, basic state right up to a dynamic one, where IT is a business enabler rather than a cost centre.
Definitely worth a read in my opinion.
Patrick (of Microsoft Virtualisation User Group fame!) pinged me an email referencing a blog post he’d written over at the MVUG blog, covering his experiences (so far) with Broadcom NIC Teaming and Hyper-V. I didn’t even know Patrick was writing a blog, but having skimmed his most recent posts, it’s definitely one I’d recommend.
The post in question, around NIC Teaming, is definitely an interesting one. NIC Teaming is one of those grey areas with Hyper-V. Partners and Customers with VMware experience ask if the feature is included with Hyper-V, and are sometimes surprised to learn that it’s never been supported by Microsoft on Windows Server. It’s always been the preserve of the NIC vendors, hence Broadcom, Intel et al. are now starting to produce NIC Teaming solutions for Windows Server 2008 (and thus Hyper-V). In his post, Patrick details his experiences with NIC Teaming and Hyper-V.
Before you read his post, if you’re not familiar with NIC Teaming: it’s basically another term for Link Aggregation – multiple physical NICs teamed together to provide extra redundancy while, in many cases, also improving effective link speed. Wikipedia has a good explanation.
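If it helps, here’s a toy Python model of both benefits – aggregation and redundancy. To be clear, this is a conceptual sketch of my own, nothing to do with Broadcom’s actual teaming driver:

```python
class NicTeam:
    """Toy link-aggregation model: distributes flows, survives NIC failure."""
    def __init__(self, nics):
        # nics: dict of name -> speed in Mbps, e.g. {"nic0": 1000}
        self.nics = dict(nics)

    @property
    def bandwidth(self):
        # Aggregate speed across the healthy members.
        return sum(self.nics.values())

    def pick_nic(self, flow_id):
        # Hash-based distribution: each flow sticks to one member NIC.
        members = sorted(self.nics)
        return members[hash(flow_id) % len(members)]

    def fail(self, name):
        # Redundancy: drop the dead NIC; flows rehash onto the survivors.
        self.nics.pop(name, None)

team = NicTeam({"nic0": 1000, "nic1": 1000})
print(team.bandwidth)           # 2000 Mbps aggregate
print(team.pick_nic("vm-web"))  # the member NIC this flow is pinned to
team.fail("nic0")
print(team.bandwidth)           # 1000 Mbps -- degraded, but still up
```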
So, it’s over to Patrick’s post to get all the info.
I think that’s my longest title ever! I could have acronym’d it up I suppose, but that would have been too easy :-)
System Center Operations Manager 2007 has been out for a fair while now. It’s had its SP1 update, and it’s now rapidly approaching its R2 release, which brings even more great features and functionality, especially around managing non-Microsoft environments. System Center Virtual Machine Manager 2008, however, is still pretty fresh-faced and bushy-tailed, having only been released in October last year.
When VMM shipped, it shipped with a Management Pack for Operations Manager. For those of you not familiar, Operations Manager is Microsoft’s monitoring solution for both software and hardware. Out of the box, though, Operations Manager doesn’t know how best to monitor, say, Dell’s hardware or Exchange Server. To monitor these accurately and effectively, OpsMgr needs Management Packs, and these are built by the teams that build the actual software or hardware being monitored. So, back to our example: Dell writes the Dell Management Pack, and the Exchange team writes the Exchange MP. These MPs are then published on the web in the online catalogue. You can even create your own, as it’s an open framework.
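Since it’s an open framework, you can think of an MP as a plugin that teaches OpsMgr what “healthy” means for a given product. A hedged sketch of that idea in Python (the names here are mine for illustration, not anything from the OpsMgr SDK):

```python
class ManagementPack:
    """Vendor-authored knowledge: what to watch and what 'healthy' means."""
    def __init__(self, target, rules):
        self.target = target   # e.g. a server model or an application
        self.rules = rules     # metric name -> 'is this value healthy?' check

    def evaluate(self, metrics):
        return {name: ("healthy" if ok(metrics.get(name)) else "alert")
                for name, ok in self.rules.items()}

# The online catalogue is essentially a registry of these, each written by
# the team that knows the product best. "ExampleHardware" is made up.
catalogue = {
    "ExampleHardware": ManagementPack(
        "ExampleHardware",
        {"fan_rpm": lambda rpm: rpm is not None and rpm > 2000},
    ),
}

print(catalogue["ExampleHardware"].evaluate({"fan_rpm": 1500}))
# {'fan_rpm': 'alert'} -- the MP, not OpsMgr itself, knows 1500 RPM is bad
```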
Like I said, when VMM shipped, it shipped with an MP for OpsMgr, which enabled:
Without doubt, the coolest thing that the MP enabled was PRO (Performance and Resource Optimization), and you can see a demo of this for yourself here.
However, one of the key elements missing from that MP release was Reporting. Partners and Customers alike have been asking for it, and it’s finally been released. This update brings new reports for all the platforms we manage:
It’s a shame it took so long to come, but it’s now available to download.
Configuration Instructions are here.
Whenever I deliver a presentation to Partners and Customers, one of the requests I receive is: can I have the deck? So, through the power of my blog, and a bit of jiggery-pokery with SkyDrive, I’ve uploaded my latest deck onto the web for all to access. If you haven’t seen me present it, then chances are some of the more-diagrams-less-words slides won’t make sense (but I’m working on that :-)), but for those who I have met with, you should have a good idea as to what I’m banging on about!
Anyway, if you scroll down the right-hand side of the blog, you’ll see a permanent link to my latest deck. The link will stay put, and whenever I upload a newer deck to my SkyDrive site, you’ll be able to get it from the same place. For those of you who haven’t already scrolled down, you’re looking for this:
I thought it was quite subtle myself! It’s currently in version 1.4, but I’m working on 1.5 to include more R2 related goodness.
For those of you who can’t wait, you can download my latest deck from here.
Any questions, feel free to ping me a message.
This article was sent around internally a week or so back, and having given it a quick read, I thought it was useful to share.
The article is divided into a couple of key sections:
Each is as important as the next; however, I’d like to draw your attention to the ‘Planning for Hosts and Host Groups’ section, as it gives some interesting information on how your hosts can be distributed within an Active Directory domain, or even across Active Directory forests. The full list of supported host types is as follows:
I’m not 100% sure on how VMM handles cross-forest trusts, but this is a question I’ve asked internally and I’m waiting on a response. Either way, you’ve still got a fair amount of scope and flexibility when it comes to management across AD.
The article goes on to talk about Library Servers, Utilisation of Memory and Resources and more, so definitely worth a read if you can spare the time.
When you create a Windows Server 2008 Failover Cluster with 3 or more nodes and start running virtual machines on those nodes, what happens when one of those nodes fails?
Well, as you’d expect, the VMs on the now-down physical node restart on another available node in the cluster (providing you haven’t been silly and used up all the resources on the other nodes already!). For me, the question is: is there any logic behind where those VMs restart when there are two or more nodes still available?
This KB article goes some way to answering that question: “Failover behaviour on clusters of three or more nodes”.
There are 4 possible scenarios:
For those of you who haven’t configured clustering in 2008: basically, you can right-click a VM, open its properties, and choose a preferred owner – the node on which you want the VM to primarily reside. Should that node fail, the VM won’t sulk and say “you’ve taken away my preferred node so I won’t restart anywhere else” – it’ll simply restart on another available node. You can also specify failback, which means that when the preferred node comes back online, the VM will migrate back to its favourite home.
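Here’s a rough Python model of the preferred owner and failback behaviour just described – my own simplification, not the actual cluster service logic:

```python
def place_vm(preferred_node, up_nodes):
    """Restart on the preferred node if it's alive, else any surviving node."""
    if preferred_node in up_nodes:
        return preferred_node
    return sorted(up_nodes)[0]  # real clusters use more nuanced selection

up_nodes = {"node1", "node2", "node3"}
preferred = "node1"

up_nodes.discard("node1")             # node1 fails...
host = place_vm(preferred, up_nodes)
print(host)                           # node2 -- no sulking, VM restarts

up_nodes.add("node1")                 # node1 comes back online
if preferred in up_nodes:             # failback enabled: migrate home
    host = preferred
print(host)                           # node1
```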
The KB document details each of the 4 scenarios listed above. It’s not going to be to everyone’s taste, but some people may find it useful.
Here’s the link again: “Failover behaviour on clusters of three or more nodes”