I’ve been a big fan of NetApp technologies for ages, and I’ve worked closely with people like Steve Winfield and Pete Mason to produce a number of videos showcasing some of the collaborative work that’s gone on between Microsoft and NetApp, resulting in products like SnapManager for Hyper-V, SnapDrive 6.2 and more. We’ve got some fantastic joint wins on the platform now too, at both small and large customers, so it’s all good from that perspective.
I’m currently building out my team’s internal demo infrastructure, which consists of one Dell T605 running Hyper-V R2, with a number of System Center technologies virtualised on top, along with a cluster of two Dell R710s, hooked up to a NetApp FAS3050c. Now, this FAS3050c isn’t the latest model, and it doesn’t have the most capacity in the world (my DS14 disk shelf gives me around 570GB of usable space), but then it was kindly donated to me by NetApp, who were replacing some of their older kit with newer kit for our Microsoft Technology Center in Reading, UK. The great thing for me is, I can still run the latest version of Data ONTAP, it’ll work with the latest and greatest versions of SnapDrive and SnapManager for Hyper-V, and it still gives me all the features I need, like snapshotting, thin provisioning, and best of all, deduplication. I’ll be honest with you right now: I love dedupe. I think it’s fantastically clever and streamlined, and because it operates at the block level, rather than the file level, it’ll even dedupe stuff that you’d think, on the surface, has no chance of being deduped. Crazy stuff. Let me explain more.
Firstly, for those of you not sure what deduplication with NetApp is, and how it works, there’s a great explanation over at the Dr DeDupe blog.
As I said, my cluster environment is 2 Nodes, and to that cluster, I’m presenting 4 LUNs of storage, which in my NetApp environment, are in 4 separate Volumes. You don’t have to do it like this, and who knows, maybe I’ll change it in the future, but right now, this is how it is:
As you can see, I've got a dedicated LUN for my witness disk (I’m using Node and Disk Majority for my 2-node cluster), and 3 LUNs presented to the cluster, which have been selected as Cluster Shared Volumes. They aren’t huge: 100GB each for two of them, and a 25GB CSV that will hold the swap files of my key VMs (each host only has 12GB RAM, so 25GB for swap VHDs is fine!). You’ll see from the image above that, currently, I’m using around 51% of my CSV2. It’s currently got a 40GB (ish) fixed VHD with WS2008 R2 inside, but at the same time, CSV2 also has another, dynamic VHD, with Windows 7 x86 inside it, currently expanded to around 8GB. Total consumption of that CSV is 51GB:
So, that means I’ll lose 51GB on my SAN, right? Wrong! We’re actually using a grand total of 17.5GB!
If we go over to NetApp System Manager, and take a look at this particular volume, you can see for yourself:
Just think about this for a minute. Due to the fact that this is block-level deduplication, we can look inside the contents of the VHD files etc, and see where the blocks match, and deduplicate them, so in this case, we’ve saved a grand total of 37.62GB, which amounts to 60%. Obviously Windows still thinks it’s using 51GB, even though, under the covers, the SAN hasn’t lost that space. This is where Thin Provisioning starts to help, as you can make Windows think it has more storage available to it.
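To put rough numbers on the idea, the savings maths is simple: the space saved is the logical data Windows thinks it’s written minus what the volume physically consumes on the SAN, and the percentage is that saving over the logical total. A quick back-of-envelope sketch, using the approximate CSV2 figures above (note that System Manager’s own numbers, 37.62GB and 60%, are computed against the volume’s total logical data including metadata, so they won’t match this rough calculation exactly):

```python
def dedupe_savings(logical_gb, physical_gb):
    """Return (saved_gb, saved_pct) for a deduplicated volume.

    logical_gb  - what Windows thinks it's consuming (sum of file sizes)
    physical_gb - what the volume actually consumes on the SAN
    """
    saved = logical_gb - physical_gb
    return saved, round(saved / logical_gb * 100, 1)

# Roughly the CSV2 figures from above: ~51GB logical, ~17.5GB physical
saved, pct = dedupe_savings(51.0, 17.5)
print(f"Saved {saved}GB ({pct}%)")
```

The same formula explains the other volumes discussed below: the witness disk’s tiny 10MB saving still works out at a healthy percentage because the logical consumption was only 50MB to begin with.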
Deduplication hasn’t just been let loose on my CSVs. Oh no. I’ve used it on the witness disk, where, even though the whole volume is only 1GB, and consumption was 50MB for the quorum information, deduplication still managed to save me 10MB, which is 20%. What about my other savings? Well, on my SCVMM Library, where I’m storing a couple of VHDs but also some ISO files, I’ve saved a total of 15%, and on my actual backup store, being used by Data Protection Manager 2010 to protect Hyper-V and SQL so far, I’m saving just under 39GB, which equates to 58%. These savings are real, and are enabling me to get even greater levels of consolidation on my SAN than I would have normally. Brilliant stuff, NetApp.
Now I just need to get ApplianceWatch PRO working… :-)
For those of you using System Center Operations Manager to monitor your environment, you’ll understand the concept of management packs, and the value they bring. Fundamentally, OpsMgr wouldn’t be the product it is today without the management pack framework. These management packs contain the knowledge to monitor, to a granular level, workloads like Exchange, SQL, and SharePoint, but also non-Microsoft applications from Partners like Citrix, F5, Brocade, Dell, HP, NetApp and more. Without the packs, who would we rely on to configure the monitoring elements? We couldn’t rely on the OpsMgr team, as they don’t have knowledge of every monitorable technology in the world! We couldn’t rely on individual IT admins within organisations, as this would be complex and time-consuming for the individual involved, and without a very deep knowledge of product X, say, Exchange, how would you know where to start from a monitoring perspective? Thankfully, the management pack framework, for the most part, takes the pain away when it comes to monitoring key applications and workloads. Sure, some MPs are better than others, but they’re all improving, and the ecosystem is growing. You only have to look at the number of Partners who are producing PRO-enabled management packs to see the ever-growing ecosystem:
These are just the Partners who are building PRO MPs, never mind the huge ecosystem creating regular MPs too! For me, that just shows that Partners get it. They get the fact that management is a key focus in the future, so providing value-add to their customers, through integration with Microsoft technologies, helps to unify a customer’s infrastructure, and ease the management process for them.
One of our key MP Partners within the ecosystem is Bridgeways. The reason I’m aware of Bridgeways is that, among other things, they allow OpsMgr to monitor VMware environments. This is a very useful add-on for an environment where VMware technologies have been deployed as the virtualisation layer of choice, but more knowledge about what’s inside the virtual workloads is now required. That’s where OpsMgr comes in, but not being able to see everything in OpsMgr would be disappointing, hence Bridgeways provide the MP to hook OpsMgr into VMware technologies. You can see a demo of this, here.
Aside from the VMware MP, Bridgeways also provide a free MP for Hyper-V. Now, you can download the Microsoft MP for Hyper-V here, however, this is very much a base MP, and effectively gives you the following:
Not very much there! Fairly useful, but to be honest, the information is more about listing info, than monitoring performance. That’s where Bridgeways have come in and extended the MP in multiple different directions, to give you:
You know what the crazy thing is? Bridgeways have even linked to a blog, which walks you through taking the standard Hyper-V Base MP, and extending it to produce the MP that you can download! For those of you who’ve never edited or modified an MP, and want to understand how it’s done, and the results it can produce, this is a very useful series of tutorials:
Part 1: Getting Started with writing a more robust Hyper-V MP
Part 2: Adding your own Monitors
Part 3: Adding Rules and Performance Views
Part 4: Adding Dashboards
Hat-tip to the chaps at the xplatxperts blog for all the information – very useful indeed!
You and I both know that training typically isn’t cheap. It’s not just the course fees that can be expensive; it’s the time out of the office that can, in fact, be more costly. It’s a pretty simple relationship – when your sales team are out of the office, they aren’t selling, and if they aren’t selling, you aren’t generating revenue! I’m sure this is just one of the reasons why, over recent months, more and more content is being delivered, on demand, through the browser, enabling employees within organisations to learn at a time that’s flexible for them, and the business. The Microsoft Virtualisation Sales Specialist certification is no different.
Why’s it important?
Well, believe it or not, very shortly, the Microsoft Partner Network will retire the Gold, Certified, and Registered Partner levels, and will replace said levels with competencies. These competencies will more accurately reflect Partner skills, and enable Customers to more accurately find, and work with, the right Partner for them. Competencies exist in 2 flavours for a particular area. Using virtualisation as an example, there will be the virtualisation competency, and the advanced virtualisation competency, each with different requirements and different benefits (although advanced will be in addition to the regular competency). If this is news to you, and you’re reading this thinking, ‘wha?’, I suggest you head on over to the Partner Network before continuing!
Now that we’re all up to speed, why is the Sales Specialist certification important? Well, these types of certifications are going to contribute towards the competency. For so long, being Gold, or Certified, has, to a large extent, been about technical exams, MCPs and MCSEs, but this break from the norm is ensuring that not only can your techies deploy the technology, but your sales team can also articulate the benefits, construct the deal, and license it accordingly. Oh, and we’ll throw a bit of competitive training in there too ;-)
You can access the Virtualisation Sales Specialist training, and exams by heading over to the Microsoft Sales Specialist site, but you’ll need a Windows Live ID associated to a Partner to take advantage of this – if you’ve ever accessed the Microsoft Partner Portal, and signed in successfully, use this Live ID! When you log in, you’ll see that as of today, there are 2 tracks:
If we drill into the virtualisation track, you’ll see that there are 2 main sections. The first takes more of a product-understanding perspective, covering an overview before providing a deeper look at Microsoft’s Server Virtualisation story, and then its Desktop story; the second is dedicated solely to Licensing, and Selling in Competitive Situations. There is also a separate assessment for each section, which, if you pass, rewards you with the certification logo that you see above, but also a certificate you can print out. It’s also important to know that the accreditation is valid for 2 years.
What are the courses actually like? Well, for me, they could do with a bit of a brush-up from a presentation perspective. Considering we have PowerPoint 2010 now, and the ability to make some of the slickest presentations, graphically, we’ve ever seen, some of the slides in the sessions leave a little to be desired, but the messaging is spot on, and covers everything to a good level of detail, and trust me, the exams are pretty tricky! Whenever you launch a course, it registers you on to it via the Microsoft Partner Network, which will be logged against your Partner ID and enable you, as a Partner, to identify who’s been on which courses.
Overall, I’d say it’s a pretty useful resource, and definitely one to get under your belt. Even if they don’t contribute towards the competency (the Partner Network says the online courses are available in October, yet this one is here now!), they’re a valuable way of understanding the Microsoft Virtualisation story, across desktop and datacenter.
Head on over to the Microsoft Sales Specialist site for more information!
For those of you who want to learn a little bit more about some of the nitty-gritty details around Hyper-V, like what actually happens when the Hyper-V role is enabled in Windows Server 2008 R2, or what happens when a snapshot is deleted, this Hyper-V R2 component architecture poster could be of use to you. If you’re lucky enough to have a whopping printer, and some massive paper, you could print it out, view it in all its glory, and heck, even put it on your wall if the mood takes you, but for the rest of us, we’ll have to stick with the PDF on the screen. There are a total of 8 sections on the poster, covering the following aspects:
People usually get caught out with networking, so that’s a useful section to pay attention to, especially as it goes into detail on the different types of virtual networking, and the differences with and without VMQ, and the benefits this brings. The configuration of storage is also covered in some detail. All in all, a pretty useful reference!
You can get hold of the poster here.
If you’re a Microsoft Partner, and you currently sell/deploy, or are thinking about selling/deploying either System Center Essentials 2010, or Data Protection Manager 2010 into mid-size businesses, then Redmond Channel Partner Magazine have published a very useful 8-page document that may just be up your street. It’s got a wide range of information, along with a number of interviews with key stakeholders in the technology, such as David Mills, Senior Product Manager, Jason Buffington, Senior Technical Product Manager, and Dave Sobel, CEO of Evolve Technologies to name but a few. There’s also a 2 page interview with Zane Adam, a General Manager for Virtualisation at Microsoft, who shares his thoughts on virtualisation and management in the mid-market, and how Microsoft can deliver the integrated IT solution to address the challenges seen by businesses in this segment.
If you’re interested in getting hold of the article, simply head on over to Redmond Channel Partner Online, register, and you’ll then have access to the article for free.
If you’re interested in more information around the combination of SCE and DPM, there’s a useful blog post over on the System Center blog, here and also, more specifically here.
I admit, it’s been a while. 28th April to be precise, which, in my book, is far too long. For those of you who showed concern, it was much appreciated, but I’d like to reassure you, I hadn’t moved role, left Microsoft, or been struck down with illness. On the contrary, I’d just been very busy indeed, as we roll up to our end of year. That said, you have to go back to December 2007 when I last failed to post at least one article within a month, so it’s a good job I’m here today!
May has been an absolute whirlwind. A series of events, meetings and other Partner-related activities has meant blog-time has been at an all time low, but that doesn’t stop my ‘to-blog’ folder mounting up and mounting up…
53 items, excluding this one, to talk about! It’s going to be a while before I get through that lot, and by the time I do, some of it will no doubt be out of date, but we’ll cross that bridge when we come to it.
The eagle-eyed among you will no doubt have noticed that the blogs hosted through MSDN and TechNet have recently undergone a bit of a facelift as we upgraded our blogging platform to a newer version. At first, I thought this was going to be painful, but I was reassured all content would be brought across to the new platform without any loss. They were right. Well, apart from the fact that the CSS overrides I’d successfully used on the old virtualboy blog, didn’t really have the same effect on the new virtualboy blog, so I had to start again on that one, but, and I hope you agree, it’s looking better than ever, with more flexibility from my side to boot.
Firstly, I can happily create my own widgets, and drag modules around the page to style them how I like, which makes simple customisations much easier. I have more control over what exists in these modules too. Sure, I had control over my right-hand-column before, but the new platform makes it much easier to enable me to have control, and set the styling how I see fit. I’m no web guru (or should that be, master?) so anything that makes my life easier is a good thing.
One of the first things you may notice is the inclusion of a small, well, advertisement I guess, for the sister site, virtualboy tv:
Hopefully, its inclusion will make it easier for anyone to make their way over there, should they want to investigate some of the video content that’s available around Microsoft virtualisation. Again, I’m sure I could have done this before, on the old platform, but the new platform just makes it easier.
You’ll also notice that you’re now in control of how you view the content on the homepage:
If you take a look at the screengrab above, you’ll see right at the top left, you have 2 controls. The first is list view, which shrinks down the content on the homepage into bitesize chunks, so the page loads a bit quicker, and if you’re interested in a post, as usual, you can click the link and read it in its entirety. This is the default setting on my blog. Alternatively, if you prefer, you can expand this out by using the button next to it, in order to see all the posts in their entirety on the home page. You also have the option to sort the content in a couple of different ways.
One of the biggest things I’ve noticed with this new platform however, is the integration with Windows Live ID’s, which opens up a whole new content-sharing opportunity, by unifying the authentication process of the blogs. So, if you’ve got a Live ID, make sure you sign in when accessing the blogs, as you’ll be able to start to build up a repository of content that you’re interested in. If you check out mine:
You’ll see that I have a number of tabs, starting with profile, which includes posts that I’ve marked as ‘favourites’ across any of the TechNet and MSDN blogs. For me, that’s very cool, as I can see something that I’m interested in on, say, the Virtualisation team blog, mark it as a favourite, and in the future, I can just go to my profile and find it. I guess it’s an alternative to favourites in your internet browser, but hey, it’s a nice feature!
You can also start to make friends with fellow bloggers, which then starts to contribute towards your activity tab.
All in all, I really like the new platform. I expect there to be a few niggling bugs as we go through the next few weeks. The fact that the search bar at the top of my blog currently searches across the whole of TechNet rather than just my blog is one they already know about, so hopefully there won’t be too many more!
Patrick, from the MVUG blog, sent across this beauty. If you’re working with HP hardware, or thinking about it, then Virtual Connect should already mean something to you! If you’ve never worked with the hardware before, then this book could be for you! In a nutshell, Virtual Connect is a way of simplifying the connectivity between servers and the LAN/SAN infrastructure. Connections are, get this, virtualised (not in the same league as ‘user state virtualisation!’ ;-)), thus reducing cable count and simplifying management, and with the latest addition, namely Flex-10, you can quadruple the number of NICs presented to blades without actually adding any NIC cards or switches. Very useful where virtualisation is concerned, where the recommended network configurations suggest in excess of 4 NICs per host to segregate the different types of traffic, yet most blades only allow 2 NICs per blade. There’s some great info on Flex-10 here, but if you’re interested in this, and more, check out the free eBook.
Download it here!
If you’ve had chance to watch the Quest vWorkspace 7.0 videos that Matt Evans and I recorded a couple of months back, showcasing the installation and configuration of Quest vWorkspace with a Microsoft infrastructure, you’ll know that it’s a pretty compelling combination of technologies that’s easy to configure, with powerful results. For those who haven’t seen them, Part 1 focused on enhancing the RDS Session Host environment, whilst Part 2 focused more specifically on Virtual Desktops. It was within the virtual desktop video that you may have heard Matt mention improvements in future releases of vWorkspace that may help streamline the backend image provisioning on Hyper-V R2, by better utilising things like differencing disks. If you’re not familiar with differencing disks, more on this later…
To cut a long story short, Quest vWorkspace 7.1 not only delivers a superb addition around image provisioning and integration with Remote Desktop Services (RDS), but also enhances the end-user experience capabilities. Let’s look at the user experience bits and the RDS integration first, before focusing on, for me, the killer feature of the release.
From the vWorkspace website, “With the patent-pending EOP Xtream, vWorkspace delivers faster screen updates and smoother interaction across WAN and Internet/VPN connections by dramatically reducing the effects of network latency. These improvements drive user acceptance and will accelerate adoption of desktop virtualization in enterprises and Desktop-as-a-Service (DaaS) from the cloud.”
What does this mean?
Well, anyone who’s used RDP, even in its most recent iteration, will know that whilst it’s effective and rich over a LAN, when used over the WAN, performance can start to suffer as latency increases, giving the user a less than optimal experience. With the introduction of EOP Xtream, Quest has enhanced the user experience considerably for WAN-based environments. If you watch the video here, you’ll see that compared with RDP, with 100ms latency, the experience is significantly improved. Now, for this particular demo, I wouldn’t class the user experience as ‘local’, as there’s still some element of jerkiness here and there, but it’s clear to see there is a considerable improvement. Skip to 2 minutes 30 seconds, and you’ll see Windows 7 come into play, this time with 200ms latency. Again, there is a clear difference between RDP and EOP Xtream, so all good from that perspective. Skip to around 3 minutes 50 seconds, and you’ll see a 2008 Terminal Server, with 300ms latency, and again, you’ll see the experience is altogether smoother, and snappier, with EOP Xtream enabled.
That’s the user experience covered, but what about the integration with RDS? Well, again from the vWorkspace website, “vWorkspace eases setup and accelerates use of the services provided in the Windows Server 2008 R2 operating system, including the Remote Desktop Connection Broker (RDCB). vWorkspace uses the integration with RDCB to publish applications and desktops from different host platforms, including Hyper-V and Remote Desktop Session Host, through RemoteApp and Remote Desktop Web Access services.”
If you’ve used RDS to its full capacity, you’ll know there are a couple of places you need to go to configure the different bits. If you want to manage applications, you head on over to the RD Session Host, into RemoteApp Manager, and off you go. If you want to manage the Connection Broker, you have to go over to the RDCB, and off you go. vWorkspace 7.1 introduces Advanced Integration with RDS, giving you a unified management console for RDS. If you check out this video, you’ll see exactly what I mean. In the video, the administrator chooses to provide a number of applications out to users, yet the speed at which the apps are verified and added, firstly to AppPortal, then, behind the scenes, to RDWeb and RemoteApp and Desktop Connections, is very quick indeed, and yet all this was configured from one place.
That’s 2 key areas down, however I mentioned above, the killer feature for me, was the image provisioning, utilising differencing disks. For those of you not familiar, from here, “A differencing virtual hard disk stores the differences from the virtual hard disk on the management operating system. This allows you to isolate changes to a virtual machine and keep a virtual hard disk in an unchanged state. The differencing disk on the management operating system can be shared with virtual machines and, as a best practice, must remain read-only. If it is not read-only, the virtual machine’s virtual hard disk will be invalidated”
If you think about this in the vDesktop world, I could create a sparkly new VM, make it as perfect as I want it to be, then I could (more than likely) Sysprep it, shut it down, and use this as my read-only master disk, from which I would hang the ‘differences’. This, conceptually, sounds all fine and dandy. We spin up 100 vDesktops, create differencing VHDs off the parent, boot them, and away we go. This would continue fine for a while, until you need to update the OS. Bearing in mind here that the reason for using differencing disks is to be more dynamic, and to save storage, if I push out 100 x Update-X, it’s getting stored in each of the vDesktops’ differencing disks. The easiest thing to do, again conceptually, would be to update the master disk, so that all the child OSes would have the patch without it taking 100 x the storage required. The reason I use the word conceptually is because, in reality, if I change or modify the master, the differencing disks would be invalidated. Thus, I’d need to recreate 100 new differencing disks, attach them to the already existing vDesktops, and then sort out DNS records, computer objects in AD, the OOBE process when you start Windows after sysprepping, and more! Sure, I could do some of this with PowerShell, or alternatively, vWorkspace 7.1 does it for me.
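The invalidation behaviour above is easy to model: a differencing VHD records, among other things, its parent’s modification timestamp at creation time, and the chain is only considered valid while that still matches the parent. Here’s a toy sketch of just that logic (deliberately not a real VHD parser, and the class and method names are purely illustrative):

```python
# Toy model of differencing-disk invalidation -- not a real VHD parser,
# just the logic: each child captures its parent's modification timestamp
# at creation, and the chain is invalid once the parent has been written to.

class ParentVHD:
    def __init__(self):
        self.timestamp = 0  # bumped on every write to the master

    def write(self):
        self.timestamp += 1  # e.g. patching the golden image

class DiffVHD:
    def __init__(self, parent):
        self.parent = parent
        self.parent_timestamp = parent.timestamp  # captured at creation

    def is_valid(self):
        return self.parent_timestamp == self.parent.timestamp

master = ParentVHD()
desktops = [DiffVHD(master) for _ in range(100)]  # 100 vDesktops off one parent
assert all(d.is_valid() for d in desktops)

master.write()  # patch the master...
print(sum(d.is_valid() for d in desktops))  # ...and every child is now invalid: 0
```

That single `master.write()` knocking out all 100 children in one go is exactly why updating the golden image forces a recreate-and-reattach cycle, and why tooling that automates that cycle is so valuable.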
If you check out this video, from about 1 minute in you’ll see the new rapid provisioning process, using a golden image, yet at the same time incorporating sysprep customisation information into the image. Nice! Once completed, you’ll see how bulk updates, like those discussed briefly above, can be managed through vWorkspace. This is very cool stuff, and on the surface, rivals Provisioning Server as a dynamic platform for deploying vDesktop images. Both will have their pros and cons, I’m sure, but to my knowledge, Quest are the first to develop a solution like this that integrates with differencing disks, and manages them in a rich, simple and streamlined way. Definitely worth a look if you’re evaluating VDI options!
Brian Madden, or more specifically, Gabe Knuth, has also picked up on the release, showing a short video, live from MMS 2010, in which he discusses the new capabilities with Quest’s Rob Mallicoat. Again, useful viewing, particularly if you want a bit more depth about EOP Xtream.
My to-blog folder is positively brimming with stuff that I want to talk about, but finding the time at the moment is proving mightily difficult! What with MMS last week (including the awesome demo of SCVMM vNext, long-distance Live Migration, Opalis, Service Manager, Config Manager R3 beta and more!) along with Quest releasing a superb update to vWorkspace, namely version 7.1, and countless products reaching their RTM milestones, I’ve got a lot of blogging to do! This one however, comes first!
A couple of weeks back, in London, we ran a Virtualisation Summit, at Shepherds Bush cinema. If you attended the day, I’m sure you’d agree that the content, for the most part, was useful. Whether you were interested in Microsoft’s all up strategy going forward, desktop optimisation, VDI, virtualisation management, or, toolkits to assist with constructing your own private cloud, it’s clear that there was a lot of information on offer, and it’s always difficult to please everyone. I would have personally liked to have seen a few more demos, and wow-stuff, kind of like the stuff shown in the MMS keynotes, but hey ho, maybe next time!
If you did attend, you’ll have hopefully seen me present a session, along with Patrick Irwin from Citrix, on “Implementing a Comprehensive VDI Solution”, in which we discussed RDS for VDI, and some of the elements of XenDesktop. I also discussed some of the value add that Quest offer, with vWorkspace. Unfortunately, I didn’t know about the 7.1 release, otherwise I would have squeezed in some other useful titbits of information! For those that attended, I hope you found my session useful. I do try and put everything into my sessions, to make them engaging and informative, but I’m always open to feedback to improve!
For those that didn’t attend, or maybe you did, and want a recap, you can now view the session on demand, over at the Virtualisation Summit page. If you head on over there (or click the image below), click the play button next to my name, and a Silverlight video will start to play. Unfortunately, there’s no separate download of the underlying WMV file, so it’s in-browser only for now I’m afraid. Still, the resolution is pretty good, so if you want to go full screen, feel free. Also, if you want the deck from the day, let me know and I’ll ping you a link.
I hope you enjoy it, and as always, feedback welcome!
There are also other sessions available on demand, from the whole week of TechDays, which you can access, here.
Earlier today, I had the pleasure of sitting down and having a chat, via webcam, live from the man-cave, with Mike Laverick, who, among other things, has a successful blog, is a VMware vExpert, and has written a number of books, including the VMware vSphere 4 Implementation. If you’ve not seen Mike’s chinwag series before, to quote Mike, a chinwag is a “light informal conversation for social occasions”, but focused on virtualisation. It just shows it’s a very small world when the first waggee of the series was Chris Dearden, who I actually sat next to on the VMware VI3 Fast Track training course a year or so ago!
Naturally when I was asked to participate in the chinwag, I obliged, and if you’re interested, you can head on over to Mike’s blog, to have a watch, or a listen. There are plenty of versions available, ranging from video, through to MP3, through to iTunes podcast. In the chinwag, we covered all sorts of topics, ranging from Microsoft’s strategy around virtualisation and management, to a bit of a deeper look at Hyper-V, the hypervisor, and the parent partition. We also chatted about some of the features coming in SP1 of R2, but before you get your hopes up, I just explained what has been announced, rather than giving away anything more than what’s already been publicly shown or discussed in various sources online.
You can watch, or listen to the chinwag over on Mike’s blog. Enjoy!