Love reading? Love technology? Like money?
If the answer to those questions is Yes, then these eBook links are for you. Covering a wide spectrum of technologies, from Office through to Virtualisation, and Windows Phone through to Visual Studio, this array of free eBooks should keep you going for a while…
For those of you who caught my Dynamic Memory post a couple of days back, and are desperate for more information, you can actually watch, online, the session delivered by Ben Armstrong, Senior Program Manager Lead (and Dynamic Memory owner!), focusing on explaining Dynamic Memory in depth.
The session, which runs for just under 80 minutes, delves into a number of different areas of Dynamic Memory, from why it's required to how it works, so if you're curious about it, it's definitely worth a watch. If you head on over to the TechEd Online page, you can also download it for offline viewing.
Hot on the heels of Thursday's post on virtualising SharePoint 2010, I thought I'd share some useful resources around virtualising SQL Server 2008 R2 on Hyper-V R2, and the kind of performance you can expect from this combination of technologies.
High Performance SQL Server Workloads on Hyper-V White Paper
This whitepaper focuses on the advantages of deploying Microsoft SQL Server database application workloads to a virtualisation environment using Microsoft Windows Server 2008 R2 Hyper-V. It demonstrates that Hyper-V provides the performance and scalability needed to run complex SQL Server workloads in certain scenarios. It also shows how Hyper-V can improve performance when used in conjunction with advanced processor technologies. This paper assumes that the reader has a working knowledge of virtualisation, Windows Server Hyper-V, SQL Server, Microsoft System Center concepts and features.
The whitepaper discusses, in some detail, a number of different tests that were performed, and from page 30 onwards you can also read about best practices for running workloads like SQL on Hyper-V. The best practices section provides guidance on configuration for the parent OS, networking, VHD considerations, and more.
If you supplement the information in this whitepaper with some of the other resources below, you should be in a good position to optimise the performance of SQL Server in a virtual environment.
SQL Server 2008 Virtualization
SQL Server Analysis Services Virtualization
If you're a bit of a PowerShell guru, and have NetApp filers within your environment, the release of the Data ONTAP PowerShell Toolkit could be something you find very useful. The toolkit provides a collection of PowerShell cmdlets to facilitate integration of Data ONTAP into Microsoft Windows environments, by providing easy-to-use cmdlets that map to low-level Data ONTAP operations (i.e. ZAPIs).
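For a flavour of how it hangs together, here's a minimal sketch, assuming the toolkit's DataONTAP module with its Connect-NaController and Get-NaVol cmdlets; the filer name is a placeholder, and the exact volume property names may vary between toolkit releases.

# Minimal sketch of the Data ONTAP PowerShell Toolkit in use.
# Assumes the DataONTAP module is installed; 'filer01' is a placeholder,
# and the volume property names may differ slightly between releases.
Import-Module DataONTAP

# Connect to the filer (prompts for credentials)
Connect-NaController filer01 -Credential (Get-Credential)

# List the volumes on the filer, with their usage in GB
Get-NaVol | Select-Object Name,
    @{n='UsedGB';  e={[math]::Round($_.SizeUsed  / 1GB, 1)}},
    @{n='TotalGB'; e={[math]::Round($_.SizeTotal / 1GB, 1)}}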
The toolkit has been released via the NetApp Community, and is available free of charge to NetApp partners & customers (NOW login required to download). The Data ONTAP PowerShell Toolkit Community site has been set up under "NetApp Community > Developer Tools" (or here) for discussions, feedback and sharing scripts that use the toolkit.
If you’re interested, and want to learn more, make sure you head on over to the Developer Tools section on the NetApp Community site.
Hot on the heels of the announcements around the beta of SP1 for Windows Server 2008 R2 and Windows 7, 2 new videos have been published to TechNet Edge, showcasing firstly, the upcoming Dynamic Memory, and secondly, RemoteFX.
And the second video…
Looking forward to getting my hands on the beta! Less than 2 months to go! All I have to do now is hope it plays nicely with the other technologies around it… ;-)
If you’re interested in understanding more about the sheer choice you have in front of you for the future of desktops, this handy guide could come in useful. It covers a variety of topics, from ‘What is Desktop Virtualisation’, through to ‘Thinking about your Organisation’s requirements’, and also helps you to ‘choose between the options’. You can use the controls in the bottom-right corner of the screen to create a PDF if you want to take the content offline.
For those of you, like me, who aren't lucky enough to travel over to New Orleans for TechEd, you're pretty much restricted to catching up on the gossip via blogs and news feeds. Whilst this is fine for some, there's nothing quite like seeing the technology for yourself. It's good news, therefore, that the keynotes are being made available on the web, on demand, for you to stream or download, so you can hear first hand Bob Muglia, the President of our Server & Tools business, talk about Dynamic IT, Cloud and more. If that's not enough, you'll also see a selection of speakers from Microsoft show you, live, some of the currently available, and future, technologies that can help you to transform your infrastructure. For those of you who don't want to watch it, or perhaps don't have time to watch it all, here are my highlights…
3m 50s to 10m 30s – Visual Studio Lab Management & System Center Integration
I'm not a dev. Far from it! Sure, I know the odd bit of PowerShell (Get-VM), but for those of you who do work in that world, having process around the development, testing, and rollout of an application or workload can be difficult, and expensive. Having watched this demo in action, I have to say, I'm impressed. I'll reiterate, I'm not a developer, but I could see how the combination of these technologies could easily benefit that type of environment, and leverage tools, like System Center, that can also benefit other areas of the infrastructure. Stephanie Cuthbertson, a Group Program Manager in the Visual Studio TFS team, explains how easy it can be, with the right technologies, to, for example, update troublesome applications that are live in production. Using recording capabilities, IntelliTrace, and more, I was actually blown away with what it gives you. I used to code Java whilst at Uni, and it was very different then! (It wasn't that long ago, for those of you thinking the worst!)
Once the application has been fixed, it needs to be deployed into production, and that's where System Center, in particular Operations Manager, Virtual Machine Manager and Opalis, comes in. Opalis, for those not familiar, is an orchestration engine, helping you to build out workflows with little or no scripting involved, saving time, effort, and grey hairs. It really does enable some compelling scenarios, and, like many of the System Center technologies, the framework they provide means the more you put in, the more you get out.
15m 55s to 23m 10s – System Center Virtual Machine Manager vNext
For me, this is a big one. Anders Vinberg, Technical Fellow in the Management and Security Division, shows us that, firstly, the interface has been overhauled, to bring it in line with Office. It's going to manage the fabric of your infrastructure, as opposed to just the VMs that run on it. It's going to transform the way you deliver a virtual infrastructure, through integration with things like Server App-V, DAC Packages (Data Tier Applications), and MSDeploy Packages. The separation of these intelligent components, combined with the Service Model wizard, enables construction of tiered applications very quickly, in a very powerful manner. Core Elasticity is how Bob describes the system; a good term to slip into presentations, methinks ;-)
26m 10s – 36m 45s - Building Cloud Applications and Integration with On-Premise Technologies
As I said earlier, I'm not a dev, so anything that's development related usually goes over my head, but this demo, very much like the first one, just makes sense. Doug Purdy, CTO in the Data and Modeling group, shows us how things like extending your identity from AD up to the cloud make it seamless for the user to utilise cloud services, and also how this can integrate with on-premise platforms like CRM. Very cool. Doug also announces that, as of now, you can start to construct applications on Azure utilising .NET 4. The key thing that's apparent to me is the deep (and getting deeper) integration from Visual Studio up to Azure, whether it's looking deeper inside the databases on SQL Azure, or actually deploying applications straight to Azure, utilising tools that were highlighted in the first demo of the keynote. We've also released AppFabric to RTM, so you can really start to connect pieces of your infrastructure to the cloud.
The final part of the demo, and my favourite bit of Doug’s section, was the extension of Azure to enable monitoring, side by side with your on-premise elements, with System Center Operations Manager. Very cool indeed.
40m 55s – 43m 15s - Chicago Tribune and its use of Azure
OK, not a demo as such, but a useful insight into how a real-world customer is benefiting from Windows Azure to run their business, scaling on demand, and consolidating their 32 datacenters, 75,000 square feet of raised floor, and 4,000 servers running a variety of different applications. The result was 2 datacenters and 1 Azure-based platform. Massive savings!
46m 28s – 57m 14s Microsoft Unified Communications – Wave 14
Gurdeep Singh Pall, Corporate Vice President of the Office Communications Group, walks us through a number of the capabilities of the next wave of OCS. I don't know about you, but I've come to rely heavily on OCS as a way to quickly communicate and collaborate with my colleagues and Partners, through the mediums of Instant Messaging, Voice, or Video. It's fundamental to my productivity at home too; seeing as my mobile phone signal is so bad, OCS is the only reliable (and cost effective!) way of communicating. One of the big changes in Communicator is its personalisation. It's much more of a 'Corporate Windows Live Messenger', with a very social experience, with photos, activities and more. The big thing, I guess, for many organisations, is the cost savings associated with going to a soft-phone based VoIP infrastructure, and to give you an idea of Microsoft's savings, we're actually saving over $1 million per month using this technology versus our prior phone-based solutions. Impressive stuff. Gurdeep also talks about the integration with SharePoint, and the intelligence around searching for people based on different pieces of information. I have to admit, the high-def video stuff was awesome. I'd like to see how it behaves on my pitiful internet connection in Chorley, Lancashire, but still, it's extremely promising, and will work perfectly with my LifeCam Cinema!
59m 25s – 1h 5m 28s - Windows Phone 7
I'm excited about Windows Phone 7, as it looks incredibly slick, and as long as someone converts all the iPhone apps across to Windows Phone, it stands a great chance of having a good crack at the smartphone market. If that doesn't happen, and the applications aren't produced to the standard, and price, that people expect, it could be a long hard slog for Windows Phone to make a breakthrough. Augusto Valdez, Senior Product Manager for Mobile, walks us through a number of capabilities of the new platform. As I said earlier, I'm excited about this. The interface, inevitably, will be compared to other touch-screen devices, yet there are both similarities and differences. Overall though, the general look and feel, the quality of the experience, and the integration with Exchange, Office, and SharePoint is first class, which for businesses is very important. The use of the Live Tiles is very neat, although I'd be interested to know the effect it has on the battery! I'd like to change the colours of the different tiles if possible; time will tell if that can be done. I'm sure it can be!
Opening things like Excel and PowerPoint retains the level of fidelity you expect from a PC-based device, and sure, not every command will be available, but it's certainly a great way of interacting with data, in a rich manner, whilst on the move. It's this kind of functionality which will ensure that I wait for Windows Phone 7, instead of jumping on the iPhone bandwagon…
1h 10m 22s to 1h 19m 35s – Combining Business Intelligence and the Cloud
Amir Netz, Distinguished Engineer in the Microsoft BI Team, showcases the power of Excel 2010 to present rich views of information. What impresses me here, not being a BI specialist, is that it's done without the need for coding, making it instantly more accessible to a greater number of users.
1h 21m 28s to 1h 24m 5s – The use of Windows Server & SQL Server in Avatar
Now, before anyone jumps in and says 'I didn't see Windows Server or SQL in Avatar' – you obviously weren't paying attention in the film! Just kidding. Product placement in films is commonplace, but Windows Server & SQL aren't something I ever expect to see on screen! Unless it's a film about databases, in which case, it's still not something I expect I'd go to see of my own free will!
It’s not often you think of technologies like Windows Server or SQL being involved with something like Avatar, but in this short video, you’ll learn about Gaia, and how that, along with Microsoft technologies, was fundamental to the movie. Very impressive stuff indeed.
I wish I could have gone to TechEd. We have our internal equivalent in the next few months, where I'll get immersed in new and upcoming technologies, which is something I'm really excited about, so it's not all bad. What's clear to me is that Microsoft is innovating on multiple fronts. The stuff that's here with BI and Visual Studio, along with Azure and its integration into System Center, is really starting to take shape, and I cannot wait for future releases of System Center, as, release on release, things are getting stronger, and more relevant for customers' infrastructures. Windows Phone 7, from a personal usage perspective, is high on my agenda, as my current phone, whilst functional, isn't giving me the experience I'm looking for in a phone anymore. Times have changed; across desktop, datacenter, cloud and mobile, trends are changing, technologies are moving, and the keynote gives just a small glimpse into what's possible.
If you want to watch the keynote for yourself, or download it, you can get it from here.
As hardware increases in scale, and new capabilities, such as Dynamic Memory, are introduced into Hyper-V R2 SP1, more and more customers are going to start to encroach on the supported limits of Hyper-V cluster nodes. As of May 2010, those supported limits stood at 64 VMs per cluster node, up to a total of 15+1 nodes, giving a total of 960 VMs. This contrasts considerably with the 384 VMs per non-clustered host, yet will still be more than enough headroom for most customers. However, in a recent announcement at TechEd 2010, we've decided to increase the limits on cluster nodes. The increase is actually pretty considerable too, helping customers to scale to much greater levels, especially on smaller clusters, assuming they have the resource in their underlying hardware!
So, in a nutshell, we now support 1,000 VMs per cluster, providing you don't exceed the 384 VMs per node limit, which will still be enforced. In tabular form:
Number of Nodes in Cluster         | Max Number of VMs per Node | Max # VMs in Cluster
2 Nodes (1 active + 1 failover)    | 384                        | 384
3 Nodes (2 active + 1 failover)    | 384                        | 768
4 Nodes (3 active + 1 failover)    | 384                        | 1,000
5 Nodes (4 active + 1 failover)    | 384                        | 1,000
6 Nodes (5 active + 1 failover)    | 384                        | 1,000
7 Nodes (6 active + 1 failover)    | 384                        | 1,000
8 Nodes (7 active + 1 failover)    | 384                        | 1,000
9 Nodes (8 active + 1 failover)    | 384                        | 1,000
10 Nodes (9 active + 1 failover)   | 384                        | 1,000
11 Nodes (10 active + 1 failover)  | 384                        | 1,000
12 Nodes (11 active + 1 failover)  | 384                        | 1,000
13 Nodes (12 active + 1 failover)  | 384                        | 1,000
14 Nodes (13 active + 1 failover)  | 384                        | 1,000
15 Nodes (14 active + 1 failover)  | 384                        | 1,000
16 Nodes (15 active + 1 failover)  | 384                        | 1,000
and from TechNet:
Nodes per cluster
Consider the number of nodes you want to reserve for failover, as well as maintenance tasks such as applying updates. We recommend that you plan for enough resources to allow for 1 node to be reserved for failover, which means it remains idle until another node is failed over to it. (This is sometimes referred to as a passive node.) You can increase this number if you want to reserve additional nodes. There is no recommended ratio or multiplier of reserved nodes to active nodes; the only specific requirement is that the total number of nodes in a cluster cannot exceed the maximum of 16.
Running virtual machines per cluster and per node
1,000 per cluster, with a maximum of 384 on any one node
Several factors can affect the real number of virtual machines that can be run at the same time on one node, such as:
- Amount of physical memory being used by each virtual machine.
- Networking and storage bandwidth.
- Number of disk spindles, which affects disk I/O performance.
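If you want to play with the numbers yourself, here's a quick back-of-the-envelope sketch in PowerShell of how those two caps interact, assuming the simple rule of 384 VMs per node, 1,000 VMs per cluster, and one node kept free for failover (the function name is just mine, not an official cmdlet):

# Back-of-the-envelope sketch: supported VM capacity for a Hyper-V R2 SP1
# cluster, given 384 VMs per node, 1,000 VMs per cluster, and one node
# reserved for failover.
function Get-ClusterVmCapacity {
    param([int]$TotalNodes)          # total nodes in the cluster (2 to 16)
    $activeNodes = $TotalNodes - 1   # keep one node free for failover
    [math]::Min($activeNodes * 384, 1000)
}

# A 4-node (3+1) cluster is already enough to hit the 1,000 VM ceiling:
2..16 | ForEach-Object { "{0,2} nodes = {1,4} VMs" -f $_, (Get-ClusterVmCapacity $_) }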
Obviously many of you will look at that and say "We don't leave 1 node free for 'failover'", whereas some of you will always do this, to ensure there's enough resource for failing over VMs in the event of an issue. Now, I'm not going to say that you absolutely have to have a +1 node, but it is best practice nonetheless, and something that should be considered in mission-critical deployments. So, looking at the table, even on a 4 node cluster (3+1), you can hit the big 1,000, which shows huge scalability and consolidation. If you went from 1,000 servers down to 4, that would be a saving of over 99% (996 out of 1,000, or 99.6%, assuming my aging maths is correct there). I'm going to say something now, and you should listen carefully.
Just because you can, doesn’t mean you should.
If you're going to run that many eggs in so few baskets, you're going to have to ensure that the underlying infrastructure is rock solid and extremely well capacity planned and architected. From networking requirements (a LOT of NICs would be needed in those hosts, I imagine!) through to storage (how much I/O!?), and memory (Dynamic Memory will help!) through to CPU (8-12 cores will help!), every little decision could be amplified up to 333 times, so you have to nail it with detailed and thorough planning and comprehensive testing.
Perhaps an area where you're more likely to hit this limit is when virtualising desktops, rather than servers. In most organisations, the number of desktops typically outweighs the number of servers, so hitting the previous limits was much more achievable; this change gives the organisations who happened to be creeping closer a bit of breathing room.
If you’re a Microsoft Partner, and you’re interested in learning more about the recently released System Center Data Protection Manager 2010, or System Center Essentials 2010, here’s a couple of webcasts that you may want to check out:
MGT77PAL: Technical Introduction to System Center Data Protection Manager 2010
Presented by: Rahul Jacob
15th June 2010 – 6pm GMT
System Center Data Protection Manager 2010 provides new backup and recovery capabilities at a low cost. Because of the significant new capabilities in DPM, it is highly important that the field, partners and customers are aware of the various solutions and opportunities we have with DPM 2010. This session will help you get started with easy setup and configuration.
MGT78PAL: Application Workloads and DPM - Better Together
Presented by: Rahul Jacob
17th June 2010 – 6pm GMT
Microsoft System Center Data Protection Manager (DPM) is designed for IT generalists and uses wizards and workflows to help ensure that you can protect your Exchange, SQL and SharePoint data without any advanced degree of storage and backup knowledge. This session will help small and medium business IT administrators plan backups, recoveries and plan for further Disaster Recoveries. The solutions will enable you to take advantage of Exchange 2010 and DPM 2010.
MGT79PAL: Technical Introduction to System Center Essentials 2010
Presented by: Ashok Kumar G
22nd June 2010 – 6pm GMT
System Center Essentials 2010 has now hit RTM, so it is important that the Microsoft field and our partners who focus on midsized businesses with less than 50 servers and 500 clients learn about the value of this new offering for physical and virtualization management. This session will help you get started with easy setup and configuration. Come see it first, and get ready when your customers ask you about it. Topics include: SCE 2010 Overview, Architecture, Demo, Market Challenges, Solution, Licensing.
I've been a big fan of NetApp technologies for ages, and I've worked closely with people like Steve Winfield and Pete Mason to produce a number of videos showcasing some of the collaborative work that's gone on between Microsoft and NetApp, resulting in products like SnapManager for Hyper-V, SnapDrive 6.2 and more. We've got some fantastic joint wins on the platform now too, at both small and large customers, so it's all good from that perspective.
I'm currently building out my team's internal demo infrastructure, which currently consists of 1 Dell T605 running Hyper-V R2, with a number of System Center technologies virtualised on top, along with a cluster of 2 Dell R710s, hooked up to a NetApp FAS3050c. Now, this FAS3050c isn't the latest model, and it doesn't have the most capacity in the world (my DS14 disk shelf gives me around 570GB of usable space), but then it was kindly donated to me by NetApp, who were replacing some of their older kit with newer kit for our Microsoft Technology Center in Reading, UK. The great thing for me is, I can still run the latest version of Data ONTAP, it'll work with the latest and greatest versions of SnapDrive and SnapManager for Hyper-V, and it still gives me all the features I need, like snapshotting, thin provisioning, and best of all, deduplication. I'll be honest with you right now. I love dedupe. I think it's fantastically clever, streamlined, and because it's at the block level, rather than the file level, it'll even dedupe stuff that you think, on the surface, has no chance of being deduped. Crazy stuff. Let me explain more.
Firstly, for those of you not sure what deduplication with NetApp is, and how it works, there’s a great explanation over at the Dr DeDupe blog.
As I said, my cluster environment is 2 nodes, and to that cluster I'm presenting 4 LUNs of storage, which in my NetApp environment sit in 4 separate volumes. You don't have to do it like this, and who knows, maybe I'll change it in the future, but right now, this is how it is:
As you can see, I've got a dedicated LUN for my witness disk (I'm using Node and Disk Majority for my 2-node cluster), and 3 LUNs presented to the cluster, which have been selected to be Cluster Shared Volumes. They aren't huge: 100GB each for two of them, and a 25GB CSV that will hold the swap files of my key VMs (each host only has 12GB RAM, so having 25GB for swap VHDs is fine!). You'll see from the image above that, currently, I'm using around 51% of my CSV2. It's currently got a 40GB (ish) fixed VHD with WS2008 R2 inside, and at the same time, CSV2 also holds a dynamic VHD with Windows 7 x86 inside it, currently expanded to around 8GB. Total consumption of that CSV is 51GB:
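As an aside, if you'd rather pull those CSV sizes from PowerShell than from Failover Cluster Manager, something along these lines should do it on a cluster node. It's a sketch, assuming the FailoverClusters module and the SharedVolumeInfo/Partition property layout I've seen on 2008 R2:

# Sketch: report size and free space for each Cluster Shared Volume.
# Run on a cluster node with the Failover Clustering feature installed;
# assumes the SharedVolumeInfo/Partition properties exposed on 2008 R2.
Import-Module FailoverClusters

Get-ClusterSharedVolume | ForEach-Object {
    $csvName = $_.Name
    $_.SharedVolumeInfo | ForEach-Object {
        New-Object PSObject -Property @{
            CsvName = $csvName
            Path    = $_.FriendlyVolumeName
            SizeGB  = [math]::Round($_.Partition.Size / 1GB, 1)
            FreeGB  = [math]::Round($_.Partition.FreeSpace / 1GB, 1)
        }
    }
}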
So, that means I’ll lose 51GB on my SAN, right? Wrong! We’re actually using a grand total of 17.5GB!
If we go over to NetApp System Manager, and take a look at this particular volume, you can see for yourself:
Just think about this for a minute. Because this is block-level deduplication, we can look inside the contents of the VHD files, see where the blocks match, and deduplicate them, so in this case we've saved a grand total of 37.62GB, which amounts to 60%. Obviously Windows still thinks it's using 51GB, even though, under the covers, the SAN hasn't lost that space. This is where thin provisioning starts to help, as you can make Windows think it has more storage available to it.
Deduplication hasn't just been applied to my CSVs. Oh no. I've used it on the witness disk, where, even though the whole volume is only 1GB, and the consumption was 50MB for the quorum information, deduplication still managed to save me 10MB, which is 20%. What about my other savings? Well, on my SCVMM Library, where I'm storing a couple of VHDs, but also some ISO files, I've saved a total of 15%, and on my actual backup store, being used by Data Protection Manager 2010 to protect Hyper-V and SQL so far, I'm saving just under 39GB, which equates to 58%. These savings are real, and are enabling me to get even greater levels of consolidation on my SAN than I would have normally. Brilliant stuff, NetApp.
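If you fancy sanity-checking those percentages, the sum is straightforward, assuming the saving is reported as saved space over what the data would have consumed without dedupe (used on disk plus saved). A quick sketch, where the 28GB on-disk figure for the DPM store is just my back-calculation from the numbers above:

# Quick sanity check of a dedupe saving percentage.
# Assumes: saving % = saved / (used on disk + saved) * 100
function Get-DedupeSavingPercent {
    param([double]$UsedGB, [double]$SavedGB)
    [math]::Round(($SavedGB / ($UsedGB + $SavedGB)) * 100, 0)
}

# DPM store example: roughly 28GB left on disk (back-calculated) after saving ~39GB
Get-DedupeSavingPercent -UsedGB 28 -SavedGB 39    # comes out at roughly 58%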
Now I just need to get ApplianceWatch PRO working… :-)