You might think this would make for a really short post, but actually there’s a huge amount of free tools and resources out there, and I have had to restrict myself to a top ten across the server and client, based on what Simon and I have used. So please feel free to comment with your own, and I’ll see what we can do about maintaining a list somewhere and rewarding good suggestions.
Yes, Microsoft does have a free operating system, although it’s restricted to just running highly available virtual machines. With Hyper-V Server you are “limited” to running 8,000 virtual machines on a 64-node cluster, and you can “only” put 64 logical processors in a virtual machine. Also note that there is no graphical interface, as this OS is very like Server Core and is designed to be remotely managed. (I have a separate post just on Hyper-V Server here.)
Microsoft Security Essentials
If your organisation has fewer than ten PCs then this is the FREE antivirus for you, and it’s also free to use at home. Security Essentials uses the same signatures as System Center Endpoint Protection and has won a slew of awards for being very user friendly. You can install this on XP and Windows 7; on Windows 8 and Windows RT, Windows Defender does the same thing and is included in the box.
System Center Advisor
This is a lightweight best practice analyser for Windows Server and SQL Server environments. It uses the same agent as System Center Operations Manager to collect telemetry about your servers and then sends this every day to the Advisor service. The Advisor service then provides reports on errors and warnings you need to be aware of. It uses your own certificates, so it’s secure. Like Operations Manager, you can configure a gateway to collect the information from other internal servers and then send this daily to the Advisor service. (I have posts on how to set it up and how to use it.)
SQL Server Express
If you only need a small database server then there’s quite a lot you can do with SQL Server Express. The tools are essentially the same as its bigger brother, and you get Reporting Services if you need it to deliver rich reporting of your local database. If you don’t need all the tooling and just want a slimmed-down engine behind your application then there’s another option: LocalDB.
The Microsoft Assessment and Planning Toolkit seems to be universally ignored, despite being really useful in planning any kind of upgrade or migration, or just to make sense of what you have got already – given that you might be new in post. What it does is crawl your datacentre with various credentials you supply and tell you what you have. This might be nothing more than how many servers you have and what OS they are running, and even that, in a world of virtual machine sprawl, can be useful. However, if you were then to use it to plan a Windows Server 2012 migration project, it would give you reports and plans on how to do that and what you’ll need. It’s constantly updated, so always be sure to get the latest version.
One other thing to note: while it allows you to assess your licensing estate, the data is NOT reported back to Microsoft, so you won’t be getting loads of phone calls once you’ve run it, but you will at least know where you are.
Data Classification Toolkit
Knowing about your infrastructure is one thing; what matters more is the data that’s in it, and to make sense of that there’s the Data Classification Toolkit. Like other solution accelerators it’s continually updated, and in this case it is now aware of the latest tools in Windows Server 2012 like Dynamic Access Control (my post on getting started is here).
Please note the small print : Use of the Microsoft Data Classification Toolkit for Windows Server 2012 does not constitute advice from an auditor, accountant, attorney or other compliance professional, and does not guarantee fulfilment of your organization’s legal or compliance obligations. Conformance with these obligations requires input and interpretation by your organization’s compliance professionals.
Windows Assessment and Deployment Kit (Windows ADK)
Simon and I still see a lot of weird and wonderful ways to deploy operating systems at scale, which is odd when Microsoft has two free tools for the job, the first being the Windows ADK. Actually the ADK should count as several free things itself, as it contains a number of useful utilities such as:
Microsoft Deployment Toolkit
This is another tool that’s kept up to date; in this case it too supports scale deployment of Windows 8 and Server 2012. It may seem like an overlap with the Windows ADK, but that toolset requires knowledge of a lot of command line utilities like DISM, whereas the MDT is a UI-driven process.
The Office Environment Assessment Toolkit (OEAT) scans client computers for add-ins and applications that interact with all versions of Office back to Office 97. It’s designed for detecting compatibility issues, but I have seen it used to track down large spreadsheets, which suggests someone in your organisation is using Excel instead of a proper database. At best that might mean there are data quality issues in some of your reporting, and at worst it might mean you are storing customer and confidential information where it is not being properly controlled.
I still use three utilities from Windows Live to get key tasks done
Note that none of these run on Windows RT.
I also wanted to share my also-rans that didn’t make my top ten..
ZoomIt: a Windows Sysinternals tool to make areas of your screen bigger.
BGInfo: also part of the Windows Sysinternals tools, which shows key information about your servers on their desktops.
RDCMan to manage remote desktops; it’s not great for managing Windows 8 / Server 2012 as it’s not based on RDP 8, so the charms etc. don’t work.
And finally, for a bit of fun, Ordnance Survey Maps. I occasionally need to get out of the office and off-road. Street View is fine, but what if there are no streets and you need to get from A to B for fun on foot or by bike? In this instance Ordnance Survey maps are your friend, and they are free on Bing Maps if you are in the UK (just select Ordnance Survey from the left hand drop down list of map types), e.g.
You can print them as well if you don’t want to take your slate, tablet or phone with you.
With my job title of Evangelist I often get asked about what my role involves, both inside and outside of the Microsoft firewall. In the last two weeks I have presented to the Leeds VM User Group (www.vmug.org.uk), done a careers chat at a science college, produced and presented at TechDays Online, and attended SQLBits 11.
So it’s a lot of presenting, blogging and explaining stuff. Of course you can only blog about what you know about, so Simon and I spend a lot of time learning how Microsoft technology works, and talking to IT professionals about what they are doing and the challenges they face. I hope this gives me some credibility both online and in person, and given you are reading this, that seems to be working. Our knowledge acquisition goes on all the time, but occasionally it’s good to commit longer periods of time to it, and so we get the opportunity to go to things like TechEd in Madrid. For me this ticks all the boxes and gives me the chance to hang out with the Microsoft product teams who present at this sort of event.
As an Evangelist I don’t have to pay for my flights, hotel or entry fee, Microsoft pick all of that up for me. In return I will write posts, work up ideas for future presentations and share stories good and bad about how our products are actually being used.
The question is: do you fancy being an evangelist for a week and coming out with our team to Madrid on Microsoft expenses? If you do then you’ll want to enter the TechEd Challenge.
From here you can either enter a draw for a place or compete for one of three further places by writing a blog post to show off your evangelist skills. Full details of the prizes are here. Good luck, and hopefully see you there.
Hyper-V Server is a free operating system specifically designed to just run Hyper-V, so basically a cut-down Server Core installation of a paid-for edition of Windows Server. The cut-down bit refers to the fact that only the roles and features needed to run Hyper-V are there. However, Hyper-V itself is in no way cut down; for example, you can create clusters for running HA virtual machines (up to 64 nodes hosting 8,000 VMs), and each VM can still have up to 64 logical processors, as per the Datacenter edition of Windows Server.
So what’s the catch?
If there is one, it’s that if you want to run Windows Server in a VM it needs to be licensed, and the most efficient way to do that once you get to 6-7 VMs per host is to use Windows Server Datacenter edition, as this allows any number of guest VMs to be licensed for Windows Server as well as the host. However, if you were going to use Hyper-V to host VDI then your guests need to be licensed for Windows 7/8, and so Hyper-V Server is a good candidate. Another example is if you just want to host Linux VMs, which will run really well and are supported (depending on the flavour you are using).
I have made my usual short screencast to show you what it looks like..
Also, you might want to look at the other posts in my Evaluate This series, as Hyper-V Server is best managed remotely, and my other screencasts will show you how to do such things as live migration, VDI, replicating VMs etc., all of which are possible with Hyper-V Server.
To configure Hyper-V Server for remote access, all I did was use the built-in SConfig utility to join it to my domain, as remote management is turned on by default in Windows Server 2012, and I have Group Policy set up to allow remote desktop on all of my servers.
NIC teaming is now viable in Server Core and Hyper-V Server because it’s built into the OS, whereas in earlier versions of the server you might not have been able to install the hardware vendor’s NIC teaming software without a user interface.
Hyper-V Server, like the Server Core installation option of Windows Server, only needs half the patching of a full installation of Windows Server.
Hyper-V Server 2012 now includes PowerShell out of the box.
Finally you can get Hyper-V Server 2012 here and try it yourself and put it into production if needed.
Despite common misconceptions, Microsoft now has extensive interoperability with open source technologies: for example, you can run a PHP application on Azure, get support from us to run Red Hat, SUSE or CentOS on Hyper-V, and manage your applications from System Center. So extending this approach to the world of big data with Hadoop is a logical step, given the pervasiveness of Hadoop in this space.
Hopefully you’re reading this because you have some idea of what big data is. If not: it is basically data an order of magnitude bigger than you can store, it changes very quickly, and it is typically made up of different kinds of data that you can’t handle with the technologies you already have – for example web logs, tweets, photos and sounds. Traditionally we have discarded this information as having little or no value compared with the investment needed to process it, especially as it is often not clear what value is contained in it. For this reason big data has been filed in the “too difficult” drawer, unless you are a megacorp or a government.
However, off the back of some research by Google, an approach to attacking this problem called map reduce was born. Map is where the structure for the data is declared, for example pulling out the actual tweet from a Twitter message, the hashtags, and other useful fields such as whether this is a retweet. The Reduce phase then pulls meaning out of these structures, such as digrams (the key two-word phrases), sentiment, and so on.
Hadoop uses map reduce, but the key to its power is that it applies the concept on large clusters of servers by getting each node to run the functions locally, thus taking the code to the data to minimise IO and network traffic, using its own file system – HDFS. There are lots of tools in the Hadoop armoury built on top of this, notably Hive, which presents HDFS as a data warehouse that you can run SQL against, and the Pig (Latin) language, where you load data and work with your functions.
Here a Map function defines what a word is in a string of characters, and the Reduce function then counts the words. Obviously this is a bit sledgehammer/nut, but hopefully you get the idea. The clever bit is that each node has part of the data and the algorithm to process, and then reports back to a controlling node when it’s done with the answers, a bit like High Performance Computing and the SQL Server Parallel Data Warehouse.
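To make that word count concrete, here’s a toy sketch in Python – purely illustrative, not Hadoop code – where the map function emits (word, 1) pairs and the reduce function sums the counts per word:

```python
from collections import defaultdict

def map_words(line):
    """Map: break a line of characters into words and emit (word, 1) pairs."""
    for word in line.lower().split():
        yield (word.strip(".,!?"), 1)

def reduce_counts(pairs):
    """Reduce: sum the emitted counts for each word."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["the cat sat on the mat", "the mat sat still"]
pairs = [p for line in lines for p in map_words(line)]
print(reduce_counts(pairs))
# {'the': 3, 'cat': 1, 'sat': 2, 'on': 1, 'mat': 2, 'still': 1}
```

In Hadoop the pairs would be emitted on whichever node holds that slice of the data, shuffled by key, and reduced in parallel; this single-process version just shows the shape of the two functions.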
So where does Microsoft fit into this?
The answer is HDInsight, which is now in public beta. This is a toolkit developed in conjunction with Hortonworks to add integration to Hadoop and make it more enterprise friendly:
Big data is definitely happening; for example, there was even a special session on it at the last G8 summit, as it is such a significant technology. However, it cannot be solved in one formulaic way by one technology; rather it’s an approach, and in the case of Microsoft a set of rich tools to consolidate, store, analyse and consume. The point is to integrate big data into your business intelligence project using familiar tools, the only rocket science being the map reduce bit, and that is the specialism of a data scientist. Some of their work is published by academics, so you might find the algorithm you need is already out there – for example, the map function to interpret a tweet and pull out the bits you need is on Twitter.
However, research is going on all the time to crack such problems as earthquake prediction, emotion recognition from photographs, medical research and so on. If you are interested in that sort of thing then you might want to go along to the Big Data Hackathon on 13/14th April in Haymarket, London, and see what other like-minded individuals can do with this stuff.
In my last post and screencast I showed how Dynamic Access Control (DAC) works: the business of matching a user’s claims to the properties of a file (a Resource Property in DAC). However, the problem then becomes how to correctly tag my files so that DAC works. You shouldn’t necessarily be doing this yourself; it’s the users’ data and you are just the curator of that data. The users aren’t going to have the time or inclination to do it either, even if they are working in a compliance or regulated environment. However, they might be able to give you some rules which you could apply to the files, and this is what Data Classification does.
File Classification is part of the File Server Resource Manager (FSRM) role service and is new for Windows Server 2012; before, FSRM was just there to only allow certain file types to be uploaded, or to grant quotas to users to restrict how much, and of what, could be stored on your servers. The secret sauce is then to link the resource property you set using the classification rule to a Central Access Rule in DAC.
Hopefully this screencast shows how easy this is to do..
Things to note:
As per my previous post you’ll need your domain functional level to be Windows Server 2012.
You’ll need the FSRM role service on your file servers and these also need to be running Windows Server 2012.
The PowerShell is
Add-WindowsFeature –Name FS-Resource-Manager
and you’ll need a copy of Windows Server 2012 Evaluation Edition to try this out
I used a simple expression, “Top Secret”, in my screencast, but you can write RegEx to look for things like credit card details or NI numbers and appropriately protect those documents automatically using this technique.
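To sketch the idea (illustrative Python, not the FSRM rule syntax), a pattern for card-like digit runs is usually paired with a Luhn checksum so that random 16-digit reference numbers don’t get flagged:

```python
import re

# Rough pattern for 13-16 digit card-like numbers, allowing spaces/dashes.
CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_ok(number):
    """Luhn checksum: weeds out most digit runs that aren't real card numbers."""
    digits = [int(d) for d in re.sub(r"\D", "", number)]
    digits.reverse()
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text):
    """Return regex matches that also pass the Luhn check."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]
```

A classification rule built on a pattern like this would then set the resource property (e.g. a high-impact tag) on any document containing a match, and DAC takes it from there.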
File Classification in a production environment would typically run as a scheduled job, so to be clear this does not magically happen on the fly as users save documents onto your file servers.
Managing users’ access to the right files is a pain on any OS; the best that’s going to happen is that no one will complain about not having access to a file, while none of your sensitive company data gets into the wrong hands. In a traditional hierarchical business, life was pretty easy: you had a group called Finance and a folder with their finance documents in, you set up permissions from one to the other, and you were done. However, in a virtual-teaming, outsourcing, home-working organisation, all sorts of rules are needed to keep third parties at arm’s length from confidential data and to allow users to have different roles on different teams. Also, very few of us are good at filing; for example, how many of us properly tag our holiday photos so that we can track down our friends in all the photos we have?
Windows Server 2012 has several components in it to make this work, but key to this is Dynamic Access Control (DAC), which itself plugs into Active Directory (AD), Group Policy and File Server Resource Manager (FSRM). The Dynamic in DAC refers to the fact that whenever a user tries to access a file, their claim to do so is evaluated at the time of access. There are several parts to DAC that make this work, and in my screencast you can see this in action..
However there’s a lot going on here and so I also wanted to describe the moving parts of DAC in more detail.
Claim Types are the things we know about our users and the devices they are using, based on querying what’s in AD; for example, here I have defined the Country a user is in..
Resource Properties are the things I know about what the user is trying to access such as a file, for example I could setup a tag of Country and tag each file with one or more Countries..
Resource Property Lists are optional groups of Resource Properties that you want to keep together for a purpose, so a subset of the Global Resource Property List that is there by default in DAC. Here’s the Global Property List..
Central Access Rules allow you to define how to evaluate a claim against a Resource Property and assign permissions off the back of this. At the top of this dialog you are asked which resources (Target Resources) the rule will apply to; in my demo I have set this up so that my rules are only applied to objects that have the resource properties I am interested in already set..
Further down the dialog, under Current Permissions, I can then set the rule that I want to enforce. Here I have said the device the user is on must be running Windows 8 Enterprise to get full permissions to the resource. For this to work, AD must know the computer I am on, and in Windows Server 2012 AD this property is actually only set if I am on a Windows 8 or a Windows Server 2012 machine. So I can’t get in from an older Windows machine or if my machine is not domain joined.
I also have a rule (User-Country-Department) which says that the user’s country and department must match the country and department of the resource being accessed. This is great: I don’t have to create groups for each user, or folders to categorise departments, and fiddle with ACLs; this one rule makes that work, and provided the users’ data in AD is kept up to date and files are tagged correctly, that’s all I have to do.
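The logic of that rule boils down to a simple comparison. Here’s a tiny Python sketch of the idea (purely illustrative – DAC evaluates claims inside Windows at access time, not in a script, and the claim names here are just my demo’s):

```python
def access_granted(user_claims, resource_props):
    """Sketch of the User-Country-Department rule: grant access only when
    the user's Country and Department claims match the resource's tags."""
    return (user_claims.get("Country") == resource_props.get("Country") and
            user_claims.get("Department") == resource_props.get("Department"))

alice = {"Country": "UK", "Department": "Finance"}
report = {"Country": "UK", "Department": "Finance"}
print(access_granted(alice, report))  # True
```

The point of the sketch is that one rule covers every country/department combination; there is no group or ACL per department anywhere.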
Central Access Policies: several rules can then be combined into a single policy. In my case I have a Central Access Policy I have called Default, and this references my two rules..
This is now a policy object that can be applied like any group policy. So if I look at group policy you can see a policy called DA-FileServer-Policy that is filtered to only apply to Server1...
If I edit that and expand Computer Configuration –> Windows Settings –> Security Settings –> File System –> Central Access Policy you can see where I have referenced my Default policy..
DAC requires the AD functional level to be at Windows Server 2012. It can work in concert with traditional ACLs, but remember that the principle of least privilege applies, so if there’s an explicit deny somewhere in DAC or in an ACL, that is what will win. You’ll want to test your scenarios, and there are two tools here to help: you can set proposed permissions in a Central Access Rule as well as actually set permissions, and for a particular folder or file you can go into Properties –> Security tab –> Advanced to evaluate security. You can see what policy is applied and what is granting or blocking users’ access to objects. You can also see there’s a Classification tab from which you can see and set (depending on permissions) the resource properties for that file/folder.
I will cover off how to automatically classify files, rather than relying on manually tagging them, in my next post. In the meantime, if you want to try this you’ll need a copy of Windows Server 2012 evaluation edition and to use it to make a domain controller.
Deduplication is the business of compressing data without loss, and this is now built into Windows Server 2012 as a role service. The official marketing from us states that you will save somewhere between 20-70% of the space on your file servers if you implement this. If that sounds interesting, my screencast shows how to configure and monitor it..
The clever thing about deduplication is that it’s built into NTFS, so you can apply it to any non-system volume without the need for specialist storage. There are some caveats:
To try this yourself, all you’ll need is an Evaluation Copy of Windows Server 2012. Having got the idea, you may also want to see how well it will work on your own data. To do that, install and turn on deduplication, e.g. in PowerShell..
Add-WindowsFeature –Name “FS-Data-Deduplication”
and then copy windows\system32\ddpeval.exe and run it against a file share, volume etc. Note that this might put some load on your network, but otherwise it shouldn’t be too invasive, as it will run in the background (possibly for hours on a big volume) before telling you what you would save if you enabled this feature.
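If you’re curious what an estimate like that is based on, the essence is hashing chunks of your data and counting how often the same chunk recurs. Here’s a toy Python illustration with fixed-size chunks (the real Windows feature uses variable-size chunking plus compression, so treat this as a sketch of the idea, not the algorithm):

```python
import hashlib

def dedup_savings(files, chunk_size=4096):
    """Toy fixed-size chunk dedup: report (total bytes, bytes saved) if
    each unique chunk were stored only once."""
    seen = set()
    total = saved = 0
    for data in files:
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            total += len(chunk)
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in seen:
                saved += len(chunk)   # duplicate chunk: costs nothing extra
            else:
                seen.add(digest)      # first sighting: must be stored
    return total, saved
```

Two identical 8KB files would report half their combined size plus one chunk as savings, which is why file shares full of near-identical documents and VHDs dedupe so well.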
Finally thanks to my good friend Simon; he has done most of the legwork in setting up deduplication for our IT camps and I have shamelessly used that for the screencast.
So far in this series I have used the new storage features of Windows Server 2012 as a place to run VMs from, but there’s more to them than that. Shared storage used to mean presenting SAN storage inside a cluster, and you relied on your SAN experts to provision the storage you needed. However, with SAS/JBOD technologies coming along, it’s possible to create storage that’s still highly available without a SAN. You might still want access to some of the clever things a SAN can do, though, like thin provisioning, where you define storage you plan to use but actually haven’t got yet. So in this short screencast I show how Storage Spaces in Windows Server meets this need..
To try this out all you’ll need is one virtual machine running on one laptop and an Evaluation Copy of Windows Server 2012
I used a bunch of SCSI disks in my demo VM to build a storage space, and they were all the same size. They don’t all have to be SCSI; they could be attached via USB, SATA etc. and can be of varying size and performance. However, if you want to create a storage pool in a cluster then the disks must be SAS (serial attached SCSI). Also bear in mind that the pool will work down to the slowest disk, and not up to the fastest.
I do have a script to build my file server, which in turn relies on a configuration file to add in the roles and features I need, and it builds from a sysprepped copy of Windows Server 2012 with an answer file to join it to my Contoso domain. It also has a really useful function from Simon to rename the VM in Active Directory (so it is called FileServer1 in AD as well as that being the name of the VM in Hyper-V).
Rather than running a virtual machine or using the storage space for ordinary files, in this screencast I used it to host a SQL Server database. SQL Server 2012 has support for storing databases on SMB shares, and I have seen 200,000 IOPS in SQL Server where the database is on a remote share like this. However, the UI in Server Manager doesn’t seem to allow you to navigate across shares (have I been away from SQL Server too long?), so I did the attach from a simple SQL Server T-SQL script.
Storage spaces often raises a lot of questions at our camps so here’s a good FAQ on TechNet. If you are curious about performance my advice is to test your big idea thoroughly and check this script and whitepaper to ensure you have the optimal setup.
In my last post I showed how easy it is to create virtual desktops in Windows Server 2012, and while that’s now a core part of providing remote desktops to your users, there is still good old-fashioned Terminal Services, or to give it its modern name, Remote Desktop Services (RDS). RDS also changes quite a lot in Windows Server 2012, so I have made this short screencast to show how to set it up..
To try this yourself all you’ll need is an Evaluation Copy of Windows Server 2012
VDI and RDS are designed to complement each other:
So when to use what?
I think this comes down to efficiency and manageability. You can support far more (typically 12x) remote users with RDS than with VDI running on the same server hardware. So if possible use RDS, complemented with technologies like App-V to virtualise application delivery to your users. That way you’ll just have to maintain the few servers providing RDS and secure the user profile disks.
It may be that some or all of your users can’t use RDS because the applications they want don’t ‘like’ being run from an RDS server. In that case the next most efficient option is pooled VDI, where a virtual desktop is shared rather than being dedicated to a particular user. In this scenario you just have to manage one virtual desktop, and then control the deployment of revisions to it (which may just include patches or whole new applications). Your final option is to give your users personal virtual desktops, which means that each of these needs to be managed in exactly the same way as if they were real desktops. What’s good about VDI/RDS in Windows Server is that the users get a good experience either way, with multi-touch support, smooth video streaming and USB redirection, so they can use webcams, dongles, card readers etc.
Finally, if you are planning to do this in your organisation, I would suggest a really thorough trial, over-provisioning hardware on the server side to provide a great user experience, and providing good quality big monitors to win the hearts and minds of your users.
Microsoft is serious about Virtual Desktop Infrastructure, and the first sign of this in Windows Server is when you try to add a role or feature..
If you opt for the Remote Desktop Services installation and select Virtual Desktops, as in my short screencast, you can see that a lot of work has gone in to making this as simple as possible. However, there is more to VDI in Windows Server 2012 than a good installation experience, for example:
In this screencast I put all of the Pooled VDI virtual machines’ storage onto a highly available file server (this post shows you how I built that) and this is where my user profile disks are also stored so that no matter which physical host a user gets their pooled desktop from they will still get their own user settings.
I used a separate VM for each role in my remote desktop infrastructure; however, if you elect for a quick setup then you can have all the roles on the one physical host from which the virtual desktops will run as well.
There’s a more detailed lab guide here, and you can easily navigate to other labs from there for a quick setup as well. Either way you’ll need an Evaluation Copy of Windows Server 2012 and Windows 8.
In many of the screencasts in this series I have moved a VM around my demo setup; however, there has only ever been the one copy of it, whether it was on a scale-out file server, in a cluster or both. In any production environment you would want to augment this with additional disaster recovery techniques, including having a backup of the key virtual machines somewhere.
Replica in Windows Server 2012 is a partial answer to this. You set up a process to make an offline copy of a given virtual machine (VM) on another server and continually keep it updated. This replica VM can be updated over “UK speed” (don’t get me started!) broadband, and you can also maintain up to 4 rolling snapshots, enabling you to go back past a data error you may want to correct. This screencast shows you how to set it up..
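Those rolling snapshots are easy to picture: keep the most recent N recovery points and discard the oldest as new ones arrive. A toy sketch in Python (illustrative only – Replica manages its recovery points for you):

```python
from collections import deque

def take_snapshot(snapshots, label, keep=4):
    """Toy rolling recovery points: append the new snapshot and drop
    the oldest ones so at most `keep` remain."""
    snapshots.append(label)
    while len(snapshots) > keep:
        snapshots.popleft()
    return snapshots

points = deque()
for hour in range(1, 7):
    take_snapshot(points, f"hour-{hour}")
print(list(points))  # ['hour-3', 'hour-4', 'hour-5', 'hour-6']
```

So with four recovery points you can step back past a recent data error, but anything older than the oldest retained point is gone, which is why Replica complements rather than replaces backup.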
The replica is always off, and it’s up to you under what conditions you invoke failover; of course you can script this in PowerShell with Start-VMFailover, as well as all of the configuration for Replica I did in the screencast.
The primary and replica can each be either a cluster or an individual server.
In my demo all the servers belong to the same domain, but if that’s not the case then you can use certificate-based authentication to set this up. One use of this is that hosters are planning to offer Replica as a service, so you’ll be able to have your critical VMs replicated over the internet into their data centres.
As I briefly mention in the screencast, you can also set your replicated VM to preserve its network settings when you fail over to it in its new location.
You are going to need two hosts/physical servers to try this, and an Evaluation Copy of Windows Server 2012. It doesn’t matter what OS your virtual machine is running, but do be aware of which applications are supported for replication, e.g. SQL Server, System Center, SharePoint etc.
19 March 2013 - This post has been changed to reflect the best way to configure a cluster in a box
In my last post I used two clusters: one to host a high availability (HA) file server where I stored a virtual machine, and another cluster to run the virtual machine. The file server cluster was built from two virtual machines (VMs) and is commonly known as a guest cluster. However, to enable HA for the VM I needed to cluster two physical servers (aka my Dell Precision laptops).
What I could have tried was to put both the File Server role and the VM role into the same cluster (which would still have to be built from physical servers). However, that configuration won’t work and isn’t supported, and if you try it you’ll most likely run into Access Denied errors. For more on this, look at this post by Jose Barreto on the File Server engineering team.
I mention all this because there is a new breed of hardware appliance coming to market known as the cluster in a box: simply take two server motherboards, each with CPU, memory etc., and stick them in a 2U/4U box which also contains multiple power supplies and network cards, as well as a bunch of SAS disks, again with multiple controllers, so everything is redundant. A good example is this from Fujitsu..
So how could you use this new type of appliance to create highly available virtual machines (HAVMs) based on Windows Server 2012 and Hyper-V?
You would create Cluster Shared Volumes over the built-in storage via Storage Spaces, so that each node has access to C:\ClusterStorage\Volume[x], and then create HAVMs on top of that. I haven’t had the chance to create a screencast for that as I need a cluster in a box to do so (I’ll re-edit this post when I do), so in the meantime I would refer you to Jose’s other posts on Hyper-V over SMB 3.
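As a rough sketch of what that looks like in PowerShell (the pool, disk and VM names here are all hypothetical, and the exact steps will vary with the appliance):

```powershell
# Pool the shared SAS disks and carve out a mirrored virtual disk
New-StoragePool -FriendlyName "CiBPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
New-VirtualDisk -StoragePoolFriendlyName "CiBPool" -FriendlyName "VMDisk" `
    -ResiliencySettingName Mirror -UseMaximumSize

# (initialise, partition and format the new disk, then...)
# Add it to the cluster as a Cluster Shared Volume
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume

# Create a VM on the CSV and make it highly available
New-VM -Name "HAVM1" -MemoryStartupBytes 1GB -Path "C:\ClusterStorage\Volume1"
Add-ClusterVirtualMachineRole -VMName "HAVM1"
```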
If you are experimenting with Windows Server 2012 you can get the evaluation edition here.
But you may be wondering why you would bother, as this seems to be needlessly adding another layer of complexity and another potential source of problems rather than just using a SAN. The answer is that an HA file server doesn’t have to be built on top of a SAN; it can be built on whatever disks you have, including JBOD (Just a Bunch Of Disks) and shared SAS (Serial Attached SCSI) disks. Hardware vendors are bringing out these cluster-in-a-box appliances (two servers, SAS storage, multiple controllers and network interfaces), and a collapsed cluster like this is an ideal way to set them up to run lots of VMs in a small business that wants to run its own infrastructure.
The two roles (the VMs and the storage) can’t run on the same node (as per this post by Jose Barreto, a colleague of mine on the File Server team), but if you are doing maintenance on one node in a two-node cluster then they will have to be.
I used a small disk as a quorum disk, which is needed to decide which node “owns” the cluster after a node fails: the answer being the one that has ownership of the quorum disk.
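A sketch of how that quorum can be set from PowerShell, assuming the small disk shows up in the cluster as “Cluster Disk 2”:

```powershell
# Use node-and-disk-majority quorum with the small disk as the witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"

# Confirm the resulting quorum configuration
Get-ClusterQuorum
```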
Nodes in a Windows Server cluster need to be members of the same domain. Does this mean you have to have a physical domain controller outside the cluster in case of a cluster failure? No: clusters in Windows Server 2012 will start without one, but remember the nodes need to find each other, so you will need things like fixed IP addresses and an etc/hosts file on each node so this can happen before your DNS and DHCP infrastructure comes up. You could also run a DC as a non-HA VM on each node of the cluster; these only need modest resources (512MB RAM, 10GB disk, etc.)
While I used the evaluation edition of Windows Server 2012, I could have built all of this using the free Hyper-V Server 2012. You would still need to license any operating systems in the VMs, but you can build collapsed cluster/cluster-in-a-box solutions for production with this edition.
Server virtualisation is all about decoupling the operating system from the hardware it’s running on, and one of the benefits of doing this is to ensure that a virtual machine (VM) can be made resilient to any underlying hardware failure. In the world of Hyper-V this is achieved by building a Windows Server cluster and adding the VM as a role in that cluster. From Windows Server 2008 R2 this also gives the benefit of moving the virtual machine between nodes of the cluster without stopping it (known as live migration).
In Windows Server 2012 you still need to use a cluster to make virtual machines highly available, but you also have the option to build a cluster without any shared storage, using a file share to host the virtual machine’s storage and metadata. This screencast shows how that works..
Things to note:
This builds on two other posts in this series:
What I have done here illustrates the technology for high availability in Windows Server 2012 and is not a high availability solution itself: the high availability file server is running on two virtual machines, but these are connecting to an iSCSI target that isn’t highly available itself, and I have no redundant network infrastructure.
As with several of my screencasts, it’s a SQL Server 2012 VM that is being migrated around. I run my Resource Governor demo application on the VM while it’s being migrated, as this enables me to max out the CPU on the VM and show that migration doesn’t significantly slow this process and certainly doesn’t stop it. I also use remote desktop to connect to the VM, because if I used the VM console it would drop during migration: the console connects to the VM via the host, and of course the host changes during the migration.
To try this yourselves you’re going to need at least two physical hosts (laptops/servers etc.) as well as Windows Server 2012.
In some smaller organisations virtual machines (VMs) often run on local storage (DAS, Direct Attached Storage) on the hosts, whereas in bigger businesses many if not all production VMs are hosted on shared storage (e.g. a SAN), so the virtual machine executes on a given host but the virtual hard disk and VM metadata reside elsewhere. In either case there might be times when you want to move the storage for a virtual machine but leave it running on its current host. For example, you might want to move a virtual machine from DAS to a SAN as it becomes more critical to the business, or you are upgrading or replacing a SAN. This is now a simple process in Windows Server 2012 and you can leave the machine running while you do it, as you can see in this short screencast..
where my poor SQL Server 2012 VM gets moved around my demo rig while running a complex query again and again.
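Storage migration can also be kicked off from PowerShell. A minimal sketch, assuming a hypothetical VM called SQL2012 and a destination share that already has the right permissions:

```powershell
# Move a running VM's virtual hard disks, snapshots and configuration
# to a new location while the VM keeps running on the same host
Move-VMStorage -VMName "SQL2012" -DestinationStoragePath "\\FileCluster\VMs\SQL2012"
```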
This screencast moves the VM to a highly available file share, which I created in an earlier post in this series. Note that that file share is specifically designed and configured to host running VMs using the new SMB 3 capabilities in Windows Server 2012, as opposed to storing conventional files or running as an NFS file share.
Permissions to that share are granted to the hosts running Hyper-V; in my case I created a group called Hyper-V Servers to put my hosts in and assigned permissions to that group.
In earlier versions of Windows Server you needed to build a cluster with shared storage (i.e. a SAN) if you wanted to move a virtual machine from server to server without stopping it (known as live migration in Hyper-V). In Windows Server 2012 you just need to configure live migration on each of the servers, as per this screencast..
But why does this matter? In a word: agility, particularly for smaller businesses who don't have the budget or expertise to run a SAN and for whatever reason want to manage their services in house rather than use the cloud. Key services can be moved around as needed without stopping them, and this means that planned maintenance tasks can be carried out during the working day.
Setting this up is really easy, and we usually get our delegates at our IT Camps to pair up and do this using their own laptops without too many problems. If you have two desktops/laptops lying around you can get an evaluation copy of Windows Server 2012 and follow along.
The number of simultaneous live migrations you configure is up to you, but if you only have limited networking you’ll set this low, as you don’t want to interfere with access to the VM if that traffic is on the same network (you can set live migrations to use specific IP addresses).
You can use CredSSP or Kerberos (i.e. the host machines are in the same domain) to set up the trusts between the hosts for this to work. Note that the domain of the virtual machine itself isn’t relevant.
There is no high availability here: if the host running the VM stops working so does the VM, and if the host suffers a disk crash the virtual machine will be gone as well, so this technique just helps with planned maintenance.
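For reference, the same configuration can be scripted. A sketch assuming two domain-joined hosts, a dedicated migration network, and a hypothetical VM called SQL2012:

```powershell
# On each host: turn on live migration, cap simultaneous migrations,
# pick Kerberos authentication and restrict migration traffic to one network
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 2 `
           -VirtualMachineMigrationAuthenticationType Kerberos
Add-VMMigrationNetwork 192.168.10.0/24

# Then a shared-nothing move of a running VM, including its storage
Move-VM -Name "SQL2012" -DestinationHost "HOST2" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\SQL2012"
```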
Storage in Windows Server 2012 is about more than firing up a few file shares and setting up security on them. With SMB 3, file shares can now be used to host high performance application data such as running Hyper-V virtual machines and SQL Server databases. Speed is one thing, but reliability is what really matters, and in the server world that means high availability; in Windows Server 2012 you can now create a file server role in a cluster. The nodes (up to 8 for this role) need to see some sort of shared storage, but not necessarily a SAN. As you can see in this short screencast it’s a simple exercise, and the share can be configured for a variety of uses, including NFS.
This role also uses the Cluster Shared Volume (CSV) technology introduced in Windows Server 2008 R2 for live migrations in Hyper-V. In fact you must use CSV if you are using the share for application data, as this hands off read access to another node in the cluster fast enough to enable continuous availability.
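A sketch of creating such a share from PowerShell; the role name, share name and group name are my own examples:

```powershell
# Add the clustered file server role for application data (scale-out file server)
Add-ClusterScaleOutFileServerRole -Name "SOFS1"

# Create a folder on a Cluster Shared Volume and share it as continuously
# available, granting the Hyper-V hosts full access
New-Item -Path "C:\ClusterStorage\Volume1\VMs" -ItemType Directory
New-SmbShare -Name "VMs" -Path "C:\ClusterStorage\Volume1\VMs" `
    -FullAccess "CONTOSO\Hyper-V Servers" -ContinuouslyAvailable $true
```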
In my demo I did everything in VMs except for having an iSCSI target on my host, and while this is great for showing it working and evaluating the technology, it really gets interesting if you create something called a collapsed cluster. The hardware vendors are working on a cluster in a box, where several computers in a commodity racked box are cross-wired to SAS/JBOD disks. You could then create a clustered file share on this and put your virtual machines into the same cluster, and then they too would be highly available, as there would be no single point of failure in this box, and you wouldn’t need a SAN to do it. I’ll show you how this works in a separate video.
In my last video I showed you how to use Server Manager to manage lots of servers, and given that you want to do that, there is less of a need to have all the management tools on every server. So in this screencast I wanted to show you how to rip parts of the interface out of Windows Server to create a minimal UI (known unofficially as MinShell).
Server Core is still an install option in Windows Server 2012, but you can now add in the full or partial UI post install. However, to do that you’ll need access to the Windows Server install media (specifically the sources\sxs folder), as the binaries for the UI won’t have been copied as part of the Server Core install process.
Because MinShell is simply a feature removal rather than an installation option, you can enable/disable the full user interface whenever you want to.
Easier patching and the reduced attack surface of MinShell are the key benefits of doing this, while another benefit for servers that aren’t in data centres is that casual users won’t be able to go surfing on them, as Internet Explorer won’t be there, and nor will other tools like Explorer.
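If you want to script this rather than click through Server Manager, this is roughly what it looks like (the media path is an example, yours will vary):

```powershell
# Strip out the shell but keep the graphical management tools (MinShell)
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# To go back to the full UI later, point at the sources\sxs folder on the media
Install-WindowsFeature Server-Gui-Shell -Source "D:\sources\sxs" -Restart
```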
If you want to try this you just need to download an evaluation copy of Windows Server 2012
My favourite feature in Windows Server 2012 is its ability to manage and be managed. For those of you who aren’t yet PowerShell fans, this means Server Manager, and that's the main thing I am using in this screencast..
In a small business you could get away with just using Server Manager and PowerShell to manage your servers, and in my opinion you’ll be fine where the number of servers you have (physical plus virtual) is less than about a hundred. However, when you get close to that you need to start thinking about dedicated management tools like System Center 2012.
You can manage older servers (back to Windows Server 2008 & 2008 R2), but you’ll need to pull down the Windows Management Framework 3.0, and then run winrm quickconfig if you aren’t already remotely managing them.
You might want to run all these tools from Windows 8 rather than connecting to a server via remote desktop, in which case you’ll want to download the Remote Server Administration Tools (RSAT). Before you ask: RSAT for Windows Server 2012 can only be installed on Windows 8, in the same way that RSAT for 2008 R2 only works on Windows 7.
Having looked at how NIC teaming makes the best use of your network card in my last video I wanted to explore another networking feature in Windows Server 2012, DHCP Failover..
You might be wondering why this matters, given that split scopes have been around for ages and you can also create a DHCP role in a Windows cluster. The clustering option requires shared storage to work, whereas DHCP failover in Windows Server 2012 needs nothing more than a shared secret (a password). Split scopes might mean you run out of IP addresses on one node, and there is no high availability as such, whereas with DHCP failover you can either split the IP address allocation or use one of the servers as a hot standby.
Given that you can only have two servers in a DHCP failover configuration like this, I would see it as a great solution for smaller organisations who can’t justify running a cluster for this workload but need some resilience for key services like DHCP.
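Setting up the relationship is essentially a one-liner per scope; a sketch with made-up server names, scope and shared secret:

```powershell
# Create a 50/50 load-balanced failover relationship for one scope
Add-DhcpServerv4Failover -ComputerName "dhcp1.contoso.com" -Name "DHCP-FO" `
    -PartnerServer "dhcp2.contoso.com" -ScopeId 192.168.1.0 `
    -LoadBalancePercent 50 -SharedSecret "P@ssw0rd!"

# For the hot-standby mode instead, swap the load balance setting for
# something like: -ServerRole Active -ReservePercent 5
```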
If you want to try it, do refer to parts 1 & 2 of this series to set up your demo rig, and for more details on DHCP failover you may also wish to check out TechNet here.
In the third in this video series I wanted to show you NIC Teaming..
which is how you can present a single network interface built from multiple interfaces, even if these are from different manufacturers. That's why in the video below I have used some Belkin USB Ethernet adapters combined with the on-board network card on my laptop.
If you want to try this yourselves, you would install the Hyper-V role on Windows Server 2012 (possibly using the introduction to this series as a guide, as well as the introduction to Hyper-V I published yesterday). You would then create several internal virtual switches in Hyper-V and create a new virtual machine (VM) with several network cards bound to those virtual switches. There’s one property you’ll want to change in the settings for the VM so that NIC teaming works properly, which is to set the network adapter in the VM to be allowed to take part in NIC teaming:
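Both steps can also be done from PowerShell; a sketch in which the team name, adapter names and VM name are all examples from my rig:

```powershell
# Team the on-board NIC with the USB adapters into one logical interface
New-NetLbfoTeam -Name "DemoTeam" -TeamMembers "Ethernet","Ethernet 2","Ethernet 3" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Allow the VM's virtual network adapters to participate in a team in the guest
Set-VMNetworkAdapter -VMName "TeamDemoVM" -AllowTeaming On
```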
The two key things to remember about NIC teaming from this are:
For further reading there’s a deployment guide you can download here
Finally if you haven’t got a TechNet subscription and want to try this yourselves you can get an Evaluation Edition of Windows Server 2012.
Hopefully you will have read the introduction to our Evaluate This series and are now ready to start having a look at how stuff works in Windows Server 2012. The obvious place to start is Hyper-V, as in subsequent videos in the series we’ll need a number of virtual machines, and not everyone knows how to do this in Hyper-V. Some of you might be new to Hyper-V because you are a DBA or a VMware expert, for example, so hopefully this video will help..
It did occur to me that you may want to try Hyper-V in Windows 8 as well, and I didn’t cover that off in this video, so from the Start menu type “programs” and look for Programs & Features in Settings..
and then select Turn Windows features on or off from there..
and select Hyper-V..
You’ll need to reboot and make sure your BIOS is set up to support virtualisation, and even then this may not work if your CPU doesn’t support SLAT (Intel = EPT, AMD = NPT); you can test for this using Coreinfo (part of Windows Sysinternals) if you’re not sure.
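The same feature can be enabled from an elevated PowerShell prompt, and Coreinfo will confirm SLAT support first; a sketch (Coreinfo is a separate Sysinternals download):

```powershell
# Check for SLAT support (look for EPT/NPT marked with * in the output)
.\coreinfo.exe -v

# Enable Hyper-V on Windows 8, then reboot
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```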
1. This is just an introduction to Hyper-V to help you set up your first basic virtual machine, as I want to keep each video as short as possible; there will be others in the series to show you the more advanced features.
2. In the video I use a sysprepped copy of Windows Server 2012 as a parent disk to create a new differencing disk to use in my virtual machine, so please refer to the introduction to this series for details on how to make that. If you haven’t got a TechNet subscription you can use the Windows Server 2012 evaluation edition.
3. If you do want to try this in Windows Server 2012 and you haven’t got an MSDN/TechNet subscription there’s a 180 day Evaluation Edition of Windows Server 2012.
The main reason my blog has been totally neglected for the last few months is because Simon and I have been on tour doing IT Camps. Despite the number of these we have done, there will be many of you who haven’t been able to attend, because of time constraints or because you simply didn’t realise we were doing them. To correct this, Simon and I are going to record some of the best demos we have done and publish them over the coming weeks. The series is called Evaluate This!, for the simple reason that we want you to try some of this out yourselves, just like we encourage you to do if you come along to a camp.
Setting these up yourselves allows you to skill up, and work out how and if these new features will work in your organisation. Trying out these new features will also help you prepare for the exams if you want to get certified.
We have tried to engineer these demos to be run on a single laptop/desktop running Hyper-V, either inside Windows Server 2012 or on Windows 8. The exceptions are where the demos show advanced features like the virtual machine migration options and Hyper-V Replica.
To get you started, particularly if you’re new to Hyper-V, we need to introduce you to how to build a lab setup. There are actually some good resources on TechNet for this, but I also wanted to show what to do to get to that point, because those guides assume you have your virtualisation set up and know how to build and configure virtual machines.
Build your Virtual Machine Host for demos using Boot to VHD
Rather than fiddling around with partitions, Windows 7/2008 R2 and later allow you to boot from a Virtual Hard Disk (VHD) rather than a real disk. You can have multiple VHDs, each with its own operating system, and each will have a corresponding entry in the BCD on your system. Each of these can be copied around and restored if things go wrong. So let's get started..
Download Windows Server 2012 and start to install it. To do this you might want to use the Windows 7 USB/DVD download tool to make a bootable USB stick from the ISO file. Start the installation, and as soon as you have a dialog box up in the Windows Preinstallation Environment (WinPE), stop! Hit SHIFT-F10 and this will bring up a command line.
Run diskpart and then list volume; this will help you identify which drive you want to use to host your VHD, e.g. drive D:
create vdisk file="D:\SysPrep.VHD" type=expandable maximum=20000
this creates a VHD, SysPrep.VHD (you can call yours whatever you want), that is dynamically expanding and 20GB in size (the maximum is specified in MB)
select vdisk file="D:\SysPrep.VHD"
attach vdisk
this selects and then attaches the VHD so that the installer can see it as a disk
Now you can go back to the installation environment and continue, installing the operating system to your new volume.
When you have completed the installation you will want to find any drivers you need to get the display working properly, as well as your various network cards; I find that the Windows x64 drivers are generally OK if devices aren’t detected automatically. You may also wish to add the Remote Server Administration Tools feature to your new deployment, so you have the tools to manage all the new features in Windows Server 2012 as well as the Hyper-V role (which needs a reboot).
Once you have your new installation the way you like it, sysprep it (c:\windows\system32\sysprep\sysprep.exe), setting it for an OOBE experience and to shut down (not restart).
Boot the machine from the installation media again, press SHIFT-F10 again to get a command prompt, and run diskpart.
create vdisk file="D:\Boot.VHD" parent="D:\SysPrep.VHD"
this creates a differencing VHD, Boot.VHD, with the SysPrep.VHD you created earlier as its parent
select vdisk file="D:\Boot.VHD"
attach vdisk
list volume will help you identify which drive your new VHD has been mounted to, e.g. drive V:
bcdboot V:\Windows
this creates a new boot entry that will boot from the differencing disk you have made (substitute the drive letter your VHD was mounted to)
Reboot the machine and select the top boot option. The machine will come out of sysprep, and all the changes this makes will be written into the differencing disk, leaving the parent disk unchanged (still in its sysprepped state). Go into the system configuration of your machine and remove the second boot option (the one that points to the sysprepped VHD we started with) to ensure you don’t ever boot into that.
With this setup you can use SysPrep.VHD as a parent for your VMs, and if you do one more thing..
copy Boot.VHD Boot-Backup.VHD
you can get back to a sysprepped state by copying Boot-Backup.VHD over Boot.VHD. You could also back these files up to an external drive and copy them back in, or onto another machine to repeat the process. I also have a quick introduction to other BCD-related commands here which may be of interest.
Anyway in subsequent posts I’ll go through what you can do now you have this setup.
I was invited to attend the EMEA Dell PartnerDirect conference in Madrid last week, specifically to represent Microsoft alongside VMware at a discussion about consumerisation, hosted by Dell Wyse. Much has been written about this and the decline of laptop sales as other form factors such as phones and tablets go from strength to strength, so I don’t intend to paraphrase it.
However, one question from the floor got me thinking, and it was about the cost and speed of internet connectivity while we are out of the office. Simon and I have a lot of experience of this when we are trying to run our camps, despite trying to arrange connections in advance and paying considerable sums for them. We can also get stuck when we are just trying to do our other work in hotel rooms, at service stations and in departure lounges.
So for many of our camps we have our demos with us, and for me this means my mighty “Dell-asaurus”, a bright orange laptop (M6500) with 32GB of RAM, 3 x SSDs, etc. In fact we normally have several of these beasts to show off things like virtual machine mobility in Windows Server 2012, rather than rely on the servers we have back at the office. However, if I am lucky enough to get a decent connection then I can get mail, chat on Lync and, best of all, get back to the office file shares and sites with DirectAccess, because we have standardised on Windows 8 clients with Windows Server 2012 servers.
So my advice is to pray for the connected cloud but plan to use a disconnected device like a PC.
However, unless you want to show 20 virtual machines running all at once, you don’t need to lug around a huge laptop to work offline. You could simply carry a properly configured (and encrypted) memory stick which you can boot from on any Windows 7 or 8 compatible PC. To find out about that, and the other things you can do if and when your remote workforce have a connection to the office, you’ll need to come to our latest round of Windows 8 IT Pro camps, which will be focused on Windows 8 in the enterprise. That also means we’ll be showing you the client-aware features of Windows Server 2012 that we left out of our last round of server camps, such as DirectAccess, BranchCache, VDI, Dynamic Access Control, etc., so you might also need a laptop if you want to evaluate those (note you can download a Windows Server 2012 trial here).
Finally, if we get good internet at our camps, Simon also plans to show you how to work with the Windows 8 store and PC management using Windows Intune.
Simon and I are doing so many events that frankly our blogs are nearly dying of neglect so I am not even sure if anyone is out there reading this!
I am not apologising because it’s great to be out there meeting people at IT Camps, launch events etc. However not everyone can get to one of those, either because there are leaves on the line or your boss can’t let you out of the office and there’s no budget for travel anyway.
Also our IT Camps fill up really fast, on one occasion in just ten minutes after publication. To overcome this we have been asked if we can record them, but to be honest our unplugged style makes each camp very different, e.g. what the audience ask us about, which of our demos work as planned and how much SQL Server Simon lets me talk about.
Anyway Marcel our intern has put his foot down and ordered us to do a version of our camps for TechDays Online later this month. So all you need is a comfy chair (but not too comfy), a latte and a laptop. If you watch us live on the day you can ask us questions but if not we will be recording it for posterity.
TechDays will be spread over two days with a half day on each topic about all the stuff we think is jolly exciting..
Day One (30th October)
Day Two (31st October)
In the meantime, download the evaluations of Windows 8, Windows Server 2012 or System Center 2012 SP1, or tune into MVA so you can ask better questions.
I am being asked more and more often about how to move virtual machines onto Hyper-V and so I wanted to do a definitive post on the tools and techniques to do this. Whatever your reasons for doing this you’ll want to ensure your users have a good experience post migration and the secret to this is to prepare and plan. So step one is to understand what is to be migrated such as the spec of the physical servers and for each virtual machine:
These kinds of things can all be discovered by using the Microsoft Assessment and Planning Toolkit (MAP), a free download designed to accelerate your migrations and deployments of any Microsoft technology. It crawls your datacentre with credentials you supply, so that it can discover such things as what’s running on your VMware servers, the specs of the servers (both physical and virtual), and precisely what OS and software is running on each VM, to help you plan migrations and upgrades or just to keep on top of what you have as part of an audit. Note that MAP is continuously updated and already supports migrations to Windows Server 2012, so make sure you pull down the latest version.
However, there are things that even MAP doesn’t tell you. For example, VMs are often combined to provide n-tier services like SharePoint, and it is the overall performance of that service you’ll want to capture, as well as how high availability, disaster recovery and backup are managed. It’s also important to understand who is responsible for each of these services and their business impact.
Actually, the easy bit of the process is the conversion of the virtual/physical machines themselves; there are several tools out there to convert virtual machines to Hyper-V.
Being ready for Hyper-V is also about the IT guys understanding it. This should just be a conversion process: understanding that the job is the same but is achieved in a different way using different tools. My top three training tips for getting started with Hyper-V in Windows Server 2012:
I would also recommend getting certified, and MCSE: Server Infrastructure or MCSE: Private Cloud are the two to consider (the latter includes exams on System Center 2012).