Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

  • Lab Ops part 9 - an Introduction to Failover Clustering

    For many of us Failover Clustering in Windows still seems to be a black art, so I thought it might be good to show how to do some of the basics in a lab and show off a few of the new clustering features of Windows Server 2012 R2 in the process.

    Firstly, what is a cluster? It's simply a way of getting one or more servers in a group to provide some sort of service that will continue to work in some form if one of the servers in that group fails. For this to work the cluster gets its own entry in Active Directory and in DNS so the service it's running can be discovered and managed.  Just to be clear: all the individual servers in a cluster (known as cluster nodes) must be in the same domain.

    So what sort of services can you run on a cluster? In Windows Server 2012R2 that list looks like this..

    [Screenshot: cluster roles]

    Note: Other roles from other products like SQL Server can also be added in. Notice too that virtual machines (VMs) are listed here and running them in a cluster is how you make them highly available. 

    All but one of these roles can be run on guest clusters, that is, clusters built of VMs rather than physical servers; it is also possible to have physical hosts and VMs combined in the same cluster – it's all Windows Server after all. The exception is when you want to make VMs highly available: that cluster must only contain physical hosts, and is known as a host or physical cluster.

    Making a simple cluster is easy; it's just a question of installing the failover clustering feature on each node and then joining them to a cluster. When you add in the feature you can add in Failover Cluster Manager, but if you have been following this series you'll have access to this on your desktop, as the point of Lab Ops is to remotely manage where possible and also make use of PowerShell.  So in my example I am going to create 2 x VMs (fileserver2 & 3) from my HA FileServerCluster Setup local.ps1 script, which adds in the clustering feature (from this xml file).

    Note: my SSD drive where I store my VMs is E:, so you will need to edit my script if you want to follow along.

    Having run that I could then simply run a line of PowerShell on one of those servers to create a cluster..

    New-Cluster -Name HACluster -Node FileServer2,FileServer3 -NoStorage -StaticAddress 192.168.10.30

    Note the –NoStorage switch: this cluster has just got my two nodes in it and that's it.  For some clustering roles, such as SQL Server 2012 AlwaysOn, this is OK, but most roles that you put into a cluster will need access to shared storage, and historically this has meant connecting nodes to a SAN via iSCSI or Fibre Channel.  This applies to host and guest clusters, but for the latter the guest VMs will need direct access to this shared storage. That can cause problems in a multi-tenant setup such as a hoster's, as the physical storage has to be opened up to the VMs for this to work, and even if you don't work for one of those, it will at least cut across the separation of duties in many modern IT departments.

    There's another problem with this cluster: it has an even number of objects in it. If one of the nodes fails it will restart and think it "owns" the cluster, but the other node already has that ownership, and problems will occur.  So for most situations we want an odd number of objects in our cluster, and the "side" of a cluster that has the majority after a node failure will own the cluster.  This democratic approach to clustering means that no one node is in charge of the cluster, and this enables Windows Server to support bigger clusters than VMware, which uses the older style of clustering that Microsoft abandoned after Windows Server 2003.
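
    Once the cluster exists you can check how quorum is configured and how the votes are spread using the failover clustering module; a quick check along these lines (using the cluster name from the example above):

    #Inspect the quorum model and each node's vote
    Get-ClusterQuorum -Cluster HACluster
    Get-ClusterNode -Cluster HACluster | Select-Object Name, State, NodeWeight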

    So having built a simple cluster I need to add in more objects and work with some shared storage.   If you look at the script I used to create FileServer2 & 3 there are a couple of things to note:

    I have created 7 shared virtual hard disks, attached to both FileServer2 and FileServer3, and if I run this PowerShell

    Get-VMHardDiskDrive -VMName "FileServer2","FileServer3" | Out-GridView

    you can see that..

    [Screenshot: the shared disks attached to both VMs]

    Also, if I look at the settings of FileServer2 in Hyper-V Manager there's a switch to confirm these disks are shared ..

    [Screenshot: shared VHDX settings]

    This is new for Windows Server 2012 R2, and for production use the shared disks (only VHDX is supported) must be on some sort of real shared storage. However there is also a spoofing mechanism in R2 to allow this feature to be evaluated and demoed, and this is on line 272 of my script ..

    start-process "C:\windows\system32\FLTMC.EXE" -argumentlist "attach svhdxflt e:"

    Note: you'll need to rerun this after every reboot of your lab setup.
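
    For reference, attaching a disk as a shared VHDX can also be done in PowerShell; a minimal sketch, assuming a VHDX that already exists on suitable storage (in the 2012 R2 cmdlets the relevant switch is -SupportPersistentReservations):

    #Attach an existing VHDX to a VM as a shared disk - the path here is illustrative
    Add-VMHardDiskDrive -VMName FileServer2 -ControllerType SCSI -Path "E:\Shared\Data1.vhdx" -SupportPersistentReservations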

    Now that my two FileServer VMs are joined in a cluster (HACluster), I can use one of these shared disks as a third object in the cluster. To do that I need to format it, add it as a cluster resource, and then declare its use as the quorum disk; I'll be running this script from one of the cluster nodes..

    #Set up (initialize, format etc.) the 1Gb quorum disk
    $ClusterName = "HACluster"
    $QuorumDisk = get-disk | where size -EQ 1Gb
    set-disk $QuorumDisk.Number -IsOffline 0
    Initialize-Disk $QuorumDisk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $QuorumDisk.Number -DriveLetter "Q" -UseMaximumSize
    Format-Volume -DriveLetter "Q" -FileSystem NTFS -NewFileSystemLabel "Quorum" -Confirm:$false
    #Add the quorum disk to the cluster, and then set the quorum mode on the cluster
    start-sleep -Seconds 20
    Get-ClusterAvailableDisk -Cluster $ClusterName | where size -eq 1073741824 | Add-ClusterDisk -Cluster $ClusterName
    $Quorum = Get-ClusterResource | where ResourceType -eq "Physical Disk"
    Set-ClusterQuorum -Cluster $ClusterName -NodeAndDiskMajority $Quorum.Name

    Using a disk like this as a third object is only one way to create that odd number of objects (called a quorum); you could instead have three physical nodes, or use a file share. In my case, if a node fails the node that has ownership of this disk will form the cluster, and the failed node will then re-join the cluster rather than try to rebuild an identical cluster when it is recovered.
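
    For example, if you would rather use a file share witness than a disk witness, the same cmdlet covers that too; a minimal sketch, with an assumed share path:

    #Use a file share witness instead of a disk witness
    Set-ClusterQuorum -Cluster HACluster -NodeAndFileShareMajority "\\FileServer1\Witness"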

    That’s where I want to stop as there are several ways I can use a basic cluster like this and I’ll be covering those in individual posts. 

    If you want to build the cluster I have described so far, all you need is an evaluation edition of Windows Server 2012 R2, and to go through part 1 & part 2 of this series.

  • Lab Ops part 8 –Tidying up

    If you have been to one of our IT Camps you'll know it's all live and unstructured, which can mean things go wrong, typically because we didn't tidy up properly after the previous event. The problems of repeatedly creating VMs are that:

    • The VM still exists in Hyper-V

    • The files making up the VM are still on disk

    • The VM is registered in Active Directory (we don't currently recreate the main domain controllers for each camp)

    So the sensible thing is to script the tidy-up before I run each demo again; in fact my plan is to have a clean-up section at the start of every demo script so I never forget to run it.  My advantage over some of the PowerShell gurus is that I am starting from scratch and can leverage the power of the later versions of PowerShell in Windows Server 2012 & R2. For example, to delete a DNS record there is now Remove-DnsServerResourceRecord, where before you would have had to use complex WMI calls.

    My clean-up code needs to run without errors whatever state my rig is in, so where possible I test for objects before I delete them.  Sadly I found one problem here, and that was with the DNS cmdlets – Get-DnsServerResourceRecord returns an error if the record is not there, which I think is wrong, but at least the command is easy to use.
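
    The workaround is simply to suppress that error and only delete what was actually found; a sketch of the pattern, with an assumed record name:

    #Guard against the error Get-DnsServerResourceRecord throws for a missing record
    $record = Get-DnsServerResourceRecord -Name "FileServer1" -ZoneName "contoso.com" -RRType A -ErrorAction SilentlyContinue
    if ($record) {$record | Remove-DnsServerResourceRecord -ZoneName "contoso.com" -Force}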

    Anyway here’s what I do..

    #1. Delete the file server VM. Note that before a VM can be deleted it must be shut down, no matter what state it's in when the script is run

    #$vmname holds the name(s) of the VM(s) to remove
    $vmlist = get-vm | where vmname -in $vmname
    $vmlist | where state -eq "saved" | Remove-VM -Verbose -Force
    $vmlist | where state -eq "off" | Remove-VM -Verbose -Force
    $vmlist | where state -eq "running" | stop-vm -verbose -force -Passthru | Remove-VM -Verbose -Force

    #2. Get back the storage. My scripts to create VMs put all of the metadata and virtual hard disks in the same folder, so once the VM has been deleted in Hyper-V, I can just delete its folder.

    $VMPath = $labfilespath + $VMName
    If (Test-Path $VMPath) {Remove-Item $VMPath  -Recurse}

    #3. remove FileServer1 from AD if it's there already.
    Get-ADComputer -Filter {name -eq $VMName} | Remove-ADObject -Recursive -Confirm:$false

    Notes on the above..

    • the code to delete the VMs makes extensive use of the pipe ("|"), perhaps the most important part of PowerShell.  What crosses over this pipe is not text but the actual instance of the object being referenced. For example the VM's properties include the host it is running on, so this doesn't need to be specified.
    • Another good example of how much easier PowerShell 3 is, is the much simpler where clause – we don't need the {$_.property} syntax anymore.
    • I got the code to remove the entry for the VM from Active Directory by grabbing it from the Active Directory Administrative Center, which is not only a one-stop shop for AD but shows you the PowerShell for each command you run in it, for exactly this purpose..

    [Screenshot: Active Directory Administrative Center]

    • If you want to delete a cluster in AD you'll need to turn off the switch that stops it being accidentally deleted ..

    Set-ADObject -Identity:"CN=HAFILECLUSTER,CN=Computers,DC=Contoso,DC=com" -ProtectedFromAccidentalDeletion:$false -Server:"Orange-DC.Contoso.com"

    • I haven't got any code here to delete DNS entries, as creating VMs the way I do with static IP addresses doesn't need this. However later in this series I'll be creating clusters and scale-out file servers and that's when I need to do this..

    invoke-command -ComputerName Orange-DC -ScriptBlock {Remove-DnsServerResourceRecord -Name "HAFileCluster" -ZoneName "contoso.com" -RRType A -Force}
    invoke-command -ComputerName Orange-DC -ScriptBlock {Remove-DnsServerResourceRecord -Name "HAFileServer" -ZoneName "contoso.com" -RRType A -Force}

    where HAFileCluster is my actual cluster and HAFileServer is the scale-out file server role running on that cluster.  I am invoking this command remotely as I am running it from a cluster node where I don't have the DNS Server PowerShell cmdlets, as that role is not installed on the nodes.

    As ever I hope you find this stuff useful, and please let me know if you have any comments. In the meantime I am at TechDays Online for the rest of the week.

  • Lab Ops - Stop Press Windows Server 2012R2 Evaluation edition released

    I have just got back to my blog after a few days at various events and I see the Evaluation edition of Windows Server 2012 R2 has been released.  I need this for my lab ops because I am building and blowing away VMs for er… evaluations, and I don't want to have to muck about with license keys. For example I have a script to create the FileServer1 VM, but if I use the media from MSDN for this and I don't add a license key to my answer file, the machine will pause at the license key screen until I intervene.  Now I have the Evaluation edition I can build VMs that will start automatically, and while they are running continue to configure them. For example, for the FileServer1 VM I created in earlier posts in this series, I can add a line to the end of that script which will run on the VM itself once it is properly alive after its first boot..

    invoke-command -ComputerName $VMName -FilePath 'E:\UK Demo Kit\Powershell\FileServer1 Storage Spaces.ps1'
    ..and this will go away and set up FileServer1 with my storage spaces.

    Note: the script to create FileServer1 (FileServer1 Setup.ps1), the xml it uses to add features into that VM (File server 1 add features.xml) and the FileServer1 Storage Spaces.ps1 script referenced above are all on my SkyDrive for you to enjoy.

    One good use case for executing PowerShell scripts remotely like this is when working on a cluster. Although I have put the Remote Server Administration Tools (RSAT) on my host to have access to the Failover Clustering cmdlets, I get a warning about running these against a remote cluster..

    WARNING: If you are running Windows PowerShell remotely, note that some failover clustering cmdlets do not work remotely. When possible, run the cmdlet locally and specify a remote computer as the target. To run the cmdlet remotely, try using the Credential Security Service Provider (CredSSP). All additional errors or warnings from this cmdlet might be caused by running it remotely.
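
    Following that advice means setting up CredSSP on both ends before passing explicit credentials; a minimal sketch, assuming my admin machine and the FileServer2 node:

    #On the admin machine (client side) allow delegation to the node
    Enable-WSManCredSSP -Role Client -DelegateComputer "FileServer2.contoso.com" -Force
    #On the node (server side) accept delegated credentials
    Enable-WSManCredSSP -Role Server -Force
    #Then run the clustering cmdlet with CredSSP authentication (you'll be prompted for the password)
    Invoke-Command -ComputerName FileServer2 -Authentication Credssp -Credential contoso\administrator -ScriptBlock {Get-ClusterNode}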

    While on the subject of new downloads, the RSAT for managing Windows Server 2012 R2 from Windows 8.1 is now available, so you can look after your servers from the comfort of Windows 8.1 with your usual tools like Server Manager, Active Directory Administrative Center, Hyper-V Manager and so on. On my admin VM I have also put the Virtual Machine Manager console, SQL Server Management Studio and a few other admin tools..


    [Screenshot: admin tools on the admin VM]

    Before you ask: the RSAT tools you put on each client version of Windows only manage the equivalent version of Windows Server and earlier.  For example, you can't put the RSAT tools for managing Windows Server 2012 R2 onto Windows 8 or Windows 7.

    So using my lab ops guides or the more manual guides on TechNet, you can now get stuck into playing with Windows Server 2012R2, as a way of getting up to speed on the latest Windows Server along with the R2 courses on the Microsoft Virtual Academy.

  • Lab Ops 7 – Setting up a pooled VDI collection in Windows Server 2012 R2

    In Windows Server you can create two kinds of Virtual Desktop Infrastructure (VDI): personal or pooled.  A personal collection is a bit like a company car scheme where everyone chooses their own car. This means there needs to be a car for everyone, even if they are on leave or sick, and each car needs to be individually maintained; however the employees are really happy as they can pimp their transport to suit their own preferences.  Contrast that with a car pool of identical cars, where an employee just takes the next one out of the pool, and when it's brought back it's refuelled and checked ready for the next user; you don't need a car for everyone as there'll be days when people just come to the office or use public transport to get to their destination.  That seems to be a better solution for the employer but not so good for the employees. Pooled VDI collections work like pool cars in that they are built from one template and so only one VM has to be maintained, but that means every user has the same experience, which might not be so popular.  However pooled VDI in Windows Server 2012 has a method for personalising each user's experience while still offering the ability to manage just one template VM, and that's why I want to use pooled VDI in my demos.

    Carrying on from my last post, I right click on RD Virtualisation Host and select Create Virtual Desktop Collection..

    [Screenshot: Create Virtual Desktop Collection]

    Now I get to specify the collection type..

    [Screenshot: VDI collection type]


    Having chosen the collection type I now need to pick a template on which to base the pool..

    [Screenshot: VDI template selection]

    I found out that you can't use the new Hyper-V generation 2 VMs as a VDI template, even in Windows Server 2012 R2 RTM. This does mean you can still use that WimtoVHD PowerShell script I have been promoting in earlier posts in this series to create the template directly from the Windows installation media.

    Note: you'll need Windows 8.1 Enterprise for this, which is currently only available on MSDN until 8.1 is generally available in a couple of weeks, when there should be an evaluation edition available.

    In fact for a basic VDI demo the VHD this creates can be used as is; all you need to do is create a new VM from this VHD, configured with the settings each of the VDI VMs will inherit, such as CPU, dynamic memory settings, virtual NICs and which virtual switches they are connected to, as well as any bandwidth QoS you might want to impose..

    [Screenshot: VDI template details]

    Here you can see the settings for my template VM, such as it being connected to my FabricNet virtual switch.

    Normally when you build VMs from templates you will want to inject an unattend.xml file into the image to control its settings as it comes out of  sysprep (as I have done in earlier posts in this series). This wizard helps you with that or you can just enter basic settings in the wizard itself as I have done ..

    [Screenshot: VDI template settings]

    and not bother with an unattend.xml file at all.


    [Screenshot: VDI template unattend settings]

    Now I can start to configure my collection by giving it a name, saying how many VMs it will contain, and specifying who can access it ..

    [Screenshot: create the collection]

    In a production environment you would have several virtualization hosts to run your collection of VMs and here you can specify the load each of those hosts will have. 

    [Screenshot: specify hosts for the collection]

    Having specified which hosts to use I can now get into the specifics of what storage the VMs will use. I am going for a file share, specifically one of the file shares I created earlier in this series, which will make use of the enhancements to storage in R2.  Note the option to store the parent disk on a specific disk, which might be a good use of some of the new flash based devices as this will be read a lot but rarely updated.

    [Screenshot: VM location]

    My final choice is whether to make use of user profile disks.  This allows all of a user's settings and work to be stored in their own virtual hard disk, and whenever they log in to get a pooled VM this disk is mounted to give them access to their stuff.  This is really useful if all your users only ever use VDI, as you don't need to worry about roaming profiles and so on.  However if your users sometimes use VDI and sometimes want to work on physical desktops such as laptops, then you'll want to make use of the usual tools for handling their settings across all of this so they get the same desktop whatever they use – remember we work for these people, not the other way around!

    [Screenshot: user profile disk location]

    That's pretty much it – the desktops will build and your users can log in via the web access server, in my case by going to http://RDWebAccess.contoso.com/RDWeb

    To demo the differences in performance on a pooled VDI collection that sits on a storage space that's had deduplication enabled I could create another collection on the Normal* shares I created in my post on storage spaces by doing this all again.  Or I could just run a PowerShell command, New-RDVirtualDesktopCollection,  and set the appropriate switches..

    $VHost = "Orange.contoso.com"
    $RDBroker = "RDBroker.contoso.com"
    $CollectionName = "ITCamp"

    #The VDI template is a sysprepped VM holding the virtual hard disk, network settings etc. that all the pooled VMs will inherit.  The VHD runs Windows 8.1, configured and sysprepped with any applications and settings needed by end users

    $VDITemplateVM = get-vm -ComputerName $VHost -Name "Win81x86 Gen1 SysPrep"

    New-RDVirtualDesktopCollection -CollectionName $CollectionName -PooledManaged -StorageType CentralSmbShareStorage -VirtualDesktopAllocation 5 -VirtualDesktopTemplateHostServer $VHost -VirtualDesktopTemplateName $VDITemplateVM.Name -ConnectionBroker $RDBroker -Domain "contoso.com" -Force -MaxUserProfileDiskSizeGB 40 -CentralStoragePath "\\fileserver1\NormalVMs" -VirtualDesktopNamePrefix "ITC" -OU "VDICampUsers" -UserProfileDiskPath "\\fileserver1\NormalProfiles"

    My good friend Simon May can then gradually add more and more VMs into the collection with the Add-RDVirtualDesktopToCollection cmdlet to see how much space he can save.

    The other really clever thing about a pooled VDI setup like this is maintaining it.  Clearly you will want to change the template the pooled collection is based on from time to time, for example to add or remove versions of applications and to keep patches up to date.  All you have to do is make another template VM with the new applications and latest patches, and then update the collection from the collection management screen, or via the Update-RDVirtualDesktopCollection PowerShell cmdlet, for example

    PS C:\> Update-RDVirtualDesktopCollection -CollectionName "ITCamp" -VirtualDesktopTemplateName "$VDITemplateName" -VirtualDesktopTemplateHostServer $VHost -ForceLogoffTime 12:00am -DisableVirtualDesktopRollback -VirtualDesktopPasswordAge 31 -ConnectionBroker $RDBroker

    where I would have set $VDITemplateName to the modified and sysprepped VM to base the updated collection on. Note the -ForceLogoffTime setting; that's when users will be thrown out and forced to log on again.  If you don't set this they'll only get to use the new version when they log out and log in again.  However you manage that, if you have used user profile disks in the collection as I have done, their preferences and settings will persist on the updated collection.


    So that's the basics of setting up VDI on a laptop for your evaluations.  From here I could go on to add other parts of the Microsoft remote desktop solution, such as:

    • Set up RemoteApp, the business of delivering individual applications over the Remote Desktop Protocol (RDP)
    • Add in RD Session Hosts and RDS collections to deliver the traditional terminal services functionality – still the most efficient way to deliver remote desktops to users
    • Add in an RD Gateway to open up the RD infrastructure to remote users.  Clever use of the Routing and Remote Access Services role in a VM that's connected to virtual networks can be used to simulate this connectivity from just one host.
    • Add in a licensing host for production environments

    However I would be interested to know what you would like me to post next, so please add comments, or if you are shy, e-mail me.

  • Lab Ops 6 – Setup VDI in Windows Server 2012R2

    VDI bridges the world of the client and the server, and that can mean that in a world where IT professionals are either experts in one or the other (Simon and I being  a case in point), most of us have a partial knowledge of how VDI works.

    There are also reasons why you might be forgiven for thinking that VDI isn't a Microsoft strength:

    • It has been hidden behind offerings from third parties like Citrix; those partners are still important for larger implementations, and in the past they made the only decent remote desktop clients for all sorts of platforms such as Android and iOS.  That all changed on 7th October, and to quote the official press release:

    “.. with Windows Server 2012 R2 Microsoft is introducing the Microsoft Remote Desktop app, available for download in application stores later this month, to provide easy access to PCs and virtual desktops on a variety of devices and platforms, including Windows, Windows RT, iOS, OS X and Android.”

    • VDI isn't referred to as such in TechNet or in the Windows Server interfaces such as Server Manager – it's called Remote Desktop Services, which covers both the traditional terminal services way of providing a remote desktop and the use of pooled or personal client VMs.

    However since Windows Server 2012 it has been pretty easy to setup a secure and resilient VDI environment just using Windows client and server, that offers your users a rich and personalised experience.

    So before I go into how to build a lab what are the moving parts?  The easiest way to see this is post deployment as there’s an overview in server manager..

    [Screenshot: VDI overview in Server Manager]

    The green objects have yet to be configured: the RD Gateway for external access to my VDI environment, and the licensing server.

    The grey objects are configured, and go blue when I hover over them:

    • the RD Web Access server is the web server that users connect in from.
    • The RD Connection Broker is the middle tier which orchestrates the whole setup: which servers are performing which functions, where the VDI VMs are so it can control their state, and handling security to limit who can access which desktops and applications.
    • The RD Virtualization Host is the physical server (or servers) that the VDI VMs run on.
    • Optionally there is also an RD Session Host for Remote Desktop Services (Terminal Services as was).

    Notice the IT Camp collection. A collection is a pool of client VMs. The idea is to manage the collection as one VM and I’ll describe the options and how to configure collections in my next post. 

    One thing to note about RDS/VDI is that high availability (HA) is going to be very important, as no one will get any work done if there are no desktops to work on. For this reason my lab setup needs to be tagged with "don't try this at work!" Of course there's no point in evaluating this unless you are sure that you can enable HA, and this is the sort of setup that would allow for that..

    [Diagram: a highly available RDS/VDI setup]

    Notes:
    • The web access servers need to be load balanced, as for any web site.
    • The broker role can be clustered; note the use of SQL Server to store metadata about the environment when this is the case. Of course that SQL Server database would need to reside on resilient shared storage or it becomes a single point of failure.  Also note that Broker1 and Broker2 can both handle connection requests, i.e. this is an active-active scenario, and that's why there's a shared database in the mix.
    • The virtualization hosts have no special settings for HA; it's just that there's more than one of them, and typically you'll want to ensure you have enough spare server resources to be able to run your collection of VMs if one of the servers is no longer available.
    • The VMs' virtual hard disks could be on shared storage, but actually this doesn't matter too much; what matters is where the user's data is and where the parent disk of the VDI collection is. RDS in Windows Server 2012 has special user profile disks, and it is these that should be on shared storage; when you configure RDS you get to specify where these are stored, as distinct from the virtual hard disks for each VDI VM.

    At this point you might be wondering what happens if one of the virtualization hosts fails.  It's really simple: those users with open desktops based on VMs on that failed server will lose their session.  However, given that the shared storage behind all of the VMs is still there, when the user connects in again they won't have lost any saved work. So it's a bit like a real desktop crash, except that they can immediately sign back in and continue where they left off, as the broker will assign them an unused VM on a running server – and that will be quick, as the VDI VMs are left in a saved state ready for immediate use.

    Anyway on with my lab setup which builds on earlier posts in this series..

    As I've already said, I need at least one physical host to be the RD Virtualization Host, and my laptop in the diagram above is the "Orange" virtualization host. The other roles in the diagram above can all be one or more VMs themselves, and those roles can be combined.  A production environment will need multiple VMs on different hosts, but you might not need to scale out the roles to dedicated VMs in smaller deployments.

    For my demo I am going to use a VM for the broker role (RDBroker), another for web access (RDWebAccess), and a third as a session host (RDSHost). To create all of these VMs I have adapted the setup scripts described in part 2 of this series, so I can create each with a line of PowerShell..

    Create-VM -Name RDWebAccess -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.20" -DnsSvr "192.168.10.2" -Domain "contoso.com"
    Create-VM -Name RDBroker -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.21" -DnsSvr "192.168.10.2" -Domain "contoso.com"
    Create-VM -Name RDSHost -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.22" -DnsSvr "192.168.10.2" -Domain "contoso.com"     

    Note: my sysprepped image used to create these VMs is based on the download from MSDN, so I am asked for a license key, which I can skip, but it's a manual step. When an evaluation edition of Windows Server 2012 R2 is available this won't be a problem.

    I can now configure these blank VMs through Server Manager from one machine, provided you have already added them as servers to be managed, as I have (the servers in the yellow box)..

    [Screenshot: servers added to Server Manager]

    The first step is to add the roles and features to each of my server VMs, and since Windows Server 2012 this has been a special installation option: Remote Desktop Services installation.  As I mentioned before, Remote Desktop Services is a generic term in Windows Server covering both VDI and Remote Desktop Sessions (what was terminal services), and so the wizard asks you which one you want.  For this post I am going for VDI. In the following screens of this wizard I can assign each role to one of my servers, and the wizard will deploy the whole environment for me and then show me that overview screen at the start of this post.
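
    If you prefer, the standard VDI deployment the wizard performs can also be scripted; a minimal sketch using the server names above (assumes the RemoteDesktop module on the managing machine):

    #Script the standard VDI deployment instead of using the wizard
    New-RDVirtualDesktopDeployment -ConnectionBroker "RDBroker.contoso.com" -WebAccessServer "RDWebAccess.contoso.com" -VirtualizationHost "Orange.contoso.com"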

    There are a couple of things you'll want to check before you actually create a VDI collection:

    Check the Deployment settings:

    [Screenshot: RDS deployment settings]

    For example you can add in your licensing server and specify the web access URL to use; in my case I don't want the export location for my template to be on the RDBroker server as it's a VM, so I'll use a share somewhere else.  You'll also want to set up certificates for this, and when I have that properly researched I'll cover it off in this series.

    Increase the VM Build Concurrency on your Hosts

    By default Hyper-V builds VMs one at a time which might be a good if cautious move for heavily used production hosts but on my laptop(s) I want to up this, and this is the PowerShell to hack the appropriate registry setting and reboot the hosts for this to take effect..

    #replace "Server1","Server2" with your list of servers

    $CompNames = @("Server1","Server2")

    foreach ($CompName in $CompNames)
    {
        Invoke-Command -ComputerName $CompName -ScriptBlock {
            Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VmHostAgent\Parameters" `
                -Name "Concurrency" -Value 5
            Restart-Computer -Force}
    }

    Now I am ready to build “collections” of virtual machines and that’s coming up in the next post.

  • Lab Ops 5 - Access Rights in PowerShell

    In my last post I had a section in my script to create access rights to a share I had just created, and as promised I wanted to dissect that a bit more.

    To recap: what I wanted to do was to create a share with sufficient privileges to use it for storing VMs as part of my VDI demo. In order to understand what permissions might be needed, I used the wizard in Server Manager to create a share (\\FileServer1\VMTest), checked that I could use the Remote Desktop Services part of Server Manager to successfully create a pooled VDI collection, and then examined its permissions.

    The share has Everyone full control access..

    [Screenshot: share settings]

    and these are the File Access Rights on that share..

    [Screenshot: share security settings]

    The SMB settings are set by the -FullAccess switch, for example in my script

    #Orange$ is the name of the physical host (my big orange Dell Precision laptop)

    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator  

    The access rights are stored in Access Control Lists (ACLs) comprising individual Access Control Entries (ACEs), where an entry might state that contoso\Andrew has full control of a path and its folders and subfolders.  Out of the box in PowerShell there are just two commands involved: Get-Acl to get this list and Set-Acl to modify settings on a path.  TechNet recommends that you create a share (say X:\Test), set up its permissions, and pass these across to your new share

    Get-Acl -Path "X:\Test" | Set-Acl -Path "X:\My New Share"

    which is fine, but in my case I am building a setup from scratch and I don't want to have to use an interface to create the template.  So it occurred to me I could write the ACL out to a file with one row per ACE, and here's my script to get the ACL for that VMTest share in the above screenshot..

    #Write the ACL out to a CSV for later use
    $CSVFile = "\\Orange\E`$\UK Demo Kit\Powershell\FileShareACL.CSV"

    #If the file is already there delete it
    If(Test-Path $CSVFile) {Remove-Item $CSVFile}
    $Header = "Path|IdentityReference|FileSystemRights|AccessControlType|IsInherited|InheritanceFlags|PropagationFlags"
    Add-Content -Value $Header -Path $CSVFile
    $TemplatePath = "X:\shares\VMTest"
    $ACLList = get-acl $TemplatePath | ForEach-Object {$_.Access}
    foreach($ACL in $ACLList)
        {
    $LineItem = $TemplatePath + "|" + $ACL.IdentityReference + "|" + $ACL.FileSystemRights + "|" + $ACL.AccessControlType + "|" + $ACL.IsInherited + "|" + $ACL.InheritanceFlags + "|" + $ACL.PropagationFlags
        Add-Content -Value $LineItem -Path $CSVFile
        }

    The Add-Content command allows me to write into a file. Note that although I have specified this as a CSV file it is in fact pipe ("|") delimited, because it's generally madness to use commas as separators when they can also appear inside the values, as in this case..

    Path|IdentityReference|FileSystemRights|AccessControlType|IsInherited|InheritanceFlags|PropagationFlags
    X:\shares\VMTest|BUILTIN\Administrators|FullControl|Allow|False|None|
    X:\shares\VMTest|BUILTIN\Administrators|FullControl|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|NT AUTHORITY\SYSTEM|FullControl|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|CREATOR OWNER|268435456|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|BUILTIN\Users|ReadAndExecute, Synchronize|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|BUILTIN\Users|AppendData|Allow|True|ContainerInherit|
    X:\shares\VMTest|BUILTIN\Users|CreateFiles|Allow|True|ContainerInherit|

    so inheritance flags and access rights can have multiple values inside one ACE.  While there are several examples of this sort of script out there, I had to do some digging to find out how to use this file, so here's my crude but effective attempt..

    $ACEList = Import-CSV -Path "\\orange\E`$\UK Demo Kit\Powershell\FileShareACL.CSV" -Delimiter "|"
    $TestPath = "C:\Test"
    If (test-path -Path $TestPath) {Remove-item -Path $TestPath}
    MD $TestPath
    $ACL = Get-Acl -Path $TestPath
    ForEach($LineItem in $ACEList)
        {
      
        If($LineItem.FileSystemRights -eq 268435456) {$LineItem.FileSystemRights = "FullControl"}
        If($LineItem.FileSystemRights -eq "ReadAndExecute, Synchronize"){$LineItem.FileSystemRights = "ReadAndExecute"}
       
        $NewRights = [System.Security.AccessControl.FileSystemRights]::($LineItem.FileSystemRights)
        $NewAccess = [System.Security.AccessControl.AccessControlType]::($LineItem.AccessControlType)
        $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]($LineItem.InheritanceFlags)
        $PropagationFlags = [System.Security.AccessControl.PropagationFlags]::None
        $NewGroup = New-Object System.Security.Principal.NTAccount($LineItem.IdentityReference)

        $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule($NewGroup,$NewRights,$InheritanceFlags,$PropagationFlags,$NewAccess)
        $ACL.SetAccessRule($ACE)
        }
    Set-Acl -Path $TestPath -AclObject $ACL

    While this may not be the solution for you, it does show up some interesting points in PowerShell and in setting security:

    • There's no command to make a new ACL; you have to start with one from somewhere.
    • Once I have declared $ACEList as my CSV file, it is that file. For example I can get PowerShell to display it in all its glory using $ACEList | Out-GridView to see it as a table.
    • I have declared the separator I used to get round the comma problem with the -Delimiter "|" setting.
    • I can loop through each line of my file with a simple foreach loop, so $LineItem represents each line and I can reference the columns in that line with the standard . notation used in .Net, e.g. $LineItem.FileSystemRights.
    • The stuff in square brackets might look hard to remember, but PowerShell will auto-complete it, so once I typed in [System. I could select Security and so on to build up the correct syntax.
    • Note the line in my CSV file where the File System Rights are set to a number, in my case 268435456.  This is an example of an access control mask, a set of bits that will cause an error when I try to create $ACE, so I have an exception in my code to trap that and in my case just assign full control.
    • There is another problem line in my CSV, the one with "ReadAndExecute, Synchronize", which also threw an error.  I looked up the Synchronize option and realised it gets set by default, so I put in another exception in my script to trap that and just apply ReadAndExecute rights.
    • I had to refer to the .Net documentation to work out what order to put the parameters in to create a new ACE (the line beginning $ACE = New-Object…).
    • Having built up the ACL in the loop, it's really easy to apply it to your object, and of course it could be applied to multiple objects in another loop.

    So what I have done here is a bit of a sledgehammer to crack a nut, but you might be able to use some of the principles to solve your real-world problem; for example, you might just knock up a CSV file with the permissions you want to set on an object.

    Finally there’s this properly tested File System Security PowerShell Module on the TechNet gallery although it’s not yet been tested on Windows Server 2012 or R2. 

  • Lab Ops 4–Using PowerShell with Storage

    In previous posts I created a fileserver for my lab environment and used the UI to create a storage space on it.  Now I want to automate the creation of two storage spaces over the same storage pool, which will then underpin my VDI demos. At the time of writing the only resource I could find was on Jose Barreto’s blog.

    Before I get to the PowerShell this is a quick sketch of how my file server is going to be configured..

    [Photo: sketch of the file server storage layout]

    My VM already has 6 virtual hard disks attached from my last post – 1 x IDE for the operating system, and the other 5 are SCSI disks.  Two of these SCSI disks are 50Gb in size; the rest are 1Tb.

    On top of this I want to create a pool, "VDIPool", onto which I will put two storage spaces (virtual disks).  The dedup space at the top will have a single-partition volume (X:) which will be set up for VDI deduplication, while the normal volume/partition (N:) on the normal space won't have this enabled but will be identical in size and configuration.  Both volumes will then have shares set up on them to host the VDI virtual machine templates and user disks.

    All of this only takes a few lines of PowerShell (less if you don't like comments and want your commands all on one line!) ..


    $FileServer = "FileServer1"
    $PoolName = "VDIPool"
    $SpaceName1 = "DedupSpace"
    $SpaceName2 = "NormalSpace"

    #1. Create a Storage Pool
    $PoolDisks = get-physicaldisk | where CanPool -eq $true
    $StorageSubSystem = Get-StorageSubSystem
    New-StoragePool -PhysicalDisks $PoolDisks -FriendlyName $PoolName -StorageSubSystemID $StorageSubSystem.UniqueId

    #2. tag the disks as SSD,HDD by size
    #   note: to get the actual size of the disks use (Get-PhysicalDisk).Size

    Get-PhysicalDisk | where size -eq 1098706321408 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType SSD

    #3. Create the necessary storage tiers
    New-StorageTier -StoragePoolFriendlyName $PoolName -FriendlyName "SSDTier" -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName $PoolName -FriendlyName "HDDTier" -MediaType HDD

    #4. Get the storage tiers just created, ready to carve virtual disks out of the pool
    $SSDTier = Get-StorageTier -FriendlyName "SSDTier"
    $HDDTier = Get-StorageTier -FriendlyName "HDDTier"

    #5. create two Storage spaces
    New-VirtualDisk -FriendlyName $SpaceName1 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 1Tb,30Gb -WriteCacheSize 2gb -ResiliencySettingName Simple
    New-VirtualDisk -FriendlyName $SpaceName2 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 1Tb,30Gb -WriteCacheSize 2gb -ResiliencySettingName Simple

    #6. create the dedup volume and mount it
    $VHD = Get-VirtualDisk $SpaceName1
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    Initialize-Disk $Disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $Disk.Number -DriveLetter "X" -UseMaximumSize
    Format-Volume -DriveLetter "X" -FileSystem NTFS -NewFileSystemLabel "DedupVol" -Confirm:$false
    #note -usagetype Hyper-V for use in VDI ONLY!
    Enable-DedupVolume -Volume "X:" -UsageType HyperV

    #7. create the non dedup volume and mount it
    $VHD = Get-VirtualDisk $SpaceName2
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    Initialize-Disk $Disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $Disk.Number -DriveLetter "N" -UseMaximumSize
    Format-Volume -DriveLetter "N" -FileSystem NTFS -NewFileSystemLabel "NormalVol" -Confirm:$false

    #8. create the standard share directory on each new volume
    md X:\shares
    md N:\shares

    #9. Get a template ACL from the folder we just created and add in the access rights
    #   we need to use the share for VDI VMs (note: there are too many permissions here!)
    #   then set up an ACL to be applied to each share

    $ACL = Get-Acl "N:\shares"
    $NewRights = [System.Security.AccessControl.FileSystemRights]::FullControl
    $NewAccess = [System.Security.AccessControl.AccessControlType]::Allow
    $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
    $PropagationFlags = [System.Security.AccessControl.PropagationFlags]::None

    $AccountList = ("Contoso\Administrator","CREATOR OWNER","SYSTEM","NETWORK SERVICE","Contoso\Hyper-V-Servers")

    Foreach($Account in $AccountList)
        {
        $NewGroup = New-Object System.Security.Principal.NTAccount($Account)
        $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule($NewGroup,$NewRights,$InheritanceFlags,$PropagationFlags,$NewAccess)
        $ACL.SetAccessRule($ACE)
        Write-Host "ACE is " $ACE
        }
    #10. Now setup the shares needed for VDI pool
    #    one for the VMs and one for the user disks
    #    on each of the volumes

    $SMBshares = ("DedupVMs","DedupProfiles","NormalVMs","NormalProfiles")
    foreach($Share in $SMBshares)
        {
        if($Share -like "Dedup*")
            {
            $SharePath =  "X:\shares\" + $Share
            md $SharePath
            New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator
            Set-Acl -Path $SharePath -AclObject $ACL
            }
        if($Share -notlike "Dedup*")
            {
            $SharePath =  "N:\shares\" + $Share
            md $SharePath
           
            New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator
            Set-Acl -Path $SharePath -AclObject $ACL
            }
        }

    Notes:

    • Get-StorageSubSystem returns the FileServer1 storage spaces subsystem on my demo rig.
    • Using PowerShell to create storage spaces gives you access to a lot of switches you don't see in the interface, like setting the cache size.
    • When you create a storage space it's offline, so you have to bring it online to use it.
    • Apart from disabling the cache on an SMB share, I couldn't see what other settings are needed to configure the share as an application share – my test share in the screenshot above was created by using Server Manager to create an application share, a special kind of share in Windows Server 2012 for hosting databases and Hyper-V virtual machines.

    Having run this I can now see my shares in Server Manager..

    [Screenshot: the shares in Server Manager]

    This script is designed to be run on FileServer1, whereas the script to create the VM itself can be run from anywhere, so what I want to do is run this script remotely on that VM, and the way to do that is:

    invoke-command -ComputerName FileServer1 -FilePath "E:\powershell\FileServer1 Storage Spaces.ps1"

    ..where FileServer1 Storage Spaces.ps1 is my script.  BTW I can apply a script like this to multiple servers at the same time by substituting FileServer1 with a comma-separated list of servers.
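
    For example, something like this (the extra server names are illustrative):

    invoke-command -ComputerName FileServer1,FileServer2,FileServer3 -FilePath "E:\powershell\FileServer1 Storage Spaces.ps1"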

    Next time I want to go into what exactly is going on in the script to create an SMB share and assign access rights to it (steps 9 and 10), as I found this quite hard to research – so perhaps I can help to make this easier for you, and it serves as an aide memoire for me!

    As I have mentioned before there isn’t an evaluation edition available just yet, so if you want to try this now you should find it will work from the Windows Server 2012 R2 preview, although you will still have to inject the license key in your scripts at some point to fully automate vm creation.


  • Lab Ops part 3 – Storage in Windows Server 2012R2

    In this post I want to show what I plan to do with storage in Windows Server 2012 R2 at the various events I get asked to present at.  If you have been to any of our IT Camps you will have seen Simon and me show off deduplication and storage spaces in Windows Server, and now I want to add tiered storage into the mix – and I have a cunning plan: to declare some of my storage as hard disk and other parts as SSD.

    Before I get to that I need to create a file server VM. To do that I have adapted the setup script from Marcus that I posted previously.  All I needed to do after the VM was created was to add in the file server roles and role features I needed.  The easiest way to do that is to use the UI to add in the necessary features and then save them off to an xml file..

    [Screenshot: add file server features]

    as the screen tip above shows, I can then consume this xml file to add the features to my file server VM every time I create it

    Install-WindowsFeature -Vhd $VHDPath -ConfigurationFilePath "E:\scripts\file server 1 add features.xml" -ComputerName "Orange"

    where the -ComputerName setting is my physical host, as I am mounting the VHD offline to add the features before it starts.

    Now I can use this VM to show tiered storage, and there are two parts to this:

    • Create some virtual disks and set them up to emulate the performance of the two different types of disk.  In Server 2012R2 there are advanced properties of virtual hard disks for Quality of Service (QoS), which are exposed in Hyper-V Manager, Virtual Machine Manager and of course PowerShell..

    #note the MaximumIOPs settings

    for ($i = 1; $i -le 3; $i++)
    {
        $VHDPath = $labfilespath + "\" + $VMName + "\" + "SCSI" + $i + ".vhdx"
        New-VHD -Dynamic -Path $VHDPath -SizeBytes 1tb
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath -ControllerType SCSI -ControllerLocation $i -MaximumIOPS 100
    }
    for ($i = 4; $i -le 5; $i++)
    {
        $VHDPath = $labfilespath + "\" + $VMName + "\" + "SCSI" + $i + ".vhdx"
        New-VHD -Dynamic -Path $VHDPath -SizeBytes 50gb
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath -ControllerType SCSI -ControllerLocation $i -MaximumIOPS 10000
    }
    • To identify which disk is which in the VM for my demo – remember I want to use the UI to show this being set up, as PowerShell scripts aren't great for explaining stuff – I tried to tag my disks as HDD or SSD now they are inside the VM, using

    Set-PhysicalDisk -MediaType

    where the media type can be SSD or HDD. My plan was to base this on the size of the disks (my notional hard disks are 1Tb above, as opposed to just 50Gb for the SSDs). However you can't use this command until you have created a storage pool, so I will keep this code for later and run it interactively.

    To stress my tiered storage and to show off deduplication in Windows Server 2012 R2, I plan to use this storage as the repository for my VDI VMs. In order to do that I have to do the following things beforehand:

    1. Use the storage to create the pool. From Server Manager

    [Screenshot: storage pools in Server Manager]

    by right clicking on the primordial row in storage spaces, giving my pool a name, and selecting all those blank SCSI disks I created..

    [Screenshot: selecting disks for the new pool]

    and clicking create (Note I have left all the disks set to automatic for simplicity)

    2. Tag the disks as SSD/HDD in order to tier the storage

    I just use the PowerShell above to do this based on size of the disks..

    #note: to get the size of the disks use (Get-PhysicalDisk).Size

    Get-PhysicalDisk | where size -eq 1098706321408 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType SSD

    Checking back in server manager I can see the media type has been set and you can now see my VDI Pool..

    [Screenshot: the VDI Pool with media types set]

    3. Create a virtual disk over the storage pool. 

    Storage spaces are actually virtual hard disks, so from the screen above I select New Virtual Hard Disk from the tasks in the Virtual Disks pane and select the storage pool I have just created.  I then get the option to use the new tiered storage feature in Windows Server 2012 R2..

    [Screenshot: the new virtual disk wizard with storage tiers]

    I'll select Simple in the next screen to stripe my data across the disks in the pool. In the screen after that, the option to thin provision the storage is greyed out because I have selected tiered storage; this isn't a problem for my lab, as this pool is built on disks which are in fact virtual disks themselves, but be aware of it in production. Just to be clear: you can either create a storage space (VHD) that is thin provisioned beyond what you have available now, OR you can commit to a fixed size and use tiered storage.

    Now I get another new (for R2) screen to set the size of my VHD, based on how much of the pool I want to use.  I am going to use 30Gb of SSD and 1Tb of HDD.

    [Screenshot: specify the virtual disk size]

    I then get asked to format the VHD and mount it.  I mount it as drive V and give it the name VDIStorage.  In the final step I can enable deduplication..

    [Screenshot: enabling deduplication on the volume]

    Note I have the option to use deduplication for VDI in Windows Server 2012 R2, so I will go for that. The wizard will then guide me through formatting and mounting this VHD so it can now be used as a storage space.

    Now I am ready to configure VDI. But what if I wanted to do all of that in PowerShell? Well, I am going to leave that for next time as this is already a pretty long post.

    In the meantime, if you have an MSDN subscription then you can get the RTM version of Windows Server 2012 R2 and follow along. If not, I am sure there'll be an evaluation copy coming in a few weeks to coincide with the general availability of R2; in the meantime check out the Windows Server 2012 R2 jump start module on MVA.

    Keith Meyer, one of my counterparts, has more on all of this in this excellent post.

  • After Hours - Canon EOS talking to a Surface Pro over wifi

    Please note: this is an after-hours post, specifically about connecting a Canon EOS 6D to Windows 8/8.1.  I have written it for two reasons – so I can remember how to do it, and because you might need to do something like this for a camera enthusiast you know who isn't a networking guy.

    Canon have made it relatively easy to connect the new EOS 6D, 70D etc. to your Android or iOS device, and to a wifi hotspot to which your PC/laptop is connected.  However what I wanted to do was to configure Windows 8 as an ad hoc wireless connection point so I could remote shoot via wireless from my Surface Pro anywhere I happened to be: jungles, mountains, and the various events I go to.  Windows 8 doesn't have a UI for this anymore, so you need to run a couple of netsh commands from an elevated prompt to get this working:

    netsh wlan set hostednetwork mode=allow ssid=MyWIFI key=MyPassword

    netsh wlan start hostednetwork

    ..where MyWIFI is the wireless network name you want and MyPassword is the password to connect to it. What this does is add a new adapter into network connections..

    [Screenshot: network connections]

    In my case I renamed my connection to Canon; also note that Deep6 has a three after it, as I tried this a few times! Another thing you may see on forums is that you need to set up sharing when creating connections like this, and that's only true if you want to do the old internet connection sharing. I don't need that for this scenario, which is just as well, as our IT department has prevented me from doing it in Group Policy.
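
    Incidentally, the same netsh context will show you the state of the hosted network and how many clients are connected, and will stop it when you are done:

    netsh wlan show hostednetwork

    netsh wlan stop hostednetwork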

    On my Canon EOS 6D I need to enable wifi..

    [Photo: enabling wifi on the camera]

    then set it up by selecting the wifi function which is now highlighted.  From here I want to set up a connection, which is the Remote Control (EOS Utility) option..

    [Photo: the wifi function menu]

    I have already done this a few times ..

    [Photo: existing connection settings]

    so to set up a new connection I choose unspecified. Now I need to find the network I created on my Surface Pro by finding a network..

    [Photo: finding a network]

    My ad hoc network is called Deep6 as opposed to FAF which is my home wireless network..

    [Photo: the list of wireless networks]

    my key is in ASCII so I select that on the next screen and then I get this dialog to enter my password ..

    [Photo: the password entry dialog]

    Note you have to use the Q button on the back of the camera to enter the text window. I am asked about IP addresses; I select automatic, as my wireless network will do that for me. Then I can confirm I want to start pairing devices..

    [Photo: confirming pairing]

    and then I will see this..

    [Photo: pairing in progress on the camera]

    I can now check that my 6D is talking to my new wireless access point (which I have called Deep6).

    [Screenshot: the hosted network with one device connected]

    as you can see I have one device connected.

    So now I can use the supplied Canon software, the EOS Utility, to control my camera. Or so I thought – only all the control options are greyed out.  This is because you need to change the preferences to install and configure the WFT utility, which detects your Canon and allows you to control it. To do this select the option to add the WFT pairing software to the startup folder..

    [Screenshot: EOS Utility preferences]

    You'll then get a little camera icon in your system tray, and when your Canon is connected it'll pop up this window..

    [Screenshot: the WFT pairing pop-up]

    click connect and  you’ll see an acknowledgement and confirmation on the camera..

    [Photo: confirmation on the camera]

    in my case my Surface is called Vendetta. I click OK and I am good to go, and the camera saves the settings for me, which is great – in fact I can save 3 of them. In my case I have saved my Surface connection and FAF, to connect to my home wireless router.

    The Canon EOS  Utility will now work..

    [Photo: the EOS Utility remote shooting window]

    Now I can start to have fun with this setup and my shots get saved to my Surface Pro..

    [Photo: a shot saved to the Surface Pro]

  • Lab Ops 2–The Lee-Robinson Script

    In my last post I mentioned how Marcus Robinson had adapted PowerShell scripts by Thomas Lee to build a set of VMs to run a course in a reliable and repeatable way.  With Marcus's permission I have put that setup script on SkyDrive; however, he has a proper day job running his gold partnership Octari, so I am writing up his script for him.

    Notes on the script.

    Unattend.xml contains the instructions you can give to set up the operating system as it comes out of SysPrep. Marcus has declared the whole unattend.xml file as an object, $unattendxml, which he then modifies for each virtual machine to set its name in Active Directory, the domain to join it to, its fixed IP address and its default DNS server. The sketch below gives the general idea.
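
    A rough illustration of the idea – this is my sketch, not Marcus's actual code, and the paths and node layout are assumptions about the unattend template:

    #Load the template, set the machine name, and save a per-VM copy
    #(assumes a single Microsoft-Windows-Shell-Setup component carries ComputerName)
    [xml]$unattendxml = Get-Content "E:\Lab\unattend-template.xml"
    $shell = $unattendxml.unattend.settings.component | Where-Object {$_.name -eq "Microsoft-Windows-Shell-Setup" -and $_.ComputerName}
    $shell.ComputerName = "LabSvr1"
    $unattendxml.Save("E:\Lab\unattend-LabSvr1.xml")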

He makes use of functions to mount the target VHD, copy in the modified unattend.xml and then dismount it.  Overarching this is a Create-VM function which incorporates these functions to create his blank VMs with known names and IP addresses. However these VMs are not started, as there is as yet no domain controller for the other lab VMs to join

LabSvr1 in the script is going to be that domain controller, so the first thing to do is add in the AD Directory Services role. Note the use of the PSCredential PowerShell object to store credentials, and of ConvertTo-SecureString for the password, so that the script can work securely on the remote VMs.  Starting the VM takes a finite amount of time, so Marcus checks to see when it’s alive.
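As a rough sketch of that pattern (the server name, account and password here are illustrative, not Marcus’s actual values):

# Build a PSCredential from a plain text password so remote commands can authenticate
$password = ConvertTo-SecureString "Passw0rd!" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("contoso\administrator", $password)
# Run a command on the remote VM using those credentials
Invoke-Command -ComputerName LabSvr1 -Credential $cred -ScriptBlock { Install-WindowsFeature AD-Domain-Services }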

Marcus then has a section to install various other workloads, all from the command line: Exchange, SharePoint and my favourite, SQL Server.  Before he can install some of these he has to install prerequisites such as the .NET Framework 3.5 (aka the NET-Framework-Core feature), and to do that he puts in the Windows Server install media. Having installed SQL Server he can then copy in the databases he needs, and here I might have attached them as well, as SQL Server has PowerShell to do this..

# Load SMO via the SQL PowerShell module, connect to the instance, then attach the data file
Import-Module SQLPS -DisableNameChecking
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "(local)"
$sc = New-Object System.Collections.Specialized.StringCollection
[void]$sc.Add("C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Mydatabase.mdf")
$server.AttachDatabase("myDataBase", $sc)

    Anyway there’s lots of good stuff in here, and I’ll be using it to make my various demos for our upcoming IT Camps, now that the Windows 8.1 Enterprise ISO is on MSDN subscriptions. 

  • Lab Ops – part 1 Introduction

For some people building demo setups is a part of the job, for example trainers, pre-sales technical roles and evangelists like me.  Everything shifts as products evolve, for example:

    • a beta product comes out and we need to show new concepts that the technology introduces, for example software defined networking in Windows Server 2012 R2
    • Then as the product matures we need to show this in context of performance and scale, as well as how high availability is affected and how it interoperates with other solutions
    • We also need to show how to migrate onto the new solution and how to retire the legacy version.

Typically a product, even an operating system, doesn’t live in isolation, and all of this means that new setups need to be continually created.  So the trick is to have a framework to build from rather than a set of virtual machines that get modified, checkpointed and so on.  This really hit home to me when I was trying to set up a VDI environment recently, as my deployment and desktop wingman Simon May is off to a new role in the USA and my usual hack and slash approach to VMs wasn’t working.

I was chatting over my problems with Marcus Robinson of Gold Partner Octari at the Virtual Machine User Group in Manchester, and he showed me his PowerShell! Marcus was up til 4am preparing for a course and had developed a script on the back of something developed by MVP and certified trainer Thomas Lee, whose scripts are published on PowerShell.com.  My approach was flawed because I was booting up a generic sysprepped VM which, while it was joined to the domain, had a random name, as I wasn’t setting this in an unattend file.  This meant I couldn’t easily persist a session in PowerShell to rename the VM in Active Directory, and the VMs had dynamic IPs as well.   The “Lee-Robinson” approach I picked up is really clever and works as follows:

1. Use Windows install media to create a sysprepped Virtual Hard Disk (VHD) using the publicly available WimtoVHD PowerShell script

2. Modify an unattend.xml file to contain the post-sysprep configuration you need, by injecting XML for such things as the IP address, the domain name and the credentials to join the domain

3. Mount the newly created VHD and inject the unattend.xml file into [mounted drive letter]:\windows\system32\sysprep (a sketch of steps 3 to 6 follows the list)

4. Unmount the drive

5. Create a VM around the new VHD

6. Start the VM
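A minimal sketch of steps 3 to 6, assuming a single-partition VHD and illustrative paths (the real script wraps this in functions and loops over several VMs):

# Mount the VHD and work out which drive letter it was given
$vhd = "C:\Lab\LabSvr1.vhdx"
$drive = (Mount-VHD -Path $vhd -Passthru | Get-Disk | Get-Partition | Get-Volume).DriveLetter
# Inject the answer file where Windows setup looks for it post-sysprep
Copy-Item .\unattend.xml "$($drive):\Windows\System32\Sysprep\unattend.xml"
Dismount-VHD -Path $vhd
# Build a VM around the VHD and start it
New-VM -Name "LabSvr1" -VHDPath $vhd -MemoryStartupBytes 1GB | Start-VM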

Thomas and Marcus need to do this so they can set up a lab environment for each student or each pair of students on a series of hosts; in Marcus’s case he needed to build a lab to show off Data Protection Manager, while Thomas is constantly pushing PowerShell itself as well as needing to run labs of his own.  I see other advantages of this approach to a lab:

• Paying for license keys isn’t a problem – VMs are just kept alive for a demo and most Microsoft evaluation keys are good for 180 days.
• Keeping snapshots of VMs in sync can be problematic, for example domain trusts and DNS can get confused, which leads to most of the problems you might have seen in my demos last year.
    • The resources I need to keep are just some scripts, databases and possibly some certificates.
    • I can adapt the scripts to work on Azure and have some of my stuff deployed there if I can rely on internet connectivity at an event.
    • I get sharper at deployment and PowerShell, both of which are useful skills anyway.

    This all seems such a good idea I thought I would do more posts on this in a series as I reset my demos for the next round of events I have been asked to do.

Finally if you want a proper deep dive into PowerShell this is not the blog you are looking for, and you could do worse than hang out at one of Thomas’s PowerShell camps; at the time of writing the next one is 19th October 2013.

  • Green IT Fatigue

I recently got asked to do an interview for TechWeek Europe about green initiatives in the IT industry. Let’s be honest, computers burn a lot of power, require a lot of power to make and are made of some nasty exotic compounds and chemicals, so they aren’t going to save the planet by themselves.  However a few years back everyone was talking about Green IT, or more properly sustainable IT, and while that topic is no longer trending, we don’t seem to have done anything about it and Green Fatigue has set in across IT.

Looking at what has been happening in the data centre, good work has been done, but not in the name of green IT. For example server consolidation has meant physical servers are better utilised now; they are typically running 10+ VMs each rather than idling at 10%.  We have also got better at cooling those servers, but this has sometimes been driven not by a green initiative but by the cost of power and the capacity available from the power supplier in a particular location.

Later versions of virtualisation technologies always make best use of the latest hardware, but swapping out server hardware to get the benefits of the latest CPU or networking has to be balanced against the cost of making the new server and disposing of the old one, so you’ll want to focus on how you can extend the life of your servers, possibly by just upgrading the software.

Virtualisation by itself can also cause more problems for the environment than it solves, because while you have achieved some consolidation you may well end up with a lot more VMs that aren’t doing much useful work.  Effective management of those VMs is the key to efficiency, for example:

• Elimination of Virtual Machine sprawl.  Typically this shows itself as a spread of numerous dev and test environments, and the only way I can think of to check this overuse of resources is to charge the consumer for them on the basis of what they have committed to use, so chargeback or at least showback.
• Dynamically optimising workloads based on demand – reducing the capacity of low priority, under-used services or stopping them altogether to free up resources for busy services without needing more hardware.
• Extreme automation to reduce the number of IT guys per VM. This reduces the footprint per VM, as each member of IT uses energy to do their work and often has to travel to work, so if that effort is spread across more VMs then it is more efficient.

These three things are actually all key characteristics of clouds, so my assertion is that cloud computing is more environmentally efficient without necessarily being Green IT per se.  Public clouds operate at much greater scale and efficiency than is possible in many internal data centres¹, and they are often located specifically to take advantage of favourable environmental conditions, all of which means they are greener than running services in house.

So we are getting greener, it’s just that we don’t call it that, and no doubt now that we are fed up with the word cloud as applied to IT we’ll change that to something else as well.

¹ Internally, Microsoft Global Services defines a data centre as having more than 60,000 physical servers

    http://www.techweekeurope.co.uk/interview/video-green-fatigue-microsoft-125267

  • UK Data in the Cloud

There is still a lot of inertia in the UK about storing data in the cloud, for entirely valid reasons as well as rather vague uncertainties and doubts.  For a few organisations keeping data in the cloud is exactly the right thing to do, because those organisations want to actively share their information; the most obvious example is the UK government with their data.gov.uk initiative.  Commercial companies may also want to sell their data, and rather than opening up fat pipes into their data centres the logical approach is to have this hosted on a public cloud as well.

I mention this because one aspect of Azure that Microsoft rarely talks about is the Windows Azure Marketplace (WAM), a portal for the sharing and consumption of large data sets.  Originally this was just US based like a lot of Microsoft services, but over the last year or two it has grown steadily so that there are now a significant number of UK-relevant data sets on there, most notable being the Met Office open weather data (actually part of data.gov.uk)

Some of this data you will be paying for based on how many times you query it, so one way to minimise that cost would be to download it and then create your own internal data market, which could also include sets of data from in-house systems for users to mash up using tools like PowerPivot.

You can of course connect PowerPivot etc. to the WAM, and the good thing about this approach rather than just pulling down a .csv file is that the connection location is remembered, which is useful for several reasons:

    • you know where the data came from so it’s verifiable
• you can easily refresh your analytical model with the latest data from a given source
• if you wish to scale up your model, either to in-house SharePoint, SharePoint Online in Office 365 or to a BI Semantic Model in SQL Server 2012, the source is preserved so the model can still be refreshed.

So the Azure Data Market works well with self-service BI to allow analysts to develop models based on external and internal data, say mapping the weather to sales to predict demand, as I have posted before.

The other way this data can be consumed is inside an application.  I can see a case for this sort of thing on a property search site, where additional local information is brought in alongside the details of the house/flat you are looking at, such as schools and their stats, hospital metrics, rail commute times and so on.  This will typically incur a cost but would give the site an edge over its competitors, and could possibly be recouped through advertising.  There are also applications you can integrate with, such as translation services and Bing.

It’s also worth bearing in mind that you could be making money out of your data by selling it via WAM as well.  Obviously this would not be personal data, so think market research, house price information, or trends in the UK job market from a recruitment agency, all suitably anonymised.

Finally there’s extensive help on how to use all aspects of WAM, such as code snippets, samples and how-to videos, and it’s changing all the time, so even if there’s nothing of interest right now there may well be next time you look.

  • Microsoft Valuable Penguin

I sometimes think the IT industry is a bit like a load of penguins on the pack ice, each one checking the others out to see who’s going to brave the ocean first.  In IT it’s nervousness about when to jump onto a new technology, and if there is one technology that makes IT professionals really nervous then it’s cloud.  So when Microsoft launched the Cloud OS there was always going to be some nervousness about it and a reluctance to jump in and try it, no matter what it did or how good or bad it was.  However there are some pioneers out there who have already implemented the Cloud OS, and some of these are MVPs (Microsoft Valued Penguins Professionals), as they not only try out the new stuff like the Cloud OS, they share their knowledge and experience as early adopters. Moreover unlike me they are independent, and although well connected into Microsoft they can on occasion be very vocal when they feel there is a real problem or missing feature in a new product.

There are about 200 MVPs based in the UK and some of the best of them have decided to run a series on the Cloud OS where they can share what they are doing with it and what they have learned by implementing it in production.  The event runs 9th-13th September in Microsoft's London offices and the agenda looks like this..

    clip_image001

    Monday 9th September

    Please register to attend either track 1 or track 2:

    · Track 1 will focus on building the modern enterprise data platform. In a series of three presentations we will tackle the issues of architecture, application frameworks, data integration and data exchange; learning all about the challenges faced by the modern data tier developer. Most importantly, we will learn how to creatively overcome them by enhancing our processing efficiency and analytical capability. Register to attend

    · Track 2 will focus on the creation of Business Intelligence and advanced analytics solutions that utilise both structured and un-structured data. We will demonstrate the use of data mining and predictive analytics technologies and also demonstrate how advanced visualisation technologies can be used by business users to deliver the insight and action required to drive real value from data.

    Register to attend

    clip_image002

    Tuesday 10th September

    Each session will demonstrate how to:

    · Deliver best practices with Windows Server 2012 R2 and System Center 2012 R2

    · Lower costs through effective management of VMware and Hyper-V

    · Enable management of datacentres of any size!

    · Drive automation of complex applications with service templates

    Register to attend

    clip_image003

    Wednesday 11th September

    Sessions will include:

    · Windows Azure Service Bus

    · Windows Azure BizTalk Services

    · Microsoft BizTalk Server (both on-premise and Cloud Virtual Machine)

    Register to attend

    clip_image004

    Thursday 12th September at the Microsoft Office, London, Victoria

    Join leading MVPs for a one day event to understand how to manage your client devices in a single tool while reducing costs and simplifying management. Best of all, you can leverage your existing tools and infrastructure.

    Sessions will include:

    · Helping with data security and compliance

    · Unified device management

    · What powers people-centric IT with Cloud OS?

    · Real World customer examples

    Register to attend

     

    clip_image005

    Friday 13th September

    The explosion in devices, connectivity, data and the Cloud is changing the way we develop and deliver software.  New infrastructure services permit existing server applications to be “lifted & shifted” into the Cloud.  Attend a one day event to hear from MVPs about how Microsoft’s data platform and development tools enable you to develop, test, and deploy applications faster than ever.

    Sessions will include:

    · Infrastructure services,

    · Media services,

    · Service Bus  & Mobile services

    Register to attend

     

         

So please register and pppick up an MVP, and learn what the Cloud OS is really all about.

  • SQL Server Spruce up

Land Rovers will take a lot of pounding and neglect, but when my wife drove hers to Australia she made very sure it was properly set up for two years on the road

    WP_20130807_002

Similarly SQL Server is also often out in the wild, far from DBAs and inspection by maintenance tools like System Center.  However now might be a good time for a bit of TLC if things are quiet for you in August. In my last post I dealt with servers in general, so today I want to look at a SQL Server spruce up, particularly for those who are not full-time DBAs.

As per that post you may well be able to decommission whole VMs running SQL Server, but what I want to cover here is what you might want to check on the instances and the databases themselves.  Books have been written on this, but I would be interested in:

• Compatibility level – the ability to make a shiny new copy of SQL Server behave like an older one – should only be set where you actually need backward compatibility.  Note that by default this is kept at the old level after an upgrade from an older version.
• Memory reservation should be set to less than what’s available and not left at the default. How much less depends on what else is running on that server, but leave at least 768MB for the operating system itself
• Where and how many TempDB files there are, as per the TechNet guidance here. (A quick PowerShell check of the first two items is sketched below.)
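As a rough illustration, here’s how you might check the first two of those with SMO (the instance name is illustrative):

# Load the SQL PowerShell module (ships with the SQL 2012 management tools)
Import-Module SQLPS -DisableNameChecking
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "localhost"
# Compatibility level per database
$server.Databases | Select-Object Name, CompatibilityLevel
# Max server memory in MB; 2147483647 means no cap has been set
$server.Configuration.MaxServerMemory.ConfigValue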

For individual databases you might want to check that:

• Statistics are up to date for all your tables
• Index fragmentation is checked for and repaired (a query for the check is sketched after this list)
• There is no extraneous fluff in your databases, such as copies of tables and indexes, as well as remnants of dev code like spare views and procedures.
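For the fragmentation check, a hedged sketch using Invoke-Sqlcmd (server and database names are illustrative):

# Needs the SQL PowerShell module for Invoke-Sqlcmd
Import-Module SQLPS -DisableNameChecking
# List indexes in the database that are more than 30% fragmented
$q = "SELECT OBJECT_NAME(s.object_id) AS [table], i.name AS [index], s.avg_fragmentation_in_percent " +
     "FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') s " +
     "JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id " +
     "WHERE s.avg_fragmentation_in_percent > 30"
Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDatabase" -Query $q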

Rather than perform all of these sorts of checks yourself, you could deploy System Center Advisor, which is a free cloud-based service.  It matches your installation and databases against best practice from Microsoft Premier Field Engineers and tells you every day what you need to worry about.  It can be securely deployed behind a gateway in your infrastructure, so your actual SQL Servers don’t need to be internet facing, and I have posts on how to do that here.

Finally you might also want to benchmark your database by putting a known load on it that you can use as a reference when making any changes to it, such as virtualising it, upgrading it etc.  To be honest I couldn’t find too much on TechNet/MSDN/CodePlex to help with this, so you may want to resort to third-party tools such as Dell (Quest), Idera, RedGate, SQL Sentry, and dare I mention PowerShell (as per this article by Aaron from SQL Sentry).

  • Spruce Up your Data Centre

The summer holidays can generally be a quiet time for some IT pros, depending on the industry you work in, so I wondered if this would be a good time for a bit of a tidy up in the data centre.  The easy bit of that is to actually tidy up the physical environment, such as cabling left lying around or temporarily put in place that has now become “live”.  Actually I would love to see some photos of server room chaos, and I am sure Sara can organise a T-shirt for the messiest.

What I actually meant was tidying up the data on the servers. At the highest level you might have whole test or evaluation setups, possibly spanning several VMs, that you don’t really need anymore.  There might be individual random VMs on there as well. The challenge is whether they can be stopped and archived, and that depends on what the owner feels about them; a key technique for efficient data centre management is chargeback, or at least showback, as waste is a lot lower when you are paying for something!

VMs are very easy to snapshot/checkpoint, and hopefully you are aware of the impact of rolling back/reverting to a checkpoint on any given VM; if you can’t revert to that checkpoint, is there any point in keeping it?

Then there is the question of what is in those VMs. You might worry about whether they are all properly licensed, and actually if the licenses have expired is the VM any use anymore anyway?  You might also get some licenses back if you can shut redundant stuff down.

Looking at the software that’s on all those VMs, is it patched and up to date?  Even a test setup should be patched to a desired configuration to match the thing that you are testing, which might itself be the application of a patch.

Then there is the data that’s on there: is that dev, test or production data, and what protection should be accorded to it?    The VM itself may well have backups inside it which could be redundant, and hopefully your environment will let you reclaim space if you shrink those VMs.

Those are some of the problems you might want to address in your summer spruce up, but how to find the problems in the first place?  In the Microsoft world there are a couple of tools:

    • System Center. If this is being used as intended then you’ll know some key things about your services, VMs and data..
      • Who owns them,
• Whether they are compliant with your desired configurations for production, dev and test.
• What resources they are using and how close to any thresholds they are
• What software they are running (I am assuming here you are managing servers via Configuration Manager)

so you know where to start looking to clean things up, and if you are using self-service then VMs that are end of life will automatically be decommissioned

• Microsoft Assessment & Planning Toolkit.  This is a free tool which you run as required against your data centre to report back on what you have, and this can include non-Microsoft stuff as well.  You’ll need to give it various credentials for the discovery methods

The next thing is to ensure you have a good backup strategy, and then get rid of the deadwood safe in the knowledge that you do have a backup.  Of course you might then want to revisit retention of those backups if no one notices that you got rid of loads of stuff.

  • Installing SQL Server in a Virtual World

I do get occasional feedback that SQL Server is hard to install; all those screens, all those checkboxes etc. etc.  This is simply a reflection that there is so much in SQL Server aside from the database engine. However, if you know what you want and you are doing this regularly then script it; if you have less experienced staff you want to delegate the task to then script it; and if you want to reduce patching then er … script it.

On my recent VMware course it was obvious that the rest of the delegates, while generally keen on scripting, didn’t do this when deploying SQL Server on VMware, so here’s my advice (which also works on Hyper-V BTW)

What you can’t do in a virtual world is simply copy/clone a SQL Server virtual machine, because you’ll end up with two VMs with the same Active Directory SID, and SQL Server doesn’t like the server name to change once it’s installed. So this is how I do it, as I often need to build a quick SQL VM

    • Setup a VM with your guest OS of choice e.g. Windows Server 2012 Datacenter edition.
• Install the prerequisites for SQL Server, for example the .NET Framework 3.5 SP1
    • Use Image Prepare to partially install SQL Server
• SysPrep the machine (windows\system32\sysprep\sysprep.exe) to anonymise it – a typical run is sketched after this list
• Create an unattend.xml to be consumed when the VM comes out of SysPrep, and save this to the SysPrep folder above. Typically this answer file will join your new VM to your domain, set up the local admin account, input locale, date/time etc., and this TechNet library article will walk you through that.

Note: passwords in the answer file are stored in the clear, so plan around that.

• Save this off as a template.  That typically means saving the VHD on a share for later use
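For reference, a typical run of SysPrep for that step looks something like this (the exact switches depend on your build process):

# Generalise the image, consume the answer file saved above and shut down ready for templating
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\System32\Sysprep\unattend.xml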
     

    To use the template

    • Create a new VM using a PowerShell / PowerCLI script. For example here’s the sort of thing we used earlier in the year at our camps to create a server on Hyper-V..

# $VMName, $VMSwitch, $VMLocation and $VHDPath are defined earlier in the script
New-VM -Name $VMName -NoVHD -MemoryStartupBytes 1GB -BootDevice IDE -SwitchName $VMSwitch -Path $VMLocation
Set-VM -Name $VMName -ProcessorCount 2 -DynamicMemory
Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath
Start-VM -Name $VMName

which creates a simple VM with 2 processors and 1GB of dynamic memory, based on a given VHD, $VHDPath

    • Rename the machine. We have a clunky script Simon and I use in our camps to do this..

# Find the IP address of the new VM and look it up in DNS
# ($VMToRename and $NewVMName are set earlier in the script)

$vmip = Get-VMNetworkAdapter $VMToRename | where SwitchName -eq "CorpNet" | `
select -ExpandProperty "IPAddresses" | where {$_ -match "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$"}

$vmGuestName = [system.net.dns]::GetHostEntry($vmip)
$vmGuestName = $vmGuestName.HostName

# Now execute a remote PowerShell command on the guest to rename it

Invoke-Command -ComputerName $vmGuestName -ScriptBlock {
Rename-Computer -NewName $args[0] -DomainCredential contoso\administrator } -ArgumentList $NewVMName

Restart-Computer -ComputerName $vmGuestName -Wait -For PowerShell

    Hopefully you’ll write something better for production

Doing this from the installer UI and server manager is tedious and prone to mistakes, but there is another reason to do this all from the command line: you should be installing SQL Server onto an installation of Windows Server that has little or no UI. It’s called Server Core and is the default option for installing Windows Server 2012.  It cuts patching in half, and there’s no browser to secure, because it’s designed to be managed remotely. New in Windows Server 2012 is the ability to turn the user interface on and off (in 2008 R2 this was an install-time choice), and there’s a new halfway house installation called MinShell; my post on it is here.

Any VMware expert is going to read this and laugh, because vCenter has a built-in template capability so you don’t have to do all the PowerShell hand cranking to clone a sysprepped VM and then domain join it.  Any Hyper-V expert shouldn’t be doing this either, as System Center is how this is done in production: you can create not just templates of individual VMs, but architect services and set up self-service so users can ask for templated VMs via your service desk or directly from a portal.  However my point here is that under the covers this is the sort of thing you’ll need to do to run lots of SQL Server at scale for tier 1 applications where downtime is critical. Having said that, this sort of thing might be useful for labs and for setting up evaluations of SQL Server 2014 running on Windows Server 2012 R2.

  • Bi Product

I have just read Paul Gregory’s guest post for the TechNet Flash, and the two things that caught my eye were to be bi-lingual and to keep your skills up to date.  I put these two themes together and came up with the title for this post, which essentially means being skilled in more than one technology.  As we move into a world where some of the nuts and bolts are automated or outsourced away from us, having a set of skills that can bridge technologies is going to be more valuable. In my own case I think of two ways this works:

• I am pretty good on SQL Server especially BI, such that I could still have that on my CV with some confidence; however more recently I have been focusing on Windows Server and System Center, and my SQL background has really helped with this. For example in my Evaluate This series I used a SQL Server workload to show Hyper-V Live Migration, SQL Server running on a Storage Space and SQL running on Windows Server Core.
• I have a pretty good knowledge of VMware (I am a lowly VCP5) and I have MCSE Private Cloud.  This means I understand enough to be able to speak VMware and articulate the world of Hyper-V to VMware experts.

This ability to cross technologies comes into play with integration, where you know how to get product X to work with solution Y, and with migration, where I want to move from vendor 1 to vendor 2.  Those kinds of projects are always going on and have a number of advantages over other kinds of IT work:

• You are respected as the expert, and you can’t buy respect, you can only earn it
• The work is more challenging, so having the skills isn’t enough; you also need (going back to Paul Gregory's post) to relate to users and other technical teams.
• The day rates are higher, because the combination of those two skills is rare.

So while it’s quiet over the summer holidays (unless you are in education IT, in which case you have my complete respect!) start having a look at some new stuff, be it Windows Server 2012 R2 or SQL Server 2014, or have another look at the Microsoft Virtual Academy (MVA)

  • Virtualizing Tier 1 SQL Server

    A while back I published a couple of posts on virtualizing SQL Server, and in the light of developments in both the virtualization platform out there and SQL Server itself I feel the need to do a complete rewrite. 

The traditional approach to implementing high availability (HA) in SQL Server has been to create a cluster, and for this to be resilient three nodes or more are needed, so that HA is maintained across two nodes while one node is offline for planned maintenance, for example to patch the OS or the SQL Server node itself.   What does this mean for virtualization? If you are using Hyper-V it doesn’t really matter; the VMs comprising this cluster (aka a guest cluster) are kept on separate physical nodes of a (physical) cluster, and you can patch the hosts, the guest OS and SQL Server all using Cluster Aware Updating (CAU) in Windows Server 2012.  However it’s not quite so easy in VMware; you‘ll have to use VMware Update Manager to patch the hosts and then use CAU to patch the guest OS and SQL Server. Moreover as far as I know you can only have a two node guest cluster in vSphere, so while you are patching SQL Server you are down to one node.   So what if you have to use VMware and you want more in the way of HA, like you have on Hyper-V?

One option would be to use Availability Groups in SQL Server 2012 Enterprise edition. This combines the best of mirroring/log shipping with clustering:

• There’s no shared storage, so I don’t see why you couldn’t have a three node guest Windows cluster.
• Failover is very quick; as there’s no shared storage, each node has its own copy of the database being protected
• Unlike mirroring and log shipping you are protecting a group of databases as though they were one, and you can use the secondaries for reporting and as a source for backups (only full backups though). Plus you can have multiple secondaries, for example a synchronous secondary in your local data centre with an asynchronous copy at another location; a bit like replication in Hyper-V & VMware, but at the database rather than the VM level.  That’s an important point: you should use this technique rather than VM replication, as all you are synching is the actual SQL Server data. (A sketch of creating an availability group with the SQL 2012 cmdlets follows this list.)
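As a sketch only – I’m assuming two instances already enabled for AlwaysOn, and all the names here are illustrative – the SQL 2012 PowerShell cmdlets for building an availability group look something like this:

Import-Module SQLPS -DisableNameChecking
# Describe a replica for each instance; -AsTemplate builds the object without touching a server
$primary = New-SqlAvailabilityReplica -Name "SQLNODE1" -EndpointUrl "TCP://SQLNODE1.contoso.com:5022" `
  -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
$secondary = New-SqlAvailabilityReplica -Name "SQLNODE2" -EndpointUrl "TCP://SQLNODE2.contoso.com:5022" `
  -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
# Create the group on the primary and protect a database with it
New-SqlAvailabilityGroup -Name "MyAG" -Path "SQLSERVER:\SQL\SQLNODE1\DEFAULT" `
  -AvailabilityReplica @($primary, $secondary) -Database "MyDatabase"
# (each secondary then needs Join-SqlAvailabilityGroup to complete the setup)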

Your next consideration is going to be making sure you get a predictable level of performance, or your users might be phoning you if there are issues with speed as well.  Tuning in a physical world has occupied many a mind and there’s tons of advice out there from MVPs, TechNet etc.   Things get tricky in a virtual world as resources are shared.  However if you are running a tier 1 database then best practice would be:

• CPU – don’t over commit, and use all the NUMA capabilities in your hypervisor to pass through maximum performance to the database.  Bear in mind that for HA you might well want this capacity reserved on other nodes.
• RAM – can’t be over committed in Hyper-V, and while it can be on VMware, it shouldn’t be, as performance suffers.
• IO – use the latest SR-IOV cards, which can receive and manage virtual network traffic straight from the VMs, if you can.  However you can’t team SR-IOV cards, so you might want to pass through multiple SR-IOV NICs to a VM and then team inside the VM (you can do this in Hyper-V, but I am not sure if you can on VMware where the guest OS is Windows Server 2012).  If not, use NIC teaming at the host level and the appropriate teaming policies for access to the database (VMware advice on this is under networking policies here).
• Storage – access questions usually revolve around whether to use RDM/pass-through disks so that the database itself is stored directly on a LUN referenced by the VMs. There’s actually very little difference these days, and on both platforms you could use a share if you have a Windows Server 2012 file server running Storage Spaces.

The definitive white paper for virtualizing SQL Server 2012 is here. The latest version of a best practices guide for running SQL Server on VMware I could find is here, but it’s three years old and so applies to older versions of SQL Server (typically 2005/2008) and Windows Server 2008.  Hopefully this will change as Windows Server 2012 & SQL Server 2012 are now supported, and of course there’s going to be even more new stuff with SQL Server 2014 running on Windows Server 2012 R2. Whatever you decide to do you’ll want your HA design to be supported, and for the definitive word on that check KB956893.

Finally if you are a DBA reading this, one way to get to know your data centre admins is to help them with their SQL Server: whether they are using System Center or vSphere it’s likely that the database underpinning these is SQL Server, and it could probably do with a bit of TLC, and a general discussion about protecting those databases too, as they are vital components of your data centre.

    Note:

• My definition of tier 1 isn’t necessarily big; it’s more about the impact of a tier 1 service not being there. If it isn’t there you can’t operate, trade, function etc.  Of course systems like this tend to be heavily used, so predictable performance is important too.
• I haven’t mentioned VMware Fault Tolerance here because it has so many limitations that render it impractical for all but the smallest databases, and I find that if it’s tier 1 it’s generally very big and used by lots of people, so only having one vCPU doesn’t really work.
  • Virtualisation and Interoperability

I often think that Microsoft is a bit like the English language: a lot of people speak it, a lot of people don’t, and for many people it’s a second or third language they need for work, e.g. air traffic controllers.  In technology few of us run a totally Microsoft environment, a few more will have nothing to do with Microsoft for almost religious reasons, but the majority have some of this and some of that and are working hard every day to get everything to work.

So I am pleased that there are three really important announcements that make support issues for the majority of us a little easier:

• The Server Virtualisation Validation Platform (SVVP) has been updated, and Microsoft in conjunction with VMware will now support Windows Server 2012 running on vSphere 5.0 update 1 and 5.1.  That means you can phone Microsoft support when a VM running Windows Server 2012 doesn’t work properly on either of those versions of vSphere.  For example you might want to check that reverting to snapshots of your virtual domain controllers works properly when the domain controller is running at the Server 2012 domain functional level on top of vSphere 5.0 update 2 + ESXi 5.0 update 2 or later, both of which exchange information via a virtual machine GenerationID
• A couple of weeks ago at TechEd it was announced that Oracle will be supported to run either on Hyper-V or on Azure; that means the database, WebLogic servers, Oracle Linux and support for Java.
    • Open Management Infrastructure (OMI). A couple of months ago I interviewed the lead architect for Windows Server, Jeffrey Snover (the man behind PowerShell) as part of TechDays Online and he was talking about the work we are doing on OMI,  a standards based framework to enable cross platform management of all devices in the same way as WMI is used to manage Windows.  While tools like System Center already do a pretty good job at managing multiple platforms, from switches to phones to hypervisors and application software, OMI will make this possible without agents and enable management tools to work to or from the open source world.

Combining these three developments means you can get proper support for, and effectively manage, the heterogeneous hell that can arise in your data centre as a result of acquisitions, migrations, or because your policy is to have best of breed solutions for your business needs.   What this means for us IT professionals is that those of us who have multiple disciplines will be in more demand, so being skilled in say Hyper-V and VMware, or Oracle and SQL Server, will only be good for you.

  • Server 4024 part 3 - Networking

Databases are typically bound either by networking or IO, and actually these can be two sides of the same coin if you are using remote storage.  So what’s new in Windows Server to improve networking, both for access to shared storage and for access to database workloads from another tier in a service or directly from your users and their applications?

    So what is there in networking in Windows Server 2012 to help?

The answer is a lot, and the most tangible thing is NIC teaming, which is now built into the operating system. This can be used both for load balancing and to provide failover, and can either use LACP (Link Aggregation Control Protocol, formerly 802.3ad, now 802.1AX) on switches that support it, or use switch independent mode.  This has several advantages over the drivers that come with NICs to provide NIC teaming:

• It’s easy to set up and use, either from Server Manager or PowerShell (my post on using it is here)
    • You can team NICs from disparate providers  
• You can create NIC teams inside a VM if the VM has more than one NIC.  Whatever your hypervisor, you might want to do this if you have several of the new SR-IOV NICs in your hosts to provide failover, as these newer NICs can’t be teamed at the host because the network virtualisation is done on the card itself. (A one-line teaming example follows this list.)
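As a one-liner example of how simple the built-in teaming is (the NIC names are whatever Get-NetAdapter shows on your host):

# Team two physical NICs in switch independent mode with port-based load balancing
New-NetLbfoTeam -Name "DBTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts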

Note: In Hyper-V there is a per-VM setting to declare which NICs will comprise a team in the VM, and although I haven’t tried this on VMware it should be fine.

If you are using SQL Server inside a VM then support for those SR-IOV NICs will improve performance, but what’s more important in my opinion is the ability to regulate network bandwidth, just as you can regulate CPU and memory in SQL Server with Resource Governor. Network Quality of Service (QoS) can be set on a per-VM basis through Hyper-V, System Center Virtual Machine Manager or PowerShell
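For example, a hedged sketch of capping and prioritising bandwidth for a VM called SQL1 (the name is illustrative):

# Cap the VM at roughly 1Gb/s and give it a relative minimum bandwidth weight of 50
# (weights only apply if the virtual switch was created with -MinimumBandwidthMode Weight)
Set-VMNetworkAdapter -VMName "SQL1" -MaximumBandwidth 1GB -MinimumBandwidthWeight 50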

Then there are numerous improvements designed to deliver low-latency performance, to allow better remote storage access via SMB, and better data centre bridging support.

Finally, managing networks gets a lot easier with a comprehensive IP Address Management (IPAM) role which uses a SQL Server database to manage and monitor all your subnets, DHCP scopes and IP address usage. You also get DHCP guard and router guard options in the Hyper-V virtual switch to stop conflicts occurring between applications that might actually have the same IP addresses, such as in a multi-tenancy environment.
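Both guards are again just a PowerShell switch away, for example (the VM name is illustrative):

# Drop DHCP server offers and router advertisements coming from this tenant VM
Set-VMNetworkAdapter -VMName "Tenant1-Web" -DhcpGuard On -RouterGuard On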

     

So that’s a quick look at SQL Server 2012 running on Windows Server 2012. While I have been away, Windows Server 2012 R2 has been announced, as has SQL Server 2014. There are public betas you can download, but as ever a word of caution on those: you can’t upgrade from the betas to the final products, so take snapshots and backups if you want to look at them now.

  • Server 4024 part 2 - Management

    In the second post in this series I wanted to look at how changes to the way you manage things in Windows Server 2012 (WS 2012) affects your management of SQL Server 2012 (SQL 2012).

    Multi Server Management

Traditionally, to manage the OS we would remote desktop onto every server we managed and apply changes directly to it; however that’s not really viable anymore, for one thing virtualisation has led to a lot more servers for us to manage. Now there is Server Manager, and as with SQL Server Management Studio (SSMS) we can register multiple servers in one console and pretty well do everything we need to from there, including adding new features, checking performance and monitoring alerts and events, as well as starting or stopping services..

    server manager

    hopefully your servers will look healthier than mine, but at least I can see where the problems are

    Notes:

    PowerShell 3.0

    Allows all of the above to be done from the command line, and now is the time to bite the bullet and learn PowerShell;

     

• The new built-in PowerShell ISE has a lot of guidance in it to help you get started, like snippets, and help to set all the switches in commands you aren’t familiar with.
• If you have the SQL management tools installed then you’ll see you can select SQLPS as a module and get help on the specific SQL 2012 cmdlets as well – note you don’t have to load modules anymore in PowerShell 3, it’ll do that for you..

    sql and powershell

    The commands section on the right gives me help on the SQL Server PowerShell cmdlets

• PowerShell can be executed remotely on other machines, as well as in sessions which can be run in parallel and can persist after a reboot if needed, for example:

Invoke-Command -ComputerName London-SQL -ScriptBlock `
{
  # SQLPS auto-loads in PowerShell 3; localhost here is London-SQL itself
  Backup-SqlDatabase -ServerInstance "localhost" -Database adventureworks -BackupAction Database -CompressionOption On
}

    ..runs the PowerShell inside the braces on my London-SQL virtual machine

     

    MinShell

Given that you are managing servers remotely, why put all the tools on each server? We get this with SQL Server and generally don’t install SSMS on every server. WS2012 now allows you to deselect installing all the associated MMC snap-ins for every role/feature, and also allows you to remove some or all of the GUI as well. In fact the default install option for WS2012 is now Server Core, with nothing on it but PowerShell, Notepad, a command prompt, Registry Editor and Task Manager.

SQL 2012 runs just fine on Server Core, and with SQL 2012 SP1 you can also run Analysis Services and Integration Services, but not Reporting Services (for more info check here).  This reduces the attack surface of the OS and will cut your patching in half.
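Moving between these install levels is just a feature change; a hedged sketch:

# From a full install, removing the shell but keeping the management tools gives you MinShell
Uninstall-WindowsFeature Server-Gui-Shell -Restart
# removing Server-Gui-Mgmt-Infra as well takes you all the way down to Server Core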

Note: I already have posts on how to do this, and also how to properly work with sysprep using the image prepare and complete options when installing SQL Server, both from the installer and from the command line

     

    and finally..

    • Please  ensure your SQL Server is given a good home and try it on Windows Server 2012
    • I’ll be discussing this during my sessions at SQL Relay (in Glasgow, Leeds, Birmingham & Norwich)
  • Server 4024 part 1 – Virtualisation

A database is only as good as the operating system it resides on. SQL Server only runs on Windows, and given that Windows Server 2012 (WS2012) is a huge change from Server 2008 R2, what does this new OS mean for the DBA?  The reference to Server 4024 is because that would be WS2012 + SQL 2012, which is what you want to be running to get the most out of your modern hardware and push that through to your database engine.

    In this particular post I wanted to look at what Hyper-V in Windows Server 2012  does for SQL Server.

It used to be that SQL Server was limited by what the operating system could surface, but in a virtual world this becomes what the hypervisor provides to the virtual machine (VM). In WS2012 all of the limits for VMs have gone up by at least 4x..

    image

     

In many cases these limits are higher than the specifications of the physical servers that you are running SQL Server on today.  There’s also lots of use made of the latest developments in hardware that you may not have on your servers yet, for example:

• NUMA support, so you can pass through NUMA nodes to a VM for optimal VM performance, or allow your VMs to span NUMA nodes..
• SR-IOV is a way of making a PCI card look like multiple PCI devices, each of which can be presented to a particular virtual machine.  In WS2012 there is SR-IOV support for network cards (NICs), so that the virtualisation is handled by the card, not the hypervisor.  The clever bit that’s currently unique to WS2012 is that you can live migrate a VM using SR-IOV to a server that doesn’t have it, i.e. you don’t have to stop the VM.
    • IPsec can also be offloaded to NICs that support it
• Support for 4 x Fibre Channel HBAs in a VM.  Note this doesn’t prevent live migrations either, but the setup of the virtual SAN must be the same on the source and target hosts.

While I am on the subject of hardware, Hyper-V also introduces a new hard disk format, VHDX, which can be up to 64TB (note the VHD format is still supported) and which also allows you to efficiently use the 4KB sector size on the newer, larger hard disks by having a logical sector size of 4KB as well.
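Creating one of these is a one-liner; a sketch with illustrative values:

# A dynamic 64TB VHDX with a 4KB logical sector size to match newer large disks
New-VHD -Path "D:\VMs\BigData.vhdx" -SizeBytes 64TB -Dynamic -LogicalSectorSizeBytes 4096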

On a physical server running lots of VMs you’ll want to ensure that SQL Server gets a predictable set of resources, such as CPU and RAM.  Hyper-V has always allowed you to set and prioritise these, and in WS2012 you can also set maximum and minimum network bandwidth for each virtual NIC.

In the past SQL Server hasn’t been virtualised because of concerns about performance. As I have shown, that doesn’t really apply to WS2012, but you will need to follow best practice for setting it up and confirm that you are getting the performance you are expecting.  The best practice is on the SQL CAT (Customer Advisory Team) blog, and your testing should show that you are getting about 93% of the performance you had on equivalent physical hardware.

    So please  ensure your SQL Server is given a good home and try it on Windows Server 2012

     

    note: I’ll be discussing this during my sessions at SQL Relay (in Glasgow, Leeds, Birmingham & Norwich)

  • Read it and keep

I have to confess to having quite a few real books in my house. Some of these books are reproductions of originals or have really big high-definition pictures in them, so they don’t work well as e-books.  I also find that on occasion technical books work better in print than they do on a small e-book reader.  Having said that, I only have finite shelf space so I only keep the good (for me) stuff.

When I got my review copy of the Windows Server 2012 Hyper-V Installation and Configuration Guide, written by three of my MVP friends (Aidan Finn, Damian Flynn and Patrick Lownds), I honestly thought I'd do a quick post on it and then give it away as a prize, as I thought I was reasonably competent with this new OS.  It turns out I was wrong!

Firstly the book goes deep into corners that I have only briefly looked at, like how virtual (aka software defined) networks work in Hyper-V.  This is hard because it is only surfaced in Windows Server 2012 through PowerShell cmdlets; if you want a UI then you’ll need to use System Center Virtual Machine Manager 2012 SP1. Actually the PowerShell in this book is another reason I like it, as it shows the art of the possible, and that may help some of you get over the hurdle of learning it because you can see all the good stuff you can do once you have mastered the basics.

    Other examples of good deep technical content are the numerous grey boxes in the text e.g. Linux considerations on NUMA, what’s the right number for simultaneous live migrations, and Backup and Virtual Machine Mobility.

    Secondly there’s a lot of good discussion on when to use what, for example

    • Anti-Virus software for the physical hosts,
    • typical uses of file shares and SMB3 to host VM’s and what this means for high availability
    • How to use the new Fibre channel support in Hyper-V

Finally it’s well written. By that I mean it’s been written from the ground up, not a bad rehash of an earlier version. Like any good book it takes you on a journey in each section from simple to complex, so initially you’ll learn what something like a NIC team is and how to set one up via the UI, and then you’ll get all the nuances, best practices and real world stuff.

So if you want a copy to help you get the most out of Windows Server 2012, get your own; you’re not having mine!

  • Microsoft Free Software – a personal top ten

You might think this would make for a really short post, but actually there’s a huge amount of free tools and resources out there, and I have had to restrict myself to a top ten across the server and client, based on what Simon and I have used.  So please feel free to comment with your own and I’ll see what we can do about maintaining a list somewhere and rewarding good suggestions.

     

    Hyper-V Server

Yes, Microsoft does have a free operating system, although it’s restricted to just running highly available virtual machines.  With Hyper-V Server you are limited to running 8,000 virtual machines on a 64 node cluster, and you can only put 64 virtual processors in a virtual machine. Also note that there is no graphical interface, as this OS is very like Server Core and is designed to be remotely managed.  (I have a separate post just on Hyper-V Server here).

     

    Microsoft Security Essentials

If your organisation has fewer than ten PCs then this is the FREE antivirus for you, and it’s also free to use at home.  Security Essentials uses the same signatures as System Center Endpoint Protection and has won a slew of awards for being very user friendly.  You can install this on XP and Windows 7; for Windows 8 and Windows RT, Windows Defender does the same thing and is included.

     

    System Center Advisor

This is a lightweight best practice analyser for Windows Server and SQL Server environments.  It uses the same agent as System Center Operations Manager to collect telemetry about your servers and then sends this every day to the Advisor service.  The Advisor service then provides reports on errors and warnings you need to be aware of.  It uses your own certificates so it’s secure.  Like Operations Manager you can configure a gateway to collect the information from other internal servers and then send this daily to the Advisor service.  (I have posts on how to set it up and how to use it).

    SQL Server Express

If you only need a small database server then there’s quite a lot you can do with SQL Server Express.  The tools are essentially the same as its bigger brother, and you get Reporting Services if you need it to deliver rich reporting from your local database. If you don’t need all the tooling and just want a slimmed down engine behind your application then there’s another option, LocalDB

     

    MAPT

The Microsoft Assessment and Planning Toolkit seems to be universally ignored, despite being really useful in planning any kind of upgrade or migration, or just to make sense of what you have got already, given that you might be new in post. What it does is crawl your data centre with various credentials you supply and tell you what you have.  This might be nothing more than how many servers you have and what OS they are running, and even that, in a world of virtual machine sprawl, can be useful.  However if you were then to use it to plan a Windows Server 2012 migration project, it would allow you to get reports and plans on how to do that and what you’ll need.  It’s constantly updated, so always be sure to get the latest version.

One other thing to note: while it allows you to assess your licensing estate, the data is NOT reported back to Microsoft, so you won’t be getting loads of phone calls once you’ve run it, but you will at least know where you are.

     

    Data Classification Toolkit

Knowing about your infrastructure is one thing; what matters more is the data that’s in it, and to make sense of that there’s the Data Classification Toolkit. Like other solution accelerators it’s continually updated, and in this case it is now aware of the latest tools in Windows Server 2012 like Dynamic Access Control (my post on getting started is here).

    Please note the small print : Use of the Microsoft Data Classification Toolkit for Windows Server 2012 does not constitute advice from an auditor, accountant, attorney or other compliance professional, and does not guarantee fulfilment of your organization’s legal or compliance obligations. Conformance with these obligations requires input and interpretation by your organization’s compliance professionals.

     

    Windows Assessment and Deployment Kit (Windows ADK)

Simon and I still see a lot of weird and wonderful ways to deploy operating systems at scale, which is odd when Microsoft has two free tools, the first being the Windows ADK.  Actually the ADK should count as several free things itself, as it contains a number of useful utilities such as Windows PE, DISM and the User State Migration Tool.

    Microsoft Deployment Toolkit

This is another tool that’s kept up to date; in this case it supports scaled deployment of Windows 8 and Server 2012.  It may seem like an overlap with the Windows ADK, but that toolset requires knowledge of a lot of command line utilities like DISM, whereas the MDT is a UI driven process.

    OEAT

The Office Environment Assessment Toolkit (OEAT) scans client computers for add-ins and applications that interact with all versions of Office back to Office 97. It’s designed for detecting compatibility issues, but I have seen it used to track down large spreadsheets, which means someone in your organisation is using Excel instead of a proper database; at best that might mean there could be data quality issues in some of your reporting, and at worst it might mean you are storing customer and confidential information where it is not being properly controlled.

     

    Windows Live

    I still use three utilities from Windows Live to get key tasks done

    Note none of these run on Windows RT

I also wanted to share my also-rans that didn’t make my top ten..

    Windows Sysinternals:

ZoomIt – a Windows Sysinternals tool to make areas of your screen bigger

BGInfo – also part of the Windows Sysinternals tools, which shows key information about your servers on their desktops.

RDCMan – to manage remote desktops; great for managing Windows 8 / Server 2012 as it’s not based on RDP8, so the charms etc. don’t get in the way

and finally for a bit of fun, Ordnance Survey Maps. I occasionally need to get out of the office and off-road.  Street View is fine, but what if there are no streets and you need to get from A to B for fun on foot or by cycle?  In this instance Ordnance Survey maps are your friend, and they are free on Bing Maps if you are in the UK (just select Ordnance Survey from the left hand drop down list of map types), e.g.

    image

You can print them as well if you don’t want to take your slate, tablet or phone with you