Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform


  • Lab Ops 7 – Setting up a pooled VDI collection in Windows Server 2012 R2

    In Windows Server you can create two kinds of Virtual Desktop Infrastructure (VDI) collection, personal or pooled.  A personal collection is a bit like a company car scheme where everyone chooses their own car. This means there needs to be a car for everyone, even if they are on leave or sick, and each car needs to be individually maintained. However the employees are really happy as they can pimp their transport to suit their own preferences.  Contrast that with a car pool of identical cars, where an employee just takes the next one out of the pool and when it's brought back it's refuelled and checked ready for the next user, and you don't need a car for everyone as there'll be days when people just come to the office or use public transport to get to their destination.  That seems to be a better solution than company cars for the employer but not so good for the employees. Pooled VDI collections work like pool cars in that they are built from one template and so only one VM has to be maintained, but that means every user has the same experience, which might not be so popular.  However pooled VDI in Windows Server 2012 has a method for personalising each user's experience while still offering the ability to manage just one template VM, and that's why I want to use pooled VDI in my demos. 

    Carrying on from my last post, I right click on the RD Virtualization Host and select Create Virtual Desktop Collection.

    image

    Now I get to specify the collection type..

    vdi collection type

        

    Having chosen the collection type I now need to pick a template on which to base the pool..

    vdi template selection

    I found out that you can't use the new Hyper-V generation 2 VMs as a VDI template, even in Windows Server 2012 R2 RTM. This does mean I can still use that WimtoVHD PowerShell script I have been promoting in earlier posts in this series to create my template directly from the Windows installation media. 

    Note: you'll need Windows 8.1 Enterprise for this, which is currently only available on MSDN until 8.1 is generally available in a couple of weeks, when there should be an evaluation edition available.

    In fact for a basic VDI demo the VHD this creates can be used as is; all you need to do is create a new VM from this VHD, configured with the settings each of the VDI VMs will inherit, such as CPU, dynamic memory settings, virtual NICs and which virtual switches they are connected to, as well as any bandwidth QoS you might want to impose..  
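
    If you prefer to script that step, a minimal sketch might look like the following - the VM name, VHD path and memory values are just placeholders for my lab, and the QoS line is optional:

    # Create the template VM from the VHD produced by the WIM-to-VHD script (paths and names are examples)
    New-VM -Name "Win81-VDITemplate" -MemoryStartupBytes 1GB -VHDPath "E:\VDI\Win81Template.vhdx" -SwitchName "FabricNet"

    # Give it the dynamic memory settings the pooled VMs will inherit
    Set-VM -Name "Win81-VDITemplate" -DynamicMemory -MemoryMinimumBytes 512MB -MemoryMaximumBytes 2GB

    # Optionally cap the bandwidth on its virtual NIC (the value is in bits per second)
    Set-VMNetworkAdapter -VMName "Win81-VDITemplate" -MaximumBandwidth 100000000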

    vdi template details

    Here you can see the settings for my template VM, such as it being connected to my FabricNet virtual switch.

    Normally when you build VMs from templates you will want to inject an unattend.xml file into the image to control its settings as it comes out of sysprep (as I have done in earlier posts in this series). This wizard helps you with that, or you can just enter basic settings in the wizard itself as I have done ..

    vdi template settings

    and not bother with an unattend.xml file at all.

     

    vdi template unattend settings

    Now I can start to configure my collection by giving it a name, specifying how many VMs it will contain and who can access it ..

    vdi create collection

    In a production environment you would have several virtualization hosts to run your collection of VMs and here you can specify the load each of those hosts will have. 

    vdi specify hosts for collection

    Having specified which hosts to use I can now get into the specifics of what storage the VMs will use. I am going for a file share, specifically one of the file shares I created earlier in this series, which will make use of the enhancements to storage in R2.  Note the option to store the parent disk on a specific disk, which might be a good use of some of the new flash based devices as this will be read a lot but rarely updated.

    vdi vm location

    My final choice is whether to make use of user profile disks.  This allows all of a user's settings and work to be stored in their own virtual hard disk, and whenever they log in to get a pooled VM this disk is mounted to give them access to their stuff.  This is really useful if all your users only ever use VDI, as you don't need to worry about roaming profiles and so on.  However if your users sometimes use VDI and sometimes want to work on a physical desktop such as a laptop, then you'll want to make use of the usual tools for handling their settings across all of this so they get the same desktop whatever they use - remember we work for these people, not the other way around! 

    vdi user profile disk location 

    That's pretty much it - the desktops will build and your users can log in via the web access server, in my case by going to http://RDWebAccess.contoso.com/RDWeb

    To demo the differences in performance on a pooled VDI collection that sits on a storage space that's had deduplication enabled I could create another collection on the Normal* shares I created in my post on storage spaces by doing this all again.  Or I could just run a PowerShell command, New-RDVirtualDesktopCollection,  and set the appropriate switches..

    $VHost = "Orange.contoso.com"
    $RDBroker = "RDBroker.contoso.com"
    $CollectionName = "ITCamp"

    #The VDI template is a sysprepped VM whose virtual hard disk, network settings etc. all the pooled VMs will inherit. The VHD runs Windows 8.1, configured and sysprepped with any applications and settings needed by end users

    $VDITemplateVM = Get-VM -ComputerName $VHost -Name "Win81x86 Gen1 SysPrep"

    New-RDVirtualDesktopCollection -CollectionName $CollectionName -PooledManaged -StorageType CentralSmbShareStorage -VirtualDesktopAllocation 5 -VirtualDesktopTemplateHostServer $VHost -VirtualDesktopTemplateName $VDITemplateVM.Name -ConnectionBroker $RDBroker -Domain "contoso.com" -Force -MaxUserProfileDiskSizeGB 40 -CentralStoragePath "\\fileserver1\NormalVMs" -VirtualDesktopNamePrefix "ITC" -OU "VDICampUsers" -UserProfileDiskPath "\\fileserver1\NormalProfiles"
    My good friend Simon May can then gradually add more and more VMs into the collection with the Add-RDVirtualDesktopToCollection cmdlet to see how much space he can save.
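
    A minimal sketch of that, assuming the collection and host names used above (the allocation hashtable maps each virtualization host to the number of extra VMs to create on it):

    # Add two more pooled VMs to the ITCamp collection on the Orange host (names are from my lab)
    Add-RDVirtualDesktopToCollection -CollectionName "ITCamp" -VirtualDesktopAllocation @{"Orange.contoso.com" = 2} -ConnectionBroker "RDBroker.contoso.com"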

    The other really clever thing about a pooled VDI setup like this is maintaining it.  Clearly you will want to change the template the pooled collection is based on from time to time, for example to add or remove versions of applications and to keep patches up to date.  All you have to do is make another template VM with the new applications and latest patches and then update the collection from the collection management screen, or via the Update-RDVirtualDesktopCollection PowerShell cmdlet, for example

    PS C:\> Update-RDVirtualDesktopCollection -CollectionName "ITCamp" -VirtualDesktopTemplateName "$VDITemplateName" -VirtualDesktopTemplateHostServer $VHost -ForceLogoffTime 12:00am -DisableVirtualDesktopRollback -VirtualDesktopPasswordAge 31 -ConnectionBroker $RDBroker

    where I would have set $VDITemplateName to be the modified and sysprepped VM to base the updated collection on. Note the ForceLogoffTime setting; that's when users will be thrown out and forced to log on again.  If you don't set this they'll only get the new version when they log out and log in again.  However you manage that, if you have used user profile disks in the collection as I have done, their preferences and settings will persist on the updated collection.

     

    So that's the basics of setting up VDI on a laptop for your evaluations.  From here I could go on to add other parts of the Microsoft remote desktop solution such as:

    • Set up RemoteApp, the business of delivering individual applications over the Remote Desktop Protocol (RDP)
    • Add in RD Session Hosts and RDS collections to deliver the traditional terminal services functionality, still the most efficient way to deliver remote desktops to users
    • Add in an RD Gateway to open up the RD infrastructure to remote users.  Clever use of the Routing and Remote Access Service role in a VM that's connected to virtual networks can be used to simulate this connectivity from just one host. 
    • Add in a licensing host for production environments 

    However I would be interested to know what you would like me to post next, so please add comments or, if you are shy, e-mail me

  • Lab Ops 6 – Setup VDI in Windows Server 2012R2

    VDI bridges the world of the client and the server, and that can mean that in a world where IT professionals are either experts in one or the other (Simon and I being  a case in point), most of us have a partial knowledge of how VDI works.

    You might also be forgiven for thinking that VDI isn't a Microsoft strength, for two reasons:

    • It has been hidden behind offerings from third parties like Citrix; those partners are still important for larger implementations, and in the past they made the only decent remote desktop clients for all sorts of platforms such as Android and iOS.  That all changed on 7th October and, to quote the official press release:

    “.. with Windows Server 2012 R2 Microsoft is introducing the Microsoft Remote Desktop app, available for download in application stores later this month, to provide easy access to PCs and virtual desktops on a variety of devices and platforms, including Windows, Windows RT, iOS, OS X and Android.”

    • VDI isn't referred to as such in TechNet or in the Windows Server interfaces such as Server Manager – it's called Remote Desktop Services, which covers both the traditional terminal services way of providing a remote desktop and the use of pooled or personal client VMs.

    However since Windows Server 2012 it has been pretty easy to set up a secure and resilient VDI environment just using Windows client and server that offers your users a rich and personalised experience.

    So before I go into how to build a lab, what are the moving parts?  The easiest way to see this is post deployment, as there's an overview in Server Manager..

    VDI Overview

    The green objects have yet to be configured: the RD Gateway for external access to my VDI environment, and the licensing server. 

    The grey objects are configured and go blue when I hover over them:

    • The RD Web Access Server is the web server that users connect in from.
    • The RD Connection Broker is the middle tier which orchestrates the whole setup: which servers are performing which functions, where the VDI VMs are so it can control their state, and handling security to limit who can access which desktops and applications.
    • The RD Virtualization Hosts are the physical server(s) that the VDI VMs run from.
    • Optionally there is also an RD Session Host for Remote Desktop Sessions (Terminal Services as was)

    Notice the IT Camp collection. A collection is a pool of client VMs. The idea is to manage the collection as one VM and I’ll describe the options and how to configure collections in my next post. 

    One thing to note about RDS/VDI is that high availability (HA) is going to be very important, as no one will get any work done if there are no desktops to work on. For this reason my lab setup needs to be tagged with "don't try this at work!" Of course there's no point in evaluating this unless you are sure that you can enable HA, and this is the sort of setup that would allow for that..

    WP_20131007_001_thumb1

    Notes: 
    • The web access servers need to be load balanced as for any web site
    • The broker role can be clustered, and note the use of SQL Server to store metadata about the environment when this is the case. Of course that SQL Server database would need to reside on resilient shared storage or it becomes a single point of failure.  Also note that Broker1 and Broker2 can both handle connection requests, i.e. this is an active-active scenario and that's why there's a shared database in the mix.
    • The virtualization hosts have no special settings for HA; it's just that there's more than one of them, and typically you'll want to ensure you have enough spare server resources to be able to run your collection of VMs if one of the servers is no longer available. 
    • The VMs' virtual hard disks could be on shared storage but actually this doesn't matter too much; what matters is where the user's data is and where the parent disk of the VDI collection is. RDS in Windows Server 2012 has special user profile disks and it is these that should be on shared storage, and when you configure RDS you get to specify where these are stored as distinct from the virtual hard disks for each VDI VM.   

    At this point you might be wondering what happens if one of the virtualization hosts fails.  It's really simple - those users that had open desktops based on VMs on the failed server will lose their session.  However given that the shared storage behind all of the VMs is still there, when the user connects in again they won't have lost any saved work. So it's a bit like a real desktop crash, except that they can immediately sign back in and continue where they left off, as the broker will assign them an unused VM on a running server, and that will be quick because the VDI VMs are left in a saved state ready for immediate use.

    Anyway on with my lab setup which builds on earlier posts in this series..

    As I've already said I need at least one physical host to be the RD Virtualization Host, and my laptop in the diagram above is the "Orange" host. The other roles in the diagram above can all be one or more VMs themselves and those roles can be combined.  A production environment will need to use multiple VMs on different hosts, but you might not need to scale out the roles to dedicated VMs in smaller deployments.

    For my demo I am going to use a VM for the broker role (RDBroker), another for web access (RDWebAccess) and a third as a session host (RDSHost). To create all of these VMs I have adapted the setup scripts described in part 2 of this series so I can create them with a line of PowerShell each..

    Create-VM -Name RDWebAccess -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.20" -DnsSvr "192.168.10.2" -Domain "contoso.com"
    Create-VM -Name RDBroker -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.21" -DnsSvr "192.168.10.2" -Domain "contoso.com"
    Create-VM -Name RDSHost -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.22" -DnsSvr "192.168.10.2" -Domain "contoso.com"     

    Note: My sysprepped image used to create these VMs is based on the download from MSDN, so I am asked for a license key which I can skip, but it's a manual step. When an evaluation edition of Windows Server 2012 R2 is available this won't be a problem.

    I can now configure these blank VMs through Server Manager from one machine, provided you have already added them as servers to be managed, as I have (the servers in the yellow box)..

    image_thumb6 

    The first step is to add the roles and features to each of my server VMs, and since Windows Server 2012 this has been a special installation option, Remote Desktop Services installation.  As I mentioned before, Remote Desktop Services is a generic term in Windows Server to cover both VDI and Remote Desktop Sessions (what was terminal services), and so the wizard asks you which one you want.  For this post I am going for VDI. In the following screens for this wizard I can assign each role to one of my servers and the wizard will deploy the whole environment for me, and then show me that overview screen at the start of this post.
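
    If you'd rather script that initial deployment, a minimal sketch using the RemoteDesktop module might look like this, assuming the roles land on the same servers I picked in the wizard:

    # Deploy the broker, web access and virtualization host roles in one go (server names are from my lab)
    New-RDVirtualDesktopDeployment -ConnectionBroker "RDBroker.contoso.com" -WebAccessServer "RDWebAccess.contoso.com" -VirtualizationHost "Orange.contoso.com"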

    There are a couple of things you'll want to check before you actually create a VDI collection:

    Check the Deployment settings:

    image_thumb20

    For example you can add in your licensing server and specify the Web Access URL to use, and in my case I don't want the export location for my template to be on the RDBroker server as it's a VM, so I'll use a share somewhere else.  You'll also want to set up certificates for this, and when I have that properly researched I'll cover it off in this series.

    Increase the VM Build Concurrency on your Hosts

    By default Hyper-V builds VMs one at a time, which might be a good, if cautious, move for heavily used production hosts, but on my laptop(s) I want to up this, and this is the PowerShell to hack the appropriate registry setting and reboot the hosts for it to take effect..

    #replace "Server1","Server2" with your list of servers

    $CompNames = @("Server1","Server2")

    foreach ($CompName in $CompNames)
    {
        Invoke-Command -ComputerName $CompName -ScriptBlock {
            Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VmHostAgent\Parameters" `
                -Name "Concurrency" -Value 5
            Restart-Computer -Force
        }
    }

    Now I am ready to build “collections” of virtual machines and that’s coming up in the next post.

  • Lab-Ops part 12 – A crude but effective Domain Controller

    I realised I needed to recreate a Domain Controller in my labs and in so doing I noticed a snag in my earlier scripts that really breaks when I use the same snippet for a new DC.  I have this test to see if a VM is ready to be used..

    do {Start-Sleep -Seconds 10}

    until ((Get-VMIntegrationService -VMName $VMName | where Name -eq "Heartbeat").PrimaryStatusDescription -eq "OK")

    #the code to create the $localcred credential is at the end of this post

    It does work in that this will return true if the VM is on, but if a VM is coming out of sysprep this do...until loop will exit way before I can log in and actually use the VM. So then I tried this command in my until clause ..

    Invoke-Command -ComputerName 192.168.10.1 -ScriptBlock {dir c:\} -ErrorAction SilentlyContinue -Credential $LocalCred

    a crude but effective test based on whether I could connect to and run a simple command on the VM. That worked for most of my VMs, but this was still no good for my script to build a Domain Controller (DC). The problem here is that after I add in the feature (which doesn't require a reboot)..

    Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

    and then create the Domain with..

    Install-ADDSForest -DomainName Contoso.com -SafeModeAdministratorPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) -Force

    this will at some point cause a reboot, but this doesn't happen inline as this command is itself calling PowerShell in a session I can't control.  The result is that my script will continue to execute while this is going on in the background, so my test for a C: drive could pass before the reboot and I would be in a mess because some subsequent commands would fail while my VM reboots. So my hack for this is to capture the VM's uptime at this point..

    $Uptime = (Get-VM -Name $VMName).Uptime.TotalSeconds

    and test when the current uptime is LESS than $Uptime which can only be true after the VM has rebooted.

    do {Start-Sleep -Seconds 10}

    until ((Get-VM -Name $VMName).Uptime.TotalSeconds -lt $Uptime)

    Then I can test to see if the VM is ready to be configured by checking the Active Directory Web Service is alive on my new DC..

    Get-Service -Name ADWS | where Status -EQ Running

    However even after this test returned true I was still getting errors from PowerShell saying that a default domain controller couldn't be found, so I specified the DC with a -Server switch in each command, for example ..

    New-ADOrganizationalUnit -Description:"RDS VDI Collection VMs" -Name:"RDS-VDI" -Path:"DC=Contoso,DC=com" -ProtectedFromAccidentalDeletion:$true -Server:"RDS-DC.Contoso.com"

    Just to be extra sure I also slapped in a 20 second wait to ensure the service really was there, as I want this to run cleanly again and again.
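
    Putting those pieces together, a minimal sketch of the whole wait (using the $VMName, IP address and $LocalCred placeholders from this post) looks something like this:

    #capture the VM's uptime before kicking off the forest install
    $Uptime = (Get-VM -Name $VMName).Uptime.TotalSeconds

    # ... Install-WindowsFeature and Install-ADDSForest run here (see above) and trigger the background reboot ...

    #wait for the reboot: uptime can only drop below the captured value after the VM has restarted
    do {Start-Sleep -Seconds 10}
    until ((Get-VM -Name $VMName).Uptime.TotalSeconds -lt $Uptime)

    #then wait until the Active Directory Web Service responds on the new DC
    do {Start-Sleep -Seconds 10}
    until (Invoke-Command -ComputerName 192.168.10.1 -ScriptBlock {Get-Service -Name ADWS | where Status -eq Running} -ErrorAction SilentlyContinue -Credential $LocalCred)

    #and a final 20 second pause to be extra sure
    Start-Sleep -Seconds 20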

    I won’t bore you with the code for adding the rest of the users, groups etc. to Active Directory as the easiest way to write that is to do something to a Domain controller in the Active Directory Administrative Centre and grab the stuff you need from the PowerShell History at the bottom of the console..

    image

    I also showed you how to read and write to text based CSV files in part 5 of this Lab Ops Series so you could amend my script to have a whole list of objects to add in to your DC from a CSV file that you have previously lifted from a production DC.
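
    As a rough sketch of that idea (the CSV columns and file path here are made up for illustration):

    #users.csv has columns Name, SamAccountName and OU - a made-up layout for this example
    Import-Csv "C:\LabSetup\users.csv" | ForEach-Object {
        New-ADUser -Name $_.Name -SamAccountName $_.SamAccountName -Path $_.OU `
            -AccountPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) `
            -Enabled $true -Server "RDS-DC.Contoso.com"
    }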

    I also need a DHCP server in my lab and I typically put that as a role on my DC.  Here again you can see how PowerShell has improved for newbies like me..

    #Install the DHCP Role
    Add-WindowsFeature -Name "DHCP" -IncludeManagementTools -IncludeAllSubFeature
    #Authorize this DHCP server in AD
    Add-DhcpServerInDC -DnsName contoso.com 
    #Setup a scope for use with RDS/VDI later on
    Add-DhcpServerv4Scope -StartRange 192.168.10.200 -EndRange 192.168.10.254 -SubnetMask 255.255.255.0 -Name RDSDesktops -Description "Pool for RDS desktop virtual machines"
    #Set up the DNS Server Option (6) in DHCP so DHCP clients have the DNS Server entry set
    Set-DhcpServerv4OptionValue -OptionId 6 -value 192.168.10.1
    Set-DhcpServerv4OptionValue -OptionId 15 -value "contoso.com"

    Sadly the trusty old DHCP MMC snap-in doesn't have a history window, so I looked at the options set by the wizard and set them as you can see here.  Once all this is working I can go on to create the other VMs in this series. However this script also sets up and uses a Hyper-V internal virtual switch "RDS-Switch" and ensures that my physical host (Orange – which is my big Dell laptop) can connect to my new DC on that switch..

    # Setup the Networking we need - we'll use an internal network called RDS-Switch. If it's not there already create it and set DNS to point to our new DC (RDS_DC) on 192.168.10.1
    If (!(Get-VMSwitch | where name -EQ $VMSwitch )){New-VMSwitch -Name $VMSwitch -SwitchType Internal}
    # Now configure switch on the host with a static IPaddress and point it to our new VM for DNS
    $NetAdapter = Get-NetAdapter | Where name -Like ("*" + $VMSwitch + "*")

    #Note the use of the !(some condition) syntax to refer to not true
    If (!(get-NetIPAddress  -InterfaceAlias $NetAdapter.InterfaceAlias -IPAddress "192.168.10.100" -PrefixLength 24)) {New-NetIPAddress -InterfaceAlias $NetAdapter.InterfaceAlias -IPAddress "192.168.10.100" -PrefixLength 24} 
    Set-DnsClientServerAddress -InterfaceAlias $NetAdapter.InterfaceAlias -ServerAddresses "192.168.10.1"

    The final piece of the puzzle is to join my physical laptop to this domain, as I am going to need the host for VDI, and for now I am going to run that manually with the Add-Computer command..

    $LocalCred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "orange\administrator", (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force)
    $DomainCred = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList "Contoso\administrator", (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force)
    Add-Computer -ComputerName Orange -DomainName Contoso.com -LocalCredential $LocalCred -DomainCredential $DomainCred -Force

    Start-Sleep -Seconds 5

    Restart-Computer -ComputerName Orange -Credential $LocalCred

    ..and of course to test that I need to pull the host out of the domain before I test it again, with Remove-Computer. By the way, don't put the -Restart switch on the end of Add-Computer as that will bounce your host, and hence your DC as well, and while your host appears to be domain joined it doesn't show up in the domain.
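
    For completeness, the un-join I use between test runs is roughly this, run on the host itself with the credentials created above:

    #pull the host back out of contoso.com so the domain join can be tested again
    Remove-Computer -UnjoinDomainCredential $DomainCred -Force
    Restart-Computer -Force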

    I have posted the whole script on SkyDrive (Create RDS-DC.ps1); it's called RDS-DC as it's designed to underpin my Remote Desktop Services demos. Note that there are a couple of Write-Host lines in there to echo output to the console, where in reality you would log progress to a file.

    As ever any advice and comments on this is welcome and I can repay in swag and by properly crediting your work.

  • Lab Ops part 11 – Server Core

    My laptop running Windows Server 2012R2 looks like this when it starts:

    startmenu

    This is a good thing and a bad thing. It’s a good thing if:

    • You have a server you want to connect to from a touch device that may have a smaller form factor, and you have big fingers, so you can get to the task you want in a hurry as you can pin web sites, MMC snap-ins etc. to the Start Menu and organise them as I have done.
    • If you use your server for Remote Desktop Session Virtualization (Terminal Services as was) your users will see the same interface as their Windows 8.1 desktop, and will get a consistent experience.
    • You are an evangelist at Microsoft, who has to do lots of demos and isn't allowed near production servers!

    However if you are managing production servers at any kind of scale this is a bad thing, as you don't need all the tools and interfaces on every server you deploy.  All those tools expose interfaces like File Explorer and Internet Explorer, so if your servers are in the wild (not in managed data centres) then curious local admins might wish to use those tools to surf the net or reconfigure your servers.  Also these interfaces require patching and can consume resources.

    This is why Server Core was introduced in Windows Server 2008 and in Windows Server 2012R2 Server Core is the default installation option.  All you get if you install Windows Server with this option is:

    • Task Manager
    • Registry Editor
    • Notepad!
    • Command Line
    • SConfig, a lightweight menu of scripts to do basic configuration tasks.
    • and with Windows Server 2012 and later you also get PowerShell

    Server Core wasn’t a popular choice in Windows Server 2008 for  a number of reasons:

    • It was too limited. For example there was no ability to run the .NET Framework, so it couldn't run things like ASP.NET websites or SQL Server, and it didn't include PowerShell by default because PowerShell is also built on .NET.
    • 2008 wasn't set up for remote management by default, patching was problematic, and unless you paid for System Center there wasn't a tool to manage these servers at any sort of scale.
    • It was an install only option, so the only way to get back to a full interface was to do a complete reinstall.

    That has all been fixed in Windows Server 2012 and later, and so Server Core should be your default option for all your servers except those used as Remote Desktop Session Virtualization hosts.  You could achieve nearly the same result by doing a full install and ripping out all the interfaces once your server is configured the way you want, in which case you would go to Remove Features in Server Manager and in the features screen uncheck each of the options..

    ui features

    where:

    • The Server Graphical Shell has IE and File Explorer, the Start button and the Start Screen.
    • The Desktop Experience makes Windows Server behave like Windows 8.1 (complete with the Windows Store).
    • The Graphical Management Tools have Server Manager and MMC snap-ins.

    If you remove all of these from here or use PowerShell to do this:

    Remove-WindowsFeature -Name Desktop-Experience, Server-Gui-Mgmt-Infra, Server-Gui-Shell

    you essentially get Server Core. If you leave behind the Graphical Management Tools ..

    Remove-WindowsFeature -Name Desktop-Experience, Server-Gui-Shell

    You'll get what's known as "MinShell": all the management tools, but no extra fluff like IE and the Start menu, so this is also a popular choice.

    If you do elect to do a Server Core install and then later decide to put the management user interface back in, you need to remember that the binaries for these aren't on the server, so when you add the features in you'll need to specify a source switch, and before that you'll need to have access to the source for these features by mounting the appropriate .wim file using the Deployment Image Servicing and Management (DISM) command:

    MD C:\WS2012install

    DISM /Mount-WIM /WIMFile:D:\sources\install.wim /Index:4 /MountDir:C:\WS2012Install /ReadOnly

    Add-WindowsFeature -Name Server-Gui-Mgmt-Infra -Source C:\WS2012Install\Windows\WinSxS

    shutdown /r /t 0

    DISM /Unmount-WIM /MountDir:C:\WS2012Install /Discard

    Notes:

    • This turns Server Core in to a MinShell installation
    • D: is the install media for Windows Server 2012R2
    • Index is the individual installation on the media and for the evaluation edition 4 corresponds to the full install of Datacenter edition (you can run DISM /Get-WIMInfo /WIMFile:D:\sources\install.wim to see what there is; a PowerShell equivalent is sketched just after these notes)
    • shutdown /r restarts the server
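
    That PowerShell equivalent, if you prefer to stay in one console, is the Get-WindowsImage cmdlet from the DISM module:

    #list the editions (and their index numbers) inside the install.wim on the media
    Get-WindowsImage -ImagePath D:\sources\install.wim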

    This is actually quite a useful template as it doesn't have the binaries needed to put IE etc. back in but is still easy to manage. You could then use this as the basis for creating VMs for your labs and evaluations by sysprepping it:

    sysprep /generalize /shutdown /oobe

    and if you wanted to save space you could then use it as a parent for the differencing disks behind the various VMs in your lab environment.
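
    A minimal sketch of that last idea, with made-up paths:

    #each lab VM gets a small differencing disk whose parent is the sysprepped MinShell VHDX
    New-VHD -Path "E:\VMs\LabSvr1.vhdx" -ParentPath "E:\Masters\WS2012R2-MinShell-SysPrep.vhdx" -Differencing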

    There is a free version of Windows Server 2012R2 specifically designed for Hyper-V,  called Hyper-V Server 2012R2. This is very like Server Core, but you can’t add the UI back in and it only has a few roles in it specifically for Hyper-V, clustering and file servers as that’s all that is included in the license.

    To finish up, Server Core is really useful, and much easier to manage remotely than it was, as Windows Server is set up by default for remote management within a domain. Just as importantly, you might have dismissed a feature in Windows when it first showed up in a new version, but it's worth checking back on how it's changed in each new version, be that Server Core, Hyper-V, storage or those big squares!  

  • After Hours - Canon EOS talking to a Surface Pro over wifi

    Please note this is an after hours post, specifically about connecting a Canon EOS 6D to Windows 8/8.1.  I have written it for two reasons - so I can remember how to do it, and because you might need to do something like this for a camera enthusiast that you know who isn't a networking guy.

    Canon have made it relatively easy to connect the new EOS 6D, 70D etc. to your Android or iOS device and to a wifi hotspot to which your PC/laptop is connected.  However what I wanted to do was to configure Windows 8 as an ad hoc wireless connection point so I could remote shoot via wireless from my Surface Pro anywhere I happened to be: jungles, mountains, and the various events I go to.  However Windows 8 doesn't have a UI for this anymore, so you need to run a couple of netsh commands from an elevated prompt to get this working:

    netsh wlan set hostednetwork mode=allow ssid=MyWIFI key=MyPassword

    netsh wlan start hostednetwork

    ..where MyWIFI is the wireless network name you want and MyPassword is the password to connect to it. What this does is to add a new adapter into network connections..

    image

    In my case I renamed my connection to Canon, and also note that Deep6 has a three after it as I tried this a few times! Another thing you may see on forums is that you need to set up sharing when creating connections like this; that's only true if you want to do the old internet connection sharing. I don't need to do this for this scenario, which is just as well as our IT department has prevented me from doing this in group policy.
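
    If you want to check the hosted network is actually running, or turn it off when you are done, the same netsh context has show and stop commands:

    netsh wlan show hostednetwork

    netsh wlan stop hostednetwork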

    On  my Canon EOS6D I need to enable wifi
    IMG_6285

    then set it up by selecting the wifi function which is now highlighted.  From here I want to set up a PC connection, which is the Remote Control (EOS Utility) option..

    IMG_6288

    I have already done this a few times ..

    IMG_6291

    so to set up a new connection I choose unspecified. Now I need to find the network I created on my Surface Pro by finding a network..

    IMG_6292

    My ad hoc network is called Deep6 as opposed to FAF which is my home wireless network..

    IMG_6293

    my key is in ASCII so I select that on the next screen and then I get this dialog to enter my password ..

    IMG_6295

    Note you have to use the Q button on the back of the camera to enter the text window. I am asked about IP addresses; I select automatic as my wireless network will do that for me. Then I can confirm I want to start pairing devices..

    IMG_6297

    and then I will see this..

    IMG_6298

    I can now check that my 6D is talking to my new wireless access point (which I have called Deep6).

    image

    as you can see I have one device connected.

    So now I can use the supplied Canon software, the EOS Utility, to control my camera. Or so I thought, only all the control options are greyed out.  This is because you need to change the preferences to install and configure the WFT pairing utility, which detects your Canon and allows you to control it. To do this select the option to add the WFT pairing software to the startup folder.

    image

    You'll then get a little camera icon in your system tray and when your Canon is connected it'll pop up this window..

    image

    click connect and  you’ll see an acknowledgement and confirmation on the camera..

    IMG_6299

    in my case my Surface is called Vendetta. I click OK, and I am good to go and the camera saves the settings for me, which is great and in fact I can save 3 of them. In my case I have saved my surface connection and FAF to connect to my home wireless router.

    The Canon EOS  Utility will now work..

    IMG_6308

    Now I can start to have fun with this setup and my shots get saved to my Surface Pro..

    IMG_0003

  • Lab Ops part 15 - Combining MDT with Windows Update Services

    If we are going to deploy VDI to our users we are still going to have some of the same challenges as we would have if we still managed their laptops directly.  Perhaps the most important of these is keeping VDI up to date with patches.  What I want to do in this post is show how we can integrate Windows Server Update Services (WSUS) with MDT to achieve this:

    • Set up WSUS
    • Connect it to MDT
    • Approve patches
    • recreate the Virtual Desktop Template with the script I created in Part 12 of this series
    • Use one line of PowerShell to recreate my pooled VDI collection based on the new VDT.

    Some notes before I begin:

    • All of this is easier in Configuration Manager, but the same principles apply, plus I could do a better job of automating and monitoring this process with Orchestrator.  I am doing it this way to show the principles.
    • I am using my RDS-Ops VM to deploy WSUS on as it's running Windows Server 2012 R2 and I have a separate volume on this VM (E:) with the deduplication feature enabled, which as well as being home to my deployment share can also be the place where WSUS can efficiently keep its updates.  It's also quite logical; we normally keep our deployments and updates away from production and then have a change control process to approve and apply updates once we have done our testing.
    • RDS-Ops is connected to the internet already as I have configured the Routing and Remote Access (RRAS) role for network address translation (NAT)

    Installing & Configuring WSUS

    WSUS is now a role inside Windows Server 2012 & later, and on my RDS-Ops VM I already have a SQL Server installation so I can use that for WSUS as well.  The WSUS team have not fully embraced PowerShell (I will tell on them!) so although I was able to capture the settings I wanted and save those off to an xml file when I added in the roles and features, I also needed to run something like this after the feature was installed..

    .\wsusutil.exe postinstall SQL_INSTANCE_NAME="RDS-Ops\MSSQLServer" CONTENT_DIR=E:\Updates

    (the Scripting Guy blog has more on this here)

    Now I need to configure WSUS for the updates I want and there isn't enough out of the box PowerShell for that - I found I could set the synchronization to Microsoft Update with Set-WsusServerSynchronization -SyncFromMU, but there's no equivalent Get-WsusServerSynchronization command, plus I couldn't easily see how to set which languages I wanted, only products and classifications (whether the update is a driver, an update, a service pack, etc.), so unless you are also a .NET expert with time on your hands (and I am not) you will need to set most everything from the initial wizard and hope for better PowerShell in future. In the meantime rather than pad this post out with screengrabs I'll refer you to the WSUS TechNet documentation on what to configure and explain what I selected.. 

    • Upstream Server Synchronize. Set to  Microsoft Update
    • Specify Proxy Server. None
    • Languages. English
    • Products. I decided that all I wanted for now was to ensure I had updates for just Windows 8.1, Windows Server 2012 R2 and SQL Server 2012 (my lab has no old stuff in it).  This would mean I would have the updates I needed to patch my lab setup and my Virtual Desktop Template via MDT.
    • Classifications. I selected everything but drivers (I am mainly running VMs so the drivers are synthetic and part of the OS)
    • Synch Schedule. Set to daily automatic updates

    I ran the  initial synchronize process to kick things off and then had a look at  what sort of PowerShell I could use and I got a bit stuck.  

    I then looked at creating something like an automatic approval rule as you can see here..

    image

    only in PowerShell and came up with this ..

    Get-WsusUpdate | where classification -in ("Critical Update", "Security Updates") | Approve-WsusUpdate -Action Install -TargetGroupName "All Computers" # chuck in -whatif to test this

    which I could run behind my scheduled update. Anyway I have now set some updates as approved, so I can turn my attention to MDT and see how to get those updates into my deployment once they have actually downloaded onto my RDS-Ops server. BTW I got a message to download the Microsoft Report Viewer 2008 SP1 Redistributable package on the way.

    Top Tip: If the MDT stuff below doesn't work, check that WSUS is working by updating group policy on a VM to point to it.  Open GPEdit.msc, expand Computer Configuration -> Administrative Templates -> Windows Components -> Windows Update and set Specify intranet Microsoft update service location to http://<WSUS server>:8530, in my case http://RDS-Ops:8530
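
    If you'd rather script that check than click through GPEdit, the same thing can be done on a test VM by writing the well-known Windows Update policy registry values directly; a rough sketch (run gpupdate or restart the Windows Update service afterwards):

    #point a test VM at the WSUS server via the local policy registry keys
    New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Force | Out-Null
    New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -Force | Out-Null
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Name WUServer -Value "http://RDS-Ops:8530"
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" -Name WUStatusServer -Value "http://RDS-Ops:8530"
    Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" -Name UseWUServer -Value 1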

    If I now go into the MDT Deployment Workbench on my RDS-Ops VM I can edit my task sequence and, as with my last post on installing applications, it's in the State Restore node that my updates get referenced..

    image

    Note there are two places where updates can be applied, both pre and post application install, and both of these are disabled by default. The post application install pass would be good if you had updates in WSUS that applied to applications, not just the OS as I have just set up; the application updates could then be added on top of the base application install.  This is a nice touch, but how does MDT "know" where to get the updates from?  We can't really set anything in WSUS itself or apply any group policy because the machines aren't built yet.  The answer is to add one more setting into the Rules for the Deployment Share (aka CustomSettings.ini): WSUSServer=http://<WSUS Server>:8530 - I left the default port as is when I set up WSUS ..

    [Settings]
    Priority=Default
    Properties=MyCustomProperty

    [Default]
    DeploymentType=NEWCOMPUTER
    OSInstall=YES
    SkipAdminPassword=YES
    SkipProductKey=YES
    SkipComputerBackup=YES
    SkipBitLocker=YES
    EventService=http://RDS-Ops:9800
    SkipBDDWelcome=YES
    WSUSServer=http://RDS-Ops:8530

    SkipTaskSequence=YES
    TaskSequenceID=Win81Ref

    SkipCapture=YES
    DoCapture=SYSPREP
    FinishAction=SHUTDOWN

    SkipComputerName=YES
    SkipDomainMembership=YES

    SkipLocaleSelection=YES
    KeyboardLocale=en-US
    UserLocale=en-US
    UILanguage=en-US

    SkipPackageDisplay=YES
    SkipSummary=YES
    SkipFinalSummary=NO
    SkipTimeZone=YES
    TimeZoneName=Central Standard Time

    SkipUserData=Yes
    SkipApplications=Yes
    Applications001 ={ec8fcd8e-ec1e-45d8-a3d5-613be5770b14}

    As I said in my last post you might want to disable skipping the final summary screen (SkipFinalSummary=NO) to check it's all working (also don't forget to update the Deployment Share each time you do a test), and if I do that and then go into Windows Update on my Reference Computer I can see my updates..

    image

    So to sum up, I now have MDT set up to create a new deployment which includes any patches from my update server and a sample application (Foxit Reader), so I can keep my VDI collections up to date by doing the following:

    1. Approve any updates that have come in to WSUS since I last looked at it OR  Auto approve those I want by product or classification with PowerShell
    2. Add in any new applications I want in the Deployment Workbench in MDT.
    3. Automatically build a VM from this deployment with the script in part 13 of this series which will sysprep and shutdown at the end of the task sequence.
    4. Either create  a new collection with New-RDVirtualDesktopCollection or update an existing collection with Update-RDVirtualDesktopCollection where the VM I just created is the Virtual Desktop Template.

    Obviously this would look a little nicer in Configuration Manager 2012R2 and I could use Orchestrator and other parts of System Center to sharpen this up but what this gives us is one approach to maintaining VDI which I hope you’ll have found useful.

  • Lab Ops Part 10–Scale Out File Servers

    In my last post I showed how to build a simple cluster out of two VMs and a shared disk.  The VMs are nodes in this cluster..

    HA Cluster nodes

    and the shared “quorum” disk is used to mediate which node owns the cluster when a connection is lost between them..

    HA Cluster quorum disks

    However this cluster is not actually doing anything; it isn’t providing any service as yet. In this post I am going to fix that and use this cluster as a scale out file server.  This builds on top of what I did with a single file server earlier in this series. To recap I am essentially going to build a SAN, a bunch of disks managed by two controllers. The disks are the shared VHDX files I created last time and the controllers are the file server nodes in my cluster.

    First of all I need to add the Scale-Out File Server role into the cluster..

    Add-ClusterScaleOutFileServerRole -Name HACluster -Cluster HAFileServer

    If I go back to cluster manager I can see that the role is now installed..

    haFileserver role

    This time the pool will be created on the cluster, and while I could expand pools under storage in cluster manager and create a new storage pool via the wizard, it’s more interesting to look at the equivalent PowerShell.  If I look at the storage subsystems on one of my nodes with

    Get-StorageSubSystem | Out-GridView

    I get this..

    storage subsystem

    I need to use the second option so that my new pool gets created on the cluster rather than on the server I am working on. The actual script I have is..

    $PoolDisks = Get-PhysicalDisk | where CanPool -eq $true
    $StorageSubSystem = Get-StorageSubSystem | where FriendlyName -Like "Clustered Storage Spaces*"
    New-StoragePool -PhysicalDisks $PoolDisks -FriendlyName "ClusterPool" -StorageSubSystemID $StorageSubSystem.UniqueId

    Here again the power of PowerShell comes out:

    • $StorageSubSystem is actually an instance of a class, as you can see when I reference its unique ID as in $StorageSubSystem.UniqueId

    In the same way $PoolDisks is an array of disks, showing that we don't need to declare the type of object stored in a variable; it could be a number, a string, a collection of these or, in this case, a bunch of disks that can be put in a pool!

    • The use of the pipe command to pass objects along a process, and the simple where clause to filter objects by any one of their properties. BTW we can easily find the properties of any object with Get-Member, as in

    #if you try this on your windows 8 laptop you’ll need to run PowerShell as an administrator

    get-disk | get-member

    I said earlier that I am building a SAN, and within this my storage pool is just a group of disks that I can manage.  Having done that, my next task on a SAN would be to create logical units (LUNs). Modern storage solutions are making more and more use of hybrid setups where the disks involved are a mix of SSDs and hard disks (HDDs), intelligently placing the most used or 'hot' data on the SSDs.  Windows Server 2012 R2 can do this and it is referred to as tiered storage.  Because I am spoofing my shared storage (as per my last post) the media type of the disks is not set, and so tiered storage wouldn't normally be available.  However I can fix that with this PowerShell once I have created the pool..

    #Assign media types based on size
    # Use (physicalDisk).size to get the sizes

    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 9932111872 | Set-PhysicalDisk -MediaType SSD

    #Create the necessary storage tiers
    New-StorageTier -StoragePoolFriendlyName ClusterPool -FriendlyName "SSDTier" -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName ClusterPool -FriendlyName "HDDTier" -MediaType HDD

    #Get the storage tiers so they can be used to create a virtual disk later
    $SSDTier = Get-StorageTier "SSDTier"
    $HDDTier = Get-StorageTier "HDDTier"

    In the world of Windows Server storage I create a storage space, which is actually a virtual disk, and if I go into Cluster Manager and highlight my pool I can create a virtual disk...

    tiered storage1

     

    Foolishly I called my disk LabOpsPool, but it's a storage space, not a pool. Anyway I did check the option to use storage tiers (this only appears because of the spoofing I have already done to mark the disks as SSD/HDD). Next I can select the storage layout.

    tiered storage2

    and then I can decide how much of each tier I want to allocate to my Storage Space/Virtual Disk..

    tiered storage3

    Note the amount of space available is fixed - we have to use thick provisioning with storage tiers, whereas I could thin provision if I wasn't using them.  BTW my numbers are more limited than you would expect because I have been testing this, and the SSD number will be lower than you would think because the write cache also gets put on the SSDs.  Having done that the wizard will allow me to initialise the disk, put a simple volume on it and format it.

    tiered storage4

    Now I need to add the disk I created to Cluster Shared Volumes as I want to put application data (VMs and databases) on this disk.  Then I need to navigate to the Scale-Out File Server role in the left pane and create a share on the disk so it can actually be used..

    tiered storage5

    This fires up the same share wizard as you get when creating shares in Server Manager..

    tiered storage6

    I am going to use this for storing application data,

    tiered storage7

    it’s going to be called VMStorage

    tiered storage8

    my options are greyed out based on the choices I already made, but I can encrypt the data if I want to, and I don’t need to do any additional work other than check this.

    tiered storage9 

    I then need to set up file permissions. You'll need to ensure your Hyper-V hosts, database servers etc. have full control on this share to use it. In my case my Hyper-V servers are in a group I created, imaginatively titled Hyper-V Servers.

    The last few steps can of course be done in PowerShell as well so here’s how I do that, but note that this is my live demo script so some of the bits are slightly different

    #create two storage spaces - one will hold the dedup volume and the other the normal volume
    New-VirtualDisk -FriendlyName $SpaceName1 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 30Gb,5Gb -WriteCacheSize 1gb -ResiliencySettingName Mirror
    New-VirtualDisk -FriendlyName $SpaceName2 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 30Gb,5Gb -WriteCacheSize 1gb -ResiliencySettingName Mirror

    #create the dedup volume and mount it
    #First we need to put the disk into maintenance mode
    $ClusterresourceName = "*("+ $SpaceName1 + ")*"
    Get-ClusterResource | where name -like $ClusterResourceName | Suspend-ClusterResource
    $VHD = Get-VirtualDisk $SpaceName1
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    New-Partition -DiskNumber $Disk.Number -DriveLetter "X" -UseMaximumSize
    Format-Volume -DriveLetter "X" -FileSystem NTFS -NewFileSystemLabel "DedupVol" -Confirm:$false
    #note -usagetype Hyper-V for use in VDI ONLY!
    Enable-DedupVolume -Volume "X:" -UsageType HyperV
    #Bring the disk back on line
    Get-ClusterResource | where name -like $ClusterResourceName | Resume-ClusterResource

    #create the dedup volume and mount it
    #First we need to put the disk into maintenance mode
    $ClusterresourceName = "*("+ $SpaceName2 + ")*"
    Get-ClusterResource | where name -like $ClusterResourceName | Suspend-ClusterResource
    $VHD = Get-VirtualDisk $SpaceName2
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    New-Partition -DiskNumber $Disk.Number -DriveLetter "N" -UseMaximumSize
    Format-Volume -DriveLetter "N" -FileSystem NTFS -NewFileSystemLabel "NormalVol" -Confirm:$false
    #Bring the disk back on line
    Get-ClusterResource | where name -like $ClusterResourceName | Resume-ClusterResource

    #Add to Cluster Shared Volumes
    $StorageSpaces = Get-ClusterResource | where Name -Like "Cluster Virtual Disk*"
    ForEach ($Space in $StorageSpaces) { Add-ClusterSharedVolume -Cluster $ClusterName -InputObject $Space } 

    #create the standard share directory on each new volume
    $DedupShare  = "C:\ClusterStorage\Volume1\shares"
    $NormalShare = "C:\ClusterStorage\Volume2\shares"
    $VMShare     = "VDI-VMs"
    $UserShare   = "UserDisks"

    md $DedupShare
    md $NormalShare

    $Share = "Dedup"+$VMShare
    $SharePath = $DedupShare + "\" + $VMShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    $Share = "Dedup"+$UserShare
    $SharePath = $DedupShare + "\" + $UserShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    $Share = "Normal"+$VMShare
    $SharePath = $NormalShare + "\" + $VMShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    $Share = "Normal"+$UserShare
    $SharePath = $NormalShare + "\" + $UserShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    I know this can be sharpened up for production with loops and functions, but I hope it's clearer laid out this way.  Note I have to take the disks into maintenance mode on the cluster while I format them etc., whereas the storage spaces wizard takes care of that.  This script is part of my VDI demo setup and so I have enabled deduplication on one storage space and not on the other, to compare performance on each of these.

    Once I have created my spaces and shares I am ready to use them, and a quick way to test all is well is to do a quick storage migration of a running VM to one of the new shares. Just right click on a VM in Hyper-V Manager and select Move to bring up the wizard.

    tiered storage10

    after about 30 seconds my VM arrived safely on my File Server as you can see from its properties..

    tiered storage11
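
    The same test can be scripted; a minimal sketch (the VM name here is just an example from my lab, and the share is the NormalVDI-VMs share created above) would be:

    #move a running VM's storage onto one of the new continuously available shares
    Move-VMStorage -VMName "RDS-Ops" -DestinationStoragePath "\\HAFileServer\NormalVDI-VMs"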

     

    Hopefully that's useful - it certainly went down well in London, Birmingham & Glasgow during our recent IT Camps, and if you want the full script I used I have also put it on SkyDrive. Note that this script is designed to be run from FileServer2, having already set up the cluster with the cluster setup script I have already posted.

  • Lab Ops 2–The Lee-Robinson Script

    In my last post I mentioned how Marcus Robinson had adapted PowerShell scripts by Thomas Lee to build a set of VMs to run a course in a reliable and repeatable way.  With Marcus's permission I have put that setup script on SkyDrive; however he has a proper day job running his gold partnership Octari, so I am writing up his script for him.

    Notes on the script.

    Unattend.xml contains the instructions you can give to set up the operating system as it comes out of sysprep. Marcus has declared the whole unattend.xml file as an object, $unattendxml, which he then modifies for each virtual machine to set its name in Active Directory, the domain to join it to, its fixed IP address and its default DNS server.

    He makes use of functions to mount the target VHD, copy in the modified unattend.xml and then dismount it.  Overarching this is a Create-VM function which incorporates these functions to create his blank VMs with known names and IP addresses. However these VMs are not started, as there is as yet no domain controller for the other lab VMs to join.

    LabSvr1 in the script is going to be that domain controller, so the first thing to do is add in the AD Directory Services role, and here note the use of the PSCredential PowerShell object to store credentials and ConvertTo-SecureString for the password, so that the script can work securely on the remote VMs.  Starting the VM takes a finite amount of time, so Marcus checks to see when it's alive.

    Marcus then has a section to install various other workloads, all from the command line: Exchange, SharePoint and my favourite, SQL Server.  Before he can install some of these he has to install prerequisites such as the .NET Framework 3.5 (aka the NET-Framework-Core feature), and to do that he puts in the Windows Server install media. Having installed SQL Server he can then copy in the databases he needs, and here I might have attached them as well, as SQL has PowerShell to do this..

    #load SMO and connect to the local SQL Server instance so the database file can be attached
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server "(local)"
    $sc = New-Object System.Collections.Specialized.StringCollection
    $sc.Add("C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Mydatabase.mdf") | Out-Null
    $server.AttachDatabase("myDataBase", $sc)

    Anyway there’s lots of good stuff in here, and I’ll be using it to make my various demos for our upcoming IT Camps, now that the Windows 8.1 Enterprise ISO is on MSDN subscriptions. 

  • Lab Ops - Part 14 Putting Applications into the VDI Template with MDT

    In my last post I created a process to quickly create what MDT refers to as a Reference Computer and boot it directly from the LiteTouchPE_x64.iso in the deployment share.  This was then sysprepped and shut down so that I could use it to deploy VDI collections in Remote Desktop Services.   Now I want to add in a simple example of an application deployment along with the OS. Simple because I am only interested in the mechanics of the process in MDT rather than getting into any specifics about what is needed to install a particular application, e.g. command line switches, dependencies and licensing.  If you are going to do this for real I would recommend creating a new VM and testing the installation process from the command line, tuning the switches etc. for that application accordingly, before doing any work in MDT.  There is already lots of help out there on the forums for the application you are interested in.

    I am going to use Foxit Enterprise Reader for this example, as it's not a Microsoft application and it's a traditional application in that it's not what MDT calls a packaged application designed for the modern interface in Windows 8.1. It's also free (although you'll have to register to get the Enterprise Reader as an MSI) and I actually use it anyway to read PDF documents. All this is actually pretty easy to do, but I got caught out a few times and wading through the huge amount of MDT documentation can be time consuming, so I hope you'll find this useful. My steps will be:

    • Import the Foxit application into the Deployment Share in  the MDT Deployment Workbench
    • Modify the Task Sequence to deploy Foxit
    • Amend the Rules (aka CustomSettings.ini) of the Deployment Share properties to install Foxit without any intervention on my part.

    Import the Application. 

    To import an application in MDT all you need to do is navigate to the Applications folder in the Deployment Share, right click and select Import Application. However I thought it would be good to create folders for each software publisher, so I created a Foxit folder underneath Applications and then did right click -> Import Application from there. This looked OK in the Deployment Workbench, but there's no actual folder created on the Deployment Share.  This is by design; if you want to create your own physical folder structure then you should store the applications on a share you control and point MDT at them on that share rather than importing them, which is the "Application without source files or elsewhere on the network" option in the Import Application Wizard.

    Next I found that I couldn't import a file, only a folder, which I guess is typical for many applications, so I stored the Foxit msi in its own folder before importing it.

    The next thing that caught me out was the Command details.  It's pretty easy to install an msi; for Foxit this would be msiexec /i EnterpriseFoxitReader612.1224_enu.msi /quiet. However the Working directory entry confused me, because MDT has the application now, so surely I could just leave this empty? Well no, and this is not a problem with MDT, rather it's because of the way I am using it. Anyway I set the Working Directory to the UNC path of the Foxit Reader folder (\\RDS-OPS\DeploymentShare$\Applications\Foxit Enterprise Reader in my case) and that worked.
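
    If you want to sanity check the silent install before MDT gets involved, the same command line can be run by hand on a scratch VM first (the switches here are just the standard msiexec ones, nothing Foxit specific):

    # Test the silent install interactively first - /quiet suppresses the UI, /norestart avoids a surprise reboot
    msiexec /i "EnterpriseFoxitReader612.1224_enu.msi" /quiet /norestart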

    Modify the Task Sequence

    I just used a standard Task Sequence template in my last post which already has a step in it to install an application, but where is it?    The answer turns out to be that it’s inside the State Restore folder ..

    1461_03_08

    Anyway I changed the settings here to reference Foxit and all is well.

    Configure the Rules

    I didn't think I needed to make any changes to the rules (in my last post) as my deployment was already fully automated, so I was surprised to be presented with a popup asking me to confirm which application I wanted to install when I first tested this. So I needed to add two more settings, one to skip the screen asking which application to install and another to actually select it.  However the Rules identify applications by GUID not by name, so I had to get the GUID from the General tab of the Application properties and enter it like this..

    SkipApplications=Yes

    Applications001={ec8fcd8e-ec1e-45d8-a3d5-613be5770b14}

    Your GUID will be different, and if you want more than one application then you would add more afterwards (Applications002=, Applications003= etc.).
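
    If you would rather not copy the GUID out of the properties dialog, one way to list them is to read the deployment share's own metadata; I believe MDT keeps this in Applications.xml under the Control folder, and the path below is my share, so treat this as a sketch:

    # List application names and their GUIDs straight from the deployment share
    $Apps = [xml](Get-Content "\\RDS-OPS\DeploymentShare$\Control\Applications.xml")
    $Apps.applications.application | Select-Object Name, guid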

    I also set SkipFinalSummary=No at this point, as I wanted to see if everything was working before the VM switched off.

    Summary

    MDT also has the ability to deploy bundles of applications, and before you ask, you'll need to do something completely different for Office 365 and Office 2013 Pro Plus; my recommendation for a simple life would be to use Application Virtualization aka App-V.  This is included in MDOP (the Microsoft Desktop Optimization Pack) and is one of the benefits of having Software Assurance.  That's a topic for another day based on feedback and my doing a bit more research.  Next up, the exciting world of patching VDI.

  • Lab Ops part 8 –Tidying up

    If you have been to one of our IT camps you'll know it's all live and unstructured, which can mean things go wrong, typically because we didn't tidy up properly after the previous event. The problems of repeatedly creating VMs are that:

    The VM still exists in Hyper-V

    The files making up the VM are still on disk

    The VM is registered in Active Directory (we don't currently recreate the main domain controllers for each camp)

    So the sensible thing is to have scripts to tidy this up before I run each demo again; in fact my plan is to have a clean up section at the start of every demo script so I never forget to run it.  My advantage over some of the PowerShell gurus is that I am starting from scratch and can leverage the power of the later versions of PowerShell in Windows Server 2012 & R2. For example to delete a DNS record there is now Remove-DnsServerResourceRecord, where before you would have had to use complex WMI calls.

    My clean up code needs to run without errors whatever state my rig is in, so where possible I test for objects before I delete them.  Sadly I found one problem here and that was with the DNS cmdlets – if you do Get-DnsServerResourceRecord it returns an error if the record is not there, which I think is wrong, but at least the command is easy to use.
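
    So for DNS records the test-then-delete pattern ends up looking something like this (zone and record names are from my rig; just a sketch rather than anything clever):

    # Suppress the error if the record isn't there and only delete when it is
    $Record = Get-DnsServerResourceRecord -ZoneName "contoso.com" -Name "HAFileCluster" -RRType A -ErrorAction SilentlyContinue
    If ($Record) {Remove-DnsServerResourceRecord -ZoneName "contoso.com" -Name "HAFileCluster" -RRType A -Force}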

    Anyway here’s what I do..

    #1. delete the File Server VM. Note that before a VM can be deleted it must be shut down, no matter what state it's in when the script is run

    $vmlist = get-vm | where vmname -in $vmname
    $vmlist | where state -eq "saved" | Remove-VM -Verbose -Force
    $vmlist | where state -eq "off" | Remove-VM -Verbose -Force
    $vmlist | where state -eq "running" | stop-vm -verbose -force -Passthru | Remove-VM -verbose -force

    #2. Get back the storage. My scripts to create VMs put all of the metadata and virtual hard disks in the same folder, so once the VM has been deleted in Hyper-V, I can just delete its folder.

    $VMPath = $labfilespath + $VMName
    If (Test-Path $VMPath) {Remove-Item $VMPath  -Recurse}

    #3. remove FileServer1 from AD if it's there already.
    Get-ADComputer -Filter {name -eq $VMName} | Remove-ADObject -Recursive -Confirm:$false

    Notes on the above..

    • The code to delete the VMs makes extensive use of the pipe "|" operator, perhaps the most important part of PowerShell.  What crosses over this pipe is not text but the actual instance of the object being referenced. For example the VM properties include the host it is running on, so this doesn't need to be specified.
    • Another good example of how much easier PowerShell 3 and later is: the much simpler where clause means we don't need the {$_.Property} script block syntax any more (see the quick comparison after these notes).
    • I got the code to remove the entry for the VM from active directory by grabbing it from the Active Directory Administrative Console which is not only a one stop shop for AD, but shows you the PowerShell for each command you run in it for exactly this purpose..

    adac

    • If you want to delete a cluster in AD you’ll need to turn off the switch to stop it being accidentally deleted ..  

    Set-ADObject -Identity:"CN=HAFILECLUSTER,CN=Computers,DC=Contoso,DC=com" -ProtectedFromAccidentalDeletion:$false -Server:"Orange-DC.Contoso.com"

    • I haven't got any code here to delete DNS entries, as creating VMs the way I do with static IP addresses doesn't need this. However later in this series I'll be creating clusters and scale out file servers and that's when I'll need to do this..

    invoke-command -ComputerName Orange-DC -ScriptBlock {remove-DnsServerResourceRecord "HAFileCluster" -ComputerName orange-dc -zone contoso.com -RRType A -Force}
    invoke-command -ComputerName Orange-DC -ScriptBlock {remove-DnsServerResourceRecord "HAFileServer" -ComputerName orange-dc -zone contoso.com -RRType A -Force}

    where HAFileCluster is my actual cluster and HAFileServer is the scale out file server role running on that cluster.  I am invoking this command remotely as I am running it from a cluster node which doesn't have the DNS Server PowerShell cmdlets, because that role is not installed on the nodes.
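
    To show the difference in the where syntax mentioned in the notes above, here is an illustrative one liner either way:

    # PowerShell 2 style script block syntax
    Get-VM | Where-Object {$_.State -eq "Running"}
    # PowerShell 3 and later simplified syntax - no braces or $_ needed for a simple comparison
    Get-VM | where State -eq "Running"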

    As ever I hope you find this stuff useful, and please let me know if you have any comments. In the meantime I am at TechDays Online for the rest of the week.

  • Lab Ops Part 13 - MDT 2013 for VDI

    The core of any VDI deployment is the Virtual Desktop Template (VDT), which is the blueprint from which all the virtual desktop VMs are created.  It occurred to me that there must be a way to create and maintain this using the deployment tools used to create real desktops, rather than the way I currently hack the Windows 8.1 Enterprise Evaluation iso with this PowerShell ..

    $VMName = "RDS-VDITemplate"
    $VMSwitch = "RDS-Switch"
    $WorkingDir = "E:\Temp VM Store\"
    $VMPath = $WorkingDir + $VMName
    $SysPrepVHDX = $WorkingDir + $VMName +"\RDS-VDITemplate.VHDX"

    # Create the VHD from the Installation iso using the Microsoft Convert windows image script
    md $VMPath
    cd ($WorkingDir + "resources")
    .\Convert-WindowsImage.ps1 -SourcePath ($WorkingDir +"Resources\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_ENTERPRISE_EVAL_EN-US-IRM_CENA_X64FREE_EN-US_DV5.ISO") -Size 100GB -VHDFormat VHDX -VHD $SysPrepVHDX -Edition "Enterprise"
    #Create the VM itself
    New-VM -Name $VMName -VHDPath $SysPrepVHDX -SwitchName $VMSwitch -Path $VMPath -Generation 1 -BootDevice IDE

    # Tune these setting as you need to
    Set-VM -Name $VMName -MemoryStartupBytes 1024MB
    Set-VM -Name $VMName -DynamicMemory
    Set-VM -Name $VMName -MemoryMinimumBytes   512Mb
    Set-VM -Name $VMName -AutomaticStartAction StartIfRunning
    Set-Vm -Name $VMName -AutomaticStopAction  ShutDown
    Set-Vm -Name $VMName -ProcessorCount       2

    So how does a deployment guy like Simon create Windows 8.1 desktops? He uses the Microsoft Deployment Toolkit 2013 (MDT) and the Windows Assessment and Deployment Kit 8.1 (ADK) that it's based on.  So I created another VM, RDS-Ops, with these tools on and started to learn how to do deployment.   I know that when I create a collection with the wizard or with PowerShell (e.g. New-VirtualDesktopCollection) I can specify an unattend.xml file to use as part of the process. The ADK allows you to do this directly, but I am going to build a better mousetrap in MDT because I want to go on to deploy Group Policy Packs, updates and applications, which I know I can do in MDT as well.

    If you have used MDT please look away now, as this isn't my day job. However there don't seem to be any posts or articles on creating a VDT from either the ADK, MDT or even System Center Configuration Manager, so I am going to try and fill that gap here.

    I wanted to install MDT onto a VM running Windows Server 2012R2 with 2x VHDXs, the second one being for my deployment share so I could deduplicate the iso and wim files that will be stored there. I then installed the ADK, which is a two step process: the initial ADK download is only tiny because it pulls the rest of the installation files as part of the setup, so I first ran adksetup /layout <Path> on an internet connected laptop, then copied the install across to the VM (along with MDT) and then ran..

    adksetup.exe /quiet /installpath <the path specified in the layout option> /features OptionId.DeploymentTools OptionId.WindowsPreinstallationEnvironment OptionId.UserStateMigrationTool

    before installing MDT with:

    MicrosoftDeploymentToolkit2013_x64.msi /Quiet

    Now I am ready to start to learn and demo MDT to build my template, based on the Quick Start Guide for Lite Touch Installation included in the MDT documentation, which goes like this:

    • On the machine running MDT, create a Deployment Share
    • Import an OS - I used the Windows 8.1 Enterprise Eval iso for this by mounting the iso on the VM and importing from that.
    • Add in drivers packages and applications - I will do this in a later post 
    • Create a task sequence to deploy the imported image to a Reference Computer. 
    • Update the Deployment Share which builds a special image (in both wim and iso formats)
    • Deploy all that to a Reference Computer and start it
    • The deployment wizard that runs on the Reference Computer when it comes out of sysprep allows you to capture an image of it back into MDT.
    • Capture that image from the Reference Computer
    • Create a task sequence to deploy that captured image to the Target computers
    • Update the Deployment Share again with the captured image in and optionally hook it up to Windows Deployment Services and you are now ready to deploy your custom image to your users’ desktops.

    However I deviated from this in two ways:

    1. Creating the Reference Computer:

    All I needed to do here was to create a VM (RDS-Ref) based on the iso created by the deployment share update process..

    $VMName = "RDS-Ref"
    $VMSwitch = "RDS-Switch"
    $WorkingDir = "E:\Temp VM Store\"
    $VMPath = $WorkingDir + $VMName
    $VHDXPath = $WorkingDir + $VMName +"\" + $VMName +".VHDX"

    # Housekeeping 1. delete the VM from Hyper-V
    $vmlist = get-vm | where vmname -in $vmname
    $vmlist | where state -eq "saved" | Remove-VM -Verbose -Force
    $vmlist | where state -eq "off" | Remove-VM -Verbose -Force
    $vmlist | where state -eq "running" | stop-vm -verbose -force -Passthru | Remove-VM -verbose -force
    # Housekeeping 2. get back the storage
    If (Test-Path $VMPath) {Remove-Item $VMPath -Recurse}
    # Create a new VHD
    md $VMPath
    new-VHD -Path $VHDXPath -Dynamic -SizeBytes 30Gb

    #Create the VM itself
    New-VM -Name $VMName -VHDPath $VHDXPath -SwitchName $VMSwitch -Path $VMPath -Generation 1

    #Attach iso in the deployment share to build the Reference Computer from the MDT VM (RDS-OPs)
    Set-VMDvdDrive -VMName $VMName -Path '\\rds-ops\DeploymentShare$\Boot\LiteTouchPE_x64.iso'
    Start-VM -Name $VMname

    Once this VM boots from the LiteTouch iso it will launch the Deployment Wizard on the Reference Computer.  I designed the script to be run again and again until I get it right, which was good because I kept making mistakes as I refined it.  The documentation is pretty good, but I also referred to the excellent posts by Mitch Tulloch on MDT, especially part 7 on automating Lite Touch by editing the INI files on the Deployment Share properties, as described below.

    2. Completing the Deployment Wizard on the Reference Computer

    In the Lite Touch scenario the Reference Computer is captured back into MDT and used to deploy to target computers, usually by using the Windows Deployment Services role in Windows Server directly or via Configuration Manager. In VDI the target computers are VMs and their deployment is handled by the RDS Broker, either in Server Manager or with the Remote Desktop PowerShell commands like New-VirtualDesktopCollection.  Whichever way I create VDI collections, all I need is that virtual desktop template, and in this case that's just the Reference Computer, but it needs to be turned off and in a sysprepped state.  The good news is that the Deployment Wizard in MDT 2013 has exactly this option, so I can select that, and when it's complete all I need to do is remember to eject the iso with the Lite Touch pre-execution environment on it (or that will be inherited by all the virtual desktops!).

    Automation

    If you are with me so far you can see we have the makings of something quite useful, even in production.   What I need to do now is automate this so that my Reference Computer will start, install and configure the OS based on my Deployment Share, and then sysprep and shut down without any user intervention. To do that I need to modify the bootstrap.ini file that launches the deployment wizard (from the Deployment Share properties go to the Rules tab and select Edit Bootstrap.ini)..

    [Settings]

    Priority=Default

    [Default]

    DeployRoot=\\RDS-OPS\DeploymentShare$

    UserID=Administrator

    UserDomain=CONTOSO

    UserPassword=Passw0rd

    KeyboardLocale=en-GB

    SkipBDDWelcome=YES

    to tell the wizard where my deployment share is and how to connect to it, and to suppress the welcome screen. Then I need to modify the rules themselves (CustomSettings.ini) so that the wizard uses my task sequence, hides all the settings screens and supplies the answers to those settings directly..

    [Settings]

    Priority=Default

    Properties=MyCustomProperty

    [Default]

    DeploymentType=NEWCOMPUTER

    OSInstall=YES

    SkipAdminPassword=YES

    SkipProductKey=YES

    SkipComputerBackup=YES

    SkipBitLocker=YES

    EventService=http://RDS-Ops:9800

    SkipBDDWelcome=YES

    SkipTaskSequence=YES

    TaskSequenceID=Win81Ref

    SkipCapture=YES

    DoCapture=SYSPREP

    FinishAction=SHUTDOWN

    SkipComputerName=YES

    SkipDomainMembership=YES

    SkipLocaleSelection=YES

    KeyboardLocale=en-US

    UserLocale=en-US

    UILanguage=en-US

    SkipPackageDisplay=YES

    SkipSummary=YES

    SkipFinalSummary=YES

    SkipTimeZone=YES

    TimeZoneName=Central Standard Time

    SkipUserData=Yes

    Note two of these settings in particular:

    • EventService enables monitoring, which is very useful as none of the wizard screens will show up the way I have this set now!
    • MDT 2012 and later allow you to sysprep and shut down a machine, which is just what I need to create my Virtual Desktop Template.

    So what’s really useful here is that when I change my deployment share to add in applications and packages, modify my Task Sequence or the INI settings above, all I need to do to test the result each time is to recreate the Reference Computer like this:

    • Stop the Reference Computer VM (RDS-Ref in my case) if it's running, as it will have a lock on the deployment iso
    • Update the Deployment Share
    • Run the Powershell to re-create and start it.
    • Make more coffee

    Having got that working I can now turn my attention to deploying applications (both classic and modern) into my VDI collections, and then think about an automated patching process.

  • Lab Ops 4 – Using PowerShell with Storage

    In previous posts I created a fileserver for my lab environment and used the UI to create a storage space on it.  Now I want to automate the creation of two storage spaces over the same storage pool, which will then underpin my VDI demos. At the time of writing the only resource I could find was on Jose Barreto’s blog.

    Before I get to the PowerShell this is a quick sketch of how my file server is going to be configured..

    WP_20131001_002

    My VM already has 6 virtual hard disks attached from my last post: 1x IDE for the operating system and 5 SCSI disks.  Two of these SCSI disks are 50Gb in size, the rest are 1Tb.

    On top of this I want to create a pool "VDIPool" onto which I will put two storage spaces (virtual disks).  The dedup space at the top will have a single partition (X:) volume which will be set up for VDI deduplication, while the normal volume/partition (N:) on the normal space won't have this enabled but will be identical in size and configuration.  Both volumes will then have shares set up on them to host the VDI virtual machine templates and user disks.

    All of this only takes a few lines of PowerShell (fewer if you don't like comments and want your commands all on one line!)..

     

    $FileServer = "FileServer1"
    $PoolName = "VDIPool"
    $SpaceName1 = "DedupSpace"
    $SpaceName2 = "NormalSpace"

    #1. Create a Storage Pool
    $PoolDisks = get-physicaldisk | where CanPool -eq $true
    $StorageSubSystem = Get-StorageSubSystem
    New-StoragePool -PhysicalDisks $PoolDisks -FriendlyName $PoolName -StorageSubSystemID $StorageSubSystem.UniqueId

    #2. tag the disks as SDD,HDD by size
    #   note: to get the actual size of the disks use (Get-PhysicalDisk).Size

    Get-PhysicalDisk | where size -eq 1098706321408 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType SSD

    #3. Create the necessary storage tiers
    New-StorageTier -StoragePoolFriendlyName $PoolName -FriendlyName "SSDTier" -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName $PoolName -FriendlyName "HDDTier" -MediaType HDD

    #4. Create a virtual disk to use some of the space available
    $SSDTier = Get-StorageTier "SSDTier"
    $HDDTier = Get-StorageTier "HDDTier"

    #5. create two Storage spaces
    New-VirtualDisk -FriendlyName $SpaceName1 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 1Tb,30Gb -WriteCacheSize 2gb -ResiliencySettingName Simple
    New-VirtualDisk -FriendlyName $SpaceName2 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 1Tb,30Gb -WriteCacheSize 2gb -ResiliencySettingName Simple

    #6. create the dedup volume and mount it
    $VHD = Get-VirtualDisk $SpaceName1
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    Initialize-Disk $Disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $Disk.Number -DriveLetter "X" -UseMaximumSize
    Format-Volume -DriveLetter "X" -FileSystem NTFS -NewFileSystemLabel "DedupVol" -Confirm:$false
    #note -usagetype Hyper-V for use in VDI ONLY!
    Enable-DedupVolume -Volume "X:" -UsageType HyperV

    #7. create the non dedup volume and mount it
    $VHD = Get-VirtualDisk $SpaceName2
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    Initialize-Disk $Disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $Disk.Number -DriveLetter "N" -UseMaximumSize
    Format-Volume -DriveLetter "N" -FileSystem NTFS -NewFileSystemLabel "NormalVol" -Confirm:$false

    #8. create the standard share directory on each new volume
    md X:\shares
    md N:\shares

    #9. Get a template ACL for the folder we just created and add in the access rights
    #   need to use the share for VDI VMs (Note there are too many permissions here!)
    #   and set up an ACL to be applied to each share

    $ACL = Get-Acl "N:\shares"
    $NewRights = [System.Security.AccessControl.FileSystemRights]::FullControl
    $NewAccess = [System.Security.AccessControl.AccessControlType]::Allow
    $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
    $PropagationFlags = [System.Security.AccessControl.PropagationFlags]::None

    $AccountList = ("Contoso\Administrator","CREATOR OWNER", "SYSTEM","NETWORK SERVICE", "Contoso\Hyper-V-Servers")

    Foreach($Account in $AccountList)
        {
        $NewGroup = New-Object System.Security.Principal.NTAccount($Account)
        $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule ($NewGroup, $NewRights, $InheritanceFlags, $PropagationFlags, $NewAccess)
        $ACL.SetAccessRule($ACE)
        Write-Host "ACE is " $ACE
        }
    #10. Now setup the shares needed for VDI pool
    #    one for the VMs and one for the user disks
    #    on each of the volumes

    $SMBshares = ("DedupVMs","DedupProfiles","NormalVMs","NormalProfiles")
    foreach($Share in $SMBshares)
        {
        if($Share -like "Dedup*")
            {
            $SharePath =  "X:\shares\" + $Share
            md $SharePath
            New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator
            Set-Acl -Path $SharePath -AclObject $ACL
            }
        if($Share -notlike "Dedup*")
            {
            $SharePath =  "N:\shares\" + $Share
            md $SharePath
           
            New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator
            Set-Acl -Path $SharePath -AclObject $ACL
            }
        }

    Notes:

    • Get-StorageSubSystem returns "FileServer1 Storage Spaces" on my demo rig
    • Using PowerShell to create storage spaces gives you access to a lot of switches you don’t see in the interface like setting the cache size
    • when you create a storage space, it’s offline so you have to bring it online to use it.
    • Apart from disabling the cache on an SMB share I couldn't see what other settings configure the share as an application share – my test share in the screenshot above was created by using Server Manager to create an application share, a special kind of share in Windows Server 2012 for hosting databases and Hyper-V virtual machines

    Having run this I can now see my shares in server manager

    shares

    This script is designed to be run on Fileserver1, whereas the script to create the VM itself can be run from anywhere so what I want to do is to run this script remotely on that VM and the way to do that is:

    invoke-command -computername FileServer1 -filepath "e:\powershell\FileServer1 Storage Spaces.ps1"

    ..where FileServer1 Storage Spaces.ps1 is my script.  BTW I can apply a script like this to multiple servers at the same time by substituting FileServer1 with a comma separated list of servers.
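
    For example (server names assumed):

    # Run the same script on several servers in one go
    invoke-command -computername FileServer1,FileServer2,FileServer3 -filepath "e:\powershell\FileServer1 Storage Spaces.ps1"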

    Next time I want to go into what exactly is going on in the script to create an SMB share and assign access rights to it (steps 9 and 10), as I found this quite hard to research, and so perhaps I can help make this easier for you and provide an aide memoire for me!

    As I have mentioned before there isn't an evaluation edition available just yet, so if you want to try this now you should find it will work on the Windows Server 2012 R2 preview, although you will still have to inject the license key in your scripts at some point to fully automate VM creation.


  • Lab Ops part 3 – Storage in Windows Server 2012R2

    In this post I want to show what I plan to do with storage in Windows Server 2012R2 at the various events I get asked to present at.  If you have been to any of our IT Camps you will have seen Simon and I show off deduplication and storage spaces in Windows Server and now I want to add tiered storage into the mix - and I have a cunning plan.  This is to declare some of my storage as hard disk and other parts as SSD.

    Before I get to that I need to create a File Server VM. To do that I have adapted the setup script from Marcus that I have posted previously to create this VM.  All I needed to do after the VM was created was to add in the file server roles and role features I needed.  The easiest way to do that is to use the UI to add in the necessary features and then save them off to an xml file..

     add fileserver features

    As the screentip above shows, I can then consume this xml file to add the features to my file server VM every time I create it

    Install-WindowsFeature -Vhd $VHDPath -ConfigurationFilePath "E:\scripts\file server 1 add features.xml" -ComputerName "Orange"
    where the -ComputerName setting is my physical host, as I am mounting the VHD offline to add the features before it starts

    Now I can use this vm to show tiered storage and there are two parts to this:

    • Create some virtual disks and set them up to emulate the performance of the two different types of disk.  In Server 2012R2 there are advanced properties of virtual hard disks for Quality of Service (QoS), which are exposed in Hyper-V Manager, Virtual Machine Manager and of course PowerShell..

    #note the MaximumIOPs settings

    for ($i = 1; $i -le 3; $i++)
       {
        $VHDPath = $labfilespath + "\" + $VMName + "\" + "SCSI" + $i + ".vhdx"
        New-VHD -Dynamic -Path $VHDPath -SizeBytes 1tb
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath -ControllerType SCSI -ControllerLocation $i -MaximumIOPS 100
       }
    for ($i = 4; $i -le 5; $i++)
       {
        $VHDPath = $labfilespath + "\" + $VMName + "\" + "SCSI" + $i + ".vhdx"
        New-VHD -Dynamic -Path $VHDPath -SizeBytes 50gb
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath -ControllerType SCSI -ControllerLocation $i -MaximumIOPS 10000
       }
          
    I need to identify which disk is which in the VM for my demo – remember I want to use the UI to show this being set up, as PowerShell scripts aren't great for explaining stuff.  To do this I tried to tag my disks as HDD or SSD now that they are inside the VM, using

    Set-PhysicalDisk -MediaType

    where the media type can be SSD or HDD. My plan was to base this on the size of the disks (my notional hard disks are 1Tb above, as opposed to just 50Gb for the SSDs). However you can't use this command until you have created a storage pool, so I will keep this code for later and run it interactively.
       

    To stress my tiered storage and to show off deduplication in Windows Server 2012R2 I plan to use this storage as the repository for my VDI VMs. In order to do that I have to do the following things beforehand:

    1. Use the storage to create the pool. From Server Manager

    storagepool

    by right clicking on the primordial row in Storage Spaces, giving my pool a name and selecting all those blank SCSI disks I created..

    image

    and clicking create (Note I have left all the disks set to automatic for simplicity)

    2. Tag the disks as SSD/HDD in order to tier the storage

    I just use the PowerShell above to do this based on size of the disks..

    #note: to get the size of the disks use (Get-PhysicalDisk).Size

    Get-PhysicalDisk | where size -eq 1098706321408 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType SSD

    Checking back in server manager I can see the media type has been set and you can now see my VDI Pool..

    storagepool

    3. Create a virtual disk over the storage pool. 

    Storage spaces are actually virtual hard disks and so from the screen above I select New Virtual Hard Disk from the tasks in Virtual Disks pane and select the storage pool I have just created.  I then get the option to use the new tiered storage feature in Windows Server 2012R2..

    image

    I'll select Simple in the next screen to stripe my data across the disks in the pool. In the next screen the option to thin provision the storage is greyed out because I have selected tiered storage, which isn't a problem for my lab as this pool is built on disks which are in fact virtual disks themselves, but be aware of this in production. Just to be clear, you can either create a storage space (VHD) that is thin provisioned beyond what you have available now OR you can commit to a fixed size and use tiered storage.
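
    Incidentally the same either/or choice can be made in PowerShell; a rough sketch (the pool and tier names here are just placeholders, and the full script is in the follow up PowerShell post):

    # Thin provisioned space - no tiers, can be bigger than the physical capacity available today
    New-VirtualDisk -StoragePoolFriendlyName "VDIPool" -FriendlyName "ThinSpace" -ProvisioningType Thin -Size 2TB -ResiliencySettingName Simple
    # Fixed, tiered space - the size of each tier is committed up front
    New-VirtualDisk -StoragePoolFriendlyName "VDIPool" -FriendlyName "TieredSpace" -StorageTiers (Get-StorageTier "SSDTier"),(Get-StorageTier "HDDTier") -StorageTierSizes 30GB,1TB -ResiliencySettingName Simple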

    Now I get another new (for R2) screen to set the size of my VHD based on how much of the pool I want to use.  I am going to use 30gb of SSD and 1Tb of HDD.

    newvhd

    I am then asked to format the VHD and mount it.  I mount it as drive V and give it the name VDIStorage.  In the final step I can enable deduplication..

    image

    Note I have the option to use deduplication for VDI in Windows Server 2012R2, so I will go for that. The wizard will then guide me through formatting and mounting this VHD so it can now be used as a storage space.

    Now I am ready to configure VDI. But what if I wanted to do all of that in PowerShell? Well, I am going to leave that for next time as this is already a pretty long post.

    In the meantime if you have an MSDN subscription then you can get the RTM version of Windows Server 2012R2 and follow along. If not, I am sure there'll be an evaluation copy coming in a few weeks to coincide with the general availability of R2, and in the meantime check out the Windows Server 2012R2 Jump Start module on MVA.

    Keith Meyer, one of my counterparts, has more on all of this in this excellent post.

  • Lab Ops part 9 - an Introduction to Failover Clustering

    For many of us Failover Clustering in Windows still seems to be a black art, so I thought it might be good to show how to do some of the basics in a lab and show off a few of the new clustering features of Windows server 2012 R2 in the process.

    Firstly, what is a cluster? It's simply a way of getting one or more servers in a group to provide some sort of service that will continue to work in some form if one of the servers in that group fails. For this to work the cluster gets its own entry in Active Directory and in DNS so the service it's running can be discovered and managed.  Just to be clear, all the individual servers in a cluster (known as cluster nodes) must be in the same domain.

    So what sort of services can you run on a cluster? In Windows Server 2012R2 that list looks like this..

    cluster roles

    Note: Other roles from other products like SQL Server can also be added in. Notice too that virtual machines (VMs) are listed here and running them in a cluster is how you make them highly available. 

    All but one of these roles can be run on guest clusters, that is a cluster built of VMs rather than physical servers, and it is also possible to have physical hosts and VMs combined in the same cluster – it's all Windows Server after all. The exception is when you want to make VMs highly available: the cluster must only contain physical hosts and this is known as a host or physical cluster.

    Making a simple cluster is easy; it's just a question of installing the Failover Clustering feature on each node and then joining them to a cluster. When you add the feature you can also add Failover Cluster Manager, but if you have been following this series you'll have access to this on your desktop, as the point of Lab Ops is to remotely manage where possible and also make use of PowerShell.  So in my example I am going to create 2 x VMs (FileServer2 & 3) from my HA FileServerCluster Setup local.ps1 script, which adds in the clustering feature (from this xml file).
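
    The feature install itself is only a line per node; run remotely from the management machine it would look something like this (node names from my lab, and a sketch rather than my actual script):

    # Add the Failover Clustering feature (and its management tools) to both nodes
    Invoke-Command -ComputerName FileServer2,FileServer3 -ScriptBlock {Install-WindowsFeature Failover-Clustering -IncludeManagementTools}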

    Note: my SSD drive where I store my VMs is E: so you will need to edit my script if you want to follow along

    Having run that I could then simply run a line of PowerShell on one of those servers to create a cluster..

    New-Cluster -Name HACluster -Node FileServer2,FileServer3 -NoStorage -StaticAddress 192.168.10.30

    Note the –NoStorage switch. This cluster has just got my two nodes in and that’s it.  For some clustering roles such as SQL Server 2012 AlwaysOn this is OK but most roles that you put into a cluster will need access to shared storage, and historically this has meant connecting nodes to a SAN via ISCSI or Fibre Channel.  This applies to host and guest clusters, but for the latter guest VMs will need direct access to this shared storage. That can cause problems in a multi-tenancy setup like a hoster as the physical storage has to be opened up to the VMs for this to work, and even if you don’t work for one of those then this will at least cut across the separation of duties in many modern IT departments.  

    There's another problem with this cluster: it has an even number of objects in it. If one of the nodes fails and restarts it will think it "owns" the cluster, but the other node already has that ownership and problems will occur.  So for most situations we want an odd number of objects in our cluster, and the "side" of the cluster that has the majority after a node failure will own the cluster.  This democratic approach to clustering means that there is no single node in charge of the cluster, and it enables Windows Server to support bigger clusters than VMware, who use the older style of clustering that Microsoft abandoned after Windows Server 2003.

    So having built a simple cluster I need to add in more objects and work with some shared storage.   If you look at the script I used to create FileServer2 & 3 there are a couple of things to note:

    I have created 7 shared virtual hard disks and these are attached to both FileServer2 and FileServer3 and if I run this PowerShell

    Get-VMHardDiskDrive -VMName "FileServer2","FileServer3" | Out-GridView

    you can see that..

    disks

    Also If I look at the settings of FileServer2 in Hyper-V manager there’s a switch to confirm these disks are shared ..

    shared vhdx settings

    This is new for Windows Server 2012R2 and for production use the shared disks (VHDX only is supported) must be on some sort of real shared storage. However there is also a spoofing mechanism in R2 to allow this feature to be evaluated and demoed and this is in line 272 of my script ..

    start-process "C:\windows\system32\FLTMC.EXE" -argumentlist "attach svhdxflt e:"
    Note you’ll need to rerun this after every reboot of your lab setup.

    Now that my two file server VMs are joined in a cluster (HACluster) I can use one of these shared disks as a third object in the cluster. To do that I need to format it, add it in as a cluster resource and then declare its use as a quorum disk, and I'll be running this script from one of the cluster nodes..

    #Setup (initialize, format etc.) the 1Gb quorum disk; $ClusterName is "HACluster" here
    $QuorumDisk = get-disk | where size -EQ 1Gb
    set-disk $QuorumDisk.Number -IsOffline 0
    Initialize-Disk $QuorumDisk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $QuorumDisk.Number -DriveLetter "Q" -UseMaximumSize
    Format-Volume -DriveLetter "Q" -FileSystem NTFS -NewFileSystemLabel "Quorum" -Confirm:$false
    #Add the quorum disk to the cluster, and then set the quorum mode of the cluster
    start-sleep -Seconds 20
    Get-ClusterAvailableDisk -Cluster $ClusterName | where size -eq 1073741824 | Add-ClusterDisk -Cluster $ClusterName
    $Quorum = Get-ClusterResource | where ResourceType -eq "Physical Disk"
    Set-ClusterQuorum -Cluster $ClusterName -NodeAndDiskMajority $Quorum.Name

    Using a disk like this as a third object is only one way to create that odd number of objects (called a quorum). You can just have three physical nodes or use a file share. In my case if a node fails, the node that has ownership of this disk will form the cluster, and the failed node will then re-join the cluster rather than try to form an identical cluster of its own when it recovers.

    That’s where I want to stop as there are several ways I can use a basic cluster like this and I’ll be covering those in individual posts. 

    If you want to build the cluster I have described so far, all you need is an evaluation edition of Windows Server 2012R2 and to go through Part 1 & Part 2 of this series.

  • Lab Ops 5 - Access Rights in PowerShell

    In my last post I had a section in my script to create access rights to a share I had just created, and as promised I wanted to dissect that a bit more.

    To recap, what I wanted to do was to create a share with sufficient privileges to use that share for storing VMs as part of my VDI demo. In order to understand what permissions might be needed I used the wizard in Server Manager to create a share (\\FileServer1\VMTest), checked that I could use the Remote Desktop Services part of Server Manager to successfully create a pooled VDI collection on it, and then examined its permissions.

    The share has everyone full control access..

    Share settings

    and these are the File Access Rights on that share..

    Share secirity settings

    The SMB settings are set by the -FullAccess switch, for example in my script

    #Orange$ is the name of the physical host (my big orange Dell Precision laptop)

    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator  

    The access rights are stored in Access Control Lists (ACLs) comprising individual Access Control Entries (ACEs), where an entry might state that contoso\Andrew has full control of a path and its folders and subfolders.  Out of the box in PowerShell there are just two commands involved: Get-Acl to get this list and Set-Acl to modify the settings on a path.  TechNet recommends that you create a share (say X:\Test), set up its permissions and pass this across to your new share

    Get-Acl -Path "X:\Test" | Set-Acl -Path "X:\My New Share"

    which is fine, but in my case I am building a setup from scratch and I don't want to have to use an interface to create the template.  So it occurred to me I could write the ACL out to a file with one row per ACE, and here's my script to get the ACL for that VMTest share in the screenshot above..

    #Write the ACL out to a CSV for later use
    $CSVFile = "\\Orange\E`$\UK Demo Kit\Powershell\FileShareACL.CSV"

    #If the file is already there delete it
    If(Test-Path $CSVFile) {Remove-Item $CSVFile}
    $Header = "Path|IdentityReference|FileSystemRights|AccessControlType|IsInherited|InheritanceFlags|PropogationFlags"
    Add-Content -Value $Header -Path $CSVFile
    $TemplatePath = "X:\shares\VMTest"
    $ACLList = get-acl $TemplatePath | ForEach-Object {$_.Access}
    foreach($ACL in $ACLList)
        {
        $LineItem = $TemplatePath+ "|" + $ACL.IdentityReference + "|" +  $ACL.FileSystemRights + "|" + $ACL.AccessControlType + "|" + $ACL.IsInherited + "|" + $ACL.InheritanceFlags + "|" + $ACL.PropogationFlags
        Add-Content -Value $LineItem -Path $CSVFile
        }

    The Add-Content command allows me to write into a file, but note that although I have specified this as a CSV file it is in fact pipe ("|") delimited, because it's generally madness to use commas as separators as they get used inside the values, as in this case..

    Path|IdentityReference|FileSystemRights|AccessControlType|IsInherited|InheritanceFlags|PropogationFlags
    X:\shares\VMTest|BUILTIN\Administrators|FullControl|Allow|False|None|
    X:\shares\VMTest|BUILTIN\Administrators|FullControl|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|NT AUTHORITY\SYSTEM|FullControl|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|CREATOR OWNER|268435456|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|BUILTIN\Users|ReadAndExecute, Synchronize|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|BUILTIN\Users|AppendData|Allow|True|ContainerInherit|
    X:\shares\VMTest|BUILTIN\Users|CreateFiles|Allow|True|ContainerInherit|

    so inheritance flags and access rights can have multiple values inside one ACE.  While there are several examples of this sort of script out there, I had to do some digging to find out how to use this file, so here's my crude but effective attempt..

    $ACEList = Import-CSV -Path "\\orange\E`$\UK Demo Kit\Powershell\FileShareACL.CSV" -Delimiter "|"
    $TestPath = "C:\Test"
    If (test-path -Path $TestPath) {Remove-item -Path $TestPath}
    MD $TestPath
    $ACL = Get-Acl -Path $TestPath
    ForEach($LineItem in $ACEList)
        {
      
        If($LineItem.FileSystemRights -eq 268435456) {$LineItem.FileSystemRights = "FullControl"}
        If($LineItem.FileSystemRights -eq "ReadAndExecute, Synchronize"){$LineItem.FileSystemRights = "ReadAndExecute"}
       
        $NewRights = [System.Security.AccessControl.FileSystemRights]::($LineItem.FileSystemRights)
        $NewAcess = [System.Security.AccessControl.AccessControlType]::($LineItem.AccessControlType)
        $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]($LineItem.InheritanceFlags)
        $PropogationFlags = [System.Security.AccessControl.PropagationFlags]::None
        $NewGroup   = New-Object System.Security.Principal.NTAccount($LineItem.IdentityReference)
        
        $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule ($NewGroup, $NewRights,$InheritanceFlags,$PropogationFlags,$NewAcess)
         $ACL.SetAccessRule($ACE)
        }
        Set-Acl -Path $TestPath -AclObject $ACL

    While this may not be the solution for you, it does show up some interesting points in PowerShell and in setting security:

    • There's no command to make a new ACL; you have to start with one from somewhere
    • Once I have declared $ACEList as my csv file it is that file. For example I can get PowerShell to display it in all its glory using $ACEList | Out-GridView to see it as a table
    • I have declared the separator I used to get round the comma problem with the -Delimiter “|” setting
    • I can loop through each line of my file with a simple foreach loop, so $LineItem represents each line and I can reference the columns in that line with the standard . notation used in .NET e.g. $LineItem.FileSystemRights.
    • The stuff in square brackets might look hard to remember but PowerShell will auto complete this so once I typed in [System. I could select Security and so on to build up the correct syntax.
    • One line in my csv file is a problem because the File System Rights are set to a number, in my case 268435456.  This is an example of an access control mask, a set of bits that will cause an error when I try to create $ACE, so I have an exception in my code to trap that and in my case just assign FullControl.
    • There is another problem line in my csv which also threw an error.  I looked up the Synchronize option and realised it gets set by default, so I put in another exception in my script to trap that and just apply ReadAndExecute rights.
    • I had to refer to the .Net documentation here to work out what order to put the parameters in to create a new ACE (the line beginning $ACE = New-Object…)
    • Having built up the ACL in the loop it's really easy to apply it to your object, and of course this could be done for multiple objects in another loop

    So what I have done here is a bit of a sledgehammer to crack a nut, but you might be able to use some of the principles to solve your real world problem; for example you might just knock up a CSV file with the permissions you want to set on an object.

    Finally there’s this properly tested File System Security PowerShell Module on the TechNet gallery although it’s not yet been tested on Windows Server 2012 or R2. 

  • Lab Ops - Stop Press Windows Server 2012R2 Evaluation edition released

    I have just got back to my blog after a few days at various events and I see the Evaluation edition of Windows Server 2012R2 has been released.  I need this for my lab ops because I am building and blowing away VMs for er… evaluations and I don't want to have to muck about with license keys. For example I have a script to create the FileServer1 VM, but if I use the media from MSDN for this and don't add a license key to my answer file, the machine will pause at the license key screen until I intervene.  Now I have the Evaluation Edition I can build VMs that will start automatically and, when they are running, continue to configure them. For example for my FileServer1 VM created in earlier posts in this series I can add a line to the end of that script which will run on the VM itself once it is properly alive after its first boot..

    invoke-command -ComputerName $VMName -FilePath 'E:\UK Demo Kit\Powershell\FileServer1 Storage Spaces.ps1'
    ..and this will go away and set up FileServer1 with my storage spaces.

    Note both the script to create FileServer1 (FileServer1 Setup.ps1) and the xml it uses to add features into that VM (File server 1 add features.xml) and the File Server1 Storage Spaces.ps1 script referenced above are on my SkyDrive for you to enjoy.

    One good use case for executing PowerShell scripts remotely like this is when working on a cluster. Although I have put the Remote Server Administration Tools (RSAT) on my host to have access to the Failover Clustering cmdlets, I get a warning about running these against a remote cluster..

    WARNING: If you are running Windows PowerShell remotely, note that some failover clustering cmdlets do not work remotely. When possible, run the cmdlet locally and specify a remote computer as the target. To run the cmdlet remotely, try using the Credential Security Service Provider (CredSSP). All additional errors or warnings from this cmdlet might be caused by running it remotely.
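
    What that warning is nudging you towards is running the cmdlet in a remote session, optionally with CredSSP so the credentials can hop to the target node. Something along these lines (a sketch only; CredSSP also has to be enabled on the server side and has security implications, so use it with care):

    # Allow this client to delegate credentials to the cluster node, then run the clustering cmdlet remotely
    Enable-WSManCredSSP -Role Client -DelegateComputer "FileServer2" -Force
    Invoke-Command -ComputerName FileServer2 -Authentication Credssp -Credential (Get-Credential) -ScriptBlock {Get-ClusterResource}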

    While on the subject of new downloads, the RSAT for managing Windows Server 2012R2 from Windows 8.1 is now available, so you can look after your servers from the comfort of Windows 8.1 with your usual tools like Server Manager, Active Directory Administrative Console, Hyper-V Manager and so on. On my admin VM I have also put the Virtual Machine Manager console and SQL Server Management Studio and a few other admin tools..

     

    image

    Before you ask, the RSAT tools you put on each client version of Windows only manage the equivalent version of Windows Server and earlier.  For example you can't put the RSAT tools for managing Windows Server 2012R2 onto Windows 8 or Windows 7.

    So using my lab ops guides or the more manual guides on TechNet, you can now get stuck into playing with Windows Server 2012R2, as a way of getting up to speed on the latest Windows Server along with the R2 courses on the Microsoft Virtual Academy.

  • System Center: Use the right tool for the right job

    This post isn’t really about Lab Ops as it’s more theory than Ops, but before I dive in to the world of what you can do with System Center I wanted to stress one important concept:

    Use the right tool for the right job. 

    That old saying that when you have a hammer everything looks like a nail can harm your perception of what SC is all about.  Perhaps the biggest issue here is simply having a good handle on the suite, which is not easy as traditionally many of us will have been an expert in just one or two components (or whole products, as they were). So here's how I think about this..

    • Virtual Machine Manager controls the fabric of the modern virtualized data centre and allows us to provision services on top of that
    • App Controller is there to allow us to control services in Azure as well as what we have in our datacentre
    • Configuration Manager allows us to manage our users, the devices they use and the applications they have. It can also manage our servers but actually in the world of the cloud this is better done in VMM
    • Then it’s important to understand what’s going on with our services and that’s Operations Manager. 
    • Rather than sit there and watch Operations Manager all day, we need to have an automated response when certain things happen and that’s what Orchestrator is for.
    • In an ITIL service delivery world we want change to happen in a controlled and audited manner, whether that's change needed to fix things or change because somebody has asked for something.  That's what Service Manager is for: if something is picked up by Operations Manager that we need to respond to, it would be raised as an incident in Service Manager, which in turn could automatically remediate the problem by calling a process in Orchestrator, which might do something in Virtual Machine Manager for example.

    The reason that SC is not fully configured and integrated out of the box is simply down to history and honesty. Historically SC was a bunch of different products which are becoming more and more integrated.  Honesty comes from the realisation that in the real world many organisations have made significant investments in infrastructure and its management which are not Microsoft based.  For example if your helpdesk isn't based on Service Manager then the other parts of SC can still, to a large extent, integrate with what you do have, and if you aren't using Windows for virtualization or your guest OS then SC can still do a good job of managing the VMs and letting you know whether the services on those servers are OK or not, as the case may be.

    Another important principle in SC is that it's very important not to go behind the back of SC and use tools like Server Manager and raw PowerShell to change your infrastructure (or fabric as it's referred to in SC).  This is important for two reasons: you are wasting your investment in SC, and you lose a key aspect of its capabilities such as its audit function.  Notice I used the term "raw PowerShell"; what I mean here is that SC itself has a lot of PowerShell cmdlets of its own, but these make calls to SC itself, so if I create a new VM with a Virtual Machine Manager (VMM) PowerShell cmdlet then the event will be logged.
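
    To illustrate what I mean (the VM name is just an example, and this is only a sketch): starting a VM through the VMM cmdlets shows up as a job in VMM, whereas the equivalent raw Hyper-V cmdlet goes behind its back:

    # Through VMM - the action is logged as a job in the VMM console
    Get-SCVirtualMachine -Name "FileServer1" | Start-SCVirtualMachine
    # Raw Hyper-V PowerShell - it works, but VMM knows nothing about it
    Start-VM -Name "FileServer1"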

    There's another key concept in SC and that is "run as" accounts: whether I am delegating control to a user by giving them limited access to an SC console or I am using SC's PowerShell cmdlets, I can reference a run as account to manage or change something without exposing the actual credentials needed to do that to the user or in my script.

    Frankly my PowerShell is not production ready; some of that is deliberate in that I don't clutter my code with too much error trapping, and some of it is that I am just not that much of an expert in things like remote sessions and logging.  The point is that if you are using SC for any serious automation you should use Orchestrator, for all sorts of reasons:

    • Orchestrator is easy, I haven’t and won’t post an update on getting started with Orchestrator because it hasn’t really changed since I did this
    • It’s very lightweight  - it doesn’t need a lot of compute resources to run
    • You can configure it for HA so that your jobs will run when they are supposed to which is hard with raw PowerShell.
    • You can include PowerShell scripts in the processes (run books) that you design for things that Orchestrator can’t do
    • There are loads of integration packs to connect to other resources, and these are set up with configurations which hold the credentials for those other services, so they won't be visible in the run book itself.
    • you have already bought it when you bought SC!

    Another thing about SC generally is that there is some overlap; I discussed this a bit in my last post with respect to reporting and it crops up in other areas too.  In VMM I can configure a bare metal deployment of a new physical host to run my VMs on, but I can also do server provisioning in Configuration Manager, so which should I use? That depends on the culture of your IT department and whether you have both in production.  On the one hand a datacentre admin should be able to provision new fabric as the demand for virtualization grows; on the other hand all servers, be they physical or virtual, should be in some sort of desired state and Configuration Manager does a great job of that.  It all comes back to responsibility and control: if you are in control you are responsible, so you need to have the right tools for your role.

    So use the right tool for the right job, and after all this theory we'll look at what the job actually is by using SC in future posts.

  • Lab Ops– 16 System Center setup

    This post isn't going to tell you how to install System Center screen by screen, as there are some 434 screens to get through for a complete install and configure.  That's a lot of clicking with a lot of opportunity for mistakes, and while I realise that not everyone needs to tear down and reset everything, surely there must be a better way to try it out?

    There is, but it involves some pretty intense PowerShell scripts and accompanying xml configuration files collectively known as the PowerShell Deployment Toolkit (PDT), which is on the TechNet Gallery. It works from scratch: it will pull down all the installs and prereqs you need, install the components across the servers you define complete with SQL Server, and do all of the integration work as well.  There is a full set of instructions here on how to edit the xml configuration files (the PowerShell doesn't need to change at all), so I am not going to repeat those here.

    What I do want to do is to discuss the design considerations for deploying System Center 2012R2 (SC2012R2)  in a lab for evaluation, before I go on to showing some cool stuff in following posts.

    SC2012R2 Rules of the game:

    Most parts of the SC2012R2 suite are pretty heavyweight applications and will benefit from being on separate servers, and all of SC2012R2 is designed to be run virtually, just as today you might be running vCenter in a VM. Note that Virtual Machine Manager (VMM) is quite happy on a VM managing the host the VM is running on.

    Operations Manager, Service Manager and the Service Manager Data Warehouse cannot be on the same VM or server, and even the Operations Manager agent won't install onto a server running any part of Service Manager.  I would recommend keeping VMM away from these components as well, from a performance perspective.

    The lighter weight parts of the suite are Orchestrator and App Controller, both of which could, for example, be co-located with VMM, which is what I do.

    All of the SC2012R2 components make use of SQL Server for one or more databases.  In evaluation land we can get SQL Server for 180 days, just as SC2012R2 is good for 180 days, but the question is where to put the databases: alongside the relevant component or centrally.  My American cousins used to put all the databases on the DC in a lab, as both are needed all the time; however we generally run our labs on self contained VMs, each with its own local database.

    Speaking of domains, I tend to have a domain for my hosts and the System Center infrastructure, and I do on occasion create tenant domains in VMM to show hosting and multi-tenancy.  The stuff that's managed by System Center doesn't have to be in the same domain and may not even be capable of joining a domain, such as Linux VMs, switches and SANs, but we will need various run as accounts to access that infrastructure, with community strings and ssh credentials.

Best practice for production.  The real change for deploying System Center in production is all about high availability.  Given that System Center is based on JBOD (just a bunch of databases), what needs protecting are the databases and the certificates associated with them, so that if a VM running VMM is lost we can simply create a new VM, install VMM and point it at our VMM database.  The System Center databases are best protected with Availability Groups, and while I realise that feature is only available in SQL Server Enterprise edition, it doesn’t rely on shared storage.  Availability Groups replicate the data from server to server in a cluster, and although clustering is used, the databases can be on direct attached storage on each node.  There is some specific guidance on how to use this with System Center on TechNet, which also applies to Service Manager.

That leads me onto my next point about production: there are a lot of databases in System Center, some of which are datamarts/data warehouses, but only one of them could really be called a data warehouse and that’s the one in Service Manager.  Why? Well, if you are using Service Manager you don’t need the others, as it should be the central reporting store, aka the Configuration Management Database (CMDB).  So if you have another help desk tool and that is properly integrated into System Center, then that’s where you should go for your reporting. If none of the above then you’ll have to dip in and out of the components and tools you have to join the dots (I feel another post coming on about this).

    and finally..

I have the capacity to run an extra VM which runs Windows 8.1 plus the Remote Server Administration Tools (RSAT), SQL Server Management Studio and all of the SC2012R2 management consoles. This means I don’t have to jump from VM to VM to show how things work.  Plus, in the process of installing all of those tools in one place, I have access to all of the PowerShell cmdlets associated with server management, SQL Server and all of System Center.  So now I can write scripts from one place to get stuff done right across my datacentre, or carry on filling in dialog boxes.
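
As a flavour of that, here’s a minimal sketch of touching three different layers from the one console; the server and instance names ("Orange", "VMM01") are placeholders, and it assumes the VMM console (which brings the virtualmachinemanager module) and the SQL Server PowerShell module are installed on this admin VM.

# Server Manager cmdlets (RSAT) against a remote host
Invoke-Command -ComputerName "Orange" -ScriptBlock { Get-WindowsFeature Hyper-V }

# VMM cmdlets from the same console ("VMM01" is a placeholder for the VMM server)
Import-Module virtualmachinemanager
Get-SCVMMServer -ComputerName "VMM01" | Out-Null
Get-SCVMHost | Select-Object Name

# SQL Server cmdlets against one of the System Center databases (instance name is a placeholder)
Import-Module sqlps -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "VMM01" -Query "SELECT name FROM sys.databases"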

  • Lab Ops part 17 - Getting Started with Virtual Machine Manager

    In my last post I talked a lot about the way all of System Center  works, now I want to look at  one part of it Virtual Machine Manager (VMM) as VMs in a modern data centre are at the core of the services we provide to our users. In the good old days we managed real servers that had real disks connected to real networks and of course they still exist, but consumers of data centre resources whether that other parts of the IT department or business units will only see virtual machines with virtual disks connected to virtual networks.  So administration in VMM is all about translating all the real stuff (or fabric as it’s referred to) into virtual resources. So before we do anything in VMM we need to configure the fabric, specifically our physical hosts, networking and storage.

There are good TechNet labs and MVA courses to learn this stuff, but I think it’s still good to do some of this on your own server, so you can come back to it again whenever you want to, especially if you are serious about getting certified.  So what I am going to do in the next few posts is explain how to use the minimum of kit to try some of this at home.  I generally use laptops which are sort of portable, typically with 500GB plus of SSD and at least 16GB of RAM; half of those resources should be enough.

I am going to assume you have got an initial setup in place (a scripted version of the host setup steps follows this list):

• Copy the above VHDX to the root of a suitable disk on your host (say E:\) and mount it (so it now shows up as a drive, for example X:)
• from an elevated prompt type BCDBoot X:\windows
• type BCDEdit /set "{default}" hypervisorlaunchtype auto
• type BCDEdit /set "{default}" description "Hyper-V Rocks!"
• reboot the host and select the top boot option, which should say Hyper-V Rocks!
• from PowerShell type Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart
• the machine should restart and you are good to go.
    • VMM running already either by downloading a complete VMM evaluation VHD or installing from the VMM2012R2 iso yourself.
    • A domain controller as per my earlier post on this, with the host and the VMM VM belonging to this domain.
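
As promised, here is the host setup as a single sketch; the VHDX path and the drive letter it mounts to (X: here) are examples only, it needs an elevated PowerShell prompt, and the last step is run after the reboot into the new boot entry.

# Path to the copied VHDX is an example only
$vhdx = "E:\WS2012R2.vhdx"

# Mount the VHDX so it shows up as a drive (X: in this example)
Mount-DiskImage -ImagePath $vhdx

# Add the mounted image to the boot menu and make sure the hypervisor will start
bcdboot X:\windows
bcdedit /set "{default}" hypervisorlaunchtype auto
bcdedit /set "{default}" description "Hyper-V Rocks!"

# After rebooting into "Hyper-V Rocks!", add the Hyper-V role and its management tools
Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart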

The first thing you want to do in VMM is to configure Run As Accounts.  One of the reasons my PowerShell scripts in this series are not production ready is that they have my domain admin password littered all over them, which is not good.  VMM allows us to create accounts used to authenticate against anything we might need to, which could be a domain account, a local account on a VM (be that Windows or Linux), or access to a switch or a SAN.  So let’s start by adding in domain admin. We can do this from Settings | Security | Run As Accounts or with the VMM PowerShell cmdlets, and now all I have to do is open PowerShell from inside VMM or use PowerShell ISE; either way there’s no more of that mucking about importing modules to do this..

    #New Run As Account in VMM
    $credential = Get-Credential
    $runAsAccount = New-SCRunAsAccount -Credential $credential -Name "Domain Admin" -Description "" -JobGroup "cb839483-39eb-45e0-9bc9-7f482488b2d1"

Note this will pop up the credential screen for me to complete (contoso\administrator is what I put in). The JobGroup at the end puts this activity into a group that ends up in the VMM job history, so even if we use PowerShell in VMM our work is recorded, which is a good thing.  We can get at that job history with Get-SCJob | Out-GridView

We’ll probably want to do the same for a local account, so just Administrator and a password, so that any VMs we create will have the same local admin account; see the sketch below. Again this will pop up the credential screen for me to complete.
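
Here’s a minimal sketch of that local account version; the account name and description are just examples, and I’ve left out the JobGroup GUID as VMM generates that itself when it emits the script.

# Run As account for the local Administrator on the VMs we create
$credential = Get-Credential      # enter Administrator (or .\Administrator) and the password
New-SCRunAsAccount -Credential $credential -Name "VM Local Admin" -Description "Local admin on lab VMs"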

Now we can consume that domain login account to add and manage our host.  In VMM we would do this from Fabric | Servers | All Hosts, and before we add in a server we can create host groups to manage lots of hosts (I have one already called Contoso). To add a host, right click on All Hosts or a host group and select Add Hyper-V Hosts or Clusters, and the equivalent PowerShell is ..

    <#Note in the raw PowerShell from VMM there  is a load of IDs included but this will work as long as we don’t use duplicate names in accounts etc.  #>

    $runAsAccount = Get-SCRunAsAccount -Name "Domain Admin"

    $hostGroup = Get-SCVMHostGroup -Name "Contoso"
    Add-SCVMHost -ComputerName "clockwork.contoso.com" -RunAsynchronously -VMHostGroup $hostGroup -Credential $runAsAccount

In the background the VMM agent has been installed on our host and it has been associated with this instance of VMM.  You should now see the host in VMM, along with all the VMs that are on it, so not bad for a line of PowerShell! We can also see the properties of our host by right clicking on it; of special interest are the virtual switches on the host, and if you have used my script without modifying it you’ll see a switch called RDS-Switch on the host.  We can also see the local storage attached to our host here.
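
If you’d rather check this from PowerShell than the console, a couple of read-only cmdlets will confirm what VMM discovered; the host name is the one used above.

# Confirm VMM can see the host and list the virtual switches it discovered on it
$vmHost = Get-SCVMHost -ComputerName "clockwork.contoso.com"
$vmHost
Get-SCVirtualNetwork -VMHost $vmHost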

So now we have a basic VMM environment we can play with: a host, a DC and VMM itself, so what do we need to do next?  If this was vCenter we would probably want to set up our virtual switches and port groups, so let’s look at the slightly scary but powerful world of virtual networks next.

• Lab Ops 18 – Getting started with Software Defined Networking

In my last post I finished up where I had my host under the management of Virtual Machine Manager (VMM), and that was about it. As with Hyper-V, I can’t really use VMM until I have my fabric configured, and after adding in hosts the first thing we need to do is look at networking.  To recap, my current setup now looks like this

    image

    Where RDS-Switch is an internal virtual switch, and my RDS-DC is my DC & DHCP server with one scope of 192.168.10.200-254.  VMM has a dynamic ip address and is also hosting SQL Server for its own database.

If I go to VMM and go to VMs & Services | All Hosts | Contoso | Orange (which is my Dell Orange laptop) I can right click and select View Networking. If I look at the Host Networks all I can see are the physical NICs, if I look at VM Networks all I see is my VMs but no networks, and the Network Topology screen is entirely blank, so what’s going on?  Basically things are very different in VMM than they are in Hyper-V, and frankly we don’t want to use Hyper-V for managing our networks any more than a VMware expert would configure networks on individual ESXi hosts.  In our case we use VMM to manage virtual switches centrally, where in VMware distributed switches are controlled in vCenter.  So my plan is to use VMM to create a network topology that reflects what my VMs above are for: to manage my datacentre.  Later on I’ll add in more networking which will enable me to isolate my services and applications from this, in the same way that cloud providers like Azure hide the underlying infrastructure from customers.

If we look at the Fabric in VMM and expand Networking we have eight different types of objects we can create, and there is a ninth, VM Networks, that shows up under VMs and Services, so where to begin?

Your starter for ten (or nine in this case) is the TechNet guide Configuring Networking in VMM, and once you dig into that you realise that VMM wants to control not just the switching but IP management as well.  The core of all of this is the Logical Network and Virtual Networks, which are just containers for various properties including sites and IP pools.   I am going to start simple and, as I only have one host, just create the first object we need, a Logical Network, that has one connected network.    For now I am going to ignore the sub options to get us started.

    Logical network properties

    note this is a screen grab from the end of the process

    I can’t create a Logical Network without a Network Site which has a specific subnet and optionally VLAN set..

    network site

    The small print
As per usual in this series I am going to share the PowerShell to do this.  VMM is very good at allowing you to see the equivalent script when doing something; however I have modified what VMM spits out to make it easier to read, while ensuring it still works.  This is a good thing as it’s easy to cut and paste from this post and get exactly the same results, and you can see how the variables are passed and related to each other.
Note the raw PowerShell from VMM often runs everything as a job, which is a useful trick it has, and in all cases all our work gets logged in VMM whether using the UI or the VMM cmdlets

Note the segments in this post all use the same variables, so you will need to run them in the order shown

    The equivalent PowerShell is: 

    $logicalNetwork = New-SCLogicalNetwork -Name "RDS-FabricNet" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $true -UseGRE $true -IsPVLAN $false
    $allSubnetVlan = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 0
    $allHostGroups = Get-SCVMHostGroup -Name "All Hosts"
$logicalNetworkDefinition = New-SCLogicalNetworkDefinition -Name "RDS-FabricNet_Site192" -LogicalNetwork $logicalNetwork -VMHostGroup $allHostGroups -SubnetVLan $allSubnetVlan -RunAsynchronously

    Note that in the code above a network site is referred to as a Logical Network Definition.

VMs are connected to virtual machine networks, and we had the option to create one of these, with the same name, when we created the logical network. In this case that would have been fine for what I am doing here, as my two VMs are actually there to manage the hosts, in much the same way as a VMware appliance does.   So I am going to create a virtual network that is directly connected to the logical one..

    VM Network

    $vmNetwork = New-SCVMNetwork -Name "RDS-FabricVNet" -LogicalNetwork $logicalNetwork -IsolationType "NoIsolation"

However this, and the logical network and site, are just containers in which we put our settings as points of management.  We now need to create an uplink port profile from Fabric | Networking | Port Profiles.  This needs to be an Uplink Port Profile, and when we select that option we can describe how the underlying NIC can be teamed directly from here, rather than doing it in Server Manager on each host.  We then simply select our network site (RDS-FabricNet_Site192) and we are done..

    port profile network config

    The one line of PowerShell for this is..
    $nativeProfile = New-SCNativeUplinkPortProfile -Name "RDS-FabricUplink" -Description "" -LogicalNetworkDefinition $logicalNetworkDefinition -EnableNetworkVirtualization $false -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent" -RunAsynchronously

The next piece of the puzzle is to create a Logical Switch.  This is a logical container that emulates a top of rack switch in a real server room.  It can have a number of virtual ports, but unlike VMware these aren’t there to limit numbers; they are there to manage traffic through the use of port classifications.  We’ll need at least one of these, and I am going for Host management for the port classification, as that is what all of this is for..

Logical switch link to uplink port profile

    The PowerShell is:

    $virtualSwitchExtensions = Get-SCVirtualSwitchExtension -Name "Microsoft Windows Filtering Platform"
    $logicalSwitch = New-SCLogicalSwitch -Name "RDS_FabricSwitch" -Description "" -EnableSriov $false -SwitchUplinkMode "NoTeam" -VirtualSwitchExtensions $virtualSwitchExtensions
$uplinkPortProfileSet = New-SCUplinkPortProfileSet -Name "RDS-FabricUplink-Set" -LogicalSwitch $logicalSwitch -RunAsynchronously -NativeUplinkPortProfile $nativeProfile

    We should also create a Virtual port with the  host management port classification:

    Logical switch virtual port

    The PowerShell is..

    $portClassification = Get-SCPortClassification -Name "Host management"
    $nativeProfile = Get-SCVirtualNetworkAdapterNativePortProfile -Name "Host management"
    New-SCVirtualNetworkAdapterPortProfileSet -Name "Host management" -PortClassification $portClassification -LogicalSwitch $logicalSwitch -RunAsynchronously -VirtualNetworkAdapterNativePortProfile $nativeProfile

We can now apply this logical switch to our hosts by going to a host’s properties, navigating to Hardware | Virtual Switches and adding a New Virtual Switch | New Logical Switch.  Immediately our RDS-FabricSwitch is selected, and we can see that our adapter (physical NIC) is connected to the uplink we have created through this switch.

    Host Virtual switch properties

However that is just like using virtual switches in Hyper-V Manager; what we also need to do is add in a Virtual Network Adapter, as in the diagram above.  This picks up the VM Network we already created (RDS-FabricVNet). Notice I can have all kinds of fun with IP addresses here..

    host virtual switch virtual network adapter

    BTW I should have set the Port Profile to the only option available, Host Management, in the above screen shot. If I look at Hardware | Network adapters I can also see my logical network and site..

    host hardware logical network

    The equivalent PowerShell to connect the logical switch virtual network adapter to the host is ..

    #My Host is called Orange

    $vmHost = Get-SCVMHost -ComputerName Orange

#Note you’ll need to at least change the Get-SCVMHostNetworkAdapter -Name to reflect the NIC in your host.

    $networkAdapter = Get-SCVMHostNetworkAdapter -Name "Broadcom NetXtreme 57xx Gigabit Controller" -VMHost $vmHost
    Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $networkAdapter -UplinkPortProfileSet $uplinkPortProfileSet
New-SCVirtualNetwork -VMHost $vmHost -VMHostNetworkAdapters $networkAdapter -LogicalSwitch $logicalSwitch -CreateManagementAdapter -ManagementAdapterName "RDS-FabricVNetAdapter" -ManagementAdapterVMNetwork $vmNetwork -ManagementAdapterPortClassification $portClassification

    Now we can finally see what on earth we have been doing as this Logical switch we have created is now visible in Hyper-V Manager..

    hyper-v virtual switch

and if we look at its extensions we can see a new Microsoft VMM DHCPv4 Server Switch Extension in here, which allows us to control all the virtual switches from VMM.

The tricky part now is to add the VMs to the virtual network. This isn’t tricky because it’s hard; it’s tricky because if VMM and the DC lose sight of each other, or VMM can’t see the host, then we are in trouble, as we can’t easily change these settings in the UI or PowerShell, plus we’ll need to flush DNS as DHCP will kick in and change things as well.  However what we are doing here is moving VMs that are essentially part of the data centre fabric.  Other VMs would not be affected like this; indeed that’s the point, we should be able to move VMs across hosts and datacentres without affecting their connectivity.

    Here is a diagram of what we have created to contrast with what was above.

    image

    This has been quite a lot of work to achieve very little, but we now have the foundations in place to quickly add in more networks and more hosts and to isolate and manage our networking without needing to use VLANs.  However if there are already VLANs in existence then all of this will work just fine as well (for more on that check this post from the VMM engineering team).

Now I have a basic network for VMM and the hosts, I need to do some of this again for the VMs that will serve out applications.  Until next time, have a go at this, read the guidance on TechNet and create a lab like this to get your head around it.

  • Lab Ops – part 1 Introduction

For some people building demo setups is a part of the job, for example trainers, pre-sales technical roles and evangelists like me.  Everything shifts as products evolve, for example

    • a beta product comes out and we need to show new concepts that the technology introduces, for example software defined networking in Windows Server 2012 R2
    • Then as the product matures we need to show this in context of performance and scale, as well as how high availability is affected and how it interoperates with other solutions
    • We also need to show how to migrate onto the new solution and how to retire the legacy version.

Typically a product, even an operating system, doesn’t live in isolation, and all of this means that new setups need to be continually created.  So the trick is to have a framework to build from, rather than a set of virtual machines that get modified, checkpointed and so on.  This really hit home to me when I was trying to set up a VDI environment recently, as my deployment and desktop wingman Simon May is off to a new role in the USA and my usual hack and slash approach to VMs wasn’t working.

I was chatting over my problems with Marcus Robinson of Gold Partner Octari at the Virtual Machine User Group in Manchester, and he showed me his PowerShell! Marcus was up until 4am preparing for a course and had developed a script on the back of something developed by MVP and certified trainer Thomas Lee, whose scripts are published on PowerShell.com.  My approach was flawed because I was booting up a generic sysprepped VM which, while it was joined to the domain (you can set that in an unattend file), had a random name.  This meant I couldn’t easily persist a session in PowerShell to rename the VM in Active Directory, and the VMs have dynamic IPs as well.   The “Lee-Robinson” approach I picked up is really clever and works as follows:

    1. Use Windows install media to create a sysprepped Virtual Hard Disk (VHD) using the publicly available WimtoVHD Powershell script

2. Modify an unattend.xml file to contain the post sysprep configuration you need, by injecting xml for such things as the IP address, domain name and credentials to join the domain.

3. mount the newly created VHD and inject the unattend.xml file into [mounted drive letter]:\windows\system32\sysprep

    4. unmount the drive

    5. create a VM around the new VHD

6. start the VM (a rough PowerShell sketch of steps 3 to 6 follows this list)
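
Here’s a minimal sketch of steps 3 to 6, assuming the VHD and unattend.xml already exist; the paths, VM name and switch name are examples only, and depending on your Hyper-V module version the drive letter plumbing may need a tweak.

# Paths and names below are examples only
$vhd      = "E:\Lab\Template.vhdx"
$unattend = "E:\Lab\unattend.xml"

# 3. mount the VHD, find its drive letter and inject the unattend file
$disk  = Mount-VHD -Path $vhd -Passthru | Get-Disk
$drive = ($disk | Get-Partition | Get-Volume | Where-Object DriveLetter).DriveLetter
Copy-Item $unattend "$($drive):\Windows\System32\Sysprep\unattend.xml"

# 4. unmount the drive
Dismount-VHD -Path $vhd

# 5. create a VM around the new VHD
New-VM -Name "Lab-VM1" -MemoryStartupBytes 2GB -VHDPath $vhd -SwitchName "RDS-Switch"

# 6. start the VM
Start-VM -Name "Lab-VM1"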

Thomas and Marcus need to do this so they can set up a lab environment for each student, or each pair of students, on a series of hosts; in Marcus’s case he needed to build a lab to show off Data Protection Manager, while Thomas is constantly pushing PowerShell itself as well as needing to run labs of his own.  I can see other advantages of this approach to a lab:

• Paying for license keys isn’t a problem – VMs are just kept alive for a demo, and most Microsoft evaluation keys are good for 180 days.
• keeping snapshots of VMs in sync can be problematic, for example domain trusts and DNS can get confused, which leads to most of the problems you might have seen in my demos last year.
    • The resources I need to keep are just some scripts, databases and possibly some certificates.
    • I can adapt the scripts to work on Azure and have some of my stuff deployed there if I can rely on internet connectivity at an event.
    • I get sharper at deployment and PowerShell, both of which are useful skills anyway.

    This all seems such a good idea I thought I would do more posts on this in a series as I reset my demos for the next round of events I have been asked to do.

Finally, if you want a proper deep dive into PowerShell this is not the blog you are looking for, and you could do worse than hang out at one of Thomas’s PowerShell camps; at the time of writing the next one is 19th October 2013.

  • Lab Ops 19 – Software Defined Networking - where’s my stuff

Well done if you got through my last post; I bolted a lot of stuff together and it all looks a bit confusing, particularly if you are moving from Hyper-V to System Center Virtual Machine Manager 2012 (VMM).  For a start, where are my network settings? When I was testing my rig I found I couldn’t swap out the network connectivity on a test VM from an internal virtual switch to my shiny new VM Network, and when I looked at the error it said I couldn’t change the MAC address of a running VM.  What was going on here was that Hyper-V had assigned a static MAC address at random to my VM, and by assigning it to my new VMM infrastructure it wanted to assign a new one.  This got me wondering about MAC address spoofing, which can be important for legacy applications; in Hyper-V you can enable this from the advanced features of the network adapter on a VM. In VMM, MAC address spoofing is part of a Virtual Port Profile, which also allows us to set the other advanced networking features in Hyper-V like bandwidth management, guest NIC teaming, and router and DHCP guard..

Virtual Port Profiles are just logical groupings of these things which you can assign to a logical switch, and there are off the shelf ones like Guest Dynamic IP and the Host Management profile we used last time.  This might seem an unnecessary extra step, but now that we live in a world of NIC teaming we need to identify the different types of traffic flowing down a fat connection.  We can also see Uplink Port Profiles in this list..

    such as the RDS-FabricUplink I created in my last post, which allows us to connect a logical network and specify how that port gets teamed.  A logical switch has one or more uplink port profiles connected to it and has virtual ports to specify one or more port classifications to describe what traffic this switch can handle.  At this point we can assign the switch to as many hosts as we want and each one will inherit all these properties:

    • The logical switch appears as a Virtual Switch in Hyper-V and is bound to a NIC or team on that host.  When we do that we can see it’s the Uplink Port Profile that is passed to the NIC
    • Underneath that we have one or more Virtual Network adapters which are associated to a VM Network and a Virtual Port Profile.

When we attach a VM to a VM network, many of the properties are now greyed out (like those MAC address settings).
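
If you want to see the off the shelf classifications and profiles mentioned above before building your own, a couple of read-only cmdlets in the VMM console will list them:

# List the built-in port classifications (e.g. Host management, Guest Dynamic IP)
Get-SCPortClassification | Select-Object Name, Description

# List the built-in virtual port profiles; pipe one to Format-List * to see all the settings it groups
Get-SCVirtualNetworkAdapterNativePortProfile | Select-Object Name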

Anyway, I am now ready to create a couple of VM Networks for Dev and Production on which I can put identical VMs without them seeing each other, but which also allow them to span hosts ..

       

To do this I need to create the two VM Networks and integrate them into the networking fabric by associating them with the logical network (RDS-FabricNet) I created in my last post, and here’s the PowerShell:

    Import-Module VirtualMachineManager
    $logicalNetwork = Get-SCLogicalNetwork -Name "RDS-FabricNet"

# 2 x VM Networks, each with the same subnet
$vmNetwork1 = New-SCVMNetwork -Name "DevVMNet" -LogicalNetwork $logicalNetwork -IsolationType "WindowsNetworkVirtualization" -Description "Developer Virtual Network" -CAIPAddressPoolType "IPV4" -PAIPAddressPoolType "IPV4"
$subnet1 = New-SCSubnetVLan -Subnet "10.10.10.0/24"
New-SCVMSubnet -Name "DevVMNet Subnet 10" -VMNetwork $vmNetwork1 -SubnetVLan $subnet1
$vmNetwork2 = New-SCVMNetwork -Name "ProductionVMNet" -LogicalNetwork $logicalNetwork -IsolationType "WindowsNetworkVirtualization" -Description "Production Virtual Network" -CAIPAddressPoolType "IPV4" -PAIPAddressPoolType "IPV4"
$subnet2 = New-SCSubnetVLan -Subnet "10.10.10.0/24"
New-SCVMSubnet -Name "ProductionVMNet Subnet 10" -VMNetwork $vmNetwork2 -SubnetVLan $subnet2

At this point you might be wondering what to do about DNS and AD in this situation, as we would normally assign fixed IP addresses to these.  The answer is to start these VMs first and then they’ll get the lowest address by default, where x.x.x.1 is reserved on the subnet for the switch. This is similar to Azure, except that Azure hands out x.x.x.4 as the lowest address as there are three reserved addresses on a subnet.

Anyway, the other thing we’ll want to do is specify the new traffic that will be carried on our virtual switch by these VM Networks, and to do that we’ll add in another port profile.

$portClassification = Get-SCPortClassification -Name "Guest Dynamic IP"
$nativeProfile = Get-SCVirtualNetworkAdapterNativePortProfile -Name "Guest Dynamic IP"
$logicalSwitch = Get-SCLogicalSwitch -Name "RDS_FabricSwitch"   # the logical switch created in the last post
New-SCVirtualNetworkAdapterPortProfileSet -Name "Guest Dynamic IP" -PortClassification $portClassification -LogicalSwitch $logicalSwitch -RunAsynchronously -VirtualNetworkAdapterNativePortProfile $nativeProfile

    Our design now looks like this..

We can then assign these VM networks to new or existing VMs, and they will be available on any hosts we manage in VMM provided we connect those hosts to our virtual switch; a rough sketch of doing that for an existing VM is below. To create VMs we need a process, and for that we need somewhere to put them, so next up: storage in VMM.
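
Before moving on, here’s that sketch; the VM name is an example, and it assumes the VM network, subnet and port classification created or referenced above, so treat it as a starting point rather than gospel.

# Connect an existing VM's network adapter to the Dev VM network (VM name is an example)
$vm             = Get-SCVirtualMachine -Name "DevWebServer01"
$adapter        = Get-SCVirtualNetworkAdapter -VM $vm
$vmNetwork      = Get-SCVMNetwork -Name "DevVMNet"
$vmSubnet       = Get-SCVMSubnet -Name "DevVMNet Subnet 10"
$classification = Get-SCPortClassification -Name "Guest Dynamic IP"
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $adapter -VMNetwork $vmNetwork -VMSubnet $vmSubnet -PortClassification $classification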

  • Green IT Fatigue

I recently got asked to do an interview for TechWeek Europe about green initiatives in the IT industry. However, let’s be honest: computers burn a lot of power, require a lot of power to make, and are made of some nasty exotic compounds and chemicals, so they aren’t going to save the planet by themselves.  A few years back everyone was talking about Green IT, or more properly sustainable IT, and while that topic is no longer trending, we don’t seem to have done much about it and green fatigue has set in across IT.

Looking at what has been happening in the data centre, good work has been done, but not in the name of green IT. For example, server consolidation has meant physical servers are better utilised now; they are typically running 10+ VMs each rather than idling at 10%.  We have also got better at cooling those servers, but this has sometimes been driven not by a green initiative but by the cost of power and the capacity available from the power supplier in a particular location.

    Later versions of virtualisation technologies always make best use of the latest hardware but swapping out server hardware to get the benefits of the latest CPU or networking has to be balanced against the cost of making the new server and disposing of the old one, so you’ll want to focus on how you can extend the life of your servers possibly by just upgrading the software.

Virtualisation by itself can also cause more problems for the environment than it solves, because while you have achieved some consolidation you may well end up with a lot more VMs that aren’t doing much useful work.  Effective management of those VMs is the key to efficiency, for example:

• Elimination of Virtual Machine sprawl.  Typically this shows itself as a spread of numerous dev and test environments, and the only way I can think of to check this overuse of resources is to charge the consumer for them on the basis of what they have committed to use, so chargeback or at least showback.
• Dynamically optimising workloads based on demand – reducing the capacity of low priority, under used services, or stopping them altogether, to free up resources for busy services without needing more hardware.
• Extreme automation to reduce the number of IT staff per VM.  This reduces the footprint per VM, as each member of IT uses energy to do their work and often has to travel to work, so if that effort is spread across more VMs then that is more efficient.

These three things are actually all key characteristics of clouds, so my assertion is that cloud computing is more environmentally efficient, without necessarily being Green IT per se.  Public clouds operate at much greater scale and efficiency than is possible in many internal data centres1, plus they are often located specifically to take advantage of favourable environmental conditions, all of which means they are greener than running services in house.

So we are getting greener, it’s just that we don’t call it that, and no doubt now that we are fed up with the word cloud as applied to IT we’ll change that to something else as well.


1 Internally, Microsoft Global Services defines a data centre as having more than 60,000 physical servers

    http://www.techweekeurope.co.uk/interview/video-green-fatigue-microsoft-125267

  • SQL Server 2012 – Always On

    There have always been several ways to do high availability in SQL Server, but choosing the right one has always been difficult as each approach has obvious benefits coupled with unavoidable limitations:

    Clustering looks after a whole instance of SQL Server containing many databases and is completely transparent to an application. However shared storage adds cost and complexity and there is only the one copy of the database(s) on that shared storage.

    Mirroring creates a continuously updating replica of a given database, failover is really fast and it’s easy for a DBA to setup. However mirroring has several significant limitations:

• A special connection (the SQL Server Native Client) is needed to use mirroring, so not all applications can work with it
• Protecting multiple databases so that if one fails they all fail over is not really possible.
• There is only one mirror of the database
• The mirror is not directly usable; it just sits there unless you are prepared to work with snapshots.

Log shipping is a sort of manual mirroring which allows more than one replica to be kept, perhaps a local one and a remote one.  This is more difficult to set up, and failover is not automatic; you have to reset all of this yourself.

    To build a better SQL Server mousetrap, you would want a solution that:

    • Looks like a cluster to any application i.e. there is a DNS entry to the cluster to which the application connects without ‘knowing’ which node SQL Server is running on
    • You would want to treat a group of databases as an object so that they can be failed over etc. as needed in one go. 
    • As with log shipping, there wouldn’t just be one other node behind the primary there would be multiple mirrors/secondaries
    • The mirror could be read only and therefore available for reporting
    • You could opt to have some nodes connecting asynchronously and thus have a remote replica of your databases without needlessly slowing down the primary.

Up until now that meant we would have to use more than one feature in concert, e.g. mirroring and clustering together, to achieve the high availability we wanted. What SQL Server 2012 AlwaysOn does is provide this combination in one single feature:

    It uses the Windows Failover Cluster feature in Windows Server but doesn’t use any shared storage. A normal install of SQL Server 2012 is then done on each node and the SQL Server 2012 service is then configured to use the cluster..

    image

Having done that, you then tell the SQL Server service on each node to use the cluster via the new AlwaysOn High Availability tab in the properties for the service..

    image

    However AlwaysOn is actually doing something very similar to mirroring under the covers, in that there are replicated copies of the databases being protected not just one copy on shared storage as there is for clustering – and AlwaysOn doesn’t need to use shared storage. You’ll also notice that for databases to be protected by AlwaysOn they need to be in full recovery mode and backed up (preferably to a share that’s visible from the other nodes). However with AlwaysOn you can have multiple secondaries and you create availability groups, which are sets of the databases you want to keep together.

There’s a wizard in SQL Server Management Studio for this where you can specify the nodes, the databases and the options for accessing each node. Note this uses TCP/IP ports like mirroring does (port 5022 by default), and these need to be opened in the firewall for this all to work.
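
For example, on Windows Server 2012 or later you could open that port on each node with the firewall cmdlets; the rule name is just an example, and 5022 assumes you kept the default endpoint port.

# Allow inbound traffic on the default AlwaysOn endpoint port
New-NetFirewallRule -DisplayName "SQL Server AG Endpoint" -Direction Inbound -Protocol TCP -LocalPort 5022 -Action Allow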

    There’s a dashboard to confirm all is well ..

    image

There is also an option to create a TCP/IP listener, which provides an address and DNS entry for the cluster.  If you set this up you can connect directly to the cluster from any tool that can connect to SQL Server; in this case I have connected to the TechNet cluster from Management Studio in the same way I would connect to any other instance or cluster..

     image

    However you can also connect directly to the primary or secondary as well and for a read only secondary that’s how you would do reporting.

    I have a short (8 min) AlwaysOn screen cast if you want to know more or have a guide to help you try it yourself.

     

Finally, be aware that this is not replacing clustering, mirroring or log shipping, and note that it is only going to be available in SQL Server 2012 Enterprise edition.

  • UK Data in the Cloud

There is still a lot of inertia in the UK about storing data in the cloud, for entirely valid reasons as well as some rather vague uncertainty and doubt.  For a few organisations keeping data in the cloud is exactly the right thing to do, because those organisations want to actively share their information; the most obvious is the UK government with their data.gov.uk initiative.  Commercial companies may also want to sell their data, and rather than opening up fat pipes into their data centres the logical approach is to have this hosted on a public cloud as well.

I mention this because one aspect of Azure that Microsoft rarely talks about is the Windows Azure Marketplace (WAM), a portal for the sharing and consumption of large data sets.  Originally this was just US based, like a lot of Microsoft services, but over the last year or two it has grown steadily so that there are now a significant number of UK relevant data sets on there, most notable being the Met Office Open Weather Data (which is actually part of data.gov.uk)

Some of this data you will be paying for based on how many times you query it, so one way to minimise that cost would be to download it and then create your own internal data market, which would also include sets of data from in house systems for users to mash up using tools like PowerPivot.

You can of course connect PowerPivot etc. to the WAM, and the good thing about this approach rather than just pulling down a .csv file is that the connection location is remembered, and this is useful for several reasons:

• you know where the data came from, so it’s verifiable
• you can easily refresh your analytical model with the latest data from a given source
• if you wish to scale up your model, either to SharePoint in house, SharePoint Online in Office 365, or to a BI Semantic Model in SQL Server 2012, the source is preserved so the model can still be refreshed.

    So the Azure Data Market works well with self service BI to allow analysts to develop models based on external and internal data, say for mapping the weather to sales to develop models to predict demand as I have posted before.

The other way this data can be consumed is inside an application.  I can see a case for this sort of thing on a property search site, where additional local information is brought in alongside the details of the house/flat you are looking for, such as schools and their stats, hospital metrics, rail commute times, and so on.  This will typically incur a cost, but would give the site an edge over its competitors and could possibly be recouped through advertising.  There are also applications you can integrate with, such as translation services and Bing.

It’s also worth bearing in mind that you could be making money out of your own data by selling it via WAM as well.  Obviously this would not be personal data, so things like market research, house price information, or trends in the UK job market from a recruitment agency, all suitably anonymised.

Finally, there’s extensive help on how to use all aspects of WAM, such as code snippets, samples and how to videos, and it’s changing all the time, so even if there’s nothing of interest right now there may well be next time you look.