Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform


  • Lab Ops 6 – Setup VDI in Windows Server 2012R2

    VDI bridges the world of the client and the server, and in a world where IT professionals tend to be experts in one or the other (Simon and I being a case in point), that means most of us only have partial knowledge of how VDI works.

    You might also be forgiven for thinking that VDI isn’t a Microsoft strength, for two reasons:

    • It has been hidden behind offerings from third parties like Citrix, and those partners are still important for larger implementations, not least because in the past they made the only decent remote desktop clients for platforms such as Android and iOS.  That all changed on 7th October and to quote the official press release:

    “.. with Windows Server 2012 R2 Microsoft is introducing the Microsoft Remote Desktop app, available for download in application stores later this month, to provide easy access to PCs and virtual desktops on a variety of devices and platforms, including Windows, Windows RT, iOS, OS X and Android.”

    • VDI isn’t referred to as such in TechNet or in the Windows Server interfaces such as Server Manager – it’s called Remote Desktop Services, which covers the traditional terminal services way of providing a remote desktop as well as the use of pooled or personal client VMs.

    However, since Windows Server 2012 it has been pretty easy to set up a secure and resilient VDI environment using just Windows client and server, one that offers your users a rich and personalised experience.

    So before I go into how to build a lab, what are the moving parts?  The easiest way to see this is post deployment, as there’s an overview in Server Manager..

    VDI Overview

    The green objects have yet to be configured: the RD Gateway for external access to my VDI environment and the licensing server.

    The grey objects are configured and go blue when I hover on them:

    • The RD Web Access Server is the web server that users connect in through.
    • The RD Connection Broker is the middle tier which orchestrates the whole setup: which servers are performing which functions, where the VDI VMs are so it can control their state, and security to limit who can access which desktops and applications.
    • The RD Virtualization Hosts are the physical server(s) that the VDI VMs run on.
    • Optionally there is also an RD Session Host for Remote Desktop Services sessions (Terminal Services as was).

    Notice the IT Camp collection. A collection is a pool of client VMs. The idea is to manage the collection as one VM and I’ll describe the options and how to configure collections in my next post. 

    One thing to note about RDS/VDI is that high availability (HA) is going to be very important, as no one will get any work done if there are no desktops to work on. For this reason my lab setup needs to be tagged with “don’t try this at work!” Of course there’s no point in evaluating this unless you are sure that you can enable HA, and this is the sort of setup that would allow for that..

    (whiteboard sketch of the HA setup)

    Notes: 
    • The web access servers need to be load balanced as for any web site
    • The broker role can be clustered; note the use of SQL Server to store metadata about the environment when this is the case. Of course that SQL Server database would need to reside on resilient shared storage or it becomes a single point of failure.  Also note that Broker1 and Broker2 can both handle connection requests, i.e. this is an active-active scenario, and that’s why there’s a shared database in the mix.
    • The virtualization hosts have no special settings for HA, it’s just that there’s more than one of them, and typically you’ll want to ensure you have enough spare server resources to be able to run your collection of VMs if one of the servers is no longer available.
    • The VMs’ virtual hard disks could be on shared storage but actually this doesn’t matter too much; what matters is where the user’s data is and where the parent disk of the VDI collection is. RDS in Windows Server 2012 has special user profile disks, and it is these that should be on shared storage; when you configure RDS you get to specify where these are stored as distinct from the virtual hard disks for each VDI VM.

    At this point you might be wondering what happens if one of the Virtualization Hosts fails.  It’s really simple - those users that had open desktops based on VMs on that failed server will lose their session.  However, given that the shared storage behind all of the VMs is still there, when a user connects in again they won’t have lost any saved work. So it’s a bit like a real desktop crash, except that they can immediately sign back in and continue where they left off, as the broker will assign them an unused VM on a running server, and that will be quick as the VDI VMs are left in a saved state ready for immediate use.

    Anyway on with my lab setup which builds on earlier posts in this series..

    As I’ve already said I need at least one physical host to be the RD Virtualization Host, and my laptop (“Orange”) in the diagram above is that host. The other roles in the diagram can all be one or more VMs themselves and those roles can be combined.  A production environment will need to use multiple VMs on different hosts, but you might not need to scale out the roles to dedicated VMs in smaller deployments.

    For my demo I am going to use a VM for the Broker role (RDBroker), another for Web Access (RDWebAccess) and a third as a Session Host (RDSHost). To create all of these VMs I have adapted the setup scripts described in part 2 of this series, so I can create them with a line of PowerShell for each..

    Create-VM -Name RDWebAccess -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.20" -DnsSvr "192.168.10.2" -Domain "contoso.com"
    Create-VM -Name RDBroker -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.21" -DnsSvr "192.168.10.2" -Domain "contoso.com"
    Create-VM -Name RDSHost -VmPath ($labfilespath) -SysPrepVHDX ("E:\WS2012RTM SysPrep.VHDX") -Network "FabricNet" -VMMemory 2GB -UnattendXML $unattendxml -IPAddr "192.168.10.22" -DnsSvr "192.168.10.2" -Domain "contoso.com"     
    Note: my sysprepped image used to create these VMs is based on the download from MSDN, so I am asked for a license key which I can skip, but it’s a manual step. When an evaluation edition of Windows Server 2012 R2 is available this won’t be a problem.

    I can now configure these blank VMs through Server Manager from one machine, provided you have already added them as servers to be managed as I have (the servers in the yellow box)..


    The first step is to add the roles and features to each of my server VMs, and since Windows Server 2012 this has been a special installation option, Remote Desktop Services installation.  As I mentioned before, Remote Desktop Services is the generic term in Windows Server that covers both VDI and Remote Desktop Sessions (what was Terminal Services), and so the wizard asks you which one you want.  For this post I am going for VDI. In the following screens of this wizard I can assign each role to one of my servers, and the wizard will deploy the whole environment for me and then show me that overview screen at the start of this post.
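
    Incidentally, the same deployment can be scripted with the RemoteDesktop module; a rough equivalent for my lab might look something like the sketch below (treat it as a sketch rather than exactly what the wizard does; the FQDNs are just my lab names and “Orange” is my physical host):

    Import-Module RemoteDesktop
    #stand up a VDI deployment across the three roles in one go
    New-RDVirtualDesktopDeployment -ConnectionBroker "RDBroker.contoso.com" `
        -WebAccessServer "RDWebAccess.contoso.com" `
        -VirtualizationHost "Orange.contoso.com"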

    There are a couple of things you’ll want to check before you actually create a VDI collection:

    Check the Deployment settings:


    For example you can add in your licensing server and specify the Web Access URL to use, and in my case I don’t want the export location for my template to be on the RDBroker server as it’s a VM, so I’ll use a share somewhere else.  You’ll also want to set up certificates for this, and when I have that properly researched I’ll cover it off in this series.
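
    Some of these deployment settings can also be set from PowerShell; for example the licensing server and licensing mode can be configured along these lines (a sketch only, and the server names are just my lab ones):

    Set-RDLicenseConfiguration -LicenseServer @("RDBroker.contoso.com") -Mode PerUser `
        -ConnectionBroker "RDBroker.contoso.com"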

    Increase the VM Build Concurrency on your Hosts

    By default Hyper-V builds VMs one at a time, which might be a good if cautious move for heavily used production hosts, but on my laptop(s) I want to up this. This is the PowerShell to hack the appropriate registry setting and reboot the hosts for it to take effect..

    #replace "Server1","Server2" with your list of servers
    $CompNames = @("Server1","Server2")

    foreach ($CompName in $CompNames)
    {
        Invoke-Command -ComputerName $CompName -ScriptBlock {
            #allow up to 5 VMs to be built concurrently, then reboot so the change takes effect
            Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VmHostAgent\Parameters" `
                -Name "Concurrency" -Value 5
            Restart-Computer -Force
        }
    }

    Now I am ready to build “collections” of virtual machines and that’s coming up in the next post.

  • Lab Ops 5 - Access Rights in PowerShell

    In my last post I had a section in my script to create access rights on a share I had just created, and as promised I wanted to dissect that a bit more.

    To recap, what I wanted to do was to create a share with sufficient privileges to use it for storing VMs as part of my VDI demo. In order to understand what permissions might be needed I used the wizard in Server Manager to create a share (\\FileServer1\VMTest), checked that I could use the Remote Desktop Services part of Server Manager to successfully create a pooled VDI collection, and then examined its permissions.

    The share has Everyone full control access..

    Share settings

    and these are the File Access Rights on that share..

    Share security settings

    The SMB settings are set by the -FullAccess switch, for example in my script..

    #Orange$ is the name of the physical host (my big orange Dell Precision laptop)

    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator  

    The access rights are stored in Access Control Lists (ACLs) comprising individual Access Control Entries (ACEs), where an entry might state that contoso\Andrew has full control of a path and its folders and subfolders.  Out of the box in PowerShell there are just two commands involved: Get-Acl to get this list and Set-Acl to modify the settings on a path.  TechNet recommends that you create a share (say X:\Test), set up its permissions and pass these across to your new share..

    Get-Acl -Path "X:\Test" | Set-Acl -Path "X:\My New Share"

    which is fine, but in my case I am building a setup from scratch and I don’t want to have to use an interface to create the template.  So it occurred to me I could write the ACL out to a file with one row per ACE, and here’s my script to get the ACL for that VMTest share in the above screenshot..

    #write the ACL out to a CSV for later use
    $CSVFile = "\\Orange\E`$\UK Demo Kit\Powershell\FileShareACL.CSV"

    #If the file is already there delete it
    If(Test-Path $CSVFile) {Remove-Item $CSVFile}
    $Header = "Path|IdentityReference|FileSystemRights|AccessControlType|IsInherited|InheritanceFlags|PropagationFlags"
    Add-Content -Value $Header -Path $CSVFile
    $TemplatePath = "X:\shares\VMTest"
    $ACLList = Get-Acl $TemplatePath | ForEach-Object {$_.Access}
    foreach($ACL in $ACLList)
        {
        $LineItem = $TemplatePath + "|" + $ACL.IdentityReference + "|" + $ACL.FileSystemRights + "|" + $ACL.AccessControlType + "|" + $ACL.IsInherited + "|" + $ACL.InheritanceFlags + "|" + $ACL.PropagationFlags
        Add-Content -Value $LineItem -Path $CSVFile
        }

    The Add-Content command allows me to write into a file, but note that although I have specified this as a CSV file it is in fact pipe (“|”) delimited, because it’s generally madness to use commas as separators when they can appear inside the values, as they do in this case..

    Path|IdentityReference|FileSystemRights|AccessControlType|IsInherited|InheritanceFlags|PropagationFlags
    X:\shares\VMTest|BUILTIN\Administrators|FullControl|Allow|False|None|
    X:\shares\VMTest|BUILTIN\Administrators|FullControl|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|NT AUTHORITY\SYSTEM|FullControl|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|CREATOR OWNER|268435456|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|BUILTIN\Users|ReadAndExecute, Synchronize|Allow|True|ContainerInherit, ObjectInherit|
    X:\shares\VMTest|BUILTIN\Users|AppendData|Allow|True|ContainerInherit|
    X:\shares\VMTest|BUILTIN\Users|CreateFiles|Allow|True|ContainerInherit|

    so inheritance flags and access rights can have multiple values inside one ACE.  While there are several examples of this sort of script out there, I had to do some digging to find out how to use this file, so here’s my crude but effective attempt..

    $ACEList = Import-CSV -Path "\\orange\E`$\UK Demo Kit\Powershell\FileShareACL.CSV" -Delimiter "|"
    $TestPath = "C:\Test"
    If (Test-Path -Path $TestPath) {Remove-Item -Path $TestPath}
    MD $TestPath
    $ACL = Get-Acl -Path $TestPath
    ForEach($LineItem in $ACEList)
        {
        #trap the two problem entries described in the notes below
        If($LineItem.FileSystemRights -eq 268435456) {$LineItem.FileSystemRights = "FullControl"}
        If($LineItem.FileSystemRights -eq "ReadAndExecute, Synchronize"){$LineItem.FileSystemRights = "ReadAndExecute"}

        $NewRights = [System.Security.AccessControl.FileSystemRights]::($LineItem.FileSystemRights)
        $NewAccess = [System.Security.AccessControl.AccessControlType]::($LineItem.AccessControlType)
        $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]($LineItem.InheritanceFlags)
        $PropagationFlags = [System.Security.AccessControl.PropagationFlags]::None
        $NewGroup = New-Object System.Security.Principal.NTAccount($LineItem.IdentityReference)

        $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule ($NewGroup,$NewRights,$InheritanceFlags,$PropagationFlags,$NewAccess)
        $ACL.SetAccessRule($ACE)
        }
    Set-Acl -Path $TestPath -AclObject $ACL

    While this may not be a solution for you, it does show up some interesting points in PowerShell and in setting security:

    • There’s no command to make a new ACL; you have to start with one from somewhere.
    • Once I have declared $ACEList as my CSV file it is that file. For example I can get PowerShell to display it in all its glory using $ACEList | Out-GridView to see it as a table.
    • I have declared the separator I used to get round the comma problem with the -Delimiter "|" setting.
    • I can loop through each line of my file with a simple foreach loop, so $LineItem represents each line and I can reference the columns in that line with the standard . notation used in .Net e.g. $LineItem.FileSystemRights.
    • The stuff in square brackets might look hard to remember but PowerShell will auto complete this, so once I typed in [System. I could select Security and so on to build up the correct syntax.
    • One line in my CSV file (the CREATOR OWNER entry) has its File System Rights set to a number, in my case 268435456.  This is an example of an access control mask, a set of bits that will cause an error when I try to create $ACE, so I have an exception in my code (the first If statement in the loop) to trap that and in my case just assign full control.
    • There is another problem line in my CSV (the "ReadAndExecute, Synchronize" entry) which also threw an error.  I looked up the Synchronize option and realised it gets set by default, so I put in another exception in my script (the second If statement) to trap that and just apply ReadAndExecute rights.
    • I had to refer to the .Net documentation here to work out what order to put the parameters in to create a new ACE (the line beginning $ACE = New-Object…).
    • Having built up the ACL in the loop it’s really easy to apply it to your object, and of course this could be done for multiple objects in another loop.

    So what I have done here is a bit of a sledgehammer to crack a nut, but you might be able to use some of the principles to solve your real world problem; for example you might just knock up a CSV file with the permissions you want to set on an object.

    Finally there’s this properly tested File System Security PowerShell Module on the TechNet gallery although it’s not yet been tested on Windows Server 2012 or R2. 

  • Lab Ops 4–Using PowerShell with Storage

    In previous posts I created a file server for my lab environment and used the UI to create a storage space on it.  Now I want to automate the creation of two storage spaces over the same storage pool, which will then underpin my VDI demos. At the time of writing the only resource I could find on this was Jose Barreto’s blog.

    Before I get to the PowerShell this is a quick sketch of how my file server is going to be configured..

    (sketch of the file server storage layout)

    My VM already has 6 virtual hard disks attached from my last post - 1x IDE for the operating system and 5 SCSI disks.  Two of these SCSI disks are 50 GB in size, the rest are 1 TB.

    On top of this I want to create a pool “VDIPool” onto which I will put two storage spaces (virtual disks).  The dedup space at the top will have a one-partition volume (X:) which will be set up for VDI deduplication, while the normal volume/partition (N:) on the normal space won’t have this enabled but will be identical in size and configuration.  Both volumes will then have shares set up on them to host the VDI virtual machine templates and user disks.

    All of this only takes a few lines of PowerShell (fewer if you don’t like comments and want your commands all on one line!)..

     

    $FileServer = "FileServer1"
    $PoolName = "VDIPool"
    $SpaceName1 = "DedupSpace"
    $SpaceName2 = "NormalSpace"

    #1. Create a Storage Pool
    $PoolDisks = get-physicaldisk | where CanPool -eq $true
    $StorageSubSystem = Get-StorageSubSystem
    New-StoragePool -PhysicalDisks $PoolDisks -FriendlyName $PoolName -StorageSubSystemID $StorageSubSystem.UniqueId

    #2. tag the disks as SSD or HDD by size
    #   note: to get the actual size of the disks use (Get-PhysicalDisk).Size

    Get-PhysicalDisk | where size -eq 1098706321408 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType SSD

    #3. Create the necessary storage tiers
    New-StorageTier -StoragePoolFriendlyName $PoolName -FriendlyName "SSDTier" -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName $PoolName -FriendlyName "HDDTier" -MediaType HDD

    #4. Create a virtual disk to use some of the space available
    $SSDTier = Get-StorageTier "SSDTier"
    $HDDTier = Get-StorageTier "HDDTier"

    #5. create two Storage spaces
    New-VirtualDisk -FriendlyName $SpaceName1 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 1Tb,30Gb -WriteCacheSize 2gb -ResiliencySettingName Simple
    New-VirtualDisk -FriendlyName $SpaceName2 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 1Tb,30Gb -WriteCacheSize 2gb -ResiliencySettingName Simple

    #6. create the dedup volume and mount it
    $VHD = Get-VirtualDisk $SpaceName1
    $Disk = $VHD | Get-Disk
    Set-Disk $Disk.Number -IsOffline $false
    Initialize-Disk $Disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $Disk.Number -DriveLetter "X" -UseMaximumSize
    Format-Volume -DriveLetter "X" -FileSystem NTFS -NewFileSystemLabel "DedupVol" -Confirm:$false
    #note: -UsageType HyperV is for use with VDI ONLY!
    Enable-DedupVolume -Volume "X:" -UsageType HyperV

    #7. create the non dedup volume and mount it
    $VHD = Get-VirtualDisk $SpaceName2
    $Disk = $VHD | Get-Disk
    Set-Disk $Disk.Number -IsOffline $false
    Initialize-Disk $Disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $Disk.Number -DriveLetter "N" -UseMaximumSize
    Format-Volume -DriveLetter "N" -FileSystem NTFS -NewFileSystemLabel "NormalVol" -Confirm:$false

    #8. create the standard share directory on each new volume
    md X:\shares
    md N:\shares

    #9. Get a template ACL for the folder we just created and add in the access rights
    #   need to use the share for VDI VMs (Note there are too many permissions here!)
    #   and set up an ACL to be applied to each share

    $ACL = Get-Acl "N:\shares"
    $NewRights = [System.Security.AccessControl.FileSystemRights]::FullControl
    $NewAccess = [System.Security.AccessControl.AccessControlType]::Allow
    $InheritanceFlags = [System.Security.AccessControl.InheritanceFlags]"ContainerInherit, ObjectInherit"
    $PropagationFlags = [System.Security.AccessControl.PropagationFlags]::None

    $AccountList = ("Contoso\Administrator","CREATOR OWNER", "SYSTEM","NETWORK SERVICE", "Contoso\Hyper-V-Servers")

    Foreach($Account in $AccountList)
        {
        $NewGroup = New-Object System.Security.Principal.NTAccount($Account)
        $ACE = New-Object System.Security.AccessControl.FileSystemAccessRule ($NewGroup,$NewRights,$InheritanceFlags,$PropagationFlags,$NewAccess)
        $ACL.SetAccessRule($ACE)
        Write-Host "ACE is " $ACE
        }
    #10. Now setup the shares needed for VDI pool
    #    one for the VMs and one for the user disks
    #    on each of the volumes

    $SMBshares = ("DedupVMs","DedupProfiles","NormalVMs","NormalProfiles")
    foreach($Share in $SMBshares)
        {
        if($Share -like "Dedup*")
            {
            $SharePath =  "X:\shares\" + $Share
            md $SharePath
            New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator
            Set-Acl -Path $SharePath -AclObject $ACL
            }
        if($Share -notlike "Dedup*")
            {
            $SharePath =  "N:\shares\" + $Share
            md $SharePath
           
            New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator
            Set-Acl -Path $SharePath -AclObject $ACL
            }
        }

    Notes:

    • Get-StorageSubSystem returns the FileServer1 Storage Spaces subsystem on my demo rig
    • Using PowerShell to create storage spaces gives you access to a lot of switches you don’t see in the interface, like setting the cache size
    • When you create a storage space it’s offline, so you have to bring it online to use it
    • Apart from disabling the cache on an SMB share I couldn’t see what other settings make the share an application share – my test share in the screenshot above was created by using Server Manager to create an application share, a special kind of share in Windows Server 2012 for hosting databases and Hyper-V virtual machines

    Having run this I can now see my shares in Server Manager..

    shares

    This script is designed to be run on FileServer1, whereas the script to create the VM itself can be run from anywhere, so what I want to do is run this script remotely on that VM and the way to do that is:

    Invoke-Command -ComputerName FileServer1 -FilePath "e:\powershell\FileServer1 Storage Spaces.ps1"

    ..where FileServer1 Storage Spaces.ps1 is my script.  BTW I can apply a script like this to multiple servers at the same time by substituting FileServer1 with a comma separated list of servers, as below.
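
    For example, to run the same script against two file servers at once (the second server name here is made up purely for illustration):

    Invoke-Command -ComputerName FileServer1,FileServer2 -FilePath "e:\powershell\FileServer1 Storage Spaces.ps1"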

    Next time I want to go into what exactly is going on in the script to create an SMB share and assign access rights to it (steps 9 and 10), as I found this quite hard to research, and so perhaps I can make it easier for you and leave an aide memoire for me!

    As I have mentioned before, there isn’t an evaluation edition available just yet, so if you want to try this now you should find it will work on the Windows Server 2012 R2 preview, although you will still have to inject the license key in your scripts at some point to fully automate VM creation.


  • Lab Ops part 3 – Storage in Windows Server 2012R2

    In this post I want to show what I plan to do with storage in Windows Server 2012R2 at the various events I get asked to present at.  If you have been to any of our IT Camps you will have seen Simon and I show off deduplication and storage spaces in Windows Server and now I want to add tiered storage into the mix - and I have a cunning plan.  This is to declare some of my storage as hard disk and other parts as SSD.

    Before I get to that I need to create a file server VM. To do that I have adapted the setup script from Marcus that I have posted previously.  All I needed to do after the VM was created was to add in the file server roles and role features I needed.  The easiest way to do that is to use the UI to add in the necessary features and then save them off to an XML file..

     add fileserver features

    As the screen tip above shows, I can then consume this XML file to add the features to my file server VM every time I create it..

    Install-WindowsFeature -Vhd $VHDPath -ConfigurationFilePath "E:\scripts\file server 1 add features.xml" -ComputerName "Orange"
    ..where the -ComputerName setting is my physical host, as I am mounting the VHD offline to add the features before the VM starts.

    Now I can use this VM to show tiered storage and there are two parts to this:

    • Create some virtual disks and set them up to emulate the performance of the two different types of disk.  In Server 2012R2 there are advanced properties of virtual hard disks for Quality of Service (QoS), which are exposed in Hyper-V Manager, Virtual Machine Manager and of course PowerShell..

    #note the MaximumIOPS settings

    for ($i = 1; $i -le 3; $i++)
        {
        $VHDPath = $labfilespath + "\" + $VMName + "\" + "SCSI" + $i + ".vhdx"

        #the 1 TB disks are throttled to emulate slower spinning disks
        New-VHD -Dynamic -Path $VHDPath -SizeBytes 1tb
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath -ControllerType SCSI -ControllerLocation $i -MaximumIOPS 100
        }
    for ($i = 4; $i -le 5; $i++)
        {
        $VHDPath = $labfilespath + "\" + $VMName + "\" + "SCSI" + $i + ".vhdx"

        #the 50 GB disks get a much higher IOPS cap to emulate SSDs
        New-VHD -Dynamic -Path $VHDPath -SizeBytes 50gb
        Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath -ControllerType SCSI -ControllerLocation $i -MaximumIOPS 10000
        }
          
    To identify which disk is which in the VM for my demo (remember I want to use the UI to show this being set up, as PowerShell scripts aren’t great for explaining stuff) I tried to tag my disks as HDD or SSD once they are inside the VM, using..

    Set-PhysicalDisk -MediaType

    ..where the media type can be SSD or HDD. My plan was to base this on the size of the disks (my notional hard disks are 1 TB above, as opposed to just 50 GB for the SSDs). However you can’t use this command until you have created a storage pool, so I will keep this code for later and run it interactively.
       

    To stress my tiered storage and to show off deduplication in Windows Server 2012R2 I plan to use this storage as the repository for my VDI VMs. In order to do that I have to do the following things beforehand:

    1. Use the storage to create the pool. From Server Manager

    storagepool

    by right clicking on the primordial row in Storage Spaces, giving my pool a name and selecting all those blank SCSI disks I created..


    and clicking Create (note I have left all the disks set to automatic for simplicity).

    2. Tag the disks as SSD/HDD in order to tier the storage

    I just use the PowerShell above to do this based on the size of the disks..

    #note: to get the size of the disks use (Get-PhysicalDisk).Size

    Get-PhysicalDisk | where size -eq 1098706321408 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType SSD

    Checking back in Server Manager I can see the media type has been set and you can now see my VDI Pool..

    storagepool

    3. Create a virtual disk over the storage pool. 

    Storage spaces are actually virtual hard disks, and so from the screen above I select New Virtual Hard Disk from the tasks in the Virtual Disks pane and select the storage pool I have just created.  I then get the option to use the new tiered storage feature in Windows Server 2012R2..


    I’ll select simple in the next screen to stripe my data across the disks in the pool. In the next screen the option to thin provision the storage is greyed out because I have selected tiered storage, which isn’t a problem for my lab as this pool is built on disks which are in fact virtual disks themselves but be aware of this in production. Just to be clear you can either create a storage space (VHD) that is thin provisioned beyond what you have available now OR you can commit to a fixed size and use tiered storage.

    Now I get another new (for R2) screen to set the size of my VHD based on how much of the pool I want to use.  I am going to use 30 GB of SSD and 1 TB of HDD.


    I then get asked to format the VHD and mount it.  I mount it as drive V and give it the name VDIStorage.  In the final step I can enable deduplication..

    Note I have the option to use deduplication for VDI in Windows Server 2012R2, so I will go for that. The wizard will then guide me through formatting and mounting this VHD so it can now be used as a storage space.

    Now I am ready to configure VDI. But what if I wanted to do all of that in PowerShell? Well I am going to leave that for next time as this is already a pretty long post.

    In the meantime, if you have an MSDN subscription then you can get the RTM version of Windows Server 2012R2 and follow along. If not, I am sure there’ll be an evaluation copy coming in a few weeks to coincide with the general availability of R2, and in the meantime check out the Windows Server 2012R2 jump start module on MVA.

    Keith Meyer, one of my counterparts, has more on all of this in this excellent post.

  • After Hours - Canon EOS talking to a Surface Pro over wifi

    Please note: this is an after hours post, specifically about connecting a Canon EOS 6D to Windows 8/8.1.  I have written it for two reasons - so I can remember how to do it, and because you might need to do something like this for a camera enthusiast you know who isn’t a networking guy.

    Canon have made it relatively easy to connect the new EOS 6D, 70D etc. to your Android or iOS device and to a wifi hotspot to which your PC/laptop is connected.  However what I wanted to do was to configure Windows 8 as an ad hoc wireless connection point so I could remote shoot over wireless from my Surface Pro anywhere I happened to be; jungles, mountains, and the various events I go to.  However Windows 8 doesn’t have a UI for this anymore, so you need to run a couple of netsh commands from an elevated prompt to get this working:

    netsh wlan set hostednetwork mode=allow ssid=MyWIFI key=MyPassword

    netsh wlan start hostednetwork

    ..where MyWIFI is the wireless network name you want and MyPassword is the password to connect to it. What this does is to add a new adapter into network connections..


    In my case I renamed my connection to Canon, and also note that Deep6 has a three after it as I tried this a few times! Another thing you may see on forums is that you need to set up sharing when creating connections like this, and that’s only true if you want to do the old internet connection sharing. I don’t need to do this for this scenario, which is just as well as our IT department have prevented me from doing this in Group Policy.
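
    As an aside, if you want to check that the hosted network is up (and how many devices are connected), or stop it when you are done, the same netsh context covers that too:

    netsh wlan show hostednetwork

    netsh wlan stop hostednetwork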

    On my Canon EOS 6D I need to enable wifi..

    then set it up by selecting the wifi function which is now highlighted.  From here I want to set up a C connection, which is the Remote Control (EOS Utility) option..


    I have already done this a few times..


    so to set up a new connection I choose unspecified. Now I need to find the network I created on my Surface Pro by finding a network..


    My ad hoc network is called Deep6 as opposed to FAF which is my home wireless network..


    my key is in ASCII so I select that on the next screen and then I get this dialog to enter my password ..


    Note you have to use the Q button on the back of the camera to enter the text window. I am asked about IP addresses; I select automatic as my wireless network will do that for me. Then I can confirm I want to start pairing devices..


    and then I will see this..


    I can now check that my 6D is talking to my new wireless access point (which I have called Deep6).


    as you can see I have one device connected.

    So now I can use the supplied Canon software, the EOS Utility, to control my camera. Or so I thought, only all the control options are greyed out.  This is because you need to change the preferences to install and configure the WFT pairing utility, which detects your Canon and allows you to control it. To do this select the option to add the WFT pairing software to the startup folder..


    You’ll then get a little camera icon in your system tray, and when your Canon is connected it’ll pop up this window..


    click connect and  you’ll see an acknowledgement and confirmation on the camera..


    in my case my Surface is called Vendetta. I click OK and I am good to go, and the camera saves the settings for me, which is great; in fact I can save 3 of them. In my case I have saved my Surface connection and FAF to connect to my home wireless router.

    The Canon EOS  Utility will now work..


    Now I can start to have fun with this setup and my shots get saved to my Surface Pro..


  • Lab Ops 2–The Lee-Robinson Script

    In my last post I mentioned how Marcus Robinson had adapted PowerShell scripts by Thomas Lee to build a set of VMs to run a course in a reliable and repeatable way.  With Marcus’s permission I have put that setup script on SkyDrive; however he has a proper day job running his gold partnership Octari, so I am writing up his script for him.

    Notes on the script.

    Unattend.xml contains the instructions you can give to set up the operating system as it comes out of SysPrep. Marcus has declared the whole unattend.xml file as an object, $unattendxml, which he then modifies for each virtual machine to set its name in Active Directory, the domain to join it to, its fixed IP address and its default DNS server, along the lines of the sketch below.
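
    I haven’t reproduced Marcus’s exact code here, but the general idea of treating the answer file as an XML object looks something like this (the component name below is the standard unattend.xml one; the paths and computer name are placeholders):

    [xml]$unattendxml = Get-Content -Raw "E:\scripts\unattend.xml"

    #find the Shell-Setup component and give this VM its own computer name
    $shellSetup = $unattendxml.unattend.settings.component |
        Where-Object {$_.name -eq "Microsoft-Windows-Shell-Setup"} | Select-Object -First 1
    $shellSetup.ComputerName = "LabSvr1"

    #save a per-VM copy, ready to be injected into that VM's VHD
    $unattendxml.Save("E:\scripts\unattend-LabSvr1.xml")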

    He makes use of functions to mount the target VHD, copy in the modified unattend.xml and then dismount it.  Overarching this is a Create-VM function which incorporates these functions to create his blank VMs with known names and IP addresses. However these VMs are not started, as there is as yet no domain controller for the other lab VMs to join.

    LabSvr1 in the script is going to be that domain controller, so the first thing to do is add in the AD Directory Services role, and here note the use of the PSCredential PowerShell object to store credentials, with ConvertTo-SecureString for the password, so that the script can work securely on the remote VMs.  Starting the VM takes a finite amount of time so Marcus checks to see when it’s alive. The pattern is roughly the sketch below.
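
    A minimal sketch of that PSCredential pattern (the account name and password here are placeholders, not Marcus's values):

    $password = ConvertTo-SecureString "Passw0rd!" -AsPlainText -Force
    $cred = New-Object System.Management.Automation.PSCredential ("LabSvr1\administrator", $password)

    #once the VM responds, the credential can be used to run commands on it remotely,
    #for example to add the AD DS role
    Invoke-Command -ComputerName "LabSvr1" -Credential $cred -ScriptBlock {
        Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    }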

    Marcus then has a section to install various other workloads, all from the command line: Exchange, SharePoint and my favourite, SQL Server.  Before he can install some of these he has to install prerequisites such as the .Net Framework 3.5 (aka the NET-Framework-Core feature), and to do that he puts in the Windows Server install media. Having installed SQL Server he can then copy in the databases he needs, and here I might have attached them as well, as SQL Server has PowerShell (SMO) to do this..

    #assumes the SQL Server SMO assembly is on the box and we attach to the local default instance
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server "(local)"
    $sc = New-Object System.Collections.Specialized.StringCollection
    $sc.Add("C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\Mydatabase.mdf") | Out-Null
    $server.AttachDatabase("myDataBase", $sc)

    Anyway there’s lots of good stuff in here, and I’ll be using it to make my various demos for our upcoming IT Camps, now that the Windows 8.1 Enterprise ISO is on MSDN subscriptions. 

  • Lab Ops – part 1 Introduction

    For some people building demo setups is a part of the job, for example trainers, pre-sales technical roles and evangelists like me.  Everything shifts as products evolve, for example:

    • a beta product comes out and we need to show new concepts that the technology introduces, for example software defined networking in Windows Server 2012 R2
    • Then as the product matures we need to show this in context of performance and scale, as well as how high availability is affected and how it interoperates with other solutions
    • We also need to show how to migrate onto the new solution and how to retire the legacy version.

    Typically a product, even an operating system, doesn’t live in isolation, and all of this means that new setups need to be continually created.  So the trick is to have a framework to build from, rather than a set of virtual machines that get modified, checkpointed and so on.  This really hit home to me when I was trying to set up a VDI environment recently, as my deployment and desktop wingman Simon May is off to a new role in the USA and my usual hack and slash approach to VMs wasn’t working.

    I was chatting over my problems with Marcus Robinson of Gold Partner Octari at the Virtual Machine User Group in Manchester, and he showed me his PowerShell! Marcus had been up till 4am preparing for a course and had developed a script on the back of something developed by MVP and certified trainer Thomas Lee, whose scripts are published on PowerShell.com.  My approach was flawed because I was booting up a generic sysprepped VM which, while it was joined to the domain, had a random name (you can set this in an unattend file). This meant I couldn’t easily persist a session in PowerShell to rename the VM in Active Directory, and the VMs had dynamic IPs as well.   The “Lee-Robinson” approach I picked up is really clever and works as follows:

    1. Use Windows install media to create a sysprepped Virtual Hard Disk (VHD) using the publicly available WimtoVHD PowerShell script

    2. Modify an unattend.xml file to contain the post-sysprep configuration you need, by injecting XML for such things as the IP address, the domain name and the credentials to join the domain

    3. mount the newly created VHD and inject the unattend.xml file into [mounted drive letter]:\windows\system32\sysprep (see the sketch below these steps)

    4. unmount the drive

    5. create a VM around the new VHD

    6. start the VM
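
    As a rough illustration of steps 3 to 6 (the paths, VM name and switch name here are my own placeholders, not Thomas’s or Marcus’s exact code):

    $VHDPath  = "E:\Lab\LabSvr1.vhdx"
    $Unattend = "E:\scripts\unattend-LabSvr1.xml"

    #mount the sysprepped VHD and work out which drive letter it was given
    $drive = (Mount-VHD -Path $VHDPath -Passthru | Get-Disk | Get-Partition | Get-Volume |
              Where-Object DriveLetter | Select-Object -First 1).DriveLetter

    #inject the answer file where Windows setup will look for it, then detach the disk
    Copy-Item $Unattend "$($drive):\Windows\System32\Sysprep\unattend.xml"
    Dismount-VHD -Path $VHDPath

    #build a VM around the prepared VHD and start it
    New-VM -Name "LabSvr1" -VHDPath $VHDPath -MemoryStartupBytes 2GB -SwitchName "FabricNet"
    Start-VM -Name "LabSvr1"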

    Thomas and Marcus need to do this so they can set up a lab environment for each student, or each pair of students, on a series of hosts; in Marcus’s case he needed to build a lab to show off Data Protection Manager, while Thomas is constantly pushing PowerShell itself as well as needing to run labs of his own.  I can see other advantages of this approach to a lab:

    • Paying for license keys isn’t a problem – VMs are just kept alive for a demo and most Microsoft evaluation keys are good for 180 days.
    • Keeping snapshots of VMs in sync can be problematic, for example domain trusts and DNS can get confused, which leads to most of the problems you might have seen in my demos last year.
    • The resources I need to keep are just some scripts, databases and possibly some certificates.
    • I can adapt the scripts to work on Azure and have some of my stuff deployed there if I can rely on internet connectivity at an event.
    • I get sharper at deployment and PowerShell, both of which are useful skills anyway.

    This all seems such a good idea I thought I would do more posts on this in a series as I reset my demos for the next round of events I have been asked to do.

    Finally, if you want a proper deep dive into PowerShell this is not the blog you are looking for, and you could do worse than hang out at one of Thomas’s PowerShell camps; at the time of writing the next one is 19th October 2013.

  • Green IT Fatigue

    I recently got asked to do an interview for TechWeek Europe about green initiatives in the IT industry. Let’s be honest, computers burn a lot of power, require a lot of power to make and are made of some nasty exotic compounds and chemicals, so they aren’t going to save the planet by themselves.  However, a few years back everyone was talking about Green IT, or more properly sustainable IT, and while that topic is no longer trending, we don’t seem to have done much about it and green fatigue has set in across IT.

    Looking at what has been happening in the data centre then good work has been done, but not in the name of green IT. For example server consolidation has meant physical servers are better utilised now; they are typically running 10+ VMs each rather than idling at 10%.  We have also got better at cooling those servers, but this has sometimes been driven not by a green initiative but because of the cost of power and the capacity available from the power supplier in a particular location.

    Later versions of virtualisation technologies always make best use of the latest hardware but swapping out server hardware to get the benefits of the latest CPU or networking has to be balanced against the cost of making the new server and disposing of the old one, so you’ll want to focus on how you can extend the life of your servers possibly by just upgrading the software.

    Virtualisation by itself can also cause more problems for the environment than it solves, because while you have achieved some consolidation you may well end up with a lot more VMs that aren’t doing much useful work.  Effective management of those VMs is the key to efficiency, for example:

    • Elimination of virtual machine sprawl.  Typically this shows itself as a spread of numerous dev and test environments, and the only way I can think of to check this overuse of resources is to charge the consumer for them on the basis of what they have committed to use - so chargeback, or at least showback.
    • Dynamically optimising workloads based on demand – reducing the capacity of low priority, under used services or stopping them altogether to free up resources for busy services without needing more hardware.
    • Extreme automation to reduce the number of IT guys per VM; this reduces the footprint per VM, as each member of IT uses energy to do their work and often has to travel to work, so if that effort can be spread across more VMs then it is more efficient.

    These three things are actually all key characteristics of clouds, so my assertion is that cloud computing is more environmentally efficient without necessarily being Green IT per se.  Public clouds operate at much greater scale and efficiency than is possible in many internal data centres¹, plus they are often located specifically to take advantage of favourable environmental conditions, all of which means they are greener than running services in house.

    So we are getting greener, it’s just that we don’t call it that, and no doubt now that we are fed up with the word cloud as applied to IT we’ll change that to something else as well.


    ¹ Internally Microsoft Global Services defines a data centre as having more than 60,000 physical servers.

    http://www.techweekeurope.co.uk/interview/video-green-fatigue-microsoft-125267

  • UK Data in the Cloud

    There is still a lot of inertia in the UK about storing data in the cloud for entirely valid reasons and rather vague uncertainties and doubt.  For a few organisations keeping data in the cloud is exactly the right thing to do because those organisations want to actively share their information, the most obvious is the UK government with their data.gov.uk initiative.  Commercial companies may also want to sell their data and rather than opening up fat pipes into their data centres the logical approach is to have this hosted on a public cloud as well. 

    I mention this because one aspect of Azure that Microsoft rarely talks about is the Windows Azure Marketplace (WAM), a portal for the sharing and consumption of large data sets.  Originally this was just US based, like a lot of Microsoft services, but over the last year or two it has grown steadily so that there are now a significant number of UK relevant data sets on there, most notably the Met Office Open Weather Data (which is actually part of data.gov.uk).

    Some of this data you will be paying for based on how many times you query it, so one way to minimise that cost would be to download it and then create my own internal data market, which would also include sets of data from in house systems for users to mash up using tools like PowerPivot.

    You can of course connect PowerPivot etc. to the WAM, and the good thing about this approach rather than just pulling down a .csv file is that the connection location is remembered, and this is useful for several reasons:

    • you know where the data came from, so it’s verifiable
    • you can easily refresh your analytical model with the latest data from a given source
    • if you wish to scale up your model, either to in house SharePoint, to SharePoint Online in Office 365 or to a BI Semantic Model in SQL Server 2012, the source is preserved so the model can still be refreshed.

    So the Azure Data Market works well with self service BI to allow analysts to develop models based on external and internal data, say for mapping the weather to sales to develop models to predict demand as I have posted before.

    The other way this data can be consumed is inside an application.  I can see a case for this sort of thing on a property search site, where additional local information is brought in alongside the details of the house/flat you are looking for, such as schools and their stats, hospital metrics, rail commute times, and so on.  This will typically incur a cost, but it would give the site an edge over its competitors and could possibly be recouped through advertising.  There are also applications you can integrate with, such as translation services and Bing.

    It’s also worth bearing in mind that you could be making money out of your data by selling it via WAM as well.  Obviously this would not be personal data, so think of things like market research, house price information, or trends in the UK job market from a recruitment agency, all suitably anonymised.

    Finally there’s extensive help on how to use all aspects of WAM, such as code snippets, samples and how to videos, and it’s changing all the time, so even if there’s nothing of interest right now there may well be next time you look.

  • Microsoft Valuable Penguin

    I sometimes think the IT industry is a bit like a load of penguins on the pack ice, each one checking the others out to see who’s going to brave the ocean first.  In IT it’s nervousness about when to jump onto a new technology, and if there is one technology that makes IT professionals really nervous then that would be cloud.  So when Microsoft launched the Cloud OS, there was always going to be some nervousness about it and a reluctance to jump in and try it, no matter what it did or how good or bad it was.  However there are some pioneers out there who have already implemented the Cloud OS, and some of these are MVPs (Microsoft Valued Penguins Professionals), as they not only try out the new stuff like the Cloud OS, they share their knowledge and experience as early adopters. Moreover, unlike me they are independent, and although well connected into Microsoft they can on occasion be very vocal when they feel there is a real problem or missing feature in a new product.

    There are about 200 MVPs based in the UK and some of the best of them have decided to run a series on the Cloud OS where they can share what they are doing with it and what they have learned by implementing it in production.  The event runs 9th-13th September in Microsoft's London offices and the agenda looks like this..


    Monday 9th September

    Please register to attend either track 1 or track 2:

    · Track 1 will focus on building the modern enterprise data platform. In a series of three presentations we will tackle the issues of architecture, application frameworks, data integration and data exchange; learning all about the challenges faced by the modern data tier developer. Most importantly, we will learn how to creatively overcome them by enhancing our processing efficiency and analytical capability. Register to attend

    · Track 2 will focus on the creation of Business Intelligence and advanced analytics solutions that utilise both structured and un-structured data. We will demonstrate the use of data mining and predictive analytics technologies and also demonstrate how advanced visualisation technologies can be used by business users to deliver the insight and action required to drive real value from data.

    Register to attend


    Tuesday 10th September

    Each session will demonstrate how to:

    · Deliver best practices with Windows Server 2012 R2 and System Center 2012 R2

    · Lower costs through effective management of VMware and Hyper-V

    · Enable management of datacentres of any size!

    · Drive automation of complex applications with service templates

    Register to attend


    Wednesday 11th September

    Sessions will include:

    · Windows Azure Service Bus

    · Windows Azure BizTalk Services

    · Microsoft BizTalk Server (both on-premise and Cloud Virtual Machine)

    Register to attend


    Thursday 12th September at the Microsoft Office, London, Victoria

    Join leading MVPs for a one day event to understand how to manage your client devices in a single tool while reducing costs and simplifying management. Best of all, you can leverage your existing tools and infrastructure.

    Sessions will include:

    · Helping with data security and compliance

    · Unified device management

    · What powers people-centric IT with Cloud OS?

    · Real World customer examples

    Register to attend

     


    Friday 13th September

    The explosion in devices, connectivity, data and the Cloud is changing the way we develop and deliver software.  New infrastructure services permit existing server applications to be “lifted & shifted” into the Cloud.  Attend a one day event to hear from MVPs about how Microsoft’s data platform and development tools enable you to develop, test, and deploy applications faster than ever.

    Sessions will include:

    · Infrastructure services,

    · Media services,

    · Service Bus  & Mobile services

    Register to attend


    So please register and pppick up an MVP, and learn about what this Cloud OS is really all about.

  • SQL Server Spruce up

    Land Rovers will take a lot of pounding and neglect, but when my wife drove hers to Australia she made very sure it was properly set up for 2 years on the road..


    Similarly SQL Server is also often out in the wild, far from DBAs and from inspection by maintenance tools like System Center.  However now might be a good time for a bit of TLC if things are quiet for you in August. In my last post I dealt with servers in general, so today I want to look at a SQL Server spruce up, particularly for those who are not full time DBAs.

    As per that post you may well be able to decommission whole VMs running SQL Server, but what I want to cover here is what you might want to check on the instances and the databases themselves.  Books have been written on this, but I would be interested in the following (there’s a PowerShell sketch to check the first two after this list):

    • Compatibility level – the ability to make a shiny new copy of SQL Server look like an older one – should only be set where you actually need backward compatibility.  Note that by default this is kept at the old level after an upgrade from an older version.
    • Memory reservation should be set to less than what’s available and not left blank. The question is how much less, and the answer depends on what else is running on that server, but leave at least 768 MB for the operating system itself.
    • Where and how many TempDB files there are, as per the TechNet guidance here.
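
    If you’d rather not click through Management Studio for the first two checks, a quick sketch in PowerShell (assuming the sqlps module, and therefore Invoke-Sqlcmd, is available on the box) would be something like:

    Import-Module sqlps -DisableNameChecking

    #compatibility level of every database on the local instance
    Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT name, compatibility_level FROM sys.databases"

    #the current max server memory setting (in MB)
    Invoke-Sqlcmd -ServerInstance "localhost" -Query "SELECT name, value_in_use FROM sys.configurations WHERE name = 'max server memory (MB)'"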

    For individual databases you might want to check that

    • Statistics are up to date for all your tables
    • Indexes are checked for fragmentation and rebuilt or reorganised where needed
    • There is no extraneous fluff in your databases, such as copies of tables and indexes, or remnants of dev code like spare views and procedures.

    Rather than perform all of these sorts of checks yourself, you could deploy System Center Advisor, which is a free cloud based service.  It matches your installation and databases against best practice from Microsoft Premier Field Engineers and tells you every day what you need to worry about.  It can be securely deployed behind a gateway in your infrastructure so your actual SQL Servers don’t need to be internet facing, and I have posts on how to do that here.

    Finally you might also want to benchmark your database by putting a known load on it that you can use as a reference when making any changes to it, such as virtualising it, upgrading it etc.  To be honest I couldn’t find too much on TechNet/MSDN/Codeplex to help with this, so you may want to resort to third party tools from Dell (Quest), Idera, RedGate, SQL Sentry, and dare I mention PowerShell (as per this article by Aaron from SQL Sentry).

  • Spruce Up your Data Centre

    The summer holidays can generally be a quiet time for some IT Pros, depending on the industry you work in, so I wondered if this would be a good time for a bit of a tidy up in the data centre.  The easy bit of that is to actually tidy up the physical environment such as cabling left lying around or temporarily put in place but has now become “live”.  Actually I would love to see some photos of server room chaos, and I am sure Sara can organise a T-Shirt for the messiest.

    What I actually meant was tidying up the data on the servers. At the highest level you might have whole test or evaluation setups, possibly several VMs each, that you don’t really need anymore.  There might be individual random VMs on there as well. The challenge is whether they can be stopped and archived, and that depends on what the owner feels about them, so a key technique for efficient data centre management is chargeback, or at least showback, as waste is a lot lower when you are paying for something!

    VMs are very easy to snapshot/checkpoint and hopefully you are aware of the impact of rolling back/reverting to a checkpoint on any given VM, and if you can’t revert to that checkpoint is there any point in keeping it?

    Then there is the question of what is in those VMs. You might worry about whether they are all properly licensed, and actually if the licenses have expired is the VM any use anymore anyway?  You might also get some licenses back if you can shut redundant stuff down.

    Looking at the software that’s on all those VMs, is it patched and up to date?  Even a test setup should be patched to a desired configuration to match the thing that you are testing, which might itself be the application of a patch.

    Then there is the data that’s on there: is it dev, test or production data, and what protection should be accorded to it?  The VM itself may well have backups inside it which could be redundant, and hopefully your environment will let you reclaim space if you shrink the VMs.

    Those are some of the problems you might want to address in your summer spruce up but how to find the problems in the first place?  In the Microsoft world there are a couple of tools:

    • System Center. If this is being used as intended then you’ll know some key things about your services, VMs and data..
      • Who owns them,
      • Are they compliant with your desired configurations for production, dev and  test.
      • What resources they are using and how close to any thresholds are they
      • What software they are running (I am assuming here you managing servers via Configuration Manager)

    so you know where to start looking to clean things up, and possibly, if you are using self service, VMs that are end of life will automatically be decommissioned.

    • Microsoft Assessment & Planning Toolkit.  This is a free tool which you run as required against your data centre and which reports back what you have; this can include non-Microsoft stuff as well.  You’ll need to give it various credentials for the discovery methods.

    The next thing is to ensure you have a good backup strategy, and then get rid of the deadwood safe in the knowledge that you do have a backup.  Of course you might then want to revisit the retention of those backups if no one notices that you got rid of loads of stuff.

  • Installing SQL Server in a Virtual World

    I do get occasional feedback that SQL Server is hard to install; all those screens, all those checkboxes etc.  This is simply a reflection that there is so much in SQL Server aside from the database engine. However, if you know what you want and you are doing this regularly then script it, if you have less experienced staff you want to delegate the task to then script it, and if you want to reduce patching then er … script it.

    On my recent VMware course it was obvious that the rest of the delegates, while generally keen on scripting, didn’t do this when deploying SQL Server on VMware, so here’s my advice (which also works on Hyper-V BTW).

    What you can’t do in a virtual world is simply copy/clone a SQL Server virtual machine, because you’ll end up with two VMs with the same Active Directory SID, and SQL Server doesn’t like the server name to change once it’s installed. So this is how I do it, as I often need to build a quick SQL VM:

    • Set up a VM with your guest OS of choice, e.g. Windows Server 2012 Datacenter edition.
    • Install the prerequisites for SQL Server, for example the .NET Framework 3.5 SP1
    • Use the Prepare Image option to partially install SQL Server (there’s a rough example of the command line after these steps)
    • SysPrep the machine (windows\system32\sysprep\sysprep.exe) to anonymise it
    • Create an unattend.xml to be consumed when the VM comes out of SysPrep and save it to the SysPrep folder above. Typically this answer file will join your new VM to your domain, set up the local admin account, input locale, date/time etc., and this TechNet library article will walk you through that.

    Note: passwords in the answer file are stored in clear so plan around that.

    • Save this off as a template.  That typically means saving the VHD on a share for later use.
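
    For reference, the prepare/complete steps look roughly like this when run from an elevated prompt in the SQL Server media folder. Treat it as a sketch rather than a recipe: the feature list, instance ID, accounts and password below are all hypothetical placeholders.

    # run before SysPrep: lay the bits down without configuring the instance
    .\setup.exe /QS /ACTION=PrepareImage /FEATURES=SQLEngine /INSTANCEID=MSSQL11 /IACCEPTSQLSERVERLICENSETERMS

    # run on the cloned, renamed and domain joined VM to finish the install
    .\setup.exe /QS /ACTION=CompleteImage /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQL11 `
        /SQLSVCACCOUNT="contoso\sqlservice" /SQLSVCPASSWORD="P@ssw0rd" `
        /SQLSYSADMINACCOUNTS="contoso\administrator" /IACCEPTSQLSERVERLICENSETERMS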
     

    To use the template

    • Create a new VM using a PowerShell / PowerCLI script. For example here’s the sort of thing we used earlier in the year at our camps to create a server on Hyper-V..

    # create the VM without a disk, then set CPU count and dynamic memory
    New-VM -Name $VMName -NoVHD -MemoryStartupBytes 1GB -BootDevice IDE -SwitchName $VMSwitch -Path $VMLocation
    Set-VM -Name $VMName -ProcessorCount 2 -DynamicMemory

    # attach a copy of the sysprepped template VHD and start the VM
    Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath
    Start-VM -Name $VMName

    which creates a simple VM with 2 processors and 1GB of dynamic memory based on a given VHD, $VHDPath.

    • Rename the machine. Here’s the clunky script Simon and I use in our camps to do this..

    # find the IP address of the new VM and look it up in DNS to get its current (generated) name
    $vmip = Get-VMNetworkAdapter $VMToRename | where switchname -eq "CorpNet" | `
        select -ExpandProperty "IPAddresses" | where {$_ -match "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$"}

    $vmGuestName = [system.net.dns]::GetHostEntry($vmip)
    $vmGuestName = $vmGuestName.HostName

    # now execute a remote PowerShell command to rename it, then restart it to pick up the new name
    Invoke-Command -ComputerName $vmGuestName -ScriptBlock {
        Rename-Computer -NewName $args[0] -DomainCredential contoso\administrator } -ArgumentList $NewVMName

    Restart-Computer -ComputerName $vmGuestName -Wait -For PowerShell

    Hopefully you’ll write something better for production

    Doing this from the installer UI and Server Manager is tedious and prone to mistakes, but there is another reason to do all of this from the command line: you should be installing SQL Server onto an installation of Windows Server that has little or no UI. It’s called Server Core and is the default option when installing Windows Server 2012.  It cuts patching in half, and there’s no browser to secure, because it’s designed to be managed remotely. New in Windows Server 2012 is the ability to turn the user interface on and off (in 2008 R2 this was a one-time install choice), and there’s a new halfway house installation called MinShell; my post on it is here.
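
    As a rough illustration, moving between the full GUI, MinShell and Server Core is just a matter of removing or adding features. These are the standard Windows Server 2012 feature names, but treat this as a sketch and try it on a test box first:

    # drop to MinShell: keep the management tools but remove the shell
    Uninstall-WindowsFeature Server-Gui-Shell -Restart

    # drop all the way to Server Core
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

    # bring the full GUI back if you need it
    Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart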

    Any VMware expert is going to read this and laugh, because vCenter has a built in template capability so you don’t have to do all the PowerShell hand cranking to clone a sysprepped VM and then domain join it.  Any Hyper-V expert shouldn’t be doing this either, as System Center is how this is done in production: you can create not just templates of individual VMs, but architect whole services and set up self service so users can ask for templated VMs via your service desk or directly from a portal.  However my point here is that under the covers this is the sort of thing you’ll need to do to run lots of SQL Server at scale for tier 1 applications where downtime is critical. Having said that, this sort of thing might be useful for labs and for setting up evaluations of SQL Server 2014 running on Windows Server 2012 R2.

  • Bi Product

    I have just read Paul Gregory’s guest post for the TechNet Flash, and the two things that caught my eye were to be bi-lingual and to keep your skills up to date.  I put these two themes together and came up with the title for this post, which essentially means being skilled in more than one technology.  As we move into a world where some of the nuts and bolts are automated or outsourced away from us, having a set of skills that can bridge technologies is going to be more valuable. In my own case I can think of two ways this works:

    • I am pretty good on SQL Server, especially BI, such that I could still have it on my CV with some confidence; however more recently I have been focusing on Windows Server and System Center and my SQL background has really helped with this. For example in my Evaluate This series I used a SQL Server workload to show Hyper-V Live Migration, SQL Server running on a Storage Space and SQL Server running on Windows Server Core.
    • I have a pretty good knowledge of VMware (I am a lowly VCP5) and I have the MCSE Private Cloud. This means I understand enough to be able to speak VMware and articulate the world of Hyper-V to VMware experts. 

    This ability to cross technologies comes into play with integration, where you know how to get product X to integrate with solution Y, and with migration, where I want to move from vendor 1 to vendor 2.  Those kinds of projects are always going on and have a number of advantages over other kinds of IT work:

    • You are respected as the expert, and you can’t buy respect, you can only earn it
    • The work is more challenging, so having the skills isn’t enough; you also need (per Paul Gregory’s post) to relate to users and other technical teams. 
    • The day rates are higher, because the combination of those two skills is rare.

    So while it’s quiet over the summer holidays (unless you are in education IT, in which case you have my complete respect!) start having a look at some new stuff, be it Windows Server 2012 R2 or SQL Server 2014, or have another look at the Microsoft Virtual Academy (MVA).

  • Virtualizing Tier 1 SQL Server

    A while back I published a couple of posts on virtualizing SQL Server, and in the light of developments in both the virtualization platform out there and SQL Server itself I feel the need to do a complete rewrite. 

    The traditional approach to implementing high availability (HA) in SQL Server has been to create a cluster, and to be more resilient that means three nodes or more, so that you still have two-node HA while one node is offline for planned maintenance, for example to patch the OS or the SQL Server instance itself.   What does this mean for virtualization? If you are using Hyper-V it doesn’t really matter; the VMs comprising this cluster (aka a guest cluster) are kept on separate physical nodes of a (physical) cluster, and you can patch the hosts, the guest OS and SQL Server all using Cluster Aware Updating (CAU) in Windows Server 2012.  However it’s not quite so easy in VMware: you’ll have to use VMware Update Manager to patch the hosts and then use CAU to patch the guest OS and SQL Server. Moreover, as far as I know you can only have a two node guest cluster in vSphere, so while you are patching SQL Server you are down to one node.   So what if you have to use VMware and you want more in the way of HA, like you have on Hyper-V? 

    One option would be to use Availability Groups in SQL Server 2012 Enterprise edition. This combines the best of mirroring/log shipping with clustering (there’s a rough PowerShell sketch after this list):

    • There’s no shared storage, so I don’t see why you would be limited to a three node guest Windows cluster.
    • Failover is very quick: as there’s no shared storage, each node has its own copy of the database being protected
    • Unlike mirroring and log shipping you are protecting a group of databases as though they were one, and you can use the secondaries for reporting and as a source for backups (only full backups though). Plus you can have multiple secondaries, for example a synchronous secondary in your local data centre with an asynchronous copy at another location, a bit like replication in Hyper-V & VMware but at the database rather than the VM level.  That’s an important point: you should use this technique rather than VM replication, as all you are synching is the actual SQL Server data.
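
    If you want to see roughly what setting one up looks like, here’s a minimal PowerShell sketch using the SQLPS cmdlets. It assumes a lot: two instances called SQL1 and SQL2 (hypothetical names) already in the same Windows cluster, database mirroring endpoints listening on port 5022, and the database already restored WITH NORECOVERY on the secondary. Treat it as a pointer to the cmdlets rather than a recipe.

    Import-Module SQLPS -DisableNameChecking

    # describe the two replicas: a synchronous primary and an asynchronous secondary
    $primary   = New-SqlAvailabilityReplica -Name "SQL1" -EndpointUrl "TCP://SQL1.contoso.com:5022" `
                     -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
    $secondary = New-SqlAvailabilityReplica -Name "SQL2" -EndpointUrl "TCP://SQL2.contoso.com:5022" `
                     -AvailabilityMode AsynchronousCommit -FailoverMode Manual -AsTemplate -Version 11

    # create the group on the primary, then join the secondary and its copy of the database
    New-SqlAvailabilityGroup -Name "AG1" -Path "SQLSERVER:\SQL\SQL1\DEFAULT" `
        -AvailabilityReplica $primary, $secondary -Database "AdventureWorks"
    Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQL2\DEFAULT" -Name "AG1"
    Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\SQL2\DEFAULT\AvailabilityGroups\AG1" -Database "AdventureWorks"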

    Your next consideration is going to be making sure you get a predictable level of performance, or your users might be phoning about speed as well.  Tuning in a physical world has occupied many a mind and there’s tons of advice out there from MVPs, TechNet etc.   Things get tricky in a virtual world as resources are shared.  However if you are running a tier 1 database then best practice would be:

    • CPU: don’t over commit, and use all the NUMA capabilities in your hypervisor to pass through maximum performance to the database.  Bear in mind that for HA you might well want this capacity reserved on other nodes.
    • RAM: can’t be over committed in Hyper-V, and shouldn’t be over committed on VMware as performance suffers.
    • IO: use the latest SR-IOV cards, which can receive and manage virtual network traffic straight from the VMs, if you can.  However you can’t team SR-IOV cards, so you might want to pass through multiple SR-IOV NICs to a VM and then team inside the VM (you can do this in Hyper-V, but I am not sure if you can on VMware where the guest OS is Windows Server 2012).  If not, use NIC teaming at the host level and the appropriate teaming policies for access to the database (VMware advice on this is under networking policies here).
    • Storage: access questions usually revolve around whether to use RDM/pass through disks so that the database itself is stored directly on a LUN referenced by the VMs. There’s actually very little difference these days, and on both platforms you could use a share if you have a Windows Server 2012 file server running Storage Spaces. 

    The definitive white paper for virtualizing SQL Server 2012 is here. However the latest best practices guide for running SQL Server on VMware I could find is here, but it’s three years old and so applies to older versions of SQL Server (typically 2005/2008) and Windows Server 2008.  Hopefully this will change, as Windows Server 2012 & SQL Server 2012 are now supported, and of course there’s going to be even more new stuff with SQL Server 2014 running on Windows Server 2012 R2. Whatever you decide to do you’ll want your HA design to be supported, and for the definitive word on that check KB956893.

    Finally if you are a DBA reading this, one way to get to know your data centre admins is to help them with their SQL Server; whether they are using System Center or vSphere it’s likely that the database underpinning these is SQL Server, and it could probably do with a bit of TLC, and a general discussion about protecting those databases too as they are vital components of your data centre.

    Note:

    • My definition of tier 1 isn’t necessarily big; it’s more about the impact of a tier 1 service not being there. If it isn’t there you can’t operate, trade, function etc.  Of course systems like this tend to be heavily used, so predictable performance is important too.
    • I haven’t mentioned VMware Fault Tolerance here because it has so many limitations that it’s impractical for all but the smallest databases; if it’s tier 1 it’s generally very big and used by lots of people, so being limited to one vCPU doesn’t really work.
  • Virtualisation and Interoperability

    I often think that Microsoft is a bit like the English language, a lot of people speak it, a lot of people don’t and for many people it’s a second or third language they need for work e.g. air traffic controllers.  In technology few of us run a totally Microsoft environment, a few more will have nothing to do with Microsoft for almost religious reasons but the majority have some of this and some of that and are working hard every day to get everything to work.

    So I am pleased that there are three really important announcements to make support issues for the majority of us a little easier:

    • The Server Virtualisation Validation Programme (SVVP) has been updated, and Microsoft in conjunction with VMware will now support Windows Server 2012 running on vSphere 5.0 update 1 and 5.1.  That means you can phone Microsoft support when a VM running Windows Server 2012 doesn’t work properly on either of those versions of vSphere.  For example, reverting to snapshots of your virtual domain controllers works properly when the domain controller is running at the Windows Server 2012 domain functional level and you are running it on top of vSphere 5.0 update 2 / ESXi 5.0 update 2 or later, both of which exchange information via a virtual machine GenID.
    • A couple of weeks ago at TechEd it was announced that Oracle will be supported running either on Hyper-V or on Azure; that means the database, WebLogic servers, Oracle Linux and support for Java.
    • Open Management Infrastructure (OMI). A couple of months ago I interviewed the lead architect for Windows Server, Jeffrey Snover (the man behind PowerShell), as part of TechDays Online and he was talking about the work we are doing on OMI, a standards based framework to enable cross platform management of all devices in the same way as WMI is used to manage Windows.  While tools like System Center already do a pretty good job of managing multiple platforms, from switches to phones to hypervisors and application software, OMI will make this possible without agents and enable management tools to work to or from the open source world.

    Combining these three developments means you can get proper support for, and effectively manage, the heterogeneous hell that can arise in your data centre as a result of acquisitions, migrations, or because your policy is to have best of breed solutions for your business needs.   What this means for us IT professionals is that those of us who have multiple disciplines will be in more demand, so being skilled in say Hyper-V and VMware, or Oracle and SQL Server, will only be good for you.

  • Server 4024 part 3 - Networking

    Databases are typically bound either by networking or IO, and those can be two sides of the same coin if you are using remote storage.  Better networking therefore helps twice: it improves access to shared storage, and it improves access to the database workload itself, whether from another tier in a service or directly from your users and their applications.

    So what is there in networking in Windows Server 2012 to help?

    The answer is a lot, and the most tangible thing is NIC teaming, which is now built into the operating system. It can be used both for load balancing and to provide failover, and can either use LACP (Link Aggregation Control Protocol, formerly 802.3ad & 802.1ax) on switches that support it, or run in switch independent mode.  This has several advantages over the drivers that come with NICs to provide NIC teaming (there’s a quick PowerShell example after the notes below):

    • It’s easy to set up and use, either from Server Manager or PowerShell (my post on using it is here)
    • You can team NICs from disparate providers  
    • You can create NIC teams inside a VM if the VM has more than one NIC.   Whatever your hypervisor you might want to do this if you have several of the new SR-IOV NICs in your hosts, to provide failover, as these newer NICs can’t be teamed at the host because the network virtualisation is done on the card itself.

    Note: In Hyper-V there is a per-VM setting to declare which NICs can comprise a team in the VM; I haven’t tried this on VMware, but it should be fine.
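
    To give you an idea of how simple the PowerShell is, here’s a minimal sketch; the team and NIC names are hypothetical and the right teaming mode and load balancing algorithm depend on your switches and workload.

    # switch independent team of two NICs using the Hyper-V port load balancing algorithm
    New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

    # check the state of the team and its members
    Get-NetLbfoTeam -Name "Team1"
    Get-NetLbfoTeamMember -Team "Team1"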

    If you are using SQL Server inside a VM then support for those SR-IOV NICs will improve performance, but what’s more important in my opinion is the ability to regulate network bandwidth, just as you can regulate CPU and memory inside SQL Server with Resource Governor. Network Quality of Service (QoS) can be set on a per-VM basis through Hyper-V, System Center Virtual Machine Manager or PowerShell, for example:
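
    Here’s a minimal sketch of the PowerShell route, assuming a hypothetical VM called London-SQL and a virtual switch created with weight based minimum bandwidth; -MaximumBandwidth is specified in bits per second.

    # cap the VM's virtual NIC at roughly 500 Mb/s and give it a bigger share when the host NIC is under contention
    Set-VMNetworkAdapter -VMName "London-SQL" -MaximumBandwidth 500000000 -MinimumBandwidthWeight 50

    # confirm the adapter settings
    Get-VMNetworkAdapter -VMName "London-SQL"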

    Then there are numerous other improvements aimed at low latency performance, better remote storage access via SMB, and better data centre bridging support.

    Finally, managing networks gets a lot easier with a comprehensive IP Address Management (IPAM) role, which uses a SQL Server database to manage and monitor all your subnets, DHCP scopes and IP address usage. You also get DHCP guard and Router guard options in the Hyper-V virtual switch to stop conflicts occurring between applications that might actually have the same IP addresses, such as in a multi-tenancy environment.

     

    So that’s a quick look at SQL Server 2012 running on Windows Server 2012, but while I have been away Windows Server 2012 R2 has been announced, as has SQL Server 2014. There are public betas you can download, but as ever a word of caution on those: you can’t upgrade from the betas to the final products, so take snapshots and backups if you want to look at them now.

  • Server 4024 part 2 - Management

    In the second post in this series I wanted to look at how changes to the way you manage things in Windows Server 2012 (WS2012) affect your management of SQL Server 2012 (SQL 2012).

    Multi Server Management

    Traditionally, to manage the OS we used to remote desktop onto every server we managed and apply changes directly to it; however that’s not really viable any more, for one thing virtualisation has led to a lot more servers for us to manage. Now there is Server Manager, and as with SQL Server Management Studio (SSMS) we can register multiple servers in one console and do pretty well everything we need to from there, including adding new features, checking performance, and monitoring alerts and events, as well as starting or stopping services..

    server manager

    hopefully your servers will look healthier than mine, but at least I can see where the problems are

    Notes:

    PowerShell 3.0

    Allows all of the above to be done from the command line, and now is the time to bite the bullet and learn PowerShell;

     

    • The new built-in PowerShell ISE has a lot of guidance in it to help you get started, like snippets, and help to set all the switches in commands you aren’t familiar with.
    • If you have the SQL management tools installed then you’ll see you can select SQLPS as a module and get help on the specific SQL 2012 cmdlets as well – note you don’t have to load modules manually any more in PowerShell 3, it’ll do that for you..

    sql and powershell

    The commands section on the right gives me help on the SQL Server PowerShell cmdlets

    • PowerShell can be executed remotely on other machines as well as in sessions which can be run in parallel and can persist after a reboot if needed, for example;

    Invoke-Command -ComputerName London-SQL -ScriptBlock `
    {
      # -ServerInstance "." assumes SQL Server is the default instance on London-SQL
      Backup-SqlDatabase -ServerInstance "." -Database adventureworks -BackupAction Database -CompressionOption On -LogTruncationType TruncateOnly
    }

    ..runs the PowerShell inside the braces on my London-SQL virtual machine

     

    MinShell

    Given that you are managing servers remotely, why put all the tools on each server? We already do this with SQL Server, where we generally don’t install SSMS on every server. WS2012 now allows you to deselect installing all the associated MMC snap-ins for every role/feature, and also allows you to remove some or all of the GUI as well. In fact the default install option for WS2012 is now Server Core, with nothing on it but PowerShell, Notepad, a command prompt, Registry Editor and Task Manager.

    SQL 2012 runs just fine on Server Core, and with SQL 2012 SP1 you can also run Analysis Services and Integration Services, but not Reporting Services (for more info check here).  This reduces the attack surface of the OS and will cut your patching in half. 

    Note: I already have posts on how to do this, and also on how to properly work with sysprep using the image prepare and complete options when installing SQL Server, both from the installer and from the command line.

     

    and finally..

    • Please  ensure your SQL Server is given a good home and try it on Windows Server 2012
    • I’ll be discussing this during my sessions at SQL Relay (in Glasgow, Leeds, Birmingham & Norwich)
  • Server 4024 part 1 – Virtualisation

    A database is only as good as the operating system it resides on. SQL Server only runs on Windows, and given that Windows Server 2012 (WS2012) is a huge change from Server 2008 R2, what does this new OS mean for the DBA?  The reference to Server 4024 is because that would be WS2012 + SQL 2012, which is what you want to be running to get the most out of your modern hardware and push that through to your database engine.

    In this particular post I wanted to look at what Hyper-V in Windows Server 2012  does for SQL Server.

    It used to be that SQL Server was limited by what the operating system could surface, but in a virtual world this becomes what the hypervisor provides to the virtual machine (VM). In WS2012 all of the limits for VMs have gone up by at least 4x..

    image

     

    In many cases these limits are higher than the specifications of many physical servers that you are running SQL Server on today.  There’s also lots of use made of the latest developments in hardware that you may not have on your servers yet, for example:

    • NUMA support, so you can pass through NUMA nodes to a VM for optimal VM performance, or allow your VMs to span NUMA nodes..
    • SR-IOV is a way of making a PCI card look like multiple PCI devices, each of which can be presented to a particular virtual machine.  In WS2012 there is SR-IOV support for network cards (NICs), so the virtualisation is handled by the card not the hypervisor.  The clever bit that’s currently unique to WS2012 is that you can live migrate a VM using SR-IOV to a server that doesn’t have it, i.e. you don’t have to stop the VM.
    • IPsec can also be offloaded to NICs that support it
    • Support for 4 x Fibre Channel HBAs in a VM.  Note this doesn’t prevent live migration either, but the setup of the virtual SAN must be the same on the source and target hosts.

    While I am on the subject of hardware, Hyper-V also introduces a new virtual hard disk format, VHDX, which can be up to 64TB (the VHD format is still supported) and which also lets you make efficient use of the 4KB sector size on newer, larger hard disks by having a logical sector size of 4KB as well.
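
    For example, creating one from PowerShell looks like this (a sketch only; the path and size are just placeholders):

    # dynamically expanding VHDX with 4KB logical sectors for a data volume
    New-VHD -Path "D:\VHDs\SQLData.vhdx" -SizeBytes 10TB -Dynamic -LogicalSectorSizeBytes 4096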

    On a physical server running lots of VMs you’ll want to ensure that SQL Server gets a predictable set of resources, such as CPU and RAM.  Hyper-V has always allowed you to set and prioritise these, and in WS2012 you can also set maximum and minimum network bandwidth for each virtual NIC.
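
    A similar sketch for the CPU settings (the VM name and figures are hypothetical; reserve and maximum are percentages of the host, and the weight is relative to the default of 100):

    # reserve 25% of host CPU for the SQL VM, cap it at 75% and weight it above other VMs
    Set-VMProcessor -VMName "London-SQL" -Reserve 25 -Maximum 75 -RelativeWeight 200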

    In the past SQL Server hasn’t been virtualised because of concerns about performance. As I have shown, that doesn’t really apply to WS2012, but you will need to follow best practice when setting it up and confirm that you are getting the performance you expect.  The best practice is on the SQL CAT (Customer Advisory Team) blog, and your testing should show that you are getting about 93% of the performance you had on the equivalent physical hardware.

    So please  ensure your SQL Server is given a good home and try it on Windows Server 2012

     

    note: I’ll be discussing this during my sessions at SQL Relay (in Glasgow, Leeds, Birmingham & Norwich)

  • Read it and keep

    I have to confess to having quite a few real books in my house. Some of these books are reproductions of originals or have really big hi definition pictures in so they don’t work well as e-books.  I also find that on occasion technical books work better in print than they do on a small ebook reader.  Having said that I only have finite shelf space so I only keep the good (for me) stuff.

    When I got my review copy of the Windows Server 2012 Hyper-V Installation and Configuration Guide, written by three of my MVP friends (Aidan Finn, Damian Flynn and Patrick Lownds), I honestly thought I’d do a quick post on it and then give it away as a prize, as I thought I was reasonably competent with this new OS.  It turns out I was wrong!

    Firstly, the book goes deep into corners that I have only briefly looked at, like how virtual (aka software defined) networks work in Hyper-V.  This is hard because it is only surfaced in Windows Server 2012 through PowerShell cmdlets; if you want a UI then you’ll need to use System Center Virtual Machine Manager 2012 SP1. Actually the PowerShell in this book is another reason I like it, as it shows the art of the possible, and that may help some of you get over the hurdle of learning it because you can see all the good stuff you can do once you have mastered the basics.

    Other examples of good deep technical content are the numerous grey boxes in the text e.g. Linux considerations on NUMA, what’s the right number for simultaneous live migrations, and Backup and Virtual Machine Mobility.

    Secondly there’s a lot of good discussion on when to use what, for example

    • Anti-Virus software for the physical hosts,
    • typical uses of file shares and SMB3 to host VM’s and what this means for high availability
    • How to use the new Fibre channel support in Hyper-V

    Finally it’s well written. By that I mean it’s been written from the ground up, not a bad rehash of an earlier version. Like any good book it takes you on a journey in each section from simple to complex, so initially you’ll learn what something like a NIC team is and how to set it up via the UI, and then you’ll get all the nuances, best practices and real world stuff.

    So if you want a copy to help you get the most out of Windows Server 2012, get your own, you’re not having mine!

  • Microsoft Free Software – a personal top ten

    You might think this would make for a really short post, but actually there’s a huge amount of free tools and resources out there, and I have had to restrict myself to a top ten across server and client based on what Simon and I have used.  So please feel free to comment with your own and I’ll see what we can do about maintaining a list somewhere and rewarding good suggestions.

     

    Hyper-V Server

    Yes Microsoft does have a free operating system, although it’s just restricted to the ability to run highly available virtual machines.  With Hyper-V Server you are limited to running 8,000 virtual machines on a 64 node cluster and you can only put 64 logical processors in a virtual machine. Also note that there is no graphical interface as this OS is very like Server core and is designed to be remotely managed.  (I have a separate post just on Hyper-V Server here).

     

    Microsoft Security Essentials

    If your organisation has fewer than ten PCs then this is the FREE antivirus for you, and it’s also free to use at home.  Security Essentials uses the same signatures as System Center Endpoint Protection and has won a slew of awards for being very user friendly.  You can install it on XP and Windows 7; for Windows 8 and Windows RT the included Windows Defender does the same thing.

     

    System Center Advisor

    This is a lightweight best practice analyser for Windows Server and SQL Server environments.  It uses the same agent as System Center Operations Manager to collect telemetry about your servers and then sends this every day to the Advisor service.  The Advisor service then provides reports on errors and warnings you need to be aware of.  It uses your own certificates so it’s secure.  Like Operations Manager you can configure a gateway to collect the information from other internal servers and then send this daily to the Advisor service.  (I have posts on how to set it up and how to use it).

    SQL Server Express

    If you only need a small database server then there’s quite a lot you can do with SQL Server Express.  The tools are essentially the same as its bigger brother’s, and you get Reporting Services if you need it to deliver rich reporting from your local database. If you don’t need all the tooling and just want a slimmed down engine behind your application then there’s an option for that, LocalDB.

     

    MAPT

    The Microsoft Assessment and Planning Toolkit seems to be universally ignored, despite being really useful in planning any kind of upgrade or migration, or just to make sense of what you have already got, given that you might be new in post. What it does is crawl your data centre with the various credentials you supply and tell you what you have.  This might be nothing more than how many servers you have and what OS they are running, and even that, in a world of virtual machine sprawl, can be useful.  However if you were then to use it to plan a Windows Server 2012 migration project, it would give you reports and plans on how to do that and what you’ll need.  It’s constantly updated so always be sure to get the latest version.

    One other thing to note: while it will allow you to assess your licensing estate, the data is NOT reported back to Microsoft, so you won’t be getting loads of phone calls once you’ve run it, but you will at least know where you are.

     

    Data Classification Toolkit

    Knowing about your infrastructure is one thing; what matters more is the data that’s in it, and to make sense of that there’s the Data Classification Toolkit. Like the other solution accelerators it’s continually updated, and in this case it is now aware of the latest tools in Windows Server 2012 like Dynamic Access Control (my post on getting started is here).

    Please note the small print : Use of the Microsoft Data Classification Toolkit for Windows Server 2012 does not constitute advice from an auditor, accountant, attorney or other compliance professional, and does not guarantee fulfilment of your organization’s legal or compliance obligations. Conformance with these obligations requires input and interpretation by your organization’s compliance professionals.

     

    Windows Assessment and Deployment Kit (Windows ADK)

    Simon and I still see a lot of weird and wonderful ways to deploy operating systems at scale, which is odd when Microsoft has two free tools for this, the first being the Windows ADK.  Actually the ADK should count as several free things itself, as it contains a number of useful utilities in its own right.

    Microsoft Deployment Toolkit

    This is another tool that’s kept up to date, and it now supports scale deployment of Windows 8 and Server 2012.  It may seem to overlap with the Windows ADK, but that toolset requires knowledge of a lot of command line utilities like DISM, whereas MDT is a UI driven process.

    OEAT

    The Office Environment Assessment Tool (OEAT) scans client computers for add-ins and applications that interact with all versions of Office back to Office 97. It’s designed for detecting compatibility issues, but I have seen it used to track down large spreadsheets, which means someone in your organisation is using Excel instead of a proper database; at best that might mean data quality issues in some of your reporting, and at worst it might mean you are storing customer and confidential information where it is not being properly controlled.

     

    Windows Live

    I still use three utilities from Windows Live to get key tasks done

    Note none of these run on Windows RT

    I also wanted to share my also rans that didn’t make my top ten..

    Windows Sysinternals:

    ZoomIt – a Windows Sysinternals tool to make areas of your screen bigger

    BGInfo – also part of the Windows Sysinternals tools, which shows key information about your servers on their desktops.

    RDCMan – to manage multiple remote desktops; great for managing Windows 8 / Server 2012, although as it’s not based on RDP 8 the charms etc. don’t work

    and finally, for a bit of fun, Ordnance Survey Maps. I occasionally need to get out of the office and off-road.  Street View is fine, but what if there are no streets and you need to get from A to B for fun on foot or by cycle?  In this instance Ordnance Survey maps are your friend, and they are free on Bing Maps if you are in the UK (just select Ordnance Survey from the left hand drop down list of map types) e.g.

    image

    You can print them as well if you don’t want to take your slate, tablet, phone with you

  • So You want to be an Evangelist

    With my job title of Evangelist I often get asked about what my role involves, both inside and outside of the Microsoft firewall.  In the last two weeks I have presented to the Leeds VMware User Group (www.vmug.org.uk), done a careers chat at a science college, produced and presented at TechDays Online and attended SQL Bits 11..

    922679_10151615358393234_647811937_n

    So it’s a lot of presenting, blogging and explaining stuff.  Of course you can only blog about what you know about, so Simon and I spend a lot of time learning how Microsoft technology works and talking to IT professionals about what they are doing and the challenges they face.  I hope this gives us some credibility, both online and in person, and given you are reading this that seems to be working.  Our knowledge acquisition goes on all the time, but occasionally it’s good to commit longer periods of time to it, and so we get the opportunity to go to things like TechEd in Madrid.  For me this ticks all the boxes and gives me the chance to hang out with the Microsoft product teams who present at this sort of event. 

    As an Evangelist I don’t have to pay for my flights, hotel or entry fee, Microsoft pick all of that up for me. In return I will write posts, work up ideas for future presentations and share stories good and bad about how our products are actually being used. 

    The question is, do you fancy being an evangelist for a week and coming out with our team to Madrid on Microsoft expenses?  If you do then you’ll want to enter the TechEd Challenge.

    image

     

    From here you can either enter a draw for a place or compete for one of three further places by writing a blog post to show off your evangelist skills.  Full details of the prizes are here, good luck and hopefully see you there

  • Evaluate This – Hyper-V Server 2012

    Hyper-V Server is a free operating system specifically designed to just run Hyper-V, so basically a cut down core installation of a paid-for edition of Windows Server.  The cut down bit refers to the fact that only the roles and features needed to run Hyper-V are there. However Hyper-V itself is in no way cut down; for example you can create clusters for running HA virtual machines (up to 64 nodes hosting 8,000 VMs), and each VM can still have up to 64 logical processors, as per the Datacenter edition of Windows Server.

    So what’s the catch?

    If there is one, it’s that if you want to run Windows Server in a VM it needs to be licensed, and the most efficient way to do that once you get to 6-7 VMs per host is to use Windows Server Datacenter edition, as this licenses any number of guest VMs for Windows Server as well as the host.  However if you were going to use Hyper-V to host VDI then your guests need to be licensed for Windows 7/8 anyway, and so Hyper-V Server is a good candidate. Another example is if you just want to host Linux VMs, which will run really well and are supported (depending on the flavour you are using).

    I have made my usual short screencast to show you what it looks like..

    Also, you might want to look at the other posts in my Evaluate This series, as Hyper-V Server is best managed remotely, and my other screencasts will show you how to do such things as live migrations, VDI, replicating VMs etc., all of which are possible with Hyper-V Server.

    Notes:

    To configure Hyper-V Server for remote access all I did was use the built-in SConfig utility to join it to my domain, as remote management is turned on by default in Windows Server 2012, and I have group policy set up to allow remote desktop on all of my servers.

    NIC teaming is now viable in Server Core and Hyper-V Server because it’s built into the OS, whereas in earlier versions of the server you might not have been able to install the hardware vendor’s NIC teaming software without a user interface.

    Hyper-V Server like the server core installation option of Windows Server only needs half the patching of a full installation of Windows Server.

    Hyper-V Server 2012 now includes PowerShell out of the box.

    Finally you can get Hyper-V Server 2012 here and try it yourself and put it into production if needed.

  • HD Insight

    Despite common misconceptions, Microsoft now has extensive interoperability with open source technologies; for example you can run a PHP application on Azure, get support from us to run RedHat, SUSE or CentOS on Hyper-V, and manage your applications from System Center.  So extending this approach to the world of big data with Hadoop is a logical step, given the pervasiveness of Hadoop in this space.

    Hopefully you’re reading this because you have some idea of what big data is. If not, it is basically data an order of magnitude bigger than you can store, it changes very quickly, and it is typically made up of different kinds of data that you can’t handle with the technologies you already have: for example web logs, tweets, photos and sounds.  Traditionally we have discarded this information as having little or no value compared with the investment needed to process it, especially as it is often not clear what value it contains.  For this reason big data has been filed in the too difficult drawer, unless you are megacorp or a government.

    However, following some research by Google, an approach to attacking this problem called map reduce was born.  Map is where the structure for the data is declared, for example pulling out the actual tweet from a Twitter message, the hashtags and other useful fields such as whether this is a retweet.  The reduce phase then pulls out meaning from these structures, such as digrams (the key two word phrases), sentiment, and so on. 

    Hadoop uses map reduce, but the key to its power is that it applies the map reduce concept to large clusters of servers by getting each node to run the functions locally, thus taking the code to the data to minimise IO and network traffic, using its own file system, HDFS.  There are lots of tools in the Hadoop armoury built on top of this, notably Hive, which presents HDFS as a data warehouse that you can run SQL against, and the Pig (Latin) language, where you load data and work with your functions.

    image_thumb[1]

    Here a map function defines what a word is in a string of characters, and the reduce function then counts the words.  Obviously this is a bit sledgehammer/nut, but hopefully you get the idea. The clever bit is that each node has part of the data and the algorithm to process it, and reports back to a controlling node with the answers when it’s done, a bit like High Performance Computing and the SQL Server Parallel Data Warehouse. There’s a tiny illustration of the idea below.
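
    Just to make the word count example concrete, here’s the idea expressed in a few lines of PowerShell on a single machine; this is only an illustration of map and reduce as concepts, not of how Hadoop actually distributes the work across HDFS.

    $text = "the quick brown fox jumps over the lazy dog the end"

    # "map": break the input into words and emit a (word, 1) pair for each one
    $pairs = $text -split "\s+" | ForEach-Object { [pscustomobject]@{ Word = $_; Count = 1 } }

    # "reduce": group the pairs by word and sum the counts
    $pairs | Group-Object Word | ForEach-Object {
        [pscustomobject]@{ Word = $_.Name; Count = ($_.Group | Measure-Object Count -Sum).Sum }
    } | Sort-Object Count -Descending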

    So where does Microsoft fit into this?

    The answer is HDInsight, which is now in public beta. This is a toolkit developed in conjunction with Hortonworks to add integration to Hadoop and make it more enterprise friendly: 

    • Azure HDInsight is the ability to run Hadoop on Azure, so you can create clusters as and when you need them and use Azure’s massive connectivity to the internet to pull data in there rather than choke the bandwidth to your own data centre.  There is actually a lot of free and paid-for data already in Azure that you might want to use as well, via the Windows Azure Marketplace, including weather data from the Met Office, UK crime stats and stats for Greater London.
    • You can also run HDInsight as a service on Windows Server 2012 via the Web Platform Installer.  Like any web service running on the server this can of course be managed from System Center 2012, so you can monitor health and performance and if needs be spin up or shut down HDInsight nodes, just like any other web service under your control.  I would see this being used to prototype solutions or to work on complex data that you have in house and don’t want to, or can’t, put up in the cloud.
    • An ODBC driver to connect to Hive. With this, HDInsight just becomes another data source; for example you can pull the data into SQL Server via its built in Integration Services, and squirt it out again if needs be too.  The driver also means you can directly build Analysis Services cubes (another part of SQL Server) or use PowerPivot in Excel to explore the data that way.  
    • The Data Explorer preview add-in in Excel 2013 to query HDInsight as well as loads of other relational sources
    • F# programming for Hadoop. F# is a functional programming language that data scientists understand, in the same way as I learned Fortran in my distant past as an engineering student. Note these add-ins are free but are NOT Microsoft tools.

     

    Big data is definitely happening; for example there was even a special meeting at the last G8 on it, as it is such a significant technology.  However it cannot be solved in one formulaic way by one technology; rather it’s an approach, and in the case of Microsoft a set of rich tools to consolidate, store, analyse and consume. The point is to integrate big data into your business intelligence project using familiar tools, the only rocket science being the map reduce bit, and that is the specialism of a data scientist. Some of their work is published by academics, so you might find the algorithm you need is already out there; for example, the map function to interpret a tweet and pull out the bits you need is on Twitter.  

    However research is going on all the time to crack such problems as earthquake prediction, emotion recognition from photographs, medical research and so on. If you are interested in that sort of thing then you might want to go along to the Big Data Hackathon on 13/14th April in Haymarket, London, and see what other like minded individuals can do with this stuff.

  • Evaluate This–File Classification

    In my last post & screen cast I showed how Dynamic Access Control (DAC) worked; the business of matching a users claims to the properties of a file (Resource Property in DAC), however the problem then becomes how do I correctly tag my files so that DAC works.  You shouldn’t necessarily be doing this; it’s the users data and you are just the curator of that data.  The users aren’t going to have the time or inclination to do this even if they are working in a compliance or regulated environment.  However they might be able to give you some rules which you could apply to the files and this is what Data Classification does.

    File Classification is part of the File Server Resource Manager (FSRM) role service and is new for Windows Server 2012; before this, FSRM was just there to only allow certain file types to be stored, or to grant quotas to users to restrict how much, and of what, could be stored on your servers. The secret sauce is then to link the resource property you set using the classification rule to a Central Access Rule in DAC.

    Hopefully this screencast shows how easy this is to do..

    Things to note:

    As per my previous post you’ll need your domain functional level to be Windows Server 2012.

    You’ll need the FSRM role service on your file servers and these also need to be running Windows Server 2012.

    The PowerShell is

    Add-WindowsFeature -Name FS-Resource-Manager

    and you’ll need a copy of Windows Server 2012 Evaluation Edition to try this out

     

    I used a simple expression, “Top Secret”, in my screencast, but you can write a RegEx to look for things like credit card details or NI numbers and appropriately protect those documents automatically using this technique; there’s a rough sketch of what that could look like below.
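
    As a sketch of how that might look in PowerShell: the property name and value, regular expression and folder below are all hypothetical placeholders, and they assume a matching resource property has already been created and applied to your file servers.

    # classify any file whose content matches a crude 16-digit card number pattern
    New-FsrmClassificationRule -Name "Find credit card numbers" `
        -Property "Confidentiality" -PropertyValue "High" `
        -ClassificationMechanism "Content Classifier" `
        -ContentRegularExpression @("\b(?:\d[ -]?){15}\d\b") `
        -Namespace @("D:\Shares\Finance") -ReevaluateProperty Overwrite

    # run classification now rather than waiting for the schedule
    Start-FsrmClassification -Confirm:$false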

    File Classification in a production environment would typically run as a scheduled job, so to be clear this does not magically happen on the fly as users save documents onto your file servers.