Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

December, 2013

  • Lab Ops Part 13 - MDT 2013 for VDI

    The core of any VDI deployment is the Virtual Desktop Template (VDT), the blueprint from which all the virtual desktop VMs are created. It occurred to me that there must be a way to create and maintain this using the deployment tools used to create real desktops, rather than the way I currently hack the Windows 8.1 Enterprise Evaluation ISO with this PowerShell..

    $VMName = "RDS-VDITemplate"
    $VMSwitch = "RDS-Switch"
    $WorkingDir = "E:\Temp VM Store\"
    $VMPath = $WorkingDir + $VMName
    $SysPrepVHDX = $WorkingDir + $VMName +"\RDS-VDITemplate.VHDX"

    # Create the VHD from the Installation iso using the Microsoft Convert windows image script
    md $VMPath
    cd ($WorkingDir + "resources")
    .\Convert-WindowsImage.ps1 -SourcePath ($WorkingDir +"Resources\9600.16384.WINBLUE_RTM.130821-1623_X64FRE_ENTERPRISE_EVAL_EN-US-IRM_CENA_X64FREE_EN-US_DV5.ISO") -Size 100GB -VHDFormat VHDX -VHD $SysPrepVHDX -Edition "Enterprise"
    #Create the VM itself
    New-VM –Name $VMName  –VHDPath $SysPrepVHDX -SwitchName $VMSwitch -Path $VMPath -Generation 1 -BootDevice IDE

    # Tune these setting as you need to
    Set-VM -Name $VMName –MemoryStartupBytes   1024Mb
    Set-VM -Name $VMName -DynamicMemory
    Set-VM -Name $VMName -MemoryMinimumBytes   512Mb
    Set-VM -Name $VMName -AutomaticStartAction StartIfRunning
    Set-Vm -Name $VMName -AutomaticStopAction  ShutDown
    Set-Vm -Name $VMName -ProcessorCount       2

    So how does a deployment guy like Simon create Windows 8.1 desktops? He uses the Microsoft Deployment Toolkit 2013 (MDT) and the Windows Assessment and Deployment Kit 8.1 (ADK) that it’s based on. So I created another VM, RDS-Ops, with these tools installed and started to learn how to do deployment. I know that when I create a collection with the wizard or with PowerShell (e.g. New-VirtualDesktopCollection) I can specify an unattend.xml file to use as part of the process. The ADK allows you to do this directly, but I am going to build a better mousetrap in MDT, because I want to go on to deploy Group Policy Packs, updates and applications, which I know I can do in MDT as well.

    If you have used MDT please look away now, as this isn’t my day job. However there don’t seem to be any posts or articles on creating a VDT from either the ADK, MDT or even System Center Configuration Manager, so I am going to try and fill that gap here.

    I wanted to install MDT onto a VM running Windows Server 2012R2 with two VHDXs, the second one being for my deployment share, so I could deduplicate the ISO and WIM files that will be stored there. I then installed the ADK, which needs to be done in two steps - the initial ADK download is tiny because it pulls the rest of the installation files as part of setup - so I first ran adksetup /layout <Path> on an internet-connected laptop, then copied the install across to the VM (along with MDT) and ran..

    adksetup.exe /quiet /installpath <the path specified in the layout option> /features OptionId.DeploymentTools OptionId.WindowsPreinstallationEnvironment OptionId.UserStateMigrationTool

    before installing MDT with:

    MicrosoftDeploymentToolkit2013_x64.msi /quiet

    Now I am ready to start to learn or demo MDT to build my template, based on the Quick Start Guide for Lite Touch Installation included in the MDT documentation, which goes like this:

    • On the machine running MDT, create a Deployment Share
    • Import an OS - I used the Windows 8.1 Enterprise Eval iso for this by mounting the iso on the VM and importing from that.
    • Add in driver packages and applications - I will do this in a later post
    • Create a task sequence to deploy the imported image to a Reference Computer. 
    • Update the Deployment Share which builds a special image (in both wim and iso formats)
    • Deploy all that to a Reference Computer and start it
    • The deployment wizard that runs on the Reference Computer when it comes out of sysprep allows you to capture an image of it back into MDT.
    • Capture that image from the Reference Computer
    • Create a task sequence to deploy that captured image to the Target computers
    • Update the Deployment Share again with the captured image in and optionally hook it up to Windows Deployment Services and you are now ready to deploy your custom image to your users’ desktops.
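Most of those workbench steps can also be scripted with MDT's own PowerShell snap-in (the workbench's View Script button shows you the equivalent commands). As a hedged sketch - the share path, drive letter and folder names here are placeholders for my lab, not anything MDT mandates:

```powershell
# Sketch only - paths and names are my own placeholders
Add-PSSnapIn Microsoft.BDD.PSSnapIn

# Map the deployment share as a PSDrive so the MDT cmdlets can work on it
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "E:\DeploymentShare"

# Import the OS from the mounted Windows 8.1 Enterprise Eval ISO (mounted as G: here)
Import-MDTOperatingSystem -Path "DS001:\Operating Systems" `
    -SourcePath "G:\" -DestinationFolder "Windows 8.1 Enterprise x64"

# After creating the task sequence in the workbench, regenerate the boot image
Update-MDTDeploymentShare -Path "DS001:"
```

The task sequence itself is still easiest to create in the Deployment Workbench, but everything above can be re-run as the share evolves.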

    However I deviated from this in two ways:

    1. Creating the Reference Computer:

    All I needed to do here was to create a VM (RDS-Ref) based on the iso created by the deployment share update process..

    $VMName = "RDS-Ref"
    $VMSwitch = "RDS-Switch"
    $WorkingDir = "E:\Temp VM Store\"
    $VMPath = $WorkingDir + $VMName
    $VHDXPath = $WorkingDir + $VMName +"\" + $VMName +".VHDX"

    # Housekeeping 1. delete the VM from Hyper-V
    $vmlist = Get-VM | where VMName -in $VMName
    $vmlist | where State -eq "Saved" | Remove-VM -Verbose -Force
    $vmlist | where State -eq "Off" | Remove-VM -Verbose -Force
    $vmlist | where State -eq "Running" | Stop-VM -Verbose -Force -Passthru | Remove-VM -Verbose -Force
    # Housekeeping 2. get back the storage
    If (Test-Path $VMPath) {Remove-Item $VMPath -Recurse}
    # Create a new VHD
    md $VMPath
    new-VHD -Path $VHDXPath -Dynamic -SizeBytes 30Gb

    #Create the VM itself
    New-VM –Name $VMName  –VHDPath $VHDXPath -SwitchName $VMSwitch -Path $VMPath -Generation 1

    #Attach iso in the deployment share to build the Reference Computer from the MDT VM (RDS-OPs)
    Set-VMDvdDrive -VMName $VMName -Path '\\rds-ops\DeploymentShare$\Boot\LiteTouchPE_x64.iso'
    Start-VM -Name $VMname

    Once this VM boots from that ISO it will launch the Deployment Wizard on the Reference Computer. I designed the script to be run again and again until I get it right, which was good because I kept making mistakes as I refined it. The documentation is pretty good, but I also referred to the excellent posts by Mitch Tulloch on MDT, especially part 7 on automating Lite Touch by editing the INI files in the Deployment Share properties, described below.

    2. Completing the Deployment Wizard on the Reference Computer

    In the Lite Touch scenario the Reference Computer is captured back into MDT and used to deploy to target computers, usually by using the Windows Deployment Services role in Windows Server directly or via Configuration Manager. In VDI the target computers are VMs and their deployment is handled by the RDS Broker, either in Server Manager or with the Remote Desktop PowerShell commands like New-VirtualDesktopCollection. Whichever way I create VDI collections, all I need is that virtual desktop template, and in this case that’s just the Reference Computer, but it needs to be turned off and in a sysprepped state. The good news is that the Deployment Wizard in MDT 2013 has exactly this option, so I can select that, and when it’s complete all I need to do is remember to eject the ISO with the Lite Touch pre-execution environment on it (or it will be inherited by all the virtual desktops!).
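Ejecting that ISO is one line of PowerShell run against the Hyper-V host (assuming the reference VM is still called RDS-Ref, as in my script above):

```powershell
# Detach the LiteTouch ISO from the template VM's DVD drive so the
# virtual desktops created from it don't inherit it
Set-VMDvdDrive -VMName "RDS-Ref" -Path $null
```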

    Automation

    If you are with me so far, you can see we have the makings of something quite useful, even in production. What I need to do now is automate this so that my Reference Computer will start, install and configure the OS based on my Deployment Share, and then sysprep and shut down without any user intervention. To do that I need to modify the Bootstrap.ini file that launches the deployment wizard (from the Deployment Share properties, go to the Rules tab and select Edit Bootstrap.ini)..

    [Settings]
    Priority=Default

    [Default]
    DeployRoot=\\RDS-OPS\DeploymentShare$
    UserID=Administrator
    UserDomain=CONTOSO
    UserPassword=Passw0rd
    KeyboardLocale=en-GB
    SkipBDDWelcome=YES

    to tell the wizard where my deployment share is and how to connect to it, and to suppress the welcome screen. Then I need to modify the rules themselves (CustomSettings.ini, which is what the Rules tab edits) so that the wizard uses my task sequence, hides all the settings screens and supplies the answers to those settings directly..

    [Settings]
    Priority=Default
    Properties=MyCustomProperty

    [Default]
    DeploymentType=NEWCOMPUTER
    OSInstall=YES
    SkipAdminPassword=YES
    SkipProductKey=YES
    SkipComputerBackup=YES
    SkipBitLocker=YES
    EventService=http://RDS-Ops:9800
    SkipBDDWelcome=YES
    SkipTaskSequence=YES
    TaskSequenceID=Win81Ref
    SkipCapture=YES
    DoCapture=SYSPREP
    FinishAction=SHUTDOWN
    SkipComputerName=YES
    SkipDomainMembership=YES
    SkipLocaleSelection=YES
    KeyboardLocale=en-US
    UserLocale=en-US
    UILanguage=en-US
    SkipPackageDisplay=YES
    SkipSummary=YES
    SkipFinalSummary=YES
    SkipTimeZone=YES
    TimeZoneName=Central Standard Time
    SkipUserData=YES

    Note two of these settings in particular:

    • EventService enables monitoring, which is very useful as none of the wizard screens will show up the way I have this set now!
    • MDT 2012 and later allow you to sysprep and shut down a machine (DoCapture=SYSPREP with FinishAction=SHUTDOWN), which is just what I need to create my Virtual Desktop Template.

    So what’s really useful here is that when I change my deployment share to add in applications and packages, modify my Task Sequence or the INI settings above, all I need to do to test the result each time is to recreate the Reference Computer like this:

    • Stop the Reference Computer VM (RDS-Ref in my case) if it’s running, as it will have a lock on the deployment ISO
    • Update the Deployment Share
    • Run the PowerShell to re-create and start it.
    • Make more coffee
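Those test-cycle steps can be sketched as a short script run from the Hyper-V host. This assumes the MDT snap-in and DS001: drive mapping from earlier, and the script file name at the end is a hypothetical stand-in for the VM creation script in this post:

```powershell
# Stop the Reference Computer so it releases its lock on the LiteTouch ISO
Stop-VM -Name "RDS-Ref" -Force -ErrorAction SilentlyContinue

# Regenerate the boot image with the latest task sequence and INI settings
# (requires: Add-PSSnapIn Microsoft.BDD.PSSnapIn and the DS001: PSDrive)
Update-MDTDeploymentShare -Path "DS001:"

# Re-run the VM creation script from earlier in this post (name is illustrative)
& "E:\Temp VM Store\Resources\Create-RDSRef.ps1"
```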

    Having got that working I can now turn my attention to deploying applications (both classic and modern) into my VDI collections, and then think about an automated patching process.

  • Lab-Ops part 12 – A crude but effective Domain Controller

    I realised I needed to recreate a Domain Controller in my labs, and in so doing I noticed a snag in my earlier scripts that really breaks when I use the same snippet for a new DC. I have this test to see if a VM is ready to be used..

    do {Start-Sleep -Seconds 10}
    until ((Get-VMIntegrationService -VMName $VMName | where Name -eq "Heartbeat").PrimaryStatusDescription -eq "OK")

    #the code to create the $localcred credential is at the end of this post

    It does work, in that this will return true if the VM is on, but if a VM is coming out of sysprep this do-until loop will exit way before I can log in and actually use the VM. So then I tried this command in my until clause..

    Invoke-Command -ComputerName 192.168.10.1 -ScriptBlock {dir c:\} -ErrorAction SilentlyContinue -Credential $LocalCred

    a crude but effective test based on whether I could connect to and run a simple command on the VM. That worked for most of my VMs, but it was still no good for my script to build a Domain Controller (DC). The problem here is that after I add in the role (which doesn’t require a reboot)..

    Install-WindowsFeature -name AD-Domain-Services –IncludeManagementTools

    and then create the Domain with..

    Install-ADDSForest -DomainName Contoso.com -SafeModeAdministratorPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) -Force

    this will at some point cause a reboot, but it doesn’t happen inline, as this command is itself calling PowerShell in a session I can’t control. The result is that my script will continue to execute while this is going on in the background, so my test for a C: drive could succeed before the reboot and I would be in a mess, because some subsequent commands would fail while my VM reboots. So my hack for this is to trap the time my VM takes to come out of sysprep..

    $Uptime = (Get-VM –Name $VMName).uptime.totalseconds

    and test for when the current uptime is LESS than $Uptime, which can only be true after the VM has rebooted.

    do {Start-sleep –seconds 10}

    until ((Get-VM -Name $VMName).Uptime.TotalSeconds -lt $Uptime)

    Then I can test to see if the VM is ready to be configured by checking the Active Directory Web Service is alive on my new DC..

    Get-Service –Name ADWS | where status –EQ running

    However, even after this test returned true I was still getting errors from PowerShell saying that a default domain controller couldn’t be found, so I specified the DC with a -Server switch on each command, for example..

    New-ADOrganizationalUnit -Description:"RDS VDI Collection VMs" -Name:"RDS-VDI" -Path:"DC=Contoso,DC=com" -ProtectedFromAccidentalDeletion:$true -Server:"RDS-DC.Contoso.com"

    Just to be extra sure I also slapped in a 20 second wait to ensure the service really was there, as I want this to run cleanly again and again.
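Pulled together, my DC readiness checks end up as something like this (a sketch; the 20 second grace period is arbitrary, and $VMName / $LocalCred come from earlier in the script):

```powershell
# Wait for the new DC to reboot: uptime drops below the value captured
# before Install-ADDSForest kicked off the restart
$Uptime = (Get-VM -Name $VMName).Uptime.TotalSeconds
do {Start-Sleep -Seconds 10}
until ((Get-VM -Name $VMName).Uptime.TotalSeconds -lt $Uptime)

# Then wait until AD Web Services is up on the new DC
do {Start-Sleep -Seconds 10}
until (Invoke-Command -ComputerName 192.168.10.1 -Credential $LocalCred `
        -ScriptBlock {Get-Service -Name ADWS | where Status -eq "Running"} `
        -ErrorAction SilentlyContinue)

# Extra grace period so ADWS really is ready before AD cmdlets are run
Start-Sleep -Seconds 20
```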

    I won’t bore you with the code for adding the rest of the users, groups etc. to Active Directory, as the easiest way to write that is to do something on a Domain Controller in the Active Directory Administrative Center and grab the stuff you need from the PowerShell History pane at the bottom of the console..


    I also showed you how to read and write text-based CSV files in part 5 of this Lab Ops series, so you could amend my script to add a whole list of objects to your DC from a CSV file that you have previously lifted from a production DC.
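For example, a minimal sketch of that CSV approach (the file name and column names here are invented for illustration, and you would not hard-code passwords in production):

```powershell
# users.csv is a hypothetical file with columns Name,SamAccountName,OU
Import-Csv -Path ".\users.csv" | ForEach-Object {
    New-ADUser -Name $_.Name `
        -SamAccountName $_.SamAccountName `
        -Path $_.OU `
        -AccountPassword (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force) `
        -Enabled $true `
        -Server "RDS-DC.Contoso.com"   # pin the DC, as per the note above
}
```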

    I also need a DHCP server in my lab and I typically put that role on my DC. Here again you can see how PowerShell has improved for newbies like me..

    #Install the DHCP Role
    Add-WindowsFeature -Name "DHCP" -IncludeManagementTools -IncludeAllSubFeature
    #Authorize this DHCP server in AD
    Add-DhcpServerInDC -DnsName RDS-DC.contoso.com
    #Setup a scope for use with RDS/VDI later on
    Add-DhcpServerv4Scope -StartRange 192.168.10.200 -EndRange 192.168.10.254 -SubnetMask 255.255.255.0 -Name RDSDesktops -Description "Pool for RDS desktop virtual machines"
    #Set up the DNS Server Option (6) in DHCP so DHCP clients have the DNS Server entry set
    Set-DhcpServerv4OptionValue -OptionId 6 -value 192.168.10.1
    Set-DhcpServerv4OptionValue -OptionId 15 -value "contoso.com"

    Sadly the trusty old DHCP MMC snap-in doesn’t have a history window, so I looked at the options set by the wizard and set them as you can see here. Once all this is working I can go on to create the other VMs in this series. However this DC script also sets up and uses a Hyper-V internal virtual switch, “RDS-Switch”, and ensures that my physical host (Orange, which is my big Dell laptop) can connect to my new DC on that switch..

    # Setup the networking we need - an internal switch called RDS-Switch. If it's not there already, create it and set DNS to point to our new DC (RDS-DC) on 192.168.10.1
    If (!(Get-VMSwitch | where name -EQ $VMSwitch )){New-VMSwitch -Name $VMSwitch -SwitchType Internal}
    # Now configure switch on the host with a static IPaddress and point it to our new VM for DNS
    $NetAdapter = Get-NetAdapter | Where name -Like ("*" + $VMSwitch + "*")

    #Note the use of the !(some condition) syntax to refer to not true
    If (!(Get-NetIPAddress -InterfaceAlias $NetAdapter.InterfaceAlias -IPAddress "192.168.10.100" -PrefixLength 24 -ErrorAction SilentlyContinue)) {New-NetIPAddress -InterfaceAlias $NetAdapter.InterfaceAlias -IPAddress "192.168.10.100" -PrefixLength 24}
    Set-DnsClientServerAddress -InterfaceAlias $NetAdapter.InterfaceAlias -ServerAddresses "192.168.10.1"

    The final piece of the puzzle is to join my physical laptop to this domain, as I am going to need the host for VDI, and for now I am going to run that manually with the Add-Computer command..

    $LocalCred = new-object -typename System.Management.Automation.PSCredential -argumentlist "orange\administrator", (ConvertTo-SecureString "Passw0rd" -AsPlainText -Force  )
    $DomainCred = new-object -typename System.Management.Automation.PSCredential -argumentlist "Contoso\administrator", (ConvertTo-SecureString "Passw0rd" -AsPlainText –Force  )
    Add-Computer -ComputerName Orange -DomainName Contoso.com -LocalCredential $LocalCred -DomainCredential $DomainCred -Force

    Start-Sleep –seconds 5

    Restart-Computer -ComputerName Orange -Credential $LocalCred

    ..and of course to test that, I need to pull the host back out of the domain with Remove-Computer before I test it again. By the way, don’t put the -Restart switch on the end of Add-Computer, as that will bounce your host and hence your DC as well, and while your host appears to be domain joined it doesn’t show up in the domain.
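For completeness, the unjoin I use between test runs looks roughly like this (same credentials as above):

```powershell
# Pull the host back out of the domain so the join script can be tested again
Remove-Computer -ComputerName Orange -UnjoinDomainCredential $DomainCred `
    -LocalCredential $LocalCred -Force

# Reboot separately, for the same reason -Restart is avoided on Add-Computer
Restart-Computer -ComputerName Orange -Credential $LocalCred
```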

    I have posted the whole script on SkyDrive (Create RDS-DC.ps1); it’s called RDS-DC as it’s designed to underpin my Remote Desktop Services demos. Note there are a couple of Write-Host lines in there to echo output to the console, where in reality you would log progress to a file.

    As ever, any advice and comments on this are welcome, and I can repay in swag and by properly crediting your work.

  • Lab Ops part 11 – Server Core

    My laptop running Windows Server 2012R2 looks like this when it starts:

    [Screenshot: Start screen]

    This is a good thing and a bad thing. It’s a good thing if:

    • You have a server you want to connect to from a touch device, maybe with a smaller form factor, and you have big fingers: you can get to the task you want in a hurry, as you can pin web sites, MMC snap-ins etc. to the Start Screen and organise them as I have done.
    • You use your server for Remote Desktop Session Virtualization (Terminal Services as was): your users will see the same interface as their Windows 8.1 desktop and get a consistent experience.
    • You are an evangelist at Microsoft who has to do lots of demos and isn’t allowed near production servers!

    However if you are managing production servers at any kind of scale, this is a bad thing, as you don’t need all the tools and interfaces on every server you deploy. All those tools expose interfaces like File Explorer and Internet Explorer, so if your servers are in the wild (not in managed data centres) then curious local admins might use those tools to surf the net or reconfigure your servers. These interfaces also require patching and consume resources.

    This is why Server Core was introduced in Windows Server 2008, and in Windows Server 2012R2 Server Core is the default installation option. All you get if you install Windows Server with this option is:

    • Task Manager
    • Registry Editor
    • Notepad!
    • Command Line
    • SConfig, a lightweight menu of scripts to do basic configuration tasks
    • and with Windows Server 2012 and later you also get PowerShell

    Server Core wasn’t a popular choice in Windows Server 2008 for  a number of reasons:

    • It was too limited. For example there was no ability to run the .NET Framework, so it couldn’t run things like ASP.NET websites or SQL Server, and it didn’t include PowerShell by default, because PowerShell is also built on .NET.
    • 2008 wasn’t set up for remote management by default, patching was problematic, and unless you paid for System Center there wasn’t a tool to manage these servers at any sort of scale.
    • It was an install only option, so the only way to get back to a full interface was to do a complete reinstall.

    That has all been fixed in Windows Server 2012 and later, so Server Core should be your default option for all your servers except those used as Remote Desktop Session Virtualization hosts. You could achieve nearly the same result by doing a full install and ripping out all the interfaces once your server is configured the way you want: go to Remove Roles and Features in Server Manager and, in the features screen, uncheck each of the options..

    [Screenshot: User interface features in Server Manager]

    where:

    • The Server Graphical Shell has IE, File Explorer, the Start button and the Start Screen.
    • The Desktop Experience gives you the Windows Store and makes Windows Server behave like Windows 8.1.
    • The Graphical Management Tools have Server Manager and the MMC snap-ins.

    If you remove all of these from here or use PowerShell to do this:

    Remove-WindowsFeature -Name Desktop-Experience, Server-Gui-Mgmt-Infra, Server-Gui-Shell

    you essentially get Server Core. If you leave behind the Graphical Management Tools ..

    Remove-WindowsFeature –Name Desktop-Experience, Server-Gui-Shell

    You’ll get what’s known as “MinShell”: all the management tools, but no extra fluff like IE and the Start Screen, so this is also a popular choice.

    If you do elect to do a Server Core install and then later decide to put the management user interface back in, you need to remember that the binaries for these features aren’t on the server, so when you add the features back you’ll need to specify a -Source switch. Before that you’ll need access to the source for these features, by mounting the appropriate .wim file using the Deployment Image Servicing and Management tool (DISM):

    MD C:\WS2012install

    DISM /Mount-WIM /WIMFile:D:\sources\install.wim /Index:4 /MountDir:C:\WS2012Install /ReadOnly

    Add-WindowsFeature –Name Server-GUI-Mgmt-Infra –Source c:\ws2012install\Windows\SxS

    shutdown /r /t 0

    DISM /Unmount-WIM /MountDir:C:\WS2012Install /Discard

    Notes:

    • This turns Server Core into a MinShell installation
    • D: is the install media for Windows Server 2012R2
    • Index is the individual installation on the media; for the evaluation edition, 4 corresponds to the full install of Datacenter edition (you can run DISM /Get-WIMInfo /WIMFile:D:\sources\install.wim to see what’s there)
    • shutdown /r /t 0 restarts the server immediately

    This is actually quite a useful template, as it doesn’t have the binaries needed to put IE etc. back in but is still easy to manage. You could then use this as the basis for creating VMs for your labs and evaluations by sysprepping it:

    sysprep /generalize /shutdown /oobe

    and if you wanted to save space you could then use it as the parent for the differencing disks behind the various VMs in your lab environment.
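Creating those differencing disks from the sysprepped parent is a one-liner per VM; a sketch with made-up paths:

```powershell
# Parent is the sysprepped MinShell template; child paths are per-VM (paths are examples)
New-VHD -Path "E:\VMs\Lab-VM1\Lab-VM1.vhdx" `
    -ParentPath "E:\Templates\WS2012R2-MinShell.vhdx" -Differencing
```

Each lab VM then only stores its changes relative to the shared parent disk.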

    There is a free version of Windows Server 2012R2 specifically designed for Hyper-V, called Hyper-V Server 2012R2. This is very like Server Core, but you can’t add the UI back in and it only has a few roles in it (specifically Hyper-V, clustering and file services), as that’s all that is included in the license.

    To finish up, Server Core is really useful, and much easier to manage remotely than it was, as Windows Server is set up by default for remote management within a domain. Just as importantly, you might have dismissed a feature in Windows when it first showed up in a new version, but it’s worth checking back on how it’s changed in each new version, be that Server Core, Hyper-V, storage or those big squares!

  • Lab Ops Part 10–Scale Out File Servers

    In my last post I showed how to build a simple cluster out of two VMs and a shared disk.  The VMs are nodes in this cluster..

    [Screenshot: HA Cluster nodes]

    and the shared “quorum” disk is used to mediate which node owns the cluster when a connection is lost between them..

    [Screenshot: HA Cluster quorum disk]

    However this cluster is not actually doing anything yet; it isn’t providing any service. In this post I am going to fix that and use the cluster as a Scale-Out File Server. This builds on what I did with a single file server earlier in this series. To recap, I am essentially going to build a SAN: a bunch of disks managed by two controllers. The disks are the shared VHDX files I created last time and the controllers are the file server nodes in my cluster.

    First of all I need to add the Scale-Out File Server role to the cluster..

    Add-ClusterScaleOutFileServerRole -Name HAFileServer -Cluster HACluster

    If I go back to Failover Cluster Manager I can see that the role is now installed..

    [Screenshot: HAFileServer role in Failover Cluster Manager]

    This time the pool will be created on the cluster, and while I could expand pools under storage in cluster manager and create a new storage pool via the wizard, it’s more interesting to look at the equivalent PowerShell.  If I look at the storage subsystems on one of my nodes with

    Get-StorageSubSystem | Out-GridView

    I get this..

    [Screenshot: storage subsystems]

    I need to use the second option (the clustered storage subsystem) so that my new pool gets created on the cluster rather than on the server I am working on. The actual script I have is..

    $PoolDisks = Get-PhysicalDisk | where CanPool -eq $true
    $StorageSubSystem = Get-StorageSubSystem | where FriendlyName -Like "Clustered Storage Spaces*"
    New-StoragePool -PhysicalDisks $PoolDisks -FriendlyName "ClusterPool" -StorageSubSystemID $StorageSubSystem.UniqueId

    Here again the power of PowerShell comes out:

    • $StorageSubSystem is actually an instance of a class, as you can see when I reference its unique ID as $StorageSubSystem.UniqueId. In the same way $PoolDisks is an array of disks, showing that we don’t need to declare the type of object stored in a variable: it could be a number, a string, a collection of these, or in this case a bunch of disks that can be put in a pool!

    • The pipe operator passes objects along a process, and the simple where clause filters objects by any of their properties. BTW we can easily find the properties of any object with Get-Member, as in

    #if you try this on your windows 8 laptop you’ll need to run PowerShell as an administrator

    get-disk | get-member

    I said earlier that I am building a SAN, and within this my storage pool is just a group of disks that I can manage. Having done that, my next task on a SAN would be to create logical units (LUNs). Modern storage solutions make more and more use of hybrid setups where the disks involved are a mix of SSDs and hard disks (HDDs), intelligently placing the most used or ‘hot’ data on the SSDs. Windows Server 2012R2 can do this too, and it is referred to as tiered storage. Because I am spoofing my shared storage (as per my last post) the media type of the disks is not set, so tiered storage wouldn’t normally be available. However I can fix that with this PowerShell once I have created the pool..

    #Assign media types based on size
    # Use (physicalDisk).size to get the sizes

    Get-PhysicalDisk | where size -eq 52881784832 | Set-PhysicalDisk -MediaType HDD
    Get-PhysicalDisk | where size -eq 9932111872 | Set-PhysicalDisk -MediaType SSD

    #Create the necessary storage tiers
    New-StorageTier -StoragePoolFriendlyName ClusterPool -FriendlyName "SSDTier" -MediaType SSD
    New-StorageTier -StoragePoolFriendlyName ClusterPool -FriendlyName "HDDTier" -MediaType HDD

    #Get references to the tiers, for creating virtual disks later
    $SSDTier = Get-StorageTier "SSDTier"
    $HDDTier = Get-StorageTier "HDDTier"

    In the world of Windows Server storage I create a Storage Space, which is actually a virtual hard disk, and if I go into Cluster Manager and highlight my pool I can create a Virtual Disk...

    [Screenshot: tiered storage 1]

     

    Foolishly I called my disk LabOpsPool, but it’s a storage space, not a pool. Anyway, I did check the option to use storage tiers (this only appears because of the spoofing I have already done to mark the disks as SSD/HDD). Next I can select the storage layout.

    [Screenshot: tiered storage 2]

    and then I can decide how much of each tier I want to allocate to my Storage Space/Virtual Disk..

    [Screenshot: tiered storage 3]

    Note the amount of space available is fixed: we have to use thick provisioning with storage tiers, whereas I could thin provision if I wasn’t using them. BTW my numbers are more limited than you would expect because I have been testing this, and the SSD number will be lower than you would think because the write cache also gets put on the SSDs. Having done that, the wizard will let me initialise the disk, put a simple volume on it and format it.

    [Screenshot: tiered storage 4]

    Now I need to add the disk I created to Cluster Shared Volumes, as I want to put application data (VMs and databases) on it. Then I need to navigate to the Scale-Out File Server role in the left pane and create a share on the disk so it can actually be used..

    [Screenshot: tiered storage 5]

    This fires up the same share wizard as you get when creating shares in Server Manager..

    [Screenshot: tiered storage 6]

    I am going to use this for storing application data,

    [Screenshot: tiered storage 7]

    it’s going to be called VMStorage

    [Screenshot: tiered storage 8]

    my options are greyed out based on the choices I already made, but I can encrypt the data if I want to, and I don’t need to do any additional work other than checking this.

    [Screenshot: tiered storage 9]

    I then need to set up file permissions. You’ll need to ensure your Hyper-V hosts, database servers etc. have full control on this share to use it. In my case my Hyper-V servers are in a group I created, imaginatively titled Hyper-V Servers.
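Granting that group full control can also be scripted; a sketch, assuming the share name VMStorage from the wizard above and an example path for the underlying folder:

```powershell
# Give the Hyper-V hosts group full control on the SMB share...
Grant-SmbShareAccess -Name "VMStorage" -AccountName "contoso\Hyper-V Servers" `
    -AccessRight Full -Force

# ...and mirror that on the underlying NTFS folder (path is an example)
icacls "C:\ClusterStorage\Volume1\Shares\VMStorage" /grant "contoso\Hyper-V Servers:(OI)(CI)F"
```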

    The last few steps can of course be done in PowerShell as well, so here’s how I do that. Note that this is my live demo script, so some of the bits are slightly different:

    #create two tiered storage spaces - one will have dedup enabled, one won't
    New-VirtualDisk -FriendlyName $SpaceName1 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 30Gb,5Gb -WriteCacheSize 1gb -ResiliencySettingName Mirror
    New-VirtualDisk -FriendlyName $SpaceName2 -StoragePoolFriendlyName $PoolName  -StorageTiers $HDDTier,$SSDTier -StorageTierSizes 30Gb,5Gb -WriteCacheSize 1gb -ResiliencySettingName Mirror

    #create the dedup volume and mount it
    #First we need to put the disk into maintenance mode
    $ClusterresourceName = "*("+ $SpaceName1 + ")*"
    Get-ClusterResource | where name -like $ClusterResourceName | Suspend-ClusterResource
    $VHD = Get-VirtualDisk $SpaceName1
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    New-Partition -DiskNumber $Disk.Number -DriveLetter "X" -UseMaximumSize
    Format-Volume -DriveLetter "X" -FileSystem NTFS -NewFileSystemLabel "DedupVol" -Confirm:$false
    #note -usagetype Hyper-V for use in VDI ONLY!
    Enable-DedupVolume -Volume "X:" -UsageType HyperV
    #Bring the disk back on line
    Get-ClusterResource | where name -like $ClusterResourceName | Resume-ClusterResource

    #create the dedup volume and mount it
    #First we need to put the disk into maintenance mode
    $ClusterresourceName = "*("+ $SpaceName2 + ")*"
    Get-ClusterResource | where name -like $ClusterResourceName | Suspend-ClusterResource
    $VHD = Get-VirtualDisk $SpaceName2
    $Disk = $VHD | Get-Disk
    Set-Disk  $Disk.Number -IsOffline 0
    New-Partition -DiskNumber $Disk.Number -DriveLetter "N" -UseMaximumSize
    Format-Volume -DriveLetter "N" -FileSystem NTFS -NewFileSystemLabel "NormalVol" -Confirm:$false
    #Bring the disk back on line
    Get-ClusterResource | where name -like $ClusterResourceName | Resume-ClusterResource

    #Add to Cluster Shared Volumes
    $StorageSpaces = Get-ClusterResource | where Name -Like "Cluster Virtual Disk*"
    ForEach ($Space in $StorageSpaces) { Add-ClusterSharedVolume -Cluster $ClusterName -InputObject $Space } 

    #create the standard share directory on each new volume
    $DedupShare  = "C:\ClusterStorage\Volume1\shares"
    $NormalShare = "C:\ClusterStorage\Volume2\shares"
    $VMShare     = "VDI-VMs"
    $UserShare   = "UserDisks"

    md $DedupShare
    md $NormalShare

    $Share = "Dedup"+$VMShare
    $SharePath = $DedupShare + "\" + $VMShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    $Share = "Dedup"+$UserShare
    $SharePath = $DedupShare + "\" + $UserShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    $Share = "Normal"+$VMShare
    $SharePath = $NormalShare + "\" + $VMShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    $Share = "Normal"+$UserShare
    $SharePath = $NormalShare + "\" + $UserShare
    MD $SharePath
    New-SmbShare -Name $Share -Path $SharePath -CachingMode None -FullAccess contoso\Orange$,contoso\administrator -ScopeName $HAFileServerName -ContinuouslyAvailable $True

    I know this can be sharpened up for production with loops and functions, but I hope it’s clearer laid out this way. Note I have to put the disks into maintenance mode on the cluster while I format them, whereas the Storage Spaces wizard takes care of that for you. This script is part of my VDI demo setup, so I have enabled deduplication on one storage space and not on the other, to compare performance of each.

    Once I have created my spaces and shares I am ready to use them, and a quick way to test all is well is to do a storage migration of a running VM to the new share. Just right click on a VM in Hyper-V Manager and select Move to bring up the wizard.

    [Screenshot: tiered storage 10]

    after about 30 seconds my VM arrived safely on my file server, as you can see from its properties..

    [Screenshot: tiered storage 11]
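That same migration test can be done without the wizard; a sketch, with the UNC path standing in for whichever of the shares above you want to test:

```powershell
# Live storage migration of a running VM's files onto the new SOFS share
Move-VMStorage -VMName "RDS-Ref" `
    -DestinationStoragePath "\\HAFileServer\NormalVDI-VMs"
```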

     

    Hopefully that’s useful - it certainly went down well in London, Birmingham & Glasgow during our recent IT Camps, and if you want the full script I used, I have also put it on SkyDrive. Note that this script is designed to be run from FileServer2, having already set up the cluster with the cluster setup script I posted previously.