Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

January, 2014

  • Lab Ops part 17 - Getting Started with Virtual Machine Manager

    In my last post I talked a lot about the way all of System Center works; now I want to look at one part of it, Virtual Machine Manager (VMM), as VMs in a modern data centre are at the core of the services we provide to our users. In the good old days we managed real servers that had real disks connected to real networks, and of course those still exist, but consumers of data centre resources, whether that's other parts of the IT department or business units, will only see virtual machines with virtual disks connected to virtual networks. So administration in VMM is all about translating all the real stuff (or fabric as it's referred to) into virtual resources, and before we do anything in VMM we need to configure the fabric, specifically our physical hosts, networking and storage.

    There’s good TechNet labs and MVA courses to learn this stuff, but I think it’s still good to do some of this on your own server, so you can come back it again whenever you want to especially if you are serious about getting certified.  So what I am going to do in the next few posts is to explain how to use the minimum of kit to try some of this at home.  I generally use laptops which are sort of portable typically with 500Gb plus of SSD and at least 16Gb or RAM half of those resources should be enough.

    I am going to assume you have got an initial setup in place:

    • Copy the above VHDX to the root of a suitable disk on your host (say E:\) and mount it (so it now shows up as a drive, for example X:)
    • From an elevated prompt type BCDBoot X:\windows
    • Type BCDEdit /set "{default}" hypervisorlaunchtype auto
    • Type BCDEdit /set "{default}" description "Hyper-V Rocks!"
    • Reboot the host and select the top boot option, which should say Hyper-V Rocks!
    • From PowerShell type Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart
    • The machine should restart and you are good to go.
    • VMM running already, either by downloading the complete VMM evaluation VHD or installing from the VMM 2012R2 iso yourself.
    • A domain controller as per my earlier post on this, with the host and the VMM VM belonging to this domain.
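
    For reference, here are those host steps gathered into one place as a rough sketch you could paste into an elevated PowerShell prompt; the "Hyper-V Rocks!" label and drive letters are just my examples:

    # Sketch of the boot-from-VHD setup above - adjust the drive letter to suit
    bcdboot X:\windows
    bcdedit /set "{default}" hypervisorlaunchtype auto
    bcdedit /set "{default}" description "Hyper-V Rocks!"
    # ..reboot into the new entry, then enable Hyper-V and the management tools
    Install-WindowsFeature Hyper-V -IncludeManagementTools -Restart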

    The first thing you want to do in VMM is to configure Run As Accounts. One of the reasons my PowerShell scripts in this series are not production ready is that they have my domain admin password littered all over them, which is not good. VMM allows us to create accounts used to authenticate against anything we might need to, which could be a domain account, a local account on a VM (be that Windows or Linux), or access to a switch or a SAN. So let's start by adding in domain admin. We can do this from Settings | Security | Run As Accounts or with the VMM PowerShell cmdlets, and now all I have to do is open PowerShell from inside VMM or use PowerShell ISE; either way there's no more of that mucking about importing modules to do this..

    #New Run As Account in VMM
    $credential = Get-Credential
    $runAsAccount = New-SCRunAsAccount -Credential $credential -Name "Domain Admin" -Description "" -JobGroup "cb839483-39eb-45e0-9bc9-7f482488b2d1"

    Note this will pop up the credential screen for me to complete (contoso\administrator is what I put in). The JobGroup at the end puts this activity into a group that ends up in the VMM job history, so even if we use PowerShell in VMM our work is recorded, which is a good thing. We can get at that job history with Get-SCJob | Out-GridView

    We’ll probably want to do the same for a local account so just Administrator and a password so that any VM’s we create will have the same local admin account & Note this will popup the credential screen for me to complete.

    Now we can consume that domain login account to add and manage our host. In VMM we would do this from Fabric | Servers | All Hosts, and before we add in a server we can create host groups to manage lots of hosts (I have one already called Contoso). To add a host, right click on All Hosts or a host group and select Add Hyper-V Hosts or Clusters, and the equivalent PowerShell is ..

    <#Note in the raw PowerShell from VMM there  is a load of IDs included but this will work as long as we don’t use duplicate names in accounts etc.  #>

    $runAsAccount = Get-SCRunAsAccount -Name "Domain Admin"

    $hostGroup = Get-SCVMHostGroup -Name "Contoso"
    Add-SCVMHost -ComputerName "clockwork.contoso.com" -RunAsynchronously -VMHostGroup $hostGroup -Credential $runAsAccount

    In the background the VMM agent has been installed on our host and it has been associated with this instance of VMM. You should now see the host in VMM along with all the VMs that are on it, so not bad for a line of PowerShell! We can also see the properties of our host by right clicking on it; of special interest are the virtual switches on the host, and if you have used my script without modifying it you'll see a switch called RDS-Switch on the host. We can also see the local storage attached to our host here.
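
    The same information can be pulled out from the VMM command shell; a quick sketch, assuming the host name I used above:

    # Inspect the host we just added - the VMs on it and the virtual switches VMM found
    $vmHost = Get-SCVMHost -ComputerName "clockwork.contoso.com"
    Get-SCVirtualMachine -VMHost $vmHost | Select-Object Name, Status
    Get-SCVirtualNetwork -VMHost $vmHost | Select-Object Name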

    So now we have a basic VMM environment we can play with: a host, a DC and VMM itself. So what do we need to do next? If this was vCenter we would probably want to set up our virtual switches and port groups, so let's look at the slightly scary but powerful world of virtual networks next.

  • Virtual Machine Manager 2012R2 Templates cubed

    Up until now in my Lab Ops series I have been using bits of PowerShell to create VMs in a known state. However this requires a certain amount of editing, and I would have to write a lot more code to automate it and properly log what is going on, plus I would probably want a database to store variables in and track those logs, and some sort of reporting to see what I have done. That's pretty much how Virtual Machine Manager (VMM) works anyway, so rather than recreate it in PowerShell I'll just use the tool instead. VMM not only manages VMs, it also manages the network and storage used by those VMs. However before we get to that we need to create some VMs to play with, and before we can do that we need to understand how templates are used. It's actually similar to what I have been doing already – using a sysprepped copy of the OS, configuring that for a particular function (file server, web server etc.) and building a VM around it. It's possible just to use the Windows Server 2012R2 evaluation iso and get straight on and build a template, and from there a VM.

    However VMM also has the concept of profiles, which are sort of templates used to build the templates themselves. There are profiles for the Application, the Capability (which hypervisor to use), the Guest OS and the Hardware. Only the hardware profile will look familiar if you have been using Hyper-V Manager, as this has all the VM settings in it. The idea behind profiles is that when you create a VM template you can simply select a profile rather than filling in all the settings on a given VM template tab, and in so doing you are setting up an inheritance to that profile. However the settings in Application and Guest OS profiles are only relevant when creating a Service Template. So what are those, and why all this complexity when I can create a VM in a few minutes and a bit of PowerShell?

    For me Service Templates are the key to what VMM is all about. If you are a VMware expert they are a more sophisticated version of Resource Groups, and before you comment below please hear me out. A service template completely describes a service and each of its tiers as one entity..

    [diagram: a sample two-tier service]

    If I take something like SharePoint, there are the Web Front Ends (WFE), the service itself (the middle tier) and back end databases which should be on some sort of SQL Server cluster. The Service Template allows us to define maximum and minimum limits for each tier in this service and to declare upgrade domains, which will enable you to update the service while it is running by only taking parts of it off line at a time. The upper and lower VM limits on each tier enable you to scale the service up and down based on demand or a service request, by using other parts of System Center. There might well be a way to do this sort of thing with Resource Groups and PowerCLI in vCenter, but then there are those application and hardware profiles I mentioned earlier. They mean that I can actually deploy a fully working SharePoint environment from a template, including having a working SQL Server guest cluster where the shared storage for that cluster is on a shared VHDX file.
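
    Once a service template exists, deploying an instance of it from PowerShell is only a few lines. This is just a sketch – the template name, service name and the reuse of my Contoso host group are my own assumptions, and in practice you would review placement in the designer first:

    # Deploy a service instance from an existing service template
    $template = Get-SCServiceTemplate -Name "SharePoint Farm"
    $hostGroup = Get-SCVMHostGroup -Name "Contoso"
    $config = New-SCServiceConfiguration -ServiceTemplate $template -Name "SharePoint Production" -VMHostGroup $hostGroup
    New-SCService -ServiceConfiguration $config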

    Services created by these templates can then be assigned to clouds, which are nothing more than logical groupings of compute, networking and storage splashed across a given set of hosts, switches and storage providers, and assigned to a given group of users who have delegated control of that resource within set limits.
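
    Creating the cloud itself is pretty much a one liner once the fabric is in place; a sketch with hypothetical names (the capacity limits and the user roles that consume the cloud are then set in the console or with further cmdlets):

    # Carve a cloud out of the Contoso host group
    $hostGroup = Get-SCVMHostGroup -Name "Contoso"
    New-SCCloud -Name "Contoso Cloud" -VMHostGroup $hostGroup -Description "Delegated compute, network and storage for Contoso"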

    So templates might seem to be piled on top of one another here, but you don’t have to use all of this capability if you don’t want to. However if you do have a datacentre (the internal Microsoft definition of a datacentre has more than 60,000 physical servers) then this power is there if you need it.

    If you haven’t a spare server and a VMM lab setup then you can just jump into the relevant TechNet Lab and see how this works.

  • System Center: Use the right tool for the right job

    This post isn’t really about Lab Ops as it’s more theory than Ops, but before I dive in to the world of what you can do with System Center I wanted to stress one important concept:

    Use the right tool for the right job. 

    That old saying that when you have a hammer everything looks like a nail can harm your perception of what SC is all about. Perhaps the biggest issue here is simply to have a good handle on the suite, which is not easy as traditionally many of us will have been an expert in just one or two components (or whole products, as they once were). So here's how I think about this..

    • Virtual Machine Manager controls the fabric of the modern virtualized data centre and allows us to provision services on top of that
    • App Controller is there to allow us to control services in Azure as well as what we have in our datacentre
    • Configuration Manager allows us to manage our users, the devices they use and the applications they have. It can also manage our servers but actually in the world of the cloud this is better done in VMM
    • Then it’s important to understand what’s going on with our services and that’s Operations Manager. 
    • Rather than sit there and watch Operations Manager all day, we need to have an automated response when certain things happen and that’s what Orchestrator is for.
    • In an ITIL service delivery world we want change to happen in a controlled and audited manner, whether that's change needed to fix things or change because somebody has asked for something. That's what Service Manager is for, so if something is picked up by Operations Manager that we need to respond to, it would be raised as an incident in Service Manager, which in turn would automatically remediate the problem by calling a process in Orchestrator, which might do something in Virtual Machine Manager for example.

    The reason that SC is not fully configured and integrated out of the box is simply down to history and honesty. Historically SC was a bunch of different products which are becoming more and more integrated. Honesty comes from the realisation that in the real world, many organisations have made significant investments in infrastructure and its management which are not Microsoft based. For example if your helpdesk isn't based on Service Manager then the other parts of SC can still to a large extent integrate with what you do have, and if you aren't using Windows for virtualization or your guest OS then SC can still do a good job of managing VMs, and letting you know whether the services on those servers are OK or not as the case may be.

    Another important principle in SC is that you should not go behind the back of SC and use tools like Server Manager and raw PowerShell to change your infrastructure (or fabric as it's referred to in SC). This is important for two reasons: you are wasting your investment in SC, and you lose a key aspect of its capabilities such as its audit function. Notice I used the term "raw PowerShell"; what I mean here is that SC itself has a lot of PowerShell cmdlets of its own, but these are making calls to SC itself, so if I create a new VM with a Virtual Machine Manager (VMM) PowerShell cmdlet then the event will be logged.
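
    To make that concrete, here's a sketch (the VM name is hypothetical): stopping a VM through the VMM cmdlets leaves an entry in the job history, whereas running Hyper-V's own Stop-VM directly on the host would be invisible to VMM until its next refresh.

    # Work through VMM so the action is recorded in the job history
    $vm = Get-SCVirtualMachine -Name "RDS-DC"
    Stop-SCVirtualMachine -VM $vm
    Get-SCJob | Sort-Object StartTime -Descending | Select-Object -First 5 Name, Status, Owner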

    There’s another key concept in SCV and that is “run as” accounts so whther I am delegating a control to user by giving them limited access to an SC console or I am using SC’s PowerShell cmdlets, I can reference a run as account to manage or change something without exposing the actual credentials need to do that to the user or in my script.

    Frankly my PowerShell is not production ready; some of that is deliberate in that I don't clutter my code with too much error trapping, and some is that I am just not that much of an expert in things like remote sessions and logging. The point is that if you are using SC for any serious automation you should use Orchestrator, for all sorts of reasons:

    • Orchestrator is easy, I haven’t and won’t post an update on getting started with Orchestrator because it hasn’t really changed since I did this
    • It’s very lightweight  - it doesn’t need a lot of compute resources to run
    • You can configure it for HA so that your jobs will run when they are supposed to which is hard with raw PowerShell.
    • You can include PowerShell scripts in the processes (run books) that you design for things that Orchestrator can’t do
    • There are loads of integration packs to connect to other resources, and these are set up with configurations which hold the credentials for those other services, so they won't be visible in the run book itself.
    • you have already bought it when you bought SC!

    Another thing about SC generally is that there is some overlap; I discussed this a bit in my last post with respect to reporting and it crops up in other areas too. In VMM I can configure a bare metal deployment of a new physical host to run my VMs on, but I can also do server provisioning in Configuration Manager, so which should I use? That depends on the culture of your IT department and whether you have both in production. On the one hand a datacentre admin should be able to provision new fabric as the demand for virtualization grows, on the other hand all servers, be they physical or virtual, should be in some sort of desired state and Configuration Manager does a great job of that. It all comes back to responsibility and control: if you are in control you are responsible, so you need to have the right tools for your role.

    So use the right tool for the right job, and after all this theory we'll look at what the job is by using SC in future posts.

  • Lab Ops part 16 – System Center setup

    This post isn’t going to tell you how to install System Center screen by screen as there are some 434 of these to do a complete install and configure.  That’s a lot of clicking with a lot of opportunity for mistakes and while I realise that not everyone needs to tear down and reset everything surely there must be a better way to try it out?

    There is, but it involves some pretty intense PowerShell scripts and accompanying xml configuration files, collectively known as the PowerShell Deployment Toolkit (PDT), which is on the TechNet Gallery. It works from scratch – it will pull down all the installs and prereqs you need, install the components across the servers you define complete with SQL Server, and do all of the integration work as well. There is a full set of instructions here on how to edit the xml configuration files (the PowerShell doesn't need to change at all) so I am not going to repeat those here.

    What I do want to do is to discuss the design considerations for deploying System Center 2012R2 (SC2012R2)  in a lab for evaluation, before I go on to showing some cool stuff in following posts.

    SC2012R2 Rules of the game:

    Most parts of the SC2012R2 suite are pretty heavyweight applications and will benefit from being on separate servers, and all of SC2012R2 is designed to be run virtually, just as today you might be running vCenter in a VM. Note that Virtual Machine Manager (VMM) is quite happy on a VM managing the host that VM is running on.

    Operations Manager, Service Manager and the Service Manager Data Warehouse cannot be on the same VM or server and even the Operations Manager agent won’t install onto a server running any part of Service Manager.  I would recommend  keeping VMM away from these components as well from a performance perspective.

    The lighter weight parts of the suite are Orchestrator and App Controller both of which could for example be collocated with VMM which is what I do. 

    All of the SC2012R2 components make use of SQL Server for one or more databases. In evaluation land we can get SQL Server for 180 days, just as SC2012R2 is good for 180 days, but the question is where to put the databases: alongside the relevant component or centrally. My American cousins used to put all the databases on the DC in a lab, as both of these are needed all the time, however we generally run our labs on self contained VMs, each with its own local database.

    Speaking of domains, I tend to have a domain for my hosts and the System Center infrastructure, and I do on occasion create tenant domains in VMM to show hosting and multi-tenancy. The stuff that's managed by System Center doesn't have to be in the same domain and may not even be capable of joining a domain, such as Linux VMs, switches and SANs, but we will need various Run As accounts to access that infrastructure, with community strings and ssh credentials.

    Best practice for production. The real change for deploying System Center in production is all about high availability. Given that System Center is based on JBOD (just a bunch of databases), what needs protecting are the databases and the certificates associated with them, so that if a VM running VMM is lost we can simply create a new VM, add in VMM and point it at our VMM database. The System Center databases are best protected with Availability Groups, and while I realise that is only available in SQL Server Enterprise edition, it doesn't itself rely on shared storage. Availability Groups replicate the data from server to server in a cluster, and although clustering is used the databases can be on direct attached storage on each node. There is some special info on how to use this with System Center on TechNet, which also applies to Service Manager.
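
    For flavour, setting up an Availability Group for the VMM database with the SQL Server PowerShell cmdlets looks something like this. This is only a sketch: the server names, endpoint URLs and Availability Group name are hypothetical, and the WSFC cluster, the database mirroring endpoints and an initial backup/restore of the database all need to be in place first.

    # Protect the VMM database (VirtualManagerDB) with an Availability Group
    Import-Module SQLPS -DisableNameChecking
    $primary   = New-SqlAvailabilityReplica -Name "SQL1" -EndpointUrl "TCP://sql1.contoso.com:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
    $secondary = New-SqlAvailabilityReplica -Name "SQL2" -EndpointUrl "TCP://sql2.contoso.com:5022" -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 11
    New-SqlAvailabilityGroup -Name "SC-AG" -Path "SQLSERVER:\SQL\SQL1\DEFAULT" -AvailabilityReplica @($primary, $secondary) -Database "VirtualManagerDB"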

    That leads me onto my next point about production – there are a lot of databases in System Center, and some of those are data marts/data warehouses, but only one of them could arguably be called a data warehouse and that's the one in Service Manager. Why? Well, if you are using Service Manager you don't need the others, as it should serve as the central reporting database, aka the Configuration Management DB (CMDB). So if you have another help desk tool and that is properly integrated into System Center, then that's where you should go for your reporting. If none of the above then you'll have to dip in and out of the components and tools you have to join the dots (I feel another post coming on about this).

    and finally..

    I have the capacity to run an extra VM which runs Windows 8.1 plus the Remote Server Administration Tools (RSAT), SQL Server Management Studio and all of the SC2012R2 management consoles. This means I don't have to jump from VM to VM to show how things work. Plus, in the process of installing all of those tools in one place, I have access to all of the PowerShell cmdlets associated with server management, SQL Server and all of System Center. So now I can write scripts from one place to get stuff done right across my datacentre, or carry on filling in dialog boxes.

  • Lab Ops part 15 - Combining MDT with Windows Update Services

    If we are going to deploy VDI to our users we are still going to have some of the same challenges as we would have if we still managed their laptops directly. Perhaps the most important of these is keeping VDI up to date with patches. What I want to do in this post is show how we can integrate Windows Server Update Services (WSUS) with MDT to achieve this:

    • Set up WSUS
    • Connect it to MDT
    • Approve patches
    • Recreate the Virtual Desktop Template with the script I created in Part 12 of this series
    • Use one line of PowerShell to recreate my pooled VDI collection based on the new VDT.

    Some notes before I begin:

    • All of this is easier in Configuration Manager, but the same principles apply, plus I could do a better job of automating and monitoring this process with Orchestrator. I am doing it this way to show the principles.
    • I am using my RDS-Ops VM to deploy WSUS on, as it's running Windows Server 2012R2 and I have a separate volume on this VM (E:) with the deduplication feature enabled, which as well as being home to my deployment share can also be the place where WSUS can efficiently keep its updates. It's also quite logical: we normally keep our deployments and updates away from production and then have a change control process to approve and apply updates once we have done our testing.
    • RDS-Ops is connected to the internet already as I have configured the Routing and Remote Access (RRAS) role for network address translation (NAT)

    Installing & Configuring WSUS

    WSUS is now a role inside Windows Server 2012 and later, and on my RDS-Ops VM I already have a SQL Server installation so I can use that for WSUS as well. The WSUS team have not fully embraced PowerShell (I will tell on them!) so although I was able to capture the settings I wanted and save those off to an xml file when I added in the role and features, I also needed to run something like this after the feature was installed..

    .\wsusutil.exe postinstall SQL_INSTANCE_NAME="RDS-Ops\MSSQLServer" CONTENT_DIR=E:\Updates

    (the Scripting Guy blog has more on this here)

    Now I need to configure WSUS for the updates I want, and there isn't enough out of the box PowerShell for that. I found I could set the synchronization to Microsoft Update with Set-WsusServerSynchronization -SyncFromMU, but there's no equivalent Get-WsusServerSynchronization command, plus I couldn't easily see how to set which languages I wanted, only products and classifications (whether the update is a driver, an update, a service pack, etc.), so unless you are also a .NET expert with time on your hands (and I am not) you will need to set most everything from the initial wizard and hope for better PowerShell in future. In the meantime, rather than pad this post out with screengrabs, I'll refer you to the WSUS TechNet documentation on what to configure and explain what I selected..

    • Upstream Server Synchronize. Set to  Microsoft Update
    • Specify Proxy Server. None
    • Languages. English
    • Products. I decided that all I wanted to do for now was to ensure I had updates for just Windows 8.1, Windows Server 2012R2 and SQL Server 2012 (my lab has no old stuff in it). This would mean I would have the updates I needed to patch my lab setup and my Virtual Desktop Template via MDT.
    • Classifications. I selected everything but drivers (I am mainly running VMs so the drivers are synthetic and part of the OS)
    • Synch Schedule. Set to daily automatic updates

    I ran the initial synchronize process to kick things off and then had a look at what sort of PowerShell I could use, and I got a bit stuck.
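
    For the record, the parts that do have cmdlets in the UpdateServices module look something like this. This is just a sketch – the product titles are how they appear on my server and may vary slightly, and there is still nothing here for language selection:

    # Sync from Microsoft Update and choose products and classifications by title
    Set-WsusServerSynchronization -SyncFromMU
    Get-WsusProduct | Where-Object { $_.Product.Title -in "Windows 8.1", "Windows Server 2012 R2", "SQL Server 2012" } | Set-WsusProduct
    Get-WsusClassification | Where-Object { $_.Classification.Title -ne "Drivers" } | Set-WsusClassification
    # kick off the first synchronization
    (Get-WsusServer).GetSubscription().StartSynchronization()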

    I then looked at creating something like an automatic approval rule as you can see here..

    [screenshot: an automatic approval rule in the WSUS console]

    only in PowerShell and came up with this ..

    Get-WsusUpdate | where classification -in ("Critical Update", "Security Updates") | Approve-WsusUpdate -Action Install -TargetGroupName "All Computers" # chuck in -whatif to test this

    which I could run behind my scheduled update. Anyway, I have now set some updates as approved so I can turn my attention to MDT and see how to get those updates into my deployment once they have actually downloaded onto my RDS-Ops server. BTW I got a message to download the Microsoft Report Viewer 2008 SP1 Redistributable package along the way.

    Top tip: if the MDT stuff below doesn't work, check that WSUS is working by updating group policy on a VM to point to it. Open GPEdit.msc, expand Computer Configuration -> Administrative Templates -> Windows Components -> Windows Update and set Specify intranet Microsoft update service location to http://<WSUS server>:8530, in my case http://RDS-Ops:8530
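
    If you'd rather not click through GPEdit on the test VM, the same policy boils down to a few registry values; a sketch assuming my server name and the default WSUS port:

    # Point a test VM at the WSUS server via the Windows Update policy keys
    $wu = "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate"
    New-Item -Path "$wu\AU" -Force | Out-Null
    Set-ItemProperty -Path $wu -Name WUServer -Value "http://RDS-Ops:8530"
    Set-ItemProperty -Path $wu -Name WUStatusServer -Value "http://RDS-Ops:8530"
    Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1 -Type DWord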

    If I now go into the MDT Deployment Workbench on my RDS-Ops VM I can edit my task sequence, and as with my last post on installing applications it's in the State Restore node that my updates get referenced..

    [screenshot: the Windows Update steps in the State Restore node of the task sequence]

    Note there are two places where updates can be applied, both pre and post an application install, and both of these are disabled by default. The post application install step would be good if you had updates in WSUS that applied to applications and not just the OS, as I have just set up; the application updates could then be added on top of the base application install. This is a nice touch, but how does MDT "know" where to get the updates from? We can't really set anything in WSUS itself or apply any group policy because the machines aren't built yet. The answer is to add one more setting, WSUSServer=http://<WSUS Server>:8530, into the rules for the Deployment Share (aka CustomSettings.ini) – I left the default port as is when I set up WSUS ..

    [Settings]
    Priority=Default
    Properties=MyCustomProperty

    [Default]
    DeploymentType=NEWCOMPUTER
    OSInstall=YES
    SkipAdminPassword=YES
    SkipProductKey=YES
    SkipComputerBackup=YES
    SkipBitLocker=YES
    EventService=http://RDS-Ops:9800
    SkipBDDWelcome=YES
    WSUSServer=http://RDS-Ops:8530

    SkipTaskSequence=YES
    TaskSequenceID=Win81Ref

    SkipCapture=YES
    DoCapture=SYSPREP
    FinishAction=SHUTDOWN

    SkipComputerName=YES
    SkipDomainMembership=YES

    SkipLocaleSelection=YES
    KeyboardLocale=en-US
    UserLocale=en-US
    UILanguage=en-US

    SkipPackageDisplay=YES
    SkipSummary=YES
    SkipFinalSummary=NO
    SkipTimeZone=YES
    TimeZoneName=Central Standard Time

    SkipUserData=Yes
    SkipApplications=Yes
    Applications001 ={ec8fcd8e-ec1e-45d8-a3d5-613be5770b14}

    As I said in my last post, you might want to disable skipping the final summary screen (SkipFinalSummary=NO) to check it's all working (also don't forget to update the Deployment Share each time you do a test), and if I do that and then go into Windows Update on my reference computer I can see my updates..

    [screenshot: Windows Update on the reference computer showing the approved updates]
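
    Speaking of updating the Deployment Share for each test, that doesn't have to be a console click either; MDT will generate the equivalent PowerShell for you (the View Script button), which looks roughly like this, assuming my share lives at E:\DeploymentShare:

    # Update the deployment share (and regenerate the boot media if needed)
    Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
    New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "E:\DeploymentShare" | Out-Null
    Update-MDTDeploymentShare -Path "DS001:" -Verbose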

    So to sum up, I now have MDT set up to create a new deployment which includes any patches from my update server, and a sample application (Foxit Reader), so I can keep my VDI collections up to date by doing the following:

    1. Approve any updates that have come in to WSUS since I last looked at it OR  Auto approve those I want by product or classification with PowerShell
    2. Add in any new applications I want in the Deployment Workbench in MDT.
    3. Automatically build a VM from this deployment with the script in part 13 of this series which will sysprep and shutdown at the end of the task sequence.
    4. Either create a new collection with New-RDVirtualDesktopCollection or update an existing collection with Update-RDVirtualDesktopCollection, where the VM I just created is the Virtual Desktop Template (see the sketch below).
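
    That last step might look something like this – a sketch only, as the collection, template VM, host and connection broker names here are all hypothetical:

    # Refresh an existing pooled collection from the newly built Virtual Desktop Template
    Update-RDVirtualDesktopCollection -CollectionName "Contoso VDI" -VirtualDesktopTemplateName "RDS-Template" -VirtualDesktopTemplateHostServer "RDS-Host.contoso.com" -ConnectionBroker "RDS-Broker.contoso.com"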

    Obviously this would look a little nicer in Configuration Manager 2012R2 and I could use Orchestrator and other parts of System Center to sharpen this up but what this gives us is one approach to maintaining VDI which I hope you’ll have found useful.

  • Lab Ops - Part 14 Putting Applications into the VDI Template with MDT

    In my last post I created a process to quickly create what MDT refers to as a Reference Computer and directly boot this from the LiteTouchPE_x64.iso in the deployment share. This was then sysprepped and shut down so that I could use it to deploy VDI collections in Remote Desktop Services. Now I want to add in a simple example of an application deployment along with the OS – simple because I am only interested in the mechanics of the process in MDT rather than getting into any specifics about what is needed to install a particular application, e.g. command line switches, dependencies and licensing. If you are going to do this for real I would recommend creating a new VM, testing the installation process from the command line and tuning the switches etc. for that application accordingly before doing any work in MDT. There is already lots of help out there on the forums for the application you are interested in.

    I am going to use Foxit Enterprise Reader for this example, as it's not a Microsoft application and it's a traditional application, in that it's not what MDT calls a packaged application designed for the modern interface in Windows 8.1. It's also free (although you'll have to register to get the Enterprise Reader as an msi) and I actually use it anyway to read PDF documents. All of this is actually pretty easy to do, but I got caught out a few times and wading through the huge amount of MDT documentation can be time consuming, so I hope you'll find this useful. My steps will be:

    • Import the Foxit application into the Deployment Share in  the MDT Deployment Workbench
    • Modify the Task Sequence to deploy Foxit
    • Amend the Rules (aka CustomSettings.ini) of the Deployment Share properties to install Foxit without any intervention on my part.

    Import the Application. 

    To import an application in MDT all you need to do is navigate to the Applications folder in the Deployment Share, right click and select Import Application. However I thought it would be good to create folders for each software publisher, so I created a Foxit folder underneath Applications and then did right click -> Import Application from there. This looked OK in the Deployment Workbench, but actually there's no actual folder created on the Deployment Share. This is by design, and if you want to create your own physical folder structure then you should store the applications on a share you control and point MDT to them on that share rather than importing them, which is the "Application without source files or elsewhere on the network" option in the Import Application Wizard.

    Next I found that I couldn’t import a file only a folder  which I guess is typical for many applications so I stored the Foxit msi in its own folder before importing it. 

    The next thing that caught me out was the Command details. It's pretty easy to install an msi; for Foxit this would be msiexec /i EnterpriseFoxitReader612.1224_enu.msi /quiet. However the Working directory entry confused me, because MDT has the application now, so surely I could just leave this empty? Well no, and this is not a problem with MDT, rather it's because of the way I am using it. Anyway, I set the Working directory to the UNC path of the Foxit Reader folder (\\RDS-OPS\DeploymentShare$\Applications\Foxit Enterprise Reader in my case) and that worked.
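
    For what it's worth, the whole import can also be scripted with the MDT cmdlets. This is a sketch: I'm assuming the deployment share sits at E:\DeploymentShare and the msi source folder is C:\Source\Foxit, and the working directory matches the UNC path above:

    # Import Foxit into the deployment share from PowerShell
    Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
    New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "E:\DeploymentShare" | Out-Null
    Import-MDTApplication -Path "DS001:\Applications\Foxit" -Enable "True" -Name "Foxit Enterprise Reader" -ShortName "Enterprise Reader" -Publisher "Foxit" -Version "6.1.2" -CommandLine "msiexec /i EnterpriseFoxitReader612.1224_enu.msi /quiet" -WorkingDirectory ".\Applications\Foxit Enterprise Reader" -ApplicationSourcePath "C:\Source\Foxit" -DestinationFolder "Foxit Enterprise Reader"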

    Modify the Task Sequence

    I just used a standard Task Sequence template in my last post which already has a step in it to install an application, but where is it?    The answer turns out to be that it’s inside the State Restore folder ..

    [screenshot: the Install Applications step inside the State Restore folder of the task sequence]

    Anyway I changed the settings here to reference Foxit and all is well.

    Configure the Rules

    I didn’t think I need to make any changes to the rules (in my last post) as my deployment was already fully automated so I was surprised to be presented with a popup asking me to confirm which application I wanted install when I first tested this. So I needed to add in two more settings one to skip which application to install and another to actually select it.  However the Rules identify applications by GUID not by name, so I had to get the GUID from the general tab of the Application properties and enter it like this..

    SkipApplications=Yes

    Applications001 ={ec8fcd8e-ec1e-45d8-a3d5-613be5770b14}

    Your GUID will be different, and if you want more than one application then you would add more afterwards (Applications002=, Applications003= etc.).

    I also set SkipFinalSummary=No at this point as I wanted to see if everything was working before the VM switched off.

    Summary

    MDT also has the ability to deploy bundles of applications, and before you ask, you'll need to do something completely different for Office 365 and Office 2013 Pro Plus; my recommendation for a simple life would be to use Application Virtualization aka App-V. This is included in MDOP (the Microsoft Desktop Optimization Pack) and is one of the benefits of having Software Assurance. That's a topic for another day based on feedback and my doing a bit more research. Next up: the exciting world of patching VDI.