• How big should my OS drive be?

    My name is Michael Champion and I've been working in support at Microsoft for more than 12 years.  I have been asked by many customers, "What is the recommended size for the OS partition for Windows Server?"  There are minimum recommendations in the technical documentation (and release notes), but those recommendations are on the generic side.  There are times when that recommendation is fine, and other times when it is not.

    Take for example the Windows Server 2012 R2 disk recommendations.

    System Requirements and Installation Information for Windows Server 2012 R2

    Disk space requirements

    The following are the estimated minimum disk space requirements for the system partition.

    Minimum: 32 GB

    Be aware that 32 GB should be considered an absolute minimum value for successful installation. This minimum should allow you to install Windows Server 2012 R2 in Server Core mode, with the Web Services (IIS) server role. A server in Server Core mode is about 4 GB smaller than the same server in Server with a GUI mode. For the smallest possible installation footprint, start with a Server Core installation and then completely remove any server roles or features you do not need by using Features on Demand. For more information about Server Core and Minimal Server Interface modes, see Windows Server Installation Options.
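    The Features on Demand cleanup described above can be scripted. The following is a hedged sketch (the feature name used is only an example; list what is actually installed or staged on your server with Get-WindowsFeature first):

```powershell
# Sketch: shrinking the footprint with Features on Demand (Windows Server
# 2012 R2). Uninstall-WindowsFeature with -Remove deletes the feature's
# payload from the on-disk component store, not just the installation.
Import-Module ServerManager

# Show features that are staged on disk but not installed
Get-WindowsFeature | Where-Object { $_.InstallState -eq 'Available' }

# Example only: remove the payload of a feature you will never use
Uninstall-WindowsFeature -Name Desktop-Experience -Remove
```

    Removed payloads can be restored later from installation media or Windows Update, so this trades a little convenience for disk space.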

    The system partition will need extra space for any of the following circumstances:

      • If you install the system over a network.
      • Computers with more than 16 GB of RAM will require more disk space for paging, hibernation, and dump files.

    The trick here is that "minimum" is bolded, meaning that you could need more space, and the figure does not take into account your actual memory, what applications may be installed, etc.  While the documentation does state this, I can give you an idea of what disk space you should have available based on the role and hardware configuration of the server and other factors.

    Here are some good suggestions to follow when trying to calculate the size of an OS volume.

    • 3x RAM up to 32GB
    • 10-12GB for the base OS depending on roles and features installed
    • 10GB for OS Updates
    • 10GB extra space for miscellaneous files and logs
    • Any applications that are installed and their requirements. (Exchange, SQL, SharePoint,..)

    Taking the full 32GB of RAM, a simple OS build would require a drive about 127GB in size.  One may think this is too large for the OS when the minimum disk space requirement is 32GB, but let's break this down a bit...
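    As a sanity check, the rules of thumb above can be expressed as a small function. This is only a sketch of this article's suggestions, not an official requirement; the function name and defaults are my own:

```powershell
# Rule-of-thumb OS volume sizing from the list above.
function Get-RecommendedOSVolumeGB {
    param(
        [int]$RamGB,        # installed RAM in GB
        [int]$AppGB = 0     # space required by installed applications
    )
    $pagingAndDumps = 3 * [Math]::Min($RamGB, 32)  # 3x RAM, up to 32GB of RAM
    $baseOS  = 12   # base OS, depending on roles and features
    $updates = 10   # WinSxS growth from OS updates
    $misc    = 10   # miscellaneous files and logs
    $pagingAndDumps + $baseOS + $updates + $misc + $AppGB
}

Get-RecommendedOSVolumeGB -RamGB 32    # 128, i.e. roughly the 127GB drive above
```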

    Why 3x RAM?

    If you are using 32GB of RAM and you need to troubleshoot a bug check or hang issue, you will need a page file at least 100MB larger than the amount of RAM, as well as space for the memory dump.  Wait, that is just over 2x RAM...  There are also log files, like the event logs, that grow over time, and we may need to collect other logs that take up GBs of space depending on what we are troubleshooting and the verbosity of the data we need.

    10GB-12GB for the base OS?

    The base OS install size is about 10GB-12GB and that is just for the base files and depends on what roles and features are installed.

    10GB for OS Updates?

    If you are familiar with the WinSxS directory in the OS for 2008/R2 and up, this folder will grow as the server is updated over the life of the server.  We have made great strides in reducing the space taken up by the WinSxS folder but it still increases over time.

    10GB extra space for miscellaneous files and logs?

    This may seem to be covered by the 3x RAM, but many times people will copy ISOs, 3rd party install files, logs, and other things to the server.  It is better to have the space than not to have it.

    How much for server applications then?

    This part is variable and should be taken into consideration when purposing a server for a particular function.  For general server use, the 127GB can usually accommodate a single or even a dual purpose server.

    Thank You,
    Michael Champion
    Support Escalation Engineer

  • So what exactly is the CLIUSR account?

    From time to time, people stumble across the local user account called CLIUSR and wonder what it is.  While you really don’t need to worry about it, we will cover it in this blog for the curious.

    The CLIUSR account is a local user account created by the Failover Clustering feature when it is installed on Windows Server 2012 or later. Well, that’s easy enough, but why is this account here? Taking a step back, let’s take a look at why we are using this account.

    In the Windows Server 2003 and previous versions of the Cluster Service, a domain user account was used to start the Cluster Service. This Cluster Service Account (CSA) was used for forming the Cluster, joining a node, registry replication, etc. Basically, any kind of authentication that was done between nodes used this user account as a common identity.

    A number of support issues were encountered as domain administrators were pushing down group policies that stripped rights away from domain user accounts, not taking into consideration that some of those user accounts were used to run services. An example of this is the Logon as a Service right. If the Cluster Service account did not have this right, it was not going to be able to start the Cluster Service. If you were using the same account for multiple clusters, then you could incur production downtime across a number of critical systems. You also had to deal with password changes in Active Directory. If you changed the user accounts password in AD, you also needed to change passwords across all Clusters/nodes that use the account.

    In Windows Server 2008, we learned from this and redesigned the way we start the service to make it more resilient, less error prone, and easier to manage. We started using the built-in Network Service account to start the Cluster Service. Keep in mind that this is not the full-blown account, just a reduced privilege set. Changing to this reduced account was a solution for the group policy issues.

    For authentication purposes, the service was switched over to use the computer object associated with the Cluster Name, known as the Cluster Name Object (CNO), as a common identity. Because this CNO is a machine account in the domain, it will automatically rotate its password as defined by the domain’s policy (which is every 30 days by default).

    Great!! No more domain user account and its password changes we have to account for. No more trying to remember which Cluster was using which account. Yes!! Ah, not so fast my friend. While this solved some major pain, it did have some side effects.

    Starting in Windows Server 2008 R2, admins started virtualizing everything in their datacenters, including domain controllers. Cluster Shared Volumes (CSV) was also introduced and became the standard for private cloud storage. Some admins embraced virtualization completely and virtualized every server in their datacenter, adding domain controllers as virtual machines to a Cluster and using a CSV drive to hold the VHD/VHDX files of the VMs.

    This created a “chicken or the egg” scenario that many companies ended up in. In order to mount the CSV drive to get to the VMs, you had to contact a domain controller to authenticate the CNO. However, you couldn’t start the domain controller because it was running on the CSV.

    Having slow or unreliable connectivity to domain controllers also had an effect on I/O to CSV drives. CSV does intra-cluster communication via SMB, much like connecting to file shares. To connect with SMB, it needs to authenticate, and in Windows Server 2008 R2 that involved authenticating the CNO with a remote domain controller.

    For Windows Server 2012, we had to think about how we could take the best of both worlds and get around some of the issues we were seeing. We are still using the reduced Network Service privilege to start the Cluster Service, but now to remove all external dependencies we have a local (non-domain) user account for authentication between the nodes.

    This local “user” account is not an administrative account or a domain account. It is automatically created for you on each of the nodes when you create a cluster, or on a new node being added to the existing Cluster. The account is completely self-managed by the Cluster Service, which automatically rotates the password and synchronizes it across all the nodes for you. The CLIUSR password is rotated at the same frequency as the CNO, as defined by your domain policy (which is every 30 days by default). Because it is a local account, it can authenticate and mount CSV so the virtualized domain controllers can start successfully. You can now virtualize all your domain controllers without fear. So we are increasing the resiliency and availability of the Cluster by reducing external dependencies.

    This account is the CLIUSR account and is identified by its description.
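    If you are curious, you can see the account and its description for yourself on a cluster node. A quick sketch (Get-LocalUser ships with Windows PowerShell 5.1 and later; on older systems, net user works too):

```powershell
# On a Failover Cluster node: show the CLIUSR account and the description
# mentioned above ("Failover Cluster Local Identity").
Get-LocalUser -Name CLIUSR | Select-Object Name, Description, Enabled

# Equivalent on systems without the LocalAccounts module:
net user CLIUSR
```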


    One question that we get asked is if the CLIUSR account can be deleted. From a security standpoint, additional local accounts (not default) may get flagged during audits. If the network administrator isn’t sure what this account is for (i.e. they don’t read the description of “Failover Cluster Local Identity”), they may delete it without understanding the ramifications. For Failover Clustering to function properly, this account is necessary for authentication.


    1. Joining node starts the Cluster Service and passes the CLIUSR credentials across.

    2. All passes, so the node is allowed to join.

    There is one extra safeguard we built in to ensure continued success. If you accidentally delete the CLIUSR account, it will be recreated automatically when a node tries to join the Cluster.

    Short story… the CLIUSR account is an internal component of the Cluster Service. It is completely self-managing and there is nothing you need to worry about regarding configuring and managing it. So leave it alone and let it do its job.

    In Windows Server 2016, we will be taking this even a step further by leveraging certificates to allow Clusters to operate without any external dependencies of any kind. This allows you to create Clusters out of servers that reside in different domains or no domains at all. But that’s a blog for another day.

    Hopefully, this answers any questions you have regarding the CLIUSR account and its use.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • CROSS POST: How Shared VHDX Works on Server 2012 R2

    Not too long ago, there was a blog done by Matthew Walker that we felt needed to be on the AskCore site as well, due to the nature and the popularity of the article.  So we are cross-posting it here.  Please keep in mind that the latest changes/updates will be in the original blog post.

    CROSS POST: How Shared VHDX Works on Server 2012 R2

    Hi, Matthew Walker here, I’m a Premier Field Engineer here at Microsoft specializing in Hyper-V and Failover Clustering. In this blog I wanted to address creating clusters of VMs using Microsoft Hyper-V with a focus on Shared VHDX files.

    From the advent of Hyper-V we have supported creating clusters of VMs; however, the means of adding shared storage has changed. In Windows 2008/R2 we supported only iSCSI for shared volumes, in Windows Server 2012 we added the capability to use virtual fibre channel and SMB file shares depending on the workload, and finally in Windows Server 2012 R2 we added shared VHDX files.

    Shared Storage for Clustered VMs:

    Windows Version    iSCSI    Virtual Fibre Channel    SMB File Share    Shared VHDX
    2008/R2            Yes      No                       No                No
    2012               Yes      Yes                      Yes               No
    2012 R2            Yes      Yes                      Yes               Yes
    So this provides a great deal of flexibility when creating clusters of VMs that require shared storage. Not all clustered applications or services require shared storage, so review the requirements of your app to see. Clusters that might require shared storage would be file server clusters, traditional clustered SQL instances, or Distributed Transaction Coordinator (MSDTC) instances. Now to decide which option to use. These solutions all work with live migration, but not with items like VM checkpoints, host based backups, or VM replication, so they are pretty even there. If there is an existing infrastructure with an iSCSI or FC SAN, then one of those two may make more sense, as it works well with the existing processes for allocating storage to servers. SMB file shares work well, but only for a few workloads, as the application has to support data residing on a UNC path. This brings us to Shared VHDX.

    Available Options:

    Hyper-V Capability    Shared VHDX used    iSCSI Drives    Virtual Fibre Channel Drives    SMB Shares used in VM    Non-Shared VHD/X used
    Host based backups    No                  No              No                              No                       Yes
    VM Replication        No                  No              No                              No                       Yes
    Live Migration        Yes                 Yes             Yes                             Yes                      Yes

    Shared VHDX files are attached to the VMs via a virtual SCSI controller, so they show up in the guest OS as shared SAS drives, and they can be shared with multiple VMs, so you aren’t restricted to a two-node cluster. There are some prerequisites to using them, however.

    Requirements for Shared VHDX:

    2012 R2 Hyper-V hosts
    Shared VHDX files must reside on Cluster Shared Volumes (CSV)
    SMB 3.02

    It may be possible to host a shared VHDX on a vendor NAS if that appliance supports SMB 3.02 as defined in Windows Server 2012 R2. Just because a NAS supports SMB 3.0 is not sufficient; check with the vendor to ensure they support the shared VHDX components and that you have the correct firmware revision to enable that capability. Information on the different versions of SMB and their capabilities is documented in a blog by Jose Barreto that can be found here.

    Adding Shared VHDX files to a VM is relatively easy, through the settings of the VM you simply have to select the check box under advanced features for the VHDX as below.


    For SCVMM you have to deploy it as a service template and select to share the VHDX across the tier for that service template.


    And of course you can use PowerShell to create and share the VHDX between VMs.

    PS C:\> New-VHD -Path C:\ClusterStorage\Volume1\Shared.VHDX -Fixed -SizeBytes 30GB

    PS C:\> Add-VMHardDiskDrive -VMName Node1 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

    PS C:\> Add-VMHardDiskDrive -VMName Node2 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

    Pretty easy right?

    At this point you can set up the disks as normal in the VM and add them to your cluster, install whatever application is to be clustered in your VMs, and, if you need to, add additional nodes to scale out your cluster.

    Now that things are all set up, let’s look at the underlying architecture to see how we can get the best performance from our setup. Before we can get into the shared VHDX scenarios, we first need a brief look at how CSV works in general. If you want a more detailed explanation, please refer to Vladimir Petter’s excellent blogs, starting with this one.


    This is a simplified diagram of the way we handle data flow for CSV. The main points here are that access to the shared storage in this clustered environment is handled through the Cluster Shared Volume File System (CSVFS) filter driver and its supporting components; this system handles how we access the underlying storage. Because CSV is a clustered file system, we need this orchestration of file access. When possible, I/O travels a direct path to the storage, but if that is not possible, we redirect it over the network to a coordinator node. The coordinator node shows up in Failover Cluster Manager as the owner of the CSV.

    With Shared VHDX we also have to orchestrate shared file access. To achieve this, all I/O requests for a Shared VHDX are centralized and funneled through the coordinator node for that CSV. This results in I/O from VMs on hosts other than the coordinator node being redirected to the coordinator. This is different from a traditional VHD or VHDX file that is not shared.

    First let’s look at this from the perspective of a Hyper-V compute cluster using a Scale-Out File Server as our storage. For the following examples I have simplified things by bringing it down to two nodes and added a nice big red line to show the data path from the VM that currently owns our clustered workload. For my example I am making some assumptions: the workload being clustered is configured Active/Passive with a single shared VHDX file, and we are only concerned with the data flow to that single file from one node or the other. For simplicity I have called the VMs Active and Passive, just to indicate which one owns the Shared VHDX in the clustered VMs and is transferring I/O to the storage where the shared VHDX resides.


    So we have Node 1 in our Hyper-V cluster accessing the Shared VHDX over SMB, connecting to the coordinator node of the Scale-Out File Server (SOFS) cluster. Now let’s move the active workload.


    So even when we move the active workload, SMB and the CSVFS drivers will connect to the coordinator node in the SOFS cluster, so in this configuration our performance is going to be consistent. Ideally you should have high-speed connections between your SOFS nodes and on the network connections used by the Hyper-V compute nodes to access the shares: 10 Gb NICs or even RDMA NICs. Some examples of RDMA NICs are InfiniBand, iWARP, and RDMA over Converged Ethernet (RoCE) NICs.

    Now let’s change things up a bit and move the compute onto the same servers that are hosting the storage.


    As you can see, access to the VHDX is sent through the CSVFS and SMB drivers to reach the storage, and everything works like we expect as long as the active VM of the clustered VMs is on the same node as the coordinator node of the underlying CSV. So now let’s look at how the data flows when the active VM is on a different node.


    Here things take a different path than we might expect. Since SMB and CSVFS are an integral part of ensuring properly orchestrated access to the Shared VHDX, we send the data across the interconnects between the cluster nodes rather than straight down to storage. This can have a significant impact on your performance, depending on how you have scaled your connections.

    If the direct access to storage is a 4Gb fibre connection and the interconnect between nodes is a 1Gb connection, there is going to be a serious difference in performance when the active workload is not on the same node that owns the CSV. This is exacerbated when we have 8Gb or 10Gb of bandwidth to storage and the interconnects between nodes are only 1Gb. To help mitigate this behavior, make sure to scale up your cluster interconnects to match, using options such as 10 Gb NICs, SMB Multichannel, and/or RDMA-capable devices that will improve the bandwidth between the nodes.
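    To put rough numbers on that, redirected I/O is capped by the slowest hop on its path. A toy calculation with hypothetical link speeds:

```powershell
# Effective bandwidth to storage: direct I/O sees the storage link speed,
# while redirected I/O is also bottlenecked by the node interconnect.
function Get-EffectiveGbps {
    param(
        [double]$StorageLinkGbps,    # host-to-storage link (e.g. 4Gb FC)
        [double]$InterconnectGbps,   # node-to-coordinator interconnect
        [bool]$Redirected            # is I/O redirected via the coordinator?
    )
    if ($Redirected) {
        [Math]::Min($StorageLinkGbps, $InterconnectGbps)
    } else {
        $StorageLinkGbps
    }
}

Get-EffectiveGbps -StorageLinkGbps 4 -InterconnectGbps 1 -Redirected $false  # 4
Get-EffectiveGbps -StorageLinkGbps 4 -InterconnectGbps 1 -Redirected $true   # 1
```

    In other words, a 1Gb interconnect quietly turns a 4Gb storage path into a 1Gb one whenever the workload sits away from the coordinator node.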

    One final set of examples addresses concerns about scenarios where you may have an application active on multiple clustered VMs that are accessing the same Shared VHDX file. First let’s go back to the separate compute and storage nodes.


    And now to show how it goes with everything all together in the same servers.


    So we can even implement a scale out file server or other multi-access scenarios using clustered VMs.

    So the big takeaway here is more about understanding the architecture: knowing when you will see certain types of performance, and how to set proper expectations based on where and how we access the final storage repository for the shared VHDX. By moving some of the responsibility for handling access to the VHDX into SMB and CSVFS, we get a more flexible architecture and more options; but without proper planning and an understanding of how it works, there can be significant differences in performance based on what type of separation there is between the compute side and the storage side. For the best performance, ensure you have high-speed, high-bandwidth interconnects from the running VM all the way to the final storage by using 10 Gb or RDMA NICs, and try to take advantage of SMB Multichannel.

    --- Matthew Walker

  • CROSS POST: Windows 10, WindowsUpdate.log and how to view it with PowerShell or Tracefmt.exe

    Not too long ago, there was a blog done by Charles Allen that we felt needed to be on the AskCore site as well, due to the nature and the popularity of the article.  So we are cross-posting it here.  Please keep in mind that the latest changes/updates will be in the original blog post.

    Windows 10, WindowsUpdate.log and how to view it with PowerShell or Tracefmt.exe

    With the release of Windows 10, there is going to be a change to the way Operating System logs are created and how we can view them.

    For those of us who are supporting services like Configuration Manager and Windows Update Services to deploy Software Updates, this means a change to how we will look at the WindowsUpdate.log in Windows 10.
    The WindowsUpdate.log is still located in C:\Windows, however when you open the file C:\Windows\WindowsUpdate.log, you will only see the following information:

    In order to read the WindowsUpdate.log in Windows 10, you will need to perform one of two options.

    1) Decode the Windows Update ETL files
    2) Use Windows PowerShell cmdlet to re-create the WindowsUpdate.log the way we normally view it

    I am going to go ahead and start with option #2, the PowerShell cmdlet, as it is my personally preferred method.

    Using PowerShell Get-WindowsUpdateLog:

    1) On the Windows 10 device where you wish to read the WindowsUpdate.log, open PowerShell with Administrative rights (I prefer to use PowerShell ISE)
    2) Perform the command "PS C:\WINDOWS\system32> Get-WindowsUpdateLog", you will see the following occur:

    3) This will take a moment to complete. Once done, you will see a new WindowsUpdate.log file on the desktop

    Please note that the newly created WindowsUpdate.log file from running the Get-WindowsUpdateLog cmdlet is a static log file and will not update like the old WindowsUpdate.log unless you run the Get-WindowsUpdateLog cmdlet again.
    However, with some PowerShell magic, you can easily create a script that will update the log actively, allowing you to troubleshoot closer to real time with this method.
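    One possible version of that PowerShell magic is simply to regenerate the log on a timer. This is a sketch; the interval and output path are arbitrary choices:

```powershell
# Rebuild WindowsUpdate.log every 60 seconds so the file stays reasonably
# current while you troubleshoot. Stop with Ctrl+C.
while ($true) {
    Get-WindowsUpdateLog -LogPath "$env:USERPROFILE\Desktop\WindowsUpdate.log"
    Start-Sleep -Seconds 60
}
```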

    For more information on the Get-WindowsUpdateLog cmdlet, please go here:

    Decoding the Windows ETL files:

    The other option is to directly decode the ETL files to get the Windows Update information during troubleshooting. Let's go ahead and walk through this:

    1) Download the public symbols, if you have never done this, you can follow these instructions:
    2) Download the Tracefmt.exe tool by following these instructions:
    3) Open an elevated command prompt
    4) Create a temporary directory folder, example %systemdrive%\WULogs
    5) Navigate to the folder containing the Tracefmt.exe and then copy it to the folder you just created (example copy Tracefmt.exe to %systemdrive%\WULogs)
    6) Run the following commands at the command prompt you have open

    cd /d %systemdrive%\WULogs
    copy %windir%\Logs\WindowsUpdate\* %systemdrive%\WULogs\
    tracefmt.exe -o windowsupdate.log <Windows Update logs> -r c:\Symbols

    For the <Windows Update logs> syntax, you will want to replace this with the applicable log names, space-delimited. Example:

    cd /d %systemdrive%\WULogs
    copy %windir%\Logs\WindowsUpdate\* %systemdrive%\WULogs\
    tracefmt.exe -o windowsupdate.log Windowsupdate.103937.1.etl Windowsupdate.103937.10.etl -r c:\Symbols

  • Office Applications only print 1-2 pages

    Hello AskPerf!  My name is Susan, and today we are going to discuss an issue where printing through Office applications only produce 1-2 pages out of a multi-page document.

    For example, you have a Windows 2003/2008 Print Server with a driver such as the Lexmark Universal v2 PS3, and when Windows 8.1 clients attempt to print from Office applications, only the first page or two will print.

    Other symptoms you may observe:

      • You can print only 2 pages, for example pages 2-3 of a 10-page document
    • You print just fine out of other applications
    • If you print to PDF from Office, the files print as expected

    Cause

    There are two main causes of the behavior above.  The first is missing fonts on the Print Server: the buffer simply fills and overflows, and only ~2 pages will print.  The second cause is a legacy Bluetooth service being installed, along with its Office add-on component.

    Resolution #1

    Install the missing fonts on the Print Server.  You do not need to install Office, only the fonts.

    Here are the fonts that should be installed:

    Fonts that are installed with Microsoft Office 2013 products

    Fonts supplied with Office 2010

    Office 2010 printing errors with Calibri font when printing through a Windows Server 2003 or 2008 print server

    Resolution #2

    For the Bluetooth driver, there are two pieces: the service, and the add-on that is registered under the Office applications.  The add-on should be disabled for all Office applications under multiple registry keys.

    Option 1

    1. Check with the vendor to determine if there are any updates to your Bluetooth device.

    Option 2

    1. Uninstall Bluetooth

    a.      Please confirm it is completely uninstalled by checking MSCONFIG and running Services.msc

    b.      Next, you will need to modify registry keys in two locations and change the LoadBehavior value to 0, or delete it.

               For example: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\Outlook\Addins\BTMOffice.Connect

    BTMOffice.Connect is also loaded under the corresponding Addins key for Access, Excel, Project, Outlook, PowerPoint, and Word; repeat this for each application.


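    If you prefer to script the LoadBehavior change across the Office applications, here is a hedged sketch. Verify the exact add-in subkey name (BTMOffice.Connect here, taken from the example above) in your own registry first, and remember there may be a second location (e.g. the 32-bit Wow6432Node view) to cover:

```powershell
# Disable the Bluetooth Office add-in for every Office application where
# it is registered. Run from an elevated PowerShell prompt.
$apps = 'Access','Excel','Project','Outlook','PowerPoint','Word'
foreach ($app in $apps) {
    $key = "HKLM:\SOFTWARE\Microsoft\Office\$app\Addins\BTMOffice.Connect"
    if (Test-Path $key) {
        Set-ItemProperty -Path $key -Name LoadBehavior -Value 0
        Write-Host "Disabled add-in under $key"
    }
}
```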

    Option 3

    1. Disable Bluetooth as a test

    a.      Stop the service from running in Services.msc

    b.      Change the loadbehavior in the above registry keys to 0


  • Remote Desktop Licensing Service Stopping

    Hello AskPerf! My name is Matt Graham and I'll be discussing an issue that you may see on your RDS Licensing Server.

    SCENARIO You have both a 2008 R2 and a 2012 or 2012 R2 Licensing server in your RDS environment.  When you look in services.msc, you notice that the Remote Desktop Licensing service is stopped on the 2012 / 2012 R2 server.  You try to start it again, but after a short period of time (30 seconds to a few minutes) it stops on its own.  In fact, every time you try to start the service, it runs for a short time and then stops on its own.

    Alternatively, you may see this service crash.

    ISSUE This behavior is actually by design.  You cannot have a 2008 R2 and a 2012 / 2012 R2 License server in the same RDS environment.

    RESOLUTION If you are moving to a 2012 / 2012 R2 environment, then deactivate and decommission your 2008 R2 license server.  If you still want to have two or more license servers, you will need to build another matching 2012 / 2012 R2 license server.

    CONSIDERATION #1 We have seen at least one case where the 2012 License Manager Service still did not start even after removing the 2008 R2 License server.  In this case, the licensing server database had become corrupt.  If this happens, you can rebuild the database using the "Manage Licenses" wizard.

    WARNING If you do this, you will have to re-install your licenses after the rebuild. Be sure you have your licensing information.

    1.  Open your RD Licensing Manager, right click on your server and select Manage Licenses.

    2. Select Rebuild the license server database.

    3.  After this, you will need to have your Retail CAL pack or your EA information in order to reinstall your licenses.

    CONSIDERATION #2 In one case, a customer had to rename the "C:\Windows\System32\Lserver" folder, uninstall the RDS roles, reboot, and reinstall the RDS Licensing role in order to get the service to start again.  This should effectively do the same thing as rebuilding the license database, but I mention it because it was successful in at least one case.

    Finally, when you decommission your old 2008 R2 server, be sure to think through what that will entail for your session hosts.  You may need to take inventory of your session hosts and ensure that they are pointed to your 2012 / 2012 R2 license server if they aren't already pointed to it.
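    One way to take that inventory is via WMI. This sketch, run on a session host, lists the license servers that host is configured to use and, optionally, points it at the new one (the server name is an example):

```powershell
# Query the RDS licensing configuration of a session host.
$ts = Get-WmiObject -Namespace "root\cimv2\terminalservices" `
                    -Class Win32_TerminalServiceSetting

# Which license server(s) is this host currently using?
$ts.GetSpecifiedLicenseServerList()

# Example only: point the host at the new 2012 R2 license server instead
# $ts.SetSpecifiedLicenseServerList("LIC2012R2.contoso.com")
```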


  • Manage Developer Mode on Windows 10 using Group Policy

    Hi All, We’ve had a few folks want to know how to disable Developer Mode using Group Policy while still allowing side-loaded apps to be installed. Here is a quick note on how to do this. (A more AD-centric post from Linda Taylor is on its way.)
  • Windows 10 Volume Activation Tips


    Today’s blog is going to cover some tips around preparing your organization for activating Windows 10 computers using volume activation.

    Updating Existing KMS Hosts

    The first thing to do is to update your KMS host to support Windows 10 computers, using the following hotfix:

    3058168: Update that enables Windows 8.1 and Windows 8 KMS hosts to activate Windows 10


      • When downloading this fix, make sure to choose the correct operating system and architecture (x86 or x64) so you get the right update. There is an update for Windows 8/2012 and an update for Windows 8.1/2012 R2, so if you get a “The update is not applicable to your computer” message, you may have the incorrect version.
      • We are updating this KB title to reflect that 2012 and 2012 R2 are also supported.

    You may notice that Windows 7 and Windows Server 2008 R2 KMS hosts are not covered by this update. We are working on releasing an update to support Windows 7/Windows Server 2008 R2 but we would encourage everyone to update to a later KMS Host OS.

    Obtain Your Windows Server 2012 R2 for Windows 10 CSVLK

    Today there are two CSVLKs for Windows 10 available:

      • Windows 10 CSVLK: Can only be installed on a Windows 8 (with the above update), Windows 8.1 (with the above update), or Windows 10 KMS host, and only activates client operating systems
      • Windows Server 2012 R2 for Windows 10 CSVLK: Can only be installed on a Windows Server 2012 or 2012 R2 KMS host (with the above update installed), and activates both client and server operating systems

    Generally, most KMS hosts are set up on Server operating systems, so you need to get the Windows Server 2012 R2 for Windows 10 CSVLK. To find it, do the following:


    1. Log on to the Volume Licensing Service Center (VLSC).
    2. Click License.
    3. Click Relationship Summary.
    4. Click the License ID of your current Active License.
    5. After the page loads, click Product Keys.
    6. In the list of keys, locate “Windows Srv 2012 R2 DataCtr/Std KMS for Windows 10.”

    For example:


    The Windows 10 CSVLK is located in a different area of the website.


    If you have an open agreement, you will need to contact the VLSC support team to request your CSVLK.

    Once you obtain your key, you will need to install and activate it using the following steps:

    Cscript.exe %windir%\system32\slmgr.vbs /ipk <your CSVLK>
    Cscript.exe %windir%\system32\slmgr.vbs /ato

    After you install the key, you can run Cscript.exe %windir%\system32\slmgr.vbs /dlv and it will show the following:

    Description: Windows(R) Operating System, VOLUME_KMS_2012-R2_WIN10 channel

    Once installed and activated, this CSVLK will activate Windows 10 and all previous client and server volume license editions.

    Volume Activation Management Tool (VAMT 3.1)

    You should update to the latest version of VAMT, 3.1, which can be found in the Windows 10 ADK.

    Note: We are currently aware of two issues with VAMT 3.1 and Windows 10. The first issue: if you try to add the above CSVLK to VAMT, you will get the error “The specified product key is invalid, or is unsupported by this version of VAMT”. For additional information, see the following article:

    3094354: Can't add CSVLKs for Windows 10 activation to VAMT 3.1

    The second issue is that you cannot discover Windows 10 computers unless you use an IP address. This issue is still being investigated; check for the latest information by searching on VAMT.

    Proxy Servers

    If you are using a proxy server with basic authentication in your environment, you should review the following article for the list of exceptions you may have to add. The list of addresses has changed since previous operating systems.

    921471: Windows activation fails and may generate error code 0x8004FE33

    Additional info

    The Generic Volume License Keys (GVLKs) for Windows 10 editions can be found at the following link:

    Hope this helps with your volume activation.

    Scott McArthur
    Senior Supportability Program Manager

  • Unable to add file shares in a Windows 2012 R2 Failover Cluster

    My name is Chinmoy Joshi and I am a Support Escalation Engineer with the Windows Core team. I’m writing today to share information regarding an issue which I came across with multiple customers recently.

    Consider a two-node 2012 R2 Failover Cluster using shared disks to host a File Server role. To add shares to the File Server role, select the role and right-click on it to get the Add File Share option. The Add File Share option is also available along the far right column. Upon doing this, you may receive the error “There were errors retrieving file shares”, or the Add Share wizard gets stuck with “Unable to retrieve all data needed to run the wizard”.



    When starting the Add Share wizard, it will try to enumerate all current shares on the node and across the Cluster. There can be multiple reasons why Failover Cluster Manager would throw these errors. We will cover two of the known scenarios that can cause this.

    Scenario 1:

    Domain users/admins can be part of nested groups; meaning, they are in a group that is part of another group. As part of security, a token header is passed, and that header can become bloated. Bloated headers can occur when the user/admin is part of nested groups, or has been migrated from one domain to a new domain carrying older SIDs. In our case, the domain user was a member of a large number of Active Directory groups. There are three ways to resolve this:

    A)  Reduce the number of Active Directory groups the user is a member of,
    B)  Clean up the SID history, or
    C)  Modify the HTTP service registry with the following registry values:

    Caution: Please back up the registry before modifying it, in case you need to revert the changes.


    Note that these keys may not exist, so they will need to be created.
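    The screenshot listing the exact values did not survive, so as a sketch only: the values commonly recommended in the large-Kerberos-token guidance linked below are MaxFieldLength and MaxRequestBytes under the HTTP.sys parameters key (treat the specific data values here as assumptions and verify them against the linked article for your environment):

    ```
    rem Values are assumptions based on the Kerberos large-token guidance, not the lost screenshot.
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxFieldLength /t REG_DWORD /d 65534 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\HTTP\Parameters" /v MaxRequestBytes /t REG_DWORD /d 16777216 /f
    ```

    A reboot of the node is the simplest way to make HTTP.sys pick up the new values.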

    Here, the HTTP protocol uses Kerberos for authentication, and the token header generated was too large, throwing an error.  When this is the case, you will see the following event:

    Log Name: Microsoft-Windows-FileServices-ServerManager-EventProvider/Operational
    Source: Microsoft-Windows-FileServices-ServerManager-EventProvider
    Event ID: 0
    Level: Error
    Description: Exception: Caught exception Microsoft.Management.Infrastructure.CimException: The WinRM client received an HTTP bad request status (400), but the remote service did not include any other information about the cause of the failure.
       at Microsoft.Management.Infrastructure.Internal.Operations.CimSyncEnumeratorBase`1.MoveNext()
       at Microsoft.FileServer.Management.Plugin.Services.FSCimSession.PerformQuery(String cimNamespace, String queryString)
       at Microsoft.FileServer.Management.Plugin.Services.ClusterEnumerator.RetrieveClusterConnections(ComputerName serverName, ClusterMemberTypes memberTypeToQuery)


    Problems with Kerberos authentication when a user belongs to many groups

    Scenario 2:

    The second most popular reason for not being able to create the file shares is the WinRM policy being enabled with only the IPv4 filter set. When this is the case, you will see this in the wizard:


    To see if it is set on the Cluster nodes, go into the Local Security Policy from the Administrative Tools or Server Manager.  Once there, follow down the path to:

    If you go into the Group Policy Editor, it would be located at:

    Local Computer Policy
    Computer Configuration
    Administrative Templates
    Windows Components
    Windows Remote Management (WinRM)
    WinRM Service
    Allow remote server management through WinRM

    If it is enabled, open the policy and check whether the box for IPv6 has an asterisk in it.



    You will run into this error if only IPv4 is selected.  To resolve this, either disable the policy or also add an asterisk for IPv6.  For the change to take effect, you will need to reboot the system.  After the reboot, go back into the Group Policy Editor to see if the setting has been reverted.  If it has, you will need to check your domain policies and make the change there.
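    As a quick way to check where the filter stands on each node, you can read the policy's registry values directly with PowerShell. This is a sketch; the value names IPv4Filter and IPv6Filter are what the "Allow remote server management through WinRM" policy writes, and both should be "*" (or a matching range) when the policy is enabled:

    ```
    # Values written by the "Allow remote server management through WinRM" policy.
    # An empty or missing IPv6Filter reproduces the wizard error described above.
    Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WinRM\Service" |
        Select-Object IPv4Filter, IPv6Filter
    ```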

    Hope this helps you save time in resolving the issue, Good Luck!!

    Chinmoy Joshi
    Support Escalation Engineer


  • Windows 10 (RTM) RSAT tools now available…

    Hey Folks, quick post to let you know that the Windows 10 Remote Server Administration Tools are now available.

    Remote Server Administration Tools for Windows 10


  • Windows 10 Group Policy (.ADMX) Templates now available for download

    Hi everyone, Ajay here. I wanted to let you all know that we have released the Windows 10 Group Policy (.ADMX) templates on our download center as an MSI installer package. These .ADMX templates are released as a separate download package so you can manage more…
  • Windows 10 is coming!


    Hello folks, as I’m sure you already know, Windows 10 will be available tomorrow, July 29th.  With that said, we will be blogging some of the new features that our team will be supporting in this new OS.

    We will also blog about features that some of the other teams support, namely, how to manage Windows 10 notifications and upgrade options:

    How to manage Windows 10 notification and upgrade options

    Windows 10 landing page

    See you soon!


  • The New and Improved CheckSUR

    One of the most used and arguably most efficient tools that we utilize when troubleshooting servicing issues prior to Windows 8/Windows Server 2012 is the System Update Readiness tool (also known as CheckSUR). However, as we continue to improve our operating systems, we must continue to improve our troubleshooting tools as well. Thus, I want to introduce the “Updated CheckSUR”.

    Improvements for the System Update Readiness Tool in Windows 7 and Windows Server 2008 R2

    In short, previously CheckSUR would load its payload locally on the machine and run an executable to attempt to resolve any discrepancies it detected in the package store.

    With these improvements, the utility no longer carries a payload.  It also doesn't require the repeated downloads of the CheckSUR package that were previously necessary. The new CheckSUR package stays installed until removed by the user.

    I’m sure you’re wondering: without the payload, how will CheckSUR resolve any issues? After installing this patch and rebooting (which is required), the CheckSUR functionality is now exposed through the DISM command:

    DISM /Online /Cleanup-Image /Scanhealth

    This command should seem familiar if you have used DISM for troubleshooting on any Windows 8 or later operating system. There is, however, no /RestoreHealth or /CheckHealth with this update.  /ScanHealth provides the same functionality that /RestoreHealth does on Windows 8 and later OS's, and that the CheckSUR tool did previously.

    Another new feature is that CheckSUR will now also detect corruption on components for Internet Explorer.

    A few extra points to note:

    • The “Updated CheckSUR” runs only on Windows 7 SP1 and Windows Server 2008 R2 SP1
    • CheckSUR can only be run on an online OS
    • CheckSUR can be used as a proactive method by scheduling a task to run /ScanHealth at a desired time, to ensure that the system is periodically checked for corruption
    • The manual steps that could previously be used to run CheckSUR are no longer available with the update to CheckSUR
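    The proactive scheduling mentioned above can be sketched with schtasks; the task name and the weekly Sunday 3 AM schedule here are arbitrary choices, not part of the tool:

    ```
    rem Register a weekly corruption scan; results land in C:\Windows\Logs\CBS\CheckSUR.log as usual.
    schtasks /Create /TN "Weekly CheckSUR scan" /TR "dism.exe /Online /Cleanup-Image /ScanHealth" /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM
    ```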

    One of my favorite parts of the update is that the results are still logged in C:\Windows\Logs\CBS\CheckSUR.log, with the same layout and information surrounding its findings. I will be creating another article shortly that discusses steps to take when you encounter a CheckSUR.log with errors.

    Thank You
    Nicholas Debanm
    Support Escalation Engineer

  • Azure SNAT

    This post was contributed by Pedro Perez. Azure’s network infrastructure is quite different from your usual on-premises network, as there are different layers of software abstraction that work behind the curtains. I would like to talk today about more…
  • Building Windows Server Failover Cluster on Azure IAAS VM – Part 2 (Network and Creation)

    Hello, cluster fans. In my previous blog, Part 1, I talked about how to work around the storage block in order to implement a Windows Server Failover Cluster on an Azure IAAS VM. Now let’s discuss another important part: networking in a Cluster on Azure.

    Before that, you should know some basic concepts of Azure networking. Here are a few Azure terms we need to use to set up the Cluster.

    VIP (Virtual IP address): A public IP address that belongs to the cloud service. It is also the address of the Azure Load Balancer, which determines how network traffic should be directed before being routed to the VM.

    DIP (Dynamic IP address): An internal IP assigned by Microsoft Azure DHCP to the VM.

    Internal Load Balancer: It is configured to port-forward or load-balance traffic inside a VNET or cloud service to different VMs.

    Endpoint: It associates a VIP/DIP + port combination on a VM with a port on either the Azure Load Balancer for public-facing traffic or the Internal Load Balancer for traffic inside a VNET (or cloud service).

    You can refer to this blog for more details about those terms for Azure network:

    VIPs, DIPs and PIPs in Microsoft Azure

    OK, enough reading. Storage is ready and we know the basics of Azure networking; can we start building the Cluster? Yes!

    The first difference you will see is that you need to create the Cluster with one node and then add the other nodes as a next step. This is because the Cluster Name Object (CNO) cannot come online, since it cannot acquire a unique IP Address from the Azure DHCP service. Instead, the IP Address assigned to the CNO is a duplicate of the address of the node that owns the CNO. That IP fails as a duplicate and can never be brought online. This eventually causes the Cluster to lose quorum, because the nodes cannot properly connect to each other. To prevent the Cluster from losing quorum, you start with a one-node Cluster, let the CNO's IP Address fail, and then manually set up the IP address.


    The CNO DEMOCLUSTER is offline because the IP Address it is dependent on has failed. The failed address is the VM's DIP, which is where the CNO's IP was duplicated from.


    In order to fix this, we will need to go into the properties of the IP Address resource and change the address to another address in the same subnet that is not currently in use.

    To change the IP address, right-click on the resource, choose the Properties of the IP Address, and specify the new address.


    Once the address is changed, right-click on the Cluster Name resource and tell it to come online.


    Now that these two resources are online, you can add more nodes to the Cluster.

    Instead of using Failover Cluster Manager, the preferred method is to use the New-Cluster PowerShell cmdlet and specify a static IP during Cluster creation. When doing it this way, you can add all the nodes and use the proper IP Address from the get go and not have to use the extra steps through Failover Cluster Manager.

    Take the above environment as an example:

    New-Cluster -Name DEMOCLUSTER -Node node1,node2 -StaticAddress

    Note: The static IP Address that you assign to the CNO is not for network communication. Its only purpose is to bring the CNO online to satisfy the dependency. Therefore, you cannot ping that IP, cannot resolve its DNS name, and cannot use the CNO for management, since its IP is unusable.

    Now you’ve successfully created a Cluster. Let’s add a highly available role to it. For demo purposes, I’ll use the File Server role as an example, since this is the most common role that a lot of us understand.

    Note: In a production environment, we do not recommend a File Server Cluster in Azure because of cost and performance. Treat this example as a proof of concept.

    Different from a Cluster on-premises, I recommend you pause all other nodes and keep only one node up. This is to prevent the new File Server role from moving among the nodes, since the file server's VCO (Virtual Computer Object) will automatically be assigned a duplicate of the IP of the node that owns the VCO. This IP Address fails and prevents the VCO from coming online on any node. This is a similar scenario to the one for the CNO we just talked about.

    Screenshots are more intuitive.

    The VCO DEMOFS won’t come online because of the failed IP Address. This is expected, because the dynamic IP address duplicates the IP of the owner node.


    Manually edit the IP to a static, unused address; in this example, the whole resource group then comes online.


    But remember, that IP Address is the same kind of unusable IP address as the CNO's IP. You can use it to bring the resource online, but it is not a real IP for network communication. If this is a File Server, none of the VMs except the owner node of this VCO can access the File Share.  The way Azure networking works, it will loop the traffic back to the node it originated from.

    Show time. We need to utilize the Load Balancer in Azure so this IP Address is able to communicate with other machines, in order to achieve client-server traffic.

    Load Balancer is an Azure IP resource that can route network traffic to different Azure VMs. The IP can be a public-facing VIP, or internal only, like a DIP. Each VM needs to have the endpoint(s) so the Load Balancer knows where the traffic should go. In the endpoint, there are two kinds of ports. The first is a regular port, used for normal client-server communications. For example, port 445 is for SMB file sharing, port 80 is for HTTP, port 1433 is for MSSQL, etc. The other kind of port is a probe port. The default port number for this is 59999. The probe port's job is to find out which node is the active one hosting the VCO in the Cluster. The Load Balancer sends probe pings over TCP port 59999 to every node in the cluster, by default every 10 seconds. When you configure a role in a Cluster on an Azure VM, you need to find out what port(s) the application uses, because you will need to add those port(s) to the endpoint. Then, you add the probe port to the same endpoint. After that, you need to update the parameters of the VCO's IP address to include that probe port. Finally, the Load Balancer will do a similar port-forwarding task and route the traffic to the VM that owns the VCO. All of the above settings had to be completed using PowerShell at the time this blog was written.

    Note: At the time this blog was written and posted, Microsoft supports only one resource group in a cluster on Azure, in an Active/Passive model only. This is because the VCO’s IP can only use the Cloud Service IP address (VIP) or the IP address of the Internal Load Balancer. This limitation is still in effect, although Azure now supports the creation of multiple VIP addresses in a given Cloud Service.

    Here is the diagram for Internal Load Balancer (ILB) in a Cluster which can explain the above theory better:


    The application in this Cluster is a File Server. That’s why we have port 445, and why the VCO’s IP is the same as the ILB’s. There are three steps to configure this:

    Step 1: Add the ILB to the Azure cloud service.

    Run the following PowerShell commands on your on-premises machine which can manage your Azure subscription.

    # Define variables.

    $ServiceName = "demovm1-3va468p3" # the name of the cloud service that contains the VM nodes. Your cloud service name is unique; use the Azure portal or Get-AzureVM to find it.


    $ILBName = "DEMOILB" # newly chosen name for the new ILB

    $SubnetName = "Subnet-1" # subnet name that the VMs use in the VNet

    $ILBStaticIP = "" # static IP address for the ILB in the subnet

    # Add Azure ILB using the above variables.

    Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $SubnetName -ServiceName $ServiceName -StaticVNetIPAddress $ILBStaticIP

    # Check the settings.

    Get-AzureInternalLoadBalancer -ServiceName $ServiceName


    Step 2: Configure the load balanced endpoint for each node using ILB.

    Run the following PowerShell commands on your on-premises machine which can manage your Azure subscription.

    # Define variables.

    $VMNodes = "DEMOVM1", "DEMOVM2" # cluster nodes' names, separated by commas. Your nodes' names will be different.

    $EndpointName = "SMB" # newly chosen name of the endpoint

    $EndpointPort = "445" # public port to use for the endpoint for SMB file sharing. If the cluster is used for another purpose, e.g., HTTP, the port number needs to change to 80.

    # Add an endpoint with port 445 and probe port 59999 to each node. This will take a few minutes to complete. Pay attention to the ProbeIntervalInSeconds parameter; it controls how often the probe port checks which node is active.

    ForEach ($node in $VMNodes)
    {
        Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -LBSetName "$EndpointName-LB" -Protocol tcp -LocalPort $EndpointPort -PublicPort $EndpointPort -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -InternalLoadBalancerName $ILBName -DirectServerReturn $true | Update-AzureVM
    }


    # Check the settings.

    ForEach ($node in $VMNodes)
    {
        Get-AzureVM -ServiceName $ServiceName -Name $node | Get-AzureEndpoint | Where-Object {$_.Name -eq "smb"}
    }



    Step 3: Update the parameters of VCO’s IP address with Probe Port.

    Run the following PowerShell commands inside one of the cluster nodes if you are using Windows Server 2008 R2.

    # Define variables

    $ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

    $IPResourceName = "IP Address" # the IP Address resource name (use get-clusterresource | where-object {$_.resourcetype -eq "IP Address"} or the GUI to find the name)

    $ILBIP = "" # the IP Address of the Internal Load Balancer (ILB)

    # Update cluster resource parameters of VCO’s IP address to work with ILB.

    cluster res $IPResourceName /priv enabledhcp=0 overrideaddressmatch=1 address=$ILBIP probeport=59999  subnetmask=

    Run the following PowerShell commands inside one of the cluster nodes if you are using Windows Server 2012/2012 R2.

    # Define variables

    $ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

    $IPResourceName = "IP Address" # the IP Address resource name (use get-clusterresource | where-object {$_.resourcetype -eq "IP Address"} or the GUI to find the name)

    $ILBIP = "" # the IP Address of the Internal Load Balancer (ILB)

    # Update cluster resource parameters of VCO’s IP address to work with ILB

    Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"="59999";"SubnetMask"="";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

    You should see this window:


    Take the IP Address resource offline and bring it online again. Start the clustered role.

    Now you have an Internal Load Balancer working with the VCO’s IP. One last task involves the Windows Firewall: you need to at least open port 59999 on all nodes for probe port detection, or turn the firewall off. Then you should be all set. It may take about 10 seconds to establish the connection to the VCO the first time, or after you fail over the resource group to another node, because of the ProbeIntervalInSeconds we set up previously.
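    On Windows Server 2012/2012 R2 nodes, the probe-port exception can be sketched with the built-in firewall cmdlet; the rule name here is arbitrary:

    ```
    # Allow the Azure load balancer probe traffic on TCP 59999 on every node.
    New-NetFirewallRule -DisplayName "Azure ILB probe port" -Direction Inbound -Protocol TCP -LocalPort 59999 -Action Allow
    ```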

    In this example, the VCO has an internal IP address. If you want to make your VCO public-facing, you can use the Cloud Service’s IP Address (VIP). The steps are similar and easier, because you can skip Step 1 since this VIP is already an Azure Load Balancer. You just need to add the endpoint with a regular port plus the probe port to each VM (Step 2), then update the VCO’s IP in the Cluster (Step 3). Please be aware that your clustered resource group will be exposed to the Internet, since the VCO has a public IP. You may want to protect it by planning enhanced security methods.

    Great! Now you’ve completed all the steps of building a Windows Server Failover Cluster on an Azure IAAS VM. It is a bit of a longer journey; however, you’ll find it useful and worthwhile. Please leave me comments if you have questions.

    Happy Clustering!

    Mario Liu
    Support Escalation Engineer

  • Supported IP Protocols for Azure Cloud Services

    As of today, June 2015, the supported IP protocols for Azure cloud services are TCP (protocol number 6) and UDP (protocol number 17) only. All TCP and UDP ports are supported. This applies to both cloud service VIPs as well as instance-level public IPs more…
  • Building Windows Server Failover Cluster on Azure IAAS VM – Part 2 (Network)

    Hello, cluster fans. In my previous blog, I talked about how to work around the storage block in order to implement a Windows Server Failover Cluster on an Azure IAAS VM. Now let’s discuss another important part – networking in a cluster on Azure more…
  • Windows Server Failover Cluster on Azure IAAS VM – Part 1 (Storage)

    Hello, cluster fans. This is Mario Liu. I’m a Support Escalation Engineer on the Windows High Availability team in Microsoft CSS Americas. I have good news for you: starting in April 2015, Microsoft supports Windows Server Failover Cluster more…
  • Walkthrough on Session hint / TSVUrl on Windows Server 2012

    Hello AskPerf, my name is Naresh, and today we are going to discuss how to connect to a Windows 2012 Remote Desktop collection from thin clients or other clients that are not session-hint aware.

    You might be wondering what “session hints” are, so let us dig right into the need for them. The connection broker in Windows 2012/R2 has changed the way clients connect to a group of RDSH/RDVH servers; these were earlier called farms, but are now grouped as “collections” in Windows 2012/R2. With Windows 2012, we changed how the GUI looks, how we install different roles, and how these roles interact with each other. With all this, the flow of remote desktop connections and how a client connects to the endpoint servers changed as well.

    Classical way of connecting to RemoteApps - Windows Server 2008 R2

    In Windows 2008 R2 we deployed RemoteApps as:

    1. MSI files
    2. RDP files
    3. Connect through RDWeb

    To explain the connection flow I will walk you through the RDP file content of a RemoteApp in Windows 2008/R2 vs. Windows 2012/R2.

    This is how an RDP file for a RemoteApp would look in a 2008 R2 RDS environment:


    1. The client reads the full address (of the farm) and the RDGateway properties.
    2. If the client finds the RDGateway, it will authenticate against the gateway and based on the CAP and RAP policy the connection would be passed on.
    3. The Client would then do a DNS query for the full address (of the farm) – assuming this is a DNS Round Robin or the farm name is pointing to a NLB – and would try to connect to the RDSH server. (If there is a dedicated redirector, then one of them will receive this connection.)
    4. The RDSH (or the redirector) server receiving the connection would then contact the connection broker and if there is an existing disconnected session available for this user on an RDSH, the connection broker would send the details of the RDSH server back to the redirector. If there is no disconnected session, the connection broker would determine the best suited server as per the load balancing algorithms and would send the details of that server to the redirector.
    5. Redirector would in turn pass those details to the client and the client would then directly logon to the application on the assigned server. Session established.

    Change in the way we connect in 2012 -Session Hint / TSVUrl

    In a 2012/R2 environment the RDP file looks like this:


    1. In Windows 2012, the concept of farms has been deprecated and replaced by collections. However, unlike farms, collections do not have an entry in DNS. Therefore, the client reads the full address (which is of the connection broker that hosts the RDS deployment and collections) and the RDGateway properties.
    2. If the client finds the RDGateway, it will authenticate against the gateway and based on the CAP and RAP policy the connection would be passed on.
    3. The client would then do a DNS query for the full address, i.e. the connection broker in Windows 2012, and would try to connect to the RD Connection Broker. The term redirector is no longer used in Windows 2012; instead, the connection broker does the redirection. But how?

    What are session hints/TSVUrls ?


    If you look at the above RDP file, I have also highlighted the loadbalanceinfo, which consists of the TSVUrl. A TSVUrl, or session hint, indicates which collection in the deployment the client should connect to. So along with the full address and gateway information, the client also reads the loadbalanceinfo and sends it over to the connection broker.

    4. The connection broker then reads the TSVUrl to determine the collection name and then suggests which RD Session host participating in the collection should take the session based on whether there is an already existing session or not.

    5. If there is an existing session available for this user on an RDSH in that collection, the connection broker would send the details of the RDSH server back to the client. If there is no disconnected session, the connection broker would determine the best suited server within the collection as per the load balancing algorithms and would send the details of that server to the client.

    6. The client would then directly logon to the application on that assigned server. Session established.

    DefaultTsvUrl: workaround for incompatible RD clients

    However, what would happen if the RD client does not understand TSVUrls? The client would directly log on to the connection broker, but since the application is not hosted there, it would error out.

    We have seen a lot of customers not wanting to move to Windows 2012 Remote Desktop Services because they have clients that might not understand TSVUrls: old thin clients with old RD clients, some non-Windows clients, or some old Windows clients. I would highly recommend upgrading these clients to the latest version by getting in touch with the OEM vendor/manufacturer (in the case of old Windows fat clients, either install RDC 8 or later, or upgrade to an operating system that supports RDC 8), making sure they are TSVUrl-aware, given the many other benefits and features the latest RD client brings along. However, we do understand that some of our customers have genuine reasons to keep these clients, and that while planning and implementing an upgrade, one needs to run the show in the meantime with the non-compatible clients.

    For such cases, we can use the registry key below on the connection broker hosting the deployment.

    Important This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, click the following article number to view the article in the Microsoft Knowledge Base 322756 How to back up and restore the registry in Windows.

    The following tuning recommendation has been helpful in alleviating the problem:

    1. Start Registry Editor (Regedit.exe).

    2. Locate and then click the following key in the registry:

    HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings

    3. On the Edit menu, click Add Value, and then add the following registry value:

    Value name: DefaultTsvUrl
    Data type: REG_SZ
    Value data: tsv://<TSVURL>

    This registry value provides the connection broker with a default loadbalanceinfo in case the client was unable to read the loadbalanceinfo provided in the RemoteApp.
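    Equivalently, the value can be added from an elevated prompt; keep the <TSVURL> placeholder until you have looked up the real value:

    ```
    rem Add the DefaultTsvUrl value on the connection broker. <TSVURL> is a placeholder.
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings" /v DefaultTsvUrl /t REG_SZ /d "tsv://<TSVURL>"
    ```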

    To find the TSVUrl to set in DefaultTsvUrl, you can go to the following registry location on the connection broker:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\<CollectionName>\Applications\<RemoteApp>\

    RDPFileContents REG_SZ

    You can find the TSVUrl in the RDPFileContents of the collection you would like to set as your default, then configure it as your DefaultTsvUrl. You can then keep the show running while you upgrade to newer, compatible clients.
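    A sketch of pulling the tsv:// string out of RDPFileContents with PowerShell; the <CollectionName> and <RemoteApp> placeholders must be replaced with your own names:

    ```
    # Read the published RDP file content for one RemoteApp and extract the tsv:// URL.
    $key = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\<CollectionName>\Applications\<RemoteApp>"
    $rdp = (Get-ItemProperty -Path $key).RDPFileContents
    # The loadbalanceinfo line carries the tsv:// URL used as the session hint.
    ($rdp -split "\r?\n" | Where-Object { $_ -match "^loadbalanceinfo" }) -replace "^loadbalanceinfo:s:", ""
    ```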

    NOTE: This is suggested as an alternative/workaround when upgrading the client is not an option. Be aware of the following caveats:

    1. This value is only read when the client is unable to understand the tsvurl sent in the RDP file (from the RemoteApp) and thus does not present the tsvurl to the connection broker.
    2. Whenever such a client connects, DefaultTsvUrl sends it to the single collection specified in the registry value. DefaultTsvUrl can point to only one collection, so you may want to plan ahead and create a single collection for non-compatible clients that contains all of their required apps. There is no provision for defining multiple collections in this registry value, so incompatible clients cannot be used across multiple collections.
    3. If you change that collection, you will have to change the DefaultTsvUrl registry value as well.
    4. This registry value is only a workaround for tsvurls and will not work if the clients are not compatible with RemoteApp itself. It only provides a workaround for clients that were able to access RemoteApp programs in Windows Server 2008/R2 but cannot access them through collections, as explained in the section "Change in the way we connect in 2012 - Session Hint / TSVUrl".


  • Invitation to provide feedback through UserVoice

    I am not sure if everyone is aware of UserVoice, but I am here to tell you about it.  UserVoice is where you can provide feedback to the Microsoft Product Groups, who are now monitoring these forums.  Do you have an idea or suggestion on how to make Windows Server 2016 better, or a feature you would like to see added?  Well, speak up and let us know what you are thinking.

    There are multiple forums for providing this feedback; a listing of the various forums is below.  But first, here is how to get started with UserVoice and how the Windows Server Product Team will respond.

    How do you get started with UserVoice?

    1. Create a user account. (Enter contact information in case we need to ask follow-up questions.)
    2. Add your voice! (I wish… )
    3. Cast a vote for the ideas you like. You get 10 votes total!

    What ideas will be most considered by the Windows Server Product Team? 

    • Ideas with high vote counts will be considered heavily.
    • Clear and actionable ideas will be reviewed quickly.

    Caution: Do not create a single idea that contains multiple ideas; we need to understand your priorities. Please make sure they are separate ideas so we can see clear votes on each distinct idea. Combined ideas will likely be closed.

    Once the Windows Server Product Team has reviewed the idea, the idea status will change.

    Note: "Under Review" status means that the Windows Server Product Team is reviewing. It does not guarantee any deliverable.

    We will provide notification on all declined ideas.

    Each vote gets released when the idea is closed (either declined or completed).

    Now, as far as the various forums, here you go and let us know what you would like to see:

    General Feedback 
    Nano Server 
    Remote Management Tools 
    Security and Assurance 

    Also, if you are looking to provide feedback on Automation (PowerShell and Scripting), please provide your suggestions using our PowerShell Connect Site

    Remember, these sites are for feature suggestions and ideas only.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • Use Azure custom routes to enable KMS activation with forced tunneling

    Previously, if customers enabled forced tunneling on their subnets, OS activations prior to Windows Server 2012 R2 would fail because they could not reach the Azure KMS server from their cloud service VIP. However, thanks to the newly released Azure custom routes…
  • Microsoft Ignite sessions dealing with what we do in AskCore

    Early in the month of May, Microsoft held its Ignite Conference (formerly known as TechEd) in Chicago, Illinois.  This conference was a huge success, with over 23,000 attendees.  There are a lot of new things coming out with Windows 10 and Windows Server 2016 over the next year.  I wanted to provide you with some of the sessions that deal specifically with what our Core group supports (Failover Clustering, Storage, Hyper-V, and Deployment).  There were tons more sessions covering all other aspects of Microsoft Azure, applications (such as SQL, Exchange, Office 365), SCVMM, security, etc., but I wanted to pull out these specific sessions since they deal with what we deal with.

    Give them a look and see what the next version of Windows could do for you.  Each full session is approximately 75 minutes in length.  I also pulled out a few "Ignite Studios" productions that are in the 20-minute range.

    To see the full list of all sessions from Microsoft Ignite, please visit our Channel 9 site.


    Failover Clustering / Storage

    Stretching Failover Clusters and Using Storage Replica in Windows Server vNext
    In this session we discuss the deployment considerations of taking a Windows Server Failover Cluster and stretching across sites to achieve disaster recovery. This session discusses the networking, storage, and quorum model considerations. This session also discusses new enhancements coming in vNext to enable multi-site clusters.

    Deploying Private Cloud Storage with Dell Servers and Windows Server vNext
    The storage industry is going through strategic tectonic shifts. In this session, we’ll walk through Dell’s participation in the Microsoft Software Defined Storage journey and how cloud scale scenarios are shaping solutions. We will provide technical guidance for building Storage Spaces in Windows Server vNext clusters on the Dell PowerEdge R730xd platform.

    Exploring Storage Replica in Windows Server vNext
    Delivering business continuity involves more than just high availability, it means disaster preparedness. In this session, we discuss the new Storage Replica feature, including scenarios, architecture, requirements, and demos. Along with our new stretch cluster option, it also covers use of Storage Replica in cluster-to-cluster and non-clustered scenarios.

    Upgrading Your Private Cloud to Windows Server 2012 R2 and Beyond!
    We are moving fast, and want to help you to keep on top of the latest technology! This session covers the features and capabilities that will enable you to upgrade to Windows Server 2012 R2 and to Windows Server vNext with the least disruption. Understand cluster role migration, cross version live migration, rolling upgrades, and more.

    Overview of the Microsoft Cloud Platform System
    With the Microsoft Cloud Platform System, we are sharing our cloud design learnings from Azure datacenters, so customers can deploy and operate a cloud solution with Windows Server, Microsoft System Center and the Windows Azure Pack. This solution provides Infrastructure-as-a-Service and Platform-as-a-Service solutions for enterprises and service providers.

    Architectural Deep Dive into the Microsoft Cloud Platform System
    The Microsoft Cloud Platform System has an automated framework that keeps the entire stamp current from software to firmware to drivers across all Windows Server, Microsoft System Center, Windows Azure Pack, SQL Server and OEM/IHV and prevent disruptions to tenant and management workloads. This session covers the complete architecture for CPS and deployment in your datacenter.

    Platform Vision & Strategy (4 of 7): Storage Overview
    This is the fourth in a series of 7 datacenter platform overview sessions.

    StorSimple: Extending Your Datacenter into Microsoft Azure with Hybrid Cloud Storage
    StorSimple provides a hybrid cloud storage solution with a hybrid storage array in the on-premises datacenter that seamlessly extends storage capabilities to the cloud. This session details the implementation and functionality of the solution and discusses how the solution solves the issue of growing IT costs related to storage growth and management.

    Hyper-V Storage Performance with Storage Quality of Service
    Windows Server vNext allows you to centrally monitor and manage performance for Hyper-V workloads using Scale-Out File Servers. Learn how to monitor storage performance from a customer, Hyper-V, and storage admin’s viewpoint, then author effective policies to deliver the performance your customers need.

    Spaces-Based, Software-Defined Storage: Design and Configuration Best Practices
    Going well beyond a feature walkthrough, this session delves into the nuances and complexities of the spaces-based SDS design. Starting with the hardware selection and continuing up the stack, this session empowers you to successfully design, deploy, and configure a storage solution based completely on Windows Server 2012 R2 and proven best practices. Examples galore!


    Platform Vision & Strategy (2 of 7): Server Virtualization Overview
    Windows Server and Microsoft Azure are ushering in the next generation of computing for modern apps and cloud infrastructure. What are Containers? Nano Server? New in Hyper-V? Azure IaaS? Or how does this fit into Microsoft’s cloud strategy? Get the answers and more! Come learn about new capabilities in Windows Server, Hyper-V and Azure VMs.

    The Hidden Treasures of Windows Server 2012 R2 Hyper-V
    It's one thing to hear about and see a great demo of a Hyper-V feature. But how do you put them into practice? This session takes you through some of those lesser-known elements of Hyper-V that have made for great demonstrations, introduces you to some of the lesser-known features, and shows you best practices, how to increase serviceability and uptime, and design/usage tips for making the most of your investment in Hyper-V.

    Microsoft's New Windows Server Containers
    In this session, we cover what containers are, what makes them such an exciting technology, how they will work in Windows Server, and how Docker will integrate with them.

    An Insider’s Guide to Desktop Virtualization
    Ready to drink from a fire hose? In this highly energized session, learn about insights, best practices, and hear unfiltered thoughts about Desktop Virtualization, VDI, vendors, and solutions. Discussion topics include: VDwhy, VDCry, VDI Smackdown, building and designing a Microsoft VDI solution, and 3D graphics. Experience the Microsoft and Citrix Virtual Desktop solution with a huge amount of videos and demos. With unique content and insights, this session is fun and packed with great content for everyone interested in Desktop Virtualization—and some nice giveaways. A session you don’t want to miss!

    Shielded Virtual Machines

    Harden the Fabric: Protecting Tenant Secrets in Hyper-V
    In today’s environments, hosters need to provide security assurance to their tenants. "Harden the fabric" is a Windows Server and Microsoft System Center vNext scenario, which includes enhancements in Hyper-V, Virtual Machine Manager, and a new Guardian Server role that enables shielded VMs. These technologies ensure that host resources do not have access to the virtual machine or its data.

    Platform Vision & Strategy (5 of 7): Security and Assurance Overview
    Come learn how Microsoft is addressing persistent threats, insider breach, organized cyber crime and securing the Microsoft Cloud Platform (on-premises and connected services with Azure). This includes scenarios for securing workloads, large enterprise tenants and service providers.

    Shielded VMs and Guarded Fabric Validation Guide for Windows Server 2016
    This document provides you an installation and validation guide for Windows Server 2016 Technical Preview (build #10074) and System Center Virtual Machine Manager vNext for Guarded Fabric Hosts and Shielded VMs. This solution is designed to protect tenant virtual machines from compromised fabric administrators.

    Windows 10
    Top Features of Windows 10
    In this demo-heavy session, see why you need to start thinking: Windows 10. The answer to every question will be Windows 10, but what are the questions? How do you deliver a more secure standard operating environment? How do you make mobility familiar for all your users? What changes the deployment conversation? What changes the app conversation? How do you “mobilize” Win32 applications? What changes the way you manage device lifecycles? What changes how you buy your devices? There will be prizes, there will be fun and you’ll be ready, set for the rest of your Windows 10 experience at Microsoft Ignite.

    The New User Experience with Windows 10
    Are you ready for Windows 10? Well, it was designed and developed based on feedback from millions of people around the world, so we think you probably are! Join us as we show you how Windows 10 combines the familiar things you love with a modern touch. Get a deeper look at the user experience and discover new features. Find out how Windows 10 makes you more productive, celebrates a new generation of apps, and unlocks the power of hardware.

    Upgrading to Windows 10: In Depth
    With Windows 10, we are encouraging everyone, including organizations, to upgrade from their existing OS (Windows 7, Windows 8, or Windows 8.1). This upgrade process is easy and reliable, but how exactly does it work? In this session, we dig deep and explore how the process works to ensure that everything (apps, settings, data, drivers) is preserved.

    Windows 10: Ask the Experts
    We’ve talked a lot about Windows 10 already. In this session, we hold an open Q&A, hosted by the always-entertaining Mark Minasi, where you can ask anything about Windows 10. No questions are off limits. So if you’ve still got questions and are looking for answers, bring them to this session.

    Provisioning Windows 10 Devices with New Tools
    Runtime provisioning, a new feature in Windows 10, will help reduce the cost of deploying Windows PCs and devices such as tablets and phones. This new feature will enable IT professionals and system integrators to easily configure a general-purpose device during first boot or at runtime, without re-imaging, for the organization's use. In this session, we look at the new tools that enable these scenarios and explore the capabilities and deployment options for them.

    Overview of Windows 10 for Enterprises
    Windows 10 brings a wealth of new features and solutions to the enterprise. In this session, we explain the various security, management, and deployment features of Windows 10 along with showing you some of the new end-user features that will not only make your customers more productive but also delight them.

    Overview of Windows 10 for Education
    While Windows has always provided great learning outcomes for students and a comprehensive platform for teachers and administrators, there are several reasons why education customers in general should take notice of Windows 10. From the minimal learning curve user experience for mouse and keyboard users, to the familiar usability scaled across Windows 10 devices, teachers and students will be productive and comfortable from the start. In this session we explain how we are simplifying management and deployment, including in-place upgrades from Windows 7 or 8.1 and provisioning off-the-shelf devices without wiping and replacing images. Learn about benefits of the new, unified app store, allowing flexible distribution of apps.

    What's New in Windows 10 Management and the Windows Store
    Windows 10 continues to add new and improved management technologies, to ensure that Windows continues to be the best—and most flexible—operating system to manage. In this session, we talk about all the changes that are coming, including enhancements to built-in mobile device management protocols, new Windows Store and volume purchase program capabilities, sign-on capabilities with organizational IDs (Microsoft Azure Active Directory), sideloading and other app deployment enhancements, and new capabilities being added to other existing management technologies, such as PowerShell, WMI, etc.

    Windows Server 2016

    Nano Server
    Come hear about an important transformation in Windows Server: the new installation option called Nano Server. Nano Server is a deep rethink of the server architecture. The result is a new, lean cloud fabric host and application development platform with an image 20x smaller than Server Core, a reduced security attack surface, and fewer reboots!


    How Microsoft IT Deploys Windows 10
    Learn how Microsoft IT adopted and deployed Windows 10 internally using Enterprise Upgrade as the primary deployment method. This approach reduced deployment overhead by using System Center Configuration Manager Operating System Deployment (OSD) and upgrade, which resulted in significant reductions in helpdesk calls. In addition, we share how we are leveraging some of the new enterprise scenarios to delight users while securing the enterprise. You can realize similar benefits in your enterprise by adopting these best practices as you migrate from Windows 7 and 8.x to 10.

    Expert-Level Windows 10 Deployment
    Join us for a live demo on how to build a Windows deployment solution, based on Microsoft System Center Configuration Manager. In the session we are taking OS Deployment in Microsoft Deployment Toolkit and System Center Configuration Manager to its outer limits. Deployment tips, tricks, and hard core debugging in a single session. You can expect a lot of live demos in this session.

    Windows 10 Deployment: Ask the Experts
    Still have questions about Windows deployment, even after all the other sessions this week? For this session, we gather as many experts as we can find for a roundtable Q&A session, with plenty of “official” and “real-world” answers for everyone, troubleshooting and implementation advice, and probably a fair number of opinions and “it depends” answers as well.

    Preparing Your Infrastructure for Windows 10
    So you want to deploy Windows 10 in your organization? While many organizations will be able to do this with little impact, there are some scenarios and features that can impact existing server, management, and network infrastructures. In this session, we take a look at those impacts so you know what to expect.

    Deploying Windows 10: Back to Basics
    Are you new to Windows deployment, or maybe just rusty? In this session, we review the tools that are available, explain all the acronyms, and explore best practices for deploying Windows 10. During the process, we show all the key tools that we recommend for building and customizing Windows 10 images, deploying Windows 10 images, provisioning new computers, and migrating from older operating systems like Windows 7.

    What's New in Windows 10 Deployment
    With the upcoming release of Windows 10, there will be new and updated ways to deploy Windows. In this session, we review new recommendations for upgrading existing devices using a simple in-place upgrade process, provisioning tools for transforming new devices into ones ready for enterprise use, as well as updates to traditional deployment tools and techniques (ADK and beyond). We also talk about application compatibility, hardware requirements, and other common deployment questions.

    Deploying Microsoft Surface Pro 3 in the Enterprise
    You have chosen Surface Pro 3 for your organization. Now, get the tips and tricks directly from engineers who built it. This session offers useful information on how you can deploy, manage, and support these devices throughout your org like a jedi master.

    Troubleshooting Windows 10 Deployment: Top 10 Tips and Tricks
    Need help with troubleshooting Windows deployment issues? Johan and Mikael share lessons learned around handling device drivers in the deployment process, common deployment issues and their workarounds, parsing log files, WinPE and PXE troubleshooting, UEFI deployments. As a foundation, Microsoft Deployment Toolkit and Microsoft System Center Configuration Manager will be used. You can expect a lot of live demos, tips, and tricks in this session.

    Preparing for Windows 10 Deployment: Assessment, Compatibility, and Planning
    Before you can deploy Windows 10, you need to make sure your organization is ready. That requires information gathering, compatibility analysis, project management, and piloting – an iterative process. In this session, we talk about tools to help with common concerns around app and hardware compatibility, web compatibility, readiness for upgrades, and more.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • Azure DNS Server Redundancy

    Customers may observe that their PaaS role instances and IaaS virtual machines are issued only one DNS server IP address by DHCP. However, this does not mean that name resolution in Azure has a single point of failure. The Azure DNS infrastructure is…
  • What is the IP address

    The IP address is a virtual public IP address that is used to facilitate a communication channel to internal platform resources for the bring-your-own-IP Virtual Network scenario. Because the Azure platform allows customers to define any…