• Troubleshooting Activation Issues

    Today, Henry Chen and I are going to talk about troubleshooting some activation issues that we often run into.

    To begin, here is an article which talks about what Microsoft Product Activation is and why it is important. Also, this article explains KMS Activation.

    Now, let’s jump into some common activation scenarios.

    Scenario 1 - Security Processor Loader Driver

    You get error 0x80070426 when you try to activate a Windows 7 SP1 or Windows Server 2008 R2 SP1 KMS client by running slmgr /ato.


    When you try to start the Software Protection service, you will see this popup error.


    If you review the Application Event log, you will see Event 1001.

    Source:  Microsoft-Windows-Security-SPP
    Event ID:  1001
    Level:  Error
    Description:  The Software Protection service failed to start. 0x80070002

    To resolve this, make sure the Security Processor Loader Driver is started.

    1. Go to Device Manager.
    2. Click on View –> Show hidden devices
    3. Expand Non-Plug and Play Drivers



    In this case, it is disabled.  The startup type could also be Automatic, Demand, or System, with the driver not started.


    If it’s set to anything other than Boot, change the startup type to Boot and then start the driver.

    You could also, as shown below, change it from the registry by browsing to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\spldr, changing the Start value to 0, and rebooting.
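
    If you prefer the command line, the same registry change can be made from an elevated prompt. This is a minimal sketch of the change described above; verify the key path on your machine before applying it, and reboot afterwards.

    reg.exe add "HKLM\SYSTEM\CurrentControlSet\services\spldr" /v Start /t REG_DWORD /d 0 /f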


    If it fails to start, uninstall and re-install the driver and reboot your machine. In almost every case that we have seen, reinstalling the driver fixes the issue (i.e. you are able to start the driver).

    Once it’s started, you will be able to start Software Protection Service and then activate Windows successfully.

    Scenario 2 – Plug & Play

    When trying to activate using slmgr /ato, you get the following error even when running the command elevated:

    Windows Script Host

    Activating Windows Server(R), ServerStandard edition (68531fb9-5511-4989-97be-d11a0f55633f) ...Error: 0x80070005 Access denied: the requested action requires elevated privileges


    And the below is shown when you try to display activation information using slmgr /dlv:

    Windows Script Host

    Script: C:\Windows\system32\slmgr.vbs
    Line:   1131
    Char:   5
    Error:  Permission denied
    Code:   800A0046
    Source: Microsoft VBScript runtime error


    We do have an article which talks about the cause of the issue. While a missing permission is the root cause, we have seen instances where the GPO is not enabled and the permissions still do not appear to be correct. We also have a blog written by our Office team member on how to set the permissions using the command line, which we have found to be useful. We often combine both of these articles to resolve issues.

    First, to verify you have the right permissions, run the below command.

    sc sdshow plugplay

    Below is how the correct permissions should look:

    On Windows 7 SP1 or Windows Server 2008 R2 SP1

    (A;;CCLCSWLOCRRC;;;SU) <-------- This is the permission that seems to be missing in almost all instances.

    On a broken machine this is what we see.


    In order to set the correct permissions, run the following command as given in the blog for Office:
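
    The screenshot of the command is not available here. As a reference only, the commonly cited form sets the full default security descriptor on the service; the SDDL below is the Windows 7 SP1 / Windows Server 2008 R2 SP1 default, so compare it against the output of sc sdshow plugplay on a working machine before applying it.

    sc.exe sdset plugplay "D:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;SY)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCLCSWLOCRRC;;;IU)(A;;CCLCSWLOCRRC;;;SU)S:(AU;FA;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;WD)"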


    Then run sc sdshow plugplay to make sure the permissions have been set. Once they are set, you will be able to activate Windows successfully.

    There have also been instances where we have seen a combination of scenarios 1 and 2, so you might have to check that the spldr driver is started as well as the permissions on the plugplay service.

    On Windows Server 2012 R2

    When you run slmgr /ato on a machine that is domain joined, you get the below error. Other commands like slmgr /dlv work.

    Windows Script Host

    Activating Windows(R), ServerDatacenter edition (00091344-1ea4-4f37-b789-01750ba6988c) ...

    Error: 0x80070005 Access denied: the requested action requires elevated privileges


    This happens when the SELF account is missing access permission on COM Security.

    To add the permission back, type dcomcnfg in the Run box and hit OK.


    Under Component Services, expand Computers, right-click My Computer, and then click Properties.


    Click the COM Security tab, and then click Edit Default under Access Permissions.


    If SELF does not appear in the Group or user names list, click Add, type SELF, click Check Names, and then click OK.


    Click SELF, and then select the following check boxes in the Allow column:

    · Local Access

    · Remote Access


    Then click OK on Access Permission and then OK on My Computer Properties.

    Reboot the machine.

    Scenario 3 – Read-only attribute

    As in scenario 1, you may get error 0x80070426 when trying to activate Windows Server 2008 R2 SP1 or Windows 7 SP1.


    When trying to start the Software Protection service, you get an access is denied error message.


    To get more details on the error, we open the Application Event Log which shows the following error:

    Source: Microsoft-Windows-Security-SPP
    Event ID: 1001
    Level: Error
    Description: The Software Protection service failed to start. 0xD0000022

    To resolve this issue, browse to %windir%\system32 and make sure the following files have the Read-Only file attribute unchecked.
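
    As a sketch, you can also do this from an elevated PowerShell prompt: list the files in System32 that still have the attribute set, then clear it on the affected files. The file name below is a placeholder for the files shown in the screenshots.

    # List files in System32 that currently have the Read-Only attribute set
    Get-ChildItem "$env:windir\System32" -File | Where-Object { $_.IsReadOnly } | Select-Object Name

    # Clear the attribute on an affected file (<file name> is a placeholder)
    attrib.exe -R "$env:windir\System32\<file name>"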




    The Software Protection service should start now.

    Scenario 4 – Troubleshooting with Procmon

    Here, we will give an idea of how to use Procmon to troubleshoot an activation issue.

    Windows Server 2012 R2

    On a Windows Server 2012 R2 server, when we try to run any slmgr switches, we get the error below.


    When we try to start the Software Protection service, we get the following error.


    Launch Process Monitor and stop the capture by clicking on the Capture icon.


    Click on the Filter icon.


    Choose Process Name, is, type sppsvc.exe (Software Protection Service), and click Add.


    We will add another filter, so choose Result, contains, denied, and click Add, then OK.


    Start the capture by clicking on the Capture icon as shown above, and start the Software Protection service.

    Once you get the error, we should see entries similar to what is shown below. In this case it’s a folder, but it could be a registry path too, based on where we are missing permissions.


    Based on the result, it looks like we have a permission issue on C:\Windows\System32\spp\store\2.0. We could be missing permissions on any of the folders in the path.

    Usually we start with the last folder, so in this case it would be 2.0.

    Comparing permissions on the broken machine (left) and the working machine (right), we can see that sppsvc is missing.



    As you may have guessed, the next step is to add sppsvc back and give it full control.

    Click on Edit, and from Locations choose your local machine name. Then, under Enter the object names to select, type NT SERVICE\sppsvc, click on Check Names, and then OK.


    Make sure you give the service account Full control, click OK on the warning message, and then OK to close the Permissions box.
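
    If you prefer the command line, a minimal equivalent from an elevated prompt would look like the following, assuming the missing entry is on the 2.0 folder as in this example:

    icacls.exe "C:\Windows\System32\spp\store\2.0" /grant "NT SERVICE\sppsvc:(OI)(CI)F"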


    Now try starting the Software Protection service. It should start successfully, and you will be able to activate Windows.

    We hope this blog was useful in troubleshooting some of your activation issues.

    Saurabh Koshta
    Henry Chen

  • Errors Retrieving File Shares on Windows Failover Cluster

    Hi AskCore, Chinmoy here again. In today’s blog, I would like to share one more scenario, continuing from my previous blog on Unable to add file shares in Windows 2012 R2 Failover Cluster.

    This is about a WinRM setting that could lead to failures when adding file shares using Windows 2012/2012 R2 Failover Cluster Manager.

    Consider a two-node Windows Server 2012 R2 Failover Cluster using shared disks to host a File Server role. To access the shares, we click on the File Server role and go to the Shares tab at the bottom.  We see the error in the Information column next to the role:

    “There were errors retrieving the file shares.”


    There can be multiple reasons why Failover Cluster Manager would throw these errors. We will be covering one of the scenarios caused because of a WinRm configuration.


    We cannot add new shares using Failover Cluster Manager, but we can via PowerShell.  This may occur if WinRM is not correctly configured.  WinRM is the Microsoft implementation of the WS-Management protocol, and more can be found here.

    If we have WinRM configuration issues, we may even fail to connect to remote servers or other Cluster nodes using Server Manager, as shown below.


    The equivalent PowerShell cmdlet reports the below error:

    PS X:\> Enter-PSSession Hostname
    Enter-PSSession : Connecting to remote server hostname failed with the following error message : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". For more information, see the about_Remote_Troubleshooting Help topic.

    At line:1 char:1
    + Enter-PSSession hostname
    + ~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : InvalidArgument: (hostname:String) [Enter-PSSession], PSRemotingTransportException
        + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    The above is a sign of WinRM being unable to connect to the remote server.

    Let’s dig more, and check the event logs:

    Log Name: Microsoft-Windows-FileServices-ServerManager-EventProvider/Operational
    Event ID: 0
    Source: Microsoft-Windows-FileServices-ServerManager-EventProvider
    Description: Exception: Caught exception Microsoft.Management.Infrastructure.CimException: The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

    The above event states that there is a communication issue with the WinRM component. A quick way to configure WinRM is to run the command:

    winrm quickconfig

    This command starts the WinRM service and sets the service startup type to Auto-start. It also configures a listener for the ports that send and receive WS-Management protocol messages using either HTTP or HTTPS on any IP address. If it returns the following message:

    WinRM service is already running on this machine.
    WinRM is already set up for remote management on this computer.

    Then try running the below command:

    winrm id -r:ComputerName

    You may receive the following message if WinRM is not able to communicate with the WinRS client. It can also mean that the destination cannot be resolved because a loopback IP is configured in the IP Listen List for HTTP communications.


    You can validate whether a loopback adapter IP is configured in the IP Listen List for HTTP communication:
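
    The validation screenshot is not available here; assuming the standard HTTP configuration, the listen list can be inspected from an elevated prompt with:

    netsh.exe http show iplisten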


    To resolve this problem, run the below command:
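
    The exact command from the original screenshot is not shown here. Assuming the loopback address 127.0.0.1 is the entry to remove, it would look like:

    netsh.exe http delete iplisten ipaddress=127.0.0.1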


    After removing the loopback IP, we are able to add the file share successfully using the Failover Cluster console. Hope this helps to fix the issue. Good luck!

    Chinmoy Joshi
    Support Escalation Engineer

  • Using Repadmin with ADLDS and Lingering objects

    Hi! Linda Taylor here from the UK Directory Services escalation team. This time on ADLDS, Repadmin, lingering objects and even PowerShell…. The other day a colleague was trying to remove a lingering object in ADLDS. He asked me about which repadmin syntax more
  • When Special Pool is not so Special

    Hi Everyone.  Richard here in the UK GES team bringing you an interesting case we saw recently where Special Pool gave us some unexpected results, but ultimately still helped us track down the cause of the problem.   It started when we were more
  • “Administrative limit for this request was exceeded" Error from Active Directory

    Hello, Ryan Ries here with my first AskDS post! I recently ran into an issue with a particular environment where Active Directory and UNIX systems were being integrated. Microsoft has several attributes in AD to facilitate this, and one of those attributes more
  • SHA1 Key Migration to SHA256 for a two tier PKI hierarchy

    Hello. Jim here again to take you through the migration steps for moving your two tier PKI hierarchy from SHA1 to SHA256. I will not be explaining the differences between the two or the supportability / security implementations of either. That information more
  • Setting up Data Recovery Agent for Bitlocker

    You might have already read on TechNet and one of the other AskCore Blogs how to set up a Data Recovery Agent (DRA) for BitLocker. However, how do you request a certificate from an internal Certificate Authority (AD CS) to enable the Data Recovery Agent (DRA)? Naziya Shaikh and I have written detailed instructions here and hope they are helpful.

    So what is a Data Recovery Agent?

    Data recovery agents are individuals whose public key infrastructure (PKI) certificates have been used to create a BitLocker key protector, so those individuals can use their credentials to unlock BitLocker-protected drives. Data recovery agents can be used to recover BitLocker-protected operating system drives, fixed data drives, and removable data drives. However, when used to recover operating system drives, the operating system drive must be mounted on another computer as a data drive for the data recovery agent to be able to unlock the drive. Data recovery agents are added to the drive when it is encrypted and can be updated after encryption occurs.

    Below are the steps needed, from creating the certificate on the Certification Authority to using it on the client machine.

    The machines in use are:

    1. Windows Server 2012 R2 DC and CA

    2. Windows 10 Enterprise

    If we go to Windows 10 and try to request a DRA certificate, we cannot see it, as illustrated below:


    In order for the client to see a DRA certificate, we need to copy the Key Recovery Agent template and add BitLocker Drive Encryption and BitLocker Data Recovery Agent from the application policies.

    Here is how you do it.

    1. On a CA, we created a duplicate of the Key Recovery Agent template and named it BitLocker DRA.


    2. Add the BitLocker Drive Encryption and BitLocker Data Recovery Agent policies by going into Properties –> Extensions and editing Application Policies.


    3. In the CA Management Console, go into Certificate Templates and add BitLocker DRA as the template to issue.


    On a Windows 10 client, adding Certificate Manager to Microsoft Management Console:

    1. Click Start, click Run, type mmc.exe, and then click OK.

    2. In the File menu, click Add/Remove Snap-in.

    3. In the Add/Remove Snap-in box, click Add.

    4. In the Available Standalone Snap-ins list, click Certificates, and click Add.

    5. Click My user account, and click Finish.

    6. Then click OK.

    Then under Certificates –> Personal, right-click on Certificates –> All Tasks –> Request New Certificate


    These are the Certificate Enrollment steps


    Click Next; in our case, we have Active Directory Enrollment Policy


    Click Next and you will see the BitLocker DRA certificate which we created above.


    Select BitLocker DRA and click Enroll.

    This is what it looks like.


    The next steps are pretty much the same as given in this Blog. We will need to export the certificate to be used across all the machines.

    To accomplish this, right click on the certificate above and choose Export.


    This will bring up the export wizard.


    On the Export Private Key page, leave the default selection of No, do not export the private key.


    On the Export File Format page, leave the default selection of DER encoded binary X.509 (.CER).


    The next window specifies the location and file name of the certificate you are exporting.  In my case below, I chose to save it to the desktop.


    Click Finish to complete the wizard.


    The next step will be to import that certificate into our BitLocker GPO so it can be used. In this example, I have a GPO called BitLocker DRA.

    Under Computer Configuration –> Policies –> Windows Settings –> Security Settings –> Public Key Policies, right-click BitLocker Drive Encryption –> Add Data Recovery Agent


    This will start the Add Data Recovery Agent wizard.


    Click Browse Folders and point it to the location where you saved the certificate. My example above used the desktop, so I selected it from there.


    Double click on the certificate to load it.


    Click Next and Finish.

    You will see the certificate imported successfully.


    Additionally, make sure that you have the below GPO enabled.  In Group Policy Editor, expand Computer Configuration –> Administrative Templates –> Windows Components –> BitLocker Drive Encryption and ensure Enabled is selected.


    Running manage-bde to get the status on the client where you enabled BitLocker, you will see Data Recovery Agent (Certificate Based), showing it is currently set.
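
    A minimal sketch of that check, assuming the OS drive is C:; the Key Protectors section of the output should list Data Recovery Agent (Certificate Based):

    manage-bde.exe -status C:
    manage-bde.exe -protectors -get C: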



    Saurabh Koshta
    Naziya Shaikh

  • How to convert Windows 10 Pro to Windows 10 Enterprise using ICD

    Windows 10 makes life easier and brings a lot of benefits in the enterprise world. Converting Windows 10 without an ISO image or DVD is one such benefit. My name is Amrik and in this blog, we’ll take an example of upgrading Windows 10 Professional edition to Windows 10 Enterprise edition.

    Let’s consider a scenario wherein you purchase a few computers. These computers come pre-installed with Windows 10 Pro, and you would like to convert them to Windows 10 Enterprise.

    The simpler way is to use DISM servicing option:

    Dism /online /Set-Edition:Enterprise /AcceptEula /ProductKey:12345-67890-12345-67890-12345

    For more information on DISM Servicing, please review:

    The above may be a good option if you have a single computer or a few computers. But what if you’ve got hundreds of computers to convert?

    To make your life easier, you may want to use the Windows Imaging and Configuration Designer (ICD). You can get the Windows ICD as part of the Windows 10 Assessment and Deployment Kit (ADK), which is available for download here.

    With the help of ICD, admins can create a provisioning package (.ppkg) which can help configure Wi-Fi networks, add certificates, connect to Active Directory, enroll a device in Mobile Device Management (MDM), and even upgrade Windows 10 editions - all without the need to format the drive and reinstall Windows.

    Install Windows ICD from the Windows 10 ADK

    The Windows ICD relies on some other tools in the ADK, so you need to select the options to install the following:

    • Deployment Tools
    • Windows Preinstallation Environment (Windows PE)
    • Imaging and Configuration Designer (ICD)

    Before proceeding any further, let’s ensure you understand the prerequisite:

    • You have the required licenses to install Windows 10 Enterprise.

    The below steps require KMS license keys. You cannot use MAK license keys to convert. Since you are using KMS keys to do the conversion, you need to have a KMS host capable of activating Windows 10 computers, or you will need to change to a MAK key after the upgrade is complete.

    Follow below steps to convert:


    Click on the File menu and select New Project.

    It will ask you to enter the following details. You may name the package as per your convenience and save it to a different location if you would like.


    Navigate to the path Runtime Settings –> EditionUpgrade –> UpgradeEditionWithProductKey


    Enter the product key (use the KMS client key for Windows 10 Enterprise, available here).

    Click on File –> Save.

    Click on Export –> Provisioning Package.

    The above step will build the provisioning package.


    In the screenshot below, if you want to protect the package with a password or a certificate, you may set that up.


    Select any location to save the provisioning package.


    Once complete, it will give a summary of all the choices selected. Now, we just need to click the Build button.



    Navigating to the above folder will open the location below. Note that the .ppkg file has been created, which we will use to upgrade Windows 10 Professional.


    We now need to connect the Windows 10 Professional machine to the above share and run the .ppkg file.

    Here is a screenshot from before I ran the package, which shows that the machine is running Windows 10 Professional:


    Run the file “Upgrade_Win10Pro_To_Win10Ent.ppkg” to complete the upgrade process.


    After double-clicking the .ppkg file, we will get a warning prompt similar to UAC, as below:


    Just select “Yes, add it” and proceed. After this, we need to wait while the system prepares for the upgrade.



    After the upgrade is complete, the machine will reboot, the OS will be Windows 10 Enterprise, and we get the below screen as confirmation:


    And this is where we confirm that the upgrade is successful:


    The .ppkg file can be sent to the user through email. The package can be located on an internal share and run from there, or copied to a USB drive and run from that drive.
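
    The package can also be applied from a command line instead of double-clicking it, which is handy for scripting. A minimal sketch using DISM, assuming the package was copied to C:\Temp (the path is an assumption; the file name is from the example above):

    dism.exe /Online /Add-ProvisioningPackage /PackagePath:"C:\Temp\Upgrade_Win10Pro_To_Win10Ent.ppkg"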

    A couple of ways to automate the above process:

    • Use MDT by adding the option to Install Applications under Add –> General tab.
    • Use SCCM by following steps mentioned in the blog below:

    Apply a provisioning package from a SCCM Task Sequence

    Amrik Kalsi
    Senior Support Engineer

  • We Are Hiring – North Carolina and Texas

    Would you like to join the world’s best and most elite debuggers to enable the success of Microsoft solutions?   As a trusted advisor to our top customers you will be working with the most experienced IT professionals and developers in the industry more
  • How big should my OS drive be?

    My name is Michael Champion and I've been working in support for more than 12 years here at Microsoft.  I have been asked by many customers "What is the recommended size for the OS partition for Windows Server?".  There are minimum recommendations in the technical documentation (and release notes), but those recommendations are more on the generic side.  There are times when that recommendation is fine, but other times they are not.

    Take for example the Windows Server 2012 R2 disk recommendations.

    System Requirements and Installation Information for Windows Server 2012 R2

    Disk space requirements

    The following are the estimated minimum disk space requirements for the system partition.

    Minimum: 32 GB

    Be aware that 32 GB should be considered an absolute minimum value for successful installation. This minimum should allow you to install Windows Server 2012 R2 in Server Core mode, with the Web Services (IIS) server role. A server in Server Core mode is about 4 GB smaller than the same server in Server with a GUI mode. For the smallest possible installation footprint, start with a Server Core installation and then completely remove any server roles or features you do not need by using Features on Demand. For more information about Server Core and Minimal Server Interface modes, see Windows Server Installation Options.

    The system partition will need extra space for any of the following circumstances:

      • If you install the system over a network.
      • Computers with more than 16 GB of RAM will require more disk space for paging, hibernation, and dump files.

    The trick here is that "minimum" is bolded, meaning that you could need more space; it does not take into account your actual memory, what applications may be installed, etc.  While it does state this, I can give you an idea, based on the role and hardware configuration of the server and other factors, of what disk space you should have available.

    Here are some good suggestions to follow when trying to calculate the size of an OS volume.

    • 3x RAM up to 32GB
    • 10-12GB for the base OS depending on roles and features installed
    • 10GB for OS Updates
    • 10GB extra space for miscellaneous files and logs
    • Any applications that are installed and their requirements. (Exchange, SQL, SharePoint,..)

    Taking the full 32GB of RAM, a simple OS build would require a drive about 127GB in size.  One may think this is too large for the OS when the minimum disk space requirement is 32GB, but let's break this down a bit...
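
    Here is the arithmetic behind that figure, using the suggestions above:

    3 x 32GB RAM                  =  96GB
    Base OS                       =  12GB
    OS updates                    =  10GB
    Miscellaneous files and logs  =  10GB
    Total                         = 128GB (roughly a 127GB drive)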

    Why 3x RAM?

    If you are using 32GB of RAM and you need to troubleshoot a bug check or hang issue, you will need a page file at least 100MB larger than the amount of RAM as well as space for the memory dump.  Wait, that is just over 2x RAM... There are other log files like the event logs that will grow over time and we may need to collect other logs that will take up GB of space depending on what we are troubleshooting and the verbosity of the data we need.

    10GB-12GB for the base OS?

    The base OS install is about 10GB-12GB just for the base files, depending on which roles and features are installed.

    10GB for OS Updates?

    If you are familiar with the WinSxS directory in the OS for 2008/R2 and up, this folder will grow as the server is updated over the life of the server.  We have made great strides in reducing the space taken up by the WinSxS folder but it still increases over time.

    10GB extra space for miscellaneous files and logs?

    This may seem to be covered in the 3x RAM, but many times people will copy ISOs, 3rd party install files or logs, and other things to the server.  It is better to have the space than not to have it.

    How much for server applications then?

    This part is variable and should be taken into consideration when purposing a server for a particular function.  In general server use, the 127GB can usually accommodate a single or even a dual purpose server.

    Thank You,
    Michael Champion
    Support Escalation Engineer

  • So what exactly is the CLIUSR account?

    From time to time, people stumble across the local user account called CLIUSR and wonder what it is. While you really don’t need to worry about it, we will cover it for the curious in this blog.

    The CLIUSR account is a local user account created by the Failover Clustering feature when it is installed on Windows Server 2012 or later. Well, that’s easy enough, but why is this account here? Taking a step back, let’s take a look at why we are using this account.

    In the Windows Server 2003 and previous versions of the Cluster Service, a domain user account was used to start the Cluster Service. This Cluster Service Account (CSA) was used for forming the Cluster, joining a node, registry replication, etc. Basically, any kind of authentication that was done between nodes used this user account as a common identity.

    A number of support issues were encountered as domain administrators were pushing down group policies that stripped rights away from domain user accounts, not taking into consideration that some of those user accounts were used to run services. An example of this is the Logon as a Service right. If the Cluster Service account did not have this right, it was not going to be able to start the Cluster Service. If you were using the same account for multiple clusters, then you could incur production downtime across a number of critical systems. You also had to deal with password changes in Active Directory. If you changed the user account’s password in AD, you also needed to change passwords across all Clusters/nodes that use the account.

    In Windows Server 2008, we learned from this and redesigned the way we start the service to make it more resilient, less error prone, and easier to manage. We started using the built-in Network Service to start the Cluster Service. Keep in mind that this is not the full blown account, just simply a reduced privileged set. Changing to this reduced account was a solution for the group policy issues.

    For authentication purposes, it was switched over to use the computer object associated with the Cluster Name, known as the Cluster Name Object (CNO), for a common identity. Because this CNO is a machine account in the domain, it will automatically rotate the password as defined by the domain’s policy (which is every 30 days by default).

    Great!! No more domain user account password changes to account for. No more trying to remember which Cluster was using which account. Yes!! Ah, not so fast my friend. While this solved some major pain, it did have some side effects.

    Starting in Windows Server 2008 R2, admins started virtualizing everything in their datacenters, including domain controllers. Cluster Shared Volumes (CSV) was also introduced and became the standard for private cloud storage. Some admins completely embraced virtualization and virtualized every server in their datacenter, including adding domain controllers as virtual machines to a Cluster and utilizing a CSV drive to hold the VHD/VHDX of the VMs.

    This created a “chicken or the egg” scenario that many companies ended up in. In order to mount the CSV drive to get to the VMs, you had to contact a domain controller to authenticate the CNO. However, you couldn’t start the domain controller because it was running on the CSV.

    Having slow or unreliable connectivity to domain controllers also had an effect on I/O to CSV drives. CSV does intra-cluster communication via SMB, much like connecting to file shares. To connect with SMB, it needs to authenticate, and in Windows Server 2008 R2, that involved authenticating the CNO with a remote domain controller.

    For Windows Server 2012, we had to think about how we could take the best of both worlds and get around some of the issues we were seeing. We are still using the reduced Network Service privilege to start the Cluster Service, but now, to remove all external dependencies, we have a local (non-domain) user account for authentication between the nodes.

    This local “user” account is not an administrative account or domain account. This account is automatically created for you on each of the nodes when you create a cluster or on a new node being added to the existing Cluster. This account is completely self-managed by the Cluster Service and handles automatically rotating the password for the account and synchronizing all the nodes for you. The CLIUSR password is rotated at the same frequency as the CNO, as defined by your domain policy (which is every 30 days by default). With it being a local account, it can authenticate and mount CSV so the virtualized domain controllers can start successfully. You can now virtualize all your domain controllers without fear. So we are increasing the resiliency and availability of the Cluster by reducing external dependencies.

    This account is the CLIUSR account and is identified by its description.


    One question that we get asked is if the CLIUSR account can be deleted. From a security standpoint, additional local accounts (not default) may get flagged during audits. If the network administrator isn’t sure what this account is for (i.e. they don’t read the description of “Failover Cluster Local Identity”), they may delete it without understanding the ramifications. For Failover Clustering to function properly, this account is necessary for authentication.


    1. The joining node starts the Cluster Service and passes the CLIUSR credentials across.

    2. If everything passes, the node is allowed to join.

    There is one extra safeguard we added to ensure continued success. If you accidentally delete the CLIUSR account, it will be recreated automatically when a node tries to join the Cluster.

    Short story… the CLIUSR account is an internal component of the Cluster Service. It is completely self-managing and there is nothing you need to worry about regarding configuring and managing it. So leave it alone and let it do its job.

    In Windows Server 2016, we will be taking this even a step further by leveraging certificates to allow Clusters to operate without any external dependencies of any kind. This allows you to create Clusters out of servers that reside in different domains or no domains at all. But that’s a blog for another day.

    Hopefully, this answers any questions you have regarding the CLIUSR account and its use.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • CROSS POST: How Shared VHDX Works on Server 2012 R2

    A while back, there was a blog done by Matthew Walker that we felt needed to also be on the AskCore site due to the nature and the popularity of the article.  So we are going to cross post it here.  Please keep in mind that the latest changes/updates will be in the original blog post.

    CROSS POST: How Shared VHDX Works on Server 2012 R2

    Hi, Matthew Walker here, I’m a Premier Field Engineer here at Microsoft specializing in Hyper-V and Failover Clustering. In this blog I wanted to address creating clusters of VMs using Microsoft Hyper-V with a focus on Shared VHDX files.

    From the advent of Hyper-V we have supported creating clusters of VMs; however, the means of adding shared storage has changed. In Windows 2008/R2 we only supported using iSCSI for shared volumes, with Windows Server 2012 we added the capability to use virtual fibre channel and SMB file shares depending on the workload, and finally in Windows Server 2012 R2 we added shared VHDX files.

    Shared Storage for Clustered VMs:

    Storage Option              2008/R2   2012   2012 R2

    iSCSI                       Yes       Yes    Yes
    Virtual Fibre Channel       No        Yes    Yes
    SMB File Share              No        Yes    Yes
    Shared VHDX                 No        No     Yes

    So this provides a great deal of flexibility when creating clusters that require shared storage with VMs. Not all clustered applications or services require shared storage so you should review the requirements of your app to see. Clusters that might require shared storage would be file server clusters, traditional clustered SQL instances, or Distributed Transaction Coordinator (MSDTC) instances. Now to decide which option to use. These solutions all work with live migration, but not with items like VM checkpoints, host based backups or VM replication, so pretty even there. If there is an existing infrastructure with iSCSI or FC SAN, then one of those two may make more sense as it works well with the existing processes for allocating storage to servers. SMB file shares work well but only for a few workloads as the application has to support data residing on a UNC path. This brings us to Shared VHDX.

    Available Options:

    Hyper-V Capability     Shared VHDX   iSCSI Drives   Virtual Fibre Channel   SMB Shares in VM   Non-Shared VHD/X

    Host based backups     No            No             No                      No                 Yes
    VM Replication         No            No             No                      No                 Yes
    Live Migration         Yes           Yes            Yes                     Yes                Yes

    Shared VHDX files are attached to the VMs via a virtual SCSI controller, so they show up in the guest OS as shared SAS drives, and they can be shared with multiple VMs so you aren’t restricted to a two-node cluster. There are some prerequisites to using them, however.

    Requirements for Shared VHDX:

    2012 R2 Hyper-V hosts
    Shared VHDX files must reside on Cluster Shared Volumes (CSV)
    SMB 3.02

    It may be possible to host a shared VHDX on a vendor NAS if that appliance supports SMB 3.02 as defined in Windows Server 2012 R2. Just because a NAS supports SMB 3.0 is not sufficient; check with the vendor to ensure they support the shared VHDX components and that you have the correct firmware revision to enable that capability. Information on the different versions of SMB and their capabilities is documented in a blog by Jose Barreto that can be found here.

    Adding shared VHDX files to a VM is relatively easy: in the settings of the VM, you simply have to select the check box under advanced features for the VHDX, as below.


    For SCVMM you have to deploy it as a service template and select to share the VHDX across the tier for that service template.


    And of course you can use PowerShell to create and share the VHDX between VMs.

    PS C:\> New-VHD -Path C:\ClusterStorage\Volume1\Shared.VHDX -Fixed -SizeBytes 30GB

    PS C:\> Add-VMHardDiskDrive -VMName Node1 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

    PS C:\> Add-VMHardDiskDrive -VMName Node2 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

    Pretty easy right?

    At this point you can setup the disks as normal in the VM and add them to your cluster, and install whatever application is to be clustered in your VMs and if you need to you can add additional nodes to scale out your cluster.

    Now that things are all set up, let’s look at the underlying architecture to see how we can get the best performance from our setup. Before we can get into the shared VHDX scenarios, we first need to take a brief look at how CSV works in general. If you want a more detailed explanation, please refer to Vladimir Petter’s excellent blogs, starting with this one.


    This is a simplified diagram of the way we handle data flow for CSV. The main point here is that access to the shared storage in this clustered environment is handled through the Cluster Shared Volume File System (CSVFS) filter driver and supporting components; this system handles how we access the underlying storage. Because CSV is a clustered file system, we need this orchestration of file access. When possible, I/O travels a direct path to the storage, but if that is not possible, then we redirect over the network to a coordinator node. The coordinator node shows up in Failover Cluster Manager as the owner of the CSV.

    With Shared VHDX we also have to have orchestration of shared file access. To achieve this with Shared VHDX, all I/O requests are centralized and funneled through the coordinator node for that CSV. This results in I/O from VMs on hosts other than the coordinator node being redirected to the coordinator. This is different from a traditional VHD or VHDX file that is not shared.

    First let’s look at this from the perspective of a Hyper-V compute cluster using a Scale-Out File Server as our storage. For the following examples I have simplified things by bringing it down to two nodes and added in a nice big red line to show the data path from the VM that currently owns our clustered workload. For my examples I am making some assumptions: one is that the workload being clustered is configured in an Active/Passive configuration with a single shared VHDX file, and we are only concerned with the data flow to that single file from one node or the other. For simplicity I have called the VMs Active and Passive just to indicate which one owns the Shared VHDX in the clustered VMs and is transferring I/O to the storage where the shared VHDX resides.


    So we have Node 1 in our Hyper-V cluster accessing the Shared VHDX over SMB, connecting to the coordinator node of the Scale-Out File Server (SOFS) cluster. Now let’s move the active workload.


    So even when we move the active workload, SMB and the CSVFS drivers will connect to the coordinator node in the SOFS cluster, so in this configuration our performance is going to be consistent. Ideally you should have high speed connections between your SOFS nodes and on the network connections used by the Hyper-V compute nodes to access the shares: 10 Gb NICs or even RDMA NICs. Some examples of RDMA NICs are InfiniBand, iWARP and RDMA over Converged Ethernet (RoCE) NICs.

    Now let’s change things up a bit and move the compute onto the same servers that are hosting the storage.


    As you can see, access to the VHDX is sent through the CSVFS and SMB drivers to access the storage, and everything works like we expect as long as the active VM of the clustered VMs is on the same node as the coordinator node of the underlying CSV. Now let’s look at how the data flows when the active VM is on a different node.


    Here things take a different path than we might expect. Since SMB and CSVFS are an integral part of ensuring properly orchestrated access to the Shared VHDX, we send the data across the interconnects between the cluster nodes rather than straight down to storage. This can have a significant impact on your performance depending on how you have scaled your connections.

    If the direct access to storage is a 4Gb fibre connection and the interconnect between nodes is a 1Gb connection, there is going to be a serious difference in performance when the active workload is not on the same node that owns the CSV. This is exacerbated when we have 8Gb or 10Gb bandwidth to storage and the interconnects between nodes are only 1Gb. To help mitigate this behavior, make sure to scale up your cluster interconnects to match, using options such as 10 Gb NICs, SMB Multichannel, and/or RDMA capable devices that will improve your bandwidth between the nodes.

    One final set of examples addresses scenarios where you may have an application active on multiple clustered VMs that are accessing the same Shared VHDX file. First let’s go back to the separate compute and storage nodes.


    And now to show how it goes with everything all together in the same servers.


    So we can even implement a scale out file server or other multi-access scenarios using clustered VMs.

    So the big takeaway here is more about understanding the architecture, so you know when you will see certain types of performance and can set proper expectations based on where and how we access the final storage repository for the shared VHDX. By moving some of the responsibility for handling access to the VHDX to SMB and CSVFS, we get a more flexible architecture and more options, but without proper planning and an understanding of how it works, there can be some significant differences in performance based on what type of separation there is between the compute side and the storage side. For the best performance, ensure you have high speed and high bandwidth interconnects from the running VM all the way to the final storage by using 10 Gb or RDMA NICs, and try to take advantage of SMB Multichannel.

    --- Matthew Walker

  • CROSS POST: Windows 10, WindowsUpdate.log and how to view it with PowerShell or Tracefmt.exe

    A while back, there was a blog done by Charles Allen that we felt needed to also be on the AskCore site due to the nature and the popularity of the article.  So we are going to cross post it here.  Please keep in mind that the latest changes/updates will be in the original blog post.

    Windows 10, WindowsUpdate.log and how to view it with PowerShell or Tracefmt.exe

    With the release of Windows 10, there is going to be a change to the way Operating System logs are created and how we can view them.

    For those of us who are supporting services like Configuration Manager and Windows Server Update Services to deploy Software Updates, this means a change to how we will look at the WindowsUpdate.log in Windows 10.
    The WindowsUpdate.log is still located in C:\Windows; however, when you open the file C:\Windows\WindowsUpdate.log, you will only see the following information:

    In order to read the WindowsUpdate.log in Windows 10, you will need to perform one of two options:

    1) Decode the Windows Update ETL files
    2) Use Windows PowerShell cmdlet to re-create the WindowsUpdate.log the way we normally view it

    I am going to go ahead and start with the PowerShell cmdlet, option #2, as it is my personally preferred method.

    Using PowerShell Get-WindowsUpdateLog:

    1) On the Windows 10 device where you wish to read the WindowsUpdate.log, open PowerShell with Administrative rights (I prefer to use PowerShell ISE)
    2) Run the command "PS C:\WINDOWS\system32> Get-WindowsUpdateLog"; you will see the following occur:

    3) This will take a moment to complete. Once done, you will see a new WindowsUpdate.log file on the desktop

    Please note that the newly created WindowsUpdate.log file from running the Get-WindowsUpdateLog cmdlet is a static log file and will not update like the old WindowsUpdate.log unless you run the Get-WindowsUpdateLog cmdlet again.
    However, with some PowerShell magic, you can easily create a script that will update the log actively to allow you to troubleshoot closer to real time with this method.
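
    A minimal sketch of such a script; the 60-second refresh interval and the output path are assumptions, not part of the original post:

    # Regenerate the decoded WindowsUpdate.log on the desktop every 60 seconds
    while ($true) {
        Get-WindowsUpdateLog -LogPath "$env:USERPROFILE\Desktop\WindowsUpdate.log"
        Start-Sleep -Seconds 60
    }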

    For more information on the Get-WindowsUpdateLog cmdlet, please go here:

    Decoding the Windows ETL files:

    So the other option is to directly decode the ETL files to get the Windows Update information during troubleshooting. Let’s go ahead and walk through this:

    1) Download the public symbols. If you have never done this, you can follow these instructions:
    2) Download the Tracefmt.exe tool by following these instructions:
    3) Open an elevated command prompt
    4) Create a temporary directory folder, example %systemdrive%\WULogs
    5) Navigate to the folder containing the Tracefmt.exe and then copy it to the folder you just created (example copy Tracefmt.exe to %systemdrive%\WULogs)
    6) Run the following commands at the command prompt you have open

    cd /d %systemdrive%\WULogs
    copy %windir%\Logs\WindowsUpdate\* %systemdrive%\WULogs\
    tracefmt.exe -o windowsupdate.log <Windows Update logs> -r c:\Symbols

    For the <Windows Update logs> syntax, you will want to replace this with the applicable log names, space-delimited. Example:

    cd /d %systemdrive%\WULogs
    copy %windir%\Logs\WindowsUpdate\* %systemdrive%\WULogs\
    tracefmt.exe -o windowsupdate.log Windowsupdate.103937.1.etl Windowsupdate.103937.10.etl -r c:\Symbols

  • Office Applications only print 1-2 pages

    Hello AskPerf!  My name is Susan, and today we are going to discuss an issue where printing through Office applications only produce 1-2 pages out of a multi-page document.

    For example, you have a Windows 2003/2008 print server with a driver such as the Lexmark Universal v2 PS3, and when Windows 8.1 clients attempt to print from Office applications, only the first page or two will print.

    Other symptoms you may observe:

    • You can print only 2 pages, for example: pages 2-3 of a 10-page document
    • You print just fine out of other applications
    • If you print to PDF from Office, the files print as expected

    Cause

    There are two main causes of the behavior above.  The first is missing fonts on the print server - the buffer simply fills and overflows, and only ~2 pages will print.  The second cause is a legacy Bluetooth service being installed along with its add-on component.

    Resolution #1

    Install the missing fonts on the Print Server.  You do not need to install Office, only the fonts.

    Here are the fonts that should be installed:

    Fonts that are installed with Microsoft Office 2013 products

    Fonts supplied with Office 2010

    Office 2010 printing errors with Calibri font when printing through a Windows Server 2003 or 2008 print server

    Resolution #2

    For the Bluetooth driver, there are two pieces: the service, as well as the add-on that is registered under the Office applications.  The add-ons should be disabled for all Office applications under multiple keys.

    Option 1

    1. Check with the vendor to determine if there are any updates to your Bluetooth device.

    Option 2

    1. Uninstall Bluetooth

    a.      Please confirm it is completely uninstalled by checking MSCONFIG and running Services.msc

    b.      Next, you will need to modify registry keys in two locations and change the LoadBehavior value to 0, or delete it.

               For example: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Office\Outlook\Addins\BtmOffice.Connect

    BTMOffice.Connect is loaded in Access, Excel, Project, Outlook, PowerPoint, and Word, so the corresponding Addins key must be changed for each application.



    Option 3

    1. Disable Bluetooth as a test

    a.      Stop the service from running in Services.msc

    b.      Change the LoadBehavior in the above registry keys to 0 (a sketch of this change follows below)
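
    A sketch of that registry change for the Outlook add-in; the exact key name for your Bluetooth add-in may differ, so treat the path below as an example and confirm it in the registry first:

    reg.exe add "HKLM\SOFTWARE\Microsoft\Office\Outlook\Addins\BtmOffice.Connect" /v LoadBehavior /t REG_DWORD /d 0 /f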


  • Remote Desktop Licensing Service Stopping

    Hello AskPerf! My name is Matt Graham and I'll be discussing an issue that you may see on your RDS Licensing Server.

    SCENARIO You have both a 2008 R2 and a 2012 or 2012 R2 Licensing server in your RDS environment.  When you look under services.msc, you notice that the Remote Desktop Licensing service is stopped on the 2012 / 2012 R2 server.  You try to start it again, but after a short period of time (30 seconds to a few minutes) it stops on its own again.  In fact, every time you try to start the service, it starts for a short time and then stops on its own.

    Alternatively, you may see this service crash.

    ISSUE This behavior is actually by design.  You cannot have a 2008 R2 and a 2012 / 2012 R2 License server in the same RDS environment.

    RESOLUTION If you are moving to a 2012 / 2012 R2 environment, then deactivate and decommission your 2008 R2 license server.  If you still want to have two or more license servers, you will need to build another matching 2012 / 2012 R2 license server.

    CONSIDERATION #1 We have seen at least one case where the 2012 License Manager Service still did not start even after removing the 2008 R2 License server.  In this case, the licensing server database had become corrupt.  If this happens, you can rebuild the database using the "Manage Licenses" wizard.

    WARNING If you do this, you will have to re-install your licenses after the rebuild. Be sure you have your licensing information.

    1.  Open your RD Licensing Manager, right click on your server and select Manage Licenses.

    2. Select Rebuild the license server database.

    3.  After this, you will need to have your Retail CAL pack or your EA information in order to reinstall your licenses.

    CONSIDERATION #2 In one case, a customer had to rename the "C:\Windows\System32\Lserver" folder, uninstall the RDS roles, reboot, and reinstall the RDS Licensing role in order to get the service to start again.  This should effectively do the same thing as rebuilding the license database, but I mention it because it was successful in at least one case.

    Finally, when you decommission your old 2008 R2 server, be sure to think through what that will entail for your session hosts.  You may need to take inventory of your session hosts and ensure that they are pointed to your 2012 / 2012 R2 license server if they aren't already pointed to it.


  • Manage Developer Mode on Windows 10 using Group Policy

    Hi All, We’ve had a few folks want to know how to disable Developer Mode using Group Policy, but still allow side-loaded apps to be installed. Here is a quick note on how to do this. (A more AD-centric post from Linda Taylor is on its way) On the Windows more
  • Windows 10 Volume Activation Tips


    Today’s blog is going to cover some tips around preparing your organization for activating Windows 10 computers using volume activation.

    Updating Existing KMS Hosts

    The first thing to do is to update your KMS host to support Windows 10 computers using the following hotfix:

    3058168: Update that enables Windows 8.1 and Windows 8 KMS hosts to activate Windows 10


    • When downloading this fix, make sure to choose the correct operating system and architecture (x86 or x64) so you get the right update. There is an update for Windows 8/2012 and an update for Windows 8.1/2012 R2, so if you get a “The update is not applicable to your computer” message, you may have the incorrect version.
    • We are updating this KB title to reflect that 2012 and 2012 R2 are supported also.

    You may notice that Windows 7 and Windows Server 2008 R2 KMS hosts are not covered by this update. We are working on releasing an update to support Windows 7/Windows Server 2008 R2, but we would encourage everyone to update to a later KMS host OS.

    Obtain Your Windows Server 2012 R2 for Windows 10 CSVLK

    Today there are two CSVLKs for Windows 10 available:

    • Windows 10 CSVLK: Can only be installed on a Windows 8 (with the above update), Windows 8.1 (with the above update), or Windows 10 KMS host, and only activates client operating systems
    • Windows Server 2012 R2 for Windows 10 CSVLK: Can only be installed on a Windows Server 2012 or 2012 R2 KMS host (with the above update installed) and activates both client and server operating systems

    Generally, most KMS hosts are set up on server operating systems, so you need to get the Windows Server 2012 R2 for Windows 10 CSVLK. To find it, do the following:


    1. Log on to the Volume Licensing Service Center (VLSC).
    2. Click License.
    3. Click Relationship Summary.
    4. Click the License ID of your current Active License.
    5. After the page loads, click Product Keys.
    6. In the list of keys, locate “Windows Srv 2012 R2 DataCtr/Std KMS for Windows 10.”

    For example:


    The Windows 10 CSVLK is located in a different area of the website.


    If you have an open agreement, you will need to contact the VLSC support team to request your CSVLK.

    Once you obtain your key, you will need to install and activate it using the following steps:

    Cscript.exe %windir%\system32\slmgr.vbs /ipk <your CSVLK>
    Cscript.exe %windir%\system32\slmgr.vbs /ato

    After you install the key, you can run Cscript.exe %windir%\system32\slmgr.vbs /dlv and it will show the following:

    Description: Windows(R) Operating System, VOLUME_KMS_2012-R2_WIN10 channel

    Once installed and activated, this CSVLK will activate Windows 10 and all previous client and server volume license editions.

    Volume Activation Management Tool (VAMT 3.1)

    You should update to the latest version, VAMT 3.1, which can be found in the Windows 10 ADK.

    Note: We are currently aware of two issues with VAMT 3.1 and Windows 10. The first issue is that if you try to add the above CSVLK to VAMT, you will get the error “The specified product key is invalid, or is unsupported by this version of VAMT”. For additional information, see the following article:

    3094354: Can't add CSVLKs for Windows 10 activation to VAMT 3.1

    The second issue is that you cannot discover Windows 10 computers unless you use an IP address. This issue is still being investigated; check for the latest information by querying on VAMT.

    Proxy Servers

    If you are using a proxy server with basic authentication in your environment, you should review the following article for a list of exceptions you may have to add. The list of addresses has changed since previous operating systems.

    921471: Windows activation fails and may generate error code 0x8004FE33

    Additional info

    The Generic Volume License Keys (GVLKs) for Windows 10 editions can be located at the following link:

    Hope this helps with your volume activation.

    Scott McArthur
    Senior Supportability Program Manager

  • Unable to add file shares in a Windows 2012 R2 Failover Cluster

    My name is Chinmoy Joshi and I am a Support Escalation Engineer with the Windows Core team. I’m writing today to share information regarding an issue which I came across with multiple customers recently.

    Consider a two-node 2012 R2 Failover Cluster using shared disks to host a File Server role. To add shares to the File Server role, select the role and right-click on it to get the Add File Share option. The Add File Share option is also available along the far right column. Upon doing this, you may receive the error “There were errors retrieving file shares”, or the Add Share wizard gets stuck with “Unable to retrieve all data needed to run the wizard”.



    When starting the Add Share wizard, it is going to try to enumerate all current shares on the node and across the Cluster. There can be multiple reasons why Failover Cluster Manager would throw these errors. We will be covering two of the known scenarios that can cause this.

    Scenario 1:

    Domain users/admins can be part of nested groups; meaning, they are in a group that is part of another group. As part of the security, there is a token header being passed, and that header can be bloated. Bloated headers can occur when the user/admin is part of nested groups or has been migrated from one domain to a new domain carrying older SIDs. In our case, the domain user was a part of a large number of Active Directory groups. There are three ways to resolve this:

    A)  Reduce the number of Active Directory groups the user is a member of,
    B)  Clean up the SID history, or
    C)  Modify the HTTP service registry with the following registry values:

    Caution: Please backup the registry before modifying in case you need to revert the changes.


    Note that these keys may not be there, so they will need to be created.
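
    The screenshot that listed the exact values did not survive here. As a minimal sketch, assuming the MaxFieldLength and MaxRequestBytes values commonly documented for Kerberos token bloat (verify them against the article linked below before using them in production), you could set them from PowerShell:

    # Create the HTTP parameters key if it is missing, then set the two values
    # that raise the allowed header size. 65534 and 16777216 are the commonly
    # documented maximums; confirm them against the linked Kerberos article.
    $params = "HKLM:\SYSTEM\CurrentControlSet\Services\HTTP\Parameters"
    New-Item -Path $params -Force | Out-Null
    New-ItemProperty -Path $params -Name MaxFieldLength -Value 65534 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $params -Name MaxRequestBytes -Value 16777216 -PropertyType DWord -Force | Out-Null
    # Reboot for the change to take effect.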

    Here, the HTTP service uses Kerberos for authentication, and the token header that was generated was too large, which throws the error.  When this is the case, you will see the following event:

    Log Name: Microsoft-Windows-FileServices-ServerManager-EventProvider/Operational
    Source: Microsoft-Windows-FileServices-ServerManager-EventProvider
    Event ID: 0
    Level: Error
    Description: Exception: Caught exception Microsoft.Management.Infrastructure.CimException: The WinRM client received an HTTP bad request status (400), but the remote service did not include any other information about the cause of the failure.
       at Microsoft.Management.Infrastructure.Internal.Operations.CimSyncEnumeratorBase`1.MoveNext()
       at Microsoft.FileServer.Management.Plugin.Services.FSCimSession.PerformQuery(String cimNamespace, String queryString)
       at Microsoft.FileServer.Management.Plugin.Services.ClusterEnumerator.RetrieveClusterConnections(ComputerName serverName, ClusterMemberTypes memberTypeToQuery)


    Problems with Kerberos authentication when a user belongs to many groups

    Scenario 2:

    The second most common reason for not being able to get file shares created is the WinRM policy being enabled with only the IPv4 filter set. When this is set, you will see this in the wizard:


    To see if it is set on the Cluster nodes, go into the Local Security Policy from the Administrative Tools or Server Manager.  In the Group Policy Editor, the setting is located at:

    Local Computer Policy
    Computer Configuration
    Administrative Templates
    Windows Components
    Windows Remote Management (WinRM)
    WinRM Service
    Allow remote server management through WinRM

    If it is enabled, open that policy up and check to see if the box for IPv6 has an asterisk in it.



    You will run into this error if only IPv4 is selected.  So to resolve this, you would need to either disable the policy or also add an asterisk for IPv6.  For the change to take effect, you will need to reboot the system.  After the reboot, go back into the Group Policy Editor to see if the setting has been reverted.  If it has, you will need to check your domain policies and make the change there.
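
    If you would rather check from PowerShell than the GUI, here is a minimal sketch that reads the registry values this policy writes (the path and value names are the standard WinRM policy locations; verify them in your environment):

    # An IPv4Filter of "*" with a missing or empty IPv6Filter matches the broken state above.
    Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WinRM\Service" -ErrorAction SilentlyContinue |
        Select-Object AllowAutoConfig, IPv4Filter, IPv6Filter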

    Hope this helps you save time in resolving the issue. Good luck!

    Chinmoy Joshi
    Support Escalation Engineer


  • Windows 10 (RTM) RSAT tools now available…

    Hey Folks, quick post to let you know that the Windows 10 Remote Server Administration Tools are now available.

    Remote Server Administration Tools for Windows 10


  • Windows 10 Group Policy (.ADMX) Templates now available for download

    Hi everyone, Ajay here. I wanted to let you all know that we have released the Windows 10 Group Policy (.ADMX) templates on our download center as an MSI installer package. These .ADMX templates are released as a separate download package.

  • Windows 10 is coming!


    Hello folks, as I’m sure you already know, Windows 10 will be available tomorrow, July 29th.  With that said, we will be blogging about some of the new features that our team will be supporting in this new OS.

    We will also blog about features that some of the other teams support.  Namely, how to manage Windows 10 notifications and upgrade options:

    How to manage Windows 10 notification and upgrade options

    Windows 10 landing page

    See you soon!


  • The New and Improved CheckSUR

    One of the most used, and arguably most efficient, tools that we utilize when troubleshooting Servicing issues prior to Windows 8/Windows Server 2012 is the System Update Readiness tool (also known as CheckSUR). However, as we continue to improve our operating systems, we must continue to improve our troubleshooting tools as well. Thus, I want to introduce the “Updated CheckSUR”:

    Improvements for the System Update Readiness Tool in Windows 7 and Windows Server 2008 R2

    In short, CheckSUR previously loaded its payload locally on the machine and ran an executable to attempt to resolve any discrepancies it detected in the package store.

    With these improvements, the utility no longer carries a payload, and it no longer requires the repeated downloads of the CheckSUR package that were previously necessary. The new CheckSUR package stays installed until removed by the user.

    I’m sure you’re wondering: without the payload, how will CheckSUR resolve any issues? After installing this patch and rebooting (a reboot is required), the CheckSUR functionality is exposed through the DISM command:

    DISM /Online /Cleanup-Image /Scanhealth

    This command should seem familiar if you have used DISM for troubleshooting on any Windows 8 or later operating system. There is, however, no RestoreHealth/CheckHealth with this update.  ScanHealth provides the same functionality that RestoreHealth does on Windows 8 and later operating systems, and that the CheckSUR tool did previously.

    Another new feature is that CheckSUR will now also detect corruption on components for Internet Explorer.

    A few extra points to note:

    • The “Updated CheckSUR” runs only on Windows 7 SP1 and Windows Server 2008 R2 SP1
    • CheckSUR can only be run on an online OS
    • CheckSUR can be used as a proactive method by scheduling a task to run /ScanHealth at a desired time to ensure that the system is periodically checked for corruption (see the example after this list)
    • The manual steps that previously could be used to run CheckSUR are no longer available with the update to CheckSUR
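
    As a minimal sketch of that proactive approach (the task name and schedule here are just illustrative choices), you could register a weekly scan from an elevated prompt:

    schtasks /Create /TN "CheckSUR ScanHealth" /TR "dism.exe /Online /Cleanup-Image /ScanHealth" /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM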

    One of my favorite parts of the update is that the results are still logged to c:\windows\logs\CBS\checksur.log, and the log still gives the same layout and information surrounding its findings once the file has been opened. I will be creating another article shortly that discusses some steps to take when you encounter a CheckSUR.log with errors.

    Thank You
    Nicholas Debanm
    Support Escalation Engineer

  • Azure SNAT

    This post was contributed by Pedro Perez. Azure’s network infrastructure is quite different from your usual on-premises network, as there are different layers of software abstraction that work behind the curtains. I would like to talk today about Azure SNAT.
  • Building Windows Server Failover Cluster on Azure IAAS VM – Part 2 (Network and Creation)

    Hello, cluster fans. In my previous blog, Part 1, I talked about how to work around the storage block in order to implement a Windows Server Failover Cluster on an Azure IAAS VM. Now let’s discuss another important part – networking in a Cluster on Azure.

    Before that, you should know some basic concepts of Azure networking. Here are a few Azure terms we need to use to set up the Cluster.

    VIP (Virtual IP address): A public IP address that belongs to the cloud service. It sits on the Azure Load Balancer, which determines how network traffic is directed before it is routed to the VM.

    DIP (Dynamic IP address): An internal IP address assigned to the VM by Microsoft Azure DHCP.

    Internal Load Balancer: Configured to port-forward or load-balance traffic inside a VNET or cloud service to different VMs.

    Endpoint: Associates a VIP/DIP + port combination on a VM with a port on either the Azure Load Balancer for public-facing traffic or the Internal Load Balancer for traffic inside a VNET (or cloud service).

    You can refer to this blog for more details about those terms for Azure network:

    VIPs, DIPs and PIPs in Microsoft Azure

    OK, enough reading. Storage is ready and we know the basics of Azure networking, so can we start building the Cluster? Yes!

    The first difference you will see is that you need to create the Cluster with one node and then add the other nodes afterward. This is because the Cluster Name Object (CNO) cannot come online, since it cannot acquire a unique IP address from the Azure DHCP service. Instead, the IP address assigned to the CNO is a duplicate of the address of the node that owns the CNO. That IP fails as a duplicate and can never be brought online, which eventually causes the Cluster to lose quorum because the nodes cannot properly connect to each other. To prevent the Cluster from losing quorum, you start with a one-node Cluster, let the CNO’s IP address fail, and then manually set the IP address.


    The CNO DEMOCLUSTER is offline because the IP Address resource it depends on has failed. The failed address is the VM’s DIP, which is what the CNO’s IP was duplicated from.


    In order to fix this, we will need to go into the properties of the IP Address resource and change the address to another address in the same subnet that is not currently in use.

    To change the IP address, right mouse click on the resource, choose the Properties of the IP Address, and specify the new address.


    Once the address is changed, right mouse click on the Cluster Name resource and tell it to come online.


    Now that these two resources are online, you can add more nodes to the Cluster; a quick sketch of these steps follows.
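
    If you prefer to script the manual fix above, here is a minimal PowerShell sketch, assuming the FailoverClusters module; the resource names, the address, and node2 are illustrative placeholders, so replace them with your own:

    $ip = Get-ClusterResource "Cluster IP Address"    # your IP Address resource name may differ
    $ip | Set-ClusterParameter -Name Address -Value ""    # any unused address in the subnet
    Start-ClusterResource "Cluster Name"    # bring the CNO online
    Add-ClusterNode -Name node2    # join the remaining node(s)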

    Instead of using Failover Cluster Manager, the preferred method is to use the New-Cluster PowerShell cmdlet and specify a static IP during Cluster creation. When you do it this way, you can add all the nodes and use the proper IP address from the get-go, without the extra steps in Failover Cluster Manager.

    Take the above environment as an example:

    New-Cluster -Name DEMOCLUSTER -Node node1,node2 -StaticAddress <unused IP in the subnet>

    Note: The static IP address that you assign to the CNO is not for network communication. Its only purpose is to bring the CNO online to satisfy the dependency. Therefore, you cannot ping that IP, you cannot resolve its DNS name, and you cannot use the CNO for management, since its IP is an unusable IP.

    Now you’ve successfully created a Cluster. Let’s add a highly available role to it. For demo purposes, I’ll use the File Server role as an example, since this is the most common role and one a lot of us understand.

    Note: In a production environment, we do not recommend a File Server Cluster in Azure because of cost and performance. Take this example as a proof of concept.

    Unlike a Cluster on-premises, I recommend that you pause all other nodes and keep only one node up. This is to prevent the new File Server role from moving among the nodes, since the file server’s VCO (Virtual Computer Object) will automatically be assigned a duplicate of the IP of the node that owns the VCO. That IP address fails and prevents the VCO from coming online on any node. This is a similar scenario to the CNO we just talked about.

    Screenshots are more intuitive.

    The VCO DEMOFS won’t come online because of the failed status of its IP Address. This is expected, because the dynamic IP address duplicates the IP of the owner node.


    After manually editing the IP to an unused static address, the whole resource group is now online.


    But remember, that IP address is the same kind of unusable IP as the CNO’s. You can use it to bring the resource online, but it is not a real IP for network communication. If this is a File Server, none of the VMs except the owner node of this VCO can access the file share.  The way Azure networking works, it will loop the traffic back to the node it originated from.

    Show time. We need to utilize the Load Balancer in Azure so this IP address is able to communicate with other machines, in order to achieve client-server traffic.

    Load Balancer is an Azure IP resource that can route network traffic to different Azure VMs. The IP can be a public-facing VIP, or internal only, like a DIP. Each VM needs to have endpoint(s) so the Load Balancer knows where the traffic should go. In an endpoint, there are two kinds of ports. The first is a regular port, used for normal client-server communications. For example, port 445 is for SMB file sharing, port 80 is HTTP, port 1433 is for MSSQL, etc. The other kind of port is a probe port, whose default port number is 59999. The probe port’s job is to find out which node in the Cluster is the active one hosting the VCO. The Load Balancer sends probe pings over TCP port 59999 to every node in the cluster, by default every 10 seconds.

    When you configure a role in a Cluster on an Azure VM, you need to find out what port(s) the application uses, because you will need to add those port(s) to the endpoint. Then you add the probe port to the same endpoint. After that, you need to update the parameters of the VCO’s IP address with that probe port. Finally, the Load Balancer does a similar port-forwarding task and routes the traffic to the VM that owns the VCO. All of the above settings needed to be completed using PowerShell at the time this blog was written.

    Note: At the time this blog was written and posted, Microsoft supports only one resource group in a cluster on Azure, and only in an Active/Passive model. This is because the VCO’s IP can only use the Cloud Service IP address (VIP) or the IP address of the Internal Load Balancer. This limitation is still in effect, although Azure now supports the creation of multiple VIP addresses in a given Cloud Service.

    Here is the diagram for Internal Load Balancer (ILB) in a Cluster which can explain the above theory better:


    The application in this Cluster is a File Server. That’s why we have port 445, and the IP for the VCO is the same as the ILB’s. There are three steps to configure this:

    Step 1: Add the ILB to the Azure cloud service.

    Run the following PowerShell commands on your on-premises machine which can manage your Azure subscription.

    # Define variables.

    $ServiceName = "demovm1-3va468p3" # the name of the cloud service that contains the VM nodes. Your cloud service name is unique. Use Azure portal to find out service name or use get-azurevm.


    $ILBName = "DEMOILB" # newly chosen name for the new ILB

    $SubnetName = "Subnet-1" # subnet name that the VMs use in the VNet

    $ILBStaticIP = "" # static IP address for the ILB in the subnet

    # Add Azure ILB using the above variables.

    Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $SubnetName -ServiceName $ServiceName -StaticVNetIPAddress $ILBStaticIP

    # Check the settings.

    Get-AzureInternalLoadBalancer -ServiceName $ServiceName


    Step 2: Configure the load balanced endpoint for each node using ILB.

    Run the following PowerShell commands on your on-premises machine which can manage your Azure subscription.

    # Define variables.

    $VMNodes = "DEMOVM1", "DEMOVM2" # cluster nodes' names, separated by commas. Your nodes' names will be different.

    $EndpointName = "SMB" # newly chosen name of the endpoint

    $EndpointPort = "445" # public port to use for the endpoint for SMB file sharing. If the cluster is used for another purpose, e.g., HTTP, the port number needs to change to 80.

    # Add endpoint with port 445 and probe port 59999 to each node. It will take a few minutes to complete. Please pay attention to the ProbeIntervalInSeconds parameter; this tells how often the probe port checks which node is active.

    ForEach ($node in $VMNodes)
    {
        Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -LBSetName "$EndpointName-LB" -Protocol tcp -LocalPort $EndpointPort -PublicPort $EndpointPort -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -InternalLoadBalancerName $ILBName -DirectServerReturn $true | Update-AzureVM
    }


    # Check the settings.

    ForEach ($node in $VMNodes)
    {
        Get-AzureVM -ServiceName $ServiceName -Name $node | Get-AzureEndpoint | Where-Object {$_.Name -eq "smb"}
    }



    Step 3: Update the parameters of VCO’s IP address with Probe Port.

    Run the following commands on one of the cluster nodes if you are using Windows Server 2008 R2.

    # Define variables

    $ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

    $IPResourceName = "IP Address" # the IP Address resource name (use Get-ClusterResource | Where-Object {$_.ResourceType -eq "IP Address"} or the GUI to find the name)

    $ILBIP = "" # the IP address of the Internal Load Balancer (ILB); fill in your ILB's static IP

    # Update cluster resource parameters of VCO’s IP address to work with ILB.

    cluster res $IPResourceName /priv enabledhcp=0 overrideaddressmatch=1 address=$ILBIP probeport=59999  subnetmask=

    Run the following PowerShell commands on one of the cluster nodes if you are using Windows Server 2012/2012 R2.

    # Define variables

    $ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

    $IPResourceName = "IP Address" # the IP Address resource name (use Get-ClusterResource | Where-Object {$_.ResourceType -eq "IP Address"} or the GUI to find the name)

    $ILBIP = "" # the IP address of the Internal Load Balancer (ILB); fill in your ILB's static IP

    # Update cluster resource parameters of VCO’s IP address to work with ILB

    Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"="59999";"SubnetMask"="";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

    You should see this window:


    Take the IP Address resource offline and bring it online again. Start the clustered role.
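
    A minimal sketch of that recycle with the FailoverClusters cmdlets, reusing the variables defined above (DEMOFS is our example role name):

    # Recycle the IP Address resource so the new parameters load, then start the role.
    Stop-ClusterResource $IPResourceName
    Start-ClusterResource $IPResourceName
    Start-ClusterGroup "DEMOFS"    # our example VCO/role name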

    Now you have an Internal Load Balancer working with the VCO’s IP. One last task you need to do involves the Windows Firewall. You need to at least open port 59999 on all nodes for probe port detection, or turn the firewall off. Then you should be all set. It may take about 10 seconds to establish the connection to the VCO the first time, or after you fail over the resource group to another node, because of the ProbeIntervalInSeconds we set up previously.
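
    For the firewall piece, a minimal sketch using netsh (the rule name is just an example) is:

    netsh advfirewall firewall add rule name="Azure ILB probe 59999" dir=in action=allow protocol=TCP localport=59999

    Run it on every node so the probe can always reach whichever node currently owns the role.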

    In this example, the VCO has an internal IP. If you want to make your VCO public-facing, you can use the Cloud Service’s IP address (VIP). The steps are similar and easier, because you can skip Step 1, since the VIP is already an Azure Load Balancer. You just need to add the endpoint with a regular port plus the probe port to each VM (Step 2), then update the VCO’s IP in the Cluster (Step 3). Please be aware that your clustered resource group will be exposed to the Internet, since the VCO has a public IP. You may want to protect it by planning enhanced security methods.

    Great! Now you’ve completed all the steps of building a Windows Server Failover Cluster on an Azure IAAS VM. It is a bit of a long journey; however, you’ll find it useful and worthwhile. Please leave me comments if you have questions.

    Happy Clustering!

    Mario Liu
    Support Escalation Engineer