• Azure networking - Public IP addresses in Classic vs ARM

    This post was contributed by Stefano Gagliardi, Pedro Perez, Telma Oliveira, and Leonid Gagarin As you know, we recently introduced the Azure Resource Manager deployment model as an enhancement of the previous Classic deployment model. more
  • Endpoint Load Balancing Health Probe Configuration Details

    Azure load balanced endpoints enable a port to be configured on multiple role instances or virtual machines in the same hosted service. The Azure platform has the ability to add and remove role instances based upon the instance health to achieve high more
  • Virtual Machine Management Hangs on Windows Server 2012 R2 Hyper-V Host

    Hi, my name is Christian Sträßner from the Global Escalation Services team based in Munich, Germany. Today we will look at a hang scenario that involves user and kernel dump analysis.  Also, we will demonstrate how to analyze a hang from both user more
  • New Guided Walkthrough for troubleshooting problems relating to Event ID 1135 in a Failover Clustering environment

    I wanted to post about a new walkthrough that we have to help in troubleshooting an Event 1135 on a Failover Cluster. 

    As a bit of background, Failover Clustering sends a heartbeat to and from each node of a Cluster to determine its health and whether it is responding.  If a node does not respond within a certain time period, it is considered down and is removed from Cluster membership.  In the System Event Log of the remaining nodes, an Event 1135 is logged stating that the non-responding node was removed.

    There is now a guided walkthrough to step you through troubleshooting and help determine the cause.  It covers a number of areas, including Cluster networks, antivirus, and more.  Check it out the next time you hit this event and see if it helps.

    Troubleshooting cluster issue with event ID 1135

  • Failover Cluster Node Startup Order in Windows Server 2012 R2

    In this blog, my colleague JP and I would like to talk about how to start a Cluster without needing the ForceQuorum (FQ) switch.  We have identified 3 different scenarios and how the Cluster behaves when you turn on the nodes in a certain order on Windows Server 2012 R2.  First, here are two articles you should be familiar with.

    How to Properly Shutdown a Failover Cluster or a Node

    Microsoft's recommendation is to configure the Cluster with a witness

    Now, on to the scenarios.

    Scenario 1: Cluster without a witness (Node majority)
    Scenario 2: Cluster with a disk witness
    Scenario 3: Cluster with a file share witness

    In the scenarios below, we try starting the Cluster both with and without a witness.

    Scenario 1: Cluster without a witness (Node Majority)

    Let’s call the Cluster 'CLUSTER' and the nodes 'A', 'B' and 'C'.  We have set up the quorum type as Node Majority.  All
    nodes have a weighted vote (meaning an assigned and a current vote).  The core Cluster Group and the other resources (two Cluster Shared Volumes) are on Node A.  We also have not defined any Preferred Owners for any group.  For simplicity's sake, the Node ID of each is as follows.  You can get the Node ID with the PowerShell cmdlet Get-ClusterNode.

    Name ID State
    ==== == =====
    A    1  Up
    B    2  Up
    C    3  Up
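
    The table above can be reproduced with Get-ClusterNode; a minimal sketch, assuming you run it on a cluster node with the FailoverClusters module available (adding the NodeWeight and DynamicWeight columns also shows each node's assigned and current vote):

    ```powershell
    # Requires the FailoverClusters module (installed with the Failover Clustering feature).
    Import-Module FailoverClusters

    # Name, Id and State as in the table above, plus the assigned (NodeWeight)
    # and current (DynamicWeight) vote for each node.
    Get-ClusterNode | Format-Table Name, Id, State, NodeWeight, DynamicWeight -AutoSize
    ```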

    When we gracefully shut down Node A, all the resources on the node fail over to Node B, which has the next highest Node ID.  By a graceful shutdown, we mean shutting down the machine from the Start Menu or shutting down after applying patches.  All the resources are now on Node B, so the current votes would be:

    Node A = 0
    Node B = 1
    Node C = 1

    Now, let's gracefully shut down Node B.  All the resources fail over to Node C.  As per the way dynamic quorum works in Windows Server 2012 R2, the Cluster can sustain itself on one node as the "last man standing".  So our current votes are:

    Node A = 0
    Node B = 0
    Node C = 1

    Now we want to gracefully shut down Node C as well.  Since all the nodes are down, the Cluster is down. 

    When we start Node A, which was shut down first, the Cluster is not formed and we see the below in the Cluster log:

    INFO  [NODE] Node 3: New join with n1: stage: 'Attempt Initial Connection' status (10060) reason: 'Failed to connect to remote endpoint'
    DBG   [HM] Connection attempt to C failed with error (10060): Failed to connect to remote endpoint
    INFO  [NODE] Node 3: New join with n2: stage: 'Attempt Initial Connection' status (10060) reason: 'Failed to connect to remote endpoint'
    DBG   [HM] Connection attempt to C failed with error (10060): Failed to connect to remote endpoint
    DBG   [VER] Calculated cluster versions: highest [Major 8 Minor 9600 Upgrade 3 ClusterVersion 0x00082580], lowest [Major 8 Minor 9600 Upgrade 3 ClusterVersion 0x00082580] with exclude node list: ()

    When we start Node B, which was shut down second, the Cluster is not formed and below are the entries we see in the Cluster log:

    INFO  [NODE] Node 1: New join with n2: stage: 'Attempt Initial Connection' status (10060) reason: 'Failed to connect to remote endpoint'
    DBG   [HM] Connection attempt to C failed with error (10060): Failed to connect to remote endpoint
    DBG   [VER] Calculated cluster versions: highest [Major 8 Minor 9600 Upgrade 3 ClusterVersion 0x00082580], lowest [Major 8 Minor 9600 Upgrade 3 ClusterVersion 0x00082580] with exclude node list: ()

    Both nodes are trying to connect to Node C, which is shut down.  Since they are unable to connect to Node C, they do not form the Cluster.  Even though we have two nodes up (A and B) and are configured for Node Majority, the Cluster is not formed.

    WHY??  Well, let's see.

    We start Node C and now the Cluster is formed.

    Again, WHY??  Why did this happen when the others wouldn’t??

    This is because the last node that was shut down (Node C) was holding the Cluster Group.  So to answer the question: the node that was shut down last must be the first node turned on.  When a node is shut down, its vote is changed to 0 in the Cluster registry.  When a node starts the Cluster Service, it checks its vote.  If it is 0, it will only join a Cluster.  If it is 1, it will first try to join a Cluster and, if it cannot connect to a Cluster to join, it will form the Cluster.

    This is by design.

    Shut down all 3 nodes again in the same order.

    Node A first
    Node B second
    Node C last

    Power up Node C and the Cluster is formed with the current votes as:

    Node A = 0
    Node B = 0
    Node C = 1

    Turn on Node B.  It joins and is given a vote.  Turn on Node A.  It joins and is given a vote. 

    If you start any other node in the Cluster other than the node that was last to be shut down, the ForceQuorum (FQ) switch must be used to form the Cluster.  Once it is formed, you can start the other nodes in any order and they would join.
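
    If you do need to force quorum on a different node, the switch can be applied from an elevated prompt on that node; a minimal sketch (the node name 'A' is just the example from this scenario):

    ```powershell
    # Force the Cluster to form on node A, ignoring quorum.
    # Caution: this makes node A's copy of the Cluster database authoritative.
    Start-ClusterNode -Name A -ForceQuorum

    # Equivalent from a command prompt on the node itself:
    # net start clussvc /forcequorum
    ```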

    Scenario 2: Cluster with a disk witness
    We take the same 3 nodes and the same environment; but, add a disk witness to it.

    Let's observe the difference and the advantage of adding the witness.  To view the dynamic witness weight, use the PowerShell cmdlet (Get-Cluster).WitnessDynamicWeight.

    PS C:\> (Get-Cluster).WitnessDynamicWeight

    A setting of 1 means it has a vote; a setting of 0 means it does not.  Remember, we still follow the old rule of keeping the total number of votes odd.

    Initially, the Cluster Group and all the resources are on Node A with the other 2 nodes adding votes to it.  The Disk Witness also adds a vote dynamically when it is needed.

    Node A = 1 vote = Node ID 1
    Node B = 1 vote = Node ID 2
    Node C = 1 vote = Node ID 3
    Disk Witness = 0 vote

    We gracefully shut down Node A.  All the resources and the Cluster Group move to Node B while Node A loses its vote.  Next, we gracefully shut down Node B and it loses its vote.  All the resources and Cluster Group move to Node C.  This leaves Node C as the "Last Man Standing" as in the previous scenario.  Gracefully shut down Node C as well and the Cluster is down.

    This time, instead of powering on the last node that was shut down (Node C), power on Node B, which was shut down second.  The Cluster forms.

    This is because we have a witness configured and Dynamic Quorum comes into play.  If you check the witness dynamic weight now, you will see that it has a vote.

    PS C:\> (Get-Cluster).WitnessDynamicWeight

    Because it has a vote, the Cluster forms.

    Scenario 3: Cluster with a file share witness
    Again, we take the same 3 nodes, with the same environment and add a file share witness to it.

    Presently, Node A is holding the Cluster Group and the other resources, with the other 2 nodes voting and the file share witness able to dynamically gain a vote if it is needed.

    The votes are as follows:

    Node A = 1 vote = Node ID 1
    Node B = 1 vote = Node ID 2
    Node C = 1 vote = Node ID 3
    File Share Witness = 0 vote

    We gracefully shut down Node A.  The resources move over to Node B and Node A loses its vote.  Because Node A lost its vote, the file share witness dynamically adjusted and gave itself a vote to keep the total at an odd number.  Next, we gracefully shut down Node B.  The resources move over to Node C and Node B also loses its vote.

    Node C is now the "Last Man Standing" which is holding the Cluster Group and all other resources.  When we shut down Node C, the Cluster shuts down.

    Let's look back at the 2nd scenario, where we could turn on any node, the Cluster would form, and all the resources came online because we had a disk witness in place.  With a file share witness, this does not happen.

    If we turn on Node A, which was shut down first, the Cluster will not form even though we have a file share witness.  We have to go back to turning on the node that was shut down last, i.e. Node C (the "Last Man Standing"), to automatically form the Cluster.

    So what is the difference?  We have a witness configured....  The difference is that the file share witness does not hold a copy of the Cluster Database.

    So why are you doing it this way? 

    To answer this, we have to go back in time to the way the Cluster starts and which copy of the database the Cluster uses when a form takes place.

    In Windows 2003 and below, we had the quorum drive.  The quorum drive always had the latest copy of the database, which holds all configurations, resources, etc. for the Cluster.  It also took care of replicating any changes to all nodes so they would have up-to-date information.  So when the Cluster formed, it would download the copy on the quorum drive and then start.  This actually wasn't the best way of doing things, as there was really only one copy; if it went down, everything went down.

    In Windows 2008, this changed.  Now, any of the nodes or the disk witness can have the latest copy.  We track this with a "paxos" tag.  When a change is made on a node (add resource, delete resource, node join, etc.), that node's paxos tag is updated.  It then sends a message to all other nodes (and the disk witness, if available) to update their databases.  This way, everything stays current.

    When you start a node to form the Cluster, it compares its paxos tag with the one on the disk witness.  Whichever is later determines which copy of the Cluster database is used.  If the paxos tag on the disk witness is later, the node downloads the latest copy from the witness and uses it.  If the local node's is later, it uploads its copy to the disk witness and runs with it.

    We do things in this fashion so that you will not lose any configuration.  For example, say you have a 7-node Hyper-V Cluster with 600 virtual machines running.  Node 6 is powered down, for whatever reason, and is down for a while.  In the meantime, you add an additional 200 virtual machines.  All the remaining nodes and the disk witness know about this.  Now say the rack or datacenter the Cluster is in loses power.  Power is restored and Node 6 gets powered up first.  If there is a disk witness, it holds a copy of the Cluster database with all 800 virtual machines, so even this node that has been down for so long will have them.  If you had a file share witness (or no witness), which does not contain the Cluster database, you would lose the 200 and have to reconfigure them.

    The ForceQuorum (FQ) switch overrides this and starts with whatever Cluster database (and configuration) is on the node, regardless of paxos tag numbers.  When you use it, that node’s Cluster database becomes the “golden” copy and is replicated to all other nodes (and the disk witness) as they come up.  So be cautious when using this switch.  In the above example, if you used it to start Node 6, you would lose the 200 virtual machines and need to recreate them in the Cluster.

    As a side note, Windows Server 2016 Failover Cluster follows this same design.  If you haven't had a chance to test it out and see all the new features, come on aboard and try it out.

    Santosh Bathija and S Jayaprakash
    Microsoft India GTSC

  • Using the Windows 10 Compatibility Reports to understand upgrade issues

    This blog discusses how to obtain and review the Compatibility Reports to troubleshoot Windows 10 upgrades.
    On a PC that is eligible for the free upgrade offer, you can use the "Get Windows 10" app and choose "Check your upgrade status".  The report is displayed within the app, showing issues in separate categories for devices and apps that may have potential problems.

    If your PC/tablet does not qualify for the free Windows 10 upgrade offer (for example, you are running Windows 8.1 Enterprise or Windows 7 Enterprise), you will not be able to launch the app and get the Compatibility Reports.  In that case, use the following steps.

    1. Use the Windows 10 installation media that you intend to use and launch the Windows 10 Setup Program.


    2. After checking for the most recent Dynamic updates for the Windows 10 installation, the installation will run a compatibility check in the background and you should see:


    3. You can see the full list of potential compatibility issues in the files located in the folder:


    The files are named CompatData_YEAR_MONTH_DATE_HOUR_MIN_SEC… and so on.  These CompatData files provide information about hardware and software compatibility issues.

    You can also get the setupact.log file from the C:\$Windows.~BT\Sources\Panther folder and use the steps below to extract only the compatibility information from the log.

    To view the details included in the Setupact.log file, you can copy the information to a Setupactdetails.txt file by using the Findstr command and then view the details in Setupactdetails.txt.  To do this, follow these steps:

    A. Open an elevated command prompt.

    B. At the command prompt, type the following command, and then press ENTER:

    findstr /c:"CONX" C:\$Windows.~BT\Sources\Panther\setupact.log  >"%userprofile%\Desktop\Setupactdetails.txt"

    C. Open the Setupactdetails.txt file from your desktop to review the details.
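
    If you prefer PowerShell, the same filtering can be done with Select-String; a sketch using the same paths as the steps above:

    ```powershell
    # Pull only the compatibility ("CONX") lines out of the upgrade setup log.
    # Single quotes keep PowerShell from expanding '$Windows' as a variable.
    Select-String -Path 'C:\$Windows.~BT\Sources\Panther\setupact.log' -Pattern 'CONX' -SimpleMatch |
        ForEach-Object { $_.Line } |
        Set-Content "$env:USERPROFILE\Desktop\Setupactdetails.txt"
    ```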

    Also, see:
    Troubleshooting common Windows 10 upgrade errors and issues

    Kaushik Ainapure
    Solution Asset PM
    Windows Division

  • Does your logon hang after a password change on win 8.1 /2012 R2/win10?

    Hi, Linda Taylor here, Senior Escalation Engineer from the Directory Services team in the UK. I have been working on this issue which seems to be affecting many of you globally on windows 8.1, 2012 R2 and windows 10, so I thought it would be a good idea more
  • AutoAdminLogon registry value reset to 0 after reboot

    Recently a customer had an issue where they were setting some registry values in their Azure VM by running a script using the guest agent CustomScriptExtension. After reboot, one of the values was reset to 0, and another was completely removed. more
  • IaaSAntimalware Extension Status NotReady if Installed with no Configuration

    The Microsoft Antimalware extension (IaaSAntimalware) requires a minimum configuration when installed, otherwise its status will be NotReady . When you add the IaaSAntimalware extension using the Azure management portal, that minimum configuration is more
  • Users can't access the desktop and other resources through Quick Access in Windows 10

    If you use CopyProfile when customizing your Windows 10 profiles, you may encounter a scenario where pinned icons, such as Desktop under Quick Access, are not accessible, and users may see an error similar to the following when attempting to access or save an item to that location.

    “Location is not available. C:\Users\Administrator\Desktop is not accessible. Access is denied.”

    Microsoft is aware of the issue and is investigating further.  To work around this issue, or to fix it if user profiles are already deployed and experiencing this behavior, consider implementing any of the following options depending on your deployment scenario and requirements.

    1. Before the image is created- Unpin the "desktop" shortcut from Quick Access prior to sysprep/copyprofile. The "desktop" shortcut under This PC will not be available upon profile creation. All other customizations will be retained.

    2. After the image is created and deployed, to address new logons - After sysprep (e.g. while in OOBE or logged in), delete the following file from the default profile.  This will remove any customizations made to the Quick Access list prior to sysprep/copyprofile.

    a. %systemdrive%\users\default\appdata\roaming\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms

    3. After the image is created and deployed to address existing logons- Delete the file per-user so it's regenerated the next time Explorer is opened (again, losing any customizations):

    a. %appdata%\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms

    4. After the image is created and deployed to address existing logons - Have the user unpin and re-pin the Desktop from Quick Access after logon.

    For steps 2a and 3a, you can utilize group policy preferences to deploy this to users that might be already experiencing the issue in their environment.

    2a: %systemdrive%\users\default\appdata\roaming\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms


    3a: %appdata%\microsoft\windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms
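
    Deleting these files can also be scripted.  A minimal PowerShell sketch for the per-user case (3a), using the file name from the list above; the default-profile case (2a) is the same with the path rooted under %systemdrive%\users\default:

    ```powershell
    # Remove the per-user Quick Access jump list file; Explorer regenerates it
    # the next time it is opened (any pinned customizations are lost).
    $jumpList = Join-Path $env:APPDATA `
        'Microsoft\Windows\Recent\AutomaticDestinations\f01b4d95cf55d32a.automaticDestinations-ms'

    if (Test-Path $jumpList) {
        Remove-Item $jumpList -Force
    }
    ```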


  • Display Scaling in Windows 10

    Hope everyone is having a good day.  Today, we have a guest among us.  Steve Wright is a Senior Program Manager in the Developer Platform Group.  He is authoring this blog about scaling in Windows 10: how it works and how users will benefit from the work we have done with scaling.


    Windows 10 is an important release for Windows display scaling. It implements a unified approach to display scaling across all SKUs and devices aimed at these goals:

    1) Our end users enjoy a mix of UWP and classic desktop applications on desktop SKUs which reliably provide content at a consistent size

    2) Our developers can create UWP applications that deliver high quality reliably-sized content across all display devices and all Windows SKUs

    Windows 10 also delivers desktop and mobile UI which looks polished and crisp across a wider range of display densities and viewing distances than we have ever before supported on Windows. Finally, Windows 10 drives support for high quality multi-monitor scaling for docking and projection into more of both our desktop and our mobile UI.

    This article covers the basics of scaling in Windows 10, how it works, and how users will benefit from the work we have done.  It wraps up by charting the course forward and showing what we want to tackle in future updates to Windows 10.

    Our vision for display scaling

    For our end users, display scaling is a platform technology ensuring that content is presented at a consistent and optimal--yet easily adjustable--size for readability and comprehension on every device. For our developers, display scaling is an abstraction layer in the Windows presentation platform, making it easy for them to design and build apps, which look great on both high and low density displays.

    Basic concepts and terms

    We need a basic glossary of terms and some examples to show why scaling is important:





    While these examples use phones for the sake of simplicity, the same concepts apply to wearables, tablets, laptops, desktop displays and even conference room wall-mounted TVs and projectors.

    Dynamic scaling scenarios

    Note that more than one display may be used on the same device—either all at the same time, or at different times in sequence. Scale factor and effective resolution are therefore dynamic concepts and depend on where content is displayed at a particular time.

    Some everyday scenarios where this dynamic scaling can take place include projecting, docking, moving apps between different monitors, and using remote desktop to connect your local display to a remote device.

    Who does the scaling, and how do they do it

    Because Windows supports many different kinds of applications and presentation platforms, scaling can occur in different places.  This table illustrates the major scaling categories:

    Scaling Class


    Pros and cons

    Dynamically scaling apps:

    • Apps that scale themselves on the fly no matter where they are presented

    UWP apps

    • XAML and HTML frameworks and MRT handle this for the developer
    • DX-based UWPs need to do the work to scale themselves

    Desktop UI built on XAML and HTML

    • Start menu
    • Notifications

    Some classic desktop apps

    • file explorer
    • taskbar
    • cmd shell
    • IE (canvas, not UI chrome)

    + Crisp and right-sized content everywhere

    + Very easy to support for UWP apps (developer can rely entirely on framework support)

    - Very hard to support for Win32 apps

    “System scale factor” apps:

    • Apps that understand a single system-side scale factor (usually taken from the primary display at logon time)
    • When these apps are presented on a display that doesn’t match the system scale factor, Windows bitmap stretches them to the right size

    A small number of top-tier classic desktop apps--about 50% of them, weighted by user “face time”:

    • Microsoft products: Office & Visual Studio
    • Browsers: Chrome & Firefox
    • Photoshop & Illustrator (support for some scale factors, not all)
    • Notepad++, Editpad Pro, etc.

    WPF apps: all WPF apps support this

    + Crisp and right-sized on primary display

    - Right-sized but somewhat blurry on other displays

    - Moderately hard for Win32 developer

    + Comes for free in WPF apps

    “Scaling unaware” apps:

    • Apps that only understand low DPI displays
    • On any other display, Windows bitmap stretches them to the right size

    Majority of classic apps, weighted by app count

    • Some Windows tools (device manager)

    + Crisp and right-sized on low DPI displays

    - Right-sized but somewhat blurry on any high DPI display

    What this means for the user:

    1. UWPs and most Windows UI look great on high DPI displays and in multi-monitor scenarios where different display scale factors are in play
    2. A few important classic desktop apps (and all WPF apps) look great on high DPI primary displays but a little blurry on other secondary displays
    3. A large number of older classic desktop apps look blurry on high DPI displays.

    What we have done in Windows 10

    Now we can talk about the work done in Windows 10 to improve our support for both high DPI displays and dynamic scaling scenarios.  This work falls into several major areas:

    1. Unifying how content is scaled across all devices running Windows to ensure it consistently appears at the right size
    2. Extending the scaling system and important system UI to ensure we can handle very large (8K) and very dense (600 DPI) displays
    3. Adding scaling support to the mobile UX
    4. Improving Windows support for dynamic scaling: more OS and application content scales dynamically, and the user has greater control over each display’s scaling

    Let’s take a closer look at each of these.

    Unified and extended scaling system

    In Windows 8.1 the set of supported scale factors was different for different kinds of content. Classic desktop applications scaled to 100%, 125%, 150%, 200% and 250%; Store apps scaled to 100%, 140% and 180%. As a result, when running different apps side by side in productivity scenarios, content could have inconsistent sizes in different apps. In addition, on very dense displays, the scaling systems “capped out” at different points, making some apps too small on them.

    This chart shows the complexity and limits of the 8.1 scaling systems:


    For Windows 10, we unified all scaling to a single set of scale factors for both UWP and classic applications on both the Desktop and Mobile SKU:


    In Windows 8.1 all scaling topped out at 180% or 250%.  For Windows 10 we knew that devices like 13.3” 4K laptops and 5.2” and 5.7” QHD phones would require even higher scale factors.  Our unified scaling model for Windows 10 runs all the way up to 450%, which gives us enough headroom to support future displays like 4K 6” phones and 23” 8K desktop monitors.

    As part of this effort, Windows 10 has polished the most commonly used desktop UI to look beautiful and clear even at 400% scaling.

    Making the mobile shell scalable

    We have also overhauled our Mobile SKU so that the mobile shell UI and UWP apps will scale to the Windows 10 scale factors. This work ensures that UWP apps run at the right size on phones and phablets as well as desktop displays, and that the mobile shell UI is presented at the right size on phones of different sizes, resolutions and pixel densities. This provides our users with a more consistent experience, and makes it easier to support new screen sizes and resolutions.

    Improve Windows’ support for dynamic scaling

    When we added dynamic scaling support in Windows 8.1, there was relatively little inbox UI that worked well with dynamic scaling, but in Windows 10, we have done work in many areas of the Windows UI to handle dynamic scaling.

    UWP application dynamic scaling

    As noted above, UWP HTML and XAML apps are designed to be dynamically scalable. As a result, these applications render crisply and with the right size content on all connected displays.

    Windows “classic” desktop UI

    Windows 10 makes large parts of the most important system UI scale properly in multi-monitor setups and other dynamic scaling scenarios so that it will be the right size on any display.

    Start Experience

    For example, the desktop Start and Cortana experiences are built on the XAML presentation platform, and because of that, they scale crisply to the right size on every display.

    File Explorer

    File Explorer—a classic desktop application built on the Win32 presentation platform—was not designed to dynamically rescale itself. In Windows 10, however, the file explorer app has been updated to support dynamic scaling.

    Windows Taskbar

    In Windows 8.1 the Windows taskbar had similar historical limitations. In Windows 10, the taskbar renders itself crisply at every scale factor and the correct size on all connected displays in all different scenarios. Secondary taskbar UI like the system clock, jumplists and context menus also scale to the right size in these scenarios.

    Command shells et al.

    We have done similar work elsewhere in commonly used parts of the desktop UI. For example, in Windows 10 “console windows” like the command prompt scale correctly on all monitors (provided you choose to use scalable fonts), and other secondary UI like the “run dialog” now scales correctly on each monitor.

    Mobile shell and frameworks

    In Windows 10 the mobile platform also supports dynamic scaling scenarios.  In particular, with Continuum, the phone can run apps on a second attached display.  In most cases external monitors have different scale factors than the phone’s display.  UWP apps and shell UI can now scale to a different DPI on the secondary display, so that Continuum works correctly at the right size on the Mobile SKU.

    User scaling setting

    Windows 8.1 users reported frustration with the user setting for scaling:

    1. There was a single slider for multiple monitors. The slider changed the scale factor for every connected monitor, making it impossible to reliably tweak the scale factor for only one of the displays.
    2. Users found it confusing that there were two scale settings, one for modern apps/UI and another for classic apps/UI, and that the two settings worked in significantly different ways.

    In Windows 10, there is a single scale setting that applies to all applications, and the user applies it to a single display at a time. In the fall update, this setting has been streamlined to apply instantly.

    What we didn’t get to

    We are already seeing a number of common feedback issues that we’re working on for future releases of Windows. Here are some of the biggest ones we are tracking for future releases:

    Unscaled content: Lync, desktop icons

    Some applications (for example, Lync) choose to disable bitmap scaling for a variety of technical reasons, but do not take care of all their own scaling in dynamic scaling scenarios. As a result, these apps can display content that is too large or too small. We are working to improve these apps for a future release. For example, desktop icons are not per-monitor scaled in Windows 10, but in the fall update they are properly scaled in several common cases, such as docking, undocking, and projection scenarios.

    Blurry bitmap-scaled content: Office apps

    Although the UWP Office applications are fully per-monitor scaled in Windows 10, the classic desktop apps are “System scale factor apps”, as described in the table above. They generally look great on a high DPI device, but when used on secondary displays at different scale factors (including docking and projection), they may be somewhat blurry due to bitmap scaling. A number of popular desktop applications (Notepad++, Chrome, Firefox) have similar blurriness issues in these scenarios. We have ongoing work on improving migration tools for developers with these complex Win32 desktop applications.


    Scaling is a complex problem for the open Windows ecosystem, which has to support devices ranging in size from roughly 4” to 84”, with densities ranging from 50DPI to 500DPI. In Windows 10 we took steps to consolidate and simplify our developer story for scaling and to improve the end-user visual experience. Stay tuned for future releases!

  • Speaking in Ciphers and other Enigmatic tongues…update!

    Hi! Jim Tierney here again to talk to you about Cryptographic Algorithms, SCHANNEL and other bits of wonderment. My original post on the topic has gone through a rewrite to bring you up to date on recent changes in this space. So, your company purchases more
  • Azure VM may fail to activate over ExpressRoute

    Customers can advertise a default route (also known as forced tunneling) over their ExpressRoute circuit to force all traffic destined for the internet destined traffic to be routed through their on-premises infrastructure and out their on-premises edge more
  • Troubleshooting Activation Issues

    Today, Henry Chen and I are going to talk about troubleshooting some activation issues that we often run into.

    To begin, here is an article which talks about what Microsoft Product Activation is and why it is important. Also, this article explains KMS activation.

    Now, let’s jump into some common activation scenarios.

    Scenario 1 - Security Processor Loader Driver

    You get error 0x80070426 when you try to activate a Windows 7 SP1 or Windows Server 2008 R2 SP1 KMS client by running slmgr /ato.


    When you try to start the Software Protection service, you will see this popup error.


    If you review the Application Event Log, you will see Event 1001.

    Source:  Microsoft-Windows-Security-SPP
    Event ID:  1001
    Level:  Error
    Description:  The Software Protection service failed to start. 0x80070002
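
    Side note: errors of the form 0x8007xxxx are HRESULTs that wrap a standard Win32 error code in their low 16 bits, so you can often decode them yourself. A minimal sketch (the small lookup table only covers the codes that appear in this post):

```python
def win32_code(hresult):
    """Extract the Win32 error code from a FACILITY_WIN32 HRESULT (0x8007xxxx)."""
    if (hresult >> 16) & 0x7FF != 7:  # facility field must be 7 (WIN32)
        raise ValueError("not a FACILITY_WIN32 HRESULT")
    return hresult & 0xFFFF

# Codes relevant to the activation scenarios in this post:
KNOWN = {
    2:    "ERROR_FILE_NOT_FOUND",
    5:    "ERROR_ACCESS_DENIED",
    1062: "ERROR_SERVICE_NOT_ACTIVE",  # 0x426: the service has not been started
}

for hr in (0x80070002, 0x80070005, 0x80070426):
    code = win32_code(hr)
    print(f"{hr:#010x} -> Win32 error {code} ({KNOWN[code]})")
```

    Decoding 0x80070002 this way yields Win32 error 2, "file not found", which fits the failing Software Protection service dependency described here.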

    To resolve this, make sure the Security Processor Loader Driver is started.

    1. Go to Device Manager.
    2. Click on View --> Show hidden devices.
    3. Expand Non-Plug and Play Drivers.



    In this case, it is disabled.  The startup type could also be Automatic, Demand, or System, with the driver not started.


    If it is set to anything other than Boot, change the startup type to Boot and then start the driver.

    You could also, as shown below, change it in the registry by browsing to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\spldr, changing the Start value to 0, and rebooting.
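
    For reference, the Start value in that key corresponds to the startup types shown in the driver's properties; a quick sketch of the standard mapping:

```python
# Standard Start values for drivers/services under
# HKLM\SYSTEM\CurrentControlSet\services\<name>
START_TYPES = {
    0: "Boot",       # loaded by the boot loader (what spldr needs)
    1: "System",     # loaded during kernel initialization
    2: "Automatic",  # started by the Service Control Manager
    3: "Demand",     # started manually / on demand
    4: "Disabled",
}

print(START_TYPES[0])  # Boot
```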


    If it fails to start, uninstall and re-install the driver and reboot your machine. In almost every case that we have seen, reinstalling the driver fixes the issue (i.e. you are able to start the driver).

    Once it is started, you will be able to start the Software Protection service and then activate Windows successfully.

    Scenario 2 – Plug & Play

    When trying to activate using slmgr /ato, you get the following error, even when running the command elevated:

    Windows Script Host

    Activating Windows Server(R), ServerStandard edition (68531fb9-5511-4989-97be-d11a0f55633f) ...Error: 0x80070005 Access denied: the requested action requires elevated privileges


    The following is shown when you try to display activation information using slmgr /dlv:

    Windows Script Host

    Script: C:\Windows\system32\slmgr.vbs
    Line:   1131
    Char:   5
    Error:  Permission denied
    Code:   800A0046
    Source: Microsoft VBScript runtime error


    We do have an article which talks about the cause of the issue. While a missing permission is the root cause, we have seen instances where the GPO is not enabled and the permissions still do not seem to be correct. We also have a blog post, written by a member of our Office team, on how to set the permissions using the command line, which we have found to be useful. We often combine both of these articles to resolve issues.

    First, to verify that you have the right permissions, run the command below.

    sc sdshow plugplay

    Here is how the correct permissions should look:

    On Windows 7 SP1 or Windows Server 2008 R2 SP1

    (A;;CCLCSWLOCRRC;;;SU) <-------- This is the permission that seems to be missing in almost all instances.

    On a broken machine this is what we see.


    In order to set the correct permissions, run the following command as given in the blog for Office:


    Then run sc sdshow plugplay to make sure the permissions have been set. Once they are set, you will be able to activate Windows successfully.
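
    If you would rather check for the missing ACE programmatically than by eye, the SDDL string that sc sdshow prints can be split into its parenthesized ACEs. A minimal sketch (the sample descriptors below are illustrative fragments, shorter than real output):

```python
import re

def has_ace(sddl, ace):
    """Return True if the given ACE appears in an SDDL security descriptor."""
    return ace in re.findall(r"\(([^)]*)\)", sddl)

# Illustrative fragments; real output from `sc sdshow plugplay` is longer.
good = "D:(A;;CCLCSWLOCRRC;;;SU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"
broken = "D:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)"

ace = "A;;CCLCSWLOCRRC;;;SU"  # the ACE this post finds missing most often
print(has_ace(good, ace))    # True
print(has_ace(broken, ace))  # False
```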

    There have also been instances where we have seen a combination of scenarios 1 and 2, so you might have to check that the spldr driver is started as well as the permissions on the PlugPlay service.

    On Windows Server 2012 R2

    When you run slmgr /ato on a domain-joined machine, you get the error below. Other commands, such as slmgr /dlv, work.

    Windows Script Host

    Activating Windows(R), ServerDatacenter edition (00091344-1ea4-4f37-b789-01750ba6988c) ...

    Error: 0x80070005 Access denied: the requested action requires elevated privileges


    This happens when the SELF account is missing access permissions under COM Security.

    To add the permission back, type dcomcnfg in the Run box and click OK.


    Under Component Services, expand Computers, right-click My Computer, and then click Properties.


    Click the COM Security tab, and then click Edit Default under Access Permissions.


    If SELF does not appear in the Group or user names list, click Add, type SELF, click Check Names, and then click OK.


    Click SELF, and then select the following check boxes in the Allow column:

    · Local Access

    · Remote Access


    Then click OK on Access Permission and then OK on My Computer Properties.

    Reboot the machine.

    Scenario 3 – Read-only attribute

    As in Scenario 1, we may get error 0x80070426 when a user tries to activate Windows Server 2008 R2 SP1 or Windows 7 SP1.


    When trying to start the Software Protection service, you get an "Access is denied" error message.


    To get more details on the error, we open the Application Event Log which shows the following error:

    Source: Microsoft-Windows-Security-SPP
    Event ID: 1001
    Level: Error
    Description: The Software Protection service failed to start. 0xD0000022

    To resolve this issue, browse to %windir%\system32 and make sure the following files have the Read-Only file attribute unchecked.




    The Software Protection service should start now.

    Scenario 4 – Troubleshooting with Procmon

    Here, we will give an idea of how to use Process Monitor (Procmon) to troubleshoot an activation issue.

    Windows Server 2012 R2

    On a Windows Server 2012 R2 server, when we try to run any slmgr switches, we get the error below.


    When we try to start the Software Protection service, we get the following error.


    Launch Process Monitor and stop the capture by clicking on the Capture icon.


    Click on the Filter icon.


    Choose Process Name, is, type sppsvc.exe (the Software Protection service), and click Add.


    We will add another filter, so choose Result, contains, type denied, and click Add, then OK.


    Start the capture by clicking on the Capture icon as shown above, and start the Software Protection service.

    Once you get the error, you should see entries similar to those shown below. In this case it is a folder, but it could be a registry path too, depending on where we are missing permissions.


    Per the result, it looks like we have a permissions issue on C:\Windows\System32\spp\store\2.0. We could be missing permissions on any of the folders in that path.

    Usually we start with the last folder, so in this case that would be 2.0.

    Comparing the permissions on the broken machine (left) and the working machine (right), we can see that sppsvc is missing.



    As you have already guessed, the next step is to add sppsvc back and give it full control.

    Click on Edit and, from Locations, choose your local machine name. Then, under Enter the object names to select, type NT Service\sppsvc, click Check Names, and then click OK.


    Make sure you give the service account Full control, click OK on the warning message, and then click OK to close the Permissions box.


    Now try starting the Software Protection service; it should start successfully, and you will be able to activate Windows.

    We hope this blog was useful in troubleshooting some of your activation issues.

    Saurabh Koshta
    Henry Chen

  • Errors Retrieving File Shares on Windows Failover Cluster

    Hi AskCore, Chinmoy here again. In today’s blog, I would like to share one more scenario in continuation to my previous blog on Unable to add file shares in Windows 2012 R2 Failover Cluster.

    This is about a WinRM setting that could lead to failures when adding file shares using Windows Server 2012/2012 R2 Failover Cluster Manager.

    Consider a two-node Windows Server 2012 R2 Failover Cluster using shared disks to host a File Server role. To access the shares, we click on the file server role and go to the Shares tab at the bottom. We see this error in the Information column next to the role:

    “There were errors retrieving the file shares.”


    There can be multiple reasons why Failover Cluster Manager would throw these errors. We will be covering one of the scenarios, caused by a WinRM configuration.


    We cannot add new shares using Failover Cluster Manager, but we can via PowerShell. This may occur if WinRM is not correctly configured. WinRM is the Microsoft implementation of the WS-Management protocol, and more can be found here.

    If we have WinRM configuration issues, we may even fail to connect to remote servers or other Cluster nodes using Server Manager, as shown below.


    The equivalent PowerShell cmdlet reports the error below:

    PS X:\> Enter-PSSession Hostname
    Enter-PSSession : Connecting to remote server hostname failed with the following error message : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig". For more information, see the about_Remote_Troubleshooting Help topic.

    At line:1 char:1
    + Enter-PSSession hostname
    + ~~~~~~~~~~~~~~~~~~~~~~~~~
        + CategoryInfo          : InvalidArgument: (hostname:String) [Enter-PSSession], PSRemotingTransportException
        + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

    The above is a sign of WinRM being unable to connect to the remote server.

    Let’s dig more, and check the event logs:

    Log Name: Microsoft-Windows-FileServices-ServerManager-EventProvider/Operational
    Event ID: 0
    Source: Microsoft-Windows-FileServices-ServerManager-EventProvider
    Description: Exception: Caught exception Microsoft.Management.Infrastructure.CimException: The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service: "winrm quickconfig".

    The above event states that there is a communication issue with the WinRM component. A quick way to configure WinRM is to run the command:

    winrm quickconfig

    This command starts the WinRM service and sets the service startup type to Auto-start. It also configures a listener for the ports that send and receive WS-Management protocol messages using either HTTP or HTTPS on any IP address. If it returns the following message:

    WinRM service is already running on this machine.
    WinRM is already set up for remote management on this computer.

    Then try running the below command:

    winrm id -r:ComputerName

    You may receive the following message if WinRM is not able to communicate with the WinRS client. It can also mean that we cannot resolve the destination because a loopback IP is configured in the IP Listen List for HTTP communications.


    You can validate whether the loopback adapter IP is configured in the IP Listen List for HTTP communication:


    To fix this problem, run the below command:


    After removing the loopback IP, we are able to add the file share successfully using the Failover Cluster Manager console. We hope this helps you fix the issue. Good luck!
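
    As a supplementary check when WinRM will not respond, you can probe whether anything is listening on the WinRM ports at all. A minimal sketch, assuming the default listener ports (5985 for HTTP and 5986 for HTTPS):

```python
import socket

# Default WinRM (WS-Management) listener ports.
WINRM_PORTS = {"http": 5985, "https": 5986}

def winrm_reachable(host, scheme="http", timeout=2.0):
    """Return True if a TCP connection to the host's WinRM port succeeds."""
    try:
        with socket.create_connection((host, WINRM_PORTS[scheme]), timeout=timeout):
            return True
    except OSError:
        return False

# Example: winrm_reachable("node2") returning False points at a listener
# or firewall problem rather than an authentication problem.
```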

    Chinmoy Joshi
    Support Escalation Engineer

  • Using Repadmin with ADLDS and Lingering objects

    Hi! Linda Taylor here from the UK Directory Services escalation team. This time on ADLDS, Repadmin, lingering objects and even PowerShell…. The other day a colleague was trying to remove a lingering object in ADLDS. He asked me about which repadmin syntax more
  • When Special Pool is not so Special

    Hi Everyone.  Richard here in the UK GES team bringing you an interesting case we saw recently where Special Pool gave us some unexpected results, but ultimately still helped us track down the cause of the problem.   It started when we were more
  • “Administrative limit for this request was exceeded" Error from Active Directory

    Hello, Ryan Ries here with my first AskDS post! I recently ran into an issue with a particular environment where Active Directory and UNIX systems were being integrated. Microsoft has several attributes in AD to facilitate this, and one of those attributes more
  • SHA1 Key Migration to SHA256 for a two tier PKI hierarchy

    Hello. Jim here again to take you through the migration steps for moving your two tier PKI hierarchy from SHA1 to SHA256. I will not be explaining the differences between the two or the supportability / security implementations of either. That information more
  • Setting up Data Recovery Agent for Bitlocker

    You might have already read on TechNet and one of the other AskCore blogs how to set up a Data Recovery Agent (DRA) for BitLocker. But how do you request a certificate from an internal Certification Authority (AD CS) to enable the DRA? Naziya Shaik and I have written detailed instructions here and hope they are helpful.

    So what is a Data Recovery Agent?

    Data recovery agents are individuals whose public key infrastructure (PKI) certificates have been used to create a BitLocker key protector, so those individuals can use their credentials to unlock BitLocker-protected drives. Data recovery agents can be used to recover BitLocker-protected operating system drives, fixed data drives, and removable data drives. However, when used to recover operating system drives, the operating system drive must be mounted on another computer as a data drive for the data recovery agent to be able to unlock the drive. Data recovery agents are added to the drive when it is encrypted and can be updated after encryption occurs.

    Below are the steps needed, from creating the certificate on the Certification Authority to using it on the client machine.

    The machines in use are:

    1. Windows Server 2012 R2 DC and CA

    2. Windows 10 Enterprise

    If we go to the Windows 10 machine and try to request a DRA certificate, we cannot see it, as illustrated below:


    In order for the client to see a DRA certificate, we need to copy the Key Recovery Agent template and add the BitLocker Drive Encryption and BitLocker Data Recovery Agent application policies.

    Here is how you do it.

    1. On a CA, we created a duplicate of the Key Recovery Agent and named it BitLocker DRA.


    2. Add the BitLocker Drive Encryption and BitLocker Data Recovery Agent application policies by going into Properties --> Extensions and editing Application Policies.


    3. In the CA Management Console, go into Certificate Templates and add BitLocker DRA as the template to issue.


    On the Windows 10 client, add the Certificates snap-in to the Microsoft Management Console:

    1. Click Start, click Run, type mmc.exe, and then click OK.

    2. In the File menu, click Add/Remove Snap-in.

    3. In the Add/Remove Snap-in box, click Add.

    4. In the Available Standalone Snap-ins list, click Certificates, and click Add.

    5. Click My user account, and click Finish.

    6. Then click OK.

    Then, under Certificates --> Personal, right-click on Certificates --> All Tasks --> Request New Certificate.


    These are the Certificate Enrollment steps


    Click Next; in our case, we have the Active Directory Enrollment Policy.


    Click Next and you will see the BitLocker DRA certificate that we created above.


    Select BitLocker DRA and click Enroll.

    This is what it looks like.


    The next steps are pretty much the same as given in this Blog. We will need to export the certificate to be used across all the machines.

    To accomplish this, right-click on the certificate above and choose Export.


    This will bring up the export wizard.


    On the Export Private Key page, leave the default selection of No, do not export the private key.


    On the Export File Format page, leave the default selection of DER encoded binary X.509 (.CER).


    The next window specifies the location and file name of the certificate you are exporting. In my case, I chose to save it to the desktop.


    Click Finish to complete the wizard.


    The next step will be to import that certificate into our BitLocker GPO so it can be used. In this example, I have a GPO called BitLocker DRA.

    Under Computer Configuration --> Policies --> Windows Settings --> Security Settings --> Public Key Policies, right-click BitLocker Drive Encryption --> Add Data Recovery Agent.


    This will start the Add Data Recovery Agent wizard.


    Click Browse Folders and point it to the location where you saved the certificate. In my example above, that was the desktop, so I selected it from there.


    Double click on the certificate to load it.


    Click Next and Finish.

    You will see the certificate imported successfully.


    Additionally, make sure that you have the GPO below enabled. In Group Policy Editor, expand Computer Configuration --> Administrative Templates --> Windows Components --> BitLocker Drive Encryption and ensure Enabled is selected.


    Running manage-bde to get the status on the client where you enabled BitLocker, you will see Data Recovery Agent (Certificate Based), showing that it is currently set.



    Saurabh Koshta
    Naziya Shaikh

  • How to convert Windows 10 Pro to Windows 10 Enterprise using ICD

    Windows 10 makes life easier and brings a lot of benefits in the enterprise world. Converting Windows 10 without an ISO image or DVD is one such benefit. My name is Amrik and in this blog, we’ll take an example of upgrading Windows 10 Professional edition to Windows 10 Enterprise edition.

    Let’s consider a scenario wherein you purchase a few computers. These computers come pre-installed with Windows 10 Pro and you would like to convert it to Windows 10 Enterprise.

    The simpler way is to use DISM servicing option:

    Dism /online /Set-Edition:Enterprise /AcceptEula /ProductKey:12345-67890-12345-67890-12345

    For more information on DISM Servicing, please review:

    The above may be a good option if you have a single computer or just a few. But what if you've got hundreds of computers to convert?

    To make your life easier, you may want to use the Windows Imaging and Configuration Designer (ICD). You can get the Windows ICD as part of the Windows 10 Assessment and Deployment Kit (ADK), which is available for download here.

    With the help of ICD, admins can create a provisioning package (.ppkg) which can help configure Wi-Fi networks, add certificates, connect to Active Directory, enroll a device in Mobile Device Management (MDM), and even upgrade Windows 10 editions - all without the need to format the drive and reinstall Windows.

    Install Windows ICD from the Windows 10 ADK

    The Windows ICD relies on some other tools in the ADK, so you need to select the options to install the following:

    • Deployment Tools
    • Windows Preinstallation Environment (Windows PE)
    • Imaging and Configuration Designer (ICD)

    Before proceeding any further, let’s ensure you understand the prerequisites:

    • You have the required licenses to install Windows 10 Enterprise.

    The steps below require KMS license keys; you cannot use MAK license keys for the conversion. Since you are using KMS keys to do the conversion, you need to have a KMS host capable of activating Windows 10 computers, or you will need to change to a MAK key after the upgrade is complete.

    Follow the steps below to convert:


    Click on File menu and select New Project.

    It will ask you to enter the following details. You may name the package whatever you like and save it to a different location if you wish.


    Navigate to the path Runtime Settings --> EditionUpgrade --> UpgradeEditionWithProductKey


    Enter the product key (use the KMS client setup key for Windows 10 Enterprise, available here).

    Click on File –> Save.

    Click on Export –> Provisioning Package.

    The above step will build the provisioning package.


    In the screenshot below, you may optionally protect the package with a password or a certificate.


    Select any location to save the provisioning package.


    Once complete, it will give a summary of all the choices selected. Now we just need to click the Build button.



    Navigating to the above folder opens the location below; note that the .ppkg file has been created, which we will use to upgrade Windows 10 Professional.


    We now need to connect the Windows 10 Professional machine to the above share and run the .ppkg file.

    Here is the screenshot before I ran the package which shows that the machine is installed with Windows 10 Professional version:


    Run the file “Upgrade_Win10Pro_To_Win10Ent.ppkg” to complete the upgrade process.


    After double-clicking the .ppkg file, we will get a warning prompt similar to UAC, as below:


    Just select “Yes, add it” and proceed. After this, we wait while the system is prepared for the upgrade.



    After the upgrade is complete, the machine will reboot into Windows 10 Enterprise, and we get the below screen as confirmation:


    And this is where we confirm that the upgrade is successful:


    The .ppkg file can be sent to users through email. The package can also be located on an internal share and run from there, or copied to a USB drive and run from that drive.

    A couple of ways to automate the above process:

    • Use MDT by adding the option to Install Applications under Add –> General tab.
    • Use SCCM by following steps mentioned in the blog below:

    Apply a provisioning package from a SCCM Task Sequence

    Amrik Kalsi
    Senior Support Engineer

  • We Are Hiring – North Carolina and Texas

    Would you like to join the world’s best and most elite debuggers to enable the success of Microsoft solutions?   As a trusted advisor to our top customers you will be working with the most experienced IT professionals and developers in the industry more
  • How big should my OS drive be?

    My name is Michael Champion and I've been working in support for more than 12 years here at Microsoft.  I have been asked by many customers "What is the recommended size for the OS partition for Windows Server?".  There are minimum recommendations in the technical documentation (and release notes), but those recommendations are more on the generic side.  There are times when that recommendation is fine, but other times they are not.

    Take for example the Windows Server 2012 R2 disk recommendations.

    System Requirements and Installation Information for Windows Server 2012 R2

    Disk space requirements

    The following are the estimated minimum disk space requirements for the system partition.

    Minimum: 32 GB

    Be aware that 32 GB should be considered an absolute minimum value for successful installation. This minimum should allow you to install Windows Server 2012 R2 in Server Core mode, with the Web Services (IIS) server role. A server in Server Core mode is about 4 GB smaller than the same server in Server with a GUI mode. For the smallest possible installation footprint, start with a Server Core installation and then completely remove any server roles or features you do not need by using Features on Demand. For more information about Server Core and Minimal Server Interface modes, see Windows Server Installation Options.

    The system partition will need extra space for any of the following circumstances:

      • If you install the system over a network.
      • Computers with more than 16 GB of RAM will require more disk space for paging, hibernation, and dump files.

    The trick here is that "minimum" is bolded, meaning that you could need more space; it does not take into account your actual memory, what applications may be installed, etc.  While the documentation does state this, I can give you an idea of what disk space you should have available based on the role and hardware configuration of the server and other factors.

    Here are some good suggestions to follow when trying to calculate the size of an OS volume.

    • 3x RAM up to 32GB
    • 10-12GB for the base OS depending on roles and features installed
    • 10GB for OS Updates
    • 10GB extra space for miscellaneous files and logs
    • Any applications that are installed and their requirements. (Exchange, SQL, SharePoint,..)

    Taking the full 32GB of RAM, a simple OS build would require a drive about 127GB in size.  One may think this is too large for the OS when the minimum disk space requirement is 32GB, but let's break this down a bit...
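
    The suggestions above boil down to simple arithmetic; a sketch of the calculation (the per-item figures come straight from the list, interpreting "3x RAM up to 32GB" as capping RAM at 32GB and leaving application space as an input):

```python
def recommended_os_volume_gb(ram_gb, app_gb=0, base_os_gb=12):
    """Rough OS volume sizing following the guidelines above."""
    ram_factor = 3 * min(ram_gb, 32)  # 3x RAM, with RAM capped at 32GB
    updates_gb = 10                   # OS updates (WinSxS growth)
    misc_gb = 10                      # miscellaneous files and logs
    return ram_factor + base_os_gb + updates_gb + misc_gb + app_gb

print(recommended_os_volume_gb(32))  # 128, in line with the ~127GB figure
print(recommended_os_volume_gb(16))  # 80
```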

    Why 3x RAM?

    If you are using 32GB of RAM and you need to troubleshoot a bug check or hang issue, you will need a page file at least 100MB larger than the amount of RAM, as well as space for the memory dump.  Wait, that is just over 2x RAM... There are also other log files, like the event logs, that will grow over time, and we may need to collect other logs that take up GBs of space, depending on what we are troubleshooting and the verbosity of the data we need.

    10GB-12GB for the base OS?

    The base OS install is about 10GB-12GB; that is just for the base files, and it varies depending on what roles and features are installed.

    10GB for OS Updates?

    If you are familiar with the WinSxS directory in the OS for 2008/R2 and up, you know this folder will grow as the server is updated over its life.  We have made great strides in reducing the space taken up by the WinSxS folder, but it still increases over time.

    10GB extra space for miscellaneous files and logs?

    This may seem to be covered by the 3x RAM, but many times people will copy ISOs, third-party install files, logs, and other things to the server.  It is better to have the space than not to have it.

    How much for server applications then?

    This part is variable and should be taken into consideration when purposing a server for a particular function.  In general server use, 127GB can usually accommodate a single-purpose or even a dual-purpose server.

    Thank You,
    Michael Champion
    Support Escalation Engineer

  • So what exactly is the CLIUSR account?

    From time to time, people stumble across the local user account called CLIUSR and wonder what it is. While you really don’t need to worry about it, we will cover it in this blog for the curious.

    The CLIUSR account is a local user account created by the Failover Clustering feature when it is installed on Windows Server 2012 or later. Well, that’s easy enough, but why is this account here? Taking a step back, let’s look at why we use this account.

    In the Windows Server 2003 and previous versions of the Cluster Service, a domain user account was used to start the Cluster Service. This Cluster Service Account (CSA) was used for forming the Cluster, joining a node, registry replication, etc. Basically, any kind of authentication that was done between nodes used this user account as a common identity.

    A number of support issues were encountered as domain administrators were pushing down group policies that stripped rights away from domain user accounts, not taking into consideration that some of those user accounts were used to run services. An example of this is the Logon as a Service right. If the Cluster Service account did not have this right, it was not going to be able to start the Cluster Service. If you were using the same account for multiple clusters, then you could incur production downtime across a number of critical systems. You also had to deal with password changes in Active Directory. If you changed the user accounts password in AD, you also needed to change passwords across all Clusters/nodes that use the account.

    In Windows Server 2008, we learned from this and redesigned the way we start the service to make it more resilient, less error-prone, and easier to manage. We started using the built-in Network Service account to start the Cluster Service. Keep in mind that this is not the full-blown account, just a reduced privilege set. Changing to this reduced account was a solution for the group policy issues.

    For authentication purposes, we switched over to using the computer object associated with the Cluster Name, known as the Cluster Name Object (CNO), as a common identity. Because the CNO is a machine account in the domain, its password is automatically rotated as defined by the domain’s policy (which is every 30 days by default).

    Great!! No more domain user account and password changes to account for. No more trying to remember which Cluster was using which account. Yes!! Ah, not so fast, my friend. While this solved some major pain, it did have some side effects.

    Starting in Windows Server 2008 R2, admins started virtualizing everything in their datacenters, including domain controllers. Cluster Shared Volumes (CSV) was also introduced and became the standard for private cloud storage. Some admins completely embraced virtualization and virtualized every server in their datacenter, going so far as to add domain controllers as virtual machines on a Cluster and use a CSV drive to hold the VHD/VHDX of the VM.

    This created a “chicken or the egg” scenario that many companies ended up in. In order to mount the CSV drive to get to the VMs, you had to contact a domain controller to authenticate the CNO. However, you couldn’t start the domain controller because it was running on the CSV.

    Having slow or unreliable connectivity to domain controllers also had an effect on I/O to CSV drives. CSV does intra-cluster communication via SMB, much like connecting to file shares. To connect with SMB, it needs to authenticate, and in Windows Server 2008 R2, that involved authenticating the CNO with a remote domain controller.

    For Windows Server 2012, we had to think about how we could take the best of both worlds and get around some of the issues we were seeing. We still use the reduced Network Service privilege to start the Cluster Service, but now, to remove all external dependencies, we have a local (non-domain) user account for authentication between the nodes.

    This local “user” account is not an administrative account or domain account. This account is automatically created for you on each of the nodes when you create a cluster or on a new node being added to the existing Cluster. This account is completely self-managed by the Cluster Service and handles automatically rotating the password for the account and synchronizing all the nodes for you. The CLIUSR password is rotated at the same frequency as the CNO, as defined by your domain policy (which is every 30 days by default). With it being a local account, it can authenticate and mount CSV so the virtualized domain controllers can start successfully. You can now virtualize all your domain controllers without fear. So we are increasing the resiliency and availability of the Cluster by reducing external dependencies.

    This account is the CLIUSR account and is identified by its description.


    One question that we get asked is if the CLIUSR account can be deleted. From a security standpoint, additional local accounts (not default) may get flagged during audits. If the network administrator isn’t sure what this account is for (i.e. they don’t read the description of “Failover Cluster Local Identity”), they may delete it without understanding the ramifications. For Failover Clustering to function properly, this account is necessary for authentication.


    1. The joining node starts the Cluster Service and passes the CLIUSR credentials across.

    2. Everything checks out, so the node is allowed to join.

    There is one extra safeguard built in to ensure continued success. If you accidentally delete the CLIUSR account, it will be recreated automatically when a node tries to join the Cluster.

    Short story… the CLIUSR account is an internal component of the Cluster Service. It is completely self-managing and there is nothing you need to worry about regarding configuring and managing it. So leave it alone and let it do its job.

    In Windows Server 2016, we will be taking this even a step further by leveraging certificates to allow Clusters to operate without any external dependencies of any kind. This allows you to create Clusters out of servers that reside in different domains or no domains at all. But that’s a blog for another day.

    Hopefully, this answers any questions you have regarding the CLIUSR account and its use.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • CROSS POST: How Shared VHDX Works on Server 2012 R2

    Not long ago, Matthew Walker published a blog post that, given its subject matter and popularity, we felt should also appear on the AskCore site. So we are cross-posting it here. Please keep in mind that the latest changes/updates will be in the original blog post.

    CROSS POST: How Shared VHDX Works on Server 2012 R2

    Hi, Matthew Walker here, I’m a Premier Field Engineer here at Microsoft specializing in Hyper-V and Failover Clustering. In this blog I wanted to address creating clusters of VMs using Microsoft Hyper-V with a focus on Shared VHDX files.

    From the advent of Hyper-V we have supported creating clusters of VMs; however, the means of adding shared storage has changed over time. In Windows Server 2008/R2 we only supported iSCSI for shared volumes, in Windows Server 2012 we added the capability to use virtual Fibre Channel and SMB file shares (depending on the workload), and finally in Windows Server 2012 R2 we added shared VHDX files.

    Shared Storage for Clustered VMs:

    Windows Version    iSCSI    Virtual Fibre Channel    SMB File Share    Shared VHDX
    2008/R2            Yes      No                       No                No
    2012               Yes      Yes                      Yes               No
    2012 R2            Yes      Yes                      Yes               Yes
    So this provides a great deal of flexibility when creating clusters that require shared storage with VMs. Not all clustered applications or services require shared storage, so review the requirements of your app first. Clusters that might require shared storage include file server clusters, traditional clustered SQL Server instances, and Distributed Transaction Coordinator (MSDTC) instances. Now to decide which option to use. All of these solutions work with live migration, but none of them work with items like VM checkpoints, host-based backups, or VM replication, so they are pretty even there. If there is an existing iSCSI or FC SAN infrastructure, then one of those two may make more sense, as it works well with the existing processes for allocating storage to servers. SMB file shares work well, but only for a few workloads, because the application has to support data residing on a UNC path. This brings us to Shared VHDX.

    Available Options:

    Hyper-V Capability    Shared VHDX used    iSCSI Drives    Virtual Fibre Channel Drives    SMB Shares used in VM    Non-Shared VHD/X used
    Host based backups    No                  No              No                              No                       Yes
    VM Replication        No                  No              No                              No                       Yes
    Live Migration        Yes                 Yes             Yes                             Yes                      Yes
    Shared VHDX files are attached to the VMs via a virtual SCSI controller, so they show up in the guest OS as shared SAS drives. They can be shared with more than two VMs, so you aren’t restricted to a two-node cluster. There are some prerequisites to using them, however.

    Requirements for Shared VHDX:

    2012 R2 Hyper-V hosts
    Shared VHDX files must reside on Cluster Shared Volumes (CSV)
    SMB 3.02

    It may be possible to host a shared VHDX on a vendor NAS if that appliance supports SMB 3.02 as defined in Windows Server 2012 R2. Just because a NAS supports SMB 3.0 is not sufficient; check with the vendor to ensure they support the shared VHDX components and that you have the correct firmware revision to enable that capability. Information on the different versions of SMB and their capabilities is documented in a blog by Jose Barreto that can be found here.
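    As a quick sanity check, you can see the SMB dialect actually negotiated with a file server from a host that already has a connection open to the share; a small sketch using the built-in SMB cmdlets:

    ```powershell
    # List active SMB connections and the dialect negotiated for each.
    # Shared VHDX on a NAS requires dialect 3.02 (the Windows Server 2012 R2 level).
    Get-SmbConnection | Select-Object ServerName, ShareName, Dialect
    ```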

    Adding Shared VHDX files to a VM is relatively easy: in the settings of the VM, simply select the check box under Advanced Features for the VHDX, as shown below.


    For SCVMM you have to deploy it as a service template and select to share the VHDX across the tier for that service template.


    And of course you can use PowerShell to create and share the VHDX between VMs.

    PS C:\> New-VHD -Path C:\ClusterStorage\Volume1\Shared.VHDX -Fixed -SizeBytes 30GB

    PS C:\> Add-VMHardDiskDrive -VMName Node1 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

    PS C:\> Add-VMHardDiskDrive -VMName Node2 -Path C:\ClusterStorage\Volume1\Shared.VHDX -ShareVirtualDisk

    Pretty easy right?

    At this point you can set up the disks as normal inside the VMs, add them to your cluster, and install whatever application is to be clustered. If you need to, you can add additional nodes to scale out your cluster.
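    Inside the guest, preparing the shared disk looks the same as for any other shared SAS disk. A minimal sketch, assuming the shared VHDX shows up as the first RAW disk, and run from only one of the clustered VMs:

    ```powershell
    # Initialize and format the new shared disk. Run on ONE node only;
    # the other clustered VMs will see the same disk once it is prepared.
    $disk = Get-Disk | Where-Object PartitionStyle -eq 'RAW' | Select-Object -First 1
    Initialize-Disk -Number $disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'ClusterData'
    ```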

    Now that things are all set up, let’s look at the underlying architecture to see how we can get the best performance from our setup. Before we get into the shared VHDX scenarios, we first need a brief detour into how CSV works in general. If you want a more detailed explanation, please refer to Vladimir Petter’s excellent blogs, starting with this one.


    This is a simplified diagram of the way we handle data flow for CSV. The main points are that access to the shared storage in this clustered environment is handled through the Cluster Shared Volume File System (CSVFS) filter driver and its supporting components; this system controls how we access the underlying storage. Because CSV is a clustered file system, we need this orchestration of file access. When possible, I/O travels a direct path to the storage, but if that is not possible, it is redirected over the network to a coordinator node. The coordinator node shows up in Failover Cluster Manager as the owner of the CSV.
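    You can see which node is currently the coordinator (owner) for each CSV from any cluster node; a quick sketch using the built-in cluster cmdlets:

    ```powershell
    # Show each Cluster Shared Volume and the node currently coordinating it.
    Get-ClusterSharedVolume | Select-Object Name, OwnerNode, State
    ```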

    With Shared VHDX we also have to orchestrate shared file access. To achieve this, all I/O requests for a shared VHDX are centralized and funneled through the coordinator node for that CSV. As a result, I/O from VMs on hosts other than the coordinator node is redirected to the coordinator. This is different from a traditional, non-shared VHD or VHDX file.
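    To check whether I/O from a given node is going direct or being redirected, Windows Server 2012 R2 includes a cmdlet for inspecting CSV state per node; a small sketch:

    ```powershell
    # For each node, show whether CSV I/O is Direct, FileSystemRedirected,
    # or BlockRedirected, along with the reason for any redirection.
    Get-ClusterSharedVolumeState |
        Select-Object Name, Node, StateInfo, FileSystemRedirectedIOReason
    ```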

    First, let’s look at this from the perspective of a Hyper-V compute cluster using a Scale-Out File Server as our storage. For the following examples I have simplified things down to two nodes and added a nice big red line to show the data path from the VM that currently owns our clustered workload. I am making some assumptions: the workload being clustered is configured Active/Passive with a single shared VHDX file, and we are only concerned with the data flow to that single file from one node or the other. For simplicity I have called the VMs Active and Passive just to indicate which one owns the Shared VHDX and is transferring I/O to the storage where the shared VHDX resides.


    So Node 1 in our Hyper-V cluster accesses the Shared VHDX over SMB and connects to the coordinator node of the Scale-Out File Server (SOFS) cluster. Now let’s move the active workload.


    Even when we move the active workload, SMB and the CSVFS drivers will still connect to the coordinator node in the SOFS cluster, so in this configuration our performance is going to be consistent. Ideally you should have high-speed connections between your SOFS nodes and on the network connections the Hyper-V compute nodes use to access the shares: 10 Gb NICs or even RDMA NICs. Some examples of RDMA NICs are InfiniBand, iWARP, and RDMA over Converged Ethernet (RoCE) NICs.

    Now let’s change things up a bit and move the compute onto the same servers that are hosting the storage.


    As you can see, access to the VHDX is sent through the CSVFS and SMB drivers to reach the storage, and everything works as we expect as long as the active VM of the clustered VMs is on the same node as the coordinator node of the underlying CSV. Now let’s look at how the data flows when the active VM is on a different node.


    Here things take a different path than we might expect. Since SMB and CSVFS are an integral part of ensuring properly orchestrated access to the Shared VHDX, the data is sent across the interconnects between the cluster nodes rather than straight down to storage. This can have a significant impact on your performance, depending on how you have scaled your connections.

    If the direct access to storage is a 4 Gb Fibre Channel connection and the interconnect between nodes is a 1 Gb connection, there is going to be a serious difference in performance when the active workload is not on the same node that owns the CSV. This is exacerbated when we have 8 Gb or 10 Gb bandwidth to storage while the interconnects between nodes are only 1 Gb. To mitigate this behavior, make sure to scale up your cluster interconnects to match, using options such as 10 Gb NICs, SMB Multichannel, and/or RDMA-capable devices that will improve the bandwidth between the nodes.
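    To verify what SMB actually has to work with between nodes, you can inspect the client NICs and live multichannel connections; a sketch using cmdlets available since Windows Server 2012:

    ```powershell
    # Show which client NICs SMB can use, their speed, and RDMA/RSS capability.
    Get-SmbClientNetworkInterface |
        Select-Object InterfaceIndex, LinkSpeed, RdmaCapable, RssCapable

    # Show active SMB Multichannel connections and the interfaces they use.
    Get-SmbMultichannelConnection
    ```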

    One final set of examples addresses scenarios where you may have an application active on multiple clustered VMs that are accessing the same Shared VHDX file. First, let’s go back to the separate compute and storage nodes.


    And now to show how it goes with everything all together in the same servers.


    So we can even implement a scale out file server or other multi-access scenarios using clustered VMs.

    So the big takeaway here is understanding the architecture: knowing when you will see certain types of performance, and how to set proper expectations based on where and how we access the final storage repository for the shared VHDX. By moving some of the responsibility for handling access to the VHDX to SMB and CSVFS, we get a more flexible architecture and more options; but without proper planning and an understanding of how it works, there can be significant differences in performance based on what type of separation there is between the compute side and the storage side. For the best performance, ensure you have high-speed, high-bandwidth interconnects from the running VM all the way to the final storage by using 10 Gb or RDMA NICs, and try to take advantage of SMB Multichannel.

    --- Matthew Walker