• Finally a Windows Task Manager Performance tab blog!

    Good morning AskPerf!  How many times have we looked at Windows Task Manager and wondered what the values on the Performance tab meant?  Why do they not add up?  What is the difference between Free and Available Memory, etc., etc., etc.?  In today’s post, we will take a look at these values and explain what each one means.

    Below is a screenshot of the Performance tab from a Windows 2008 R2 Server with 16GB RAM and a 16GB page file:



    Resource Monitor’s Memory tab looked like this:


    The Performance tab is divided into the following sections:

    • CPU Usage – Indicates the percentage of processor cycles that are not idle at the moment. If this graph displays a high percentage continuously and you cannot find any process consuming the CPU, the load could be due to interrupts/DPCs; use Process Explorer to get more details about them. It may also simply mean that the processor is overloaded. Depending on the number of CPUs on the system, multiple graphs (one per CPU) may be shown on the right.
    • CPU Usage History - Indicates how busy the processor has been. The graph only shows values since the time the Task Manager was opened.
    • Memory - Indicates the percentage of the physical memory that is currently being used.
    • Physical Memory Usage History - Indicates how much physical memory is being utilized. It also shows values since Task Manager was opened.
    • Physical Memory (MB) - Indicates the total and available physical memory, as well as the amount of memory used by system cache.
    • Kernel Memory (MB) - Indicates the memory used by the operating system and the drivers running in kernel mode (Paged and Non-paged pool).
    • System - Provides totals for the number of handles, threads, and processes currently running. A process is a single executable program. A thread is an object within a process that runs program instructions. A handle is a reference to a resource used by the operating system. A process may have multiple threads, each of which in turn may have multiple handles.

    We need to keep in mind that the Memory usage graph (shown in Windows Vista/2008/7/2008 R2) is the sum of all processes' private working sets.  On older operating systems (XP/2003), the PF Usage value shown is the total system commit.  This represents the potential page file usage, i.e., how much page file would be used if all the private committed virtual memory in the system had to be paged out to disk.
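    On Vista and later you can read the same system commit values from the Perfmon counters behind this graph. A quick sketch, assuming an English-language system (counter paths are localized):

```powershell
# Read the system commit charge and limit from the Performance Monitor
# counters (paths assume an English-language Windows installation).
$paths = '\Memory\Committed Bytes', '\Memory\Commit Limit'

(Get-Counter -Counter $paths).CounterSamples | ForEach-Object {
    # Convert bytes to MB for easier comparison with Task Manager.
    '{0} : {1:N0} MB' -f $_.Path, ($_.CookedValue / 1MB)
}
```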

    Now taking a detailed look at the Physical Memory section:

    • Total - This shows the total amount of RAM that is usable by the operating system. Note that there can be a difference between the installed RAM and the Total value due to the physical memory shadowing setting in the BIOS, memory mapped to PCI devices, etc. To learn more about the reasons for this difference, click here.
    • Cached - This represents the sum of the system working set, standby list, and modified page list. To find the matching counters in Perfmon, load up the following counters under the Memory object: Cache Bytes, Modified Page List Bytes, Standby Cache Core Bytes, Standby Cache Normal Priority Bytes, and Standby Cache Reserve Bytes.
    • Available - This is the amount of physical memory that is currently available for use by the operating system, drivers, and processes. It is equal to the sum of the standby, free, and zero page lists.
    • Free - This is the sum of the free pages and the zero page lists.
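    If you want to see the page lists that make up Cached, Available, and Free, the matching Perfmon counters can be sampled from PowerShell. A sketch, assuming Windows 7/2008 R2 or later and English-language counter names:

```powershell
# Sample the page-list counters that Task Manager sums into the
# Cached / Available / Free values (English-language counter paths).
$paths = '\Memory\Available MBytes',
         '\Memory\Free & Zero Page List Bytes',
         '\Memory\Standby Cache Core Bytes',
         '\Memory\Standby Cache Normal Priority Bytes',
         '\Memory\Standby Cache Reserve Bytes',
         '\Memory\Modified Page List Bytes',
         '\Memory\Cache Bytes'   # the system working set

(Get-Counter -Counter $paths).CounterSamples | ForEach-Object {
    '{0} = {1:N0}' -f $_.Path, $_.CookedValue
}
```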

    Under the Kernel Memory section, we have:

    • Paged - This is the currently used paged pool, in MB.
    • Nonpaged - This is the currently allocated nonpaged pool, in MB.

    For more details, click here.

    Here’s some information about the different states of a page in memory (Reference: Windows Internals, 5th Edition):

    • Active - (also called Valid) The page is either part of a working set (a process working set or the system working set) or not in any working set (for example, a nonpaged kernel page), and a valid PTE usually points to it.
    • Standby - The page previously belonged to a working set but was removed (or was prefetched directly into the standby list). The page hasn’t been modified since it was last written to disk. The PTE still refers to the physical page but is marked invalid and in transition.
    • Modified - The page previously belonged to a working set but was removed. However, the page was modified while it was in use and its current contents haven’t yet been written to disk or remote storage. The PTE still refers to the physical page but is marked invalid and in transition. It must be written to the backing store before the physical page can be reused.
    • Modified no-write - Same as a modified page, except that the page has been marked so that the memory manager’s modified page writer won’t write it to disk. The cache manager marks pages as modified no-write at the request of file system drivers. For example, NTFS uses this state for pages containing file system metadata so that it can first ensure that transaction log entries are flushed to disk before the pages they are protecting are written to disk.
    • Free - The page is free but has unspecified dirty data in it. These pages can’t be given as a user page to a user process without being initialized with zeros, for security reasons.
    • Zeroed - The page is free and has been initialized with zeros by the zero page thread (or was determined to already contain zeros).
    • ROM - The page represents read-only memory.
    • Bad - The page has generated parity or other hardware errors and can’t be used. This state is also used internally by the system for pages that may be transitioning from one state to another or are on internal look-aside lists.

    With that, we have come to the end of this post.  Please feel free to post additional questions below.  Until next time.


  • RD Licensing Configuration on Windows Server 2012

    Good morning AskPerf! Today we are going to discuss the steps to install and configure Windows Server 2012 Remote Desktop Services Licensing in your environment using the various available options.

    Adding a new License Server in a new Deployment

    Let us assume that you have already created a Remote Desktop Services deployment. You have a Session-based collection and a Virtual Desktop-based collection per your business requirements. Now you have introduced a new server into the domain that will serve as a License Server for Remote Desktop Services.

    Before you configure licensing on any RD Session Host or RD Virtualization Host server, the RD Licensing Diagnoser looks like the screenshot below. To open the RD Licensing Diagnoser, click Tools, point to Terminal Services, and then click RD Licensing Diagnoser.


    The image below shows that the RD Session Host server has neither a licensing mode configured nor a license server configured for it.

    In the RD Licensing Diagnoser Information section, two warnings are shown: 
    1. The licensing mode for the Remote Desktop Session Host server is not configured.
    2. The Remote Desktop Session Host server is within its grace period, but the RD Session Host server has not been configured with any license server.


    Configuring Windows Server 2012 Remote Desktop Services Licensing is a two-step process.  

    Note Make sure that the new License Server is already added to the Server Pool on the RD Connection Broker Server before you add it to the deployment.
    1. Configuring the Deployment Settings
    a. On the Server Manager RDMS console Overview page, click the RD Licensing icon in the deployment diagram to add a license server that has already been joined to the domain


    b. In the ‘Add RD Licensing Servers’ applet, choose the server that you want to add to the deployment from the server pool and click Next


    c. On the Confirmation page, click Add

    d. If the RD Licensing role service is not already installed, the wizard will install the role, reboot the system if required, and add it to the deployment.


    e. Once done, the Overview page will look like this


    Adding the license server to the deployment will not automatically configure the RD Session Host or RD Virtualization Host servers with the licensing mode, nor will it point them to the license server you just added. To configure them, follow the steps below.

    2. Configuring the Licensing Mode.
    a. On the deployment Overview page, select Tasks and click ‘Edit Deployment Properties’


    b. In the ‘Deployment Properties’ applet, click the ‘RD Licensing’ page. Here you will see that the license server has already been added; however, the licensing mode is not selected. Choose the appropriate licensing mode, then click Apply and OK to exit the wizard.


    c. At this stage the license server is installed, added to the deployment, and the mode is configured. However, the licenses are yet to be installed. On the RD Session Host or RD Virtualization Host server, the Licensing Diagnoser will show up as below


    d. Once you have installed the required licenses and activated the license server, the console will look something like this


    e. Also make sure to check the license configuration and that there are no warnings. The license server should be a member of the ‘Terminal Server License Servers’ group in Active Directory Domain Services.
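    If the RSAT Active Directory module is available, you can verify that group membership from PowerShell; a sketch (the group name is the built-in one, and the output should include your license server's computer account):

```powershell
# Requires the ActiveDirectory module (part of RSAT).
Import-Module ActiveDirectory

# List the members of the built-in licensing group; the license server's
# computer account should appear here.
Get-ADGroupMember -Identity 'Terminal Server License Servers' |
    Select-Object Name, objectClass
```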



    f. If you rerun the Diagnoser on the RD Session Host server, you will see that the server now recognizes the license server and the CAL type.


    Adding an existing License Server in a new RDS deployment

    In this scenario, let us assume that you already have an existing license server with all the required licenses installed. You have just created an RDS deployment and a collection, and you now want to use the same license server for the new deployment.

    The steps are exactly the same as “2. Configuring the Licensing Mode” above.

    In the ‘Deployment Properties’ applet, click the ‘RD Licensing’ page. In the text box, specify the license server name as a complete FQDN and click Add. Choose the appropriate licensing mode, ‘Per Device’ or ‘Per User’. Click Apply and OK to exit the wizard.



    The rest of the steps are similar and should be followed as applicable.

    Configuring License server manually

    There might be a situation where you want to configure the license server on the RD Session Host or RD Virtualization Host manually because you do not have an RD Connection Broker in your environment. You have already configured the RD Session Host or RD Virtualization Host server as required, and the license server is already installed and configured with licenses. All that is left to do is configure the license server and the licensing mode on the corresponding RD Session Host or RD Virtualization Host servers.

    Note The following commands must be run from an elevated (administrative) PowerShell prompt.

    To configure the license server on RDSH/RDVH:

    $obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting

    $obj.SetSpecifiedLicenseServerList("License")


    Note “License” is the name of the license server in this environment; substitute your own server name.

    To verify the license server configuration on RDSH/RDVH:

    $obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting

    $obj.GetSpecifiedLicenseServerList()


    To change the licensing mode on RDSH/RDVH:

    $obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting

    $obj.ChangeMode(value) – where value is 2 for Per Device or 4 for Per User

    To validate the licensing mode:

    $obj = gwmi -namespace "Root/CIMV2/TerminalServices" Win32_TerminalServiceSetting

    $obj.LicensingType


    Configuring license server using Group Policy

    Per your design requirements, you can also configure the license server using Group Policy.
    The policy settings are located here:

    Computer Configuration\Policies\Administrative Templates\Windows Components\Remote Desktop Services\Remote Desktop Session Host\Licensing\

    “Use the specified Remote Desktop license servers” – Provide the FQDNs of the license servers to use

    “Set the Remote Desktop licensing mode” – Specify the ‘Per User’ or ‘Per Device’ licensing type.
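    These two policies are reflected under the Terminal Services policy key in the registry. A sketch for checking them from a command prompt; the value names below are what I would expect on a default build, so verify them against the policy documentation for your OS version:

```shell
:: "Use the specified Remote Desktop license servers" (comma-separated list).
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v LicenseServers

:: "Set the Remote Desktop licensing mode" (2 = Per Device, 4 = Per User).
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v LicensingMode
```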

    Known issue with RD Licensing Diagnoser:

    You may receive an error “Licenses are not available for this Remote Desktop Session Host server, and RD Licensing Diagnoser has identified licensing problems for the RD Session Host Server”

    The RD Licensing Diagnoser Information section will show the possible cause and its remediation.


    To make sure that the License Diagnoser runs successfully, you need administrator privileges on the license server.


    Additional Resources

  • Event ID 157 "Disk # has been surprise removed"

    Hello, my name is Bob Golding and I would like to share information on a new error you may see in the system event log. It is Event ID 157 "Disk <n> has been surprise removed" with Source: disk.  This error indicates that the CLASSPNP…
  • Removing .NET Framework 4.5/4.5.1 removes Windows 2012/2012R2 UI and other features

    This is Vimal Shekar and Krishnan Ayyer from the Windows Support team. Today we will discuss an issue that we are increasingly seeing reported in support: the effects of removing the .NET Framework from a Windows Server 2012/2012 R2 installation.

    Windows Server 2012 includes .NET Framework 4.5 and Windows Server 2012 R2 includes .NET Framework 4.5.1. The .NET Framework provides a comprehensive and consistent programming model to build and run applications (including Roles and Features) that are built for various platforms. Windows Explorer (the graphical shell), Server Manager, Windows PowerShell, IIS, ASP.NET, Hyper-V, etc., are all dependent on the .NET Framework. Since multiple OS components depend on the .NET Framework, this feature is installed by default.  Therefore, you do not have to install it separately.

    Uninstalling the .NET Framework is not recommended.  However, in some circumstances there may be a requirement to remove and re-install the .NET Framework on Windows Server 2012/2012 R2.

    When you uncheck the .NET Framework 4.5 checkbox in the Remove Roles and Features Wizard of Server Manager, Windows checks for other installed roles and features that depend on .NET and would need to be removed as well.  If there are such dependent roles or features, they are listed in an additional window.

    For Example:



    If you read through the list, the components that are affected by this removal are listed as follows:

    1. .NET Framework 4.5 Features
    2. RSAT (Remote Server Administration Tools), which includes the Hyper-V Management tools and Hyper-V GUI,
    3. User Interfaces and Infrastructure, which includes Graphical Management Tools and Infrastructure and the Server Graphical Shell (Full Shell and Minimal Shell),
    4. Windows PowerShell, which removes PowerShell 4.0 and the ISE completely

    The list of components may differ depending on the roles and features installed on the server.  If you use DISM.EXE commands to remove the .NET feature, you may not see such a list at all.  If you use the following PowerShell command to remove the .NET feature, you will not get the list either:

    Uninstall-WindowsFeature Net-Framework-45-Features

    If you use the Remove-WindowsFeature PowerShell cmdlet, you can add the –WhatIf switch to see the list of features that would also be impacted:

    Remove-WindowsFeature Net-Framework-45-Features –WhatIf

    Unfortunately, we all get in a hurry sometimes, and we do not read through the list before clicking “Remove Features”. If you notice, “Server Graphical Shell” and “Graphical Management Tools and Infrastructure” are part of the features being removed.

    Here is a sample output from running Remove-WindowsFeature Net-Framework-45-Features -WhatIf. Again you will see that removing .Net Framework will effectively also remove the following:


    The two key features that I wanted to point out are:

    [User Interfaces and Infrastructure] Server Graphical Shell

    [User Interfaces and Infrastructure] User Interfaces and Infrastructure

    As stated earlier, this will leave the server without a graphical shell for user interaction. Only the command prompt will be available post reboot.

    If you get into this situation, run the below commands in the Server Core’s command prompt window to help you recover:

    DISM.exe /online /enable-feature /all /featurename:NetFx4
    DISM.exe /online /enable-feature /all /featurename:MicrosoftWindowsPowerShell

    The above commands will re-install .Net 4.0 and PowerShell on the server. Once PowerShell is installed, you can add the Graphical Shell (Windows Explorer) using the following command:

    Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra

    Once the GUI Shell is installed, you will need to restart the server with the following command:

    Restart-Computer

    Remove-WindowsFeature and Uninstall-WindowsFeature are aliases.  The –WhatIf parameter shows what would occur if the command were run, but does not execute the command. 

    We hope this information was helpful.

    Vimal Shekar
    Escalation Engineer
    Microsoft Support

    Krishnan S Ayyer
    Technical Advisor
    Microsoft Support

  • What's New in Defrag for Windows Server 2012/2012R2

    Hello everyone, I am Palash Acharyya, Support Escalation Engineer with the Microsoft Platforms Core team. In the past decade we have come a long way from Windows Server 2003 all the way to Windows Server 2012 R2. The operating system has changed significantly, and we have added or modified a lot of features. One of these is disk defragmentation, and I am going to talk about it today.

    Do I need Defrag?

    To put this short and simple, defragmentation is a housekeeping job done at the file system level to curtail the constant growth of file system fragmentation. We have come a long way from the Windows XP/2003 days, when there was a GUI for defragmentation that showed the fragmentation on a volume. Disk fragmentation is a slow, ongoing phenomenon that occurs when a file is broken up into pieces to fit on a volume. Since files are constantly being written, deleted, resized, and moved from one location to another, fragmentation is a natural occurrence. When a file is spread out over several locations, it takes longer for the disk to complete a read or write IO. So, from a disk IO standpoint, is defrag necessary for better throughput? Consider backup: when Windows Server Backup (or a 3rd-party backup solution that uses VSS) runs, it needs a minimum differential area, or MinDiffArea, to prepare a snapshot. You can query this area using the vssadmin list shadowstorage command (for details, read here). The catch is, there needs to be a chunk of contiguous free space without file fragmentation. The minimum requirement for the MinDiffArea is mentioned in the article quoted above.

    Q. So, do I need to run defrag on my machine?

    A. You can use the Sysinternals tool Contig.exe to check the fragmentation level before deciding to defrag. The tool is available here. Below is an example of the output:
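    A sketch of how that analysis might be run from an elevated command prompt (the target path is illustrative; -a asks Contig to analyze rather than defragment, and -s recurses subdirectories):

```shell
:: Report fragmentation for all files on C: without changing anything.
contig.exe -a -s C:\*
```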


    There are 13,486 fragments in all, so should I be bothered about it? Well, the answer is NO. Why?

    Here you can clearly observe that I have 96GB of free space on the C: volume, of which the largest free space block (largest contiguous free space block) is approximately 54GB. So my data is not scattered across the entire disk. In other words, my disk is not getting hammered during read/write IO operations, and running defrag here would be useless.

    Q. Again, coming back to the previous question, is defrag at all necessary?

    A. Well, it depends. We can only justify the need for defrag if fragmentation is causing serious performance issues; otherwise it is not worth the cost. We need to understand that file fragmentation is not always, or solely, responsible for poor performance. For example, there could be many files on a volume that are fragmented but are not accessed frequently. The only way to tell if you need defrag is to measure your workload and see whether fragmentation is causing slower and slower performance over time. If you determine that fragmentation is a problem, then you need to weigh how effective running defrag for an extended period will be against its overall cost. Here, cost means the amount of work the operating system has to do to run the task. In other words, any improvement you see comes at the cost of defrag running for a period of time and possibly interrupting production workloads. For the situation where you need to run defrag to unblock backup, our prescriptive guidance is to run defrag when a user encounters that error due to unavailability of contiguous free space. I wouldn’t recommend running defrag on a schedule unless the backups are critical and consistently failing for the same reason.

    A look at Windows Server 2008R2:

    Defragmentation ran in Windows Server 2008/2008 R2 as a weekly scheduled task. This is how it looked:


    The default options:


    What changed in Server 2012:

    There have been some major enhancements and modifications in the functionality of defrag in Windows server 2012. The additional parameters which have been added are:

    /D     Perform traditional defrag (this is the default).

    /K     Perform slab consolidation on the specified volumes.

    /L     Perform retrim on the specified volumes.

    /O     Perform the proper optimization for each media type.

    The default scheduled task in Windows Server 2008 R2 was defrag.exe –c, which defragments all volumes. This was volume-centric, meaning the physical aspects of the storage (whether it’s a SCSI disk, a RAID array, a thinly provisioned LUN, etc.) were not taken into consideration. This has changed significantly in Windows Server 2012. Here the default scheduled task is defrag.exe –c –h –k, which performs slab consolidation on all volumes at normal priority (the default being low). To understand slab consolidation, you need to understand the storage optimization enhancements in Windows Server 2012, which are explained in this blog.
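    You can inspect the built-in task and its command line yourself; a sketch, assuming the default task path of a standard installation:

```shell
:: Show the scheduled defrag task, including the exact defrag.exe arguments.
schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /v /fo LIST
```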

    So what does Storage Optimizer do?

    The Storage Optimizer in Windows 8/Server 2012 also takes care of maintenance activities like compacting data and compacting file system allocations to enable capacity reclamation on thinly provisioned disks. This is platform specific: if your storage platform supports it, Storage Optimizer will consolidate lightly used ‘slabs’ of storage and release the freed slabs back to your storage pool for use by other Spaces or LUNs. This activity is done on a periodic basis, without any user intervention, and the scheduled task completes provided it is not interrupted by the user. I am not getting into storage spaces and storage pools, as that would further lengthen this topic; refer to the TechNet Storage Spaces overview for details.

    This is how Storage Optimizer looks:


    This is how it looks after I click Analyze:


    For thin-provisioned storage, this is how it looks:


    The fragmentation percentage shown above is file-level fragmentation, NOT to be confused with storage optimization. In other words, if I click the Optimize option, it will do storage optimization depending on the media type. In Fig 5. you might observe fragmentation on volumes E: and F: (I manually created file system fragmentation there). If I manually run defrag.exe –d (traditional defrag) in addition to the default –o (perform optimization), they won’t contradict each other, because storage optimization and slab consolidation don’t work at the file system level the way traditional defrag does. These options really show their potential in hybrid storage environments consisting of Storage Spaces, pools, tiered storage, etc. In brief: the default scheduled defrag task in Server 2012 and Server 2012 R2 does not do a traditional defrag job (defragmentation at the file system level) the way Windows Server 2008/2008 R2 did. To do a traditional defragmentation of these volumes, run defrag.exe –d, and before you do, determine whether it is required at all.

    Q. So why did we stop the default file system defragmentation or defrag.exe -d?

    A. Simple: it didn’t justify the cost and effort to run a traditional file system defragmentation as a weekly scheduled task. When we talk about storage solutions holding terabytes of data, a traditional defrag (default file system defragmentation) takes a long time and also affects the server’s overall performance.

    What changed in Server 2012 R2:

    The only addition in Windows Server 2012 R2 is the switch below:

    /G     Optimize the storage tiers on the specified volumes.

    Storage tiers, a new feature in Windows Server 2012 R2, allow SSD and hard drive storage to be used within the same storage pool. This new switch allows optimization in a tiered layout. To read more about tiered storage and how it is implemented, please refer to these articles:

    Storage Spaces: How to configure Storage Tiers with Windows Server 2012 R2

    What's New in Storage Spaces in Windows Server 2012 R2


    In brief, we need to keep these things in mind:

    1. The default scheduled task for defrag is as follows:

    Ø Windows Server 2008R2: defrag.exe –c

    Ø Windows Server 2012: defrag.exe –c –h –k

    Ø Windows Server 2012 R2: defrag.exe –c –h –k –g

    On a client machine it will be defrag.exe –c –h –o; however, if a thin provisioned media is present, defrag will do slab consolidation as well.

    2. The command line –c –h –k (for 2012) and –c –h –k –g (for 2012 R2) for the defrag task will perform storage optimization and slab consolidation on thin provisioned media as well. Different virtualization platforms may report things differently; for example, Hyper-V shows the media type as Thin Provisioned, but VMware shows it as a hard disk drive. The fragmentation percentage shown in the defrag UI has nothing to do with slab consolidation; it refers to the file fragmentation of the volume.  If you want to address file fragmentation, you must run defrag with –d (as mentioned earlier).

    3. If you are planning to deploy a PowerShell script to achieve the same, the command is simple.

    PS C:\> Optimize-Volume -DriveLetter <drive letter name> -Defrag -Verbose

    Details of all PowerShell cmdlets can be found here.
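    The defrag.exe switches discussed above have direct Optimize-Volume equivalents; a sketch (drive letter C is illustrative, and the slab/tier switches require appropriate storage):

```powershell
# Analyze first to see whether file fragmentation is even worth addressing.
Optimize-Volume -DriveLetter C -Analyze -Verbose

# Traditional file-level defrag (equivalent to defrag.exe -d).
Optimize-Volume -DriveLetter C -Defrag -Verbose

# Slab consolidation on thinly provisioned media (equivalent to defrag.exe -k).
Optimize-Volume -DriveLetter C -SlabConsolidate -Verbose

# Retrim on the volume (equivalent to defrag.exe -l).
Optimize-Volume -DriveLetter C -ReTrim -Verbose

# Storage tier optimization on 2012 R2 (equivalent to defrag.exe -g).
Optimize-Volume -DriveLetter C -TierOptimize -Verbose
```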

    That’s all for today, till next time.

    Palash Acharyya
    Support Escalation Engineer
    Microsoft Platforms Support

  • Windows Dynamic Cache Service Updated

    Good morning AskPerf! This is a quick blog to inform you that you no longer have to contact Microsoft Technical Support to obtain the Dynamic Cache Service for Windows Server 2008 R2. It is now freely available to download from the following link:

    Microsoft Windows Dynamic Cache Service


    Additional Resources


  • Unable to connect to a printer using a CNAME record

    Good morning AskPerf!  My name is Sandeep Bhatia and I work with the Networking team here in Microsoft Support.  In today’s post, we will discuss print issues when using a CNAME on a Windows 2008 R2 server with non-Microsoft DNS servers.

    When you connect to a printer hosted on a Windows 2008 R2 server using a CNAME alias, it returns the following error:

    Operation could not be completed (Error 0x00000709). Double check the printer name and make sure that the printer is connected to the network.


    This error is returned because of optimization changes to the spooler service in Windows 2008 R2.  The Print Spooler service uses local names to service requests.  In this scenario we have verified that the name being used is correct and that we can connect using the NetBIOS name, FQDN, and IP address of the server.

    Step one is to make sure the target print server has the DNSOnWire registry key set to 1:

    HKLM\SYSTEM\CurrentControlSet\Control\Print\DNSOnWire (REG_DWORD)

    More details about this registry key are available in KB979602

    However, if the DNS server that the print server is using is not a Windows-based DNS server, we could still see a similar error because of how the DNS server formats its reply.  When the DNSOnWire registry value is set to 1, the print server at startup sends a recursive DNS query expecting to get back both the host (A) record that the CNAME refers to and the IP address of the host.
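    You can approximate the query the spooler sends with nslookup. A sketch: QTYPE 255 (shown as "ALL" in the trace) is queried as ANY, and the alias name below is illustrative:

```shell
:: Send a recursive query for all record types for the printer alias.
nslookup -type=ANY printalias.contoso.com
```

A compliant server should return both the CNAME record and the host's A record in the same response.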

    A sample DNS request would look something like this: DNS DNS:QueryId = 0x1389, QUERY (Standard query),

    This queries for the alias name, of type ALL, on class Internet.

    When the type is set to ALL, the client expects all the information about the record in one packet.  This is also a recursive query to the DNS server for the name.

    The second step is to make sure the DNS server supports both a query type of ALL and recursive queries. The DNS server should be compliant with RFC 1035.

    In this example of a non-compliant DNS response, the DNS server did not respond with the IP address of the print server.  It does send back the CNAME entry, which points to the print server’s host record, but the expectation is that both should be returned: DNS DNS:QueryId = 0x1389, QUERY (Standard query), Response - Success, Array[IP Address Of the DNS Servers]  {DNS:242, UDP:241, IPv4:240}

      - Flags:  Response, Opcode - QUERY (Standard query), AA, RD, RA, Rcode - Success

         RD:                (.......1........) Recursion desired

      - ARecord: of type CNAME on class Internet:


        ResourceType: CNAME, Canonical name for an alias, 5(0x5)

         ResourceClass: Internet, 1(0x1)

         TimeToLive: 1800 (0x708)

         ResourceDataLength: 15 (0xF)


      + AuthorityRecord: of type NS on class Internet:

      + AdditionalRecord: of type Host Addr on class Internet:


    Under an ideal scenario, the reply for a recursive query from the DNS Server should look more like: DNS DNS:QueryId = 0x1389, QUERY (Standard query), Response - Success, Array[IP Address Of the DNS Servers]  {DNS:242, UDP:241, IPv4:240}

      + Flags:  Response, Opcode - QUERY (Standard query), AA, RD, RA, Rcode - Success

      - QRecord: of type ALL on class Internet


         QuestionType: A request for all records, 255(0xff)

         QuestionClass: Internet, 1(0x1)

      - ARecord: of type CNAME on class Internet:


         ResourceType: CNAME, Canonical name for an alias, 5(0x5)

         ResourceClass: Internet, 1(0x1)

         TimeToLive: 3600 (0xE10)

         ResourceDataLength: 15 (0xF)


      - AdditionalRecord: of type Host Addr on class Internet:


         ResourceType: A, IPv4 address, 1(0x1)

         ResourceClass: Internet, 1(0x1)

         TimeToLive: 1200 (0x4B0)

         ResourceDataLength: 4 (0x4)


    The key takeaway is that the configured DNS server must return both the CNAME information and the IP address of the host in the same response in order for printing to a CNAME to work successfully.

    -Sandeep Bhatia

    Additional Resources:

  • What’s New in Windows Servicing: Part 1

    My name is Aditya and I am a Senior Support Escalation Engineer on the Windows Core Team at Microsoft. I am writing today to shed some light on the new changes that have been made to the Windows servicing stack in Windows 8.1 and Windows Server 2012 R2. This is the first post in a four-part series:

    Windows 8.1 brings in a lot of new features to improve stability, reduce space usage, and keep your machine up to date. This blog series will cover each of these new features in detail and walk through some of the troubleshooting steps to follow when you run into a servicing issue.

    What is Servicing and the Servicing Stack: From Windows Vista onward, Windows uses a mechanism called Servicing to manage operating system components, rather than the INF-based installation methods used by previous Windows versions. With Windows Vista and Windows Server 2008, component-based builds use images to deploy component stores to the target machine rather than individual files. This design allows installation of additional features and fixes without prompting for media, enables building different operating system versions quickly and easily, and streamlines all operating system servicing.

    Within the servicing model, the update process for Vista and later operating systems represents a significant advance over the update.exe model used in previous operating systems. Although update.exe had many positive features, it also had numerous issues, the foremost of which was the requirement to ship the update.exe engine with each package.

    Servicing is simplified by including the update engine, in the form of the servicing stack, as part of the operating system. The servicing stack files are located in the C:\Windows\WinSxS folder.


    This folder can grow very large on Windows Server 2008 and Windows Server 2008 R2 systems; more information on why this happens can be found at:

    What is the WINSXS directory in Windows 2008 and Windows Vista and why is it so large?

    What’s new in Windows 8.1 and Server 2012 R2:

    1. Component Store Analysis Tool:

    A new feature has been added to the DISM command that will allow users to get detailed information on the contents of the Component Store (WinSxS folder).

    There have been many users, mainly power users and IT admins of Windows, who have raised concerns about the size of the WinSxS store and why it occupies so much space on the system. These users also complain that WinSxS grows over time and want to know how its size can be reduced; many have asked what happens if the WinSxS store is deleted completely. There have been multiple attempts in the past to explain what the WinSxS store contains and what its actual size is. For this OS release, a reporting tool has been created that a power user can run to find out the actual size of the WinSxS store as well as get more information about the contents of the store. This is in addition to the article we will be publishing to help users understand how WinSxS is structured, and what its actual size is as compared to its perceived size.

    The purpose of this feature is two-fold. First, is to educate power users and IT Admins of Windows about what WinSxS is, what it contains and its importance to the overall functioning of the OS. Second, this feature will deliver a tool via the DISM functionality to analyze and report a specific set of information about the WinSxS store for power users.

    From various forums and blog posts, there seem to be two main questions that users have:

    · Why is WinSxS so massive?

    · Is it possible to delete WinSxS in part or completely?

    In addition to this, OEMs have questions about how they can clean up unwanted package store content, servicing logs, etc. from the image.

    Based on these questions, we felt that the most important metric for our tool would be the actual size of WinSxS. Secondly, it would be good to report packages that are reclaimable so that a user can run StartComponentCleanup to scavenge them. Lastly, for devices like the Microsoft Surface, which remain on connected standby, it is possible that the system has never scavenged the image. In that case, considering that these tablets have small disks, it becomes important to let users know when the image was last scavenged and whether scavenging is recommended for their device.

    We expect the amount of time for completion of the analysis to be somewhere between 40 and 90 seconds on a live system. In this scenario, there needs to be some indication of progress made visible to the user. We will use the existing progress UI of DISM to indicate the % of analysis completed to the user. The user will also get the option to cancel out of the operation through the progress UI.

    The following steps describe the end to end flow of using the component store analysis tool:

    · The user launches an elevated command prompt by typing Command Prompt on the Start screen.

    · The user types in the DISM command:

    Dism.exe /Online /Cleanup-image /AnalyzeComponentStore

    At the end of the scan, the user gets a report of the results like this:
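Because the report is plain text, its key figures can be pulled out with a short script. A sketch (the field names below are assumptions based on the Windows 8.1 report layout and may vary by build or locale; the sample text is illustrative, not captured output):

```python
# Sketch: extract "key : value" fields from saved output of
# Dism /Online /Cleanup-Image /AnalyzeComponentStore.
report = """\
Windows Explorer Reported Size of Component Store : 4.98 GB
Actual Size of Component Store : 4.88 GB
Component Store Cleanup Recommended : Yes
"""

def parse_report(text):
    fields = {}
    for line in text.splitlines():
        if " : " in line:
            key, value = line.split(" : ", 1)
            fields[key.strip()] = value.strip()
    return fields

info = parse_report(report)
print(info["Component Store Cleanup Recommended"])  # -> Yes
```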


    2. Component Store Cleanup:

    The Component Store Cleanup functionality is one of several features aimed at reducing the overall footprint and footprint growth of the servicing stack. Reducing the footprint of Windows is important for many reasons, including providing end users more available disk capacity for their own files, and improving performance for deployment scenarios.

    Component Store Cleanup in Windows 8 was integrated into the Disk Cleanup Wizard. It performs a number of tasks, including removing update packages that contain only superseded components, and compressing un-projected files (such as optional components, servicing stack components, etc.). For Windows 8.1, we will add the capability to perform deep clean operations without requiring a reboot.

    Today, Component Store Cleanup must be triggered manually by an end-user, either by running DISM, or by using the Disk Cleanup Wizard. In order to make Component Store Cleanup more useful for the average end-user, it will be added into a maintenance task, automatically saving disk space for end-users. To enable this, a change will be made to allow uninstallation of superseded inbox drivers without requiring a reboot (today, all driver installs/uninstalls done by CBS require a reboot).

    The superseded package removal feature of deep clean attempts to maintain foot print parity between a computer that has been serviced regularly over time vs. a computer that has been clean installed and updated.

    2.1. How can Component Store Cleanup be initiated?

    Component Store Cleanup can be initiated in the following three ways:

    1. Dism.exe /online /Cleanup-Image /StartComponentCleanup


    2. Disk Cleanup wizard:

    a. To open Disk Cleanup from the desktop, swipe in from the right edge of the screen, tap Settings (or if you're using a mouse, point to the lower-right corner of the screen, move the mouse pointer up, and then click Settings), tap or click Control Panel, type Admin in the Search box, tap or click Administrative Tools, and then double-tap or double-click Disk Cleanup.

    b. In the Drives list, choose the drive you want to clean, and then tap or click OK.

    c. In the Disk Cleanup dialog, select the checkboxes for the file types that you want to delete, tap or click OK, and then tap or click Delete files.

    d. To delete system files:

    i. In the Drives list, tap or click the drive that you want to clean up, and then tap or click OK.

    ii. In the Disk Cleanup dialog box, tap or click Clean up system files. You might be asked for an admin password or to confirm your choice.

    iii. In the Drives list, choose the drive you want to clean, and then tap or click OK.

    iv. In the Disk Cleanup dialog box, select the checkboxes for the file types you want to delete, tap or click OK, and then tap or click Delete files.

    3. Automatically from a scheduled task:

    i. If Task Scheduler is not open, start the Task Scheduler. For more information, see Start Task Scheduler.

    ii. Expand the console tree and navigate to Task Scheduler Library\Microsoft\Windows\Servicing\StartComponentCleanup.

    iii. Under Selected Item, click Run.


    The StartComponentCleanup task can also be started from the command line:

    schtasks.exe /Run /TN "\Microsoft\Windows\Servicing\StartComponentCleanup"

    For all three methods, an automatic scavenge will be performed after the disk cleanup in order to immediately reduce the disk footprint. When scavenge is performed for option 1, NTFS compression will not be used since it has a negative impact on capture and apply times, but Delta Compression will be used since it will help with both capture and apply. When run automatically for option 3, deep clean and the scavenge operation will be interruptible in order to maintain system responsiveness.

    2.2. What does Component Store Cleanup do?

    During automatic Component Store Cleanup, packages will be removed if the following criteria apply:

    · All components in the package are in a superseded state

    · Packages are not of an excluded class (permanent, LP, SP, foundation)

    · Package is older than the defined age threshold

    · Only packages that have been superseded for a specified number of days (default 30 days) will be removed by the automated deep clean task. In order to maintain system responsiveness, automatic Component Store Cleanup will perform package uninstall operations one at a time, checking to see if a stop has been requested between each package.

    · The Component Store Cleanup maintenance task will be incorporated into the component platform scavenging maintenance task. This task runs weekly, with a deadline of two weeks. This ensures that scavenging and deep clean processing happen relatively quickly after patches are released on Patch Tuesday.
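The automatic removal criteria above can be sketched as a small predicate. The 30-day default and the excluded classes come from the text; the function and parameter names are hypothetical:

```python
from datetime import date, timedelta

# Sketch of the automatic deep-clean eligibility rule: a package is
# removed only if all its components are superseded, it is not in an
# excluded class, and it is older than the age threshold (default 30 days).
AGE_THRESHOLD = timedelta(days=30)
EXCLUDED_CLASSES = {"permanent", "LP", "SP", "foundation"}

def eligible_for_removal(all_superseded, package_class, superseded_on, today):
    return (all_superseded
            and package_class not in EXCLUDED_CLASSES
            and today - superseded_on > AGE_THRESHOLD)

print(eligible_for_removal(True, "update", date(2014, 1, 1), date(2014, 3, 1)))  # -> True
```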

    Manual Component Store Cleanup

    During manual Component Store Cleanup, packages will be removed if the following criteria apply:

    · All components in package are in superseded state

    · Packages are not of an excluded class (permanent, LP, SP, foundation)

    The functionality for manual Component Store Cleanup largely already exists in Windows 8. To improve performance, manual deep clean will perform all package uninstall operations in a single KTM transaction, and it is not interruptible. Superseded packages are not subject to an age limit; instead, they are removed immediately.

    In the next blog in the series, we will discuss Delta Compression and Single Instancing…

    Senior Support Escalation Engineer
    Microsoft Platforms Support

  • WMI: How to troubleshoot High CPU Usage by WMI Components


    The Windows Management Instrumentation service (Winmgmt) or a WMI provider host (wmiprvse.exe) is consuming high amounts of CPU.

    Note that you may have already broken the WMI service out into its own process while troubleshooting your issue. By default, WMI runs in the main shared networking svchost process with several other services.

    If it is a svchost process showing high CPU usage, use Task Manager and add the PID column, then identify which svchost process has the high CPU usage. From a command prompt, type tasklist /svc, look for that PID, and identify whether a single service or multiple services are running in that svchost process. If there are multiple services, it may become necessary to break each service out to run in its own svchost process to determine whether it is the WMI service (winmgmt) that is causing the issue. In my experience it will be the WMI service more often than not, but not always. As such, I would suggest breaking it out first into its own process, and then monitoring to see if it is the one driving up CPU usage in the shared svchost process.

    If you suspect the WMI (Windows Management Instrumentation) service, you can break it out following directions below.

    Break WMI Service out into its own svchost process

    1. Open command prompt with elevated privileges
    2. Run following command: sc config winmgmt type= own
    3. Restart Wmi service
    4. Run sc query winmgmt to ensure the service type now reflects WIN32_OWN_PROCESS, indicating it is running in its own svchost process
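If you script this check, the service type can be read straight out of the `sc query` output. A sketch (the sample text is illustrative of sc.exe's layout, not captured output):

```python
# Sketch: confirm from "sc query winmgmt" output that the service is
# running in its own process (TYPE reports WIN32_OWN_PROCESS).
sample_output = """\
SERVICE_NAME: winmgmt
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 4  RUNNING
"""

def runs_in_own_process(sc_output):
    return any("WIN32_OWN_PROCESS" in line
               for line in sc_output.splitlines()
               if line.strip().startswith("TYPE"))

print(runs_in_own_process(sample_output))  # -> True
```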

    When the issue has been resolved, or you no longer need the service broken out into its own svchost process, place it back into the shared svchost process as follows:

    1. From an elevated command prompt, run: sc config winmgmt type= share
    2. Restart the service or machine, and verify the type reflects WIN32_SHARE_PROCESS when you run sc query winmgmt again

    Configure a Perfmon collection using the logman.exe method. Capture 15 minutes of data while the issue is occurring.

    Short, high-resolution log: 1-second interval with thread counters, 250 MB maximum

    1. Click Start, and in the Start Search box, enter "CMD.exe" without the quotation marks, and then press Enter.

    2. Copy and paste the following command into the command prompt window (if this does not work, you may need to manually type it in):

    Logman.exe create counter PerfLog-Short -o "c:\perflogs\PerfLog-Short" -f bincirc -v mmddhhmm -max 250 -c "\Cache\*" "\LogicalDisk(*)\*" "\Memory\*" "\Network Interface(*)\*" "\Paging File(*)\*" "\PhysicalDisk(*)\*" "\Processor(*)\*" "\Process(*)\*" "\Redirector\*" "\Server\*" "\System\*" "\Server Work Queues\*" "\Thread(*)\*" -si 00:00:01

    3. Start the log with:

    Logman.exe start PerfLog-Short

    4. Please stop the performance log as soon as the issue returns with the following command:

    Logman.exe stop PerfLog-Short

    Please note that if you reboot the server, you will need to start the logs again as they will not automatically restart on boot.

    Collect an Xperf trace for high CPU by using the Windows Performance Recorder from the Windows Performance Toolkit, which you can install from the ADK.

    Note: If the operating system is 64-bit, you must first apply the following registry setting before collecting the Xperf trace.

    Registry Path
    HKLM\System\CurrentControlSet\Control\Session Manager\Memory Management
    Data Type:

    NOTE: Setting this key is not needed on Windows Server 2012 and 2012 R2.

    Reboot the machine to place the registry setting into effect.

    1. Download the Windows 8 ADK (Windows Assessment and Deployment Kit) from here.
    2. Open adksetup.exe and click Next until you get to the option to select features
    3. Select "Windows Performance Toolkit" and click "Install"


    After installation has finished, start creating a trace by starting the "Windows Performance Recorder"


    Select CPU usage under Resource Analysis

    Logging mode can be left set to “Memory”, or you can change it to “File”. Just be conscious of your disk space if you choose “File”, as the .etl file can grow large quickly.

    Capture the high CPU occurrence, but do not let the recording run for more than 10 minutes.

    Immediately after capturing the event using Windows Performance Recorder (WPR), use ProcDump to capture dumps of the process exhibiting high CPU usage.

    1. Download the Windows Sysinternals tool ProcDump:

    2. Open a command prompt with elevated or administrative rights and change to the directory where you saved ProcDump

    3. Open Task Manager and add the PID column view, then go locate the instance of wmiprvse.exe with high cpu usage and note the PID. If it was the WMI service that had the high cpu, then you should already have it broken out to run in its own svchost process and note the PID of that svchost process. To confirm you have the right svchost process, you can run tasklist /svc from administrative command prompt and verify the PID noted in task manager and ensure it is the svchost process running winmgmt in it.

    4. Run the following command: procdump -ma -s 60 -n 3 <PID>

    Note: Replace <PID> with the actual PID you documented for the instance of wmiprvse.exe, or for the svchost process running winmgmt, that is exhibiting high CPU usage

    The above command will produce 3 dumps, spaced 1 minute apart, in the same directory you ran the procdump command from

    5. Download the latest version of the Windows Sysinternals tool Process Explorer.

    6. If it was wmiprvse.exe that had the high CPU usage, find that instance, right-click it, and bring up the Properties sheet. Click on the WMI Providers tab and document the listed providers

    At this point, you will need to open a Support Incident Case with Microsoft to get the data analyzed to determine the cause of the high CPU usage.

    Reference this blog when you open the Support Incident Case with Microsoft as it will help the engineer understand what actions have been taken or followed and will help us track the effectiveness of the blog.

    Next up:  WMI: How to Troubleshoot WMI High Handle Count

  • Failover Clustering and Active Directory Integration

    My name is Ram Malkani and I am a Support Escalation Engineer on Microsoft’s Windows Core team. I am writing to discuss how Failover Clustering is integrated with Active Directory on Windows Servers.

    Windows Server Failover Clustering has always had a very strong and cohesive attachment to Active Directory. We made considerable changes to how Failover Clustering integrates with AD DS as we progressed through new versions of clustering on Windows Server. Let us see the story so far:

    Windows Server 2003 and previous versions

    We needed a Cluster Service Account (CSA): a domain user whose credentials were used for the Cluster service and the clustered resources. This had its problems: changing the password for the account, rotating the passwords, etc. Later, we did add support for Windows Server 2003 clusters to use Kerberos authentication, which created objects in Active Directory.

    Windows Server 2008, 2008 R2

    We moved away from the CSA; instead, the cluster started using Active Directory computer objects associated with the Cluster Name resource (CNO) and Virtual Computer Objects (VCOs) for the other network names in the cluster. When a cluster is created, the logged-on user needs permissions to create the computer objects in AD DS, or you would ask the Active Directory administrator to pre-stage the computer object(s) in AD DS. Cluster communication between nodes also uses AD authentication.

    Windows Server 2012

    The same information provided for Windows Server 2008 and 2008 R2 applies; however, we included a feature improvement to allow cluster nodes to come up when AD is unavailable for authentication, allowing Cluster Shared Volumes (CSVs) to become available and the VMs (potentially Domain Controllers) on them to start. This was a major issue, as otherwise we had to have at least one available Domain Controller outside the cluster before the Cluster Service could start.


    What’s new with Clustering in Windows Server 2012 R2

    We have introduced a new mode for creating a Failover Cluster on Windows Server 2012 R2, known as an Active Directory-detached cluster. Using this mode, you no longer need to pre-stage these objects, and you can stop worrying about their management and maintenance. Cluster administrators no longer need to be wary of accidental deletions of the CNO or the Virtual Computer Objects (VCOs). The CNO and VCO names are instead registered in the Domain Name System (DNS).

    This feature provides greater flexibility when creating a Failover Cluster and enables you to choose to install clusters with or without AD integration. It also improves the overall resiliency of the cluster by reducing the dependencies on the CNO and VCOs, thereby reducing the points of failure in the cluster.

    Intra-cluster communication continues to use Kerberos for authentication; however, authentication of the CNO is done using NTLM authentication. Thus, remember that for any cluster roles that need Kerberos authentication, use of an AD-detached cluster is not recommended.


    Installing Active Directory detached Cluster

    First, make sure that the nodes running Windows Server 2012 R2 that you intend to add to the cluster are part of the same domain, and proceed to install the Failover Clustering feature on them. This is very similar to conventional cluster installs on Windows Server. To install the feature, you can use Server Manager to complete the installation.

    Server Manager can be used to install the Failover Clustering feature:

    Introducing Server Manager in Windows Server 2012

    We can alternatively use PowerShell (Admin) to install the Failover Clustering feature on the nodes.

    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    An important point to note is that the PowerShell cmdlet ‘Add-WindowsFeature’ is replaced by ‘Install-WindowsFeature’ in Windows Server 2012 R2. PowerShell does not install the management tools for the requested feature unless you specify ‘-IncludeManagementTools’ as part of your command.



    The Cluster command-line tool (CLUSTER.EXE) has been deprecated, but if you still want to install it, it is available in Server Manager under:
    Remote Server Administration Tools --> Feature Administration Tools --> Failover Clustering Tools --> Failover Cluster Command Interface


    The PowerShell (Admin) equivalent to install it:

    Install-WindowsFeature -Name RSAT-Clustering-CmdInterface

    Now that we have the Failover Clustering feature installed on our nodes, ensure that all hardware connected to the nodes passes the cluster validation tests, and then go on to create the cluster. You cannot create an AD-detached cluster from Failover Cluster Manager; the only way to create one is by using PowerShell.

    New-Cluster MyCluster -Node My2012R2-N1,My2012R2-N2 -StaticAddress <StaticIP> -NoStorage -AdministrativeAccessPoint DNS


    In my example above, I am using a static IP address (shown as the placeholder <StaticIP>), so one would need to be specified. If you are using DHCP for addresses, the “-StaticAddress <StaticIP>” switch would be excluded from the command.

    Once we have executed the command, we will have a new cluster named “MyCluster” with two nodes, “My2012R2-N1” and “My2012R2-N2”. When you look in Active Directory, there will not be a computer object created for the cluster “MyCluster”; however, you will see the record as the access point in DNS.



    For details on cluster roles that are not recommended or unsupported for AD detached Clusters, please read:

    Deploy an Active Directory-Detached Cluster

    That’s it! Thank you for your time.

    Ram Malkani
    Support Escalation Engineer
    Windows Core Team

  • Configuring Windows Failover Cluster Networks

    In this blog, I will discuss the overall general practices to be considered when configuring networks in Failover Clusters.

    Avoid single points of failure:

    Identifying single points of failure and configuring redundancy at every point in the network is very critical to maintain high availability. Redundancy can be maintained by using multiple independent networks or by using NIC Teaming. Several ways of achieving this would be:

    · Use multiple physical network adapter cards. Using multiple ports of the same multiport card, or the same backplane, for different networks introduces a single point of failure.

    · Connect network adapter cards to different independent switches. Multiple VLANs patched into a single switch introduce a single point of failure.

    · Use NIC teaming for otherwise non-redundant networks, such as client connectivity, intra-cluster communication, CSV, and Live Migration. In the event of a failure of the currently active network card, communication moves over to the other card in the team.

    · Using different types of network adapters helps avoid losing connectivity across all network adapters at the same time if there is an issue with a NIC driver.

    · Ensure upstream network resiliency to eliminate a single point of failure between multiple networks.

    · The Failover Clustering network driver detects networks on the system by their logical subnet. It is not recommended to assign more than one network adapter per subnet, including IPv6 link-local, as only one card would be used by the cluster and the others ignored.

    Network Binding Order:

    The Adapters and Bindings tab lists the connections in the order in which they are accessed by network services. The order of these connections reflects the order in which generic TCP/IP calls/packets are sent onto the wire.

    How to change the binding order of network adapters

    1. Click Start, click Run, type ncpa.cpl, and then click OK. You can see the available connections in the LAN and High-Speed Internet section of the Network Connections window.
    2. On the Advanced menu, click Advanced Settings, and then click the Adapters and Bindings tab.
    3. In the Connections area, select the connection that you want to move higher in the list. Use the arrow buttons to move the connection. As a general rule, the card that talks to the network (domain connectivity, routing to other networks, etc.) should be the first bound card (at the top of the list).

    Cluster nodes are multi-homed systems.  Network priority affects DNS Client for outbound network connectivity.  Network adapters used for client communication should be at the top in the binding order.  Non-routed networks can be placed at lower priority.  In Windows Server 2012/2012R2, the Cluster Network Driver (NETFT.SYS) adapter is automatically placed at the bottom in the binding order list.

    Cluster Network Roles:

    Cluster networks are automatically created for all logical subnets connected to all nodes in the cluster. Each network adapter card connected to a common subnet will be listed in Failover Cluster Manager. Cluster networks can be configured for the following uses:

    · Disabled for cluster communication: No cluster communication of any kind is sent over this network.

    · Enabled for cluster communication only: Internal cluster communication and CSV traffic can be sent over this network.

    · Enabled for client and cluster communication: Cluster IP address resources can be created on this network for clients to connect to. Internal and CSV traffic can also be sent over this network.

    Automatic configuration

    The network roles are automatically configured during cluster creation, as follows.

    Networks used for iSCSI communication with iSCSI software initiators are automatically disabled for cluster communication (Do not allow cluster network communication on this network).

    Networks configured without a default gateway are automatically enabled for cluster communication only (Allow cluster network communication on this network).

    Networks configured with a default gateway are automatically enabled for client and cluster communication (Allow cluster network communication on this network, Allow clients to connect through this network).
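The automatic assignment above maps onto the numeric Role values used by the cluster network Role property (0, 1, and 3, as in the Role = 3 example later). A sketch of that decision, with hypothetical parameter names:

```python
# Sketch of the automatic cluster network role assignment described above:
# iSCSI networks are disabled (0), networks without a default gateway get
# cluster-only communication (1), and networks with a default gateway get
# client and cluster communication (3).
def auto_role(is_iscsi_network, has_default_gateway):
    if is_iscsi_network:
        return 0  # Do not allow cluster network communication
    if has_default_gateway:
        return 3  # Allow cluster and client communication
    return 1      # Allow cluster network communication only

print(auto_role(False, True))  # -> 3
```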

    Manual configuration

    Though the cluster networks are automatically configured while creating the cluster as described above, they can also be manually configured based on the requirements in the environment.

    To modify the network settings for a Failover Cluster:

    · Open Failover Cluster Manager

    · Expand Networks.

    · Right-click the network that you want to modify settings for, and then click Properties.

    · If needed, change the name of the network.

    · Select one of the following options:

    o Allow cluster network communication on this network.  If you select this option and you want the network to be used by the nodes only (not clients), clear Allow clients to connect through this network. Otherwise, make sure it is selected.

    o Do not allow cluster network communication on this network.  Select this option if you are using a network only for iSCSI (communication with storage) or only for backup. (These are among the most common reasons for selecting this option.)

    Cluster network roles can also be changed using the PowerShell cmdlet Get-ClusterNetwork.

    For example:

    (Get-ClusterNetwork "Cluster Network 1").Role = 3

    This configures “Cluster Network 1” to be enabled for client and cluster communication.

    Configuring Quality of Service Policies in Windows 2012/2012R2:

    To achieve Quality of Service, we can either use multiple network cards, or create QoS policies with multiple VLANs.

    Configuring QoS prioritization is recommended on all cluster deployments. Heartbeats and intra-cluster communication are sensitive to latency, and configuring a QoS Priority Flow Control policy helps reduce that latency.

    An example of setting cluster heartbeating and intra-node communication to be the highest priority traffic would be:

    New-NetQosPolicy "Cluster" -Cluster -Priority 6
    New-NetQosPolicy "SMB" -SMB -Priority 5
    New-NetQosPolicy "Live Migration" -LiveMigration -Priority 3


    · Available priority values are 0 – 6

    · Must be enabled on all the nodes in the cluster and on the physical network switch

    · Undefined traffic is priority 0

    Bandwidth Allocation:

    It is recommended to configure a Relative Minimum Bandwidth SMB policy on CSV deployments.

    Example of setting a minimum bandwidth policy of 30% for cluster traffic, 20% for Live Migration, and 50% for SMB traffic of the total bandwidth:

    New-NetQosPolicy "Cluster" -Cluster -MinBandwidthWeightAction 30
    New-NetQosPolicy "Live Migration" -LiveMigration -MinBandwidthWeightAction 20
    New-NetQosPolicy "SMB" -SMB -MinBandwidthWeightAction 50

    Multi-Subnet Clusters:

    Failover Clustering supports having nodes reside in different IP Subnets. Cluster Shared Volumes (CSV) in Windows Server 2012 as well as SQL Server 2012 support multi-subnet Clusters.

    Typically, the general rule has been to have one network per role it provides. Cluster networks should be configured with the following in mind.

    Client connectivity

    Client connectivity is used by the applications running on the cluster nodes to communicate with client systems. This network can be configured with statically assigned IPv4 or IPv6 addresses, or with DHCP-assigned addresses. APIPA addresses should not be used, as such networks will be ignored (the Cluster Virtual Network Adapter uses that address scheme). IPv6 stateless address autoconfiguration can be used, but keep in mind that DHCPv6 addresses are not supported for clustered IP address resources. These networks are also typically routable networks with a default gateway.
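APIPA addresses fall in the IPv4 link-local range 169.254.0.0/16, so they are easy to screen for in tooling. A quick sketch using Python's standard ipaddress module (the function name is hypothetical):

```python
import ipaddress

# Sketch: flag APIPA (IPv4 link-local, 169.254.0.0/16) addresses, which
# should not be used for cluster client connectivity as noted above.
APIPA_RANGE = ipaddress.ip_network("169.254.0.0/16")

def is_apipa(addr):
    ip = ipaddress.ip_address(addr)
    return ip.version == 4 and ip in APIPA_RANGE

print(is_apipa("169.254.10.7"))   # -> True
print(is_apipa("192.168.1.10"))   # -> False
```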

    CSV Network for Storage I/O Redirection.

    You would want this network if you are using the cluster as a Hyper-V cluster with highly available virtual machines. This network is used for the NTFS metadata updates to a Cluster Shared Volume (CSV) file system. These updates should be lightweight and infrequent unless there are communication-related events on the path to the storage.

    In the case of CSV I/O redirection, latency on this network can slow down storage I/O performance, so Quality of Service is important for this network. If a storage path fails between any node and the storage, all I/O from that node is redirected over the network to a node that still has connectivity, so that it can commit the data. All redirected I/O is forwarded via SMB over the network, which is why network bandwidth is important.

    Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks need to be enabled to support Server Message Block (SMB), which is required for CSV. Configuring this network not to register with DNS is recommended, as CSV traffic does not use name resolution. The CSV network uses NTLM authentication for its connectivity between the nodes.

    CSV communication will take advantage of the SMB 3.0 features such as SMB multi-channel and SMB Direct to allow streaming of traffic across multiple networks to deliver improved I/O performance for its I/O redirection.

    By default, the cluster will automatically choose the NIC to be used for CSV; for manual configuration, refer to the following article.

    Designating a Preferred Network for Cluster Shared Volumes Communication

    This network should be configured for Cluster Communications.

    Live Migration Network

    As with the CSV network, you would want this network if you are using the cluster as a Hyper-V cluster with highly available virtual machines. The Live Migration network is used for live migrating virtual machines between cluster nodes. Configure this network as a cluster-communications-only network. By default, the cluster will automatically choose the NIC for Live Migration.

    Multiple networks can be selected for live migration depending on the workload and performance. It will take advantage of the SMB 3.0 feature SMB Direct to allow migrations of virtual machines to be done at a much quicker pace.

    iSCSI Network

    If you are using iSCSI storage and using the network to get to it, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. This network should be disabled for cluster communications so that it is dedicated to storage-related traffic only.

    This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During the creation of the cluster, iSCSI traffic will be detected and the network will be disabled from cluster use. This network should be set to the lowest position in the binding order.

    As with all storage networks, you should configure multiple cards to allow redundancy with MPIO. Network card teaming is now supported with iSCSI in Windows Server 2012 when using the Microsoft in-box teaming drivers.

    Heartbeat communication and Intra-Cluster communication

    Heartbeat communication is used for health monitoring between the nodes to detect node failures. Heartbeat packets are lightweight (134 bytes) and sensitive to latency. If the cluster heartbeats are delayed by a saturated NIC, blocked by firewalls, etc., it could cause a node to be removed from cluster membership.

    Intra-cluster communication is used to update the cluster database across all the nodes whenever there is a cluster state change. Clustering is a distributed, synchronous system, so latency on this network can slow down cluster state changes.

    IPv6 is the preferred protocol for this network, as it is more reliable and faster than IPv4. IPv6 link-local (fe80) addresses work for this network.

    In Windows clusters, heartbeat thresholds are increased by default for Hyper-V clusters.

    The default value changes when the first VM is clustered.


    Cluster Property     |     Hyper-V Default
    Generally, heartbeat thresholds are modified after cluster creation. If there is a requirement to increase the threshold values, this can be done during production hours and will take effect immediately.

    Configuring full mesh heartbeat

    The Cluster Virtual Network Driver (NetFT.SYS) builds routes between the nodes based on the Cluster property PlumbAllCrossSubnetRoutes.

    Value Description

    0     Do not attempt to find cross subnet routes if local routes are found

    1     Always attempt to find routes that cross subnets

    2     Disable the cluster service from attempting to discover cross-subnet routes after a node successfully joins

    To make a change to this property, you can use the command:

    (Get-Cluster).PlumbAllCrossSubnetRoutes = 1

    References for configuring Networks for Exchange 2013 and SQL 2012 on Failover Clusters.

    Exchange server 2013 Configuring DAG Networks.

    Before Installing Failover Clustering for SQL Server 2012

    At TechEd North America 2013, Elden Christensen (Failover Cluster Program Manager) presented a session entitled Failover Cluster Networking Essentials that covers many of these configurations, best practices, etc.

    Failover Cluster Networking Essentials

    S. Jayaprakash
    Senior Support Escalation Engineer
    Microsoft India GTSC

  • Timestamp difference in Windows Explorer FTP folder view

    Good morning AskPerf! Anshuman here with a quick post on timestamp differences in Windows Explorer when accessing FTP sites.


    On Windows 7 (SP1), Windows 8, and Windows 8.1, you can access a folder over FTP by opening an Explorer window and typing the FQDN FTP URL in the address bar. In the FTP folder view, if you open the properties of a file, you may notice that the Modified time in the file's properties does not match the Date modified timestamp displayed in the details view in Explorer.  See screenshot below:


    This behavior can be observed if the machine from which you access the FTP folder in Explorer is set to a time zone other than UTC. In this case the timestamp shown when you open the item's properties is displayed in UTC, while the timestamp under the Date modified column for the same file is shown in the time zone that is set on the machine.

    Please note that both the timestamps are correct. The difference is due to the Time Zone bias added or subtracted from the UTC time.
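    To see why both values are correct, here is a minimal Python sketch (the date and the UTC-05:00 offset are made-up values) showing the same instant rendered with and without the time zone bias:

```python
from datetime import datetime, timezone, timedelta

# Hypothetical file-modified time as the FTP server reports it (UTC).
utc_mtime = datetime(2014, 3, 10, 18, 30, tzinfo=timezone.utc)

# A client machine set to UTC-05:00; Explorer applies this bias to the
# "Date modified" column, while the Properties dialog shows the UTC value.
local_zone = timezone(timedelta(hours=-5))
local_mtime = utc_mtime.astimezone(local_zone)

print(utc_mtime.strftime("%Y-%m-%d %H:%M"))    # 2014-03-10 18:30
print(local_mtime.strftime("%Y-%m-%d %H:%M"))  # 2014-03-10 13:30

# The two renderings describe the same instant.
assert utc_mtime == local_mtime
```

    The two printed strings differ by exactly the time zone bias, yet compare equal as points in time.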

    Additional Resources

    -Anshuman Ghosh

  • Network Isolation of Windows Modern Apps – How Apps work with Akamai Internet Caching Servers in Windows 8/8.1

    Good morning AskPerf! Mario Liu here from the Windows 8/8.1 Support Team. Today I’d like to discuss a Windows Modern App connectivity issue I’ve worked over the past few months.

    We have several Enterprise customers who have deployed Windows 8 and 8.1 in their environments. However, after joining the machine to the domain, we discovered that numerous Windows Modern Apps could not display the contents correctly. For example, if we launched the Weather App, we saw the message “This page failed to load”:


    On the surface, this looks like a network connectivity issue, however other network related programs ran fine. A Network Trace was captured, but we could not find any dropped packets.

    After further troubleshooting, we found that the same Windows 8/8.1 machines worked if they were outside of their corporate network. This gave us a clue that something inside their environment was causing the issue. During one troubleshooting step, we noticed that if the firewall was disabled, the Modern App ran fine. Obviously, turning the firewall off is not recommended.

    After lots of additional troubleshooting steps were done, we discovered that the Akamai Internet Caching Servers had a hand in this issue. The Windows Firewall Service blocked the traffic to Akamai devices, but why would it do this? Why would a Modern App have network connectivity issues to Akamai devices?

    To answer this question, we first need to understand what an Akamai Internet Caching Server is. It is a widely used content-management solution. Its main purpose is to save web content (mostly static) as a cache on the server, so the next time a client machine tries to reach the same website, it actually reaches the cache on the local Akamai server instead of taking a longer path to the real content on the remote web server. This saves a lot of network traffic and speeds up content delivery. The Akamai Internet Caching Server therefore plays a role similar to an Internet proxy server, but from a web content-management perspective: it delivers content to Windows clients on behalf of the real/live web server.

    OK, so now that we understand Akamai's role, let's switch back to the Windows Modern App. In Windows, there is an important feature called Windows Store App Network Isolation. Modern Apps are network isolated depending on the network capabilities the app developer chooses. An enterprise private network is protected and only available to those apps that declare the privateNetworkClientServer capability in the app manifest. Very few Modern Apps in the Microsoft Store work on a private network. Even though users can easily download and install any kind of app, most apps cannot talk to the enterprise network because they can only talk to the public network.

    In most cases, what is a private versus a public internet network is detected automatically, which means the Modern App knows to reach the public internet even when a Windows 8/8.1 machine is running in a corporate (private) environment. But what happens if we bring the Akamai Internet Caching Server into this scene? Instead of reaching the real content on the remote web server via the public internet, the Modern App is forced to talk to the Akamai device, which is on the corporate (private) network. Remember, very few Modern Apps are approved to work on a private network. If an app that is not approved tries to reach the private network, the firewall will block it. Here is a way you can check whether your machine is hitting this problem:

    1. Enable WFP (Windows Filtering Platform) auditing by running the following command via elevated command prompt:

    auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable

    2. Reproduce the issue.

    3. Go to event viewer -> Windows Logs -> Security. You should see some event ID 5152 which means Filtering Platform Packet Drop.

    4. Look at one of these events and you should find information similar to the following. Note: the destination address is the Akamai server’s address in this example.

    The Windows Filtering Platform has blocked a packet.

    Application Information:

    Process ID: 2712

    Application Name: \device\harddiskvolume2\windows\system32\wwahost.exe

    Network Information:

    Direction: Outbound

    Source Address:

    Source Port: 50571

    Destination Address:

    Destination Port: 80

    Protocol: 6

    Filter Information:

    Filter Run-Time ID: 71620

    Layer Name: Connect

    Layer Run-Time ID: 48

    Wwahost.exe is the Modern App host process. As you can see, WFP blocked this traffic with Filter Run-Time ID 71620 because it tried to reach a private IP address. This filter is called Block Outbound Default Rule. If you want to see how to determine a filter's name yourself, run netsh.exe wfp capture start from an elevated command prompt while you reproduce the issue (and netsh.exe wfp capture stop afterwards). This generates a cab file that contains wfpdiag.xml. Open the XML file to correlate the filter name and ID.
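    As an illustration of that correlation step, here is a hedged Python sketch that pulls a filter's display name out of an XML fragment shaped like the filter entries in wfpdiag.xml. The sample XML below is invented and heavily simplified; the real file's schema is much larger, so treat this as a sketch of the idea rather than a parser for the actual file:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment mimicking filter entries in wfpdiag.xml.
sample = """
<wfpdiag>
  <filters>
    <item>
      <filterId>71620</filterId>
      <displayData><name>Block Outbound Default Rule</name></displayData>
    </item>
    <item>
      <filterId>66241</filterId>
      <displayData><name>Allow Outbound Rule</name></displayData>
    </item>
  </filters>
</wfpdiag>
"""

def filter_name(xml_text, run_time_id):
    """Return the display name of the filter whose filterId matches."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        if item.findtext("filterId") == str(run_time_id):
            return item.findtext("displayData/name")
    return None

print(filter_name(sample, 71620))  # Block Outbound Default Rule
```

    The Filter Run-Time ID from the 5152 event is the value you would look up.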

    So, at this point, we know that the firewall is blocking the Modern App's traffic to the Akamai cache. That internet cache, however, is hosted on the private network. The Akamai internet cache essentially serves internet content and needs to be deployed as such.

    There are several solutions. But they can be essentially classified in two types:

    Type 1: Either the Akamai servers must be deployed outside of the private network, or

    Type 2: The Akamai servers must be declared explicitly as proxies who live in the private network

    Microsoft’s recommendation is Type 1, as the enterprise cannot control much of what the Akamai devices do. Putting the device just outside of the corporate firewall, on a different address space or outside of the enterprise subnet, is the better solution. However, if that requires a fundamental change to your infrastructure, the Type 2 solution will also work.

    Below we cover some solutions of how the Akamai Servers can be deployed.

    Solution 1 – Akamai servers on their own IP space.

    Akamai servers are deployed outside of the DMZ and are given an address space different from the one provided to the company. In this scenario the ISP might participate, or the company can simply avoid provisioning the subnet given to Akamai anywhere else.

    Solution 2 – Akamai servers are outside of AD sites and Subnets.

    Akamai servers are given an IP address space that is not included in the AD Sites and Subnets configuration of Active Directory. Arguably it is a security issue for an external device that does not belong to the corporation to be included in AD Sites and Subnets, as that provides it with access the enterprise does not want to grant. Hence this approach is not only an appropriate deployment of network isolation but also an appropriate deployment of AD in general and a good tightening of the network.

    Solution 3 – Akamai servers are removed manually from the private network by using the Network isolation Group Policy controls to override the private network.

    Network Isolation Group Policy controls allow Group Policy admins to add private network subnets. They also allow admins to completely override the private network definitions in AD Sites and Subnets and hence carve out any IP space that should not be considered part of the enterprise's private network.

    Let’s assume your corporate IP address space ranges from to and you have two Akamai servers with IP and

    Here is the example to implement this solution:

    1. Open the Group Policy Management snap-in (gpmc.msc) and edit the Default Domain Policy.
    2. From the Group Policy Management Editor, expand Computer Configuration, expand Policies, expand Administrative Templates, expand Network, and click Network Isolation.
    3. In the right pane, double-click Private network ranges for apps.
    4. In the Private network ranges for apps dialog box, click Enabled. In the Private subnets text box, type the private subnets for your intranet, and remove Akamai servers.

    In this example the values are; in the Private subnets text box. Note we have excluded and since they are Akamai’s.

    5. Double-click Subnet definitions are authoritative, and click Enabled.
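    The subnet list for step 4 can also be derived programmatically. Below is a hedged Python sketch, using made-up corporate and Akamai addresses (the article's actual values are not shown above), that carves the Akamai hosts out of the corporate range with the standard ipaddress module:

```python
import ipaddress

# Hypothetical corporate range and Akamai host addresses; substitute your own.
corporate = ipaddress.ip_network("10.0.0.0/16")
akamai_hosts = [ipaddress.ip_network("10.0.5.10/32"),
                ipaddress.ip_network("10.0.5.11/32")]

# Carve the Akamai hosts out of the corporate space; the result is the
# subnet list to enter in the "Private network ranges for apps" policy.
remaining = [corporate]
for host in akamai_hosts:
    next_round = []
    for net in remaining:
        if host == net:
            continue                      # drop the excluded host outright
        elif host.subnet_of(net):
            next_round.extend(net.address_exclude(host))
        else:
            next_round.append(net)
    remaining = next_round

for net in sorted(remaining):
    print(net)
```

    The printed networks together cover the whole corporate range except the two Akamai addresses.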

    Solution 4 – Akamai servers are configured as proxies by using the Network Isolation Group Policy controls to declare them as proxies.

    The Network Isolation GP controls also allow admins to add internet proxies. It can also be used to completely override the proxy discovery mechanisms provided by default with a preferred proxy definition.

    The enterprise admin can easily use the network isolation GP internet proxy controls to add the Akamai servers as internet content proxies.

    Here is the example to implement this solution:

    1. Open the Group Policy Management snap-in (gpmc.msc) and edit the Default Domain Policy.
    2. From the Group Policy Management Editor, expand Computer Configuration, expand Policies, expand Administrative Templates, expand Network, and click Network Isolation.
    3. Double-click Internet proxy servers for apps. Click Enabled, and then in the Domain Proxies text box, type the IP addresses of your Internet proxy servers, separated by semicolons. In this example the values are;
    4. Double-click Proxy definitions are authoritative. Ensure that the default value, Not Configured, is selected so that these proxies are added to any other discovered HTTP proxies.

    For more information about the feature of Network Isolation of Windows Modern Apps, please refer to:

    Isolating Windows Store Apps on Your Network

    How to set network capabilities


  • Certificate Requirements for Windows 2008 R2 and Windows 2012 Remote Desktop Services

    Good morning AskPerf!  Kiran here with a question for you:  Why do we need certificates?  Well, certificates are used to sign the communication between two machines.  When a client connects to a server, the identity of the server that is receiving the connection and in turn, information from the client, is validated using certificates.

    This is done to prevent possible man-in-the-middle attacks.  When a communication channel is setup between the client and the server, the authority that issues/generates the certificate is vouching for the server to be authentic.

    So, as long as the client trusts the server it is communicating with, the data being sent to and from the server is considered secure.  This brings me to the next question:

    What type of certificate is required for RDS?

    The following blog contains information regarding the type of certificates and how you can create them using the Internal CA of the domain.

    Basic requirements for Remote Desktop certificates:

    1. The certificate is installed into computer’s “Personal” certificate store.
    2. The certificate has a corresponding private key.
    3. The "Enhanced Key Usage" extension has a value of either "Server Authentication" or "Remote Desktop Authentication". Certificates with no "Enhanced Key Usage" extension can be used as well.

    As the function it performs suggests, we need a ‘Server Authentication’ certificate.  This certificate can be generated using the ‘Workstation Authentication’ template (if required).

    Here is the exact process: 

    1. Open CERTSRV.MSC and configure certificates.
    2. Open Certification Authority.
    3. In the details pane, expand the CA computer name.
    4. Right-click Certificate Templates and select Manage. Right-click Workstation Authentication and click Duplicate Template.
    5. On the General tab, change the Template display name to Client-Server Authentication and check Publish certificate in Active Directory.
    6. On the Extensions tab, click Application Policies then Edit. Click Add then select Server Authentication. Click OK until you return to the Properties of New Template dialog.
    7. Click the Security tab. For Domain Computers, click the checkbox to ‘Allow Autoenroll’. Click OK. Close the Certificate Templates Console.
    8. In the certsrv snap-in, right-click Certificate Templates and select New then Certificate Template to Issue.
    9. Select Client-Server Authentication and then click OK.

    This will be visible when viewing the certificate in the ‘Certificates’ MMC snap-in, as below:


    When you open the certificate, the ‘General’ tab will also contain the purpose of this certificate to be ‘Server Authentication’ as seen below:


    Another way to validate this, would be to go to the ‘Details’ section of the certificate and look at the ‘Enhanced Key Usage’ property:


    The easiest way to get a certificate, if you control the client machines that will be connecting, is to use Active Directory Certificate Services.  You can request and deploy your own certificates and they will be trusted by every machine in the domain. 

    If you're going to allow users to connect externally and they will not be part of your domain, you will need to deploy certificates from a public CA.  Examples include, but are not limited to: GoDaddy, VeriSign, Entrust, Thawte, and DigiCert.

    Now that you know what type of certificate you need, let’s talk about the contents of the certificate.

    In Windows 2008/2008 R2, you connect to the farm name, which, per DNS round robin, gets directed first to the redirector, next to the connection broker, and finally to the server that will host your session.

    In Windows 2012, you connect to the Connection Broker and it routes you to the collection by using the collection name. 

    The certificates you deploy need to have a subject name or subject alternate name that matches the name of the server that the user is connecting to.  So for example, for Publishing, the certificate needs to contain the names of all of the RDSH servers in the collection.  The certificate for RDWeb needs to contain the FQDN of the URL, based on the name the users connect to.  If you have users connecting externally, this needs to be an external name (needs to match what they connect to).  If you have users connecting internally to RDweb, the name needs to match the internal name.  For Single Sign On, again the subject name needs to match the servers in the collection.

    For our example, let’s consider my RDS deployment to contain the following machines:

    RDSH1.RENDER.COM                 Session Host with Remote Apps configured

    RDSH2.RENDER.COM                 Session Host with Remote Apps configured

    RDVH1.RENDER.COM                Virtualization host with VDI VMs configured

    RDVH2.RENDER.COM                Virtualization host with VDI VMs configured

    RDCB.RENDER.COM                   Connection Broker

    RDWEB.RENDER.COM               RDWeb and Gateway server

    When my client connects internally, he will enter the FQDN of the server that hosts the web page, i.e., RDWEB.RENDER.COM.

    The name of the certificate needs to be this name, of the URL that the user will initiate the connection to.  But we need to remember that the connection does not just end here.  The connection then flows from the web server to one of the session hosts or virtualization hosts and also the connection broker.

    The certificate can be common on all of these servers.  This is why we recommend that the Subject Alternate Name of the certificate contain the names of all the other servers that are part of the deployment.

    In short, the certificate for my environment would be as follows:

    Type: Server Authentication



    This is all you need as long as you have 5 or fewer servers in the deployment. But we have a problem when there are more servers in the deployment, because by design the SAN (Subject Alternate Name) field on a certificate can only contain 5 server names. If you have more, you will have to get a wildcard certificate issued to cover all the servers in the deployment. Here my certificate changes as follows:

    Type: Server Authentication



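    The choice between listing every server in the SAN and using a wildcard comes down to whether one pattern covers all the deployment names. Here is a small Python sketch of the wildcard matching rule (a simplification for illustration; the actual certificate name check performed by the client is more involved):

```python
def cert_name_matches(cert_name: str, host: str) -> bool:
    """Exact match, or a wildcard covering exactly one leftmost label:
    *.render.com matches rdsh1.render.com but not a.b.render.com."""
    cert_name, host = cert_name.lower(), host.lower()
    if cert_name == host:
        return True
    if cert_name.startswith("*."):
        suffix = cert_name[1:]                 # ".render.com"
        return host.endswith(suffix) and "." not in host[: -len(suffix)]
    return False

# All six deployment servers are covered by one wildcard certificate:
servers = ["rdsh1.render.com", "rdsh2.render.com", "rdvh1.render.com",
           "rdvh2.render.com", "rdcb.render.com", "rdweb.render.com"]
print(all(cert_name_matches("*.render.com", s) for s in servers))  # True
```

    Note that the wildcard only spans one DNS label, which is why it works for a flat deployment like the one above.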
    We still do encounter some challenges when it comes to the following scenario. Note, that this is true only when you have external users that access the deployment.

    External name:

    Internal Name: RDWEB.RENDER.local

    Here, if you get a certificate with RDWEB.RENDER.COM in the name, the certificate errors still do appear.  This is because the certificate is supposed to validate a server with the FQDN: ‘RDWEB.RENDER.COM’.  However, your server is ‘RDWEB.RENDER.LOCAL’ and the ‘.com’ to ‘.local’ magic only happens at your public firewall/router using port forwarding (most common scenario).

    In such scenarios, we previously recommended that the name on the certificate contains the ‘.com’ name and the SAN contains the ‘.local’ name.

    Recently, public certificate providers have stopped issuing certificates with ‘.LOCAL’ names in them. Fortunately, starting with Windows 8 and Windows Server 2012, we no longer need both the external and internal names to be contained in the certificate.

    In scenarios where you have external clients connecting in and you have a private internal domain suffix (DOMAIN.LOCAL), you can get a certificate from a Public CA with the external (RDWEB.DOMAIN.COM) name and bind it to the RD Web Access and RD Gateway roles, because these are the only roles that are exposed to the internet.  For RD Connection Broker – Publishing and RD Connection Broker – Enable Single Sign On, you can make use of an internal certificate with the ‘DOMAIN.LOCAL’ name on it.  This however, as mentioned earlier, will only work with clients connecting through RDC 8.0 or above.

    The RD Gateway and Remote Desktop Client version 8.0 (and above) provides the external users with a secure connection to the deployment. Once connected to the deployment, the internal certificate with the ‘.local’ name will take care of Remote App signing (publishing) and Single Sign-On.

    Now, let’s look at where we configure the certificate we have:

    Open the Server Manager on the Connection Broker server and Click on Remote Desktop Services in the left-most pane.

    Once here, you will see your deployment shown as in the illustration below. Click on Tasks and select “Edit Deployment Properties”


    This will bring up the property sheet of the deployment. Select the Certificates option in the left pane:


    Now, as discussed earlier, you can select the certificate that was created using the ‘Select Existing Certificate’ button on the bottom of the screen.

    Just point it to the ‘.pfx’ file and allow it to import the certificate for the role.

    You can use a single certificate for all the roles, if your clients are internal to the domain only, by generating a simple wildcard certificate (*.RENDER.LOCAL) and binding it to all the roles.

    Note, that even if you have multiple servers that are part of this deployment, the Server Manager will import the certificate to all the servers in the deployment, place them in the trusted root of the machines and bind them to the respective roles.

    -Kiran Kadaba

  • WMI: Repository Corruption, or Not?


    Windows Management Instrumentation failing due to repository being corrupted

    The WMI repository (%windir%\System32\Wbem\Repository) is the database that stores meta-information and definitions for WMI classes; in some cases the repository also stores static class data. If the repository becomes corrupted, the WMI service will not be able to function correctly.

    Before grabbing that proverbial hammer and just rebuilding your repository, ask yourself, “Is the WMI repository OK?”

    Common symptoms that lead to this question are: provider load failure, access denied, class not found, invalid namespace, and namespace not found to mention a few.

    If you suspect WMI or repository corruption, rebuilding the repository is the last thing you should do without verifying that this is truly the case. Deleting and rebuilding the repository can cause damage to Windows or to installed applications. Other steps should be taken first to eliminate other possibilities or to confirm repository corruption. Note that an overly large repository also creates problems, an issue that is sometimes interpreted as a corrupt repository when it is not. If issues are due to a large repository, rebuilding the repository is currently the only method available to reduce it back to a working size.

    Since I mentioned “large repository”, let me set some guidelines up front. There is no hard and fast number, per se, for when you will start feeling performance problems with a large repository. As a guideline, if the repository, located in %windir%\System32\Wbem\Repository, is 1 GB or larger, then I recommend rebuilding it to reduce it back down to a working and manageable size. If the size is between 600-900 MB and you are not seeing any noticeable performance issues, then I recommend against rebuilding the repository.
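    Those guidelines can be expressed as a tiny decision helper. This is a hypothetical sketch, not a Microsoft tool; in practice you would feed it the size of the repository database file under %windir%\System32\Wbem\Repository:

```python
def repository_size_advice(size_bytes: int) -> str:
    """Map a repository size onto the guideline described above:
    >= 1 GB: rebuild; 600-900 MB range: leave alone unless slow."""
    size_mb = size_bytes / (1024 * 1024)
    if size_mb >= 1024:
        return "rebuild recommended"
    if size_mb >= 600:
        return "leave alone unless performance suffers"
    return "size is fine"

print(repository_size_advice(2 * 1024 ** 3))    # rebuild recommended
print(repository_size_advice(700 * 1024 ** 2))  # leave alone unless performance suffers
print(repository_size_advice(50 * 1024 ** 2))   # size is fine
```

    The thresholds are the ones from this article; they are rules of thumb, not hard limits.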

    If WMI is corrupted, you can receive various errors and symptoms, depending on what activity was being done at the time. Below are a few errors and symptoms that could indicate that the repository is corrupted:

    1. Unable to connect to root\default or root\cimv2 namespaces. Fails returning error code 0x80041002 pointing to WBEM_E_NOT_FOUND.
    2. When you open Computer Management, right-click Computer Management (Local), and select Properties, you get the following error: "WMI: Not Found", or it hangs trying to connect
    3. 0x80041010 WBEM_E_INVALID_CLASS
    4. Trying to use wbemtest, it hangs
    5. Schemas/Objects missing
    6. Strange connection/operation errors (0x8007054e):

    get-cimclass : Unable to complete the requested operation because of either a catastrophic media failure or a data structure corruption on the disk.

    At line:1 char:1

    + get-cimclass -Namespace root\cimv2\TerminalServices

    + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    + CategoryInfo : NotSpecified: (:) [Get-CimClass], CimException

    + FullyQualifiedErrorId : HRESULT 0x8007054e,Microsoft.Management.Infrastructure.CimCmdlets.GetCimClassCommand

    Check the Windows Application log for events in the past week where Source = Microsoft-Windows-WMI.  Look for any of the following WMI event IDs: 28, 65, 5600, 5601, 5614. Any of these could indicate a WMI repository issue or core infrastructure problem.

    If you do not find any of these events logged, your next action is to use the built-in repository checker. From an elevated command prompt, run "winmgmt /verifyrepository". If the repository has an issue, it will respond "repository is not consistent".
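    If you are scripting this check across many machines, the command's output can be classified with a small helper. A hedged Python sketch follows; the strings are the English ones quoted in this article, and other locales or OS versions may word the output differently:

```python
def repository_consistent(verify_output: str) -> bool:
    """Classify the text printed by `winmgmt /verifyrepository`.
    Assumes English output as quoted in this article."""
    text = verify_output.lower()
    if "not consistent" in text or "inconsistent" in text:
        return False
    return "consistent" in text

print(repository_consistent("WMI repository is consistent"))      # True
print(repository_consistent("WMI repository is not consistent"))  # False
```

    On Windows you would capture the command's stdout (for example via subprocess) and pass it to this function.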

    If repository check comes back as “consistent”, then look at my other Ask Perf blogs for applicability:

    WMI: Missing or Failing WMI Providers or Invalid WMI Class (COMING SOON)

    WMI: High Memory Usage by WMI Service or Wmiprvse.exe (COMING SOON)

    How to troubleshoot High CPU Usage by WMI Components (COMING SOON)

    WMI Self-Recovery

    When the WMI service restarts or detects repository corruption, a self-recovery procedure triggers automatically, using one of two approaches:

    1. AutoRestore: if the VSS backup mechanism has taken timestamped snapshot backup images of the repository (for example, via the Windows 7 Previous Versions feature), WMI restores the most recent valid backup image from the version queue (if possible).
      • Events: 65 (restore started) / 66 (successfully recovered, with the VSS path)
    2. AutoRecovery: a rebuild process generates a fresh repository based on the registered MOFs (listed at HKLM\Software\Microsoft\WBEM\CIMOM: Autorecover MOFs).
      • Events: 5616 (recovery complete); you may also see many event 63 warnings about LocalSystem registration of providers.

    Note: Under almost no circumstance should you use a script that rebuilds the WMI repository by recompiling the MOF files.

    The script is inherently flawed, for 2 reasons:

      1. If you navigate to the %systemroot%\system32\wbem folder and list the MOF files, you will find MOFs named (some provider name)_uninstall.mof. When you mofcomp those, they remove the classes in the MOF. The script mofcomps everything, so it can very easily install and then immediately uninstall the classes, leaving the classes inaccessible.
      2. Replaying MOFs is often sequence dependent. For example, classes in mof1 can depend on or have associations with classes in mof2. If those aren't present, MOFCOMP will not insert the classes. It is extremely difficult to know the right sequence, so any script that simply mofcomps everything is not going to be fully successful.

    In addition to causing damage to your system that's almost impossible to fix correctly, if you take that approach you will blow away all information that could be used to determine the root-cause.

    If the repository check (winmgmt /verifyrepository) comes back as inconsistent, your first action is to run “winmgmt /salvagerepository”, followed by running “winmgmt /verifyrepository” again to see if it now comes back as consistent.

    If it still comes back inconsistent, then you need to run “winmgmt /resetrepository”. Before running this, please read the important note below for Server 2012.

    Force-recovery process: rebuild based on the registry list of Autorecover MOFs


    1. Check whether the registry value 'Autorecover Mofs' at HKLM\Software\Microsoft\WBEM\CIMOM is empty (note: on some OSs the first line is empty; open the registry value to review it)
    2. If the above registry value is empty, copy/paste the value from another machine running the same System/OS as the suspect machine
    3. Run the following command from a command prompt with admin rights: “winmgmt /resetrepository”
    4. If you get the error noted below, stop all services dependent on the WMI service by running the following command:

    net stop winmgmt /y

    then run

    winmgmt /resetrepository

    WMI repository reset failed
    Error code:     0x8007041B
    Facility:       Win32
    Description:    A stop control has been sent to a service that other running services are dependent on

    NOTE: Applies to Server 2012

    We have encountered some issues when running the mofcomp command on Windows Server 2012 that caused the Cluster namespace to be removed, due to the cluswmiuninstall.mof contained in the c:\windows\system32\wbem folder. It has also caused the unregistering of class names for Cluster Aware Updating (CAU) because of its cauwmiv2.mof, also in the wbem folder. This could also affect other namespaces that have an uninstall-type .mof in the wbem folder beyond the two mentioned above.

    Furthermore, on servers running Microsoft Cluster, the uninstall .mof files are also part of the autorecover list used when you run the winmgmt /resetrepository command. This ends up having the same effect of first installing the Cluster namespace and then uninstalling it, just as if you had run a rebuild script containing a “for” command to recompile all of the MOFs in the WBEM folder.

    Take the following actions to confirm whether the uninstall problem for this scenario exists on your server. If it doesn’t, then you can run winmgmt /resetrepository; otherwise, follow my directions below for manually accomplishing the rebuild.

    1. Open regedit and navigate to HKLM\Software\Microsoft\WBEM\CIMOM, then open “Autorecover MOFs”
    2. Copy the data from that string value and paste it into Notepad
    3. Do a search for ClusWmiUninstall.mof. If the cluster provider uninstall has an autorecover entry, it will be listed here
    4. If found, continue to the manual rebuild below; if not found, go ahead and use the winmgmt /resetrepository command
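    The registry check in the steps above can be sketched in PowerShell ('Autorecover MOFs' is a multi-string value, so the match is evaluated per line):

    ```powershell
    # Sketch: check whether the cluster uninstall MOF is in the autorecover list.
    $mofs = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\WBEM\CIMOM').'Autorecover MOFs'
    if ($mofs -match 'ClusWmiUninstall\.mof') {
        Write-Host 'Found ClusWmiUninstall.mof - use the manual rebuild steps.'
    } else {
        Write-Host 'Not found - winmgmt /resetrepository should be safe.'
    }
    ```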

    How to manually rebuild repository on Server 2012 Cluster machine when cluster provider uninstall has an autorecover

    First ensure you have run winmgmt /verifyrepository to confirm that it is “inconsistent” and that you have tried winmgmt /salvagerepository to see if it resolves your issue.

    Change the startup type for the Windows Management Instrumentation (WMI) Service to disabled.


    1. Stop the WMI Service; you may need to stop services dependent on the WMI Service first before it allows you to successfully stop the WMI Service
    2. Rename the repository folder:  C:\WINDOWS\system32\wbem\Repository to Repository.old
    3. Open a CMD Prompt with elevated privileges
    4. Cd windows\system32\wbem
    5. Run following command to re-register all the dlls: for /f %s in ('dir /b /s *.dll') do regsvr32 /s %s
    6. Set the WMI Service type back to Automatic and restart WMI Service
    7. cd /d c:\  ((go to the root of the c drive, this is important))
    8. Run the following command, specifically adapted for 2012 clustered servers, to recompile the MOFs: “dir /b *.mof *.mfl | findstr /v /i uninstall > moflist.txt & for /F %s in (moflist.txt) do mofcomp %s”
    9. Restart WMI service

    As a final note, if you run into a recurring corruption issue with WMI in your environment, exclude the WBEM folder and all subfolders under it from AV scanning. AV scanning is known to cause corruption and other issues in WMI.

    Other repository recovery solutions:

    Note: in the following solutions (1 & 2), if the backup images (repository) are large (>100 MB), restoring the repository will take some time.

    1. Use the WMI AutoRestore feature on the system to recover the repository image quickly and keep it in sync with a previous state.
    2. Enable VSS backup-related features for storing image snapshots
      • e.g. Volume Shadow Copy (VSS), or check for any valid copies listed under Local Disk (C:) Properties >> Shadow Copies
    3. Make sure registry key has following setting: HKLM\Software\Microsoft\WBEM\CIMOM: AutoRestoreEnabled=1
    4. Frequently snapshot the restore-points of the system (if needed, refer to the following PowerShell scripts)

    # Build a WMI filter for the system drive volume
    $filterStmt = [String]::Format("DriveLetter='{0}'", $env:SystemDrive)

    # Get system drive volume info
    $vlm = Get-CimInstance Win32_Volume -Filter $filterStmt

    # Create a shadow copy of that volume
    $res = Invoke-CimMethod -ClassName Win32_ShadowCopy -MethodName Create -Arguments @{Volume=$vlm.DeviceID}

    if ($res.ReturnValue -eq 0)
    { Get-CimInstance Win32_ShadowCopy -Filter ("ID='" + $res.ShadowID + "'") } # **
    else
    { $res | Format-Custom }

      • AutoRestore only searches the top 3 queued snapshots for the latest valid backup; if no valid one is found, AutoRecovery will apply.
      • To restore other snapshots in the queue (manually):
        1. On Server SKUs, look at the 'Previous Versions' tab of the repository folder to find the expected backup path 
        2. Stop the WMI service: Net stop winmgmt /y
        3. Replace all of the files in the %windir%\system32\wbem\Repository folder with the files from the backup path found in step 1

    Note: The WMI Service is set to start automatically, and if it comes back alive, you will not be able to replace the files.  The service needs to be in a stopped state (if the WMI Service is running at the time, repeat steps 2-3)

    ex. Directory of \\localhost\C$\@GMT-2014.03.13-01.02.49\Windows\system32\wbem\repository

    03/11/2014  11:53 AM    <DIR>          .
    03/11/2014  11:53 AM    <DIR>          ..
    03/12/2014  05:30 PM         4,759,552 INDEX.BTR
    03/12/2014  05:30 PM            90,640 MAPPING1.MAP
    03/12/2014  03:26 PM            90,640 MAPPING2.MAP
    03/12/2014  05:24 PM            90,640 MAPPING3.MAP
    03/12/2014  05:30 PM        27,541,504 OBJECTS.DATA

    4. Run the following wmic command to bring the WMI service back to life: wmic os

    You should set up a regular Scheduled Task to back up the latest repository:

        • winmgmt /backup <backup image path>
        • tracing EVT: 67,68

    You could also schedule restores as necessary

    • winmgmt /restore <above backup image path> 1
    • tracing EVT: 65,66
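    As a sketch of such a Scheduled Task created from PowerShell (the backup path and 2 AM schedule are assumptions; substitute your own):

    ```powershell
    # Sketch: daily task that backs up the WMI repository (run elevated).
    # The backup path D:\WmiBackup\repository.bak is an assumption.
    $action  = New-ScheduledTaskAction -Execute 'winmgmt.exe' -Argument '/backup D:\WmiBackup\repository.bak'
    $trigger = New-ScheduledTaskTrigger -Daily -At 2am
    Register-ScheduledTask -TaskName 'WMI Repository Backup' -Action $action -Trigger $trigger -RunLevel Highest
    ```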

    If the issue is not a repository issue, and the objects are not retrievable:

    • Re-install the product. This is the first place to start.
    • If there is a specific provider that is not showing up, you can re-run mofcomp of a single provider. See Ask the Performance Team Blog article WMI: Missing or Failing WMI Providers or Invalid WMI Class (COMING SOON)

    If the issue persists or keeps returning, you will need to open a Support Incident Case with Microsoft for further assistance.

    Reference this blog when you open the Support Incident Case with Microsoft, as it will help the engineer understand what actions have been taken or followed, and will help us track the effectiveness of the blog.

    Next up: WMI: Missing or Failing WMI Providers or Invalid WMI Class

  • Adding shortcuts on desktop using Group Policy Preferences in Windows 8 and Windows 8.1

    Hi All! My name is Saurabh Koshta and I am with the Core Team at Microsoft. Currently I work in the client space so supporting all aspects of Windows 8 and Windows 8.1 is my primary role. We very often get calls from customers who are evaluating more
  • LIVE: Microsoft Virtual Federal Forum

    For the first time, the 2014 Virtual Federal Forum will be streamed LIVE from the Reagan Center in Washington, DC! This online digital experience will be completely hybrid, focused on Real Impact for a Lean and Modern Federal Government, and will showcase innovative and cost-effective solutions to unleash greater capabilities within agencies, while helping simplify and modernize processes. The Virtual Federal Forum is designed exclusively for the Federal government community, providing the opportunity to hear from Microsoft executives, thought leaders, and strategic partners.  Virtual attendees get bonus material not available to the in-person audience and have the ability to download related session materials, take live polls and surveys, share ideas, and ask questions of experts and executives through Chat, Twitter, and Q&A sessions.

    Date: Tuesday, March 4th
    Time: 8am EST – 2:30pm EST

    Agenda Highlights:

    · Keynote speaker The Honorable Tom Ridge, Former Secretary of the U.S. Department of Homeland Security, will be speaking on The Global Mission to Secure Cyberspace and will be available to virtual attendees for Live Q&A.

    · Hear from top government agencies in a special customer panel: Veterans Affairs, the U.S. Navy, and the Environmental Protection Agency discuss real-world lessons learned and technology innovations.

    · Learn how to leverage a next generation mobile workforce for a 21st Century government with live demos and best practices from Jane Boulware, VP US Windows.

    · The Senior Director of Microsoft’s Institute for Advanced Technology in Governments talks “Rethinking cyber defense…lessons learned from Microsoft’s own experience.”

    Other featured speakers include:

    Greg Myers - Vice President: Federal
    Walter Puschner - Vice President: User Experience IT
    Vaughn Noga - Acting Principal Deputy Assistant Administrator for Environmental Information
    Captain Scott Langley - USN, MCSE CEH CISSP, Commander Navy Reserve Forces Command N6/CTO
    Maureen Ellenberger - Veterans Relationship Management, Program Manager, Veteran Affairs
    Dave Aucsmith - Microsoft’s Institute for Advanced Technology in Government

    Register for this event using this unique URL.

    Thank you in advance and we look forward to your participation at the Virtual Federal Forum!

  • Unable to launch Cluster Failover Manager on any node of a 2012/2012R2 Cluster

    When Failover Cluster Manager is opened to manage a Cluster, it will contact all the nodes and retrieve Cluster configuration information using WMI calls. If any one of the nodes in the Cluster does not have the cluster namespace "root\mscluster" in WMI, Failover Cluster Manager will fail and give one of the below errors:



    Unfortunately, it does not give any indication of which node is missing the WMI namespace.  One of the ways you can check to see which one has it missing is to run the below command on each node of the Cluster.

    Get-WmiObject -namespace "root\mscluster" -class MSCluster_Resource

    It can be a bit tedious and time consuming if you have quite a few nodes, say 64 of them.  The below script can be run on one of the nodes; it will connect to all the other nodes and check to see if the namespace is present.  If it is, the query will succeed.  If the namespace does not exist, it will fail.

    Set-ExecutionPolicy Unrestricted

    # Load the Failover Clustering module
    Import-Module FailoverClusters
    Write-Host "Imported Cluster module"

    Write-Host "Getting the cluster nodes..." -NoNewline
    $nodes = Get-ClusterNode
    Write-Host "Found the below nodes"
    Write-Host ($nodes -join " ")
    Write-Host ""
    Write-Host "Running the WMI query...."
    Write-Host ""
    ForEach ($node in $nodes)
    {
        Write-Host -NoNewline $node

        if ($node.State -eq "Down")
        {
            Write-Host -ForegroundColor White " : Node down, skipping"
            continue
        }

        try
        {
            # Query the cluster WMI namespace on the remote node
            $result = (Get-WmiObject -Class "MSCluster_Cluster" -Namespace "root\MSCluster" -Authentication PacketPrivacy -ComputerName $node -ErrorAction Stop).__SERVER
            Write-Host -ForegroundColor Green " : WMI query succeeded"
        }
        catch
        {
            Write-Host -ForegroundColor Red -NoNewline " : WMI query failed "
            Write-Host "//" $_.Exception.Message
        }
    }


    In the below example, you can see that one of the nodes failed.

    To correct the problem, you would need to run the below from an administrative command prompt on the "failed" node(s).

    cd c:\windows\system32\wbem
    mofcomp.exe cluswmi.mof

    Once the Cluster WMI has been added back, you can successfully open Failover Cluster Management.  There is no restart of the machine or the Cluster Service needed.

    Now, the next question you may have is, "well, how did I get this way in the first place?"  The answer is actually a command from the old days to "fix" the WMI repository.  In earlier days, if there was a problem with WMI, they would change to the above directory and run mofcomp.exe *.mof.  This takes all the .MOF (Managed Object Format) files in the directory and recompiles them.  The problem with this command is it does "all" of them. 

    When you install Roles and Features that utilize WMI, there is a .MOF file to add itself to the repository.  There is also an uninstall .MOF file to remove itself if the role/feature is removed.  When you run with the *.mof wildcard, it could run the install first and the uninstall second.  So you are basically removing namespaces while trying to fix a problem.  Cluster is one of the ones that has an uninstall file.  To correct it, you have to run the command above.  Since there are multiple uninstall files for other roles/features, you may need to run their install .MOF files as well.

    The proper way of rebuilding the WMI repository is with the use of WINMGMT.EXE.


    WMI Troubleshooting: The Repository on Vista / Server 2008

    Note: The blog above is titled for Windows 2008, but does apply to Windows 2012/2012R2 as well.

    Shasank Prasad
    Senior Support Escalation Engineer
    Microsoft Corporation

  • XP Support coming to an End soon…

    Hello AskPerf!  I’m sure you already know this, but if you don’t, XP Support Ends on April 8th, 2014.  That is 21 days from this post.  Below are a couple of links that will give you more information on moving forward:


    <SNIP from the second link above>

    As a result, after April 8, 2014, technical assistance for Windows XP will no longer be available, including automatic updates that help protect your PC. Microsoft will also stop providing Microsoft Security Essentials for download on Windows XP on this date. (If you already have Microsoft Security Essentials installed, you will continue to receive antimalware signature updates for a limited time, but this does not mean that your PC will be secure because Microsoft will no longer be providing security updates to help protect your PC.)

    If you continue to use Windows XP after support ends, your computer will still work but it might become more vulnerable to security risks and viruses. Also, as more software and hardware manufacturers continue to optimize for more recent versions of Windows, you can expect to encounter greater numbers of apps and devices that do not work with Windows XP.

    </END SNIP>

    Windows main landing page

    -Blake Morrison

  • RAP as a Service (RaaS) from Microsoft Services Premier Support

    In this post, I’m excited to discuss a new Premier Support offering called Risk Assessment Program (RAP) as a Service (or RaaS for short).

    For those that are not familiar with RAP, it is a Microsoft Services Premier Support offering that helps prevent serious issues from occurring by analyzing the health and risks present in your current environment.

    For example: if you haven’t done a WDRAP (Windows Desktop RAP) and your end-users are suffering slow boot times, slow logon times, slow file copy, hung applications, and applications crashing, it could help! A WDRAP assesses your current environment and recommends changes which improve the Windows user experience.

    Our new RAP as a Service offering helps accelerate the process of diagnosis and reporting, using our RaaS online service.

    Q: So what is Microsoft RAP as a Service (RaaS)?
    A:  RaaS is an evolution of the Risk Assessment Program offering.

    • RaaS is a way of staying healthy, proactively.
    • It’s secure and private.
    • The data is collected remotely.
    • We analyze against best practices established by knowledge obtained from Microsoft IT, and over 20,000 customer assessments.
    • It enables you to view your results immediately.

    You can also take a look at this video describing RAP as a Service:


    Microsoft RAP as a Service

    Q:  What are the benefits of RaaS over a RAP?
    A:  The benefits are:

    • Online delivery with a Microsoft accredited engineer.
    • A modern best-practices toolset that allows you to assess your environment at any time and includes ongoing updates for a full year.
    • You get immediate on-line feedback on your environment.  Just run the straightforward toolset and you’ll garner instant insight into your environment.
    • Easily share results with your IT staff and others in your organization.
    • You can reassess your environment to track remediation and improvement progress.
    • Reduced resource overhead requirements.  There’s no need to take your people away from their other work for multiple days, nor do they need to travel to the location where the work is being performed.
    • Better scheduling flexibility.  Due to the agile structure of the RaaS service offering, turnaround times to get a Microsoft accredited engineer to review your environment are much shorter.
    • Better security.  While both offerings are highly secure, RaaS has the added benefit of including no intermediary steps in the assessment process.
    • RaaS includes remediation planning, which helps you understand what’s required to get your environment optimally healthy.
    • A broader toolset that is continually enhanced.   For example, RaaS for Active Directory includes assessment checks that were previously available as two separate service offerings: an Active Directory RAP and Active Directory Upgrade Assessment Health Check.  These are combined in the Active Directory RaaS. RaaS also includes additional new tests such as support for Windows Server 2012.

    Q:  What technologies can be assessed using RaaS?

    … and others coming soon, such as Hyper-V and more.  Please contact your Microsoft Premier Support Technical Account Manager for further info on availability.

    Q:  I can’t wait until the releases of the other technologies!
    A:  In the meantime, you can still request a RAP for those technologies until these are released with RaaS.

    Q:  Is RaaS currently available for non-Premier Support customers?
    A:  Not at this time. To find out more about Premier Support, please visit Microsoft Services Premier Support  

    Q:  Do I use the RaaS service for my environment before or after going into production?
    A:  Both. We highly recommend you test your environment before going live using RaaS.  We also recommend using RaaS after you go in production, because changes between test and production are inevitable.

    Q:  What are the system requirements for a RaaS?
    A:  The Microsoft Download Center has a detailed description of RAP as a Service (RaaS) Prerequisites.

    Q:  How do I schedule a RaaS?
    A:  Talk to your Microsoft Premier Support Technical Account Manager (TAM) or Application Developer Manager (ADM), and they can schedule the RaaS.

    Q:  Where would I go to sign-in for the RaaS?
    A:  You browse to the Microsoft Premier Proactive Assessment Services site and enter your credentials. The packages will be waiting for you to download and start running.

    Q:  I’m in a secure environment; we cannot access external websites.
    A:  It’s alright! We have a portable version for your needs.

    Q:  Does it take a lot of ramp-up time to get familiar with the toolset?
    A:  No, the package is wizard-driven for ease of use.

    Q:  Do I need to have any down time?
    A:  No, the data collection is non-invasive, so no scheduled downtime is required. Collect the data on your own schedule.

    Q:  OK, I collected the data, now what are my next steps?
    A:  Once data collection is complete, you can submit the data privately and securely to the Microsoft Premier Proactive Assessment Services site for analysis.

    Q:  When do I get to see my results?
    A:  We (the accredited Microsoft engineers) will analyze and annotate the report for your specific environment.  Once you receive the report back, we will set up a conference call to go over the findings with your staff.

    Q:  How long is the report available for us?
    A:  The report is available online for twelve months so you can continue remediating any issues/problems.

    Q:  Can I re-run the RaaS toolset?
    A:  Yes, you get to re-collect the data, submit the data again and get the detailed analysis back for a whole year, as a Premier customer.

    Q:  Can I still have Microsoft Premier Field Engineers come on-site?
    A:  Yes, we still have that option available to assist you! Regular RAPs are still available.

    Thank you and I hope you found this useful and something you can take advantage of.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Global Business Support

  • Windows XP support ending April 8, 2014

    I’m sure you already know this, but if you don’t, XP Support Ends on April 8th, 2014.  That is 21 days from this post.  Below are a couple of links that will give you more information on moving forward:


    <SNIP from the second link above>

    As a result, after April 8, 2014, technical assistance for Windows XP will no longer be available, including automatic updates that help protect your PC. Microsoft will also stop providing Microsoft Security Essentials for download on Windows XP on this date. (If you already have Microsoft Security Essentials installed, you will continue to receive antimalware signature updates for a limited time, but this does not mean that your PC will be secure because Microsoft will no longer be providing security updates to help protect your PC.)

    If you continue to use Windows XP after support ends, your computer will still work but it might become more vulnerable to security risks and viruses. Also, as more software and hardware manufacturers continue to optimize for more recent versions of Windows, you can expect to encounter greater numbers of apps and devices that do not work with Windows XP.

    </END SNIP>

    Windows main page

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Global Business Support

  • Our UK Windows Directory Services Escalation Team is Hiring – Support Escalation Engineers.

    Hi! It's Linda Taylor here again from the Directory Services Escalation team in the UK. In this post, I want to tell you – We are hiring in the UK!! Would you like to join the UK Escalation Team and work on the most technically challenging and more
  • Creating bootable USB drive for UEFI computers

    In today’s blog I am going to discuss how to create a multi-partition bootable USB drive for use with UEFI-based computers.

    It is common to create bootable USB flash drives or hard drives so you can boot from them to do various tasks such as:

    • Boot Windows PE (WINPE) for recovery purposes
    • Boot Windows PE (WINPE) to deploy an image
    • Boot Microsoft Deployment Toolkit media deployment share

    UEFI-based systems such as the Surface Pro require that the boot files reside on a FAT32 partition.  If the partition is not FAT32, the system may not see the device as bootable. 

    FAT32 has a 4GB individual file size limitation and a 32GB maximum volume size when formatted with the in-box Windows tools.  If any of your files are larger than 4GB you may have to configure the drive differently.  Consider if you are booting Windows PE 4.0 and want to deploy a custom image using Dism.exe where the size of the image is 8GB.  You would not be able to store the image on the FAT32 partition. 

    To get around this you have to create multiple partitions on the drive.  Most flash drives report themselves as Removable, but to create multiple partitions the drive must report itself as Fixed.  If you have access to a Windows To Go (WTG) certified drive you can use it, since a requirement for WTG is that the device reports as Fixed.  Some USB hard drives, like the Western Digital Passport, report themselves as Fixed also. 

    To verify whether the drive is reporting itself as fixed or removable, plug the drive in and open My Computer:

    • Drive shows up under “Hard Disk Drives”:  Fixed
    • Drive shows up under “Devices with Removable Storage”:  Removable

    To create a USB drive with multiple partitions, use the following steps:

    1. Open elevated cmd prompt
    2. Type in Diskpart and hit enter
    3. Type in the following commands:

    List disk
    Sel disk X (where X is the disk number of your USB drive from the list)
    Create part primary size=2048
    Format fs=fat32 quick Label=”Boot”
    Create part primary
    Format fs=ntfs quick Label=”Deploy”

    Note:  You can choose different sizes and volume labels depending on your needs
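    If you prefer, the same commands can be saved to a text file and run non-interactively with diskpart /s. A sketch, assuming the USB drive is disk 1 (confirm with List disk first; the clean command is an addition that wipes the drive, needed only if it already has partitions):

    ```text
    rem usb.txt - run with: diskpart /s usb.txt
    select disk 1
    clean
    create partition primary size=2048
    format fs=fat32 quick label="Boot"
    create partition primary
    format fs=ntfs quick label="Deploy"
    ```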

    At this point you can now copy your boot files to the FAT32 partition and your other files to the NTFS partition. 

    In the earlier example, you would copy the contents of your custom Windows PE (WINPE) 4.0 files in C:\winpe_amd64\media to the FAT32 partition and your custom install.wim to the NTFS partition.

    Hope this helps with your deployments!

    Scott McArthur
    Senior Support Escalation Engineer
    Microsoft Commercial Services & Support

  • Managing the Store app pin to the Taskbar added in the Windows 8.1 Update

    Warren here, posting with more news regarding the Windows 8.1 Update. Among the many features added by Windows 8.1 Update is that the Store icon will be pinned to the users taskbar when users first logon after updating their PC with Windows 8.1 Update more
  • I/O Performance impact of running Start-DedupJob with –Priority High

    My name is Steven Andress and I am a Support Escalation Engineer with Microsoft’s Platforms Support Team.  This is a short blog post to alert you to a condition you might encounter when running a deduplication job using the Start-DedupJob PowerShell cmdlet. 


    Start-DedupJob [-Type] <Type> [[-Volume] <String[]> ] [-AsJob] [-CimSession <CimSession[]> ] [-Full] [-InputOutputThrottleLevel <InputOutputThrottleLevel> ] [-Memory <UInt32> ] [-Preempt] [-Priority <Priority> ] [-ReadOnly] [-StopWhenSystemBusy] [-ThrottleLimit <Int32> ] [-Timestamp <DateTime> ] [-Wait] [ <CommonParameters>]

    The Start-DedupJob cmdlet starts a new data deduplication job for one or more volumes.  The Priority setting sets the CPU and I/O priority for the optimization job that you run by using this cmdlet.  The only way to run a deduplication job with High priority is to use the cmdlet.  When Priority is set to High, I/O for other processes using the volume may be slowed down or even blocked.  If this is a Cluster Shared Volume (CSV), I/O to the volume from other nodes can be similarly impacted.   

    Do not use "-Priority High" when starting dedup jobs if the server is in production hours.  If you wish to use this switch, please ensure that the job runs after hours so that productivity is not affected.
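    As a sketch, a production-friendlier invocation might look like this (the volume letter is an assumption; -StopWhenSystemBusy comes from the cmdlet syntax above):

    ```powershell
    # Sketch: optimize at normal priority and back off when the system is busy.
    # Volume E: is an assumption - substitute your deduplicated volume.
    Start-DedupJob -Volume 'E:' -Type Optimization -Priority Normal -StopWhenSystemBusy
    ```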

    Steven Andress
    Senior Support Escalation Engineer
    Microsoft Corporation