January, 2013

  • How to Setup a Debug Crash Cart to Prevent Your Server from Flat Lining

    This is Ron Stock from the Global Escalation Services team and I recently had the task of live debugging a customer’s remote server.  In debug circles we use what is known as a crash cart to live debug production servers. The phrase conjures up visions ...read more
  • Case of the Unexplained Services exe Termination

    Hello Debuggers! This is Ron Stock from the Global Escalation Services team and I recently worked an interesting case dispatched to our team because Services.exe was terminating. Nothing good ever happens when Services.exe exits. In this particular case ...read more
  • Behavior Change When Working with Pass-Through Disks in Windows Server 2012 Failover Clusters

    Welcome back to the CORE Team blog. I want to discuss a behavior change in Windows Server 2012 Failover Clusters with respect to configuring highly available virtual machines with pass-through disks.

    Users have been attaching pass-through disks to virtual machines for quite some time. There are several reasons for this, including the limitation on virtual hard disk (VHD) size and/or the decreased performance experienced when using the VHD format. Beginning with Windows Server 2012 Hyper-V, there are new capabilities that make pass-through disks an obsolete concept. These new features, which address both size and performance, include:

    1. Virtual Fibre Channel as a storage connection type inside virtual machines
    2. New virtual hard disk format with VHDX

    If you plan to configure pass-through disks in highly available virtual machines in Windows Server 2012 Failover Clusters, then continue reading. If you do not, then thank you for recognizing the value proposition of the new features in Windows Server 2012.

    To put the new behavior in perspective, I will briefly review the existing behavior in Windows Server 2008 R2. Before a pass-through disk can be configured in a virtual machine in Windows Server 2008 R2, the disk is mapped to a Hyper-V server, brought online, initialized, and taken back offline. If the virtual machine is already highly available in an R2 Failover Cluster, or is going to be made highly available, the disk must also be added to the cluster as a physical disk resource. If the disk is not added as a cluster resource before the virtual machine is configured with the pass-through disk, the 'Refresh Virtual Machine' process that runs will produce warnings viewable in a report.


    Choosing to view the report (Yes), shows a Failed process.


    The details do not provide a clear reason for the failure (i.e. Element not found).


    Additional information contained in the report provides the reason for the failure and can be used by an administrator to fix the problem (i.e. add the disk resource(s) to the cluster).


    The reported failure does not prevent the pass-through disk from being added to the virtual machine configuration. Additionally, an event is registered in the FailoverClustering-Manager log -

    Event ID: 4649
    Source: FailoverClustering-Manager
    Level: Warning
    "Failover Cluster Manager detected that virtual machine <VM_Name> is configured to use one or more disks that are not yet added to the cluster. Please add all required disks to the cluster before making this virtual machine highly available."

    The Hyper-V-High-Availability log registers an event as well -

    Event ID: 21105
    Source: Hyper-V-High-Availability
    Level: Error
    "'Virtual machine Configuration <VM_Name>' failed to update the configuration data of the virtual machine: Element not found. (0x80070490)."

    Even with the error and warnings, the storage is visible in Disk Manager in the virtual machine and can be manipulated as if it had been properly configured. The administrator must realize the problem and implement corrective action.

    Switching gears now to Windows Server 2012 Failover Clusters…

    Pass-through disks are also supported in Hyper-V in Windows Server 2012. The rationale for using them, however, no longer really applies. What has changed is the mechanism that was in place in Windows Server 2008 R2 that provided a 'check and balance' process to be sure the end user did not make an error when configuring the virtual machine (i.e. did not add the disk to the cluster configuration first). To illustrate, let us examine two possible scenarios:

    Scenario 1: Adding a Pass-through disk to an already highly available virtual machine

    When configuring a pass-through disk in an already highly available virtual machine using the Failover Cluster Manager interface in Windows Server 2012, there is no report generated as part of a 'Refresh Virtual Machine' process. Therefore, information that could make an administrator aware of a misconfiguration is not immediately available. The logged events documented above are still registered. Even with the misconfiguration, the disk(s) can still be manipulated in the virtual machine and the virtual machine role moves freely (migrates) between nodes in the cluster without error. A vigilant administrator may eventually notice that the improperly configured pass-through disk is not listed in the Resources tab for the virtual machine and could then correct the misconfiguration.

    Scenario 2: Adding a Pass-through disk to a virtual machine before making it highly available

    If a virtual machine is configured with a pass-through disk before it is made highly available, and those disks have not been added to the cluster, the Configure Role process in Failover Cluster will detect the misconfiguration. The information provided in the generated report will instruct the administrator to add the disks to the cluster before making the virtual machine highly available. In this case, no change to the virtual machine configuration is made until the problem is corrected (the disks are added to the cluster).

    When properly configured, pass-through disks are listed in the Resources tab for the role and are even identified as 'Pass-through Disk' as shown here -


    Those of us who have been working with Failover Clusters for a while would probably agree that having resources attached to highly available cluster roles that are not themselves under control of the cluster is not a good thing. Some of you may ask if this will be fixed. I do not know the answer to that question at this time, but the Product Team is aware of it.

    Thanks, and come back again soon.

    Chuck Timon
    Senior Support Escalation Engineer
    Microsoft Enterprise Platforms Support
    High Availability\Virtualization Team

  • Configuring Change Notification on a MANUALLY created Replication partner

    Hello. Jim here again to elucidate on the wonderment of change notification as it relates to Active Directory replication within and between sites. As you know Active Directory replication between domain controllers within the same site (intrasite) happens ...read more
  • FREE: Online Microsoft Virtual Academy Virtualization Jump Start Classes

    The IT Pro Evangelism team, Microsoft Learning and the Microsoft Virtual Academy are pleased to announce the next Jump Start courses!

    Introduction to Hyper-V

    Thursday, January 24th

    This one-day live event is designed for IT Pros experienced in virtualization (i.e., VMware) but in need of learning how to leverage Hyper-V to perform essential tasks on the Windows Server 2012 platform. Modules include: Introduction to Microsoft Virtualization; Hyper-V Infrastructure; Hyper-V Networking; Hyper-V Storage; Hyper-V Management; Hyper-V High Availability and Live Migration; Integration with System Center 2012 Virtual Machine Manager; and Integration with Other System Center 2012 Components.


    (01) Introduction to Microsoft Virtualization
    (02) Hyper-V Infrastructure
    (03) Hyper-V Networking
    (04) Hyper-V Storage

    **Meal Break**

    (05) Hyper-V Management
    (06) Hyper-V High Availability and Live Migration
    (07) Integration with System Center 2012 Virtual Machine Manager
    (08) Integration with Other System Center 2012 Components

    Microsoft Virtualization for VMware Professionals

    Wednesday, January 30th.

    This course is designed for VMware professionals looking to get up to speed with how Microsoft virtualization and Windows Server 2012 Hyper-V work and compare with VMware vSphere 5.1. Modules include: Introduction & Scalability; Storage & Resource Management; Multi-tenancy & Flexibility; Backup & High-Availability; Introduction & Overview of System Center 2012; Application Management; Cloud on your Terms; and Foundation, Hybrid Clouds & Costs.


    AM | Windows Server 2012 Hyper-V vs. VMware vSphere 5.1
    (01) Introduction & Scalability
    (02) Storage & Resource Management
    (03) Multi-tenancy & Flexibility
    (04) Backup & High-Availability

    **Meal Break**

    PM | System Center 2012 SP1 vs. VMware’s Private Cloud
    (05) Introduction & Overview of System Center 2012
    (06) Application Management
    (07) Cloud on your Terms
    (08) Foundation, Hybrid Clouds & Costs

    These live online events are designed for IT Pros that are new to virtualization, or experienced in other hypervisors (such as VMware or Citrix) and want to learn about Windows Server 2012 Hyper-V. 

    Join Microsoft & VMware virtualization experts Symon Perriman, Jeff Woolsey and Matt McSpirit in a demo-rich learning experience with a live Q&A for a full day of training from 8am to 5pm.

    These events are FREE and open to the PUBLIC so please register and spread the word for Introduction to Hyper-V and Microsoft Virtualization for VMware Professionals today!

    Join us again in late February for our Microsoft Tools for VMware Migration & Integration Jump Start.

  • PFE's Risk Assessment Program as a Service (RaaS)

    This is just a quick post to advertise our PFE folks and their new RaaS Service.

    Yong Rhee and Julio Danoviz from the MSPFE blog walk through a new service called RAP as a Service (RaaS).

    RaaS is a service that provides an in-depth analysis of your environment to help prevent issues from occurring and/or to produce a detailed plan to remediate discovered issues and address known risks.

    For example:  Windows Client RaaS (Windows Desktop Risk Assessment Program (WDRAP)).

    Spoilers: They go through the frequently asked questions!

    Go check out the post if you’re interested!

    -AskPerf Blog Team

  • Removing a Mount Point Disk from a Cluster Group

    Hello everyone. Today’s post is going to cover the steps to follow should you ever have to remove a ‘Physical Disk’ resource from a clustered service or application where that disk is configured as a mount point. Removing a disk from a group may be needed if the application no longer requires the storage and you need to use the disk in some other group or decommission it entirely.

    First, I wanted to talk a little about dependencies and their function. Resource dependencies are created between resources in a cluster group to determine the order in which the resources in the group are brought online and taken offline. Take for instance a SQL Server resource that is dependent on the ‘Physical Disk’ resources where SQL’s data is stored. A dependency should be established so that the SQL Server resource is dependent on the disk resource. This dependency ensures the disk comes online first, and then SQL Server. The same thing happens when taking them offline, except in reverse: the SQL Server resource goes offline before the disk does. Obviously, we would not want SQL Server to attempt to start until all the disks it uses are online first.
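    The online/offline ordering that dependencies enforce can be sketched with a small topological sort. This is only an illustration of the ordering logic (the resource names are made up), not cluster code:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Map each resource to the resources it depends on.
# SQL Server depends on its disks, so the disks must come online first.
dependencies = {
    "SQL Server": {"Disk E:", "Disk F:"},
    "Disk E:": set(),
    "Disk F:": set(),
}

online_order = list(TopologicalSorter(dependencies).static_order())
offline_order = list(reversed(online_order))  # offline runs in reverse

print(online_order)   # disks first, then SQL Server
print(offline_order)  # SQL Server first, then disks
```

    Running this lists the disks before SQL Server for the online direction, and SQL Server first for offline, which is exactly the behavior the cluster enforces.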

    Once resources in a cluster group are linked with dependencies, you have to be careful when deleting resources out of a group. If you don’t remove dependencies properly, you may end up inadvertently removing other resources as well.

    In this example, I have two resources in a cluster group, Resource A and Resource B. I establish dependencies between them so that Resource B is dependent on Resource A.


    This means that when the group is brought online, Resource B will not be brought online until Resource A is online.

    Here is what the dependency report looks like.


    Now that these two resources are linked, you have to be careful when deleting either of them. If I were to delete Resource A from the group without first removing the dependency, BOTH resources would be removed.


    At this point, a pop-up will appear warning that a removal of this resource could affect applications using this resource.


    If you click ‘Yes’, both resources get removed.


    Lesson learned. If the resource you are deleting is dependent on any other resources, remove the dependency first.

    Now, we get to the main point of this post. The above process is fine for deleting resources from a cluster group unless the resource you are deleting is configured as a ‘Physical Disk’ resource and is a mount point disk. The process differs slightly, and you must follow it or you could find yourself unintentionally moving every resource in the group into ‘Available Storage’.

    First, let’s cover the proper way to remove a mount point disk from a cluster group. In this example, I have a plain File Server group with a Network Name, an IP Address, a File Server, and three disks: a root disk (Disk X:) and two mount points using folders on the root of X: called X:\MountPointA and X:\MountPointB.


    Since I don’t have any shares located on X:\MountPointB, I want to remove that disk so I can use it in some other application. The FIRST thing I need to do is take the resource offline.


    Then I can right-click the resource and click ‘Remove from GroupName’


    When you remove a ‘Physical Disk’ resource from a cluster group, it doesn’t actually remove the cluster resource altogether; it moves the disk resource into the ‘Available Storage’ group. This is so that you can reallocate the resource to another group if needed.

    As you can see, the resource now shows in ‘Available Storage’


    At this point, you can remove the mount point configuration, or change it to a lettered drive so you can use it for some other application.

    Now let’s go over what can happen if you don’t take the mount point offline before removing it. The main reason for going over this is to show you how to recover so that there’s no adverse impact to the cluster.

    In this example, I am removing the same resource from the File Server group following the same process as above WITHOUT taking it offline first. First, I verified there were no dependencies on the ‘Disk X:\MountPointB’ resource.


    Now here’s where it gets fun. After I attempted to remove the mount point, ALL of my resources disappeared from the group!


    Time to panic? No, all is not lost. Because we had a mount point configured, and a mount point is not usable unless its root disk is present, the root disk moved to ‘Available Storage’ along with it; and because the rest of the resources DO have dependencies on that disk, they ALL moved as well.
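    Why everything followed the disk can be modeled as a walk over the dependency graph: resources linked by dependency edges (in either direction) move as a unit. Below is a toy sketch with made-up names mirroring the example group, where the mount point's implicit need for its root disk is modeled as an ordinary dependency edge:

```python
from collections import deque

# resource -> resources it depends on (a toy model, not read from a real cluster)
depends_on = {
    "File Server": {"Network Name", "Disk X:"},
    "Network Name": {"IP Address"},
    "IP Address": set(),
    "Disk X:": set(),
    "Disk X:\\MountPointB": {"Disk X:"},
}

def linked_resources(start):
    """All resources reachable from `start` over dependency edges in either
    direction -- the set that moves together when one of them moves."""
    neighbors = {r: set(d) for r, d in depends_on.items()}
    for res, deps in depends_on.items():
        for d in deps:
            neighbors[d].add(res)  # reverse edge: d is depended on by res
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in neighbors[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

moved = linked_resources("Disk X:\\MountPointB")
print(sorted(moved))  # every resource in the group comes along
```

    Starting from the mount point, the walk reaches the root disk and then everything that depends on it, so the whole group ends up in the moved set.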

    It may appear that all of the resources disappeared. Because the UI only shows ‘Physical Disk’ resources in the ‘Available Storage’ group, any other resources placed in that group do not show up in the UI. However, if we run a command line to display all resources and their groups (for example, cluster.exe res), we can see that the resources are still there.


    To get the resources back into the right group, just move the disks back to the original File Server group. Right-click the disk, point to More Actions, and select ‘Move this resource to another service or application’. The same dependency tree will cause the other resources to move back.


    Now we have all our resources back and we can follow the correct process of taking the mount point disk offline BEFORE removing it.


    Jeff Hughes
    Senior Support Escalation Engineer
    Microsoft Enterprise Platforms Support

  • Error in Failover Cluster Manager after install of KB2750149

    On Tuesday, January 8, the recommended fix below was released and made available on Windows Update for the .NET Framework 4.5.

    An update is available for the .NET Framework 4.5 in Windows 8, Windows RT and Windows Server 2012

    When this update is installed on a Windows Server 2012 cluster, you will receive the below error when you select Roles or Nodes from within Failover Cluster Manager.

    A weak event was created and it lives on the wrong object, there is a very high chance this will fail, please review and make changes on your code to prevent this issue

    You can still manage your cluster from a node that does not have this fix, or through PowerShell. The recommendation at this time is to remove this fix, and to not install it.

    Microsoft is aware of the issue.  Once the cause has been identified and a resolution available, this blog will be updated to reflect the resolution.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Platforms Support

  • Tracing with Storport in Windows 2012 and Windows 8

    Welcome back to the CORE Team Blog. Paul Reynolds here. I would like to let everyone know about changes in how to capture Storport traces in Windows Server 2012 and Windows 8. Previously, Bob Golding wrote a blog post on how to do this in Windows Server 2008 and Windows Server 2008 R2. If you have Windows Server 2008 or 2008 R2, continue to use that post for your Storport traces.


    We have two new steps that need to be done to enable Storport tracing on a Windows Server 2012 or Windows 8 machine:

    1. Add a new registry key and property for each disk resource we want to capture Storport traces on:

    Key to add:


    Property to add:

    EnableLogoETW with a value of 1 (DWORD)

    2. After adding these entries, a restart is required to make them effective. Yes, a reboot is required.
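    Putting step 1 together with the sample script shown later in this post (which targets a Storport subkey under each SCSI disk's Device Parameters key), the resulting registry entry would look roughly like the following .reg sketch. The device and instance IDs are placeholders, not real paths:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\<DeviceID>\<InstanceID>\Device Parameters\Storport]
"EnableLogoETW"=dword:00000001
```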

    Many of you may have hundreds of LUNs. To help expedite adding these new registry keys and properties, here is a sample Windows PowerShell script that can help you automate the process:

    $mydrives = Get-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Enum\SCSI\*\*\Device Parameters\' | Where-Object { $_.PSPath -notlike "*Virtual*" -and $_.PSPath -like "*Disk*" }
    foreach ($drive in $mydrives) {
        New-ItemProperty -Path registry::$drive\Storport -Name EnableLogoETW -Value 1 -PropertyType DWord
    }


    Once you are done, you can remove the added keys (if you wish, not necessary) also by running a Windows PowerShell script. Here is a sample script that can help you with that:

    $mydrives = Get-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Enum\SCSI\*\*\Device Parameters\' | Where-Object { $_.PSPath -notlike "*Virtual*" -and $_.PSPath -like "*Disk*" }
    foreach ($drive in $mydrives) {
        Remove-ItemProperty -Path registry::$drive\Storport -Name EnableLogoETW
    }



    After these keys and properties are added and the box rebooted, the process to capture a Storport trace becomes similar (though not identical) to the way we do it with Windows 2008 and Windows 2008 R2 (see Bob Golding’s blog for more detail). For those already familiar with the process, the two main changes are:

    • Instead of choosing IOPERNOTIFICATION, a new choice called LOGO_PERFORMANCE is picked
    • Request duration filters are no longer available

    For those not familiar with the process, here is an overview of how to start a Storport trace. Most of the information is from Bob’s original blog.

    1. Hit the Windows button and type Perfmon.exe, then press enter to start performance monitor.

    2. Expand “Data Collector Sets” and “Event Trace Sessions”


    3. Right-Click on “Event Trace Sessions”


    4. Select “New, Data Collector Set”


    5. The following dialogue will appear:


    Give the new data collector set a name in the dialogue box. In this example I called it “Storport”.

    6. Choose the “Create manually (Advanced)” option and then click Next to see the following dialogue:


    7. Click Add on the dialogue box above and the following list of providers will appear.


    8. Select “Microsoft-Windows-Storport” and click OK. You should now see the following screen:


    9. Select “Keywords (Any)” then click Edit.


    10. Check the box for Logo_Performance, and then click OK:


    11. You should see the following screen:


    12. Click Next and choose a root directory for the trace. In this example I use C:\perflogs:


    13. Click finish. You should see a new Event Trace Session that is stopped. In this example it is called Storport.


    14. Right-click the new Event Trace Session and click Start to start it:


    15. You should now see your new Event Trace Session started:


    16. After you are done collecting data, right-click the running Storport trace and select “Stop”.



    Now that we have a Storport trace, let’s look at the data it contains. A simple way to see the data is via Event Viewer:

    1. Hit the Windows key, type “eventvwr.exe” and hit the enter key. The Event Viewer utility will start:


    2. Right-Click on Event Viewer (local) and click on “Open Saved Log”:


    3. Choose the directory the Storport trace was saved to, highlight the ETL files and click Open. In this example, we chose c:\perflogs.


    4. After clicking “Open” a dialogue box will appear asking to create a new event log copy. Click “Yes”.


    5. You will see the following screen. We left the settings as the default and clicked “OK”.


    6. After clicking OK you will see Event ID 223 messages:


    7. Let’s look at the detail of the data:



    Port: The adapter port number (e.g. RaidPort1, etc.)
    Bus: The bus number
    Target: Target ID of the LUN exposed to the operating system
    LUN: Logical Unit Number of the physical storage
    RequestDuration: How long the IRP took, in 100-nanosecond increments. To convert to milliseconds, divide this number by 10,000.
    CDBLength: Length of the Command Descriptor Block
    CDB: Command Descriptor Block (the SCSI command is the first byte of the CDB). If you wish to look up the SCSI command, see http://en.wikipedia.org/wiki/SCSI_command
    SrbStatus: SCSI Request Block status returned from the adapter (see srb.h and scsi.h in the Microsoft WDK, or http://en.wikipedia.org/wiki/SCSI_Status_Code)
    Irp: I/O request packet
    OriginalIrp: Original I/O request packet
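    The two fields that involve arithmetic can be illustrated in a few lines. This is a hedged sketch of the conversions described above, not a parser for the ETL file:

```python
def duration_ms(request_duration: int) -> float:
    """RequestDuration is in 100-nanosecond units; divide by 10,000 for ms."""
    return request_duration / 10_000

def scsi_opcode(cdb: bytes) -> int:
    """The SCSI command (operation code) is the first byte of the CDB."""
    return cdb[0]

# A request logged with RequestDuration 250000 took 25 ms.
print(duration_ms(250_000))  # 25.0
# A CDB beginning with 0x2A is a WRITE(10) in the SCSI command set.
print(hex(scsi_opcode(bytes([0x2A, 0x00, 0x00]))))  # 0x2a
```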



    When troubleshooting disk performance issues, Storport traces capture data from the last layer of software in Windows that an I/O Request Packet (IRP) will pass through before being handed off to hardware. It is an excellent tool for checking if slow disk performance is hardware related.

    In a future blog post, I will show a way to look at Storport data via Excel spreadsheets with Pivot Tables and Pivot Charts. You can look at millions of rows of data if you use the free PowerPivot add-in available for Office 2010 and Office 2013.

    Paul Reynolds
    Support Escalation Engineer
    Windows Core Support Team

  • ADAMSync + (AD Recycle Bin OR searchFlags) = "FUN"

    Hello again ADAMSyncers! Kim Nichols here again with what promises to be a fun and exciting mystery solving adventure on the joys of ADAMSync and AD Recycle Bin (ADRB) for AD LDS. The goal of this post is two-fold: Explain AD Recycle Bin for AD ...read more
  • Intermittent Mail Sack: Must Remember to Write 2013 Edition

    Hi all, Jonathan here again with the latest edition of the Intermittent Mail Sack. We've had some great questions over the last few weeks so I've got a lot of material to cover. This sack, we answer questions on: Issues upgrading DFSR hub servers ...read more
  • Happy New Year 2013!!!

    Happy New Year AskPerf Blog Readers!  Can you believe that 2012 is now over?  Seems like it just flew by to me.  Anyway, we had some HUGE product launches in 2012.  Some that come to mind:

    • Windows 8
    • Microsoft Surface
    • Windows Server 2012
    • Windows Phone 8
    • Office 2013
    • Outlook.com
    • SQL Server 2012
    • Halo 4

    Plus many, many, many others…  I personally feel that 2013 is going to be EPIC (acceptable buzzword?)!  We’re looking forward to getting some awesome content out to you this year, so stay tuned!

    With that, we hope you have a fantastic 2013!

    -AskPerf Blog Team