Blog - Title

October 2010

  • Mythical Creatures – Corrupt User Profiles



               “Ned” the Gnome

    Mike here again and in the spirit of Halloween I want to discuss mythical creatures. What would the world be without J.R.R. Tolkien’s idea of smelly, leather-skinned Orcs or Greek Mythology’s gift of Pegasus, the winged stallion? Unfortunately, for each great mythical creature, like giant walking trees (that walk for hours—thank you Kevin Smith), there is a horrendous one. The dreadful creature I want to discuss today is the corrupt user profile.

    I absolutely shudder when I hear the words “corrupt profile.” Like Superman, who is defeated by a glowing green rock—the corrupt profile is my kryptonite (Ned’s is the phrase Tips and Tricks). So, the purpose of this blog is to dispel the myth of the corrupt user profile.

    Let me start by contradicting myself—there is actually such a thing as a corrupt user profile; however, it is extremely rare. I’ve spent over ten years at Microsoft and I’ve seen two—count them—two actually corrupt user profiles. I’ll identify the “real” corrupt profile later. First, let’s identify what is NOT a corrupt user profile because it’s more prevalent.

    User profile load failure

    Occasionally, users report that their profiles do not load, or Windows informs them that it logged them on with a temporary user profile. It's rare for Windows to fail to load a user profile because the profile is "corrupt." Typically, a user profile does not load because:

    • A process is not playing nice in the sandbox—meaning some process other than Winlogon opened ntuser.dat exclusively thereby preventing Winlogon from loading the user profile.
    • Windows cannot find the user profile. This is most often the case when using Roaming, Mandatory, or Super Mandatory user profiles. All these profile types require network connectivity. In these cases, no network means Windows will not load the user profile.
    • Configuration – Windows is configured, through a policy or otherwise, not to load the user profile. Profile quota size, or slow links are common causes for this scenario.

    The most common scenario classified as the mythical corrupt profile is the first, and rightly so, because it is painfully difficult to diagnose. Configuration is the second most likely scenario attributed to the mythical corrupt profile. It's rare for unavailable user profiles, or scenarios involving the awesome access is denied error message, to be labeled corrupt.

    User settings missing

    Another scenario that perpetuates the corrupted profile myth is one that involves user settings disappearing. It’s unlikely that user settings disappear; it’s more likely the user settings were not saved. A number of scenarios can lead to this possibility.

    Profile trickery

    Most recently, I've seen a number of scenarios, mostly with Terminal Servers, where settings do not persist. Our case data show a trend of these scenarios using non-Microsoft profile management software. This software changes how Windows handles the user profile. Typically, these implementations treat the user profile as a local profile and then implement "magical magic" to roam user data back to a central location. This introduces a number of moving parts that must all work correctly to ensure user settings are saved. Also, some of these non-Microsoft solutions let you partition the user settings into those that persist and those that do not, giving you control over which user settings roam through their solution and which do not. In these cases, verify the solution, third party or otherwise, propagated the saved settings. However, this is not a corrupt user profile.

    Multiple instances – last writer wins

    Remember that Windows stores user settings in a registry file. The registry file is the smallest unit of roaming data. That means that Windows roams the entire user hive when the user logs off (or in the background with Windows 7). However, when a user logs on to multiple computers or has multiple sessions, then that user’s settings are only as good as the last session that writes to the central location.

    Consider the following scenario. A user has a laptop and frequently uses Terminal Services, and shares the same profile between these computers. On Friday, the user logs on to their laptop—the profile is loaded. After some time, the user makes a Terminal Services connection and begins to work in that session. The user then disconnects the Terminal Services session and goes to lunch. When they return, they change their desktop background on their laptop. The user logs off at the end of the day and their saved user settings roam to the central location. On Monday, the user logs on expecting their new desktop background; instead, they receive their old one. You discover that idle Terminal Services sessions are configured to log off after a preconfigured idle time. That session's copy of the user settings has a later time stamp than the laptop's, so it writes last, making the user's setting appear as if it did not save. This is another reason why we encourage separate user profiles for Terminal Services. So, add this experience to the list of mythical corrupt profiles.
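    The last-writer-wins behavior can be sketched in a few lines. This is an illustration with hypothetical data, not how roaming is actually implemented; it only models "the whole hive roams, and the most recent logoff overwrites the central copy":

```python
# Sketch: "last writer wins" when one roaming profile is written from
# multiple sessions. The entire user hive roams as a unit, so whichever
# session logs off last overwrites the central copy.

def roam_to_central(central, session_hive, logoff_time):
    """Overwrite the central profile if this logoff is the most recent write."""
    if central["written"] is None or logoff_time > central["written"]:
        central["settings"] = dict(session_hive)
        central["written"] = logoff_time
    return central

central = {"settings": {}, "written": None}

# Friday 17:00 - the laptop logs off with the new wallpaper.
laptop = {"Wallpaper": "new.jpg"}
roam_to_central(central, laptop, logoff_time=17.0)

# Friday 18:30 - the idle Terminal Services session is force-logged-off.
# Its copy of the hive still holds the old wallpaper, and it writes last.
ts_session = {"Wallpaper": "old.jpg"}
roam_to_central(central, ts_session, logoff_time=18.5)

print(central["settings"]["Wallpaper"])  # prints "old.jpg" - the new setting is lost
```

    Nothing was corrupted here: both writes were perfectly valid, and the central store simply kept the newer one.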

    Misbehaving applications

    Another scenario that perpetuates the corrupt profile myth involves misbehaving applications that "magically" work when you delete the user profile. This is not a corrupt user profile. There is a big difference between corrupt data and unexpected data, and it's difficult to determine what is wrong in these scenarios.

    Clearly it is related to user data, because resetting the user data to blank or nothing restores the application to its expected behavior. These scenarios require a thorough understanding of the application, how it consumes user data, and the upper and lower limits of each setting. Deleting the entire user profile to accommodate a misbehaving application is a quick fix with huge ramifications. The "fix" for one application effectively breaks other applications. Also, deleting the user profile removes stored credentials, keys, and certificates that may be critical to the user.

    A better approach is to create a new user and test the application with a new user profile. But deleting a user profile because an application, or a feature of an application, does not work overlooks the larger issue. Resist the urge and instead break out Process Monitor, capture registry activity, and reproduce the issue. Inventory the registry keys the application uses in the user's hive. Review the values of each key in a working and a failing scenario and compare the two. Use the process of elimination to determine the setting and value causing the failure.
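    The compare step is easy to automate once you have exported the application's values from a working and a failing profile. A sketch, with hypothetical value names:

```python
# Sketch: diff the application's per-user registry values captured from a
# working profile and a failing profile (value names here are made up).

def diff_settings(working, failing):
    """Return {value_name: (working_value, failing_value)} for mismatches."""
    names = set(working) | set(failing)
    return {n: (working.get(n), failing.get(n))
            for n in names if working.get(n) != failing.get(n)}

working = {"WindowPosX": 100, "WindowPosY": 50, "Theme": "Default"}
failing = {"WindowPosX": -12, "WindowPosY": 50, "Theme": "Default"}

print(diff_settings(working, failing))  # -> {'WindowPosX': (100, -12)}
```

    The mismatched values are your shortlist for the process of elimination.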

    If time is not on your side and you know deleting the user profile resolves the problem, then create a virtual machine of the problematic computer so you can continue your investigation at a later time. Incorrect data stored in user settings does not make the profile corrupt.

    Will the real corrupt profile please stand up

    I’ve identified some of the common misconceptions that are associated with the corrupt profile mythology, and there are others. However, these scenarios consistently rise to the top. So, what is a real corrupt profile? I’m glad you asked.

    A user profile is a predetermined folder structure and accompanying registry data. Microsoft Windows uses the registry data to describe and preserve the user environment. The folder structure is storage for user and application data, specifically for an individual user. Windows stores the profile on the local hard drive, loads the profile when the user logs on, and unloads the profile when the user logs off.


    The preserved data that describes the user's environment is nothing more than a registry hive. More specifically, the user's registry portion of the profile is loaded into HKEY_CURRENT_USER. Registry hives, keys, value names, and values are stored in a specific structure that Windows recognizes as the registry. Each element within the structure has its own metadata, such as last write time and security descriptor. All of this information must adhere to the scope and limits of the structure. Consider the following example:

    An application saves the position of its window in the user's settings. Window locations are represented as coordinates on the screen. These coordinates are integer values, and integers can be positive or negative. However, the upper left corner of the screen is typically represented by the coordinate 0, 0. What if the application saved -12 and 0 as this data? Both numbers are valid integers, and each meets the structure of a REG_DWORD, the integer data type for the registry. Yet the application does not work correctly when this value is present in the registry. This is not a corrupt profile—it's bad data; however, not in the context of the registry or the profile. The registry only cares that the value is within the scope of its data type.
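    The registry's indifference can be shown in miniature. This Python sketch (not anything Windows actually runs) packs -12 into the four bytes a REG_DWORD occupies; the data is structurally perfect even though it is a nonsensical screen coordinate:

```python
import struct

# Sketch: -12 is a perfectly valid 32-bit value for a REG_DWORD, so the
# registry stores it without complaint; only the application knows that a
# window X coordinate should not be negative.

def to_reg_dword(value):
    """Pack a signed int into the 4 little-endian bytes a REG_DWORD holds."""
    return struct.pack("<i", value)

raw = to_reg_dword(-12)
assert len(raw) == 4                  # structurally valid registry data
stored = struct.unpack("<I", raw)[0]  # how the DWORD reads back, unsigned
print(hex(stored))                    # prints 0xfffffff4 - valid data, bad coordinate
```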

    So, an actual corrupt profile is when the structure of the registry hive no longer conforms to the expected structure. I've seen this twice in 13 years, and in both cases it was not exclusive to the user's registry. The corruption persisted throughout the registry hives, and multiple aspects of the computer did not function correctly. In both cases, new users with new profiles as well as existing users with existing profiles experienced the problem, and it was noticeable that multiple aspects of the computer were behaving poorly. Ultimately, the problem was diagnosed to a non-Windows binary. The binary overwrote heap memory that the registry used, modifying that data before it was committed to disk. Windows then committed the modified memory to disk, thereby misaligning the registry structure—which is a real corrupt user profile.


    Be wary when you hear a co-worker reporting a corrupt user profile. Ask them if they saw it during their most recent snark hunting trip or during their last encounter with a Ravenous Bugblatter Beast. More likely, they've seen one of the manifestations we've described in this post. It's a difficult and time-consuming problem to troubleshoot and resolve, but some additional diligence will surface the real problem.

    Mike “The Corrupt Profile Gladiator” Stephens

  • Hunting down DES in order to securely deploy Kerberos

    Hello folks, Ned here again. By now many businesses have begun deploying Windows Server 2008 R2 and Windows 7. Since Active Directory has become ubiquitous, Kerberos is now commonplace. What you may not know is that we made a significant change to default cryptographic support in Kerberos starting in Win7/R2 and if you are not careful, it may break some of your environment: by default, the DES encryption type is no longer enabled.

    For those who have homogeneous Windows networks with no third party operating systems or appliances, and who have not configured DES for any user accounts, you can stop reading.

    Ok, one guy left. Everyone else pay attention.

    Some Background on Kerberos Encryption Types

    The phrase “encryption type” is simply another way of saying cryptography. Windows supports many cipher suites in order to protect Kerberos from being successfully attacked and decrypted. These suites use different key lengths and algorithms; naturally, the newer the cipher suite we support and use, the more secure the Kerberos exchange.


    Cipher suite               Key length   MS OS Supported

    AES256-CTS-HMAC-SHA1-96    256-bit      Windows 7, Windows Server 2008 R2

    AES128-CTS-HMAC-SHA1-96    128-bit      Windows Vista, Windows Server 2008 and later

    RC4-HMAC                   128-bit      Windows 2000 and later

    DES-CBC-MD5                56-bit       Windows 2000 and later, off by default in Win7/R2

    DES-CBC-CRC                56-bit       Windows 2000 and later, off by default in Win7/R2

    In practical terms, a Windows computer starts a Kerberos conversation by sending a list of supported encryption types (ETypes). The KDC responds with the most secure encryption type they both support. For example, a Windows 7 computer sends an AS_REQ. The AS_REQ contains the supported encryption types of AES256, AES128, RC4, and DES (only because I enabled it through security policy) – we can see this in a network capture:


    The KDC responds that it requires pre-authentication and sends a list of its supported encryption types:


    The client uses a password hash to encrypt a key. The client uses the encrypted key to protect the time stamp that it includes in the “real” AS_REQ. In this instance, the preferred encryption used is AES256, the highest level of encryption supported by Win7 and 2008 R2:
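    The "pick the strongest etype both sides support" step can be sketched in a few lines. This is an illustration, not the actual KDC code; the numeric etype IDs follow RFC 3961 and the Netmon parser table shown later in this post:

```python
# Sketch of Kerberos etype negotiation: the client offers a list of
# encryption types and the KDC selects the strongest one both support.

ETYPE_STRENGTH = {  # higher rank = preferred
    1: 0,    # des-cbc-crc
    3: 1,    # des-cbc-md5
    23: 2,   # rc4-hmac
    17: 3,   # aes128-cts-hmac-sha1-96
    18: 4,   # aes256-cts-hmac-sha1-96
}

def negotiate(client_etypes, kdc_etypes):
    """Return the strongest etype present in both lists."""
    common = set(client_etypes) & set(kdc_etypes)
    if not common:
        raise ValueError("KDC_ERR_ETYPE_NOSUPP: no common encryption type")
    return max(common, key=ETYPE_STRENGTH.__getitem__)

# A Windows 7 client (with DES enabled by policy) against a 2008 R2 KDC:
print(negotiate([18, 17, 23, 3, 1], [18, 17, 23]))  # -> 18 (aes256)
```

    Note what this implies: a DES-only client talking to a DES-disabled KDC has no common etype at all, which is exactly the breakage described below.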


    I use Netmon 3.4 for the above examples (I’ll explain its importance later). As you can see, it’s un-fun to parse Kerberos traffic with it. Here is how the same traffic looks in Wireshark, which is sometimes easier to read for learning purposes:


    The Deal with DES

    DES (Data Encryption Standard) came about in the late 1970s as a standardized encryption suite. Since then it’s been adopted by a lot of software; it’s probably one of the most supported ciphers in the world. It’s also quite insecure, and no version of Windows has ever used it by default when talking to Windows KDCs; the minimum there has always been 128-bit RC4-HMAC. Starting in Windows 7, we decided that Windows, out of the box, would no longer support DES… at all. You’re in good shape as long as you don’t have any operating systems other than Windows.

    The problem is people use other operating systems (and may not even know it; your appliance web proxy is running a 3rd party operating system, bub). Those operating systems are not always configured to use Kerberos security at the highest cipher levels and often do not support negotiation or pre-authentication. Also, they may not support AES ciphers. And certain applications might require DES encryption due to short-sighted programming or default settings.

    This leaves you in a pickle: do you roll the dice and deploy Windows Server 2008 R2 DC’s and Windows 7 clients, hoping that there are no issues? Or do you enable DES on all your new computers using group policy, knowing you are enabling a cipher that weakens Kerberos? I think there’s a third option… that’s better…

    Finding Kerberos EType usage in your domain

    We document some simple steps for finding DES usage in your domain in KB977321, using network captures in a test environment. But wouldn’t it be easier to determine Kerberos usage based on security auditing, so that you could gather, analyze, and query the data? You can. It only requires that you have DC’s running at least Windows Server 2008.

    1. If you have already deployed Windows Server 2008 R2 and have enabled DES everywhere to err on the side of app compatibility, then configure security auditing against all DCs for success:

    • Kerberos Authentication Service
    • Kerberos Service Ticket Operations

    These auditing actions are part of the Account Logon category. For more details on these, review these two KBs:

    If you don’t know how to enable this auditing in general, review:

    And no, this doesn’t work with Windows Server 2003 DC’s. Who cares, DES can’t be touched there… :)

    Depending on the size of your environment or quantity of auditing events, you may need to use some sort of security event log harvesting service like ACS. It will make querying your data easier. There are third parties that make these kinds of apps as well.

    2. Drink mint juleps for a few days.

    3. Examine your security audit event logs on your DC’s. Here is where it gets interesting. A few examples:



    Log Name:      Security

    Source:        Microsoft-Windows-Security-Auditing

    Date:          10/13/2010 5:06:47 PM

    Event ID:      4769

    Task Category: Kerberos Service Ticket Operations

    Level:         Information

    Keywords:      Audit Success

    User:          N/A



    A Kerberos service ticket was requested.


    Account Information:

           Account Name:        krbned@CONTOSO.COM

           Account Domain:            CONTOSO.COM

           Logon GUID:          {eed17165-1ca0-613b-51ae-17005546c7f0}


    Service Information:

           Service Name:        2008R2-01-F$

           Service ID:          CONTOSO\2008R2-01-F$


    Network Information:

           Client Address:             ::ffff:

           Client Port:         49203


    Additional Information:

           Ticket Options:            0x40810000

           Ticket Encryption Type:    0x12

           Failure Code:        0x0

           Transited Services:  -




    Log Name:      Security

    Source:        Microsoft-Windows-Security-Auditing

    Date:          10/12/2010 10:32:29 AM

    Event ID:      4768

    Task Category: Kerberos Authentication Service

    Level:         Information

    Keywords:      Audit Success

    User:          N/A



    A Kerberos authentication ticket (TGT) was requested.


    Account Information:

           Account Name:        krbned

           Supplied Realm Name: CONTOSO

           User ID:                   CONTOSO\krbned


    Service Information:

           Service Name:        krbtgt

           Service ID:          CONTOSO\krbtgt


    Network Information:

           Client Address:            ::ffff:

           Client Port:         1088


    Additional Information:

           Ticket Options:            0x40810010

           Result Code:         0x0

           Ticket Encryption Type:    0x17

           Pre-Authentication Type:   2


    Certificate Information:

           Certificate Issuer Name:         

           Certificate Serial Number:

           Certificate Thumbprint:          



    These “Ticket Encryption Type” values look mighty interesting. But what is a 0x17? Or a 0x12? Is there a complete list of what these all mean?

    Use Netmon to decipher these values. First though, I’ll let you in on a little secret: Netmon 3 exists mainly as part of our efforts to document our protocols for the EU and the DOJ. That’s why when you look at the parsing in the frame details page it is designed more for completeness than readability. You get to reap the rewards of this, as it’s why the Netmon parsers are not monolithic – instead, they allow easy viewing and even live editing, all loaded from text.

    1. Go back and look at that network capture screenshot I showed previously:


    All of those ETypes have a number in parentheses. But they aren’t hex numbers. And looking at my event logs above, for example, the etype 0x12 came from a Windows 7 computer; that has to be AES256, which the above screenshot shows as a value of 18.

    I just gave you a big hint. :)

    2. Take a look at the parsers tab in Netmon, specifically for Protocols, then KerberosV5:



    3. Take a look at the KrbETypeTable entry – look familiar? Here’s where those numbers are coming from that get displayed in the parser:

    Table KrbETypeTable( eType )


           Switch( eType )


                  Case 1: FormatString("des-cbc-crc (%d)", eType);

                  Case 2: FormatString("des-cbc-md4 (%d)", eType);

                  Case 3: FormatString("des-cbc-md5 (%d)", eType);

                  Case 4: FormatString("[reserved] (%d)", eType);

                  Case 5: FormatString("des3-cbc-md5 (%d)", eType);

                  Case 6: FormatString("[reserved] (%d)", eType);

                  Case 7: FormatString("des3-cbc-sha1 (%d)", eType);

                  //  9 through f in both RFC 3961 and MCPP

                  Case 9: FormatString("dsaWithSHA1-CmsOID (%d)", eType);

                  Case 10: FormatString("md5WithRSAEncryption-CmsOID (%d)", eType);

                  Case 11: FormatString("sha1WithRSAEncryption-CmsOID (%d)", eType);

                  Case 12: FormatString("rc2CBC-EnvOID (%d)", eType);

                  Case 13: FormatString("rsaEncryption-EnvOID (%d)", eType);

                  Case 14: FormatString("rsaES-OAEP-ENV-OID (%d)", eType);


                  Case 15: FormatString("des-ede3-cbc-Env-OID (%d)", eType);

                  Case 16: FormatString("des3-cbc-sha1-kd (%d)", eType);

                  Case 17: FormatString("aes128-cts-hmac-sha1-96 (%d)", eType);

                  Case 18: FormatString("aes256-cts-hmac-sha1-96 (%d)", eType);

                  Case 0x17: FormatString("rc4-hmac (%d)", eType);

                  Case 0x18: FormatString("rc4-hmac-exp (%d)", eType);

                  Case 0x41: FormatString("subkey-keymaterial (%d)", eType);

    And what do you think happens if I use calc to convert decimal 18 to hex? Indeed – you get 0x12, which is aes256-cts-hmac-sha1-96, and that’s what your event log was trying to tell you. So, all converted out, the theoretical event log entries could be:


    Ticket Encryption Type    Cipher suite

    0x1                       des-cbc-crc

    0x3                       des-cbc-md5

    0x11                      aes128-cts-hmac-sha1-96

    0x12                      aes256-cts-hmac-sha1-96

    0x17                      rc4-hmac

    0x18                      rc4-hmac-exp

    And if you want to catch DES usage, watch for events that include 0x1 and 0x3, as those are the versions of DES that Windows implements. Tada…
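    If you export the audit events to text, flagging DES requests is trivial to script. A sketch (the sample event text below is made up, but the field name matches the 4768/4769 events shown earlier):

```python
import re

# Sketch: scan exported security-event text for "Ticket Encryption Type"
# values and flag the DES etypes (0x1 = des-cbc-crc, 0x3 = des-cbc-md5).

DES_ETYPES = {0x1, 0x3}

def find_des_events(event_text):
    """Return (etype, is_des) for every Ticket Encryption Type found."""
    etypes = [int(m, 16) for m in
              re.findall(r"Ticket Encryption Type:\s*(0x[0-9a-fA-F]+)", event_text)]
    return [(e, e in DES_ETYPES) for e in etypes]

sample = """\
Ticket Encryption Type:    0x12
Ticket Encryption Type:    0x3
"""
print(find_des_events(sample))  # -> [(18, False), (3, True)]
```

    Point it at a dump of your harvested events and the `True` entries are your DES inventory.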

    Regardless of whether or not you care about Kerberos DES parsing, you can use these techniques to reverse engineer protocols based on the Netmon parser code or even fix parser errors. It’s a slick technique to keep in your back pocket. If you just wanted to cheat you could have looked these up in RFC 3961. This is “teaching to fish” time :).

    Ok, now what?

    It’s all well and good to know that you have software using DES in your environment. The next step is to change that behavior. Here are your tasks:

    • Make sure you have no DES-enabled user accounts in your domain.


    • Use your event log audit trail to create an inventory of computers sending DES etypes. Examine those computers and devices (they are probably not running Windows).
    • If the computers are running Windows, examine them for non-Microsoft software. One of those is the culprit. Netmon, Process Monitor, Process auditing, etc. can all be used to track down which process is requiring the insecure protocol. Contact the vendor about your options to alter the behavior.
    • If the computers are not running Windows or they are appliances, examine their local Kerberos client configurations or contact their vendors. You will also need to look at the installed apps as the OS might not be to blame (but it usually is).
    • If you get stuck with a vendor that refuses to stop using DES, contact their sales department and make a stink. Sales will usually be your advocate, as they want your money so they can buy more BMW M3’s. Using DES at this point is terminal laziness or the sign of a vendor that absolutely does not care about security – probably not someone with whom you want to do business.

    Final thoughts

    This post wasn’t a treatise on Kerberos or even encryption types, naturally. If you want a lot more interesting reading and an insomnia cure, I recommend:

    Until next time.

    - Ned “that’s the same cipher I have on my luggage!” Pyle

  • Get-DFSRBacklog PowerShell Script Available

    Hi folks. Our buddy Steve Grinker has posted an excellent sample PowerShell script for retrieving DFSR backlogs over the MS Code Gallery site:


    It uses WMI to retrieve DFSR Replication Groups, Replicated Folders, Connections, and servers, then calculates all of their backlog information. All with a simple command-line and no need to shell DFSRDIAG or WMIC. It’s quite slick.

    When you run it with no arguments it returns a human readable output of backlog status on the server you ran the script on, along with all of its partners:



    But it also supports more sophisticated options like specifying remote servers, a count of files you consider to be “high backlog” for your specific environment, and some nice output options you get for free thanks to PowerShell pipelines:

    get-dfsrbacklog.ps1 <server> <backlog count error threshold> | sort-object backlogstatus | format-table -groupby backlogstatus


    Make sure your execution policy allows unsigned scripts to run, as the script is obviously unsigned.


    This is a great example of why PowerShell often kicks vbscript’s tail. Look at all the code that was required in this vbscript sample, where it is still less flexible than Psh.

    Naturally, this script is not supported by Microsoft and we’re just pointing to someone else’s code. If Mr. Grinker wants to chat with you he will. Now maybe someone can convince him to write a PowerShell alternative to restoredfsr.vbs.

    Great work Steve!

    - Ned “community cheerleader” Pyle

  • RESTOREDFSR.VBS Version 3 now available

    Hello folks, Ned here again. The infamous restoredfsr.vbs has now been rewritten (thanks for the prodding, MLatCC) and it fixes some bad design limits of the older versions that were caused by time constraints and apathy. For those of you who have not had the “pleasure” of restoredfsr.vbs:


    Update 6/12/14

    Well that gallery is kaput. Now hosted from here.

    This old script is really gross; I recommend using our new Windows PowerShell cmdlet Restore-DfsrPreservedFiles instead. You can restore from a Windows 8.1 client if you install RSAT, or from a WS2012 R2 server. You can run it locally or just map a drive to \\dfsrserver\h$ or whatever the root drive is, then restore.

    Take a look at for steps on using this cmdlet.


    A simple vbscript to allow recovery of DFS Replicated files that have been pushed into the ConflictAndDeleted or PreExisting folders due to misadventure.
    Remember, this script must be run from a CMD prompt using cscript.exe. Don't just double-click it.


    The script also requires you to edit three paths (your source files, a new destination path, and the XML manifest you are calling). If you fail to edit those, the script will exit with an error.

    ' Section must be operator-edited to provide valid paths

    ' Change path to specify location of XML Manifest
    ' Example 1: "C:\Data\DfsrPrivate\ConflictAndDeletedManifest.xml"
    ' Example 2: "C:\Data\DfsrPrivate\preexistingManifest.xml"


    ' Change path to specify location of source files
    ' Example 1: "C:\data\DfsrPrivate\ConflictAndDeleted"
    ' Example 2: "C:\data\DfsrPrivate\preexisting"

    SourceFolder = ("C:\your_replicated_folder\DfsrPrivate\ConflictAndDeleted")
    ' Change path to specify output folder

    OutputFolder = ("c:\your_dfsr_repair_tree")


    This script is an unsupported, as-is, no-warranty, last-gasp solution. If you are using it, you don’t have any backups, you are not using Previous Versions, and you are praying that you have not hit the ConflictAndDeleted quota (which is only 660MB by default).

    This new version now properly detects all file and folder types, runs a bit faster, and no longer requires weird trailing backslashes. It does not support command-line arguments as the very idea bores me to tears.

    Make sure you destroy all previous versions of this script you have lying around.

    - Ned “mail sack doubtful” Pyle

  • Friday Mail Sack: Not Particularly Terrifying Edition

    Hiya folks, Ned here again. In today’s Mail Sack I discuss SP1, DFSR, GPP passwords, USMT, backups, AD disk configurations, and the importance of costumed pets.



    Is it safe to use the Windows 7 and Windows Server 2008 R2 Service Pack 1 Release Candidate builds in production? They came out this week and it looks like SP1 is pretty close to being done.


    No. This build is for testing only, just like the beta. The EULA specifically states that this is not for production servers and you will get no support running it in those environments.

    For more info and test support:


    I need to ramp up on USMT for our planned Windows 7 rollout early next year. I’ve found a lot of documentation but do you have recommendations on how I can learn it progressively? I know nothing about USMT so I’m not sure where to start.


    I would recommend going this route:


    1. What Does USMT Migrate?
    2. Common Migration Scenarios
    3. Quick Start Checklist
    4. Step-by-Step: Basic Windows Migration using USMT for IT Professionals
    5. Step-by-Step: Offline Migration with USMT 4.0
    6. How USMT Works
    7. Requirements


    1. ScanState Syntax
    2. LoadState Syntax
    3. Config.xml File
    4. Create a Custom XML File
    5. Customize USMT XML Files
    6. USMT Custom XML the Free and Easy Way
    7. Exclude Files and Settings
    8. Include Files and Settings
    9. Reroute Files and Settings
    10. Migrate EFS Files and Certificates
    11. Offline Migration
    12. USMT, OST, and PST
    13. Understanding USMT 4.0 Behavior with UEL and UE
    14. Controlling USMT Desktop Shell Icon Behavior from XP (and how to create registry values out of thin air)
    15. Get Shiny with USMT: Turning the Aero Theme on During XP to Windows 7 Migration


    1. Conflicts and Precedence
    2. Recognized Environment Variables
    3. USMT and /SF
    4. XML Elements Library


    1. Common Issues
    2. USMT 4.0: Cryptic Messages with Easy Fixes
    3. Don’t mess about with USMT’s included manifests!
    4. Log Files
    5. Return Codes
    6. USMT 4.0 and Custom Exclusion Troubleshooting
    7. USMT 4 and WinPE: Common Issues


    Is there a way to generate a daily DFSR health report?


    You can use DFSRADMIN.EXE HEALTH NEW <options> as part of a Scheduled Task to generate a report every morning before you get your coffee.



    Is there any good reason to separate the AD Logs, DB and SYSVOL onto separate drives? Performance maybe?


    Thomas Aquinas would have made a good DS engineer:

    "If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments [if] one suffices."

    We’ve not really pursued that performance line of thinking as it turned out to be of little value on most DC’s: AD’s database and logs are mostly static. In most environments for every write to an AD DB, there are thousands of reads. If your average disk read/write is under 25ms for any disks that hold the AD database and its transaction logs you are in the sweet spot. LSA tries to load as much of the DB into physical RAM as possible and it also keeps common query and index data in physical memory, so the disk perf isn’t super relevant unless you are incredibly starved for RAM. Server hardware is so much better now than when AD was invented that it’s just hard to buy crappy equipment – this isn’t Exchange or SQL where every little bit counts.

    Guidance around separating the files for SYSVOL was always pretty suspicious. That data is glacially static (in most environments it might only see a few changes a year, if ever). It has almost no data being read during GP processing either so disk performance is almost immaterial. I have never personally worked a case of a slow disk subsystem making GP processing slow.

    We still provide plenty of space guidance though, and that might make you need to separate things out:

    Since Win2008 and later made it so easy to grow and shrink volumes though, even that is not a big deal anymore.


    We are looking to make some mass refreshes to our local admin passwords on servers and workstations. Initially I started looking at some 3rd party tools, but they are a little pricey. Then I recalled the "Local Users and Groups" option in Group Policy preferences. However, I have seen some rumblings on the Internet about the password stored in the XML being not completely secure.


    We consider that password system in GPP XML files “obscured” rather than “securely encrypted”.

    The password is obfuscated with AES-256 (i.e. encrypted, but with a publicly documented symmetric key). If you were to control permissions on the GP folder containing the policy (so that it no longer had “Authenticated Users” or any other user groups with READ access), as well as use IPSEC to protect the traffic on the wire, it would be reasonably secure from anyone but admins and the computers themselves. Alan Burchill goes into a clever GPP technique for periodic password changes here:

    How to use Group Policy Preferences to set/change Passwords

    He also makes the excellent point that a reasonably secure periodic password change system is better than just having the same password unchanged for years on end! Again, I would add to his example that using IPSEC and removing the “Authenticated Users” group from that group policy’s folder in SYSVOL (and replacing it with “Domain Computers”) is healthy paranoia.
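    To see why “obscured” is the right word, here’s a toy Python sketch. This is not the actual GPP algorithm or key – GPP uses AES-256, and the hypothetical fixed key below merely stands in for the real 32-byte key that is publicly documented – but the security property is identical: anyone who can read the XML and knows the public key can reverse the “encryption” mechanically.

```python
import base64

# Hypothetical stand-in for a published key; the real GPP AES-256 key is
# equally public, which is what makes this obscurity rather than security.
PUBLIC_KEY = b"this key is printed in the docs!"  # 32 bytes, like an AES-256 key

def obscure(password: str) -> str:
    """Toy fixed-key XOR + base64, standing in for AES with a public key."""
    raw = password.encode("utf-16-le")
    mixed = bytes(b ^ PUBLIC_KEY[i % len(PUBLIC_KEY)] for i, b in enumerate(raw))
    return base64.b64encode(mixed).decode()

def reveal(cpassword: str) -> str:
    """Anyone with READ access to the policy folder can do this step."""
    mixed = base64.b64decode(cpassword)
    raw = bytes(b ^ PUBLIC_KEY[i % len(PUBLIC_KEY)] for i, b in enumerate(mixed))
    return raw.decode("utf-16-le")
```

    The math protects nothing once the key is public, so the ACL has to – which is exactly why stripping “Authenticated Users” READ access from that policy folder matters.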

    Official ruling here, regardless of above:

    Try to not get spit all over me when you scream in the Comments section…


    Can DFSR read-only folders be backed up incrementally? My files’ Archive bits never change when I run a backup, so how can the backup software know to grab only changed files?


    Absolutely. And here’s a treat for you:

    The Archive bit has been dead since Windows Vista.

    If you run a backup of a non-read-only replicated folder (or anywhere else) with Windows Server Backup, you will notice that the Archive bit never gets cleared either. The Volume Shadow Copy Service instead uses the NTFS USN journal to track which files to include in incremental backups. Some backup solutions might still use Archive bits, but Windows does not – it is dangerous to rely on the bit because so many third party apps (or even just users) can clear the attribute and break your backups. There’s next to no TechNet info on this out there, but SriramB (the lead developer of DPM) talks about it at length:
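    The difference is easy to model. Here’s a small Python sketch (a toy model, not the real NTFS API) of why a forward-only change journal is more robust than a per-file Archive bit: nothing any other program does to a file’s attributes can make the journal “forget” a change.

```python
# Toy model of USN-style incremental backup selection. Each write bumps a
# volume-wide, forward-only counter (the "journal"); a backup just remembers
# the counter value it last saw and picks up every file changed after it.
class Volume:
    def __init__(self):
        self.usn = 0           # volume-wide update sequence number
        self.files = {}        # file name -> USN of its last change

    def write(self, name):
        self.usn += 1
        self.files[name] = self.usn

def incremental(volume, checkpoint):
    """Return (files changed since checkpoint, new checkpoint)."""
    changed = sorted(n for n, usn in volume.files.items() if usn > checkpoint)
    return changed, volume.usn

vol = Volume()
vol.write("a.txt")
vol.write("b.txt")
full, checkpoint = incremental(vol, 0)    # the "full" backup sees both files
vol.write("b.txt")
incr, _ = incremental(vol, checkpoint)    # the incremental sees only b.txt
```

    Contrast with the Archive bit: any app that clears the attribute silently removes the file from the next incremental, which is exactly the failure mode described above.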

    Now obviously, you cannot restore files directly into a read-only replicated folder, as the IO blocking driver won’t allow it. If you try with WSB, it will report the error “Access is denied”.


    If you are restoring a backed up read-only replica, you have two options:

    1. Convert that replicated folder back to read-write temporarily, restore the data and allow it to replicate out, then set the folder back to read-only.
    2. Restore the data to an alternate location and copy or move it into the read-write replicated folder.


    As for other randomness…

    Best Comeback Comment of the Year

    From our recent hiring post:

    Artem -

    Crap. You know, I've recently joined Microsoft here in Russia. And guess what? No free Starbucks!

    NedPyle -

    Congrats on the job. Sorry about the Starbucks. I'm sure there's a vodka dispenser joke here somewhere, but I'll leave that to you. :-P

    Artem -

    Yep, it's in the Samovar right in the lobby hall. The problem is, like in any big company, there's a policy for everything. And in today's tough economy, free vodka is reserved for customer meetings only. Usually a policy is not a big problem, but not this one. It is enforced by bear guards.


      For those of you who aren’t from the US, Ireland, Canada, and the Isle of Limes: this week marks the Halloween holiday, where kids dress up in costumes and run around getting free candy from neighbors. If you get stiffed on candy, it’s your responsibility to burn down that neighbor’s house. Wait, that’s just Detroit.

      It’s also an opportunity for people who were born without the shame gene to dress up their animals in cute outfits. Yay Internet! Here are some good ones for the dog lovers.





      Cat lovers can get bent.

      And finally, don’t forget to watch Night of the Living Dead, courtesy of the excellent and the public domain law. Still Romero’s best zombie movie ever. Which makes it the best zombie movie ever. You must do it with all lights off, preferably in a house in the woods.

      - Ned “ghouls night out” Pyle

    • Controlling USMT Desktop Shell Icon Behavior from XP (and how to create registry values out of thin air)

      People of Earth! It is I, Ned - your benevolent alien dictator - back again to talk to you about USMT.

      A few customers have asked me how to prevent XP Classic Start Menu desktop icons from migrating to Windows 7. Since these aren’t true shortcuts, you have to do some gyrations to block these icons from polluting your beautiful pristine new Windows 7 desktop experience.

      To give some context, you start with an XP computer. You’ve configured the computer with the “Classic Start menu” setting. Icons like “My Documents”, “My Network Places”, and “My Computer” appear on the desktop because of this configuration:


      The standard USMT migration, after loadstate without any customization, leaves your Windows 7 desktop looking like the following:


      Eww. There are more icons than before thanks to the new library code kicking in here.

      Note: Don’t confuse this with the XP Classic theme; that’s totally different, and I’ve discussed migration options for it here previously. Yeah, looking at you, Dave!

      So how can you block this default behavior? How can you choose the icons you want to appear on the desktop and ones that you do not?

      Blocking all shell desktop icon migration with a hammer

      The first part is easy. You block legacy Windows Explorer settings from migrating by:

      1. On a test source XP computer, create a new config.xml file with:

      Scanstate /genconfig:config.xml

      2. Change the Microsoft-Windows-explorer-DL and Microsoft-Windows-shell32-DL values in the config.xml from “yes” to “no”. This prevents the classic shell icons from migrating.

       <component displayname="Microsoft-Windows-explorer-DL" migrate="no" ID=""/>


      <component displayname="Microsoft-Windows-shell32-DL" migrate="no" ID=""/>

      3. Add the /config:config.xml argument to Scanstate when you perform your migration. Make sure you can access the config.xml file from the source computer.
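      If you generate config.xml files for many machines, step 2 is easy to automate. Here’s a hedged Python sketch (assuming the stock config.xml layout, with component elements carrying a displayname attribute as shown above):

```python
import xml.etree.ElementTree as ET

# Components whose legacy shell settings we want to block, per step 2 above.
BLOCKED = {"Microsoft-Windows-explorer-DL", "Microsoft-Windows-shell32-DL"}

def block_shell_components(config_xml: str) -> str:
    """Flip migrate="yes" to "no" on the blocked components; leave the rest alone."""
    root = ET.fromstring(config_xml)
    for comp in root.iter("component"):
        if comp.get("displayname") in BLOCKED:
            comp.set("migrate", "no")
    return ET.tostring(root, encoding="unicode")
```

      Feed it the text of your /genconfig output and write the result back to config.xml.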

      This setting prevents USMT from migrating the registry settings defined in your Microsoft-Windows-explorer-DL and Microsoft-Windows-shell32-DL manifests. If there are specific pieces of those manifests that you want, you can copy those include elements into a custom XML file and pass the XML filename with the /i argument on your Scanstate command line. For example, most folks only care about Explorer customizations like “show me the file extensions”. If you only want Folder Options settings, use this custom XML with the above config.xml:

      <migration urlid="">


        <!-- This component migrates just user Explorer settings tweaks like show file extensions-->

        <component type="System" context="User">


          <role role="Settings">




                  <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\* [*]</pattern>







      Just paste the XML snippet into a file named folderoptionstweaks.xml and pass the filename to Scanstate and Loadstate using the /i argument. Best of both worlds. You can season all this to taste of course.

      Putting some of those icons back by creating magical registry entries

      The problem with classic menu options is that they are stored as binary data in undocumented registry entries. By blocking the Explorer manifest, we are saying “don’t migrate any of that Explorer goo”. If we want a few of those icons to appear, though, we can’t use some big XP legacy blob. This means we need to create registry entries on the Windows 7 computer during USMT Loadstate that never existed before.

      In the example below, I add the “Documents” and “Computer” icons, as those are the icons most likely used by your users. Make sure you review my article on using Visual Studio to edit USMT XML files, unless you like suffering.

      1. Create a new custom XML called (for example) desktopshellicons.xml and paste in the following elements:

      <?xml version="1.0" encoding="UTF-8"?>
      <migration urlid="">
        <component context="User" type="Application">
          <displayName>DesktopShellIcons</displayName>
          <role role="Settings">
            <rules>
              <!-- Emulate the shell values by creating 2 reg keys and 4 reg values as 0-value DWORDs -->
              <addObjects>
                <object>
                  <location type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\NewStartPanel [{59031a47-3f72-44a7-89c5-5595fe6b30ee}]</location>
                  <attributes>dword</attributes>
                  <bytes>00000000</bytes>
                </object>
                <object>
                  <location type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\NewStartPanel [{20D04FE0-3AEA-1069-A2D8-08002B30309D}]</location>
                  <attributes>dword</attributes>
                  <bytes>00000000</bytes>
                </object>
                <object>
                  <location type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\ClassicStartMenu [{59031a47-3f72-44a7-89c5-5595fe6b30ee}]</location>
                  <attributes>dword</attributes>
                  <bytes>00000000</bytes>
                </object>
                <object>
                  <location type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\ClassicStartMenu [{20D04FE0-3AEA-1069-A2D8-08002B30309D}]</location>
                  <attributes>dword</attributes>
                  <bytes>00000000</bytes>
                </object>
              </addObjects>
              <!-- Include the emulated registry values so they are actually written during SCANSTATE and LOADSTATE -->
              <include>
                <objectSet>
                  <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\NewStartPanel [{59031a47-3f72-44a7-89c5-5595fe6b30ee}]</pattern>
                  <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\NewStartPanel [{20D04FE0-3AEA-1069-A2D8-08002B30309D}]</pattern>
                  <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\ClassicStartMenu [{59031a47-3f72-44a7-89c5-5595fe6b30ee}]</pattern>
                  <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\HideDesktopIcons\ClassicStartMenu [{20D04FE0-3AEA-1069-A2D8-08002B30309D}]</pattern>
                </objectSet>
              </include>
            </rules>
          </role>
        </component>
      </migration>

      The <addObjects> element injects the registry settings into memory during Scanstate. The <include> element commits the injected values into the store as logical registry settings. Then, the Loadstate command creates the registry settings in the real registry. Slick.

      This injects four registry values needed to enable "Documents" and "Computer" and then saves them during Loadstate. It does not affect actual folders and files, shortcuts, or other data in a user’s Desktop folder.

      2. Save this file. Make sure the file exists on both source and destination computers.

      3. Run Scanstate and Loadstate, passing the filename with the /i argument (/i:desktopshellicons.xml). Also include the config.xml you modified earlier in this article.
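      To demystify what those four values do: they follow the HideDesktopIcons convention, where a DWORD named after an icon’s GUID hides that icon when set to 1 and shows it when set to 0. A toy Python model (the GUIDs are the ones from the XML above):

```python
# Toy model of the HideDesktopIcons registry convention:
# DWORD 0 = show the desktop icon, DWORD 1 = hide it.
DOCUMENTS = "{59031a47-3f72-44a7-89c5-5595fe6b30ee}"   # "Documents" per the article
COMPUTER = "{20D04FE0-3AEA-1069-A2D8-08002B30309D}"    # "Computer"

def visible_icons(hide_flags: dict) -> set:
    """Return the GUIDs of icons whose hide flag is 0."""
    return {guid for guid, hide in hide_flags.items() if hide == 0}

# The four emulated values write zeros, so both icons show after Loadstate:
injected = {DOCUMENTS: 0, COMPUTER: 0}
```

      That’s the whole trick: injecting zeros forces the icons on, without touching anything else on the user’s desktop.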


      Hopefully you’ve learned a few useful things here:

      1. What a config.xml manifest really does, and which manifest to examine.
      2. How to deal with XP’s legacy shell cruft during migration to Windows 7.
      3. How to create new registry values as part of a migration when they did not exist on the old source computer.
      4. How I crash landed in Roswell in ’47.

      Until next time,

      Ned “image” Pyle

    • Two more positions opened

      If you live in the US and want to come work with us here in Directory Services support, we have opened two more positions:

      If you want experience working on the largest, most complex environments in the world, this is the place to be. You will never be bored and you will never stop learning.

      - Ned “and we get free Starbucks” Pyle

    • Friday Mail Sack: Cluedo Edition

      Hello there folks, it's Ned. I’ve been out of pocket for a few weeks and I am moving to a new role here, plus Scott and Jonathan are busy as #$%#^& too, so that all adds up to the blog suffering a bit and the mail sack being pushed a few times. Never fear, we’re back with some goodness and frankness. Heck, Jonathan answered a bunch of these rather than sipping cognac while wearing a smoking jacket, which is his Friday routine. Today we talk certs, group policy, backups, PowerShell, passwords, Uphclean, RODC+FRS+SYSVOL+DFSR, and blog editing. There were a lot of questions in the past few weeks that required some interesting investigations on our part – keep them coming.

      Let us adjourn to the conservatory.


      Do you know of a way to set user passwords to expire after 30 days of inactivity?


      There is no automatic method for this, but with a bit of scripting it would be pretty trivial to implement. Run this sample command as an admin user (in your test environment first!!!):

      Dsquery.exe user -inactive 4 | dsmod.exe user -mustchpwd yes

      Dsquery will find all users in the domain that have not logged on for 4 weeks or longer, then pipe that list of DNs into the Dsmod command, which sets the “user must change password at next logon” (pwdLastSet) flag on each of those users.


      You can also use AD PowerShell in Win2008 R2/Windows 7 RSAT to do this.

      search-adaccount -accountinactive -timespan 30 -usersonly | set-aduser -changepasswordatlogon $true

      The PowerShell method works a little differently: Dsquery only considers inactive accounts that have logged on at least once, while Search-ADAccount also returns users that have never logged on. This means it will find a few “users” whose password-change flags cannot usually be set, such as Guest, KRBTGT, and the TDO accounts that actually represent trusts between domains. If someone wants to post a slick example of bypassing those, please send it along (as the clock ran down here).
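      Since the clock ran down, here’s one hedged sketch of that filtering logic (plain Python over account names rather than real AD cmdlets, so treat it as pseudocode for whatever pipeline you build): skip the well-known built-ins, and skip anything ending in “$”, which covers computer and interdomain trust accounts.

```python
# Accounts whose "must change password" flag should not be touched.
SKIP_NAMES = {"guest", "krbtgt"}

def can_flag_for_password_change(sam_account_name: str) -> bool:
    name = sam_account_name.lower()
    if name in SKIP_NAMES:
        return False
    if name.endswith("$"):   # computer accounts and TDOs (trusts) end in $
        return False
    return True

stale_accounts = ["bob", "Guest", "krbtgt", "CONTOSO$", "alice"]
flaggable = [n for n in stale_accounts if can_flag_for_password_change(n)]
```

      In PowerShell you would put the equivalent Where-Object between Search-ADAccount and Set-ADUser.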


      As stated here:

      "You are not required to run the ntdsutil snapshot operation to use Dsamain.exe. You can instead use a backup of the AD DS or AD LDS database or another domain controller or AD LDS server. The ntdsutil snapshot operation simply provides a convenient data input for Dsamain.exe."

      I should be able to mount a snapshot and use Dsamain to read AD content with only a full backup of AD. But I can't. Using Ntdsutil I can list and mount snapshots from AD, but I can't run "dsamain -dbpath full_path_to_ntds.dit".


      You have to extract the .DIT file from the backup.

      1. First run wbadmin get versions. In the output, locate your most recent backup and note the Version identifier:

      wbadmin get versions

      2. Extract the Active Directory files from the backup. Run:

       wbadmin start recovery -version:<version identifier> -itemtype:app -items:AD -recoverytarget:<drive>

      3. A folder called Active Directory will be created on the recovery drive. Contained therein you'll find the NTDS.DIT file. To mount it, run:

      dsamain -dbpath <recovery folder>\ntds.dit -ldapPort 4321

      4. The .DIT file will be mounted, and you can use LDP or ADSIEDIT to connect to the directory on port 4321 and browse it.


      I have run into the issue described in KB976922, where “Run only specified Windows Applications” or “Run only allowed Windows applications” is blank when you mix Windows XP/Windows Server 2003 and Windows 7/Windows Server 2008 R2 computers. Some forum posts on TechNet state that this was being fixed in Win7 and Win2008 R2, which appears to be untrue. Is this being fixed in SP1 or later?


      It’s still broken in Win7/R2 and still broken in SP1. It’s quite likely to remain broken forever, as there are so many workarounds and the technology in question actually dates back to before Group Policy – it was part of Windows 95 (!!!) system policies. Using this policy isn’t very safe; it’s often something that was configured many years ago and lives on through inertia.

      Windows 7 and Windows Server 2008 R2 introduced AppLocker to:

      • Help prevent malicious software (malware) and unsupported applications from affecting computers in your environment.
      • Prevent users from installing and using unauthorized applications.
      • Implement application control policy to satisfy security policy or compliance requirements in your organization.

      Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 all support Software Restriction Policies (SAFER), which also control applications, similar to AppLocker. Both AppLocker and SAFER replace that legacy policy setting with something less easily bypassed and less limited.

      For more information about AppLocker, please review:

      For more information about SAFER, please review:

      I updated the KB to reflect all this too.


      Is it possible to store computer certificates in a Trusted Platform Module (TPM) in Windows 7?


      The default Windows Key Storage Provider (KSP) does not use a TPM to store private keys. That doesn't mean that some third party can't provide a KSP that implements the Trusted Computer Group (TCG) 1.2 standard to interact with a TPM and use it to store private keys. It just means that Windows 7 doesn't have such a KSP by default.


      It appears that there is a new version of Uphclean available. What’s new about this version, and is it safe to run on Win2003?


      The new 1.6 version only fixes a security vulnerability, and it is definitely recommended if you are using older versions. It has no other announced functionality changes. As Robin has said previously, Uphclean is otherwise deceased, and the 2.0 beta will not be maintained or updated. Uphclean has never been an officially supported MS tool, so use is always at your own risk.


      My RODCs are not replicating SYSVOL even though there are multiple inbound AD connections showing when DSSITE.MSC is pointed to an affected RODC. Examining the DFSR event log shows:

      Log Name: DFS Replication
      Source: DFSR
      Date: 5/20/2009 10:54:56 AM
      Event ID: 6804
      Task Category: None
      Level: Warning
      Keywords: Classic
      User: N/A
      The DFS Replication service has detected that no connections are configured for replication group Domain System Volume. No data is being replicated for this replication group.

      Newly promoted RODCs work fine. Demoting and re-promoting an affected RODC fixes the issue.


      Somebody has deleted the automatically generated "RODC Connection (FRS)" objects for these affected RODCs.

      • This may have been done because the customer saw that the connections were named "FRS" and assumed that, with DFSR replicating SYSVOL, they were no longer required.
      • Or they may have created manual connection objects per their own processes and deleted the old ones.

      RODCs require a special flag on their connection objects for SYSVOL replication to work. If not present, SYSVOL will not work for FRS or DFSR. To fix these servers:

      1. Log on to a writable DC in the affected forest as an Enterprise Admin.

      2. Run DSSITE.MSC and navigate to an affected RODC within its site, down to the NTDS Settings object. There may be no connections listed here, or there may be manually created connections.


      3. Create a new connection object. Ideally, it will be named the same as the default (ex: "RODC Connection (FRS)").


      4. Edit that connection in ADSIEDIT.MSC or with DSSITE.MSC attribute editor tab. Navigate to the "Options" attribute and add the value of "0x40".



      5. Create more connections using these steps as necessary.

      6. Force AD replication outbound from this DC to the RODCs, or wait for convergence. When the DFSR service on the RODC sees these connections, SYSVOL will begin replicating again.

      More info about this 0x40 flag:

      RT (NTDSCONN_OPT_RODC_TOPOLOGY, 0x00000040): The NTDSCONN_OPT_RODC_TOPOLOGY bit in the options attribute indicates whether the connection can be used for DRS replication [MS-DRSR]. When set, the connection should be ignored by DRS replication and used only by FRS replication.

      Despite this article mentioning only FRS, the 0x40 value is required for both DFSR and FRS. Separate connections for AD replication are still required and will exist on the RODC locally.
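      The options attribute is just a bitmask, so step 4’s edit is a bitwise OR. A quick sketch of the check-and-fix arithmetic (hedged: the real edit happens through ADSIEDIT or a directory API; this just shows the bit logic):

```python
# NTDSCONN_OPT_RODC_TOPOLOGY must be set on an RODC's connection object
# for SYSVOL replication (FRS or DFSR) to work.
NTDSCONN_OPT_RODC_TOPOLOGY = 0x40

def has_rodc_topology(options: int) -> bool:
    """True if the connection carries the RODC topology flag."""
    return bool(options & NTDSCONN_OPT_RODC_TOPOLOGY)

def with_rodc_topology(options: int) -> int:
    """Add the flag without disturbing any other option bits."""
    return options | NTDSCONN_OPT_RODC_TOPOLOGY
```

      ORing the flag in (rather than assigning 0x40 outright) preserves any other option bits already set on the connection.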


      What editor do you use to update and maintain this blog?


      Windows Live Writer 2011 (here). Before this version I was hesitant to recommend it, as the older flavors had idiosyncrasies and were irritating. WLW 2011 is a joy; I highly recommend it. The price is right too: free, with no adware. And it makes adding content easy…

      Like Peter Elson artwork.

      Or the complete 5 minutes and 36 seconds of Lando Calrissian dialog

      Or Ned

      Or ovine-related emoticons.


      That’s all for this week.

      - Ned “Colonel Mustard” Pyle and Jonathan “Professor Plum” Stephens