
January, 2008

  • New DFSR Data Restoration Script

    Hi, Ned here. Just a quick heads up - there is a new DFSR data recovery script posted below. This allows you to restore data from the ConflictAndDeleted or PreExisting folders within DFSR, primarily during disaster recovery. As always, we prefer you use your backup system to do this, as the script is 'at your own risk' and unsupported.

    Updated 10/15/10

    The latest script is now hosted on Code Gallery:

    Update 6/12/14

    Well that gallery is kaput. Now hosted from here.

    This old script is really gross; I recommend using our new Windows PowerShell cmdlet Restore-DfsrPreservedFiles instead. You can restore from a Windows 8.1 client if you install RSAT, or from a WS2012 R2 server. You can either run it locally or just map a drive to \\dfsrserver\h$ or whatever the root drive is, then restore.

    Take a look at the Restore-DfsrPreservedFiles documentation for steps on using this cmdlet.


    Remember, this script must be run from a CMD prompt using cscript. Don't just double-click it.


    The script also requires you to edit three paths (your source files, a new destination path, and the XML manifest you are calling). If you fail to edit those, the script will exit with an error:

    ' Section must be operator-edited to provide valid paths

    ' Change path to specify location of XML Manifest
    ' Example 1: "C:\Data\DfsrPrivate\ConflictAndDeletedManifest.xml"
    ' Example 2: "C:\Data\DfsrPrivate\preexistingManifest.xml"


    ' Change path to specify location of source files

    ' Example 1: "C:\data\DfsrPrivate\ConflictAndDeleted"
    ' Example 2: "C:\data\DfsrPrivate\preexisting"

    SourceFolder = ("C:\your_replicated_folder\DfsrPrivate\ConflictAndDeleted")

    ' Change path to specify output folder

    OutputFolder = ("c:\your_dfsr_repair_tree")
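    For orientation, here is a minimal Python sketch of the lookup the script performs: it maps the mangled file names in ConflictAndDeleted back to their original paths using the XML manifest. The <Resource>/<Path>/<NewName> element names are assumptions based on typical DFSR manifests, so check them against your own ConflictAndDeletedManifest.xml before relying on this.

```python
# Sketch: map preserved (mangled) names in ConflictAndDeleted back to their
# original paths by reading the DFSR XML manifest. The element names used
# here (Resource, Path, NewName) are assumptions -- verify against your file.
import xml.etree.ElementTree as ET

def read_manifest(xml_text):
    """Return {preserved_file_name: original_path} from a DFSR manifest."""
    root = ET.fromstring(xml_text)
    mapping = {}
    for resource in root.iter("Resource"):
        path = resource.findtext("Path")
        new_name = resource.findtext("NewName")
        if path and new_name:
            mapping[new_name] = path
    return mapping

# Tiny hand-made manifest for illustration only:
sample = """<ConflictAndDeletedManifest>
  <Resource>
    <Path>\\\\.\\C:\\Data\\report.docx</Path>
    <NewName>report-{GUID}-v42.docx</NewName>
  </Resource>
</ConflictAndDeletedManifest>"""

print(read_manifest(sample))
```

    With a real manifest you would read the file from DfsrPrivate, then copy each preserved file to your output tree under its original name.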


    - Ned Pyle

  • Understanding “Read Only Domain Controller” authentication

    Hello there. Bob Drake here to discuss how Windows Server 2008 “Read Only Domain Controllers” (RODCs) authenticate users differently from the way Windows Server 2003 and Windows Server 2008 standard domain controllers do. The “Read Only Domain Controller” is new to Windows Server 2008 and allows for the installation of a domain controller in common scenarios where users authenticate over a wide area network (WAN), or where there is a physical security concern for the domain controller, such as installations at branch office locations. Another new feature of Windows Server 2008 RODCs is the “Password Replication Policy,” whose configuration determines how an RODC authenticates a user.

    To understand the authentication difference between a standard domain controller and an RODC, we need to review the “How Interactive Logon Works” and “Kerberos Authentication” TechNet articles. In the Domain Login section of the How Interactive Logon Works article, a user’s credentials are received by Winlogon and passed to the LSA (Local Security Authority), which negotiates Kerberos and contacts the domain controller. The domain controller then returns the logon success to the local computer’s LSA, which generates the user’s access token. The Kerberos authentication is seen in the following diagram (taken from the Kerberos Authentication article):


    To see the authentication on the wire, we would need to install a network capture application such as Netmon 3.1 (or Wireshark, Ethereal, Packetyzer). In the following network trace, we see a client machine authenticate to a domain controller and be granted access with the “KRB_AS_REP” and “KRB_TGS_REP”:


    Now let’s take a look at the “Password Replication Policies” and how they affect the RODC authentication behavior. With the installation of an RODC, there are four new attributes and two built-in groups to support RODC operations:

    • msDS-Reveal-OnDemandGroup. This attribute points to the distinguished name (DN) of the Allowed List. The credentials of the members of the Allowed List are permitted to replicate to the RODC.
    • msDS-NeverRevealGroup. This attribute points to the distinguished names of security principals that are denied replication to the RODC. This has no impact on the ability of these security principals to authenticate using the RODC. The RODC never caches the credentials of the members of the Denied List. A default list of security principals whose credentials are denied replication to the RODC is provided. This helps ensure that RODCs are secure by default.
    • msDS-RevealedList. This attribute is a list of security principals whose passwords have ever been replicated to the RODC.
    • msDS-AuthenticatedToAccountList. This attribute contains a list of security principals in the local domain that have authenticated to the RODC. The purpose of the attribute is to help an administrator determine which computers and users are using the RODC for logon. This enables the administrator to refine the Password Replication Policy for the RODC.

    • Allowed RODC Password Replication Group. This group is added to the msDS-Reveal-OnDemandGroup.
    • Denied RODC Password Replication Group. This group is added to the msDS-NeverRevealGroup.

    Note: The “Allowed RODC Password Replication Group” has no members by default, and the “Denied RODC Password Replication Group” contains all the ‘VIP’ accounts (Enterprise Administrators, Cert Publishers, Schema Administrators, etc.). As with most things, Deny always trumps Allow.
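    The Deny-trumps-Allow rule can be sketched in a few lines. This is an illustration of the documented behavior, not the actual RODC implementation:

```python
# Sketch: evaluating whether an account's password may be cached on an RODC.
# Models the documented rule that membership in the denied list always wins,
# regardless of membership in the allowed list. Illustration only.
def password_cacheable(user_groups, allowed, denied):
    """Return True if the user's password may be cached on the RODC."""
    groups = set(user_groups)
    if groups & set(denied):           # Deny wins, regardless of Allow
        return False
    return bool(groups & set(allowed))

allowed = {"Allowed RODC Password Replication Group"}
denied = {"Denied RODC Password Replication Group", "Enterprise Admins"}

# A branch-office user explicitly allowed:
print(password_cacheable({"Allowed RODC Password Replication Group"}, allowed, denied))  # True
# A VIP who is also in the allowed group is still denied:
print(password_cacheable({"Enterprise Admins", "Allowed RODC Password Replication Group"}, allowed, denied))  # False
```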

    Using “Repadmin.exe” (built into Windows Server 2008), an administrator can view and modify the Password Replication Policy. To view the current PRP for a specified user:

    Repadmin /prp view <RODC> {<List_Name>|<User>}

    The following shows the settings for the groups on the RODC through several commands:


    Awesome information here! We can see who is on the allowed list (msDS-Reveal-OnDemandGroup), who is on the deny list (msDS-NeverRevealGroup), who is actually revealed (msDS-RevealedList), and who has actually authenticated to the RODC (msDS-AuthenticatedToAccountList).

    The configuration of a Password Replication Policy is pretty straightforward. Open the Active Directory Users and Computers snap-in and select the RODC in the Domain Controllers organizational unit. On the “Password Replication Policy” tab, there are the two groups: “Allowed RODC Password Replication Group” and “Denied RODC Password Replication Group”. A user can be added to either of the desired groups.

    Another really cool feature is the “Prepopulate the password cache for an RODC” button. This button (pictured) allows an administrator to pre-add users that will be authenticating to the RODC.


    An administrator could also use the “Repadmin” utility to populate the password cache with the following command:

    Repadmin /rodcpwdrepl [DSA_LIST] <Hub DC> <User1 Distinguished Name> [<Computer1 Distinguished Name> <User2 Distinguished Name>…]

    The following shows the user “Ned Pyle” being added to the password cache using Repadmin:


    So how does this affect the RODC? When a user authenticates to an RODC, a check is performed to see if the password is cached. If the password is cached, the RODC authenticates the user account locally. If the user’s password is not cached, the RODC forwards the authentication request to a writable Windows Server 2008 domain controller, which in turn authenticates the account and passes the authenticated request back to the RODC. Once the user account is authenticated, the RODC makes a separate request for unidirectional replication of the user’s password, provided the account has been configured to allow replication.
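    The flow just described can be sketched as follows. The function and parameter names are hypothetical; this only models the decision logic:

```python
# Sketch of the RODC authentication flow: authenticate locally when the
# password is cached, otherwise forward to a writable DC, then request
# one-way replication of the password if the PRP allows caching.
# All names here are hypothetical; this models the logic, not the protocol.
def rodc_authenticate(user, cache, prp_allows_caching, forward_to_writable_dc):
    if user in cache:
        return ("local", user)                 # handled by the RODC itself
    result = forward_to_writable_dc(user)      # chained to a writable 2008 DC
    if result == "ok" and prp_allows_caching(user):
        cache.add(user)                        # unidirectional replication request
    return ("forwarded", user)

cache = set()
writable_dc = lambda user: "ok"                # pretend the hub DC authenticates
prp = lambda user: user == "branch-user"       # PRP allows caching this user

print(rodc_authenticate("branch-user", cache, prp, writable_dc))  # ('forwarded', 'branch-user')
print(rodc_authenticate("branch-user", cache, prp, writable_dc))  # ('local', 'branch-user')
```

    Note how the second logon is served locally, which is exactly why the extra replication traffic only appears in the first trace below.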

    This finally brings us to seeing the difference in authentication. For the following NetMon 3.1 trace, I have configured a user account whose password has been denied replication to the RODC. The user authenticates to the RODC (2k3DOM2k8DC2) and the RODC forwards the request to the writable domain controller (2k3DOM2k8DC). We see the extra traffic since the user’s password has not been cached:


    For the last trace I have allowed the user password to be cached by configuring the Password Replication Policy. The user authentication is the same as above, with the exception of what the RODC does after authenticating the user. Now we see the RODC make the request for the user’s password to be replicated; in subsequent logons the password replication request would not be seen, since the password has been cached:


    Note: If the Wide Area Network (WAN) is down and the user account and password have NOT been cached, the user account will fail to authenticate.

    To wrap it up, when a user account is not cached, the RODC forwards the authentication to a writable domain controller, which does the authentication. If the user’s password is allowed to be cached, the RODC will pull it through a replication request. Once the user has been authenticated and their password has been cached, any subsequent logon can be handled by the RODC alone. Some people may see an increase in Wide Area Network (WAN) traffic with the introduction of an RODC, but after user passwords are cached there should be a significant reduction in traffic and a more secure environment. In my next blog post I will discuss how account lockout thresholds affect this process and what administrators might run into with them.

    -Bob Drake

  • Migrating your DFS Namespaces in three (sorta) easy steps

    Hi, Dave here. Today, I will cover the process by which domain DFS namespaces can be migrated from one domain to another, important considerations both during and after the migration, and some helpful tips along the way.

    My post assumes that you have some working knowledge of DFS and related terminology. Don’t worry if you are new to DFS… we have excellent documentation over on the Windows 2003 Server Technical Reference website. I’ll wait right here until you get back.

    To begin with, we know that DFS is used to build a unified namespace for files scattered throughout your Windows environment. This way, users don’t need to know which servers contain their data or the names of any particular share. They only need to know the root of your DFS namespace to begin locating their files, and they get silently referred to the required file server. This is all well and good, but what happens when you move your file shares to a new domain or environment and want to bring your elaborate DFS namespace along with the data? Or what if you simply wish to make another copy of an existing DFS namespace? The answer is the DFSUTIL.exe support tool.

    DFSUTIL is a command-line tool provided within the Windows Server 2003 Support Tools package. With it, you can export a specified DFS namespace to an XML text file and later import this information back into a new DFS root.

    Before using these commands, ensure that you have the latest version of the Support Tools installed on your system where you will be exporting/importing the DFS configuration. Please note the process by which the data or file servers are migrated between the environments is beyond the scope of this blog.

    It is also recommended to create a full system-state backup of your domain controller(s). This way, any accidental deletions or overwrites of your DFS configuration can be easily recovered.

    The steps below detail the process by which a DFS root called “public” was migrated from the domain “” to a new “public” root in the domain “”. A two-way trust exists between these domains and network communications between them is fully functional. Although this environment is simplistic, the process is applicable to namespaces with hundreds of folders and targets.

    Here is a screenshot of the Public namespace within


    Step1 - Exporting a Namespace:

    The command to export the namespace is “dfsutil /root:\\\rootname /export:exportedroot.txt /verbose”

    As you can see, I exported my domain DFS root named “public” to a file named “publicroot.txt”. My root happens to contain 3 folders, named “accounting”, “utilities”, and “documentation”. “Accounting” has two folder targets, and “utilities” and “documentation” each have a single folder target. All target file servers, FS1 and FS2, are in the domain.

    The publicroot.txt file contains:

    <Root Name="\\CONTOSO\Public" State="1" Timeout="300" Attributes="64" >
        <Target Server="CONTOSOROOT1" Folder="Public" State="2" />
        <Target Server="CONTOSOROOT2" Folder="Public" State="2" />

        <Link Name="Accounting" State="1" Timeout="1800" >
            <Target Server="fs1" Folder="accounting" State="2" />
            <Target Server="fs2" Folder="accounting" State="2" />
        </Link>

        <Link Name="utilities" State="1" Timeout="1800" >
            <Target Server="fs1" Folder="utilities" State="2" />
        </Link>

        <Link Name="documentation" State="1" Timeout="1800" >
            <Target Server="fs2" Folder="documentation" State="2" />
        </Link>
    </Root>

    There, that was easy enough! Now that the root has been exported, some edits to this data are required before import.

    Step 2 – Editing the configuration file

    First off, you see that the “root name” value is “\\CONTOSO\Public”. Because the root configuration needs to be imported into the Fabrikam domain, this must be manually modified. The modified portion of the namespace file will look like this:

    <?xml version="1.0"?>

    <Root Name="\\FABRIKAM\Public" State="1" Timeout="300" Attributes="64" >

    The remainder of the file will remain untouched, for reasons which will be discussed below. This file should be saved (I named it publicrootmodified.txt).
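    If you have many roots to migrate, the Step 2 edit can be scripted. This is a minimal sketch, using a plain text substitution on the Root Name attribute since the export is edited as a text file and everything else is left alone:

```python
# Sketch: retarget an exported DFS configuration file to a new domain by
# rewriting only the Root Name attribute, leaving the rest untouched.
# A plain string replace is used because the export is edited as text.
def retarget_root(config_text, old_root, new_root):
    return config_text.replace('Root Name="%s"' % old_root,
                               'Root Name="%s"' % new_root)

exported = '<Root Name="\\\\CONTOSO\\Public" State="1" Timeout="300" Attributes="64" >'
print(retarget_root(exported, "\\\\CONTOSO\\Public", "\\\\FABRIKAM\\Public"))
```

    In practice you would read the exported file, run the substitution, and write the result out as the modified file to be imported in Step 3.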

    Step 3 – Create the new namespace in the new environment/domain

    Before a DFS configuration file can be imported, the target namespace must be manually created—DFSUTIL won’t create the root for you.

    The command to import the configuration is as follows:

    dfsutil /root:\\\rootname /import:publicrootmodified.txt /set /verbose

    NOTE! The import process will overwrite any DFS configurations in the target namespace. Please ensure you enter the path of a root you are prepared to replace.

    If you try an import without first creating the DFS root, you will get the following error:

    System error 1168 has occurred.
    Element not found.

    Likewise, attempting to import the configuration file before changing the “Root Name” value within it to match the namespace will result in the error:

    System error 2 has occurred.
    The system cannot find the file specified.

    A successful import will appear as follows:


    The new DFS namespace in the fabrikam domain:


    Notice how the root targets “CONTOSOROOT1” and “CONTOSOROOT2” don’t show up under the “Namespace Servers” tab. This is because DFSUTIL ignores any root targets listed in the configuration file. You have the option to configure additional namespace servers in the DFS Management tool either before or after you import the configuration file. In my example, I had a single namespace server in the Fabrikam domain called “fabrikamroot1”.

    If you take a peek at the dfsutil.exe command syntax, you may notice there is also a “merge” option in addition to “set”. Merge can only be utilized if your configuration XML specifies folders and targets which are not already present in the namespace. This may be useful if you want to incrementally import portions of the namespace, but requires careful manipulation of the XML configuration files and isn’t generally recommended.

    Let’s take a few moments to review what we have done. First, we exported the DFS configuration information, changed the value for the new namespace/root name, and then imported it into a new namespace. Because of the domain trust, clients in both domains can now seamlessly access any of the 3 folders via either the “\\\public” or “\\\public” namespaces—both namespaces will issue referrals to the folder targets, which still reside in the domain.

    It is likely that your migration requires making the shared data part of the other domain. Though beyond the scope of this blog post, this may entail simply joining file servers to the other domain, copying the data to a completely new server in the other domain, consolidating many target folders onto a single file server in the other domain, or using a domain migration tool such as the Active Directory Migration Tool (ADMT) to migrate the server(s).

    Lastly, here are a number of considerations to ensure this process goes smoothly:

    • Don’t enable FRS or DFSR replication until the folder targets all exist within the same domain or forest, otherwise it won’t work.
    • Using dfsutil.exe to migrate namespaces doesn’t preserve FRS or DFSR replication configurations.
    • If after importing the namespace clients have problems accessing the namespaces or any of the folders, check to see if they can directly access the root and folder targets via \\server\share to determine if the namespace or something else is at fault.
    • Access to resources between the domains is highly dependent on the health of the domain trust--verify the trust’s status and ensure name resolution is functioning correctly.
    • If any folder (link) target server names or share names are changed, the migrated namespaces will need to be manually updated to reflect this. This is most likely to happen if DFS is configured to use DNS names rather than NetBIOS names and file servers are moved to the other domain.
    • DFS configuration changes may not be immediate. Clients cache referral information which won’t be updated until it expires. Additionally, Active Directory replication latencies may cause namespace servers to detect changes only after they poll domain controllers for changes (every 60 minutes by default).

    Also note that the steps detailed here can be applied to standalone DFS Namespace servers. The process of exporting, modifying, and then importing the namespace will be much the same.

    And there you have it—a complete migration of DFS configuration in 3 easy steps. These steps should prove useful when you wish to rename a namespace, make a copy of the namespace for use in a test environment, or back up the namespace in the event that it is accidentally deleted or modified.

    - David Fisher

  • Security Policy Settings and User Account Control

    Hi, Mike here. This post was originally published in the Group Policy Team blog in September 2006—anticipating the launch of Windows Vista. Here it is again—refreshed—for the upcoming launch of Windows Server 2008.

    User Account Control in Windows Server 2008 and Windows Vista requires that all users run in a standard user mode; its purpose is to prevent users from changing critical operating system files or exposing their computers and networks to viruses and malware. Windows displays an authorization dialog box when a task requires administrative privileges, such as opening the Microsoft Management Console (MMC). You, the administrator, provide administrative credentials to “elevate” your privileges for the specific process. (You can read more about User Account Control on the Microsoft Windows Vista TechNet site.) Windows Server 2008 and Windows Vista provide you with nine security policy settings to control how User Account Control behaves. You can locate these security policy settings in the Local Group Policy Editor under Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options, or under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options when editing domain-based GPOs using the GPMC included in Windows Server 2008 or the Remote Server Administration Tools on Windows Vista Service Pack 1.

    Figure 1- UAC security policy parent node in a domain-based GPO.

    These security policy settings apply only to computers running Windows Server 2008 or Windows Vista RTM or later. These security policy settings can co-exist in GPOs applicable to clients earlier than Windows Vista. Operating systems other than Windows Server 2008 and Windows Vista ignore the settings.

    Figure 2- UAC Computer security policy settings

    Before I begin, I want to tell you about another feature with security policy settings. This valuable feature is a little hard to find. Each security policy setting has “explain” text similar to registry-based policy settings. Simply double-click on the security policy setting and then click on the Explain tab to view detailed information about the security policy setting; enabled and disabled behavior; and default values. Now, let us move on to the User Account Control policy settings.


    Figure 3 - Explain text in Security policy settings

    Windows Vista provides nine security policy settings to control the behavior of User Account Control. You can enable these security policy settings in Local Computer and Domain-based Group Policy objects. Each security policy setting starts with “User Account Control” and then the actual name of the policy settings. The Group Policy Object Editor lists security policy settings in alphabetical order, so just scroll to the end.

    The first of these policies controls Admin Approval Mode for the built-in Administrator account. When enabled, Admin Approval Mode is on for the built-in Administrator account, and Windows prompts the administrator for any operation requiring an elevation of privilege. The prompt gives the administrator the choice to permit or deny the request for elevation. When disabled, Admin Approval Mode is off: the built-in Administrator account runs all applications using full administrative privileges and is not prompted for elevation.

    The next two security policy settings control the type of prompt User Account Control uses. These security policy settings are Behavior of the elevation prompt for administrators in Admin Approval Mode and Behavior of the elevation prompt for standard users. The Behavior of the elevation prompt for administrators in Admin Approval Mode security policy setting provides three choices:

    • Prompt for Consent –provides a dialog box asking you to either Permit or Deny the request for elevation.
    • Prompt for Credentials –provides an authentication dialog box asking you to provide administrative credentials to permit the request for elevation.
    • Elevate without Prompting –automatically permits the request for elevation without prompting the administrator.

    The Behavior of the elevation prompt for standard users security policy setting provides two choices: Prompt for Credentials, and Automatically deny elevation requests, in which Windows denies all requests for elevation and displays an Access Denied error message.

    When enabled, the Detect application installation and prompt for elevation security policy setting causes Windows to heuristically detect installation packages that require an elevation of privilege and to trigger a User Account Control prompt for elevation. Disabling this security policy setting disables the detection process.

    Enabling the security policy setting Only elevate executables that are signed and validated causes Windows Vista to validate an executable's Public Key Infrastructure (PKI) certificate chain before permitting it to run. Disabling this security policy setting does not enforce validation of the PKI certificate chain.

    The next security policy setting listed is Only elevate UIAccess applications that are installed in secure locations. UIAccess applications are applications designed specifically to assist with user accessibility, and they typically send information to other applications; the on-screen keyboard is an example of a UIAccess application. When enabled, Windows requires UIAccess applications to run from a secure location. These secure locations include:

    • …\Program Files\... including all sub folders.
    • …\Windows\System32\...
    • …\Program Files (x86)\... including all sub folders (64-bit versions).

    Your desktop appearance changes when Windows Vista prompts you for elevation. Windows displays a gradient shade of gray over your existing desktop and then you see the prompt for elevation, in color. Actually, Windows switches your desktop to a secure desktop before prompting you for elevation. This describes the enabled behavior of the security policy setting Switch to the secure desktop when prompting for elevation. When disabled, Windows prompts for elevation on your existing desktop.

    Some applications read or write registry information or files to locations that Windows protects from normal users. This usually requires the user to run the application as an administrator until an application upgrade becomes available. Windows Vista helps by providing virtualized file and registry writes to areas previously protected from normal users. This feature redirects writes destined for protected locations to locations where users have write access. The security policy setting Virtualize file and registry write failures to per-user locations provides this behavior, when enabled. When you disable this security policy setting, applications attempting to write in protected locations fail as with earlier versions of Windows.

    The last security policy setting controlling User Account Control behavior is probably the most important one. Run all users, including administrators, as standard users is a security policy setting that affects all other User Account Control security policy settings. Enabling this policy turns on Admin Approval Mode and sets all other User Account Control policies to their default values. Disabling this policy turns off Admin Approval Mode and disables all related User Account Control security policy settings. Lastly, changing this security policy setting requires a reboot.
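    For reference, each of these settings is backed by a registry value under HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System. The mapping below reflects commonly documented value names; verify it against your own build before relying on it:

```python
# Mapping of the nine UAC security policy settings discussed above to the
# registry values commonly documented as backing them, under
# HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System.
# This is a reference sketch -- verify against your own build.
UAC_POLICY_REGISTRY = {
    "Admin Approval Mode for the Built-in Administrator account": "FilterAdministratorToken",
    "Behavior of the elevation prompt for administrators in Admin Approval Mode": "ConsentPromptBehaviorAdmin",
    "Behavior of the elevation prompt for standard users": "ConsentPromptBehaviorUser",
    "Detect application installations and prompt for elevation": "EnableInstallerDetection",
    "Only elevate executables that are signed and validated": "ValidateAdminCodeSignatures",
    "Only elevate UIAccess applications that are installed in secure locations": "EnableSecureUIAPaths",
    "Switch to the secure desktop when prompting for elevation": "PromptOnSecureDesktop",
    "Virtualize file and registry write failures to per-user locations": "EnableVirtualization",
    "Run all users, including administrators, as standard users": "EnableLUA",
}

print(len(UAC_POLICY_REGISTRY))  # 9
```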

    So, when you are evaluating your security policy during your Windows Server 2008 or Windows Vista deployment, look at the explain text for each security policy setting. Make sure you fully understand its impact before changing a security policy setting. Then, do not forget to include User Account Control policy settings in your security policy. These security policy settings can help you keep your computer, network, and data safe and secure.

    - Mike Stephens

  • Replacing an Expired DRA Certificate

    Hi, Tom here from the Directory Services team. One of the most common EFS issues we see is for an expired Domain Data Recovery Agent (DRA) certificate. It is also one of the easiest things to resolve. You may have seen the error Recovery Policy for this system contains an invalid recovery certificate or ERROR_BAD_RECOVERY_POLICY.


    Since you can’t extend the life of a Recovery Agent certificate you will need to remove the expired ones first. You start by opening up the Default Domain Policy and navigating to Encrypting File System. On the right side you will see the expired certificate. Right click on the expired certificate and select All Tasks | Export, and export the file to a .CER format. Although this certificate has expired it can still be used to decrypt files that have already been encrypted with this Recovery Certificate specified. (The original DRA private key resides in the Administrator profile of the first domain controller in the domain. If this profile or domain controller no longer exists you may not be able to use this certificate to decrypt files.) Once this is completed you should delete this certificate from the Policy.


    There are a couple of ways to get a new DRA certificate. If you are running an Enterprise Certificate Authority in your Domain you can choose Create Data Recovery Agent and a new certificate should be automatically installed. If you don’t have an Enterprise Certificate Authority or if you want the certificate to be good for a much longer period of time you can use the cipher command and create a self-signed certificate that will be good for 99 years.


    If you choose to create a Data Recovery Agent using your Enterprise Certificate Authority, please make sure to Export the newly created certificate and Export the Private key to maintain security. To do this, right-click on the new certificate, choose All Tasks and then Export. A wizard will guide you through the export process. Choose Yes, export the private key and then click next. As a best practice, the private key should be deleted from the system when a successful export is complete. Strong private key protection should also be used as an extra level of security on the private key while it exists on a file system (CD, Floppy, hard drive).


    Once the *.PFX file and private key have been exported, the file should be secured on a stable media in a secure location. For example, you may want to preserve the *.PFX file on one or more CD-ROMs that are stored in a safety deposit box, vault, etc. that has strict physical access controls. If the file and associated private key are lost, it will be impossible to decrypt any existing files that have used that specific DRA certificate as the data recovery agent.

    Creating a Self-Signed DRA Certificate

    You may decide that even with an Enterprise Certificate Authority you want to use a Self-Signed DRA Certificate. The benefit of doing this is that you will not have to go through this process again. The downside is that there will be no Key Archival of the Private Key.

    To create a new self-signed DRA certificate you need to open a command prompt on an XP/2003/Vista computer and then type cipher /r:filename, where filename is the name of the file you want to create. In my example below I used the name recovery. Use any password you want when prompted.


    With the newly created DRA certificate, go back to the Default Domain Policy we were looking at above, select Add Data Recovery Agent, choose Browse Folders, and select the certificate you just created. If you get a pop-up box saying Windows cannot determine if this certificate has been revoked and asking Do you want to install this certificate, just click Yes.

    Now you need to make sure that all of your clients will trust this newly created certificate, so you need to import it into the Trusted Root Certification Authorities store. Just right-click, select Import, and with a few more clicks you will be almost done.


    Getting Your Clients to Use the New Certificate

    After you finish the above steps you need to refresh the Group Policy on the clients. You can do this by typing gpupdate /force at a command prompt. Once the policy has refreshed you should update the DRA information for the encrypted files by typing cipher /u at a command prompt. This will update only the files on the local machine so if you need to do this on a large number of machines you may want to put it in a login script. If you have any problems here you may need to reboot and try it again.

    Now that you have done all of this, how can you be sure that your encrypted files have been updated with the new DRA? Just check the Advanced Attributes of an encrypted file and compare the thumbprint of the DRA to the thumbprint of the certificate you just created.
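    Incidentally, the thumbprint shown in those dialogs is just the SHA-1 hash of the certificate's DER-encoded bytes, so you can compute it yourself when comparing; a small sketch:

```python
# A certificate's thumbprint, as displayed in the Windows certificate
# dialogs, is the SHA-1 hash of its DER-encoded bytes. This sketch computes
# one from raw bytes; with a real .cer file, read it in binary mode first.
import hashlib

def thumbprint(der_bytes):
    return hashlib.sha1(der_bytes).hexdigest().upper()

fake_der = b"\x30\x82\x01\x00"  # placeholder bytes, not a real certificate
print(thumbprint(fake_der))     # 40 hex characters, like the dialog shows
```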


    You can also use the command EFSINFO /R /C in the directory where you have encrypted files and it will show you the DRA information. EFSInfo is a resource kit utility and can be downloaded at the following location:

    Remember to copy the .PFX file you created earlier and put it away somewhere for safe keeping. This is the file you will need to import onto a user’s computer to decrypt a file should the need ever arise. If you created a new DRA certificate using your Certificate Authority you should export that certificate along with the private key and put it away as well.

    Next time I’ll talk about some of the reasons you can get an Access Denied while trying to decrypt files.

    Other Reading:

    929103 Error message when you try to renew the default recovery agent certificate in Windows Server 2003, in Windows XP, or in Windows 2000: "This certificate cannot be renewed because it does not contain enough information to generate a renewal request"

    241201 How to back up the recovery agent Encrypting File System (EFS) private key in Windows Server 2003, in Windows 2000, and in Windows XP

    - Tom Ausburne

  • What’s in a Token (Part 2): Impersonation

    It’s Randy again. In my last blog post, we discussed that the token is the identification for a process. The token object contains a list of security identifiers, rights, and privileges that the Windows Security Subsystem uses to grant access to secured resources and tasks. Each process running on a computer contains a token to represent who is doing the work for that activity. With identification on each process, Windows can guard against unauthorized access to sensitive data as well as protect the integrity of the operating system. At any given time, there are different processes running on a computer to represent the different activities required to function and to do what we ask. Some of these processes run as the logged-on user (interactive logon), while other processes run under a service startup account or as the computer itself.

    Sometimes a process needs to perform one task as someone and another task as someone else to successfully carry out its activity. Each of these tasks in a process is carried out by a thread. By default a thread uses the token of the process running the thread; this is called its Primary Token. If a process needs to perform a task as if it were someone else, a thread can use an Impersonation Token to identify itself as the other person instead of using the identity of the process.
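    As a conceptual sketch only (the class names here are invented for illustration and are not the Win32 API), the rule is: a thread presents its impersonation token if it has one, and otherwise falls back to the primary token of its process:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Token:
    user: str  # the security principal this token represents

@dataclass
class Process:
    primary_token: Token

@dataclass
class Thread:
    process: Process
    impersonation_token: Optional[Token] = None

    def effective_token(self) -> Token:
        # A thread presents its impersonation token if one is set;
        # otherwise it uses the primary token of its owning process.
        return self.impersonation_token or self.process.primary_token

# The Server service runs as Local System...
server_service = Process(primary_token=Token(user="NT AUTHORITY\\SYSTEM"))
worker = Thread(process=server_service)
assert worker.effective_token().user == "NT AUTHORITY\\SYSTEM"

# ...but while serving Billy's SMB request it impersonates him.
worker.impersonation_token = Token(user="CONTOSO\\Billy")
assert worker.effective_token().user == "CONTOSO\\Billy"

# When the task completes, the thread reverts to the primary token.
worker.impersonation_token = None
assert worker.effective_token().user == "NT AUTHORITY\\SYSTEM"
```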

    As we discuss impersonation, think of this real-world analogy: consider purchasing groceries with your credit card at the checkout counter of a grocery store. You give your credit card to the cashier, who swipes it through the cash register and processes the transaction as if they were you. If we could not impersonate, then everyone in the checkout line would have to get behind the cash register and initiate the transaction themselves, and I don’t think management would like giving the general public access to their cash registers. Instead, one person with the job qualification to charge credit cards can perform this action on every customer’s behalf, maintaining each customer’s identity with the help of the credit card.

    A good example of impersonation is connecting to a network share on a file server from your workstation. On the file server, the Server service is responsible for handling the network (SMB) request from the client. The Server service runs under “Local System” (SID: S-1-5-18) and is responsible for filling the request made by the workstation. The workstation process that wants to access the share runs as the user logged on to the workstation. In order for the operating system to control access to the file on the file server, the Server service must use the requestor’s credentials when retrieving the file, not those of Local System. The Server service process, running as Local System, therefore creates a thread with an Impersonation Token to perform the task of accessing the file with the identity of the user logged on to the workstation. It is difficult to catch this behavior, because as soon as the impersonating thread completes its task, the token reverts back to the Primary Token (the token of the process that created the thread). We’ll go through this scenario and find evidence of impersonation.

    A great tool for witnessing impersonation is Process Monitor. This tool logs all access to files and the registry at the time of access, so we can see a snapshot of the impersonating thread before it reverts back to the primary token. It will output a lot of entries, so you will want to use the filter option to locate the path to the file you are monitoring. We will monitor access to test.txt on the Tools share. On the file server, we will open test.txt using the local path (C:\Tools) versus the UNC path of the share (\\WtipFS1\Tools). Here are two screenshots below.


    This is some of the output from accessing test.txt using the local path. Notice that Explorer.exe is the process that opens this file. As mentioned in my first blog post, explorer.exe is the user shell that is running with the token of the logged on user. This token is used to compare against the security descriptor on test.txt, allowing us to set permissions to grant or deny access based on the user credentials.


    This is some of the output from accessing test.txt using the UNC path. It makes no difference whether we use the UNC path from the file server or from a workstation; both scenarios use the Redirector and the Server service to get the file. When accessing via the UNC path, we see the System services doing all the work. The process is using the token of Local System, which has access to all the files regardless of what user is requesting them. In order to enforce the security on the file, Local System needs to impersonate the user and present the requesting user’s token to the security subsystem to compare against the security descriptor on test.txt. Below we see the properties of the log event in Process Monitor (double-click one of the log entry lines) when Local System opens our test.txt file.


    Notice at the bottom that Local System is impersonating Billy. This way, we could deny Billy access to this file and our permissions would be adhered to even when another process tries to get this file on Billy’s behalf.

    In our example above, we see that Local System impersonates users in order to provide services. This behavior is not limited to Local System; we can grant this ability to anybody by using the security privileges mentioned in my previous blog post. The privilege needed is Impersonate a client after authentication (SeImpersonatePrivilege). This privilege is listed in the token of the process and is checked by the security subsystem when that process wants to create a thread with an impersonation token.
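    A toy model of that check, with invented names (the real check happens inside the security subsystem, not in user code): a process may attach an impersonation token to a thread only if its own token holds the privilege:

```python
def can_impersonate(process_privileges: set) -> bool:
    """Allow impersonation if the process token holds
    SeImpersonatePrivilege (or the all-powerful SeTcbPrivilege,
    discussed below)."""
    return bool({"SeImpersonatePrivilege", "SeTcbPrivilege"} & process_privileges)

# A typical service account token versus a plain user token.
assert can_impersonate({"SeImpersonatePrivilege", "SeChangeNotifyPrivilege"})
assert not can_impersonate({"SeChangeNotifyPrivilege"})
```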

    Auditing is another good tool to track some of the symptoms of impersonation. As mentioned above, a process requires the Impersonate a client after authentication privilege to use an impersonation token on a worker thread. If we audit privilege use, we will see this privilege enumerated when a process starts. In our file server example, the System process actually has the Act as part of the operating system privilege (SeTcbPrivilege), which changes all the rules by giving the process complete authority over the local system. Referred to by developers as TCB (trusted computing base), this permits the process to call LsaLogonUser, authenticate as anyone, and access anything local to the system. This is why careful consideration needs to be made when granting this privilege.

    Another interesting application for experimenting with impersonation is Internet Information Services (IIS). It is easy to configure different accounts for the application pool that runs a website. It is also easy to configure anonymous access or Integrated Windows authentication and witness the different behaviors of each. By default, our website is running under the credentials of the local IWAM account. When we restart the IIS services, we see an audit event, like the one below, of the IWAM account enumerating its privilege use.


    We can also enable network logon auditing on the file server. When we access the default website from a different computer, then we see a logon event, on the file server hosting the website, similar to the one below.


    In this audit event we see that Billy accessed the website on the fileserver. Notice that the logon type is set to 3, which equates to a Network Logon. A network logon tells us that a remote process is requesting a resource on this box and providing credentials. A local service must handle this request, therefore impersonating the credentials of the remote process. If LogonType is Interactive or Batch, a primary token is generated to represent the new user. If LogonType is Network, an impersonation token is generated. To learn more about the logon process, you can research LsaLogonUser or the Logon and Authentication technical reference.
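    The logon type to token relationship described above can be summarized in a small lookup; the numeric codes are the standard event-log logon types (a sketch covering only a subset):

```python
# A subset of the standard event-log logon type codes.
LOGON_TYPES = {2: "Interactive", 3: "Network", 4: "Batch"}

def token_kind(logon_type: int) -> str:
    """Interactive and Batch logons produce a primary token;
    Network logons produce an impersonation token."""
    return "impersonation" if LOGON_TYPES[logon_type] == "Network" else "primary"

assert token_kind(3) == "impersonation"  # Billy hitting the website remotely
assert token_kind(2) == "primary"        # logging on at the console
```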

    Now we have given a process the ability to perform a task as a different user; that sounds like a pretty powerful privilege – but also a potential area for exploitation. This is why we control impersonation by providing different levels of impersonation: Anonymous, Identification, Impersonation, and Delegation. These levels refer to what you can do with the impersonation token after you get it. Anonymous is just like it sounds – we can impersonate you as an anonymous user, and that’s all. Identification is when a process can pull your token to validate credentials, but cannot do anything with the token. Impersonation means that the process can perform tasks as a different user, as in our network share and IIS examples above. Impersonation is limited to tasks local to that computer; a process cannot “run a task” or “request an object” that resides outside of the system, because we grant the privilege on a session-by-session basis (different computer, different session). Delegation is one step beyond Impersonation: the process can call resources and perform tasks on computers other than its own. The fact that we can impersonate beyond the boundaries of the local system is an important distinction between impersonation and delegation; the target resource off the system will have no knowledge of the middle-tier impersonator. We will not go into great detail about Delegation, but it is a feature provided to us by the Kerberos authentication protocol and is also referred to as “double hop” authentication – meaning that our credentials hop from one computer to the next, rather than us having to go to each computer individually.
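    The four levels form a strict ordering of what the server may do with the client's token. A sketch of that ordering (the function and action names are invented for illustration):

```python
from enum import IntEnum

class ImpersonationLevel(IntEnum):
    ANONYMOUS = 0       # server sees only an anonymous user
    IDENTIFICATION = 1  # server can read the token but not use it
    IMPERSONATION = 2   # server can act as the client on the local system
    DELEGATION = 3      # server can act as the client on other computers

def allowed(level: ImpersonationLevel, action: str) -> bool:
    """Minimum level required for each thing a server might do
    with a client's token."""
    needed = {
        "identify_client": ImpersonationLevel.IDENTIFICATION,
        "access_local_resource": ImpersonationLevel.IMPERSONATION,
        "access_remote_resource": ImpersonationLevel.DELEGATION,
    }[action]
    return level >= needed

assert allowed(ImpersonationLevel.IMPERSONATION, "access_local_resource")
assert not allowed(ImpersonationLevel.IMPERSONATION, "access_remote_resource")
assert not allowed(ImpersonationLevel.IDENTIFICATION, "access_local_resource")
assert allowed(ImpersonationLevel.DELEGATION, "access_remote_resource")
```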

    We define the level of impersonation in two places. The first is in the client/server code itself: the client tells the server process exactly what it can do with these credentials when it receives them, during the InitializeSecurityContext function call. This is one way to safeguard against granting more power than necessary, but here we are reliant on the software developers for the safety net. The second place is in granting the needed rights and privileges to the server process, which is controlled by us network administrators. We already discussed SeImpersonatePrivilege, but there are additional options with delegation: marking the client object as non-delegable, permitting delegation on the server process, and even constraining delegation to limit the tasks and remote computers to which it can delegate.

    This summarizes the concept of the impersonation token. In a future blog post, I will discuss how a process impersonates itself with certain privileges and security groups removed, referred to as a restricted token. The benefit of this is that credentials need not be all or nothing – why give a process more rights than it needs to do its job? This has been enhanced in Vista with User Account Control, which logs a user on with two separate tokens (a full and a restricted one) and requires user acknowledgement to initiate a process that needs the full token.

    - Randy Turner

  • Customizing file compression in DFSR on Windows Server 2008 (no, not RDC)

    Hi, Ned here. If you’re a regular reader of the Filing Cabinet, you’ve probably already read about all the new features for DFS Replication (DFSR) added in Windows Server 2008. If not, trot on over here and drink your fill. Just to summarize some of these cool new features:

    | Windows Server 2003 R2 | Windows Server 2008 |
    | --- | --- |
    | SYSVOL replicated with the FRS service | SYSVOL replicated with DFSR (via migration, or by creating your domain in 2008 functional mode) |
    | RPC synchronous pipes | RPC asynchronous pipes (when replicating between 2008 servers) |
    | Synchronous inputs/outputs (I/Os) | Asynchronous I/Os |
    | Buffered I/Os | Unbuffered I/Os |
    | Normal-priority I/Os | Low-priority I/Os (reduces the load on the system as a result of replication) |
    | 4 concurrent file downloads | 16 concurrent file downloads |
    | Database recovery mechanism | Improved dirty shutdown recovery |
    | DFSR scheduler | Algorithmic enhancements for the scheduler |
    | Number of replication groups × number of replicated folders × number of simultaneously replicating connections must be less than 1024 | Limited only by your hardware, network connections, and frequency of data change |

    But there’s something missing. There’s a little history here, so walk with me down technical memory lane.

    When DFSR was released on Windows Server 2003 R2 in February 2006, it had a list of file extensions that were non-compressible. These were:


    When you added a file to a DFSR content set and it was at least 64KB in size, the DFSR service placed a copy of the file in the Staging folder. The file had an alternate data stream added that held the computed Remote Differential Compression (RDC) signatures. RDC is what allows DFSR to only replicate the parts of the file that have changed. But files were also compressed with a standard compression algorithm called XPRESS, which is similar to ZIP or RAR compression. Any files that ended up in staging were compressed with XPRESS unless they had an extension that was on this list. Since there was no point in trying to shrink files that were already compressed (like ZIP or JPG), we simply didn’t bother; it wasted disk and CPU resources after all.
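    The staging behavior described above boils down to a simple predicate. Here is a sketch using the 64 KB threshold from the text and only the two example extensions named above (the real hardcoded list was longer; the helper name is made up):

```python
# Extensions named above as examples; the real R2 list contained more
# already-compressed formats.
EXCLUSIONS = {".zip", ".jpg"}
STAGING_THRESHOLD = 64 * 1024  # files of at least 64 KB are staged

def xpress_compress_in_staging(filename: str, size_bytes: int) -> bool:
    """Return True if a staged copy of the file would be XPRESS-compressed."""
    if size_bytes < STAGING_THRESHOLD:
        return False  # too small to be staged at all
    ext = "." + filename.rsplit(".", 1)[-1].lower()
    return ext not in EXCLUSIONS  # skip formats that are already compressed

assert xpress_compress_in_staging("budget.doc", 200 * 1024)
assert not xpress_compress_in_staging("photo.JPG", 200 * 1024)
assert not xpress_compress_in_staging("note.txt", 10 * 1024)
```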

    After digging around in the product design request database here, looking at the source code, and pinging the developers, I came up short: the list was hardcoded. Rats. I wrote up a short internal article and promptly forgot about it.

    Fast forward to Windows Server 2008.

    You guessed it – we can do this now and here are the steps:

    1. Your Active Directory forest schema must have been extended with ADPREP to Windows Server 2008 (or you have installed a new Win2008 forest). If this is not done you won’t have the new attribute necessary for this to work.

    2. Setup DFSR on a few Win2008 machines (add the role through Server Manager, use DFSMGMT.MSC to configure, and let the servers pick up the new settings).


    3. On one of your domain controllers run ADSIEDIT.MSC as a Domain Administrator.

    4. Navigate to your content set that you want to modify. So for example:


    cn=Public,cn=content,cn=Shared,cn=dfsr-globalsettings,cn=system,dc=cohowinery,dc=com

    5. Right-click the replicated folder (cn=Public) and choose Properties.

    6. Scroll down a ways and edit the attribute msDFSR-DefaultCompressionExclusionFilter.

    Important Notes: Any extensions we add here become the new rule for compression. So if we just add FOO, then all files that are *.foo will no longer be compressed. We need to comma-separate the list, and there must not be any spaces. Also, this list is destructive – if we put anything here, then the default list of extensions like ZIP or JPG is ignored going forward unless we add them back. Finally, if the attribute is set to “<Not Set>”, the default extension list from above is used.
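    Those formatting rules can be captured in a small validator – a sketch with an invented helper name, not anything DFSR itself runs:

```python
def valid_exclusion_filter(value: str) -> bool:
    """Check a candidate msDFSR-DefaultCompressionExclusionFilter value
    against the rules above: '*' alone disables all compression;
    otherwise the value must be a comma-separated extension list with
    no spaces and no empty entries."""
    if value == "*":
        return True
    if " " in value:
        return False
    parts = value.split(",")
    return all(parts) and all(p.isalnum() for p in parts)

# The default list is ignored once we set anything, so re-add ZIP and
# JPG alongside our hypothetical NED extension.
assert valid_exclusion_filter("ZIP,JPG,NED")
assert valid_exclusion_filter("*")
assert not valid_exclusion_filter("ZIP, JPG, NED")  # spaces not allowed
assert not valid_exclusion_filter("ZIP,,NED")       # empty entry
```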

    7. So let’s add our new extension. We have an application that creates drawing files called *.NED. These NED files are already compressed, so we don’t want DFSR to waste time trying to compress them further. In the Value field of our msDFSR-DefaultCompressionExclusionFilter attribute we add all of the below:


    8. We click OK and exit ADSIEDIT. We wait for AD replication to complete, then wait for the DFSR servers to pick up the change through their polling process (or run DFSRDIAG POLLAD).

    Now when NED files are created or modified, they will no longer be compressed in the staging directory. Sweet! If for some reason you wanted to turn off all file compression, you can set the msDFSR-DefaultCompressionExclusionFilter attribute to contain just an asterisk. This would have the minor upside of saving some CPU and disk time when staging, but could have a significant downside in bandwidth consumption and replication time for larger files – so this is not recommended and you shouldn’t do this without significant testing in your lab environment. You cannot tailor RDC to only affect certain file types – it is limited to off/on/minimum size, just like in R2.

    And yes, we have a KB article for this in the works. The beauty of blogging means you don’t have to wait for it. :)

    - Ned Pyle

  • New KB articles January 13-19

    Here are the new Directory Services-related KB articles for the week. Also, although it isn't entirely new, I was reminded that we have the Windows Server 2008 command reference available now. So here are links to both the 2008 and 2003 versions. They are good as a quick reference when you need a command's syntax, but also interesting to look through to become aware of tools you might not use on a daily basis. AD admins will want to familiarize themselves with Wevtutil and Icacls; both are new to Vista/2008.

    Windows Server 2008 Command Reference

    Windows Server 2003 Command-Line Reference

    KB Title


    After you reapply Internet Explorer Maintenance Group Policy settings on a computer that has Internet Explorer 7 installed, a pop-up blocker exception site that you manually added is missing


    Certain Windows XP-related user accounts and groups remain on the computer after you upgrade to Windows Vista


    You receive an access denied message on a Windows Vista-based computer when you click offline files that are not synchronized with the file server


    A Windows Vista-based computer may be unable to connect to the network after you configure the computer to use machine authentication and to validate the RADIUS server certificate


    On a computer that is a Windows Server 2003 domain member, you may experience problems when the name of the local user account is the same as the domain name


    A Windows Vista-based computer is frequently unresponsive for 30 seconds if the Documents folder is redirected to a shared folder, and the folder is made available offline

    - Craig Landis