Blog - Title

November, 2011

  • Removing DFSR Filters

    Hi folks, Ned here again. DFSR administrators usually know about its built-in filtering mechanism. You can configure each filter based on file name and extension; by default, files named like “*.bak, *.tmp, ~*” are filtered, as they are unlikely to be permanent or useful between servers. You can also filter out folders; this is less common, as you cannot provide a full path - only a name. Sometimes though, it is useful for a specific working folder used by applications.

    When you enable a filter, each server scans its database for that replicated folder path and removes any records of files and folders that match the filter. Because the database doesn’t care about filtered objects, DFSR ignores any future changes to the files. In practical filtering terms, this means:

    1. Any new files/folders added do not replicate to any other server.

    2. Files/folders that already exist through previous replication stay on all servers.

    3. Files/folders previously replicated and then later modified after enabling the filter are quasi-deleted from all other servers, in the sense that they are added to the database tombstone list. After all, they are filtered and therefore “no longer exist” when updated, according to the downstream servers. But the files do not actually get deleted from the replicated folder; they simply stop replicating and if anyone changes them on various server nodes, they diverge.
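    Before changing anything, it helps to see which filters are actually in effect on a server. One way (a sketch, assuming the DFSR WMI provider on Windows Server 2003 R2/2008/2008 R2) is to query the DfsrReplicatedFolderConfig class:

```shell
REM List the file and folder filters for every replicated folder on this server
wmic /namespace:\\root\microsoftdfs path DfsrReplicatedFolderConfig get ReplicatedFolderName,FileFilter,DirectoryFilter
```

    The same values are visible on the replicated folder's properties in DFSMGMT.MSC.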

    But what about when you remove filters?


    This is rare enough that we've never bothered to document it. There are key issues to understand:

    1. You must install the latest DFSR hotfixes for your operating system, on all DFSR servers.

    List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2 -

    List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2003 and in Windows Server 2003 R2 -

    Do not continue with any filter changes until you have installed those hotfixes and restarted the DFSR servers*. There is a very nasty bug that leads to folders refusing to replicate their previously filtered contents, or to files disappearing from partner servers.

    * Alternatively, you can stop the DFSR service and then install the hotfixes. Generally, this removes the need to restart, and no prompt displays.


    I'm often asked if DFSR hotfixes are recommended preemptively, and the answer is YES. Data loss hotfixes do not fix your lost data; they only stop further data loss! You should probably keep on top of NTFS hotfixes too.

    2. Any changes made between servers after enabling the filter are going to result in some deleted or overwritten files, and that is by design. By setting a DFSR filter, you told DFSR that those folders and their contents no longer exist in DFSR terms. When the filter comes off, and after the first change is made in some server’s copy of that folder, the conflict resolution algorithm kicks in and the two folders synchronize, regardless of your wishes as to which folder wins.

    Here I filtered subfolder2 and created – on each server – different bmp files named madeon1 (on server 01) and madeon2 (on server 02):



    Then I removed the filter, forced AD replication, and forced DFSR to poll AD. Those files have different names, so nothing “bad” happened and they both sync:


    However, if I repeat that un-filtering test with two files with the same name but different contents (the file was originally replicated to both servers, filtered out, and then later modified on server 02):



    Then the newer file will win, overwriting what’s on 01 (even if it was “good” data for some user). This is why when you filter folders from replication, you should make sure it’s not user data that is still modified on multiple servers. Someday it may have to re-converge.
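    If you need to see what the conflict algorithm discarded, the losing copies are moved into the hidden DfsrPrivate structure under the replicated folder, renamed, with a manifest mapping them back to their original names. A quick sketch (the D:\RF path is a placeholder; adjust for your layout):

```shell
REM Losing files land here with mangled names
dir /a "D:\RF\DfsrPrivate\ConflictAndDeleted"

REM The manifest maps the mangled names back to the original paths
type "D:\RF\DfsrPrivate\ConflictAndDeletedManifest.xml"
```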


    3. It is critical that you back up filtered folders on all servers replicating their parent, as you are very likely to need to restore some files in order to undo some of the conflict resolution damage. Some users are going to be very unhappy otherwise – and if they are the Vice President of TV Programming and Microwave Oven Technologies, they will come down on you like a load of bricks.

    A minority of customers use DFSR folder filters – they are not very flexible, due to their lack of path support. They are safe only if users cannot alter the filtered folders or their contents – then at least users will not have a negative experience.

    Until next time,

    - Ned “Director of Directories” Pyle

  • Friday Mail Sack: Guest Reply Edition

    Hi folks, Ned here again. This week we talk:

    Let's gang up.


    We plan to migrate our Certificate Authority from single-tier online Enterprise Root to two-tier PKI. We have an existing smart card infrastructure. TechNet docs don’t really speak to this scenario in much detail.

    1. Does migration to a 2-tier CA structure require any customization?

    2. Can I keep the old CA?

    3. Can I create a new subordinate CA under the existing CA and take the existing CA offline?


    [Provided by Jonathan Stephens, the Public Keymaster - Editor]

    We covered this topic in a blog post, and it should cover many of your questions:

    Aside from that post, you will also find the following information helpful:

    To your questions:

    1. While you can migrate an online Enterprise Root CA to an offline Standalone Root CA, that probably isn't the best decision in this case with regard to security. Your current CA has issued all of your smart card logon certificates, which may have been fine when that was all you needed, but it certainly doesn't comply with best practices for a secure PKI. The root CA of any PKI should be long-lived (20 years, for example) and should only issue certificates to subordinate CAs. In a 2-tier hierarchy, the second tier of CAs should have much shorter validity periods (5 years, say) and be responsible for issuing certificates to end entities. In your case, I'd strongly consider setting up a new PKI and migrating your organization over to it. It is more work at the outset, but it is the better decision long term.
    2. You can keep the currently issued certificates working by publishing a final, long-lived CRL from the old CA. This is covered in the first blog post above. This would allow you to slowly migrate your users to smart card logon certificates issued by the new PKI as the old certificates expired. You would also need to continue to publish the old root CA certificate in the AD and in the Enterprise NTAuth store. You can see these stores using the Enterprise PKI snap-in: right-click on Enterprise PKI and select Manage AD Containers. The old root CA certificate should be listed in the NTAuthCertificates tab, and in the Certificate Authorities Container tab. Uninstalling the old CA will remove these certificates; you'll need to add them back.
    3. You can't take an Enterprise CA offline. An Enterprise CA requires access to Active Directory in order to function. You can migrate an Enterprise CA to a Standalone CA and take that offline, but, as I've said before, that really isn't the best option in this case.
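    As a sketch of the final-CRL step from point 2, run something like this on the old CA before decommissioning it (the 10-year period is only an example; pick one that outlives your longest-lived issued certificate):

```shell
REM Extend the CRL publication period on the old CA...
certutil -setreg CA\CRLPeriodUnits 10
certutil -setreg CA\CRLPeriod "Years"

REM ...restart the CA service so the new period takes effect...
net stop certsvc & net start certsvc

REM ...and publish one final, long-lived CRL
certutil -crl
```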


    Are there any known issues with P2V'ing ADAM/AD LDS servers?


    [Provided by Kim Nichols, our resident ADLDS guru'ette - Editor]

    No problems as far as we know. The same rules apply as P2V’ing DCs or other roles; make sure you clean up old drivers and decommission the physicals as soon as you are reasonably confident the virtual is working. Never let them run simultaneously. All the “I should have had a V-8” stuff.

    Considering how simple it is to create an ADLDS replica, it might be faster and "cleaner" to create a new virtual machine, install and replicate ADLDS to it, then rename the guest and throw away the old physical; if ADLDS was its only role, naturally.


    [Provided by Fabian Müller, our clever German PFE - Editor]

    When using production delegation in AGPM, we can grant permissions for editing group policy objects in the production environment. But these permissions are written to all deployed GPOs, not to specific ones. GPMC makes it easy to set “READ” and “APPLY” permissions on a GPO, but I cannot find a security filtering switch in AGPM. So how can we manage security filtering on group policies without setting the same ACL on all deployed policies?


    OK, granting “READ” and “APPLY” permissions (that is, managing security filtering) is not that obvious to find in AGPM. Do it like this in the Change Control panel of AGPM:

    • Check out the relevant Group Policy Object and provide a brief overview of the changes to be made in the “comments” window, e.g. “Add important security filtering ACLs for group XYZ, dude!”
    • Edit the checked-out GPO

    At the top of the Group Policy Management Editor, click “Action” –> “Properties”:


    • Switch to the “Security” tab and provide your settings for security filtering:


    • Close the Group Policy Management Editor and check in the policy (again with a good comment)
    • When everything is done, you can safely “Deploy” the just-edited GPO; now the security filter is in place in production:


    Note 1: Be aware that you won’t find any information regarding the security filtering change in the AGPM history of the edited group policy object. There is nothing in the HTML reports that refers to security filtering changes. That’s why you should provide a good explanation of your changes during the “check-in” and “check-out” phases:



    Note 2: Be careful with “DENY” ACEs using AGPM – they might get removed. See the following blog for more information on that topic:


    I have one Windows Server 2003 IIS machine with two web applications, each in its own application pool. How can I register SPNs for each application?


    [This one courtesy of Rob Greene, the Abominable Authman - Editor]

    There are a couple of options for you here.

    1. You could address each web site on the same server with different host names. Then you can add the specific HTTP SPN to each application pool account as needed.
    2. You could address each web site with a unique port assignment on the web server. Then you can add the specific HTTP SPN with the port attached like http/
    3. You could use the same account to run all the application pools on the same web server.

    NOTE: If you choose option 1 or 2, you have to be careful about Internet Explorer behaviors. If you choose a unique host name per web site, you will need to make sure to use HOST records in DNS, or put a registry key in place on all workstations if you choose CNAME. If you choose a unique port for each web site, you will need to put a registry key in place on all workstations so that they send the port number in the TGS SPN request.
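    A sketch of options 1 and 2 with setspn (host names, ports, and service accounts below are placeholders; note that on Windows Server 2003 setspn only has -A, as the duplicate-checking -S switch came later):

```shell
REM Option 1: a unique host name per web site, SPN on each app pool account
setspn -A http/app1.contoso.com CONTOSO\svcAppPool1
setspn -A http/app2.contoso.com CONTOSO\svcAppPool2

REM Option 2: one host name, a unique port per web site
setspn -A http/webserver.contoso.com:8080 CONTOSO\svcAppPool2
```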


    Comparing AGPM-controlled GPOs within the same domain is no problem at all – but if the AGPM server serves more than one domain, how can I compare GPOs that are hosted in different domains using the AGPM difference report?


    [Again from Fabian, who was really on a roll last week - Editor]

    Since AGPM 4.0 we provide the ability to export and import Group Policy Objects using AGPM. What you have to do is:

    • To export one of the GPOs from domain 1…:


    • … and import the *.cab to domain 2 using the AGPM GPO import wizard (right-click on an empty area in the AGPM Contents –> Controlled tab and select “New Controlled GPO…”):



    • Now you can simply compare those objects using difference report:


    [Woo, finally some from Ned - Editor]


    When I use the Windows 7 (RSAT) version of AD Users and Computers to connect to certain domains, I get the error "unknown user name or bad password". However, when I use the XP/2003 adminpak version, there are no errors for the same domain. There's no way to enter a domain or password.


    ADUC in Vista/2008/7/R2 does some group membership and privilege checking when it starts that the older ADUC never did. You’ll get the logon failure message for any domain you are not a domain admin in, for example. The legacy ADUC is probably broken for that account as well – it’s just not telling you.



    I have 2 servers replicating with DFSR, and the network cable between them is disconnected. I delete a file on Server1, while the equivalent file on Server2 is modified. When the cable is re-connected, what is the expected behavior?


    Last updater wins, even if the update is a modification of an ostensibly deleted file. If the file was deleted first on server 1 and modified later on server 2, it replicates back to server 1 with the modifications once the network reconnects. If the deletion had happened later than the modification, that “last write” would win and the file would be deleted from the other server once the network resumed.

    More info on DFSR conflict handling here


    Is there any automatic way to delete stale user or computer accounts? Something you turn on in AD?


    Nope, not automatically; you have to create a solution that detects the age and disables or deletes stale accounts. This is a very dangerous operation – make sure you understand what you are getting yourself into. For example:
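    With the Windows Server 2008 R2 Active Directory module, you can at least generate the candidate list safely. A sketch (the 90-day threshold is arbitrary, and -WhatIf keeps the disable step harmless until you remove it):

```shell
# Requires the AD module (2008 R2 DC or RSAT)
Import-Module ActiveDirectory

# Report users with no logon in the last 90 days - review before acting!
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
    Select-Object Name, LastLogonDate

# Disable (do not delete) only after you have validated the list
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
    Disable-ADAccount -WhatIf
```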


    Whenever I try to use the PowerShell cmdlet Get-Acl against an object in AD, I always get an error like "Cannot find path ou=xxx,dc=xxx,dc=xxx because it does not exist". But it does!


    After you import the ActiveDirectory module, but before you run your commands, run:

    CD AD:

    Get-Acl won’t work until you change to the magical “active directory drive”.
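    In other words (the OU path below is a placeholder):

```shell
Import-Module ActiveDirectory

# Get-Acl resolves distinguished names only from within the AD: drive
CD AD:
Get-Acl "ou=Sales,dc=contoso,dc=com" | Format-List Owner, AccessToString
```

    Alternatively, you can skip the CD by provider-qualifying the path, e.g. Get-Acl "AD:\ou=Sales,dc=contoso,dc=com".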


    I've read the Performance Tuning Guidelines for Windows Server, and I wonder if all the SMB server tuning parameters (AsyncCredits, MinCredits, MaxCredits, etc.) also work (or help) for DFSR. Also, do you know what the limit is for SMB Asynchronous Credits? The document doesn’t say.


    Nope, they won’t have any effect on DFSR – it does not use SMB to replicate files. SMB is only used by DFSMGMT.MSC if you ask it to create a replicated folder on another server during RF setup. More info here:

    Configuring DFSR to a Static Port - The rest of the story -

    That AsynchronousCredits SMB value does not have a true maximum, other than the fact that it is a DWORD and cannot exceed 4,294,967,295 (i.e. 0xffffffff). Its default value on Windows Server 2008 and 2008 R2 is 512; on Vista/7, it's 64.


    As KB938475 points out, adjusting these defaults comes at the cost of paged pool (kernel) memory. If you were to increase these values too high, you would eventually run out of paged pool and then perhaps hang or crash your file servers. So don't go crazy here.

    There is no "right" value to set - it depends on your installed memory, if you are using 32-bit versus 64-bit (if 32-bit, I would not touch this value at all), the number of clients you have connecting, their usage patterns, etc. I recommend increasing this in small doses and testing the performance - for example, doubling it to 1024 would be a fairly prudent test to start.
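    If you do test it, the value lives under the LanmanServer parameters key. A sketch (1024 is simply the doubled 2008/2008 R2 default; the Server service must restart to pick up the change):

```shell
REM Check the current value (if absent, the built-in default is in effect)
reg query "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AsynchronousCredits

REM Double the Windows Server 2008 / 2008 R2 default of 512
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v AsynchronousCredits /t REG_DWORD /d 1024

REM Restart the Server service (drops connected clients - schedule accordingly)
net stop server /y & net start server
```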

    Other Stuff

    Happy Birthday to all US Marines out there, past and present. I hope you're using Veterans Day to sleep off the hangover. I always assumed that's why they made it November 11th, not that whole WW1 thing.

    Also, happy anniversary to Jonathan, who has been a Microsoft employee for 15 years. In keeping with tradition, he brought in 15 pounds of M&Ms for the floor, which, in case you’re wondering, fills a salad bowl. Which around here, means:


    Two of the most awesome things ever – combined:

    A great baseball story about Lou Gehrig, Kurt Russell, and a historic bat.

    Off to play some Battlefield 3. No wait, Skyrim. Ah crap, I mean Call of Duty MW3. And I need to hurry up as Arkham City is coming. It's a good time to be a PC gamer. Or Xbox, if you're into that sorta thing.


    Have a nice weekend folks,

     - Ned "and Jonathan and Kim and Fabian and Rob" Pyle