
September, 2010

  • Microsoft’s Support Statement Around Replicated User Profile Data

    [Note from Ned: this article was created and vetted by the Microsoft development teams for DFS Replication, DFS Namespaces, Offline Files, Folder Redirection, Roaming User Profiles, and Home Folders. Due to some TechNet publishing timelines, it was decided to post here in the interim. This article will become part of the regular TechNet documentation tree at a later date. The primary author of this document is Mahesh Unnikrishnan, a Senior Program Manager who works on the DFSR, DFSN, and NFS product development teams. You can find other articles by Mahesh at the MS Storage Team blog: http://blogs.technet.com/b/filecab.

    The purpose of this article is to clarify exactly which scenarios are supported for user profile data when used with DFSR, DFSN, FR, CSC, RUP, and HF. It also explains why the unsupported scenarios should not be used. When you finish reading this article, I recommend reviewing http://blogs.technet.com/b/askds/archive/2009/02/20/understanding-the-lack-of-distributed-file-locking-in-dfsr.aspx

    Update 4/15/2011 - the DFSR development team created a matching KB for this - http://support.microsoft.com/kb/2533009]

    Deployment scenario 1: Single file server, replicated to enable centralized backup

    Consider the following illustrative scenario. Contoso Corporation has two offices – a main office in New York and a branch office in London. The London office is a smaller office and does not have dedicated IT staff on site. Therefore, data generated at the London office is replicated over the WAN link to the New York office for backup.

    Contoso has deployed a file server in the London branch office. User profiles and redirected home folders are stored on shares exported by that file server. The contents of these shares are replicated to the central hub server in the New York office for centralized backup and data management. In this scenario, a DFS namespace is not configured. Therefore, users will not be automatically redirected to the central file server if the London file server is unavailable.

    [Diagram: the London branch file server replicating home folder and user profile data over the WAN to the central file server in New York]

    As illustrated by the diagram above, there is a file server hosting home folders and user profile data for all employees in Contoso’s London branch office. The home folder and user profile data is replicated using DFS Replication from the London file server to the central file server in the New York office. This data is backed up using backup software like Microsoft’s System Center Data Protection Manager (DPM) at the New York office.

    Note that in this scenario, all user initiated modifications occur on the London file server. This holds true for both user profile data and the data stored in users’ home folders. The replica in the New York office is only for backup purposes and is not being actively modified or accessed by users.

    There are a few variants of this deployment scenario, depending on whether a DFS Namespace is configured. The following sub-sections detail these variants and specify which of them are supported.

    Scenario 1A: DFS Namespace is not configured

    [Supported Scenario]

    Scenario highlights:

    • A single file server is deployed per branch office. Home folders and roaming user profiles for users in that branch office are stored on the branch file server.
    • This data is replicated using DFS Replication over the WAN from (multiple) branch file servers to a hub server for centralized backup (using DPM).
    • DFS Namespace is not configured.

    Specifics:

    • In this scenario, in effect, only one copy of the data is modified by end-users, i.e. the data hosted on the branch office file server (London file server, in this example).
    • The replica hosted by the file server in the hub site (New York file server, in this example) is only for backup purposes and users are not actively directed to that content.
    • In this scenario, DFS Namespaces is not configured.
    • Folder redirection may be configured for users in the branch office with data stored on a share hosted by the branch office file server.
    • Roaming user profiles may be configured with user profile data stored on the branch office file server.
    • Offline Files (Client Side Caching) may be configured, with the data stored on the branch office file server made available offline to users in the branch office.

    Scenario 1B: DFS Namespace is configured – single link target

    [Supported Scenario]

    This is a variation of the above scenario, with the only difference being that DFS Namespaces is set up to create a unified namespace across all shares exported by the branch office file server. However, in this scenario, all namespace links must have only one target [1] - the share hosted by the branch office file server.

    [1] Deployment scenarios where namespace links have multiple targets are discussed later in this document.

    Scenario highlights:

    • A single file server is deployed per branch office. Home folders and roaming user profiles for users in that branch office are stored on the branch file server.
    • This data is replicated using DFS Replication over the WAN from (multiple) branch file servers to a hub server for centralized backup (using DPM).
    • A DFS Namespace is configured in order to create a unified namespace. However, namespace links do not have multiple targets – the share on the central file server is not added as a DFS-N link target.

    Specifics:

    • In this scenario, in effect, only one copy of the data is modified by end-users, i.e. the data hosted on the branch office file server (London file server, in this example).
    • The replica hosted by the file server in the hub site (New York file server) is only for backup purposes and users are not actively directed to that content.
    • In this scenario, a DFS Namespace may be configured, but multiple targets are not set up for links. In other words, none of the namespace links point to replicas of the share hosted on the branch office file server as well as the central file server. Namespace links point only to the share hosted by the branch office file server.
    • Therefore, if the branch office file server were to fail, there would not be an automatic failover of clients to the central file server.
    • Folder redirection may be configured for users in the branch office with data stored on a share hosted by the branch office file server.
    • Roaming user profiles may be configured with user profile data stored on the branch office file server.
    • Offline Files (Client Side Caching) may be configured, with the data stored on the branch office file server made available offline to users in the branch office.

    Support Statement (deployment scenario 1):

    Both variants of this deployment scenario are supported. The key point to remember for this deployment scenario is that only one copy of the data is actively modified and used by client computers, thereby avoiding issues caused by replication latencies and users accessing potentially stale data from the file server in the main office (which may not be in sync).

    The following use-cases will work in this deployment scenario:

    • Terminal Services (TS) farm using the file server in the branch as the backend store.
    • Laptops in branch office with offline files configured against the branch file server.
    • Regular desktops with folder redirection configured.

    In this scenario, the following technologies are supported and will work:

    • Folder redirection to the file server in the branch.
    • Client side caching/Offline files.
    • Roaming user profiles.

    Designing for high availability

    DFS Replication in Windows Server 2008 R2 includes the ability to add a failover cluster as a member of a replication group. To do so, refer to the TechNet article ‘Add a Failover Cluster to a Replication Group’. Offline files and Roaming User Profiles can also be configured against a share hosted on a Windows failover cluster.

    For the above mentioned deployment scenarios, the branch office file server may be deployed on a failover cluster to increase availability. This ensures that the branch office file server is resilient to hardware and software related outages affecting individual cluster nodes and is able to provide highly available file services to users in the branch office.

    Deployment scenario 2: Multiple (replica) file servers for geo-location

    Consider the same scenario described above with a few differences. Contoso Corporation has two offices – a main office in New York and a branch office in London. Contoso has deployed a file server in the London branch office. User profiles and redirected home folders are stored on shares exported by that file server. The contents of these shares are replicated to the central hub server in the New York office for centralized backup and data management.

    In this scenario, a DFS namespace is configured in order to enable users to be directed to the replica closest to their current location. Therefore, namespace links have multiple targets – the file server in the branch as well as the central file server. Optionally, the namespace may be configured to prefer issuing referrals to shares hosted by the branch office file server by ordering referrals based on target priority.

    The replica in the central hub/main site may optionally be configured to be a read-only DFS replicated folder.

    Scenario 2A: DFS Namespaces is configured – multiple link target configuration

    [Unsupported Scenario]

    Scenario highlights:

    • A single file server is deployed per branch office. Home folders and roaming user profiles for users in that branch office are stored on the branch file server.
    • This data is replicated using DFS Replication over the WAN from (multiple) branch file servers to a hub server for centralized backup (using DPM).
    • A DFS Namespace is configured in order to create a unified namespace.
    • Namespace links have multiple targets – the share on the central file server is added as a second DFS-N link target.
    • The namespace may optionally be configured to prefer issuing referrals to the branch office file server. This may be done when administrators require that clients be redirected to the central file server only when the branch file server is unavailable.

    Specifics:

    • In this scenario, in effect, end-users may be directed to any of the available replicas of data. It is expected that in the normal course of events, users will modify the data hosted on the branch office file server.
    • A DFS Namespace is configured and multiple targets are set up for namespace links. In other words, namespace links point to replicas of the share hosted on both the branch office file server as well as the central file server. The namespace may be configured to prefer issuing referrals to the share located on the branch office file server.
    • If the branch office file server were to be unavailable, users would be redirected to the replica on the central hub server.
    • Administrators may also require that roaming users be directed to the copy of their data or user profile that is located on a server closest to their physical location (e.g. for users travelling to another site/branch office, this would be the replica in that office).

    Scenario 2B: DFS Namespaces is configured – multiple link targets, read-only replica on central/hub server

    [Unsupported Scenario]

    Scenario highlights:

    • In this scenario, the exact same configuration described above (scenario 2A) applies with only one difference - the replica on the central server is configured to be a read-only DFS Replicated folder.

    Specifics:

    • In this scenario, in effect, end-users may be directed to any of the available replicas of data. It is expected that in the normal course of events, users will modify the data hosted on the branch office file server.
    • A DFS Namespace is configured and multiple targets are set up for namespace links. In other words, namespace links point to replicas of the share hosted on both the branch office file server as well as the central file server. The namespace may be configured to prefer issuing referrals to the share located on the branch office file server.
    • The replica on the central file server has been configured to be a read-only replica.
    • If the branch office file server were to be unavailable, users would be redirected to the replica on the central hub server. At this point however, the share becomes read-only for applications and the users, since this replica is a read-only replica.
    • Administrators may also require that roaming users be directed to the copy of their data or user profile that is located on a server closest to their physical location (e.g. for users travelling to another site/branch office, this would be the replica in that office).

    What can go wrong?

    • In the deployment scenarios listed above (2A, 2B), it is not guaranteed that replication of home folder or user profile data between the branch office file server and the central file server is always up to date. Many factors may influence replication status, such as large replication backlogs caused by many files changing frequently, files that were not replicated because their file handles were still open, heavy system load, bandwidth throttling, replication schedules, etc.
    • Since DFS Replication does not perform transactional replication of user profile data (i.e. replicating all the changes to a given profile, or nothing at all), it is possible that some files belonging to a user profile may have replicated, whilst some others may not have replicated by the time the user was failed over to the server at the central site.
    • The DFS Namespace client component may fail over the client computer to the central file server if it notices transient network glitches or specific error codes when accessing data over SMB from the branch file server. This may not always happen only when the branch file server is down, since momentary glitches and some transient file access error codes may trigger a redirect.
    • Therefore, there is a potential for users to be redirected to the central file server even if the namespace configuration was set to prefer referrals to the branch file server. If the replica data on the central file server is not in sync, users may be impacted.

    As a result of the behavior described above, the following consequences may be observed:

    • If the central file server is a read-write replica:

      • Roaming user profiles: User profile data may get corrupted since all the changes made by the user in their last logon may not have replicated to the central server. Therefore, the user may end up modifying a stale or incomplete copy of the roaming profile during their next logon, thus resulting in potential profile corruption.
      • Offline Files (CSC)/Folder Redirection: Users may experience data loss or corruption, since the data on the central replica may be stale/out of sync with the data on the branch office file server. Therefore, users may experience data loss, where their latest modifications are not persisted and they are presented a stale/old copy of data.
      • Since DFS Replication is a multi-master replication engine with last-writer-wins conflict resolution semantics, the stale copy of data that was edited on the central file server will win replication conflicts and will overwrite the fresher copy of data that existed on the branch file server, but didn’t replicate out.
    • If the central file server is a read-only replica:

      • When a user is directed to a read-only replica located on the central file server (Scenario 2B), applications and users will not be able to modify files stored on that share. This leads to user confusion since a file that could be modified just a while earlier has suddenly become read-only.
      • Roaming user profiles: If the user is directed to the central file server (read-only replica), the profile may be in a corrupt state since all changes made by the user in their last logon may not yet have replicated to the central server. Additionally, the roaming user profiles infrastructure will be unable to write back any subsequent changes to the profile when the user is logged off, since the replica hosting the user profile data is read-only.
      • Offline Files (CSC)/Folder Redirection: Users may experience data loss or corruption, since the data on the central replica may be stale/out of sync with the data on the branch office file server. Therefore, users may experience data loss, where their latest modifications are not persisted and they are presented a stale/old copy of data. Additionally, users will notice sync errors or conflicts for files that have been modified on their computers. It will not be possible for users to resolve these conflicts, because the server copy is now read-only (since it is hosted on a read-only replicated folder).

    What if one of the link targets is disabled?

    Scenario highlights:

    • In this scenario, the exact same configurations described above (scenario 2A or scenario 2B) apply, with one key difference – the link target that points to the share located on the central hub server is disabled during the normal course of operations.
    • If the branch office file server were to be unavailable, the link target to the central hub server is manually enabled, thus causing client computers to fail over to the copy of the share on the central file server.

    This deployment variant helps avoid the problems caused by DFS Namespaces failing over due to transient network glitches or when it encounters specific SMB error codes while accessing data. This is because the referral to the share hosted on the central file server is normally disabled.

    However, the important thing to note is that the side-effects of replication latencies are still unavoidable. Therefore, if the data on the central file server is stale (i.e. replication has not yet completed), it is possible to encounter the same problems described in the ‘What can go wrong?’ section above. Before enabling the referral to the central file server, the administrator may need to verify the status of replication to ensure that the adverse effects of data loss or roaming profile corruption are contained.

    Support Statement:

    Neither variant of this deployment scenario (2A or 2B) is supported. The following deployment use-cases will not work:

    • TS farm using a DFS Namespace with either the file server in the branch or the central file server as backend store (link targets).
    • Laptops in branch office with offline files configured against a DFS link with multiple targets.
    • Regular desktops with folder redirection configured against a DFS link with multiple targets.

    In this scenario, the following technologies are not supported and will not work as expected:

    • Folder redirection against the namespace with multiple link targets.
    • Client side caching/Offline files.
    • Roaming user profiles.

  • Replacing DFSR Member Hardware or OS (Part 2: Pre-seeding)

    Ned here again. Previously I discussed options for performing a hardware or OS replacement within an existing DFSR Replication Group. As part of that process you may end up seeding a new server’s disk with data from an existing server. Pre-seeded files exactly match the copies on an upstream server, so that when initial non-authoritative sync is performed no data will be sent over the network except the SHA-1 hash of each file for confirmation. For a deeper explanation of pre-seeding review:

    In order to make this more portable I decided to make this a separate post within the series. Even if you are not planning a file server migration and just want to add some new servers to a replica with pre-seeding, the techniques here will be useful. I demonstrate how to pre-seed from Windows Server 2003 R2 to Windows Server 2008 R2 as this is the common scenario as of this writing. I also call out the techniques needed for other OS arrangements, and I will use both kinds of Windows backup software as well as robocopy in my techniques.

    Huge Update!!! We finally have a TechNet article on DFSR Preseeding! It's here! It's called Copying Files to Preseed or Stage Initial Synchronization! Go go go!!! Goooooo!

    There are three techniques you can use:

    • Pre-seeding with NTBackup
    • Pre-seeding with Robocopy
    • Pre-seeding with Windows Server Backup

    The most important thing is to TEST. Don’t be a cowboy or get sloppy when it comes to pre-seeding; most of the cases we get with massive conflict problems were caused by a lack of attention to detail during a pre-seeding that took a functional environment and broke it.

    Read-Only Pre-Seeding

    If you are using Windows Server 2008 R2 and plan to use Read-Only replication, make sure you install the following hotfix before configuring the replicated folder:

    An outgoing replication backlog occurs after you convert a read/write replicated folder to a read-only replicated folder in Windows Server 2008 R2 - http://support.microsoft.com/kb/2285835

    This prevents a (cosmetic) issue where DFSR displays pre-seeded files as an outbound backlog on a read-only replicated folder. A read-only member cannot have an outbound backlog, naturally.

    Pre-seeding with NTBackup

    If your data source OS is Windows Server 2003 R2, I recommend you use NTBackup.exe for pre-seeding. NTBackup correctly copies all aspects of a file including data, security, attributes, path, and alternate streams. It has both a GUI and command-line interface.
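    The backup step can also be scripted with NTBackup's command-line interface. A minimal sketch, assuming a replicated folder at E:\RF01 and a flat .bkf file written to a spare D: drive (both paths are illustrative):

```shell
rem Back up one replicated folder to a flat .bkf file.
rem /j names the backup job; /f specifies the target backup file.
ntbackup backup "E:\RF01" /j "DFSR pre-seed" /f "D:\preseed.bkf"
```

    The resulting .bkf file is then copied to the new server and restored there, as described in the procedure below.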

    Prerequisites

    If pre-seeding from Windows Server 2003 R2 to Windows Server 2003 R2, no special changes have to be made. If pre-seeding from Windows Server 2003 R2 to Windows Server 2008 or Windows Server 2008 R2, you will need to download an out-of-band version of NTBackup to restore the data:

    More info on using NTBackup: http://support.microsoft.com/kb/326216/pl

    Critical note: Restoring an entire volume (rather than specific folders, as demonstrated below) with NTBACKUP will cause all existing replicated folders on that volume to go into non-authoritative sync. For that reason you should never restore an entire volume if you are already using DFSR on a server volume being pre-seeded. Just restore the replicated folders like I do in the examples.

    Procedure

    1. Start NTBackup.exe on the Windows Server 2003 R2 DFSR computer that has the data you are going to pre-seed elsewhere.

    2. Select the Replicated Folder(s) you are going to pre-seed. In the example below I have two RFs on my E: drive:

    image

    Note: When selecting the replicated folders, you can optionally de-select the DFSRPRIVATE folders underneath them to save time and space in the backup.

    3. Back up to a flat file format (locally, if you have the disk capacity).

    4. When the backup is complete, copy that file over to your new server that is going to replicate this data in the future. If the server is Win2008 or Win2008 R2, make sure you have the NT Restore tool installed.

    Note: very large files – such as NTBackup BKF files that are hundreds of GB – can be copied much faster over a gigabit LAN by using tools that support unbuffered IO. A few Microsoft-provided options for this are:

    5. Start the NTBackup tool on your new DFSR server that you are pre-seeding.

    image

    6. Select to restore data. In the Win2008/R2 restore tools, this is the only option available.

    7. Select the backup file, then drill down into the backed up files so that you select the parent folders containing all the user data.

    image

    Note: You may need to select “Tools”, then “Catalog a backup file” to select a backup to restore.

    image

    8. Change the “Restore files to:” dropdown to “Alternate Location”.

    9. Specify the “Alternate Location” path to match what it should be on the new server. In my case the replicated folders had existed on the root of the drive, so I restored them to the root of the new server’s data drive (E:\).

    image

    Note: By default the security and mount points will be restored. Security must be restored or file hashes will change and the pre-seeding operation will fail. DFSR doesn’t replicate junction points so there is no need to check that box.

    image

    10. At this point you are done pre-seeding. See section Validating Pre-Seeding. When that is complete you can proceed with replicating the data. You have the option to delete the DFSRPrivate folder that was restored within your RF(s) at this point, as it will not be useful for pre-seeding.

    Pre-seeding with Robocopy

    If your data source OS is Windows Server 2008, I recommend you use Robocopy for pre-seeding. While Windows Server 2008 supports Windows Server Backup, it lacks granularity in backing up files. Robocopy can also be used on the other operating systems, but a backup-based approach is preferable where available.

    Prerequisites

    Robocopy is included with Windows Vista and later, but there have been subsequent hotfix versions that are required for correct pre-seeding. It is not included with Windows Server 2003. You must install the following on your computer that will be pre-seeded, based on your environment (there is no reason to install on the server that currently holds the old data files):

    • Download latest Windows Server 2008 R2 Robocopy (KB979808 or later - current latest as of this update is KB2639043)
    • Download latest Windows Server 2008 Robocopy (KB973776 or later)
    • Download Windows Server 2003 robocopy (2003 Resource Kit)

    Note: Again, it is not recommended that you pre-seed a new Windows 2003 R2 computer using Robocopy.exe as there are known pre-seeding issues with the version included in the out-of-band Windows Resource Kit Tools. These issues will not be fixed as Win2003 is out of mainstream support. You should instead use NTBackup.exe as described previously.

    More info on using robocopy: http://technet.microsoft.com/en-us/library/cc733145(WS.10).aspx

    Procedure

    1. Log on to the computer that is being pre-seeded with data from a previous DFSR node. Make sure you have full Administrator rights on both computers.

    2. Validate that the Replicated Folders that you plan to copy over do not yet exist on the computer being pre-seeded.

    Critical note: do not pre-create the base folders that robocopy is copying and copy into them; let robocopy create the entire source tree. Under no circumstances should you change the security on the destination folders and files after using robocopy to pre-seed the data, as robocopy will not synchronize security if the file’s data stream matches, even when using /MIR.

    Consider robocopy a one-time option. If you run into some issue with it, delete all the data on the destination and re-run the robocopy commands. Do not try to “fix” the existing data as you are very likely to make things worse.

    image

    3. Sync the folders using robocopy with the following argument format:

    Robocopy.exe "\\source server\drive$\folder path" "destination drive\folder path" /b /e /copyall /r:6 /xd dfsrprivate /log:robo.log /tee

    For example:

    image

    Note: You have the option to use the multi-threaded /MT option, available starting in the Win2008 R2 version of Robocopy, to copy more than one file at a time. The downside of /MT is that you cannot easily see copy progress.

    Note: You also have the option to use the /LOG option to redirect all output to a file for later review. This is useful to see more specifics about errors if encountered. The downside is that you will see no console progress.

    image

    Note: These arguments use a backup API that can copy most in-use file types (/b), include subfolders and files (/e), copy all aspects of a file (/copyall), retry 6 times if a file copy errors (/r:6), exclude folders called DfsrPrivate (/xd dfsrprivate), write to a log (/log:robo.log), and also output to the console (/tee). The DfsrPrivate exclusion can be changed to a full path if you suspect a legitimate user data folder with that name exists deeper in the Replicated Folder (typically it does not; if any copies exist they are usually from previously replicated folders that should have been cleaned up by a file server administrator).
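    As a concrete sketch of that argument format, with hypothetical server and folder names (OLDSERVER is the existing DFSR member; E:\RF01 must not already exist on the new server):

```shell
rem Pull the replicated folder from the old server's administrative share,
rem letting robocopy create the destination tree itself.
robocopy.exe "\\OLDSERVER\e$\RF01" "E:\RF01" /b /e /copyall /r:6 /xd dfsrprivate /log:robo.log /tee
```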

    4. When the copy completes, validate that there were no errors and that only one folder was skipped (that will be the DFSRPrivate folder).

    Note: If you find FAILED entries, you can review the log for specifics.

    5. At this point you are done pre-seeding. See section Validating Pre-Seeding. When that is complete you can proceed with replicating the data.

    Pre-seeding with Windows Server Backup

    If your data source OS is Windows Server 2008 R2, I recommend you use Windows Server Backup (WSB) for pre-seeding. WSB correctly copies all aspects of a file including data, security, attributes, path, and alternate streams. It has both a GUI and command-line interface. I do not recommend WSB on Windows Server 2008 non-R2, as it lacks granularity in backing up files – refer to the Robocopy section of this article if your source computers are Win2008 non-R2.

    Prerequisites

    Windows Server Backup must be installed as a feature on the DFSR computers; it is not available by default. This can be done through ServerManager.msc or DISM.EXE.
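    As a sketch, the feature can also be installed from the command line with ServerManager's command-line tool; the feature ID Backup-Features is an assumption here and should be confirmed with the -query switch first:

```shell
rem List available features to confirm the ID, then install WSB and its tools.
servermanagercmd -query
servermanagercmd -install Backup-Features -allSubFeatures
```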

    More info on using Windows Server Backup: http://technet.microsoft.com/en-us/library/ee849849(WS.10).aspx
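    The backup itself can be scripted with WSB's command-line counterpart, wbadmin.exe. A hedged sketch with illustrative paths (Windows Server 2008 R2 is assumed, since folder-level -include support requires it):

```shell
rem One-time backup of a single replicated folder to the D: volume, no prompts.
wbadmin start backup -backupTarget:D: -include:E:\RF01 -quiet
```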

    Procedure

    1. Start Wbadmin.msc on the Windows Server 2008 R2 DFSR computer that has the data you are going to pre-seed.

    2. Select “Backup Once” and then under “Select Backup Configuration” choose “Custom”.

    image

    3. Use “Add Items” to select the replicated folders that you will be pre-seeding.

    image

    Note: Do not attempt to exclude the DFSRPrivate junction point folders, as you will receive an error “one of the file paths specified for backup is under a reparse point”.

    4. Select where to store the backup. This can be local if you have another disk with enough capacity, or a remote network location. It cannot be the same drive as the replicated folders being backed up.

    image

    5. If the backup was done locally, copy the WindowsImageBackup folder containing your backup to the location where you will restore the data. It could be a disk on the server you are pre-seeding or a central file share. It cannot be the actual disk(s) you are going to restore data to on the new computer.

    6. Start Windows Server Backup on your server that you are pre-seeding with data and select “Recover”.

    7. Select “A backup stored on another location”.

    8. Select the correct location type. If the file was saved to this server, select “Local drives” and if it’s on another file share choose “Remote shared folder”.

    9. You will see the old source data server in the list. Select the server and proceed.

    image

    10. The backup dates will be listed. By default the most recent will be displayed and this should be your backup; if not choose the correct one.

    image

    11. Select “Files and Folders” for the “Recovery Type”.

    12. For “Items to Recover”, select the server node in the “Available Items” tree. Whatever folder you select here, all of its child objects will be restored. For example, here I had two replicated folders at the root of the drive that I backed up. If I just restore the “E” drive backup contents, both folders will be restored.

    image

    13. Under “Specify Recovery Options” select the destination path. Set “Overwrite the existing versions with the recovered versions”. Make sure that “restore access control list…” is enabled (i.e. checked ON).

    image

    Note: Typically there should be no existing data to overwrite in this scenario; this radio button is selected for completeness. Pre-seeded data should win, which is why you are using it; existing data cannot be trusted.

    14. Restore the data by selecting “Recover”.

    15. At this point you are done pre-seeding. See the Validating Pre-seeding section below. When that is complete you can proceed with replicating the data. You can optionally delete the DFSRPrivate folder that was restored within your RF(s) at this point, as it is no longer useful once pre-seeding is done.

    Validating Pre-seeding

    Now that you have, in theory, pre-seeded correctly, you need to spot check your work and validate that the file hashes match between the servers. If a half dozen match up, you are usually safe to assume the rest worked out; validating every single file is possible, but in a large data set it will be very time-consuming and of little value.

    Prerequisites

    You must have a Windows 7 or Windows Server 2008 R2 computer somewhere in your environment (even if it is not part of the DFSR environment being migrated), as those versions include a new DFSRDIAG.EXE with a file hash checking option. If you do not have at least a Windows 7 computer running RSAT, you will not be able to properly validate SHA-1 DFSR file hash data.

    • If using Win7, install RSAT and add the Distributed File System tools.

    • If using Win2008 R2 servers, add the Feature of Distributed File System tools.

    image

    Note: If you have no copy of Windows 7, you must open a support case in order to gain access to an unsupported internal tool for file hash checking. However, the support case costs at least as much as a copy of Windows 7, and the tool you are provided receives no support, so this is less advisable than purchasing a single Win7 license.

    More info on using DFSRDIAG FILEHASH: http://blogs.technet.com/b/filecab/archive/2009/01/19/dfs-replication-what-s-new-in-windows-server-2008-r2.aspx

    Procedure

    1. Note the path of six files within the source data server. These should be scattered throughout various nested folder trees.

    2. For one of those test files, use DFSRDIAG.EXE to get a hash from the source computer and the matching file on the pre-seeded computer:

    DFSRDIAG.exe filehash /path:"source computer path file"

    DFSRDIAG.exe filehash /path:"pre-seeded computer path file"

    For example:

    image

    3. If DFSRDIAG shows the same hash value for both copies of the file, it has been pre-seeded correctly and matches in all file aspects (data stream, alternate data stream, security, and attributes). If it doesn’t match, you made a mistake in your pre-seeding or someone has changed the files after the fact. Start over.

    4. Repeat for five more files (or more until you feel comfortable that pre-seeding was done perfectly).

    Note: If you want to check every file, consider using DIR /B to build a list of all files on both servers, then using a FOR loop to export the hashes from all of them. But expect to wait a long time. 

    Update 03/04/2011: Paul Fragale has written a DFSRDIAG FILEHASH powershell script that does automated spot checking for you. Grab it here: http://gallery.technet.microsoft.com/scriptcenter/1de44cc1-ce79-4e98-9283-92548fc02af9 
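    If you do decide to check every file, the full-tree comparison can also be scripted. Here is a rough Python sketch of the idea (my own illustration, not a supported tool: it hashes only each file's main data stream, whereas DFSRDIAG FILEHASH also covers alternate data streams, security descriptors, and attributes):

```python
import hashlib
import os

def tree_hashes(root):
    """Map each file's path (relative to root) to the SHA-1 of its data stream."""
    hashes = {}
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip the DFSR metadata folder; it is not replicated user data.
        dirnames[:] = [d for d in dirnames if d.lower() != "dfsrprivate"]
        for name in filenames:
            full = os.path.join(dirpath, name)
            rel = os.path.relpath(full, root)
            h = hashlib.sha1()
            with open(full, "rb") as f:
                for chunk in iter(lambda: f.read(1024 * 1024), b""):
                    h.update(chunk)
            hashes[rel] = h.hexdigest()
    return hashes

def compare_trees(source_root, preseeded_root):
    """Return (missing, extra, mismatched) relative paths between two trees."""
    src = tree_hashes(source_root)
    dst = tree_hashes(preseeded_root)
    missing = sorted(set(src) - set(dst))          # on source only
    extra = sorted(set(dst) - set(src))            # on pre-seeded server only
    mismatched = sorted(p for p in src if p in dst and src[p] != dst[p])
    return missing, extra, mismatched
```

    Matching content hashes here are a weaker signal than matching DFSRDIAG FILEHASH values, so treat this only as a bulk sanity check.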

    Final Considerations

    Keep in mind that unless your data is 100% static or users are not allowed to modify files during pre-seeding and DFSR initial sync, some file conflicts are to be expected. These will be visible in the form of DFSR event log 4412 entries on the server that was pre-seeded. The point of pre-seeding is to minimize the amount of data replicated during the non-authoritative initial sync phase on the downstream server; unless the data never changes, there will always be a delta that DFSR has to catch up on after pre-seeding.

    Series Index

    - Ned “beanstack” Pyle

  • Series Wrap-up and Downloads - Replacing DFSR Member Hardware or OS

    Hey all, Ned here again. Here is the complete set of DFSR upgrade and migration docs:

    A few of you asked if the series around DFSR server replacements would have a “portable” version. I banged those up in DOCX, XPS, and PDF formats. Pick your poison below.

    Thanks and I hope you enjoyed the series.

    - Ned “this was 54 pages with thinned margins” Pyle

  • Replacing DFSR Member Hardware or OS (Part 1: Planning)

    Hello folks, Ned here again to kick off a new five-part series on DFSR. With the release of Windows Server 2008 R2, the warming of economies, and the timing of hardware leases, we have started seeing more questions around replacing servers within existing DFSR Replication Groups.

    Through the series I will discuss the various options and techniques around taking an existing DFSR replica and replacing some or all of its servers. Depending on your configuration and budget, this can range from a very seamless operation that users will never notice to a planned outage where even their local server may not be available for a period of time. I leave it to you and your accountants to figure out which matters most.

    This series also gives updated steps on validated pre-seeding to avoid any conflicts and maximize your initial sync performance. I will also speak about new options you have in this replacement cycle for clusters and read-only replication.

    Finally: people get nervous when they start messing around with gigabytes of user data scattered across a few continents. I’ll be cutting out most of the wacky Ned’isms here and sticking to business in a “TechNet-Lite” style. Hopefully it’s not too boring.

    The Scenario

    The most common customer DFSR configuration is the classic hub and spoke data collection topology. This is:

    • Multiple branch file servers that hold user data in the field and are replicating data back to a single main office hub for backup purposes.
    • Some data may flow from the hub out to the branches but that will generally be very static, such as application installation packages or HR paperwork.
    • The storage on the branch servers is local fixed disks.
    • The storage on the hub server is a SAN.
    • The servers are mostly (if not all) currently running Windows Server 2003 R2 SP2.

    image

    There are variations possible where you might have more SANs, or no SANs, or 50 servers, or 5 hubs. None of that really matters once you understand the fundamentals explained in these simplified examples. Just focus on how this works at the micro level and you will have no trouble at the macro level.

    In my diagrams below the following holds true:

    • All DFSR servers are Windows Server 2003 R2 SP2.
    • The hub uses a fiber-attached SAN, the branch servers have local disks.
    • The topology is hub and spoke. BRANCH-01 and BRANCH-02 replicate with HUB-DFSR, each in their own replication group.
    • All my replacement OSes are Windows Server 2008 R2 (so that it is possible to use clustering and read-only replication).
    • The domain is running Windows Server 2008 R2 DCs (so that it is possible to use read-only).
    • The replacement hubs are clustered to provide higher availability.

    The Options

    There are a number of ways you can replace your servers with new hardware and operating systems. I have ordered them from least to most disruptive to replication. As is often the case, there is an inverse correlation between disruption and cost/effort. In the follow-on articles I go into the specifics of these methods.

    Note: the diagrams are simplified for understanding and are not a complete set of steps. Do not use these diagrams as your sole planning and methodology; keep reading the other articles in the series.

    You may find that you implement a combination of the options depending on your time, budget, and manpower.

    N + 1 Method (Hardware, OS)

    The “N+1” method entails adding a new replacement server in a one-to-one partnership with the server being replaced. This allows replication to be configured and synchronized between the two nodes without interrupting end users for long periods. It also allows both the hardware and the OS to be replaced with newer versions, and pre-seeding is possible. When the servers are synchronized, the old server is removed from replication and the new one is renamed. The con is that you will need enough storage for two hubs, which may be costly if you are currently low on capacity.

    imageimage

    • Figure 1 – Existing environment with old hub and branches 
    • Figure 2 – New hub cluster replicates with old hub only

    imageimage

    • Figure 3 – Old branch servers now replicate with new hub
    • Figure 4 – New branch server replicates with old branch server

    imageimage

    • Figure 5 – New branch server now replicates with new hub server
    • Figure 6 – Old servers removed

     

    Data Disk Swap Method (Hardware, OS)

    The “Data Disk Swap” method does not require twice the storage capacity of the old hub; instead it moves an existing disk (typically a LUN) from an old node to a new one. This also means you get pre-seeding for free. The downside to this method is that replication to the hub is interrupted during the disk move, and a non-authoritative sync then has to happen between the hub and its partners, leaving the branches without backup protection during that timeframe.

    imageimage

    • Figure 1 – Existing environment with old hub and branches
    • Figure 2 – New hub cluster built

    imageimage

    • Figure 3 – Old hub server removed, new hub attached to storage
    • Figure 4 – New branch server replicates with old branch server

    imageimage

    • Figure 5 – New branch server now replicates with new hub server
    • Figure 6 – Old servers removed

     

    Reinstall Method (OS Only)

    The “Reinstall” method can be used to lay down a later operating system over a previous one without upgrading. Files are effectively pre-seeded, as the data is not touched in this method, but replication is halted until the installs are done and replication is reconfigured, leading to a potentially lengthy downtime. The previously installed OS version is immaterial to this method.

    imageimage

    • Figure 1 – Existing environment with old hub and branches
    • Figure 2 – OS’ reinstalled and DFSR rebuilt

     

    Upgrade Method (OS only)

    Finally, the “Upgrade” method is what it sounds like: an in-place upgrade of the OS version using Setup. As long as your servers meet the requirements for an in-place upgrade, this is a supported option. It will not cause replication to synchronize again, but there will be downtime during the actual upgrade itself. It is also not possible to deploy Win2008 R2 if the old computers are running a 32-bit OS, as no in-place upgrade path exists from 32-bit to 64-bit. As with any upgrade, there is some risk that it will fail to complete or end up in an inconsistent state, leading to a lengthy troubleshooting process or blocking this method altogether. For these reasons, upgrades are the least recommended option.

    imageimage

    • Figure 1 – Existing environment with old hub and branches
    • Figure 2 – OS’ Upgraded

    Series Index

    - Ned “full mesh” Pyle

  • AD LDS Schema Files Demystified

    Hi, Russell here. When installing Active Directory Lightweight Directory Services (AD LDS) instances, it is quite possible to paint oneself into a corner rather quickly. That’s because LDS comes with minimal schema definitions. To truly make LDS useful to your applications, you must understand how best to take advantage of the included schema definition files.

    When performing an LDS installation using the AD LDS Setup Wizard, you are offered several schema options:

    clip_image002

    When performing an installation using ADAM SP1, the following schema options are presented:

    clip_image004

    So how do you know which LDF files to select? Well seriously, it all depends upon your intentions, and I’m not talking about whether or not you want to ask our resident Elf out on a date.

    clip_image005

    Ideally, schema definition requirements should be defined by your application developers. But as an AD or server administrator, it will greatly benefit you to assist in the decision-making process, as the choices made during install are permanent. So what to pick?

    Let’s start with definitions of the basic LDF files included in ADAM SP1:

    • MS-InetOrgPerson – Is a Microsoft implementation of the RFC 2798 LDAP Object Class. The InetOrgPerson object is used in many non-Microsoft X.500 and LDAP Directory Services to represent people within the enterprise.
    • MS-User.ldf – Is a Microsoft implementation of the X.500 User class defined in RFC 1274. The User object class is the traditional representation for people within Microsoft’s Active Directory. Import of this LDF provides AD LDS with a base user class, which can be used to define local users.
    • MS-AZMan.ldf – Classes and attributes required to use ADAM/LDS as an authorization store for Windows Authorization Manager, AKA AzMan. Import of this LDF provides the base functionality to use AzMan .NET class libraries to leverage AD LDS as a role-based authorization store.
    • MS-UserProxy.ldf – One of two LDF files required to use the ADAM/LDS bind redirection features. This LDF file extends the default (no LDF imports) schema to allow user synchronization with Active Directory. Useful when you want to allow your internal forest(s) users extranet access to AD LDS-hosted applications.

    I leaned on the word “implementation” in a couple of those definitions. That’s because whenever we discuss Internet RFCs, there is much that’s open to interpretation due to the use of the words “should,” “may,” “shall,” etc. as defined in Key words for use in RFCs to Indicate Requirement Levels. I also pointed out that UserProxy.ldf is one of two ldf files required to use ADAM/LDS for Bind Redirection to Active Directory. That’s because MS-ADAMSyncMetadata.ldf is missing from the ADAM SP1 Setup Wizard. (So is UserProxyFull). Windows Server 2008 and Windows Server 2008 R2 include these additional schema definitions as part of the Setup Wizard:

    • MS-ADAMSyncMetadata – Creates the ADAMSync engine classes and attributes necessary to synchronize Active Directory with ADAM/LDS. Adamsync.exe uses these classes and attributes to translate AD Users into ADAM/LDS users. This gives you the design flexibility of allowing domain users coming in via the internet, to logon to LDS & be authenticated via proxy to the domain.
    • MS-ADLDS-DisplaySpecifiers – New for Windows Server 2008 (it is 2010, after all). Provides the capability to manage ADAM/LDS replication configuration with AD Sites and Services.
    • MS-UserProxyFull – Allows full user or inetOrgPerson attribute definition for synchronized Active Directory users. Originating in ADAM SP1 but hidden from the installation wizard, it is now available as an alternative to MS-UserProxy.ldf.

    What? Hidden from the installation wizard you say? How can that be? Easy, there are actually several, optional schema mods contained within the Windows\ADAM installation directory. The LDF Files are coded with “@@UI-Description: @@excludeFromList” to keep them out of the Setup Wizard GUI. In Windows Server 2008 R2, there are four other LDF files hidden from view:

    clip_image007

    These are actually some of the best files available. It is a shame they are hidden from view:

    • MS-adamschemaw2k3.ldf – This is a representation of Active Directory’s schema for Windows Server 2003 R2
    • MS-adamschemaw2k8.ldf – Like its little brother (no, not Scooter) this is a representation of Windows Server 2008 Schema
    • MS-ADAM-Upgrade-1 – Provides ADAM 1.0 and ADAM SP1 instances the ability to reload SSL certificates and adds the UnexpirePassword controlAccessRight to the schema.
    • MS-ADAM-Upgrade-2 – Introduces Windows Server 2008 R2 Recycle Bin Features into LDS
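    If you are curious which LDF files on your own server carry that marker, you can check for it with a short script. This is purely an illustrative sketch, not a supported tool; the C:\Windows\ADAM path is the default location mentioned above:

```python
import os

HIDDEN_MARKER = "@@excludeFromList"

def hidden_ldf_files(adam_dir):
    """Return the LDF files in adam_dir whose contents carry the
    '@@UI-Description: @@excludeFromList' marker that hides them
    from the AD LDS Setup Wizard."""
    hidden = []
    for name in sorted(os.listdir(adam_dir)):
        if not name.lower().endswith(".ldf"):
            continue  # ignore non-schema files in the directory
        with open(os.path.join(adam_dir, name), "r", errors="ignore") as f:
            if HIDDEN_MARKER in f.read():
                hidden.append(name)
    return hidden

# Example on a server: hidden_ldf_files(r"C:\Windows\ADAM")
```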

    Now why would you need an enticing new 2008 R2 feature such as the Recycle Bin? Uh, I don’t know, perhaps you like to see your users disappear with no way to recover? (No system state backup, no recycle bin to catch mistakes.) I work nights; I see many disaster recoveries, not just for AD LDS, but for AD too. This nifty feature can save you time and money – and most importantly – your job. Until next time.

    -Russell “Rusty aka R2 aka Spaniard” Despain

  • Replacing DFSR Member Hardware or OS (Part 3: N+1 Method)

    Hello readers, Ned here again. In the previous two blog posts I discussed planning for DFSR server replacements and how to ensure you are properly pre-seeding data. Now I will show how to replace servers in an existing Replication Group using the N+1 Method to minimize interruption.

    Make sure you review the first two blog posts before you continue:

    Background

    As mentioned previously, the “N+1” method entails adding a new replacement server in a one-to-one partnership with the server being replaced. That new computer may be using local fixed storage (likely for a branch file server) or using SAN-attached storage (likely for a hub file server). Because replication is performed to the replacement server – preferably with pre-seeded data – the interruption to existing replication is minimal and there is no period where replication is fully halted. This reduces risk as there is no single point of failure for end users, and backups can continue unmolested in the hub site.

    The main downside is cost and capacity. For each N+1 operation you need an equal amount of storage available to the new computer, at least until the migration is complete. It also means that you need an extra server available for the operation on each previous node (if doing a hardware refresh this is not an issue, naturally).

    Because a new server is being added for each old server in N+1, new hardware and a later OS can be deployed. No reinstallation or upgrades are necessary. The old server can be safely repurposed (or returned, if leased). DFSR supports renaming the new server to the old name; this may not be necessary if DFS Namespaces are being utilized.

    Requirements

    For each computer being replaced, you need the following:

    • A replacement server that will run simultaneously until the old server is decommissioned.
    • Enough storage for each replacement server to hold as much data as the old server.
    • If replacing a server with a cluster, two or more replacement servers will be required (this is typically only done on the hub servers).

    Repro Notes

    In my sample below, I have the following configuration:

    • There is one Windows Server 2003 R2 SP2 hub (HUB-DFSR) using a dedicated data drive provided by a SAN through fiber-channel.
    • There are two Windows Server 2003 R2 SP2 spokes (BRANCH-01 and BRANCH-02) that act as branch file servers.
    • Each spoke is in its own replication group with the hub (they are being used for data collection so that the user files can be backed up on the hub, and the hub is available if the branch file server goes offline for an extended period).
    • DFS Namespaces are generally being used to access data, but some staff connect to their local file servers by the real server name, out of habit or lack of training.
    • The replacement computer is running Windows Server 2008 R2 with the latest DFSR hotfixes installed, including KB2285835.

    I will replace the hub server with my new Windows Server 2008 R2 cluster and make it read-only to prevent accidental changes in the main office from ever overwriting the branch office’s originating data. Note that whenever I say “server” in the steps you can use a Windows Server 2008 R2 DFSR cluster.

    Procedure

    Phase 1 – Adding the new server

    1. Inventory your file servers that are being replaced during the migration. Note down server names, IP addresses, shares, replicated folder paths, and the DFSR topology. You can use IPCONFIG.EXE, NET SHARE, and DFSRADMIN.EXE to automate these tasks. DFSMGMT.MSC can be used for all DFSR operations.

    clip_image002

    clip_image004
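    While the IPCONFIG and NET SHARE output can simply be copied into your notes, the name/IP half of this inventory can also be captured programmatically. Here is a rough, illustrative Python sketch (my own invention, not a supported tool; the shares and DFSR topology still require NET SHARE and DFSRADMIN, and the CSV layout is just one possible format):

```python
import csv
import socket

def basic_inventory():
    """Collect the name/IP portion of a server inventory record."""
    hostname = socket.gethostname()
    # getaddrinfo returns one entry per address family; keep the unique IPs.
    ips = sorted({info[4][0] for info in socket.getaddrinfo(hostname, None)})
    return {"server": hostname, "addresses": ";".join(ips)}

def write_inventory(rows, path):
    """Write the collected server records to a CSV file for migration notes."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["server", "addresses"])
        writer.writeheader()
        writer.writerows(rows)

# Example: write_inventory([basic_inventory()], "dfsr-inventory.csv")
```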

    2. Bring the new DFSR server online.

    3. Optional but recommended: Pre-seed the new server with existing data from the hub.

        Note: for pre-seeding techniques, see Replacing DFSR Member Hardware or OS (Part 2: Pre-seeding)

    4. Add the new server as a new member of the first replication group.

    clip_image006

    Note: For steps on using DFSR clusters, reference:

    5. Select the server being replaced as the only replication partner with the new server. Do not select any other servers.

    clip_image008

    6. Create (or select, if pre-seeded) the new replicated folder path on the replacement server.

    clip_image010

    Note: Optionally, you can make this a Read-Only replicated folder if running Windows Server 2008 R2. Make sure you understand the RO requirements and limitations by reviewing: http://blogs.technet.com/b/askds/archive/2010/03/08/read-only-replication-in-r2.aspx

    7. Complete the setup. Allow AD replication to converge (or force it with REPADMIN.EXE /SYNCALL). Allow DFSR polling to discover the new configuration (or force it with DFSRDIAG.EXE POLLAD).

    clip_image012

    8. At this point, the new server is replicating only with the old server being replaced.

    clip_image014

    clip_image016

    9. When done, the new server will log a 4104 event. If pre-seeding was done correctly, there will be next to no 4412 conflict events (unless the environment is completely static, some 4412s are likely, as users will continue to edit data normally).

    clip_image018

    10. Repeat for any other replication groups or replicated folders configured on the old server, until the new server is configured identically and has all the data.

    Phase 2 – Recreate the replication topology

    1. Select the Replication Group and create a “New Topology”.

    clip_image020

    2. Select a hub and spoke topology.

    clip_image022

        Note: You can use a full mesh topology with customization if using a more complex environment.

    3. Make the new replacement server the new hub. The old server will act as a “spoke” temporarily until it is decommissioned; this allows it to continue replicating any last-minute user changes.

    clip_image024

    clip_image026

    clip_image028

    4. Force AD replication and DFSR polling again. Verify that all three servers are replicating correctly by creating a propagation test file using DFSRDIAG.EXE PropagationTest or DFSMGMT.MSC’s propagation test.

    5. Create folder shares on the replacement server to match the old share names and data paths.

    6. Repeat the steps above for any other RGs/RFs that are being replaced on these servers.

    Phase 3 – Removing the old server

    Note: this phase is the only one that potentially affects user file access. It should be done off hours in a change control window in order to minimize user disruption. In a reliably connected network environment with an administrator that is comfortable using REPADMIN and DFSRDIAG to speed up configuration convergence, the entire outage can usually be kept under 5 minutes.

    1. Stop further user access to the old file server by removing the old shares.

    Note: Stopping the Server service with command NET STOP LANMANSERVER will also temporarily prevent access to shares.

    2. Remove the old server from DFSR replication by deleting the Member within all replication groups. This is done on the Membership tab by right-clicking the old server and selecting “Delete”.

    clip_image030

    3. Wait for the DFSR 4010 event(s) to appear for all previous RG memberships on that server before continuing.

    clip_image032

    4. At this point the old server is no longer serving user data or replicating files. Rename the old server so that no further accidental access can occur. If it is a DFS Namespace link target, remove it from the namespace as well.

    clip_image034clip_image036

    5. Rename the replacement server to the old server name. Change the IP address to match the old server.

    clip_image038

    Note: This step is not strictly necessary, but provided as a best practice. Applications, scripts, users, or other computers may be referencing the old computer by name or IP even if using DFS Namespaces. If it is against IT policy to use server names and IP addresses instead of DFSN – and this is a recommended policy to have in place – then do not change the name/IP info; this will expose any incorrectly configured systems. Use of an IP address is especially discouraged as it means that Kerberos is not being used for security.

    6. Force AD replication and DFSR polling. Validate that the servers correctly see the name change.

    7. Add the new server as a DFSN link target if necessary or part of your design. Again, it is recommended that file servers be accessed by DFS namespaces rather than server names. This is true even if the file server is the only target of a link and users do not access the other hub servers replicating data.

    8. Confirm that replication continues to work after the rename.

    clip_image040

    clip_image042

    9. The process is complete.

    Final Notes

    As you can now see, the steps to perform an N+1 migration are straightforward whether you are replacing a hub, a branch, or all servers. Use of DFS Namespaces makes the process more transparent to users. The actual outage time of N+1 is theoretically zero if you are not renaming servers and you perform the operation off hours when users are not actively accessing data. Replication to the main office never stops, so centralized backups can continue during the migration process.

    All of these factors make N+1 the recommended DFSR node replacement strategy.

    Series Index

    - Ned “+1” Pyle

  • New ADFS Content on TechNet Wiki

    Adam Conkle has published some great troubleshooting, tips and tricks and how to articles on TechNet that should help you in evaluating and implementing Active Directory Federation Services.

    AD FS - How to invoke a WS-Federation sign-out

    AD FS 2.0 - "An unexpected error has occurred" error or blank page displayed attempting to log on to SharePoint, Event ID 23 logged

    AD FS 2.0 - The service fails to start. "The service did not respond to the start or control request in a timely fashion. "

    AD FS 2.0 - Query notification delivery failed because of the following error in service broker: 'The conversation handle "{GUID} is not found.'

    Windows Identity Foundation (WIF) - FedUtil.exe on Windows Server 2003 fails with "Object Identifier (OID) is unknown."

    AD FS 2.0 - Prompted for credentials when you are expecting to be allowed anonymous access

    Windows Identity Foundation (WIF) - How to change certificate chain validation settings for web applications

    AD FS 2.0 - How to set the Primary Federation Server in a WID Farm

    AD FS 2.0 - The Admin event log shows Error 111 with System.ArgumentException: ID4216

    Windows Identity Foundation (WIF) throws exception: "ID6018: Digest verification failed for reference"

    AD FS 2.0 - Browsing to Federation Metadata fails "Unable to download federationmetadata.xml"

    AD FS 2.0 - Continuously prompted for credentials when using FireFox 3.6.3

    AD FS 2.0 - How to configure the SPN (servicePrincipalName) for the service account

    AD FS 2.0 - Continuously prompted for credentials while using Fiddler Web Debugger

    AD FS 2.0 - "Script is disabled. Click Submit to continue."

    AD FS 2.0 - How to enable and immediately use AutoCertificateRollover

    AD FS 2.0 - How to perform an unattended installation of an AD FS 2.0 STS or Proxy

    AD FS 2.0 - The AD FS 2.0 Windows Service fails to start - Event 102 and 220 logged

    AD FS 2.0 - How to manually run the AD FS 2.0 Initial Configuration

    AD FS 2.0 - "ID4037: The key needed to verify the signature could not be resolved from the following security key identifier"

     -- Jonathan "Ned's Blog Monkey" Stephens

  • Friday Mail Sack: Barbados Edition

    Hello world, Ned here again. I’m back to write this week’s mail sack – just in time to be gone for the next two weeks on vacation and work travel. In the meantime Jonathan and Scott will be running the show, so be sure to spam the heck out of them with whatever tickles you. This week we discuss DFSR, Certificates, PKI, PowerShell, Audit, Infrastructure, Kerberos, NTLM, Active Directory Migration Tool, Disaster Recovery, and not-art.

    Catluck en ’ dogluck!

    image

    Question

    I need to understand what the difference between the various AD string type attribute syntaxes are. From http://technet.microsoft.com/en-us/library/cc961740.aspx : String(Octet), String(Unicode), Case-Sensitive String, String(Printable), String(IA5) et al. While I understand each type represents a different way to encode the data in the AD database, it isn't clear to me:

    1. Why so many?
    2. What differences are there in managing/using/querying them?
    3. If an application uses LDAP to update/read an attribute of one string type, is it likely to encounter issues if the same routine is used to update/read a different string type?

    Answer

    Active Directory has to support data-storage needs for multiple computer systems that may use different standards for representing data. Strings are the most variable data to be encoded because one has to account for different languages, scripts, and characters. Some standards limit characters to the ANSI character set (8-bit) while others specify another encoding type altogether (IA5 or PrintableString for X.509, for example).

    Since Active Directory needs to store data suitable for all of these various systems, it needs to support multiple encodings for string data.

    Management/query/read/write differences will depend very much on how you access the directory. If you use PowerShell or ADSI to access the directory, some level of automation is involved to properly handle the syntax type. PowerShell leverages the System.String class of the .NET Framework which handles, pretty much invisibly, the various string types.

    Also, when we are talking about the 255-character extended ANSI character set, which includes the Latin alphabet used in English and most European Languages, then the various encodings are pretty much identical. You really won't encounter much of a problem until you start working in 2-byte character sets like Kanji or other Eastern scripts.
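    A quick way to see that boundary is to compare encodings directly. This Python snippet is purely illustrative of general single-byte versus multi-byte encoding behavior (it is not a claim about AD's exact on-disk storage format):

```python
latin = "resume"          # plain Latin-alphabet text
kanji = "\u6f22\u5b57"    # 漢字: two Kanji characters

# For ASCII-range text, the common encodings all agree byte-for-byte,
# which is why the various string syntaxes look interchangeable in English.
assert latin.encode("ascii") == latin.encode("utf-8") == latin.encode("latin-1")

# Kanji simply cannot be represented in an ASCII/8-bit Latin encoding...
try:
    kanji.encode("ascii")
    raise AssertionError("should not get here")
except UnicodeEncodeError:
    pass  # expected: the characters fall outside the encodable range

# ...while Unicode encodings handle it, at a cost of more bytes per character.
assert len(kanji.encode("utf-16-le")) == 4   # 2 bytes per character
assert len(kanji.encode("utf-8")) == 6       # 3 bytes per character here
```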

    Question

    Is it possible / advisable to run the CA service under an account different from SYSTEM with EFS enabled for some files that should not be read by system or would another solution be more appropriate?

    Answer

    No, running the CA service under any account other than Network Service is not supported. Users who are not trusted for Administrator access to the server should not be granted those rights.

    [PKI and string type answers courtesy of Jonathan Stephens, the “Blaster” in our symbiotic “Master Blaster” relationship – Ned]

    Question

    Tons of people asking us about this article http://blogs.technet.com/b/activedirectoryua/archive/2010/08/04/conditions-for-kerberos-to-be-used-over-an-external-trust.aspx and if it is true or false or confused or what.

    Answer

    It’s complicated and we’re getting this ironed out. Jonathan is going to create a whole blog post on how user Kerberos can function perfectly without a Kerberos trust, with an NTLM trust, or with no trust at all. It’s all smoke and mirrors, basically – you don’t need a trust in all circumstances to use user Kerberos. Heck, you don’t even have to use a domain-joined computer. For now, please disregard that article.

    Question

    I followed the steps outlined in this blog post: http://blogs.msdn.com/b/ericfitz/archive/2005/08/04/447951.aspx. Works like a champ and I see the data correctly in the Event Viewer. But when I try to use PowerShell 2.0 on one of those Win2003 DC’s with this syntax:

    Get-EventLog -logname security -Newest 1 -InstanceId 566 | Where-Object { $_.entrytype -match "Success" } | Format-List

    A bunch of the outputs are broken and unreadable (they look like untranslated GUIDs and variables) – Object Type and Object Name, for example:

    image

    Answer

    Ick, I can repro that myself.

    This appears to be an issue in the PowerShell 2.0 Get-EventLog cmdlet on Win2003, where an incorrect value is displayed. The issue does not occur on Win2008/2008 R2 – I verified. Hopefully one of our Premier contract customers will report this issue so we can investigate further and see what the long-term fix options are.

    In the meantime, here’s some sample workaround code I banged up using the legacy Get-WmiObject cmdlet to do the same thing (including returning only the latest event, which makes this pretty slow):

    Get-WmiObject -query "SELECT * FROM Win32_NTLogEvent Where Logfile = 'Security' and EventCode=566" | sort timewritten -desc | select -first 1

    Slower and more CPU intensive, but it works.

    image

    A better long-term solution (for both auditing and PowerShell) is to get your DCs running Win2008 R2.

    Question

    Do you have suggestions for pros/cons on breaking up a large DFSR replication group? One of our many replication groups has only one replicated folder. Over time that folder has gotten to be a bit large with various folders and shares (hosted as links) nested within. Occasionally there are large changes to the data and the replication backlog obviously impacts the ENTIRE folder. I have thought about breaking the group into several individual replication folders, but then I begin to shudder at the management overhead and monitoring all the various backlogs, etc.

    1. Is there a smooth way to transition an existing replication group with one replicated folder into one with many replicated folders? By "smooth" I mean no disruption to current replication if at all possible, and without re-replicating the data.
    2. What are the major pros/cons on how many replicated folders a given group has configured?

    Answer

    There’s no real easy answer – any change of membership or replicated folder within an RG means a re-synch of replication; the boundaries are discrete and there’s no migration tool. The fact that a backlog is growing won’t be helped by more or fewer RG/RF combos though, unless the RG/RF’s now involve totally different servers. Since the DFSR service’s inbound/outbound file transfer model is per server, moving things around locally doesn’t change backlogs significantly*.

    So:

    1. No way to do this without total replication disruption (as you must rebuild the RG’s/RF’s in DFSR from scratch; the only saving grace here is if you don’t have to move data, you would get some pre-seeding for free).
    2. Since each RF would still have a staging/conflictanddeleted/installing/deleted folder each, there’s not much performance reasoning behind rolling a bunch of RF’s into a single RG. And no, you cannot use a shared structure. :) The main piece of an RG is administrative convenience: delegation is configured at an RG level for example, so if you had a file server admin that ran all the same servers that were replicating… stuff… it would be easier to organize those all as one RG.

    * As a regular reader though, I imagine you’ve already seen this, which has some other ways to speed things up; that may help some of the choke ups:

    http://blogs.technet.com/b/askds/archive/2010/03/31/tuning-replication-performance-in-dfsr-especially-on-win2008-r2.aspx

    Question

    Is there an Add-QADPermission (from Quest) equivalent command in AD PowerShell?

    Answer

    There is not a one-to-one cmdlet. But it can be done:

    http://blogs.msdn.com/b/adpowershell/archive/2009/10/13/add-object-specific-aces-using-active-directory-powershell.aspx

    It is – to be blunt – a kludge in our current implementation.

    Question

    I am working on an inter-forest migration that will involve a transitional forest hop. If I have to move the objects a second time to get them from the transition forest into our forest, will I lose the original SID history stored in the sIDHistory attribute?

    Answer

    You will end up with multiple SID history entries. It’s not an uncommon scenario – customers who have been through multiple acquisitions and mergers often end up with multiple SID history entries. As far as authorization goes, it works fine, and having more than one entry is not a problem:

    http://msdn.microsoft.com/en-us/library/ms679833(VS.85).aspx

    Contains previous SIDs used for the object if the object was moved from another domain. Whenever an object is moved from one domain to another, a new SID is created and that new SID becomes the objectSID. The previous SID is added to the sIDHistory property.
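    To make the quoted behavior concrete, here is a tiny illustrative model (plain Python – not the actual AD schema or any migration API; the SID strings are made up) of what two successive inter-forest moves do to an object’s SIDs:

    ```python
    # Hypothetical model of an inter-forest object move: a new objectSID is
    # issued and the previous one is appended to sIDHistory. Attribute names
    # mirror AD for readability; the data structure is purely illustrative.
    def migrate(obj, new_sid):
        obj["sIDHistory"].append(obj["objectSID"])
        obj["objectSID"] = new_sid
        return obj

    user = {"objectSID": "S-1-5-21-ORIG-1001", "sIDHistory": []}
    migrate(user, "S-1-5-21-TRANSITION-2002")  # hop into the transitional forest
    migrate(user, "S-1-5-21-TARGET-3003")      # hop into the destination forest

    # Both earlier SIDs survive, so resources ACL'd against either of them
    # still authorize the migrated user.
    assert user["sIDHistory"] == ["S-1-5-21-ORIG-1001", "S-1-5-21-TRANSITION-2002"]
    ```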

    The real issue is user profiles. You have to make sure that ADMT profile translation is performed so that, after users and computers are migrated, the ProfileList registry entries are updated with the user’s current SID. If you do not do this, when you someday need to use USMT to migrate data it will fail, as USMT does not know or care about old SID history – only the SID in the profile and the current user’s real SID.

    And then you will be in a world of ****.

    image 
    Picture courtesy of the IRS

    Question

    Do you know if there is any problem with creating a DNS record named ldap.contoso.com? Or will there be problems with other components of Active Directory if there is a record called “LDAP”?

    Answer

    Windows certainly will not care and we’ve had plenty of customers use that specific DNS name. We keep a document of reserved names as well, so if you don’t see something in this list, you are usually in good shape from a purely Microsoft perspective:

    909264  Naming conventions in Active Directory for computers, domains, sites, and OUs
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;909264

    This article is also good for winning DNS-related bar bets. If you drink at a pub called “The Geek and Spanner”, I suppose…

    image
    This is not that pub

    Question

    I'm currently working on a migration to a Windows Server 2008 R2 AD forest – specifically the Disaster Recovery plan. Is it a good idea to take one of the DCs offline, and after every successful adprep operation bring it back online? Or, in case something goes bad, use the offline DC to recreate the domain?

    Answer

    The best solution is to put these plans in place:

    Planning for Active Directory Forest Recovery
    http://technet.microsoft.com/en-us/library/planning-active-directory-forest-recovery(WS.10).aspx

    That way no matter what happens under any circumstances (not just adprep), you have a way out. You can’t imagine how many customers we deal with every day that have absolutely no AD Disaster Recovery system in place at all.

    Question

    How did you make this kind of picture in your DFSR server replacement series?

    image

    [From a number of readers]

    Answer

    MS Office to the rescue for a non-artist like me. This is a modified version of the “relaxed perspective” picture format preset.

    1. Create your picture, then select it and use the Picture Tools Format ribbon tab.

    image

    2. Use the arrows to see more of the style options, and you’ll see the one called “Relaxed Perspective, White”. Select that and your picture will now look like a three dimensional piece of paper.

    image

    3. I find that the default has a little too much perspective though, so right-click the picture and select “Format Picture”.

     image 

    4. Use the 3-D Rotation menu to adjust the perspective and Y axis.

    image

    You can get pretty crazy with Office picture formatting.

    image
    Why yes sir, we do have plastic duck eight-ball clipart. Just the one today?

    See you all in a few weeks,

    Ned “please don’t audit me, I was kidding” Pyle