
July, 2012

  • Common DFSN Configuration Mistakes and Oversights

    Hello all, Dave here again. Over a year ago, Warren created an ASKDS blog post covering common DFS Replication (DFSR) mistakes and oversights. Someone asked me recently where the common DFS Namespaces (DFSN) mistakes article was located. Well, here it is now...

    Below you will find details about the more common configuration mistakes or oversights with the DFSN service. I hope this post helps you avoid them. Where possible, I provided links for additional details.

    Improperly decommissioning a namespace server or reinstalling the operating system of a namespace server

If someone removes a namespace server from the network, turns it off permanently, or reinstalls its operating system, clients and administrators alike may experience delays accessing the namespace. If the server was the only namespace server hosting the namespace, namespace access will fail. In addition, this blocks you from creating a new namespace with the original namespace's name, because its configuration information still resides in Active Directory. It is also possible to leave orphaned configuration information in the registry of a namespace server that may prevent it from hosting a future namespace.

    None of these are irreversible situations, but it takes some manual work to remove references to an improperly decommissioned namespace server as detailed here (and check out all those errors you could cause!):

    977511        About the DFS Namespaces service and its configuration data on a computer that is running Windows Server 2003 or Windows Server 2008
    http://support.microsoft.com/kb/977511/EN-US

    Before decommissioning a namespace server, remove the server from all namespaces using dfsmgmt or dfsutil. If a namespace server goes permanently offline, clean up the namespace configuration in Active Directory.
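As a rough sketch using the Windows Server 2003-era dfsutil switches (the namespace and server names below are made up, and the exact switches vary by dfsutil version, so verify them against KB 977511 before running anything):

dfsutil /UnmapFtRoot /Root:\\contoso.com\namespace1 /Server:oldserver /Share:namespace1

dfsutil /Clean /Server:oldserver /Share:namespace1

The first command forcibly removes a dead root target from the domain-based namespace configuration in Active Directory; the second removes leftover namespace configuration from the registry of a server you intend to reuse.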

    Disabling the DFS Namespace service of domain controllers

It may seem tempting, but domain controllers should never have the DFS Namespace service disabled. Clients require access to the DFS service of domain controllers to enumerate trusted domains, obtain lists of available domain controllers, and to process domain namespace referral requests. The DFSN service typically requires very little CPU and memory to service requests, even from numerous clients. Also, realize that Group Policy processing requires issuing DFSN referrals to clients that attempt to access the SYSVOL and NETLOGON shares. SYSVOL and NETLOGON are specialized namespaces that Active Directory automatically manages.

    Always keep the DFS Namespaces service running on domain controllers.

Managing a namespace using the original Windows Server 2003 (non-R2) management MMC snap-in

In Windows Server 2003, there is an original version of the tool to manage DFS Namespaces called dfsgui.msc ("Distributed File System"). This version lacks many features provided by the 2003 R2 (and later) versions of the DFSN management tool, dfsmgmt.msc ("DFS Management"). Although you may use dfsgui.msc to manage a namespace running on these later operating systems, the Windows product team never tested that interoperability. The newer MMC is also much more efficient and has some usability improvements. Lastly, the older tool does not allow the configuration of delegation, referral ordering, client failback, polling configuration, Access-based Enumeration (2008 and later), DFSR, and so on.

    Always use the newer "DFS Management" console in lieu of the "Distributed File System" console.

Configuring the "Exclude targets outside of the client's site" setting on a namespace even though some Active Directory sites have no targets

The major benefit of DFSN is fault tolerance. However, there are times when namespace administrators would prefer a client's file access to fail if the local site's namespace or file server (DFS folder target) is unavailable. A common mistake is to enable this feature without realizing that a number of conditions may prevent clients from reaching any targets.

For instance, to issue proper referrals, the DFSN service must determine the site of all namespace and folder targets. Then, it must identify the site of the client. With the exclusion setting configured, if the client's site cannot be determined, if the client is erroneously associated with a site without any targets, or if the site association of namespace servers and/or folder targets cannot be properly determined, access to the namespace's resources may fail. Below, I attempted to identify the site of the system named “workstation” using DFSUtil, and the operation failed:

    image

    At the bottom of KB article 975440 are methods for troubleshooting this scenario. You may find the other details in the article useful as well.

Before configuring the "exclude targets outside of the client's site" setting, ensure each site containing clients that require access to a namespace path has a target within that site, and verify your Active Directory site configuration is accurate.
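A quick way to spot-check site associations is nltest, which ships with Windows Server (and with the support tools on older clients). The server name below is just an example:

nltest /dsgetsite

nltest /server:fileserver01 /dsgetsite

The first command reports the site of the machine you run it on; the second reports the site that the specified namespace or folder target server believes it belongs to.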

    Configuring multiple DFSN folder targets on a replicated folder hosting user profiles data

Ned's details found here have become the standard for why Microsoft does not recommend or support using Distributed File System Replication (DFSR) and DFSN (multiple folder targets) for user profile data. For example: if a user is referred to a file server that has not yet received recently changed data through replication (or perhaps replication is not enabled on the shared folder), the user will not have access to their expected data. The user will log on and receive an old version of their user profile. At best, this generates helpdesk calls--at worst, a user may unknowingly work with outdated files and lose data.

    Do not use DFSN paths with multiple folder targets to store user profiles or volatile user data. Whenever implementing DFSN, you must be vigilant to ensure users and clients are never affected by referrals to alternate file servers where expected data may not reside.

    Active Directory site configuration doesn't accurately reflect the physical network topology

As mentioned previously, the site associations evaluated by the DFS Namespace service are crucial for directing clients to optimal folder targets. It is important to validate that each site required to have a target has one (or more), that the site configurations defined in Active Directory are accurate, and that you monitor the environment for changes that can affect site associations.

DFSDiag.exe is one way to validate the site associations of all servers involved in the referral process; run it from a client if you suspect referrals aren't working properly. I recommend the "/testreferral" argument, as it diagnoses just about everything related to the namespace's health along with the site associations of clients and targets.

    Routinely use DFSDiag.exe to check for failures before they impact users or prior to going "live" with a new namespace.

    Incorrectly disabling IPv6 on servers involved in the DFSN referral process

As detailed in this blog post, IPv6 can cause the DFSN referral process to fail or be inaccurate (even on networks that do not support IPv6 functionality). If you are using IPv4 addresses that are not part of the private address space (look here for details), then it is likely your servers are registering IPv6 addresses in DNS. If you haven't configured IPv6 subnet definitions in Active Directory, DFSN may use these DNS records when processing referrals, causing site associations to fail and referral ordering to be incorrect.

One easy method to see if this scenario affects you is to simply open the DNS Manager MMC snap-in and scan the host (A and AAAA) records in your domain's forward lookup zone. If any namespace servers or folder targets have registered IPv6 addresses, you must take corrective action. The aforementioned blog post details some options.
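If you would rather check a specific server from the command line, an nslookup query for AAAA records works too (the server name here is hypothetical):

nslookup -type=AAAA namesvr01.contoso.com

If the query returns an IPv6 address for a namespace server or folder target and you have no matching IPv6 subnet definitions in Active Directory Sites and Services, you are in this scenario.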

    Validate your site configuration for IPv6 compatibility.

    Not utilizing multiple namespace servers for domain-based namespaces or a clustered namespace server for standalone namespaces

Avoid single points of failure wherever possible. If you host important data within a DFS Namespace that must be available at all times, there is no reason to have only a single namespace server hosting it. As mentioned previously, DFSN requires very little CPU and memory under typical conditions. If the single namespace server goes down, no one can access the data.

If server resources are scarce and prevent the configuration of additional namespace servers to host a namespace, consider adding domain controllers as namespace servers. On Windows Server 2008 and 2008 R2 domain controllers, install the "DFS Namespaces" role service to get the DFSN management console and command-line tools. Otherwise, the DFSN service is already installed on domain controllers via the DCPromo operation, but none of the management tools are available by default.
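As a sketch, on Windows Server 2008 R2 you can add the role service and management tools from PowerShell; the feature names below may differ slightly by OS version, so confirm them with Get-WindowsFeature first:

Import-Module ServerManager

Add-WindowsFeature FS-DFS-Namespace, RSAT-DFS-Mgmt-Con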

    Always utilize multiple namespace servers. If a standalone namespace is necessary, use a clustered namespace.

    Using the less capable namespace type

    Windows Server 2008 introduced a new type for DFS Namespaces, aptly named "Windows Server 2008 mode". This offers increased scalability and functionality over a "Windows 2000 Server Mode" namespace. As long as your environment supports the minimum requirements for 2008 mode namespaces, there is no reason not to use them.

    To use the Windows Server 2008 mode, the domain and namespace must meet the following minimum requirements:

    • The forest uses the Windows Server 2003 or higher forest functional level
    • The domain uses the Windows Server 2008 domain functional level
    • All namespace servers are running Windows Server 2008 (or later)

Not choosing this mode is a common mistake by administrators. Fortunately, it is possible to migrate from a Windows 2000 Server mode namespace to a Windows Server 2008 mode namespace using the information found here. However, you should perform the migration during a period when your DFS clients can tolerate an outage--migration requires recreating the namespace.

    Standalone namespaces used to be the only option if your namespace required more than 5,000 DFS folders (links). If you fall into this scenario, consider replacing it with a Windows Server 2008 mode namespace.

    Choose the correct namespace mode when creating a namespace.

    Using mount points

    Ned covers this very succinctly in a mail sack response found here. There is no reason to utilize mount points within a DFS Namespace's configuration for the namespace's shared folder. Note: we fully support utilizing mount points under a file share that is the target of a DFS folder.

    Do not use mount points within the namespace.

    Enabling DFSR or FRS on the namespace share

    Generally, we do not recommend enabling DFSR or File Replication Service (FRS) replication on a DFS namespace's share. Doing so causes a number of issues, as the replication service attempts to replicate modifications to the reparse folders maintained by the DFSN service within the share. As DFSN removes or adds these reparse folders, FRS/DFSR may replicate the changes before or after DFSN has an opportunity to update the reparse folders based on recent configuration changes.

    Ken has an excellent blog here that goes into this further. He also covers additional problems if you try to mix DFSN with DFSR read-only replicas.

    Replicate data within separate shared folders and refer clients to those shares using DFS folders within a namespace.

    Placing files directly in the namespace share

    Avoid placing files or folders within the namespace share. Place any data requiring access via the namespace within a share targeted by a DFS folder. This prevents any potential conflicts with reparse folders. It also ensures that in the event a user connects to an alternate namespace server, they always have a consistent view of the data.

    Do not place files or folders within the namespace share.

    Not implementing DfsDnsConfig on namespace servers

“DfsDnsConfig?” you may ask. This setting alters the type of name, NetBIOS or DNS FQDN, used in namespace referrals. As network environments grow larger and more complex, it makes sense to move away from the flat NetBIOS/WINS namespace to a DNS namespace. Only use NetBIOS names if you have network clients that specifically require them and must access DFS namespaces. By using FQDNs, you reduce complexity because clients are no longer dependent on appending the correct DNS suffixes to the targets’ names.

    Configure namespace servers to utilize the FQDN names for namespace referrals.
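As a rough sketch, on Windows Server 2008 or later you can flip this with dfsutil and a DFS Namespace service restart (the server name is made up, and the exact dfsutil syntax varies by version, so double-check it with dfsutil /? first):

dfsutil server registry DfsDnsConfig set ns01.contoso.com

net stop dfs && net start dfs

Remember to make the change on every namespace server hosting the namespace.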

    Using Access Based Enumeration on “Windows 2000 Server Mode” namespaces hosted by 2008 and/or 2008 R2 namespace servers

If you have enabled Access Based Enumeration (ABE) on a “Windows 2000 Server Mode” namespace using the instructions found here, you may find that Windows Server 2008 or 2008 R2 namespace servers do not correctly maintain permissions on their reparse folders. Your custom permissions may be lost if you restart the server or its DFSN service. If any Windows Server 2003 namespace servers also host that namespace, they do not experience this symptom.

An update is available for both Windows Server 2008 and 2008 R2 to correct this behavior. It prevents the DFS Namespaces service from replacing the custom reparse folder permissions you implement via the instructions found in KB article 907458. Again, those instructions are only appropriate if you have a specific requirement to use “Windows 2000 Server Mode” namespaces with 2008 and/or 2008 R2 namespace servers.

If supported by your environment (see the section “Using the less capable namespace type” above), it is better to leverage the native support for ABE in a “Windows Server 2008 mode” namespace. You no longer require icacls.exe (or a similar NTFS security editing tool) on each namespace server. The DFSN service automatically enforces the reparse folder permissions defined in the DFS Namespace configuration. Because of the mode requirement, Windows Server 2003 servers cannot be namespace servers in such a namespace.

    This is also a good opportunity to mention the "latest hotfixes" articles for DFS Namespace servers:

    968429        List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2

    958802        List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2003 and in Windows Server 2003 R2

    Ensure you update the DFSN services, especially if you wish to use ABE with 2008 and/or 2008 R2 namespace servers.

    Configuring domain-based namespaces in an unhealthy Active Directory environment

    All DFS Namespace deployments require a healthy Active Directory environment to operate successfully. That is why the "How DFS Works" TechNet article begins by describing what constitutes an "optimal environment" for DFS. If DNS, Active Directory replication, site configuration, PDC FSMO owner domain controller, DFS service state, client health, network communications, and security settings are not operating properly, DFSN operations will suffer. Too often, administrators deploy DFSN in unhealthy environments and clients are unable to access resources properly.

    Below are some strategies for verifying many components of your environment:

    Use DFSDiag.exe to test all major dependencies of the DFSN service.

    If you don't yet have any namespaces in the environment, you may still run the following command to test the domain controllers for DFSN service state and consistency of their site associations:

    Dfsdiag /testdcs > dfsdiag_testdcs.txt

    If a domain-based namespace already exists, the following commands may be utilized to exercise all of Dfsdiag's tests and output results to the specified file:

    dfsdiag.exe /testreferral /dfspath:\\contoso.com\namespace1 /full > dfsdiag_testreferral.txt

    dfsdiag.exe /testdfsintegrity /dfsroot:\\contoso.com\namespace1 /recurse /full > dfsdiag_testdfsintegrity.txt

Note: Many of DFSDiag's diagnostics are relative to the system running the utility (e.g. site associations, network connectivity, etc.). Thus, running it from various systems throughout your environment gives you a more accurate health assessment. In addition, you can acquire dfsdiag.exe from the Remote Server Administration Tools included with Server editions of 2008 or 2008 R2, or from the RSAT download for Vista or Windows 7. Install the "Distributed File System Tools" role administration tool.

    Evaluate domain controller health, site configurations, FSMO ownerships, and connectivity:

    Use Dcdiag.exe to check if domain controllers are functional. Review this for comprehensive details about dcdiag.

    Suggested commands:

    Dcdiag /v /f:Dcdiag_verbose_output.txt

    Dcdiag /v /test:dns /f:DCDiag_DNS_output.txt

    Dcdiag /v /test:topology /f:DCDiag_Topology_output.txt

    Active Directory replication

    If DCDiag finds any replication failures and you need additional details about them, Ned wrote an excellent article a while back that covers how to use the Repadmin.exe utility to validate the replication health of domain controllers.

    Suggested commands:

    Repadmin /replsummary * > repadmin_replsummary.txt

    Repadmin /showrepl * > repadmin_showrepl.txt

    Always validate the health of the environment prior to utilizing a namespace.

My hope is that you find this article helpful. (I guess sometimes the glass-half-empty folks need an article too!)

    -Dave “Doppelwarren” Fisher

  • Managing the Recycle bin with Redirected Folders with Vista or Windows 7

Hi, Gary here, and I have been seeing a few more questions regarding the Recycle Bin on redirected folders. With the advent of Windows Vista, there was a change in redirected folders and the support for the Recycle Bin: each redirected folder now has a Recycle Bin associated with it. Windows XP only implemented this for the My Documents folder. Unfortunately, the Administrative Templates only give control of the bin as a whole and not of each redirected folder. The global settings can be controlled by the following policies, which work with Windows XP and later:

    User Configuration\Administrative Templates\Windows Components\Windows Explorer

    Display confirmation dialog when deleting files

    Do not move deleted files to the recycle bin

    Maximum allowed Recycle bin size

    Since there are no policies for the individual bins, we have to modify the registry settings for the user in other ways.

    Before you start: modifying these registry keys is officially unsupported. They may stop working at any time after a service pack, hotfix, etc. comes out someday.

You can do that through Group Policy Preferences Registry settings or through a script that imports or sets the registry values. No matter which way you do it, the registry will have the following settings for each known folder's recycle bin:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\KnownFolder\{KnownfolderGUID}

    MaxCapacity (REG_DWORD) in MB with a minimum value of 1

    NukeOnDelete (REG_DWORD) 0 (move to recycle bin) or 1 (delete)

Note: Known Folder GUIDs can be found here.

An example registry path for the redirected My Documents/Documents folder would be:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\KnownFolder\{FDD39AD0-238F-46AF-ADB4-6C85480369C7}
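For example, a logon script could set the Documents values directly with reg.exe; these are the same unsupported registry values described above, here configuring a 1 MB bin that deletes files immediately:

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\KnownFolder\{FDD39AD0-238F-46AF-ADB4-6C85480369C7}" /v MaxCapacity /t REG_DWORD /d 1 /f

reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\KnownFolder\{FDD39AD0-238F-46AF-ADB4-6C85480369C7}" /v NukeOnDelete /t REG_DWORD /d 1 /f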

    What is the easy way to get the registry settings and push them out?

The easy way to get the registry settings is to redirect the desired folders and then, as a user whose folders are already redirected, configure the Recycle Bin for each folder through the GUI: right-click the Recycle Bin icon and open Properties to configure the individual recycle bin settings. The following picture is an example where just the Documents folder was redirected and configured to delete files immediately. As other folders are redirected, additional entries will show up there for each location.

    image

    Once you have the registry settings, you have the following options to push out the desired settings to the client machines:

    Export the KnownFolder key and then import with a logon script or other method to import the values

    1. Open Regedit.exe by clicking the Start button and typing it in the Search section.
    2. Navigate to the following registry location:
      HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\BitBucket\KnownFolder
3. Right-click on the KnownFolder key and choose Export to save it to a file such as recyclebin.reg
4. Use the following example command line to import the recyclebin.reg file in a logon script or GPO script:

    regedit /s recyclebin.reg

Create a Group Policy Registry preference setting to push out the NukeOnDelete and MaxCapacity settings for each folder you want to control. The advantage of this approach is that you can modify the settings later and have them pushed out automatically, and you can use the built-in filtering to control which users or computers receive a specific setting or value.

    1. With GPMC installed on the machine edit or create a new policy in the domain
    2. Navigate within the editor to the following location
      User Configuration\Preferences\Windows Settings\Registry
    3. Right-click and choose New->Registry Item
    4. Click on the “…” button next to the Key Path edit box
    5. Navigate to the KnownFolder key path described in this blog and the GUID of the redirected folder.
    6. Select the MaxCapacity value and click OK
7. On the Common tab, check the box for “Run in logged-on user’s security context (user policy option)”
    8. Click OK
    9. Repeat steps 3-8 for the NukeOnDelete value
    10. Repeat steps 3-9 for each additional GUID you want to control

    What happens to files already in the recycle bin when the setting is applied to the registry?

The NukeOnDelete value just tells Explorer that additional deleted files will not be put in the recycle bin and will instead be removed from the file system. Files and folders already in that bin remain until it is emptied, and MaxCapacity is essentially ignored at that point. It might be best to start out by configuring MaxCapacity to 1 MB and NukeOnDelete to 0. That way the maximum size of the recycle bin is 1 MB, and the next time the user deletes a file the bin is reduced to that size as well. NukeOnDelete can then be changed to 1 later on, and the recycle bin won’t take up any additional space.

    Can we just delete the $RECYCLE.BIN folder from the network location?

    That would be one way to “empty” the specific recycle bin on the user’s behalf. However, no matter what the settings are, the next time a file is deleted it will get recreated so don’t expect it to remain deleted for very long. Also, from then on the recycle bin will honor the setting anyway.

Can the deleted files move to the local recycle bin instead of staying on the network location?

    No. The Recycle Bin folder view is a merged view of all the locations into one view. When a file or folder is deleted, it is just moved and renamed on the same drive. No copying is involved.

While there is no in-box policy to control the individual recycle bin folders, we can manage them through the registry and push those settings out to multiple machines as described above.

    Thanks,

    Gary “Waste Management Consultant” Mudgett

  • Kerberos errors in network captures

    Hi guys, Joji Oshima here again. When troubleshooting Kerberos authentication issues, a network capture is one of the best pieces of data to collect. When you review the capture, you may see various Kerberos errors but you may not know what they mean or if they are real problems. In this post, I’m going to go over many of the common Kerberos errors seen in these traces, explain what they mean, and what to do about it when you see it.

    I designed this post for IT professionals who have experience reviewing network captures. If you are unfamiliar with Kerberos Authentication, I recommend reading Kerberos for the Busy Admin by Rob Greene.

    What is the best way to get the network capture?

    If you are looking for Kerberos related problems, it is important to see the ticketing process over the wire. Follow the steps below to see the requests and possible returned failures. I typically prefer Network Monitor to Wireshark for captures as it gathers the process name, but you can use either one.

    1. To reduce the possibility of caching data, do one of the following:

    • Close/Reopen client application
    • Logoff/Logon client workstation
    • Reboot client workstation

    2. Start the network capture

    3. Clear DNS cache using:

    ipconfig /flushdns

    4. Clear NetBIOS cache using:

nbtstat -RR

    5. Clear user Kerberos tickets using:

    klist purge

    6. Clear system / computer Kerberos tickets using (Vista or higher only):

klist -li 0x3e7 purge

    7. Reproduce the authentication failure with the application in question

    8. Stop the network capture

    image

    Now that you have the capture, you can filter the traffic using the string ‘Kerberosv5’ if you are using Network Monitor. If you are using Wireshark, you can filter using the string ‘Kerberos’.

    image

To be more thorough, load the Authentication Traffic filter, which shows packets containing Kerberos tickets as well. You can do this by clicking the Load Filter button, choosing Standard Filters, and then clicking Authentication Traffic.

    image

    You need to click the Apply button before the filter is actually loaded.

    image

    If there is a lot of traffic, remove the lines for NLMP to reduce some of the noise. Remember to click the Apply button again to make the changes effective.

    image

    Important: Depending on the application, the topology, and the domain structure, it may be beneficial to take simultaneous network captures from various points including the client, middle-tier server(s), and back-end server(s).

    KDC_ERR_S_PRINCIPAL_UNKNOWN

When a domain controller returns KDC_ERR_S_PRINCIPAL_UNKNOWN, it means the client sent a ticket request for a specific Service Principal Name (SPN) and the domain controller was unable to locate, via an LDAP query, a single Active Directory object with that service principal name defined on it.

    There are two major causes of this error. The first is the SPN is not registered to any principal. In that case, you should identify which principal will be decrypting the ticket, and register the SPN to that account. The other major cause for this is the SPN was registered to more than one principal in the same Active Directory domain. In this scenario, the domain controller does not know which principal to use, so it returns the same error. Determine which principal is appropriate, and remove the SPN from the other(s). You can read more about this error here.
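The setspn utility is the quickest way to chase these down; the SPN and account below are placeholders:

setspn -Q http/webapp.contoso.com

setspn -X

setspn -S http/webapp.contoso.com contoso\svcWebApp

The -Q switch shows which account (if any) currently holds the SPN, -X searches for duplicate registrations, and -S registers the SPN only after checking that it does not already exist (older versions of setspn that lack -S use -A, which adds without the duplicate check).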

    Extra details to keep in mind:

The HOST SPN (host/server.domain.com) works for multiple services like HTTP & RPCSS. If you would like to see the default host-to-SPN mappings, use LDP or ADSI Edit and navigate to: CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=[Your Domain Component]. Then look at the sPNMappings attribute.

    image

    KDC_ERR_C_PRINCIPAL_UNKNOWN

    Similar to KDC_ERR_S_PRINCIPAL_UNKNOWN, KDC_ERR_C_PRINCIPAL_UNKNOWN means the domain controller does not know which client principal it should use to encrypt the ticket. The difference here is that instead of a missing or duplicate SPN, there is a missing or duplicate User Principal Name (UPN).

To resolve this, determine if the requestor has the correct UPN. If so, then determine if there is a principal with a matching UPN. If there is a match, look for a duplicate UPN.

You may be scratching your head on the duplicate UPN part, because if you try to add or modify a principal with a duplicate UPN in Active Directory Users & Computers (ADUC), it blocks you from doing so. Active Directory does not actually enforce the uniqueness of User Principal Names; it leaves that up to the application. ADUC checks for duplicates, but other utilities like adsiedit.msc and ktpass.exe do not.

    KDC_ERR_ETYPE_NOTSUPP

Here, the client has requested a ticket from the domain controller using a specific algorithm for which the domain controller does not have a hash. In the request, the client lists all the algorithms it supports. The domain controller picks the highest one that it supports and returns the ticket encrypted with that algorithm. If there are no matches, the domain controller returns KDC_ERR_ETYPE_NOTSUPP.

    One common cause of this is older devices that are requesting DES encrypted tickets. By default, DES encryption is disabled in Windows 7 and Windows Server 2008 R2. Ideally, you should update those devices or Kerberos clients to support the newer encryption algorithms. If they cannot be upgraded or replaced, then you can enable DES through group policy. For a good way to find these devices, I recommend reading Hunting down DES in order to securely deploy Kerberos by Ned Pyle.

Another common cause of this is when a device requests an AES encrypted ticket before you raise the functional level of the domain to 2008 or higher. This does not typically occur on Windows clients, as they request the legacy algorithms in addition to AES. This scenario is more likely to occur on Unix/Linux systems where an administrator specifies a single algorithm in the krb5.conf file. In this case, raise the functional level of the domain or configure the client to use another algorithm, like RC4-HMAC.
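For the Unix/Linux case, the fix is usually a broader encryption type list in krb5.conf. A minimal sketch using MIT Kerberos syntax (adjust for your distribution and security requirements):

[libdefaults]
    default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac
    default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96 rc4-hmac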

    If the client is requesting an algorithm that the domain controller should support, but is still returning the error, try resetting the password on the account and wait for replication to converge. Resetting the password regenerates the hashes stored in the directory.

    Note: Domain controllers do not store the password of the user. Instead, they store various hashes of the password using various algorithms.

    KDC_ERR_PREAUTH_REQUIRED

    If you see this error in the trace, it does not indicate there is a problem at all. The client requested a ticket but did not include the pre-authentication data with it. You will typically see the same request sent again with the data and the domain controller issuing the ticket. Windows uses this technique to determine the supported encryption types.

    KDC_ERR_PREAUTH_FAILED

    KDC_ERR_PREAUTH_FAILED indicates the pre-authentication data sent with the ticket is not valid. It usually means the user does not exist or the password supplied is invalid.

    KRB_AP_ERR_SKEW

To avoid packet replay attacks, Kerberos includes an authenticator with the ticket. The authenticator is based on a timestamp so an attacker cannot reuse it. Getting KRB_AP_ERR_SKEW typically means there is a time synchronization issue in your domain, and the time difference is greater than the default 5 minutes. If this is a common problem, start looking for time drift across the infrastructure.
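The w32tm utility makes it easy to check for drift; the domain controller name below is hypothetical:

w32tm /monitor

w32tm /stripchart /computer:dc01.contoso.com /samples:5 /dataonly

The first command compares time across the domain controllers in the domain; the second shows the offset between the local machine and a specific computer.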

    KRB_AP_ERR_REPEAT

    This is another mechanism created to reject replay attacks. The server caches information from recently received tickets. If the server name, client name, time, and microsecond fields from the Authenticator match recently seen entries in the cache, it will return KRB_AP_ERR_REPEAT. You can read more about this in RFC-1510. One potential cause for this is a misconfigured network device in between the client and server that could send the same packet(s) repeatedly.

    KRB_AP_ERR_MODIFIED

    If a service returns KRB_AP_ERR_MODIFIED, it indicates that the service was unable to decrypt the ticket that it was given. The name of the error suggests that an attacker may have modified the ticket in order to gain access to a system. While this is possible, the most common reason is when the Service Principal Name (SPN) is registered to the wrong account. If Service A gets a ticket encrypted with Service B’s password, Service A cannot decrypt it using its password. There is no way for the service to know why it cannot decrypt the ticket, so it returns this error. To resolve this issue, determine which account is actually running the service and move the SPN to that account. If the service is running as Local System, Local Service, or Network Service, set the SPN on the computer account.

    Another possible cause is a duplicate SPN in two different domains in the forest. If it appears the SPN is registered to the correct account, search the entire forest for a duplicate SPN. For example: Say there is a service in Domain A that uses the SPN http/service.contoso.com and the same SPN exists in Domain B. If a user from Domain B tries to access the service in Domain A, it will fail with this error. The reason for this is the client in Domain B will first try to contact a domain controller in Domain B for that SPN. That domain controller will return a ticket for the account in Domain B.

An interesting issue we see revolves around IIS7 and Kernel Mode Authentication. Typically, you should register the SPN to the account that is running the application pool. Kernel Mode Authentication speeds up authentication requests and performs the decryption in the context of the computer account. In the case of load-balanced web servers, you cannot have multiple nodes using different computer contexts to decrypt the ticket. Either disable Kernel Mode Authentication or set useAppPoolCredentials in the applicationHost.config file of the web server. For more information, review:
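If you go the useAppPoolCredentials route, the change can be made with appcmd. This is a sketch against the Default Web Site; verify the syntax against your IIS version:

appcmd.exe set config "Default Web Site" /section:system.webServer/security/authentication/windowsAuthentication /useAppPoolCredentials:true /commit:apphost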

    KDC_ERR_BADOPTION

    If the domain controller returns KDC_ERR_BADOPTION, it means that one of the KrbFlags set in the KdcOptions is not allowed. You can see a sample of the options in the figure below.

    image

    The most common scenario is a request for a delegated ticket (unconstrained or constrained delegation). You will typically see this on the middle-tier server trying to access a back-end server. There are several reasons for rejection:

    1. The service account is not trusted for delegation

    2. The service account is not trusted for delegation to the SPN requested

    3. The user’s account is marked as sensitive

4. The request was for a constrained delegation ticket to itself (constrained delegation is designed to allow a middle-tier service to request a ticket to a back-end service on behalf of another user, not on behalf of itself).

    KDC_ERR_WRONG_REALM

    This error may occur when a client requests a TGT from a domain controller for a domain to which the client does not belong. This error refers the client to the correct domain and does not indicate a problem. You can read more about it here.

    KDC_ERR_TGT_REVOKED

    Getting a KDC_ERR_TGT_REVOKED error means that the TGT presented to the domain controller in order to get a service ticket is not valid. These errors are common when the client is in a site with a Read Only Domain Controller (RODC) and is attempting to access a resource in another site. Seeing this error does not necessarily mean there is a problem. Read more about the ticketing process with RODCs here.

    Conclusion

Troubleshooting authentication issues is not always cut and dried. I hope you now understand the meanings behind common Kerberos errors and what you can do about them. If you want to dive a bit deeper into this, I recommend reading the RFCs associated with Kerberos.

    Kerberos RFC
    http://www.ietf.org/rfc/rfc1510.txt
    http://www.ietf.org/rfc/rfc4120.txt

    I’ve also linked to a more comprehensive list of Kerberos errors you may encounter.

    Kerberos and LDAP Error Messages
    http://technet.microsoft.com/en-us/library/bb463166.aspx

    Until next time,

    Joji “three-headed puppy” Oshima

  • Friday Mail Sack: I Don’t Like the Taste of Pants Edition

    Hi all, Ned here again. After a few months of talking about Windows Server 2012 to other ‘softies from around the globe, I’m back with the sack. It was great fun – and not over yet, it turns out – but I am finally home for a bit. The only way you don’t know that the next version of Windows is nearly done is if you live in a hobbit hole, so I’ll leave all those breathless announcements to the rest of the internet.

    This week we talk:

    Let’s get to it.

    Question

    I accidentally chose the wrong source replicated folder when setting up DFSR and now I have a few terabytes of data in the preexisting folder. I found your RestoreDfsr script to pull out the intentionally-mangled data, but it’s taking a long time to put everything back. Is there an alternative?

    Answer

    The script just wraps xcopy and makes copies rather than moving files, so it is not scalable when you get into the multi-TB realm (heck, you might even run out of disk space). If it’s reaaaallly slow (when compared to another server just copying some similar files) I’d worry that your disk drivers or firmware or SAN or NAS anti-virus or third party backup agents or whatever are contributing to the performance issues.

    However!

All of the files and folders in the pre-existing folder deeper than the root are not mangled and don’t require any special scripts to copy out. Only the files and folders at the root of the preexisting RF are mangled and require the PreExistingManifest.xml for the heavy lifting. Therefore, a quicker way to fix this would be to just figure out the original folder names at the root by examining the pre-existing manifest file with your eyeballs. Rename them to their original names and then use Windows Explorer MOVE (not copy) to just move them back into the original folder. That would leave only the mangled files in the root of the pre-existing folder, which you could then use the script to restore – presumably with less data to restore and where the slower xcopy performance no longer matters.
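As a sketch with made-up paths and names (get the real mangled-to-original name mappings from PreExistingManifest.xml):

ren "D:\RF01\DfsrPrivate\PreExisting\<mangled folder name>" "Projects"

move "D:\RF01\DfsrPrivate\PreExisting\Projects" "D:\RF01\Projects"

Because the move happens on the same volume, it is effectively a rename and completes almost instantly regardless of how much data is in the folder.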

    Question

    When I run dfsutil diag viewdfsdirs c: verbose on this Win2008 R2 server, I see errors like this:

    Unable to open file by ID
    Unable to open file by ID

    This volume (C:\) contains 5 DFS Directories.
    Done processing this command.
    C:\DFSRoots\root1\link1
    C:\DFSRoots\root2\link2
    C:\DFSRoots\root3\link3

    What is the ID in the error? How can I tell the other two folders that it’s missing?

    Answer

    Dfsutil.exe uses the reparse point index to find DFS links on a volume.

    clip_image002[6]

Due to some error, dfsutil.exe failed to open some of them. We definitely need a better error message that tells you the return code and failed path. Sorry.

First, look in c:\dfsroots. The two link folders not returned in your output above are probably in there. If they are not in c:\dfsroots at all, use:

    DIR c:\ /S /AL

    That returns all reparse points on the volume. Any besides the default ones (in user profiles, programdata, and sysvol) are probably your bad guys. You’d want to make sure they still show up correctly in fsutil, that you have no errors with chkdsk, that they have not been intercepted by some wacky third party, etc.

    You can also use (if you have later OSes):

    Dfsdiag /testreferral /dfspath:\\contoso.com\namespace /full > output.txt

    Question

    I am using USMT 4.0 to migrate users that are members of the Administrators group and using a config.xml to make those users only be members of the Users group on the destination computer.  I am running these USMT scripts as the users themselves, so they are already administrators on both the source and destination computer when scanstate and loadstate run.

    I am finding that the users are still members of administrators after loadstate. Am I doing something wrong or does this XML not work?

<Configuration>
  <ProfileControl>
    <localGroups>
      <mappings>
        <changeGroup from="administrators" to="Users" appliesTo="MigratedUsers">
          <include>
            <pattern>*</pattern>
          </include>
        </changeGroup>
      </mappings>
    </localGroups>
  </ProfileControl>
</Configuration>

    Answer

    Long answer, deep breath:

    1. USMT 4.0 requires that the user running loadstate.exe is a member of the built-in Administrators group and holds privileges SeBackupPrivilege, SeRestorePrivilege, SeTakeOwnershipPrivilege, SeDebugPrivilege, SeSecurityPrivilege.

    2. It is not a best practice that you log on as the end user being migrated or that end users run their own migrations:

    • From a security perspective, it’s bad if USMT migration users have to know the end user’s domain password.
    • From a USMT perspective, it’s bad because the end user’s more complex profile and settings are more likely to be in-use and fight the migration, unlike a simple migration user that exists only to run USMT.
• If end users run it themselves, it’s bad because they have no way to know whether USMT is working correctly.
    • Therefore, you should always use separate migration user accounts.

    It’s easy to misinterpret the results of using this XML, though. It is not retroactive - if the group memberships already exist on the destination before running loadstate, USMT does not alter them. USMT is designed to copy/skip/manipulate source groups but not destroy destination existing groups.

    Since your design requires destination administrator group memberships before running loadstate, this XML cannot work as you desire. If you switch to using separate migration accounts, MDT, or SCCM, then it will work correctly.

    Question

I am using the new Server Manager in Windows Server 2012 Release Candidate to manage the remote machines in my environment. If I right-click a remote server, I see a list of management tools for the roles I installed. When I run some of the GUI tools like LDP or Computer Management, they target the remote servers automatically. However, the command-line tools just show their help. Is this intentional, or should each tool run in the context of the remote server?

    image

    Answer

    All of the command-line tools are run in this fashion, even when they support remote servers (like repadmin or dcdiag) and even when targeting the local server. We can’t get into a design that deals out a million command-line arguments – imagine trying to provide the menus to support all the various remote scenarios with NETDOM, for example. :-D

    clip_image002
    Oy vey

Since providing a remote server alone isn’t enough to make most tools work – Dcdiag alone has several dozen other arguments – we just went with “let’s get the admin a prompt and some help, and let them have at it; they’re IT pros and smart”.

    If you haven’t used Server Manager yet, get to it. It’s a great tool that I find myself missing in my Win200L environments.

    The “L” is for legacy. YeeaaaahhinyourfaceolderproductsthatIstillsupport!!!

    Question

    Does USMT 4.0 migrate the Offline Files cache from Windows XP to Windows 7? My testing indicates no, but I find articles implying it should work.

    Answer

    Unfortunately not. Through an oversight, the migration manifest and down-level plugin DLL were never included. The rules of USMT 4.0 are:

    • USMT 4.0 does not migrate CSC settings and the "dirty" (unsynchronized to server) file cache from Windows XP source computers
    • USMT 4.0 does migrate CSC settings and the "dirty" (unsynchronized to server) file cache from Windows Vista and Windows 7 source computers

    In order to migrate the CSC dirty cache, USMT needs plugin DLLs provided by Offline Files. The Offline Files changes from XP to Windows 7 were huge, but even Win7 to Win7 and Vista to Win7 need the plugins for the path conversions.

To work around this issue, just ensure that users manually synchronize so that all offline files are up to date on the file server. Then migrate.

If you are migrating from Vista (why?! it’s great!!!) or Windows 7 and you want to get the entire cache of dirty and synchronized files, you can set the following DWORD value to force the cscmig.dll plugin to grab everything:

    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\CSC\Parameters
    MigrationParameters = 1
     

    This is rather silly in a LAN environment, as it will take a long time and increase the size of your compressed store for little reason; these files are going to sync back to the user anyway after the migration. Maybe useful for remote users though; your call.

    Question

    I'm using Windows Server 2012 Release Candidate and I'm trying to create a new Managed Service Account. I'm running the following from an elevated PS Active Directory module:

    New-ADServiceAccount -name Goo

    The command fails with error:

    NewADServiceAccount : Key does not exist

    Answer

    There are new steps required for managed service accounts in Win2012. We created an object class called a group Managed Service Account (gMSA). GMSA supersedes the previous standalone Managed Service Account (sMSA) functionality introduced in Windows Server 2008 R2. It adds support for multiple computers to share the same service account with the same password automatically generated by a domain controller. This makes server farms using MSAs possible for the first time. SQL 2012, Win2012 IIS app pools, scheduled tasks, custom services, etc. all understand it. It’s very slick.

The new Microsoft Key Distribution Service on Win2012 DCs provides the mechanism to obtain the latest specific key for an Active Directory account. The KDS also continuously creates epoch keys for use with any accounts created in that epoch period. For a gMSA, the domain controller computes the password based on the keys provided by the KDS, in addition to other attributes of the gMSA. Administrator-authorized Windows Server 2012 and Windows 8 member hosts obtain the password values by contacting a Win2012 DC through Netlogon and cache that password for use by applications. The administrator never knows the password and it’s as secure as it can be – just like a computer account password, it’s the maximum 240 bytes of randomly generated goo, changed every 30 days by default. The new KDS is used for other features besides gMSA as well.

    And this finally brings me to your issue – you have to first create the root key, using the Add-KdsRootKey cmdlet. This root key is then used as part of all the subsequent gMSA work.
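A minimal sketch from an elevated PowerShell session on a Windows Server 2012 DC or management workstation (the account name and DNS host name below are just examples):

# Create the KDS root key; in production, allow time for it to replicate to all DCs.
Add-KdsRootKey -EffectiveImmediately

# In an isolated test lab only, you can backdate the key so it is usable right away:
# Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Now the gMSA creation succeeds.
New-ADServiceAccount -Name Goo -DNSHostName goo.contoso.com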

    If you want to see some preliminary step-by-step documentation, check out Getting Started with Group Managed Service Accounts. I’ll be coming back to this new feature in detail after we RTM Windows Server 2012.

    Question

What does the /disabledirectoryverification option do in DFSRadmin.exe membership new?

    Answer

    If the folder you specified for the RF with /localpath does not exist with membership new, dfsradmin.exe will create it for you by default with the correct permissions. If it does exist already, it will modify the existing permissions to let the DFSR service use that folder. If the folder already exists with the correct permissions, it does nothing. Using this argument prevents all of these convenient actions.

    You would use this argument mainly if you were a crazy person.

    Other Stuff

    I didn’t get to go to Comic-Con this year, but thanks to the InterWaste, you can at least see some of the footage sans the special trailers. The best ones I’ve found are… well, pretty obvious:

    There are often some heavy-duty spoilers in these panels – no whining if you find out that Superman is from Krypton and Batman’s parents were killed by the Joker. They also get a little sweary sometimes.

    Naturally, if comic-con is your thing, you need your fill of cosplay. The two best galleries are Comicvine.com and – rather oddly – tested.com.

    This is how David Fisher sees himself when he looks in the mirror.

    tumblr_m4r7jcJr321qd5d3qo1_500
    Via tumblr

    It’s summer, which means good reading. IO9 has their annual list of the best Sci-Fi and Fantasy to check out. Redshirts was pretty good and I am starting Ready Player One shortly as no one can shut up about it (Ernest Cline will give you a Delorean if you are an ultrageek). Charles Stross is an automatic if you’re in IT or a British person; in that vein I recently enjoyed The Rook and am revisiting William Gibson this month.

    And finally:

    Hey, look it’s Ned in a restaurant, grab a picture!

    image

    And look, he’s with Satan!

    image

    Until next time,

    - Ned “I always feel like someone’s watching me” Pyle

  • I’m Baaaaaccccck

    Hey all, Ned here again. After a few months of training room huffing, airline food loathing, and PowerPoint shilling, I’m back in Charlotte. I’ve got a backlog of legacy product posts to share from colleagues, and with Windows 8 and Windows Server 2012 nigh, more new goo coming your way from me and Mike. And if I can’t come up with a halfway-decent Mail Sack after nearly two months, I’ll eat my own pants – expect that Friday. The blog post, I mean; not trouser consumption.

    See you soon,

    Ned “In case you’re wondering, the Dutch are like the Yankees of Korfbal, and it still rains in England” Pyle

  • Standardizing Dynamic Access Control Configuration – Exporting and Importing Dynamic Access Control objects between Active Directory Forests

    [This is a guest post from Joe Isenhour, a Senior Program Manager in Windows Server. You may remember him from his previous ADFS claims rule post. If you are not yet up to speed on the DAC security suite in Windows Server 2012, I recommend our own Mike Stephens’ treatise Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta - the Neditor]

    Hello, my name is Joe Isenhour and I work in the Windows Server team. I’m continuing our blog series on Dynamic Access Control (DAC) with a post that addresses some of the scenarios related to deploying DAC to multiple Forests.

    You can find a demo video that shows the examples from this post here:

    To give you a little bit of background, DAC objects are stored in the Configuration container in Active Directory. This means that the scope for these objects is the Forest level. All of the Domains in a given Forest will use the same set of DAC objects.

     CN=Claims Configuration,CN=Services,CN=Configuration,<forest-dn>

    If your organization has multiple Active Directory Forests then you may encounter some of the scenarios covered in this post:

    1. Manage DAC objects between new and existing Active Directory Forests.
    2. Standardize DAC objects between existing Forests.

    We have built a couple of tools that will help you with the scenarios above. The tools are included in the Data Classification Toolkit. You can obtain the beta version of the tools here.

    The following conditions are required in order to use the tools that are described in this post:

    • You need an Active Directory Forest with the Windows Server 2012 schema
    • The tools must run on a Windows Server 2012 machine with Windows PowerShell and the Active Directory Administration tools

    Export Dynamic Access Control Objects from a Forest

Let’s say that your company has invested some time authoring Central Access Policies and Claims related data for your corporate Active Directory Forest. Now your CTO office is mandating that some or all of your existing DAC objects need to be included in all of your company’s other Active Directory Forests. This can be challenging for several reasons. One challenge is that the task of creating the objects in the new Forest is labor intensive and prone to error. Another challenge is keeping the unique object IDs consistent between the Forests. Claim Types and Resource Properties have IDs that will need to be identical across Forests in order to enable more advanced scenarios like Claim Transformations.

    The Export-ClaimsConfiguration.ps1 tool will allow you to export specific DAC objects from a given Forest. The tool will allow you to export:

    • Claim Types
    • Resource Properties
    • Central Access Rules
    • Central Access Policies

    By default, the tool will export any dependent DAC objects. For example, if you choose to export only Central Access Policies, the tool will also export any Central Access Rules that are defined for the policies and any Claim Types and Resource Properties that are used in the policies.

The following example demonstrates how to export all Central Access Policies from a Forest (including dependent objects). In this example, the export will be stored in a file called DACExport.xml.

    1. Open Windows PowerShell and navigate to the Data Classification Toolkit Tools directory

     C:\Program Files (x86)\Microsoft\Data Classification Toolkit\Tools

    2. Execute the following command:

     .\Export-ClaimsConfiguration.ps1 -File "C:\Temp\DACExport.xml" -IncludeCentralAccessPolicies

    When the export is finished, you should have a new file called DACExport.xml. Here is an example of the information contained in the file:

    image

As you can see, there are many other data types defined in this file besides just Central Access Policies. There are Central Access Rules, Resource Properties, and a section called domainIdentities, which we will discuss in a moment. This is because the Central Access Policy that was exported has dependencies on other objects. In order for the Central Access Policy to exist in a new Forest, it will need to have all of these dependent objects.

    Take a look at the section titled domainIdentities. This section is used to store a mapping between the Security Identifiers that were found in the Central Access Rules and their friendly SamAccountName. Here is an example of how this section might look:

    image

As you can see, the export tool created an entry for every Security Identity that it discovered during the export and stored the ID along with the SamAccountName.

    Security Identities are unique to a Domain and therefore cannot be exported/imported into other Domains or Forests. This section is used during the import process to map accounts or groups in the target Forest with the accounts found in the export file.

    For a detailed list of the options available for exporting DAC objects, execute the following command:

 .\Export-ClaimsConfiguration.ps1 -Help

    Import Dynamic Access Control Objects into a Forest

    Once you have created your export, you can begin importing the data into other Active Directory Forests. You have a few options when importing:

    WhatIf: The import tool implements the standard Windows PowerShell WhatIf condition. If you import using this switch, none of the data will be written to the directory. Instead, you will see the output of what would have been written to the directory had you not chosen the WhatIf switch.

Overwrite: The import process will not write any data to the directory if it finds a conflict with one or more objects. A conflict occurs when a DAC object already exists in the target Forest and has one or more properties that differ from the object in the export file. You may choose the Overwrite option to overwrite the object properties in the target Forest with the properties found in the export file. Certain properties, like IsSingleValued, can’t be overwritten. In this case, the tool will alert you that the property will not be overwritten, but will continue overwriting the other properties.

The following example demonstrates how to import Central Access Policies from an export file. In this example, the objects will be imported from the file called DACExport.xml.

    1. Open Windows PowerShell and navigate to the Data Classification Toolkit Tools directory

     C:\Program Files (x86)\Microsoft\Data Classification Toolkit\Tools

    2. Execute the following command to import the data in WhatIf mode:

 .\Import-ClaimsConfiguration.ps1 -File "C:\Temp\DACExport.xml" -Server [domaincontroller] -WhatIf

    If the import file contains Central Access Rules then you will likely get prompted to match the identities in the source file with identities in the target Domain/Forest. The tool accomplishes this by matching the SamAccountName in the source file with accounts in the target Forest. You may choose to bypass this prompt using the DontPromptForSIDConfirmation switch. This switch will indicate that you want to accept any matches that are found by the tool.

    image

    In some cases, there may be multiple matching accounts in the target Forest. If that happens, you will be presented with a list of accounts that match the source account and you will need to select the proper account.

    If there are no conflicts in the source file and the target Forest, then the script should complete with output similar to the image below.

    image

If a conflict was found, then you will instead see a report indicating that there is a conflict. Remember, you can choose to overwrite conflicts using the -Overwrite switch, as shown below.
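For example, re-running the import and overwriting any conflicting objects would look like this (same hypothetical file and server as above):

 .\Import-ClaimsConfiguration.ps1 -File "C:\Temp\DACExport.xml" -Server [domaincontroller] -Overwrite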

    If the output from the command looks correct then you can choose to run the command again without the WhatIf switch and write the DAC objects to the target Forest.

For a detailed list of the options available for importing DAC objects, execute the following command:

 .\Import-ClaimsConfiguration.ps1 -Help

    I hope that you’ve found these examples helpful. Watch for more great posts and examples of Dynamic Access Control in Windows Server 2012.

    Joe Isenhour

  • Dynamic Access Control and ISV Goodness

    Hey all, Ned here with a quickie: Robert Paige just published an interesting read on Windows Server 2012 Dynamic Access Control over at the Windows Server blog:

    http://blogs.technet.com/b/wincat/archive/2012/07/20/diving-deeper-into-windows-server-2012-dynamic-access-control.aspx

    It highlights the work done with Independent Software Vendor partners and the products they are creating to integrate with the DAC suite. Definitely worth a read.

As always, I recommend you check out Mike Stephens’ in-depth whitepaper Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta if you are looking to get neck deep.

    - Ned “the regurgitator” Pyle

  • RSA Key Blocking is Coming

    Hey all, Ned here again with one of my rare public service announcement posts:

    In August 2012, Microsoft will issue a software update for Windows XP, Windows Server 2003, Windows Server 2003 R2, Windows Vista, Windows Server 2008, Windows 7, and Windows Server 2008 R2. The update will block the use of RSA cryptographic keys that are less than 1024 bits.

    To understand what all this means, why we are doing this, and how to determine your current certificate usage, check out:

    Ned “Rob Greene made me post this, so I will make him answer your comments” Pyle