
January, 2009

  • Using Group Policy Preferences to Map Drives Based on Group Membership

    Hello AskDS Blog Readers, Mike here again! A common request we hear is how to automatically connect specific network shares to drive letters based on group membership. Mapping network drives based on group membership used to require some programming knowledge-- either VBScript or command shell (batch files). VBScript-based logon scripts can require hundreds of lines of code to provide a complete solution. And batch files require the assistance of helper applications such as IFMEMBER.EXE and NET.EXE, and introduce many challenges with controlling how Windows processes the script. Group Policy Preferences removes the programming requirement and the awkwardness of scripting mapped drives based on group membership. There are many scenarios in which you may want to map a drive letter to a specific network share, including public drive mappings, inclusive group drive mappings, and exclusive group drive mappings.
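    For a sense of what Group Policy Preferences saves you from, here is a minimal sketch of the legacy batch approach, assuming the Resource Kit's IFMEMBER.EXE is available on the client; it borrows the server and group names from the examples later in this post, and the management share name is made up for illustration. IFMEMBER returns the number of matched groups as its exit code, so a non-zero ERRORLEVEL means the user is a member:

    @echo off
    REM Legacy logon script: map M: only for members of CONTOSO\Management.
    REM IFMEMBER.EXE sets ERRORLEVEL to the count of matching groups.
    ifmember "CONTOSO\Management"
    if errorlevel 1 net use M: \\hq-con-srv-01\management /persistent:no

    Every additional group-to-drive pairing means more lines like these in every logon script, which is exactly the sprawl that preference items eliminate.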

    Public drive mappings typically do not require membership in a particular group. However, sometimes public drive mappings do not provide enough granularity. Most organizations have data specific to business units such as accounting, marketing, or human resources. Inclusive group drive mappings solve this problem by allowing a configuration that maps a specific drive letter to a specific network share based on the user being a member of a particular group. This ensures members of the accounting unit receive the drive mappings for accounting, and members of human resources receive theirs. Exclusive drive mappings are not very common; however, they provide the flexibility to prevent a drive letter from being mapped to a network share when the user is a member of a specific group. A good example of an exclusive drive mapping is preventing the CIO or other executives from mapping a drive letter they are likely never to use. Let us take a closer look at these scenarios.

    Public drive mappings

    Producing a Group Policy Preference item to create public drive mappings is simple. The GPO containing the preference item is typically linked to higher containers in Active Directory, such as the domain or a parent organizational unit.

    Configuring the drive map preference item.


    Figure 1 Configuring mapped drive preference item

    Newly created Group Policy objects apply to all authenticated users. The drive map preference items contained in the GPO inherit the scope of the GPO, leaving us to simply configure the preference item and link the GPO. We start by choosing the Action of the item. Drive map actions include Create, Replace, Update, and Delete. These are the actions commonly found in most preference items. Create and Delete actions are self-explanatory. The compelling difference between Replace and Update is that Replace deletes the mapped drive and then creates a new mapped drive with the configured settings. Update does NOT delete the mapped drive-- it only modifies the mapped drive with the new settings. Group Policy Drive Maps use the drive letter to determine if a specific drive exists. The preceding image shows a Drive Map preference item configured with the Replace action. The configured location is a network share named data, hosted by a computer named hq-con-srv-01. The configured drive letter is the G drive. All other options are left at their defaults. This GPO is linked at the contoso.com domain.

    The results of this configuration are seen when using Windows Explorer on the client computer. The following picture shows a user's view of Windows Explorer. We see there is one network location listed here, which is the G drive that is mapped to \\hq-con-srv-01\data.


    Figure 2 Public drive map client view

    Later, we'll see how to use exclusive drive mappings with public drive mappings as a way to exclude public drive mappings from a subset of users.

    Inclusive drive mapping

    Inclusive drive mappings are drives mapped to a user who is a member of (or included in) a specific security group. The most common use for inclusive drive maps is to map remote data shares shared by a specific subset of users, such as accounting, marketing, or human resources. Configuring an inclusively mapped drive is the same as configuring a public drive mapping, but includes one additional step. The following image shows us configuring the first part of an inclusive drive mapping preference item.


    Figure 3 Inclusive drive mapping

    Configuring the first part of an inclusive drive mapping preference item does not make it inclusive; it does the work of mapping the drive. We must take advantage of item-level targeting to ensure the drive mapping item works only for users who are members of the group. We can configure item-level targeting by clicking the Targeting button, which is located on the Common tab of the drive mapping item. The targeting editor provides over 20 different types of targeting items. We're specifically using the Security Group targeting item.


    Figure 4 Security group targeting item

    Using the Browse button allows us to pick a specific group with which to target the drive mapping preference item. A Security Group targeting item accomplishes its targeting by comparing the security identifier of the specified group against the list of security identifiers in the security principal's (user or computer) token. Therefore, always use the Browse button when selecting a group; typing the group name does not resolve the name to a security identifier.
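    As an aside, when troubleshooting a security group targeting item it helps to see exactly which SIDs are present in the user's token. The whoami utility (built into Windows Vista, and available for earlier versions in the support tools) lists each group along with its SID:

    whoami /groups

    If the SID shown in the targeting item does not appear in this list (for example, because the user's group membership changed and they have not logged off and back on to refresh their token), the item will not match.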


    Figure 5 Configured inclusive security group targeting item

    The preceding screen shows a properly configured inclusive targeting item. A properly configured security group targeting item shows both the Group and SID fields. The Group field is strictly for administrative use (we humans recognize names better than numbers); the SID field is what the client-side extension uses to determine group membership. We can tell this is an inclusive targeting item from the text that represents the item within the list: the operative word is "is" in "the user is a member of the security group CONTOSO\Management." Our new drive map item and the associated inclusive targeting item are now configured. We can now link the hosting Group Policy object to the domain with confidence that only members of the Management security group receive the drive mapping. We can see the result on a client. The following image shows manager Mike Nash's desktop on a Windows Vista computer. We can see that Mike receives two drive mappings: the public drive mapping (G: drive) and the management drive mapping (M: drive).


    Figure 6 Client view of inclusive drive mapping

    Exclusive drive mapping

    The last scenario discussed is exclusive drive mapping. Exclusive drive mappings produce the opposite result of inclusive drive mappings; that is, the drive map does NOT occur if the user is a member of the specified group. This becomes useful when you need to make exceptions to prevent specific drives from mapping. Let's add an exclusive security group targeting item to our public drive mapping to prevent specific members of management from receiving the public drive mapping.


    Figure 7 Configured exclusive drive mapping

    The preceding image shows the changes we made to the public drive mapping (from the first scenario). We've added a Security Group targeting item to the existing public drive mapping preference item. However, the targeting item applies only if the user IS NOT a member of the ExcludePublicDrives group. We change this option using the Item Options list. The client view of manager Monica Brink shows the results of applying Group Policy.


    Figure 8 Client view of exclusive drive mapping

    This client applies two Group Policy objects, each containing a drive mapping preference item. One GPO contains our public drive mapping with an exclusive security group targeting item; the other contains the management drive mapping with an inclusive security group targeting item. The client processes the public drive mapping GPO; however, the exclusive targeting item determines that Monica is a member of the ExcludePublicDrives group, so the public drive is not mapped. Monica is also a member of the Management group. Therefore, Monica's group memberships prevent her from receiving the public drive mapping and include her in receiving the management drive mapping.

    Summary

    Drive mapping preference items do not require any scripting knowledge and are easy to use. Leveraging targeting items with drive mapping items increases the power with which you can manage drive mappings for users and computers. Public drive mappings are typically linked at higher levels in the domain and generally apply to a large subset (if not all) of users. Inclusive drive mappings associate a specific subset of data with a specific group of people, oftentimes mapping to logical divisions within an organization such as accounting, marketing, or human resources. Exclusive drive mappings invert the principle of inclusive drive mappings: the user must not be a member of the specified group for the drive mapping to occur.

    Best practices

    Be sure to link GPOs high enough in Active Directory so the scope of the drive mapping affects the largest group of user accounts. Obviously, not every GPO should be linked at the domain; however, if there is an accounting organizational unit with three child OUs, then linking at the Accounting OU affects the largest number of users. Allow your inclusive and exclusive targeting items to do the bulk of your work. GPOs hosting inclusive drive mappings are best used when the number of users needing the drive mapping is smaller than the number who do not. Exclusive drive mappings are best used when the number of users not requiring the drive mapping is smaller than the number who do. These rules help prevent users from becoming members of too many groups and increasing the cost of managing drive mappings within the organization.

    - Mike “Play Some Skynyrd!” Stephens

  • Remote Server Administration Tools (RSAT) Available For Windows 7 Beta

    Ned here. For those testing Windows 7 administration capabilities, this is for you.

    Download here

    This is the list of Windows Server 2008 administration tools which are included in Win7 RSAT Client:

    Server Administration Tools:
    • Server Manager

    Role Administration Tools:
    • Active Directory Certificate Services (AD CS) Tools
    • Active Directory Domain Services (AD DS) Tools
    • Active Directory Lightweight Directory Services (AD LDS) Tools
    • DHCP Server Tools
    • DNS Server Tools
    • File Services Tools
    • Hyper-V Tools
    • Terminal Services Tools

    Feature Administration Tools:
    • BitLocker Password Recovery Viewer
    • Failover Clustering Tools
    • Group Policy Management Tools
    • Network Load Balancing Tools
    • SMTP Server Tools
    • Storage Explorer Tools
    • Storage Manager for SANs Tools
    • Windows System Resource Manager Tools

    If you need any kind of support, head on over to the TechNet forums or drop us a line here.

    - Ned Pyle

  • Using PORTQRY for troubleshooting

    Hi all, Mark from Directory Services again. This time I would like to talk about one of the many tools that we use in troubleshooting network issues. At times you may see errors such as "The RPC server is unavailable" or "There are no more endpoints available from the endpoint mapper" (these error messages can be DNS related at times). Of course, you may instead see an error code such as 1722 or 1753. So you ask, “What does the error code mean?” If you get an error such as 1722 or 1753, open a command prompt and type the following:

    Net helpmsg <error number>

    An example would be net helpmsg 1722; after hitting Enter you will get "The RPC server is unavailable." If the error number is a hex value, it will not resolve using the net helpmsg command.

    Before we get into talking about using portqry we need to understand a little bit more about RPC (Remote Procedure Call). When a server starts, applications and services that are going to listen for client connections may register with the RPC Endpoint Mapper (EPM). EPM keeps a database of all these registrations and the ports they are listening on. Ports are used so that applications can communicate between client and server. By default EPM listens on port 135. When a client needs to know what port an application is listening on, it queries the EPM to find out. This is much like going to the hospital to see a sick friend. The address of the hospital would be the IP address of the server. Going in the door to the information desk to find out what room your friend is in would be the same as querying RPC on port 135. Once you are in the room with your friend, you are connected to the application you wanted to talk to. Some ports such as LDAP (389), SMB (445), etc. are hard coded so they are consistent across all Windows clients and servers. To see a list of common ports go to the KB article here. Applications may also register on what are referred to as ephemeral ports. An ephemeral port is a port above 1024 and less than 65536. Take LSASS (Local Security Authority Subsystem Service) for example: it will usually listen on a couple of ports just above 1024, such as 1025, 1026 or 1027. An example of the output is below:

    Note: I am querying port 135 against a Domain Controller in my examples.

    UUID: 12345778-1234-abcd-ef00-0123456789ac

    ncacn_ip_tcp:192.168.0.30[1025]

    UUID stands for Universally Unique Identifier; no two applications should ever have the same UUID. The UUID is followed by an annotated name if one exists. On the second line is the protocol the application uses and the IP address of the server. The port number the application is listening on follows in the brackets. The example above was taken from the LSASS service, but you can also see multiple entries for a single application, such as this:

    UUID: 12345778-1234-abcd-ef00-0123456789ac

    ncacn_np:\\\\server1[\\PIPE\\lsass]

    Note that the UUIDs are the same and the annotated name is the same if one is present, but the protocol is different, and we now have a server name path because this is the named pipe instance. So how do we query the Endpoint Mapper to find out what applications are registered? Glad you asked! First we need to download the portqry command line tool from here or the GUI tool from here. I prefer using the command line tool and piping the results to a text file. Once you have the tool downloaded and installed, open a command prompt. Let's say you have issues connecting to another server (call it Server1) across a WAN where there is a firewall in place. At the command prompt run the following command:

    portqry -n server1 -e 135 > port135.txt & port135.txt

    This will query the RPC EPM service (-e switch) on server1 (-n switch) and pipe the results to a text file. The use of the "&" and the second reference to the text file will open the file after the command is done. Now look at the text file. The first portion is simply taking the server name that you are querying and resolving it to an IP address:

    Querying target system called:

    Server1

    Attempting to resolve name to IP address...

    Name resolved to 192.168.0.30

    The next line is very critical:

    TCP port 135 (epmap service): LISTENING

    Querying Endpoint Mapper Database...

    Server's response:

    If something other than LISTENING is returned then there could be a problem with a port being blocked somewhere. If the port shows FILTERED then a firewall or VLAN could be blocking it; if the port returns NOT LISTENING then we reached the machine, but the machine is not listening on that port number. If you get a response of LISTENING or FILTERED, check whether the query was over TCP or UDP; most likely it was UDP, and this is a normal response because UDP is connectionless (portqry cannot always tell the difference). An example of this would be querying port 88 for Kerberos against a DC using the following syntax:

    portqry -n server1 -e 88 -p both

    Querying target system called:

    Server1

    Attempting to resolve name to IP address...

    Name resolved to 192.168.0.30

    TCP port 88 (kerberos service): LISTENING

    UDP port 88 (kerberos service): LISTENING or FILTERED

    By default portqry will only query the port over TCP. By using the -p switch we can tell the tool which protocol we want to use; specifying both after -p tells the utility to query both TCP and UDP. In the above example Kerberos uses UDP by default, which is why I wanted to check both protocols.

    Now back to querying the endpoint mapper. Let's say you have two Domain Controllers with a firewall separating them and they are not replicating. You believe the networking admin has all the ports open on the firewall. From one DC, query the other DC on port 135 and look at the output. Search the text file for lsass and note the UUID. Scroll up or down in the text file until you find the port number in brackets, such as [1025]. Now run portqry again using this:

    portqry -n server1 -e 1025

    Does it return LISTENING? If so, good; if it shows FILTERED then the port is blocked on the firewall. Keep searching for lsass, as you will find it listens on more than one port, usually two different ones. Once you find the second port, run the command again using the second port number and verify it is listening as well. You can also search the text file for FRS (File Replication Service) and query its port. If you query port 389 (LDAP) then you will get a stream of information from the DC such as below:

    currentdate: 12/16/2008 23:37:23 (unadjusted GMT)
    subschemaSubentry: CN=Aggregate,CN=Schema,CN=Configuration,DC=domain,DC=com
    dsServiceName: CN=NTDS Settings,CN=server1,CN=Servers,CN=Corp,CN=Sites,CN=Configuration,DC=domain,DC=com
    namingContexts: DC=domain,DC=com
    defaultNamingContext: DC=domain,DC=com
    schemaNamingContext: CN=Schema,CN=Configuration,DC=domain,DC=com
    configurationNamingContext: CN=Configuration,DC=domain,DC=com
    rootDomainNamingContext: DC=domain,DC=com
    supportedControl: 1.2.840.113556.1.4.319
    supportedLDAPVersion: 3
    supportedLDAPPolicies: MaxPoolThreads
    highestCommittedUSN: 1541417
    supportedSASLMechanisms: GSSAPI
    dnsHostName: server1.domain.com
    ldapServiceName: domain.com:server1$@DOMAIN.COM
    serverName: CN=SERVER1,CN=Servers,CN=Corp,CN=Sites,CN=Configuration,DC=domain,DC=com
    supportedCapabilities: 1.2.840.113556.1.4.800
    isSynchronized: TRUE
    isGlobalCatalogReady: TRUE
    domainFunctionality: 2
    forestFunctionality: 0
    domainControllerFunctionality: 2

    You can use the -p both switch with this command and it will query both UDP and TCP, returning the same information each time. Note that UDP will show up as LISTENING or FILTERED, as discussed before. If you want to query multiple ports you can use a switch such as this: -o 88,389,445. All you have to do is separate the port numbers with commas, and it will run against all the ports you specify, one right after another. If you want to do a range of ports, say between 1100 and 1105, use -r 1100:1105.
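    Putting those switches together, a single command can check a set of common DC ports over both protocols and drop the results into a text file for review (the server name is just an example):

    portqry -n server1 -o 88,389,445 -p both > dcports.txt & dcports.txt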

    If you prefer to use a GUI-based tool instead of the command line, download the GUI version from the link above or here. Extract the files, then run portqueryui.exe to open the tool. Enter the IP address or server name in the first field; under Query Type you will notice a drop-down box of services to query. These predefined services map to ports defined in the config.xml file located in the same directory. This is useful for gathering information against multiple commonly queried ports at once, and the results go to a window you can simply copy and paste into a text file for review later. The tool also allows you to manually enter the port numbers and protocols that you want to query.

    You can use portqry to troubleshoot several different issues. I had a case where checking the NTFS permissions on a folder took up to 20 seconds to resolve each SID to a name. In a network trace we saw the traffic going from the server to a DC. We queried port 135 to find out what ports LSASS was listening on, and it returned two different ports. In the trace we could see the server send a SYN packet to the first port and never get a response. It kept trying that port for almost 20 seconds, then failed over to the second port on the list and resolved the SID to a name. We ran portqry against the first port number and showed the customer that the port was being filtered on the VLAN, which he later confirmed was true. Another time we used portqry to confirm a problem with a trust between two domains, which is a common scenario for us.

    One last note: you cannot run portqry on a server back against itself. To see the same information locally, run the following command on the server itself:

    netstat -ano > netstat.txt & netstat.txt

    This will list the protocol (TCP or UDP), the local address, the foreign address (a remote machine that has a connection to your server), its state, and then the PID of the process that is listening locally. Here is a sample:

    Active Connections

    Proto Local Address Foreign Address State PID

    TCP 0.0.0.0:53 0.0.0.0:0 LISTENING 1424

    TCP 0.0.0.0:88 0.0.0.0:0 LISTENING 400

    TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 748

    TCP 0.0.0.0:389 0.0.0.0:0 LISTENING 400

    TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4

    TCP 0.0.0.0:464 0.0.0.0:0 LISTENING 400

    TCP 0.0.0.0:593 0.0.0.0:0 LISTENING 748

    TCP 0.0.0.0:636 0.0.0.0:0 LISTENING 400

    TCP 0.0.0.0:1025 0.0.0.0:0 LISTENING 400

    TCP 0.0.0.0:1027 0.0.0.0:0 LISTENING 400

    To find what process is listening, open Task Manager, go to View – Select Columns, put a check mark in the box for PID, and click OK. You can click on the PID column to sort it numerically, which makes looking up the number easier. In my example above, port 53 is DNS, and its PID of 1424 resolves to dns.exe; 400 is lsass.exe; 748 is svchost.exe (one of many svchost.exe instances); etc.
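    If you prefer to stay at the command prompt, the tasklist utility can do the same PID-to-process lookup; adding /svc also shows which services are hosted inside the process, which is handy for the svchost.exe entries:

    tasklist /svc /fi "PID eq 748"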

    I hope I have not confused anyone too much, and I hope this helps in troubleshooting your network issues. Thanks!

    References:

    816103 HOW TO: Use Portqry to Troubleshoot Active Directory Connectivity Issues

    http://support.microsoft.com/default.aspx?scid=kb;EN-US;816103

    Troubleshooting networks without Netmon (Ned Pyle has some good information here as well)

    http://blogs.technet.com/askds/archive/2007/12/18/troubleshooting-networks-without-netmon.aspx

    - Mark Ramey

  • Windows Server 2008 R2 DFSR Features

    Ned here again. The cat is out of the bag now and we're a little more free to talk about DFSR features that are planned (not guaranteed - planned) to release with Windows Server 2008 R2. Our friends at the File Cabinet blog have posted an excellent writeup - definitely worth a look:

    DFS Replication: What’s new in Windows Server™ 2008 R2

    Here's the short and sweet list of areas that were added or improved:

    • Support for Windows Failover Clusters
    • Read-only Replicated Folders (now with true filter driver support)
    • SYSVOL on Read-only Domain Controllers (leveraging the improved Read-only functionality)
    • Diagnostics Improvements (DFSRDIAG adds support for replication state, record translation, and file hash checking for pre-seeding)

    You can try all these out in a test environment right now - hurry up and grab the ISO's before it's too late.

    - Ned 'The Short Simpson' Pyle

  • Seeing the domains through the forest: What you need to know to build your career in Directory Services technologies

    Hi, Steve again. I thought I would speak, through a series of posts, about what knowledge is critical to fulfilling the Windows Server Domain Admin role. This topic carries a ton of breadth and depth. To begin, you have to find out where all this knowledge and training is located. My goal is to get you started down this path so you are exposed to the different technologies that you will need to understand and master in order to become a Domain or Enterprise administrator.

    Building the depth of knowledge required may take years. With some help and guidance I hope to reduce this time to several months. For folks that have already started on this path or are already fulfilling this role, some of the topics I reference may hold value as well. A lot of great information for Windows Server 2003 exists, and I will focus on those resources.

    As I go through this post there will be links to more information. The links consist of required reading to achieve our goal of being domain admin ready; it may take some folks weeks to progress through it all. I intend to write follow-up posts that go into more detail, especially focused on the design portions of Active Directory (AD). I would like to present some examples of a theoretical company's environment and build an actual AD design.

    So let’s get started. You have been using a computer for your personal use and you have just been hired to the helpdesk at your company to manage user accounts.

    Microsoft has published an enormous library of technical and other information on TechNet. What is TechNet? Well, that would be a blog unto itself, but as a quick reference you can find the technical libraries for most of our products there. You will also find educational resources, downloads, event information, webcasts and newsgroups. Microsoft also provides a learning portal designed for IT Professionals here.

    Let's talk about design first; each company has to choose an AD design. The simplest is a single forest/single domain, where all of your accounts are stored in one domain. By default the first domain you create is the forest root domain; you can add more domains to your forest as children of the root, or add new domains as separate trees in the forest. So how do you decide how many domains you should have? The vast majority of companies can live comfortably within one forest, but may require multiple domains within the same forest for a variety of reasons. This link discusses planning an AD deployment and choosing a logical structure. The more complicated your design, the more time and effort is required to manage your environment. Ask yourself whether your company requires more than one group of administrators to manage computer and user accounts, and whether it requires isolation of data or resources for security purposes. If the answer is clearly yes, then you can plan on having more than one domain. As a general rule of thumb, try to match your AD structure to your company structure whenever possible. Domains are quite often used to isolate and group resources, and normally domain administrators do not have access to resources within another domain. Locality might also play an important role in domain structure: whether for network isolation or even language differences, several companies have chosen to isolate different geographic locations into separate domains. In order to choose the correct design for your company you will need to engage participants from all of your business divisions so they can share their requirements for resources.

    AD allows admins to create logical containers within a domain that let you group resources for control and/or manageability. These containers are called Organizational Units, or OUs for short. You may decide to create an OU structure for user accounts that is separate from group accounts or computer accounts. You can further refine your collections of accounts based on business function or geography.

    AD also supports using objects to describe the physical organization of your network. For example, a site is defined by one or more network subnets. AD sites control what AD resources within the domain or forest a client should use. Typically we want our clients to use resources within their local site rather than traversing to other sites.

    By now, you are probably getting the picture that AD design is flexible enough to support a wide variety of logical models. As an exercise, you might consider an AD design for a large international company named Contoso. To make it easy, say we have 30,000 employees in Redmond, WA, USA and 8,000 in North America, South America, Europe and Asia. Right from the start you are going to have questions where you will need to engage other business divisions to get answers. For instance, the first question you might want to answer is: "Is one forest enough, or should I isolate a segment of our business entirely and set up a trust between two forests?" The next question might be: "Should I use one domain, or should I use multiple domains to organize and manage my resources?" You should also engage the network experts in your organization to understand how best to map the physical structure of your network into AD.

    The steps in choosing a design are critical to the success of a company's IT infrastructure. Even though it seems like there is no "right" answer, there are definitely going to be "wrong" answers. The best advice I can give is to design a few models and start discovering the pros and cons of each.

    Choosing a Namespace

    There are fewer factors involved in choosing a namespace design. This document covers your choices well. Different business divisions might have requirements with regard to the namespace, and they will need to be engaged in this discussion. You will find this discussion loops into the AD design as well, and the two can be considered jointly.

    What to put where?

    OK, let's say we have selected a namespace and we know how many forests, domains and sites fit our company best. What's our next step in AD? We need to choose a structure for where our accounts in AD will be stored. We want a logical structure that makes our objects in AD easy to find and manage, and we build it by creating OUs within the domain. One choice is whether to create separate OUs for user accounts and computer accounts or to combine them; keeping them separate will make management easier. You may group similar machines in the same OU, or you may break out accounts based on business function or geographic location.

    Disaster Recovery

    Having a methodical and tested plan is key here. AD is an important component in the organization, and we must maintain its availability. There are several failure scenarios that a recovery plan needs to cover. Here are points of failure that should be included:

    • Individual hardware failure on one DC
    • FSMO role holder failure
    • Malware infects one or all DC’s
    • Admin accidentally deletes data from AD
    • DNS server failure
    • Schema Corruption
    • Natural disasters (hurricanes, tornadoes, etc)

    Each one of these events, or a combination of them, is possible. You will need to work through these potential events and determine a clear and concise set of steps to resolve each problem.

    Regular backup of AD is critical, and testing those backups to make sure you can quickly and easily restore the data is a best practice. All too often backup is scheduled but is not actually running, or the data cannot be restored.

    FSMO Roles

    What are FSMO roles and how should they be distributed in your environment? This varies based on forest size. In the smallest environments we might put all roles on one server; in a large environment we might choose to place each role on a different machine. The PDC Emulator (PDCe) role carries the highest volume of work. In a large domain you would probably want to make sure the DC hosting the PDCe role is isolated from other duties: not targeted by LDAP-based applications, and not doubling as a deployment server, web server, file and print server, etc. Whatever the design, the important thing is making sure the forest and domain have each of these roles assigned to a specific, known DC.

    In addition to server selection, we might also distribute the FSMO role holders by physical location. That may be the most secure site, the site with the best network availability, or the one with the highest number of client requests.

    Time

    Many companies have time requirements for their computer systems. By default the PDCe FSMO role holder is at the top of the time pyramid for the domain. Other domain members use the W32Time service to synchronize their system clocks. Keeping the PDCe synchronized with an accurate source will help keep your domain members' time accurate.
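    The usual way to do that is to point the PDCe at a reliable NTP source with w32tm, run on the PDCe itself (time.windows.com is just an example source):

    w32tm /config /manualpeerlist:time.windows.com /syncfromflags:manual /reliable:yes /update

    The rest of the domain needs no configuration; members continue to follow the domain hierarchy automatically.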

    Provisioning new accounts and Deployment

    As domain admin you will need to know how new accounts are created. Question: "Will HR be creating user accounts within their own software, and does that software create new user accounts in AD?" Some environments have very complex user provisioning scenarios. In simpler scenarios the user account may be created by an administrator, who manually creates the mailbox and configures the user's group membership. In more distributed environments the Account Operators group may be used, and in the most complex scenarios it may be the Human Resources department or the hiring manager who creates the user accounts.
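    In the simple scenario, the administrator might create the account with the directory service command-line tools. Here is a sketch using dsadd; the OU path and names are made up for illustration:

    dsadd user "CN=John Smith,OU=Staff,DC=contoso,DC=com" -samid jsmith -upn jsmith@contoso.com -pwd * -mustchpwd yes

    The -pwd * switch prompts for the initial password instead of placing it on the command line, and -mustchpwd yes forces the user to change it at first logon.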

    Group Policy

    There is a lot of information that falls under the group policy umbrella that a domain admin needs to be familiar with. In a nutshell you can use group policy to configure computer and user configuration settings on machines throughout your domain. You’ll want to familiarize yourself with GPMC, the group policy management tool.

    There are approximately 2,400 settings in Windows Vista that can be set through group policy. This gives admins a good deal of flexibility in configuring computer and user settings. You can use group policy not only to restrict the abilities of specific users in your domain, but also to enhance their experience.

    There are several general categories that can be controlled, including application deployment, certificate enrollment and trust, logon-logoff-startup-shutdown scripts, Restricted Groups settings, Internet Explorer configuration, disk quotas, user folder redirection, user rights and security configuration, etc. The list goes on and on depending on your needs.

    Group policy is also flexible; you can link group policies at the domain, site or OU level. Clients that have membership in these containers, either directly or through inheritance, will receive these policies. Therefore, it follows that your AD design can help ease the distribution of user and computer configurations.

    Here again, collaborating with the different business divisions within your organization is a must. Workstations in the accounting department will most likely need different software, and different access to their local machines, than users in the marketing department. Similarly, web servers will have different configurations than domain controllers. You can individually set the configuration as part of an image during the build process, or you can manually change a machine's configuration, but when you want to change thousands of machines at once, Group Policy is definitely the way to go.

    On top of normal GPO settings, we now have group policy preferences, which increase the flexibility and extend the capability of what administrators can do with group policy.

    Authentication

    As the domain admin you have the proverbial "keys to the kingdom" for the resources in your domain, and protecting those resources is a big responsibility. Access to domain resources requires authentication. There are several levels of authentication, and you will want to implement the highest level of security possible.

    Windows 2003 implements several authentication packages: Negotiate, Kerberos, NTLM, Secure Channel, and Digest. The system is also extensible, so other authentication protocols can be added.

    NTLM

    NTLM is a challenge-response authentication mechanism. The client attempts to access a resource and is challenged for credentials by the server. The client sends the username and a response computed from the hash of the user account's password, and the server attempts to authenticate those credentials against a domain controller in the user's domain. Therefore, the server must chain back to the user's domain for the user to be authenticated. NTLM has several variations, and this is only one iteration. Anonymous access also falls under this category: if the username and password are null, the target machine will attempt to log the user on as anonymous, and if the server resource accepts anonymous authentication then the client will get access.

    Kerberos

    Kerberos is a more secure and efficient form of authentication than NTLM. It is the default authentication package in most cases beginning with Windows 2000. To summarize Kerberos authentication: a client asks for a service ticket for the server resource it wants to access, receives the ticket, and forwards it to the resource server to be authenticated. Wherever possible you will want to configure authentication to use Kerberos.

    Certificates

    Certificates can be used for authentication as well. Certificate technologies have grown in scope and complexity over the past several years, and more and more technologies use certificates to increase security. So even though a certificate is not an authentication protocol, it is used in conjunction with authentication protocols to increase security. For example, smartcard authentication uses a certificate that is installed on a physical card. The card is placed in a smartcard reader and the user provides a PIN to access the certificates on the card. This is two-factor authentication, because we are using something we have (the smartcard) and something we know (the PIN) to authenticate.

    Certificates can also be used to increase security by encrypting network traffic. Secure Sockets Layer (SSL) is a well-known method of encrypting traffic and can provide server identity. S/MIME is another common scenario, where users encrypt and digitally sign their email.

    We are seeing more and more companies implementing their own internal Certificate Authority infrastructure. Having a certificate authority for your domain allows you to assign both user and computer certificates through both automated and manual methods. Using these certificates can significantly increase the security inside and outside your network.

    Applications

    Authentication can be difficult to manage. Two very common scenarios are choosing authentication methods in SQL and IIS. It would be nice if all the applications in your enterprise supported Kerberos so you could worry about just one method, but that's not realistic. It may be an overwhelming task to determine the configuration of every application. The scenarios to worry about are those where plain text or basic authentication is in use. You'll want to restrict this behavior as much as possible and never use your domain admin credentials to access those applications. If basic authentication is the only method available, then at the very least the session should be encrypted using certificates.

    Trusted domains

    Domain admins must determine whether they will allow a trust to be established with another domain or forest. Moving to the Windows Server 2003 forest functional level allows you to establish a forest-level trust, and thereby inherit trusts to the domains within the other forest.

    SSL- Secure Sockets Layer

    We can use certificates to provide encrypted sessions to servers. The most common example would be using HTTP over SSL. In this case, we would issue a server certificate to the web server that confirms the server's identity and allows users to establish an encrypted session. For internet-facing web servers, you would normally purchase a certificate from a trusted authority.

    Another example of using SSL internally within your organization is LDAP over SSL. Typically our domain controllers service client requests over port 389. We can support applications that are LDAPS-enabled by installing a server authentication certificate on our DCs.

    EFS-Bitlocker

    These two technologies provide file encryption. BitLocker was introduced in the Windows Vista operating system. It provides whole-drive encryption that is seamless to the user, protecting both data and operating system files, and it is especially useful on laptops where a user may not be able to maintain physical security of the device. EFS is the technology we use to encrypt specific files on a computer. By default the domain admin is the recovery agent for all EFS files in the domain. These encryption technologies can remove your access to data, and the data may be lost. Care needs to be taken to design a proper system in which the domain admin decides who can use encryption and for what purposes, as well as who will be the recovery agent in case a client cannot decrypt their files.
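    For EFS specifically, the cipher.exe utility shows and changes encryption state from the command line; for example (the path here is hypothetical):

    cipher /e /s:D:\Sensitive

    /e encrypts and /d decrypts, while /s applies the operation to the given directory and everything beneath it. Running cipher with no switches simply reports which files in the current directory are encrypted.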

    Authorization

    Resource access is controlled through an access control list (ACL) in most situations. Fundamentally, we need to determine whether we will create ACLs based on users or groups. We recommend setting security on resources by using domain-based groups whenever more than one user will be accessing the resource. Adding a user to the group gives them access to all of those resources and, conversely, removing them revokes that access. Change management is much easier when group membership matches resource needs. Securing groups and controlling group membership is important, and as a domain admin you must strike a balance between the people who do not want to use any groups and those who would like a person to be a member of a thousand groups.
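    On the file system side, granting a group access is a one-liner with icacls; the share path and group name below are assumed for illustration:

    icacls D:\Shares\Accounting /grant CONTOSO\Accounting:(OI)(CI)M

    (OI)(CI) makes the grant inherit to child files and folders, and M grants Modify access. From then on, access changes are just group membership changes.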

    Auditing

    Some businesses are required by government regulations to maintain auditing at a certain level. Outside of these bounds, security auditing needs to be controlled closely on the domain controllers. This would include not only capturing the data but also periodically reviewing the audit logs to confirm their content.

    Domain Security policy

    Password Policy – This is rather straightforward. You mainly need to determine the level of complexity, the password expiration age, and the lockout threshold. Here, you want a secure password that changes on a regular basis, but not a policy so stringent that it costs your company money in lost productivity and helpdesk calls. Complex passwords are the best way to increase security. On the other hand, there is a school of thought that a low account lockout threshold will increase security.

    The lower the lockout threshold, the more frequently accounts will be locked out. Any number less than 7 will most likely increase lockouts dramatically.

    Choosing the right combination of lockout threshold, duration and complexity will help keep everyone working with an acceptable level of password security.
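    You can see the combination currently in effect for the domain with one command:

    net accounts /domain

    It reports the minimum password length and age, the maximum password age, the lockout threshold, and the lockout duration.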

    Care should be given to examine specific accounts that handle sensitive data. Not only should the data be closely protected but also the accounts that are used to control that data. This may include company executives, domain administrators, HR and Finance employees and application service accounts.

    Delegation of Control – Depending on your organization's size, you may have a highly distributed group of people who modify Active Directory objects. The goal is to give each person responsible for AD management the least privilege required to perform their responsibilities.

    For ease of use, Active Directory provides built-in groups for common tasks: Server Operators, Account Operators, Backup Operators, etc. These groups have predefined access to domain resources. Other actions may be non-standard and require specific permissions in Active Directory. Several applications write to Active Directory, and their service accounts will need specific access rights; typically these are applications that need Enterprise Admin permissions to install. In addition, they may create groups particular to their application that allow them to write to Active Directory. Microsoft Exchange is a good example.

    ACLs on AD objects – For the most part, the default permissions on objects within Active Directory are acceptable. It can be very difficult to manage and troubleshoot access problems when you are not using a standard approach to control access, and setting one-off restrictions on particular objects is where administration can turn into a nightmare. Make sure strict change control and documentation are enforced whenever changes are made to AD. Keep in mind that Active Directory will outlast many of your administrators; nobody wants to be backing out changes that were completed a year ago with no documentation of what was changed.
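    Documentation is easier when you can capture the current state quickly. dsacls will dump the full ACL of any AD object, which makes a handy before-and-after record around a change (the OU below is hypothetical):

    dsacls "OU=Accounting,DC=contoso,DC=com" > accounting-ou-acl.txt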

    Core Network Technologies

    Although these technologies are managed and mostly controlled by our networking group, domain admins need to understand the concepts associated with network design and administration. At the core, TCP/IP is our primary communication protocol suite.

    DHCP is how our hosts get dynamically assigned network addresses, and it is also how we configure clients for DNS registration and for WINS and DNS server discovery.

    WINS is an older technology for NetBIOS name resolution that is still in use in many networks.

    DNS is more critical to a fully functioning and distributed AD environment. Netlogon is used to register the domain controller records in DNS. These records allow clients to discover domain roles and services within their AD site.
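    A quick way to confirm from a client that DC discovery and the Netlogon DNS records are healthy is nltest (the domain name here is just an example):

    nltest /dsgetdc:contoso.com

    The output shows which DC was located, its address, the client's AD site, and flags indicating the roles the DC holds (GC, PDC, time server, and so on).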

    RAS and VPN

    Users are accessing our network from outside the LAN with greater frequency. In order to work closely with your network counterparts you need to understand some of their technologies. These include, but are not limited to: routing, remote access, VPN, IPSec, wireless, RDP, etc.

    Firewalls

    There are two types of firewalls that you will encounter: the one at the edge of your network's boundaries, and the one installed on workstation and server computers. While the network team will manage the perimeter firewall, the firewall installed on your clients and servers may be managed by Group Policy and is, hence, in the domain admins' realm.

    For the firewalls on the perimeter, you will need to be familiar with the ports that are required to be open.

    Distribution of Domain and Forest Roles

    Of the FSMO roles, the Schema Master and Domain Naming Master are forest-based roles; the PDCe, RID Master and Infrastructure Master are domain-specific roles. In a small domain scenario you may have all five roles installed on the same server. In larger environments you will most likely decide to distribute these roles to separate machines, and if you have more than one domain in your forest, you will need to host the domain roles on multiple machines. Usually the roles are hosted on machines in the data center or hub site. As far as workload is concerned, the PDCe role handles more activity than all the other roles combined.
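    You can list all five role holders for a domain in one shot with:

    netdom query fsmo

    which prints the current owner of the Schema Master, Domain Naming Master, PDC, RID Master and Infrastructure Master roles.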

    Global Catalog (GC) server placement is also an efficiency concern. GCs are required for logon and need to be distributed sensibly. Having a GC present in remote sites will significantly reduce the amount of logon traffic and the time required for logon. Conversely, a GC in a remote site will consume network bandwidth during replication cycles.

    Active Directory

    Schema

    The schema is the base configuration for Active Directory. It defines the types of objects that can be created inside the database. Changes to the schema can be difficult or impossible to undo. As the domain is upgraded to new versions, there is typically a schema update associated with the upgrade. Other applications, such as Exchange or third-party products, may also extend the schema. Before any schema update, ensure a rollback or recovery process is in place: removing a schema upgrade may not be possible, and corruption of the schema may cause permanent malfunction of AD. You will need to be a member of the Schema Admins group to modify the schema, and the modification needs to take place on the Schema Master.

    Configuration

    The configuration container has a lot of information stored in it, but very little that will be actively managed. For instance, there is information about the configuration of the Active Directory forest and its associated partitions. This includes information about AD sites and enterprise services such as Certificate Services and Exchange email. Extended rights inside AD, such as the right to change the FSMO role holders, are also defined here. There is just one configuration partition per forest, so whatever is written here is available to all domains in the forest.

    Domain partition

    This is where all of your users, computers and groups are stored. But if you enable Advanced Features in the AD Users and Computers snap-in (dsa.msc), you will see a lot more data contained in the System container. There is information associated with the AdminSDHolder process, which protects your admin accounts from losing permissions to AD, and domain DNS information and Group Policy objects are also stored here. Rarely, if ever, will you modify any items in the System container using ADUC, and it would be wise to restrict delegation to this container. On the other hand, the AD delegation wizard makes distributing permissions to other admins or account management people easy.

    DNS Partitions

    You can store and replicate DNS information for the forest and for specific domains to DC’s. Active Directory integrated zones are stored in AD. Standard primary zones are stored as files on the DNS server.

    Active Directory Tools

    Here is a brief description of the core toolset a domain admin may use on a regular basis or under emergency conditions.

    • Active Directory Users and Computers (ADUC) – MMC, Create, Find and manage computer, user and group accounts.
    • Active Directory Sites and Services – MMC, Create and manage AD sites, subnets and site links.
    • Active Directory Domains and Trusts – MMC, Establish trusts with other domains and forests.
    • GPMC – Group policy management
    • NTDSUTIL.EXE – Active Directory database maintenance, manually removing domains and DC’s, authoritative restore, FSMO role ownership.
    • Netdom – join and unjoin domain, reset DC password, query FSMO roles, change computer name.
    • NLTest – reset computer secure channel, query and establish trust, discover DC and GC’s.
    • Repadmin – Confirm AD replication, Force replication, Replication summary.
    • DCdiag – Domain controller diagnostic, checks connectivity and configuration.
    • Netdiag – Check network configuration, DC lists, DC discovery and DNS registration.
    • Gpresult – Shows group membership, user rights and group policy settings.
    • Gpotool – Query group policy status on DC's for last modified and consistency.
    • Whoami – Show user's SID and group membership.
    • ADtoken – User group membership and token size.
    • Subinacl – Discovers security descriptors for file resources.
    • DSacls – Control security on AD objects.
    • EFSinfo – describes certificate information for encrypted files.
    • Ntbackup – backup system state of DC.

    This is a small subset of the tools an admin will use, but these tools seem to come up regularly. There are frequently other applications with similar or overlapping features, and there are even software packages that will monitor and manage Active Directory.
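    As a quick health pass, a combination along these lines covers a lot of ground (the domain name is just an example):

    repadmin /replsummary
    dcdiag > dcdiag.txt
    nltest /sc_verify:contoso.com

    repadmin summarizes inbound and outbound replication status across your DCs, dcdiag runs its battery of connectivity and configuration tests, and nltest verifies the machine's secure channel to the domain.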

    What’s new in Windows 2008

    Many new features and services released with Windows Server 2008. Some of the key changes for 2008 Active Directory Domain Services are Read-Only DCs, disaster recovery improvements, and Fine-Grained Password Policy.

    Summary

    It's been a long road to get to this point if you have been following all the links in the document along the way. In no way is this intended to be the "be all and end all" of domain admin knowledge. If you are already a domain admin, I hope you gained some knowledge in a few areas; if you are new to domain administration, you hopefully learned a great deal. If you now want to pursue official Microsoft curriculum or certification, please visit the Microsoft Learning website.

    - Steve ‘Milk in the fridge’ Taylor

  • Negotiate security support provider behavior

    Greetings DS blog readers, Todd here. I wanted to talk a little about the Negotiate security support provider (SSP) and how there are times when it will intentionally use NTLM rather than Kerberos. [And if that’s not interesting, keep reading anyway because there is a slick trick in here for network captures - Editor]

    In a properly configured and functioning domain, when the Negotiate SSP is used and the client application resides on the same machine as the target server, Negotiate will choose NTLM instead of Kerberos. Microsoft Negotiate acts as an application layer between the Security Support Provider Interface (SSPI) and the other SSPs. When an application calls into SSPI to log on to a network, it can specify an SSP to process the request. If the application specifies Negotiate, Negotiate analyzes the request and picks the best SSP to handle the request based on customer-configured security policy.
    Currently, the Negotiate security package selects between Kerberos and NTLM. Negotiate selects Kerberos unless it cannot be used by one of the systems involved in the authentication, the calling application did not provide sufficient information to use Kerberos, or the client and server are the same machine. For efficiency and security reasons NTLM is chosen over Kerberos when the client and the server are the same machine. This behavior is by design.

    How to reproduce the behavior

    Note: The test computer must be part of a domain and on a routed network for the reproduction to work.

    1. On your test computer create a share.
    2. Install a network capture utility. (Netmon, Wireshark, etc...)
    3. Add a network route to the test machine so that network traffic can be seen even when the source and destination are the same machine.
    Run the following command on the machine where you need to see the to-and-from traffic on itself:

    route add <IP Address of the server that you are on> <IP Address of default gateway of the server you are on>

    This causes the server to send internal packets over the network that would ordinarily stay completely local and not be viewable in a network trace. The packets will just return to the test computer itself.

    PLEASE NOTE: To remove the route added above, issue the command “route delete <IP Address of the server that you are on>”

    Note that each packet going from the server to itself will appear twice: once exiting the server on its way to the router, and once returning from the router on its way back to the server. You can ‘post-process’ the capture to eliminate the duplicate packets (i.e., don't display the packets where the source Ethernet address matches the router).

    In Netmon 3.x you can use a Display Filter constructed similar to:

    HTTP and Ethernet.SourceAddress == 0x010203040506

    For Wireshark you can use the following display filter:

    (tcp.port == 80) && (eth.src == 01:02:03:04:05:06)

    You will need to replace the 0x010203040506 or the 01:02:03:04:05:06 with the MAC address of your web server.

    4. Start the network capture.
    5. Access the share on the test computer from the test computer (i.e. from itself, to itself).
    6. Stop the network capture.
    7. Review network trace.

    You should see the following sequence in the trace:

    i. A GET request
    ii. An HTTP 401 Unauthorized response with:

    WWW-Authenticate:Negotiate

    …in the HTTP header

    iii. A second GET request with NTLM authentication

    Here is the workaround

    In order to allow a client application to use Kerberos in this scenario, take the following steps.

    1. Create an alias (CNAME) host record for the machine, with a name that is unique in the forest. This can be accomplished using the DNS snap-in: expand the forward lookup zone for the domain which contains the web server, right-click on the domain name, and select New Alias (CNAME)…

    2. Register SPNs for the alias on the computer object. You can use Setspn to register an SPN with a command formatted similar to “Setspn –A HTTP/<alias_you_registered_in_DNS> <webservername>”

    3. When accessing the machine with a browser, use the alias. This allows Kerberos to be used even when the client resides on the server.
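    To put the workaround together as a sketch, suppose a web server named hq-web-01 in the contoso.com domain, a DNS server named hq-dns-01, and an alias of kerbweb (all of these names are hypothetical). The alias can also be created from the command line with Dnscmd, and the SPN registered and then verified with Setspn:

    dnscmd hq-dns-01 /RecordAdd contoso.com kerbweb CNAME hq-web-01.contoso.com
    setspn -A HTTP/kerbweb.contoso.com hq-web-01
    setspn -L hq-web-01

    Browsing to http://kerbweb.contoso.com should then negotiate Kerberos, even from the server itself.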

    References
    http://msdn.microsoft.com/en-us/library/aa378748(VS.85).aspx

    I hope this has been enlightening.

    Best regards,

    Todd Maxey

  • DFSR SYSVOL Migration FAQ: Useful trivia that may save your follicles

    Hi, Ned here again. Today I'm going to go through some well-hidden information on DFSR SYSVOL migration; hopefully this sticks in your brain and someday allows you to enjoy your weekend rather than spending it fighting issues.

    As you already know, Windows Server 2008 introduced the ability to use Distributed File System Replication (DFSR) to replicate your SYSVOL folder, rather than the legacy File Replication Service (FRS). And there was much rejoicing. The step-by-steps for this process are documented here:

    1: SYSVOL Migration Series: Part 1 – Introduction to the SYSVOL migration process
    2: SYSVOL Migration Series: Part 2 – Dfsrmig.exe: The SYSVOL migration tool
    3: SYSVOL Migration Series: Part 3 – Migrating to the ‘PREPARED’ state
    4: SYSVOL Migration Series: Part 4 – Migrating to the ‘REDIRECTED’ state
    5: SYSVOL Migration Series: Part 5 – Migrating to the ‘ELIMINATED’ state

    (And before you get wound up in the Comments section - Yes, there is a TechNet Whitepaper for this coming just as soon as we can get it published! Don't hurt me!)

    So let's run through some Q & A.

    Q: What are the Domain Controller availability requirements during my migration?

    A: There are a couple.

    1. The PDC Emulator must be online any time the DFSRMIG tool is being invoked for a read or write operation. If the PDC Emulator is offline or inaccessible for LDAP, the user of DFSRMIG will receive:

    “Unable to connect to the Primary DC’s AD.

    Please make sure that the PDC is reachable and try the command later.”

    2. All DCs must remain online until each completes its state steps. They do not all need to be accessible simultaneously, but the global state will never reach Prepared, Redirected, or Eliminated until every DC has completed its individual phase.

    The PDC Emulator requirement exists because, by default, administrators edit group policy on the PDCE, so in most environments it will have the most up-to-date knowledge of policy. That, and we need to talk to someone unique, so it might as well be him.

    It is recommended that a SYSVOL migration not be attempted unless all DCs are online and available. Change control blackouts should be scheduled to prevent modifications to DCs that might impact their availability. This minimizes the window of time the migration takes. You can check on every DC's progress from the command line, as shown below.
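    A quick way to check where things stand before and during the migration is to run the DFSRMIG query options from a command prompt on any DC:

    dfsrmig /getglobalstate
    dfsrmig /getmigrationstate

    The first command returns the global state you have set; the second polls the DCs and lists any that have not yet reached that state.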

    Q: Is there some super-secret way to return to using FRS after reaching the Eliminated phase of DFSR migration?

    A: Microsoft does not support returning your domain to using FRS for SYSVOL replication after a completed DFSR migration (except by rebuilding the domain). This is why the steps are done in a phased approach, with two checkpoints where you can revert to FRS without any consequences. Once you trigger the ELIMINATED phase to start, there is no going back, period.

    image

    Q: When does Robocopy run during the migration and what does it do?

    A: The DFSR service uses robocopy at several stages to synchronize SYSVOL directories outside of normal replication when it detects a SYSVOL migration is underway: a set of ‘pre-seeding’ and ‘save the GP admins from themselves’ operations.

    1. When the Prepared state (DFSRMIG /SETGLOBALSTATE 1) is invoked, all DCs robocopy their FRS SYSVOL data locally into the new DFSR content set. This is equivalent to ‘pre-seeding’ the data and ensures that minimal file replication occurs to converge the content set. This is triggered by the DFSR service itself when:

    • AD replication has converged between a DC and the PDCE.
    • The DFSR service on that DC has polled (this runs every 5 minutes) and picked up the state change from CN=dfsr-LocalSettings.

    2. When entering the Redirected state, the PDC Emulator (only) robocopies the local differences of FRS SYSVOL data into the new local DFSR content set, on itself. The other servers receive new data via replication.

    3. If you undo the Redirected state back to Prepared, the reverse happens. The PDC Emulator robocopies its local DFSR content set data into its local FRS content set. FRS replication synchronizes all other servers... eventually. Allow more time for this than entering Redirected, as FRS is not as fast to synchronize as DFSR.

    For sharp-eyed readers: we won’t run into any of the old pre-seeding issues (the file hash being changed by robocopy) here because DFSR correctly creates the SYSVOL_DFSR folder ACL, so there are no inheritance issues when the contents are copied in and replicated.
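    Incidentally, if you do not want to wait out the five-minute polling cycle mentioned above, you can force the DFSR service on a DC to poll AD immediately (HQ-DC-01 is a hypothetical DC name here):

    dfsrdiag pollad /member:HQ-DC-01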

    Q: Event 8004 says something about RODCs. I don't have any RODCs. What the frak?

    A: The following event is incorrectly written in the DFSR event log(s) on servers that are not Read-only Domain Controllers when setting elimination state using DFSRMIG.EXE:

    Log Name: DFS Replication
    Source: DFSR
    Date: 9/28/2007 11:53:46 AM
    Event ID: 8004
    Task Category: None
    Level: Information
    Keywords: Classic
    User: N/A
    Computer: <WRITABLE DC>
    Description:
    The NTFRS member object for the Read-only Domain Controller <WRITABLE DC> was deleted successfully.

    Well, it finally happened – we had a bug. No, no, don’t try to tell me it’s not possible, it was bound to occur someday. :)

    The text in the event log is completely cosmetic and benign. It is supposed to be fixed in a later version of the OS. Just ignore it.

    Q: What are all the AD and Registry state values that will be set at a given point in the migration?

    A: Ok, you asked for it buddy.

    =============

    1. Prepared Phase - DFSRMIG /SETGLOBALSTATE 1

    a. DFSRMIG contacts the PDC Emulator directly.

    b. Global objects are created under:

    CN=DFSR-GlobalSettings,CN=SYSTEM,DC=<domain>
    CN=DOMAIN SYSTEM VOLUME
      CN=SYSVOL SHARE
       CN=CONTENT
        CN=TOPOLOGY

    c. CN=DFSR-GlobalSettings now has msDFSR-Flags attribute set to 0.

    d. As DCs pick up the globalstate change via AD replication and DFSR service polling, they create and write to the registry entry:

    HKLM\System\CurrentControlSet\Services\DFSR\Parameters\Sysvols\Migrating Sysvols
    Local State = 4 [REG_DWORD]

    e. The PDC Emulator creates the:

    CN=dfsr-LocalSettings,CN=<servername>,DC=<domain>

    objects for all DCs and sets this attribute to:

    msDFSR-Flags = 80 (if RWDCs).
    msDFSR-Flags = 64 (if RODCs - the RODC itself will set it to 80 later).

    f. The DFSR service has now started and created the new local SYSVOL_DFSR structure. Robocopy has made a local copy of the FRS SYSVOL. All AD topology data has been written to support the content set. Initial sync of the data has started (since robocopy has locally pre-seeded the data, this should involve minimal replication on the network). The registry on all DCs is:

    Local State = 5 [REG_DWORD]

    g. Once initial sync is done on all DCs:

    Local State = 1 [REG_DWORD]
    'msDFSR-Flags' (on CN=dfsr-LocalSettings) = 16

    h. Once all DCs are prepared, 'msDFSR-Flags' on CN=dfsr-GlobalSettings changes to 16 and DFSRMIG /GETGLOBALSTATE reports that all DCs are prepared. All DCs are now replicating both the DFSR and FRS content sets, with FRS being shared as SYSVOL.

    =============

    2. Redirected Phase - DFSRMIG /SETGLOBALSTATE 2

    a. DFSRMIG contacts the PDC Emulator directly.

    b. CN=DFSR-LocalSettings now has its msDFSR-Flags attribute set to 96 on all DCs, and this replicates out through AD.

    c. As DCs pick up the attribute from AD replication, their DFSR service sets:

    Local State = 6 [REG_DWORD]

    d. On the PDC Emulator only, robocopy syncs any changes between the FRS and DFSR content sets, and this is replicated out through DFSR.

    e. Once the SYSVOL data is in sync, the DFSR content set is made the active SYSVOL share on all servers. FRS and DFSR are both still replicating data.

    f. When this is complete, for each DC:

    Local State = 2 [REG_DWORD]
    'msDFSR-Flags' (on CN=dfsr-LocalSettings) = 32

    g. Once all DCs are redirected, 'msDFSR-Flags' on CN=dfsr-GlobalSettings changes to 32 and DFSRMIG /GETGLOBALSTATE reports that all DCs are redirected. All DCs are still replicating both the DFSR and FRS content sets, with DFSR now being shared as SYSVOL.

    ==============

    3. Eliminated Phase - DFSRMIG /SETGLOBALSTATE 3

    a. DFSRMIG contacts the PDC Emulator directly. At this point it is not possible to undo the changes!

    b. CN=DFSR-LocalSettings now has its msDFSR-Flags attribute set to 112 on all DCs, and this replicates throughout AD.

    c. As DCs pick up the attribute from AD replication, their DFSR service sets:

    Local State = 7 [REG_DWORD]

    d. On the PDC, the FRS content set information is removed, and this is replicated through AD. As each DC sees this change, its FRS service stops replicating the FRS content set. The FRS service is stopped (and restarted if custom FRS sets still exist on a given server).

    e. When this is complete, for each DC:

    Local State = 3 [REG_DWORD]
    'msDFSR-Flags' (on CN=dfsr-LocalSettings) = 48

    f. Once all DCs are eliminated, 'msDFSR-Flags' on CN=dfsr-GlobalSettings changes to 48 and DFSRMIG /GETGLOBALSTATE reports that all DCs are eliminated. All DCs are now replicating the DFSR content set only.

    g. A final cleanup task on each DC sets its 'msDFSR-Flags' on CN=dfsr-LocalSettings to <NOT SET>. The same happens from the PDC for CN=dfsr-GlobalSettings.

    ==============

    If any 'undo' phases are entered (where an administrator has decided to go from Redirected back to Prepared, Redirected back to the start, or Prepared back to the start), the flow above happens in reverse, with the exception that the following two values can appear in the 'Local State' registry entry:

    8 (Undo Redirecting)
    9 (Undo Preparing)
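    If you want to check a given DC's local migration state without waiting on DFSRMIG, you can read the registry value described above directly from the command line; for example:

    reg query "HKLM\System\CurrentControlSet\Services\DFSR\Parameters\Sysvols\Migrating Sysvols" /v "Local State"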

    Q: I'm not a huge fan of Ultrasound. Are there any other ways to validate the health of SYSVOL prior to and after migration?

    A: Sure thing - already discussed in a previous blog post here.

    Q: Are there any migration KBs or bugs I need to worry about?

    A: One KB, with a simple solution to domains that have non-standard (and frankly, not any safer than default) security configurations: http://support.microsoft.com/kb/2567421

    And that’s it. If you have any questions on DFSR SYSVOL migration not covered by this blog post or by the FileCab step-by-step guide, don’t hesitate to start swinging in the Comments section below.

    - Ned “Working on Windows 7 Beta But Still Has An Unnatural Love for Win2008 DFSR” Pyle

  • Addendum: Making the DelegConfig website work on IIS 7

    Hi all, Rob here again. I thought I would take the time today and expand upon the Kerberos Delegation website blog to show how you can use the web site on IIS 7. Actually, Ned beat me up pretty badly for not showing how to set the site up on IIS 7 [I sure did. Rob’s revenge was to make a blog post so editorially complex that it took me forever to format and publish – Ned].

    First off, I am not going to go over the entire setup to get it working; all the Kerberos delegation steps are exactly the same. However, if you have looked at IIS 7, you know the interface is totally different from previous versions.

    Installing IIS

    1. Launch Server Manager and select Roles in the tree view.
    2. Next click on the Add Roles link in the right hand pane.

    image

    Figure 1 - Adding Roles

    You will get the Select Server Roles dialog as shown below.

    3. Click on Web Server; you will immediately get another dialog box listing the additional features that need to be installed for the Web Server to function.
    4. Click on the Add Required Features button and click Next.

    image

    Figure 2 - Adding Web Server Role

    You will then be shown another dialog box to select Role Services.

    5. Make sure that you select ASP.NET. You will again be prompted for additional required features; click on the Add Required Features button.

    image

    Figure 3 - Selecting ASP.NET Role Services

    NOTE: You will also need to add the Authentication methods you want the IIS server to support. For demonstration purposes we are only adding Windows Authentication and Basic Authentication.

    image

    Figure 4 - Authentication Modes

    6. Once you have selected all the Role Services, click Next.
    7. Just prior to the installation of the Web Server role you are given a screen that lists the role services you are about to install.

    image

    Figure 5 - Confirming Role Services

    8. Then click the Finish button.

    The IIS 7 interface is totally different from previous versions of the IIS MMC snap-in. IIS 7 can also perform authentication in kernel mode, which was not possible in previous IIS versions.
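    As a side note, if you would rather script the role installation than click through the wizard, ServerManagerCmd can install the same role services from a command prompt (a sketch; each command also pulls in its own required features):

    ServerManagerCmd -install Web-Server
    ServerManagerCmd -install Web-Asp-Net
    ServerManagerCmd -install Web-Windows-Auth
    ServerManagerCmd -install Web-Basic-Auth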

    Installing the web application

    1. Launch the Internet Information Server (IIS) Manager snapin.
    2. Expand the tree view and highlight the web site.
    3. Right click on the web site (in the figure below we used Default Web Site), and select Add Application.

    image

    Figure 6 - Adding Web Application

    4. Type in the Alias name you want to use for the application, and the file path to the application directory for the web site.
    5. Make sure that the Classic .NET AppPool is selected as the application pool for the web application.

    image

    Figure 7 - Configuring Web Application Settings

    6. After you have added the web application, you want to select the application directory.
    7. Select Authentication as shown in the figure below.

    image

    Figure 8 - DelegConfig Authentication settings

    8. Double click on Authentication.
    9. Highlight Windows Authentication, and then right click and select Enable. If you want to support other authentication methods you can enable those and disable other authentication methods you do not want to support.
    10. Now highlight Anonymous Authentication, and then right click and select Disable.

    image

    Figure 9 - Enabling Windows Authentication
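    For reference, steps 3 through 10 above can also be scripted with Appcmd.exe (a sketch; C:\DelegConfig is a hypothetical path to the extracted website files):

    %windir%\system32\inetsrv\appcmd add app /site.name:"Default Web Site" /path:/DelegConfig /physicalPath:C:\DelegConfig
    %windir%\system32\inetsrv\appcmd set app "Default Web Site/DelegConfig" /applicationPool:"Classic .NET AppPool"
    %windir%\system32\inetsrv\appcmd set config "Default Web Site/DelegConfig" /section:windowsAuthentication /enabled:true /commit:apphost
    %windir%\system32\inetsrv\appcmd set config "Default Web Site/DelegConfig" /section:anonymousAuthentication /enabled:false /commit:apphost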

    Now that we have installed IIS and the application, you need to decide what account will be used for the application pool identity. The configuration is drastically different based on the account used: if you use Network Service you configure the system one way, and if you use a domain-based account you need to configure the system another way.

    I will cover both methods. For the most part, the simple configuration is to use Network Service as the application pool identity; it can be used most of the time, except in cases where you have multiple web servers in a load-balanced configuration.
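    Whichever identity you choose, you can confirm what an application pool is currently running as from a command prompt (a sketch):

    %windir%\system32\inetsrv\appcmd list apppool "Classic .NET AppPool" /text:processModel.identityType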

    Configuring Network Service as the AppPool Identity

    1. In the Internet Information Service (IIS) Manager snapin select Application Pools in the tree view.

    image

    Figure 10 - Verifying AppPool Identity

    2. Verify that the Identity being used is NetworkService.
    3. Next navigate to the web application. In my lab it is the DelegConfig application.
    4. Double click on Authentication while you have the web application node selected in the tree view.

    image

    Figure 11 - Web Application Authentication mode

    5. Make sure that Windows Authentication is enabled. This should have already been done under the Installing the web application section.
    6. Right click on Windows Authentication and select Advanced Settings…

    image

    Figure 12 - Advanced Settings

    7. In the Advanced Settings… dialog box you want to make sure that Enable Kernel-mode authentication is checked.

    image

    Figure 13 - Enable Kernel-mode Authentication

    8. After this, all the normal things need to be done in the domain to support Kerberos delegation.
    9. Then reboot the server.
    10. Test the application and it should work.
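    Step 7 above can also be verified or set from the command line (a sketch, reusing the hypothetical DelegConfig application path from earlier):

    %windir%\system32\inetsrv\appcmd set config "Default Web Site/DelegConfig" /section:windowsAuthentication /useKernelMode:true /commit:apphost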

    Configuring a domain based user account as the AppPool Identity

    1. In the Internet Information Service (IIS) Manager snapin select Application Pools in the tree view.

    image

    Figure 14 - Verifying AppPool Identity

    2. Verify that the Identity being used is the domain based account you want.
    3. If it is not, then right click on the Classic .NET AppPool application pool and select Set Application Pool Defaults…

    image

    Figure 15 - Setting the Application Pool Defaults

    4. You will get a dialog box like the one shown below. Change the Identity being used as highlighted.

    image

    Figure 16 - Application Pool Defaults

    5. Select Custom account and click on the Set button.

    image

    Figure 17 - Setting custom account identity

    6. Type in the domain name and the user account that will be used; the password needs to be entered twice.

    image

    Figure 18 - Typing in the credentials

    7. Click the OK button on the Set Credentials dialog box.
    8. Click the OK button on the Application Pool Identity dialog box.
    9. Click the OK button on the Application Pool Defaults dialog box.
    10. Next navigate to the web application. In my lab it is the DelegConfig application.
    11. Double click on Authentication while you have the web application node selected in the tree view.

    image

    Figure 19 - Web Application Authentication mode

    12. Make sure that Windows Authentication is enabled. This should have already been done under the Installing the web application section.
    13. Right click on Windows Authentication and select Advanced Settings…

    image

    Figure 20 - Windows Authentication Advanced Settings

    14. In the Advanced Settings… dialog box you want to make sure that Enable Kernel-mode authentication is unchecked.

    image

    Figure 21 - Disable Kernel-mode authentication

    15. You need to add the application pool identity account to the following local computer groups: Administrators and IIS_IUSRS.

    image

    Figure 22 - Add the account to the proper groups

    16. After this, all the normal things need to be done in the domain to support Kerberos delegation.
    17. Then reboot the server.
    18. Test the application and it should work.
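    Steps 14 and 15 above can likewise be scripted (a sketch; CONTOSO\svc-apppool is a hypothetical application pool account):

    %windir%\system32\inetsrv\appcmd set config "Default Web Site/DelegConfig" /section:windowsAuthentication /useKernelMode:false /commit:apphost
    net localgroup IIS_IUSRS CONTOSO\svc-apppool /add
    net localgroup Administrators CONTOSO\svc-apppool /add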

    Notes

    If you need to understand how to set up the delegation, please visit my previous blog post about the website, located here. There you will see how to add and delete service principal names (SPNs), as well as how to configure delegation within Active Directory Users and Computers (ADUC).

    I hope you have found this blog post helpful in getting your first IIS 7 server configured to use the DelegConfig website, and that it gets your feet wet configuring IIS 7 to support Kerberos authentication.

    - Rob ‘ScreenShot’ Greene