Blog - Title

May, 2008

  • Kerberos Authentication problems – Service Principal Name (SPN) issues - Part 1

    Hi Rob here again. I hope that you found the first blog on troubleshooting Kerberos authentication problems caused by name resolution informative, and that you learned something about how to review network captures as well as how the SMB protocol works at a high level. This time we are going to focus on problems that arise when Service Principal Names are not configured properly to support Kerberos authentication. We will use a web site that is having authentication problems to show you network captures and explain at a high level how the HTTP protocol works. There are other ways to troubleshoot Kerberos; one could just read the Kerberos event logging outlined in KB 262177. Although you could rely on that method, it will take longer to resolve the issue and you will be taking an educated guess without a network trace.

    I am going to lay out my lab configuration in case you want to reproduce the problem and look at the network traces on your own.

    Forest layout

    The root domain fabrikam.com has one domain controller in the domain, and one member server.

    DC network configuration:
    Host Name:  FAB-RT-DC1
    IP Address: 10.10.101.20
    DNS:  10.10.101.20
    WINS: 10.10.100.60

    Member Server network configuration:
    Host Name:  FAB-RT-MEM1
    IP Address: 10.10.200.105
    DNS:  10.10.100.20
    WINS: 10.10.100.60


    Windows XP client network configuration:
    Host Name: XPPRO02
    IP Address:  10.10.200.110
    DNS:  10.10.101.20
    WINS: 10.10.100.60

     


    The web application’s website address is: http://webapp.fabrikam.com/webapp

    The website is being hosted on FAB-RT-MEM1.

    NOTE: I’m stating the obvious here, I know, but this configuration is for testing only. Having only one DC per domain is a single point of failure and should be avoided.

    Problem scenario:

    We want to use Kerberos authentication with a web application running on IIS 6.0. The web application runs in its own application pool, and the application pool's identity is a domain user account (FABRIKAM\KerbSvc) because at a future time they will be front-ending the web servers with a network load balancer.

    When users visit the website with Internet Explorer they authenticate using NTLM and not Kerberos.

    Eventually the user name/password dialog stops popping up and they get a message within IE stating “You are not authorized to view this page”.

    We have a test page on the site that makes it easy to determine which protocol was used to authenticate. As you can see, we authenticated using NTLM.

    image

    If “Audit Logon Events” auditing is enabled for “Success” on the IIS server, you would also see the following event, which proves we are authenticating using NTLM.

    image

    You can see that the Logon Type is “3” (which means “network logon”) and the Authentication Package was “NTLM”. This proves that we authenticated using NTLM and not Kerberos.

    When you troubleshoot using network captures, you want to install the network capture utility on both ends of the communications to make sure that there are no network devices (routers, switches, VPN appliances, etc) that are manipulating the packet in between the two systems. We call this taking a double-sided trace. In support we will typically request a double-sided network capture be taken.

    Since my lab does not have any routers in the mix (both systems are on the same subnet) I am only going to trace on the source Windows XP client machine.

    So what is the best way to get the network capture?

    1. Make sure that there are no Internet Explorer windows open, and in general close down as many applications as possible so that your network traces are as clean as possible.
    2. Start the network capture utility.
    3. Clear all name resolution cache as well as all cached Kerberos tickets (see the sketch after this list).

    • To clear the DNS name cache, type: IPConfig /FlushDNS
    • To clear the NetBIOS name cache, type: NBTStat -R
    • To clear Kerberos tickets, you will need KList.exe: KList purge

    4. Launch Internet Explorer and go to the web site.
    5. Once the website comes up or error messages are being displayed, go ahead and stop the network capture.
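
    For reference, here is a minimal sketch of steps 2 and 3 from a command prompt, assuming you use nmcap from Netmon 3.x as the capture tool. Run nmcap in a second command prompt window, since it keeps capturing until you stop it (Ctrl+C):

    nmcap /network * /capture /file webapp-trace.cap

    ipconfig /flushdns
    nbtstat -R
    klist purge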

    Reviewing the network capture:

    If you are using Wireshark to view the trace, the filter is simple: “dns || kerberos || ip.addr==<IP Address of Target machine>” (the protocol names are lower-case in Wireshark display filters). Basically, this filter means “Show me all packets sent to or from the target machine, all DNS name queries and responses, and all Kerberos authentication.”
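
    For example, with the lab configuration above (the web server FAB-RT-MEM1 is 10.10.200.105), the filter would be:

    dns || kerberos || ip.addr == 10.10.200.105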

    It should look similar to this:

    image

    Once you have the network capture, you should see all DNS, Kerberos Authentication (as well as packets that have Kerberos tickets in them), and anything destined for the remote system from the Windows XP client.

    Before we go over the capture too much, we should probably cover at a high level the steps taken to connect to a website.

    1.    Resolve the host name for the target system to an IP Address.

       a.    Look in HOSTS file.
       b.    Query DNS.
       c.    Look in LMHOSTS file.
       d.    Query WINS / NBNS.

    2.    Client connects to the website anonymously using “HTTP GET”

    3.    If the website allows anonymous access then it is done. However, if it does not, it responds back to the client with a list of authentication protocols it supports in the HTTP header (illustrated in the sketch after this list).

    4.    Client attempts to get a Kerberos ticket for the website (from a domain controller) if the website supports Negotiate authentication.

    5.    Client then connects to the website and passes its credentials in the HTTP header.
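
    To make steps 2 and 3 concrete, the exchange looks roughly like this on the wire (a simplified illustration rather than a copy of this trace; the request line and header values will vary):

    GET /webapp HTTP/1.1
    Host: webapp.fabrikam.com

    HTTP/1.1 401 Unauthorized
    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM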

    Step 1 - resolve the name:

    image

    Remember, we did “IPConfig /FlushDNS” so that we can see name resolution on the wire. Frames 10 and 11 are the query and response for FAB-RT-DC1 (the DC). Frames 46 and 47 are the query and response for the website name webapp.fabrikam.com; the response comes back with an “A” (HOST) record for IP address 10.10.200.105.

    Step 2 – Client connects to the website anonymously:

    image

    So as we can see from the trace, the client makes an HTTP GET request to the website and does not pass any user credentials. You can also see that we are using HTTP 1.1 when we connect to the site.

    Step 3 – Web Server responds with support authentication protocols:

    image

    Here we see that the website must require authentication to access the site because the web server responded back with a “401 Unauthorized”. We can also see that the web server supports the authentication types of: “WWW-Authenticate: Negotiate”, and “WWW-Authenticate: NTLM”.

    In order for Kerberos authentication to work with IIS, we must see Negotiate listed as an authentication method. If you do not see it, you will need to enable it on the IIS web server or web site.

    215383 How to configure IIS to support both the Kerberos protocol and the NTLM protocol for network authentication (http://support.microsoft.com/default.aspx?scid=kb;EN-US;215383)
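
    In short, that article has you check and, if needed, set the NTAuthenticationProviders metabase property with adsutil.vbs. Roughly (a sketch, run from the Inetpub\AdminScripts folder; use w3svc/<site identifier>/root/NTAuthenticationProviders to target a single site, and see the KB for the exact steps):

    cscript adsutil.vbs get w3svc/NTAuthenticationProviders
    cscript adsutil.vbs set w3svc/NTAuthenticationProviders "Negotiate,NTLM"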

    Step 4 - Request a Kerberos ticket for the website:

    image

    Alright, now to the meat of Kerberos authentication and viewing it in a network trace. If you remember, we used the KList purge command to clear out all tickets on the system. That means that the client has to get a TGT (Ticket Granting Ticket) first, and this is why you are seeing the AS-REQ and AS-REP frames (frames 58 and 59). If Kerberos ticketing is new to you, I would suggest reviewing the blog on how Kerberos works.

    Next, we see the TGS-REQ in Frame 60; let’s take a closer look at this packet in the details pane.

    image

    You can see that the user’s TGT is handed to the KDC under the “padata: PA-TGS-REQ” section, and that a ticket is being requested for the server “http/webapp.fabrikam.com” in the FABRIKAM realm (Windows domain) under the “KDC_REQ_BODY” section.

    OK, so we now know that we are requesting a Kerberos ticket for “http/webapp.fabrikam.com” in the fabrikam.com domain, and the KDC (domain controller) responds to the ticket request with KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN. This tells us that the SPN “http/webapp.fabrikam.com” is either missing or possibly defined on multiple accounts within the Active Directory forest.

    Step 5 – Client connects and passes Credentials:

    image

    So we see in the following Frames:

    • In Frame 75 there is another HTTP GET, and the client wants to connect using NTLMSSP_NEGOTIATE.
    • In Frame 77, since credentials were not passed in frame 62, the web server responds again with a 401 Unauthorized, but this time it also sends an NTLMSSP_CHALLENGE response.
    • In Frame 79 there is an HTTP GET with NTLMSSP_AUTH, and we see that the account attempting to authenticate is FABRIKAM\Administrator.
    • In Frame 80 the web server responds with a 200 OK.

    You can see in the details pane that I have highlighted packet 79, where the Authorization data is provided; the NTLM credentials being passed are the FABRIKAM domain and the Administrator user account from host XPPRO02.

    In frame 80 the website responded back with an HTTP 200 OK message which basically means that it accepted the authentication.

    So, now the question becomes: how do we fix the problem?

    Well, we now know what Service Principal Name we are requesting (review Step 4, frame 60). The next step, within IIS 6, is to find out what account is running the application pool for the website in question. Once we know that, we can validate and add the proper Service Principal Name to that account. How do you get this information?

    1. Open up IIS Manager.

    2. Find the web site that has the application pool defined, right-click on it and select properties.

    3. On the “Home Directory” / “Virtual Directory” tab, find the Application Pool field. This tells us which application pool we need to check the identity of.

    image

    4. Now we can expand the Application Pools folder in IIS Manager and find the Application Pool.

    5. Right-click the application pool name, select Properties, and then select the “Identity” tab.

    image

    6. As you can see we are using a domain account called “FABRIKAM\KerbSvc” to run the web application pool.

    7. The next thing that should be done is to make sure that the SPN “http/webapp.fabrikam.com” is not currently being used by any other accounts within the Active Directory forest. This can be accomplished in several different ways.

    • You could use LDIFDE to search for the SPN within all domains of the forest.
    • You could use LDP to search for the SPN within all domains of the forest.
    • You could use querySPN.vbs. 

    You can review KB article 321044 for more detailed information on how to use each of these tools. The best method is querySPN.vbs; if you target a Global Catalog, it can search through the entire domain tree. If you have multiple domain trees in the forest or multiple forest trusts, you will need to specify each domain tree root individually and search each one.
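
    If you want to do the LDIFDE search by hand, it looks roughly like this when run against a Global Catalog (a sketch; it assumes FAB-RT-DC1 is a Global Catalog, and you would adjust the server, port, and base DN for your forest):

    ldifde -f checkspn.txt -s FAB-RT-DC1 -t 3268 -d "dc=fabrikam,dc=com" -r "(servicePrincipalName=http/webapp*)" -l servicePrincipalName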

    So here is what we find when I use querySPN.vbs to search for http/webapp*:

    image

    This is good; this tells us that there are no accounts that have that Service Principal Name in the forest. So if we add the SPN to the “FABRIKAM\KerbSvc” account we will not create a duplicate entry.

    8. Once you have validated that you are not going to create a duplicate SPN, you can use SetSPN.exe to set the Service Principal Names “http/webapp.fabrikam.com” and “http/webapp” on the “FABRIKAM\KerbSvc” user account. Next, you should always verify that the SPNs have been added by using SetSPN to list the Service Principal Names on the account. You can look at the screen shot below to see how the commands were run:

    image
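
    If the screen shot is hard to make out, the commands described above look roughly like this (a sketch using the SetSPN.exe syntax from the Windows Server 2003 support tools):

    setspn -A http/webapp.fabrikam.com FABRIKAM\KerbSvc
    setspn -A http/webapp FABRIKAM\KerbSvc
    setspn -L FABRIKAM\KerbSvc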

    Notice that I have added the NetBIOS name of the site as well as the FQDN of the website. It is good practice to add both to the account, because applications may ask for SPNs in either name format and it may not be clear which will be requested in every case.

    Now we can test to verify it is working by attempting to access the website again.

    image

    That worked. Now I know what you guys are starting to ask: how does this look in a network trace? Let’s find out.

    • In the Security Event log on the IIS server we find that “FABRIKAM\Administrator” did authenticate using Kerberos:

    image

    • Looking at the network capture we see the entire ticketing process with the domain controller and it responds with a TGS-REP as shown in the below capture:

    image

    This validates what the application is telling us: we authenticated to the web application using Kerberos Authentication.

    For those of you who would like to see the Kerberos Service Ticket being passed to the Web Server here is a screen shot of that functionality.

    image

    We can see that the web server accepted the authentication. We can also see that a Kerberos ticket was sent in the HTTP header by looking at the KRB5_Blob tag, and that Internet Explorer sent a Kerberos ticket for “http/webapp.fabrikam.com”.

    Service Principal Name problems usually show up when you are first setting up an application to support Kerberos. Once the application has been up and running for a while, there are typically not many SPN problems unless the Service Principal Names change.

    Summary

    I hope that you have learned a few new things:

    • How to search for duplicate Service Principal Names as well as how to add Service Principal Names.
    • How to easily filter network traces to confidently determine where Kerberos authentication is failing.
    • How the HTTP protocol works with authentication so that you can determine how you authenticated to the web site.

    This blog post is the first in a three-part series that will cover the most common misconfigurations as they relate to Service Principal Names. So if I have not yet covered one of your current issues, please check back soon. We will be covering issues like duplicate SPNs or the Service Principal Name being configured on the wrong account.

    - Robert Greene

  • Troubleshooting Kerberos Authentication problems – Name resolution issues

    Hi Rob here. I thought I would show you how we in Microsoft Commercial Technical Support typically troubleshoot Kerberos authentication issues. This discussion should do much to get you more comfortable viewing network traces for Kerberos authentication problems. There are other ways to troubleshoot Kerberos; one could use the Kerberos event logging outlined in KB 262177. Although you could rely on this method, it will take longer to resolve the issue and involves making some educated guesses without the network trace.

    I am going to lay out my lab configuration in case you want to reproduce the problem and look at the network traces on your own.

    Forest layout:

    image

    The root domain litwareinc.com has one domain controller in the domain, and one member server.

    Domain Controller network configuration:

    Host Name:  LTWRE-RT-DC1
    IP Address: 10.10.100.20
    DNS:  10.10.100.20
    WINS: 10.10.100.60

    Member Server network configuration:

    Host Name:  LTWRE-RT-MEM1
    IP Address: 10.10.100.21
    DNS:  10.10.100.20
    WINS: 10.10.100.60

    The child domain litware-chld.litwareinc.com has one domain controller in the domain, and one member server.

    Domain Controller network configuration:

    Host Name:  LTWRE-CHD-DC1
    IP Address: 10.10.200.20
    DNS:  10.10.200.20
    WINS: 10.10.100.60

    Member Server network configuration:

    Host Name:  LTWRE-CHD-MEM1
    IP Address: 10.10.200.21
    DNS:  10.10.100.20
    WINS: 10.10.100.60

    NOTE: I’m stating the obvious here, I know, but this configuration is for testing only. Having only one DC per domain usually means you’ll be rebuilding the forest at some point.

    Network based troubleshooting (network captures) is the fastest way to determine the problem, and by learning a few short filters you can effectively troubleshoot most Kerberos-related problems.

    You can use any network capture utility that you feel comfortable with. I prefer Netmon, nmcap (part of Netmon 3.x) or netcap (XP and 2003 support tools) to collect the network trace, and I use Wireshark to view the network capture. This is in no way an endorsement of Wireshark – feel free to use Ethereal, Packetyzer, etc.

    Problem scenario:

    There is a service running on the LTWRE-RT-MEM1 server that runs as the “LocalSystem” account. This service connects to a file share on LTWRE-CHD-MEM1 named “AppShare” to access some files. The service is failing to retrieve the files and is giving you an “Access is denied” error. When you attempt to access the share as a domain user account on LTWRE-RT-MEM1, you are able to access the share.

    Auditing for Logon/Logoff was enabled on LTWRE-CHD-MEM1, so you start by examining the security event log.

    When the LITWAREINC\Administrator attempts to access the share we get the following Audit Event:

    image

    Notice how the user that authenticated to the server is the “LITWAREINC\Administrator” account. It used NTLM authentication and the source machine name is LTWRE-RT-MEM1.

    When the Service attempts to access the share we get the following Audit Event:

    image

    Notice that when the service attempts to authenticate to the server it is doing it anonymously.

    Hey, why is the computer authenticating to the other machine using NTLM authentication?
    I thought we were in the 21st century with Kerberos authentication?

    As it turns out, starting with Windows XP and Windows Server 2003, a computer account cannot fall back to NTLM authentication when accessing a remote resource. If Kerberos fails, it will use Anonymous Logon credentials and typically fail.

    That means we have to figure out why Kerberos authentication is failing on LTWRE-RT-MEM1 when accessing a share on LTWRE-CHD-MEM1.

    Typically when you troubleshoot using network captures, you want to install the network capture utility on both ends of the communications to make sure that there are no network devices (firewalls, routers, switches, VPN appliances, etc.) that are manipulating the packet in between the two systems. We call this taking a double-sided trace.

    When working with a customer, we will typically request a double-sided network capture be taken. In this scenario I would start with installing the network capture utility on the source and destination server to see what is going on.

    So the next question I guess becomes what are the steps to taking a good network capture?

    Well, we want to see all name resolution, and we will also want to ensure that we see the Kerberos tickets (Authentication) in the capture. We also want to make sure that we can reproduce this problem at will to see this problem for ourselves.

    So, how can we reproduce the problem?

    1. Get a command prompt as the “SYSTEM” and attempt to access the remote system.

    On Windows 2000, Windows XP, and Windows Server 2003, we can use the AT command to get a command prompt running as the “SYSTEM” account by typing the following command:

    AT <Military Time in Future> /Interactive “cmd.exe”

    For example, if the time is currently 7:04 PM you would type in: AT 19:06 /Interactive “cmd.exe”

    Then at 7:06 PM you should see a command prompt pop up

    image

    NOTE: You have to do this while logged into the console session. If you are RDP’ed in, you need to start the RDP session with the /console switch; otherwise you will never see the command window start.

    2. Start the network capture utility.

    3. Clear all name resolution cache as well as all cached Kerberos tickets.

    • To clear the DNS name cache, type: IPConfig /FlushDNS
    • To clear the NetBIOS name cache, type: NBTStat -R
    • To clear Kerberos tickets, you will need KList.exe: KList purge

    The above commands need to be run in the command prompt that came up as “SYSTEM”.

    4. Now you need to run a command that will require authentication to the target server. Either of the following will do:
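
    For example, using the lab names above (any command that forces authentication to LTWRE-CHD-MEM1 will work; these two are a reasonable guess at what was used):

    net view \\LTWRE-CHD-MEM1
    dir \\LTWRE-CHD-MEM1\AppShare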

    5. Once you get the error message, stop and save the network captures.

    Reviewing the network capture:

    If you are using Wireshark to view the trace, the filter is simple: “dns || kerberos || ip.addr==<IP Address of Target machine>”. Basically, this filter means “Show me all packets sent to or from the target machine, all DNS name queries and responses, and all Kerberos authentication.”

    It should look similar to this:

    image

    Once you have the network capture, you should see all DNS, Kerberos authentication (as well as packets that have Kerberos tickets in them), and anything destined for the remote system.

    Before we go over the capture too much, we should probably cover at a high level the steps taken to connect to a remote file share.

    1. Resolve the host name for the target system to an IP address.

    a. Look in the HOSTS file.
    b. Query DNS.
    c. Look in the LMHOSTS file.
    d. Query WINS / NBNS.

    2. Ping the remote system.
    3. Negotiate an Authentication protocol. Kerberos is preferred for Windows hosts.
    4. Request a Kerberos Ticket.
    5. Perform an SMB “Session Setup AndX” request and send authentication data (a Kerberos ticket or an NTLM response).

    Let’s look at those steps in more detail.

    Step 1 - resolve the name:

    image

    Remember, we did “IPConfig /FlushDNS” so that we can see name resolution on the wire. Frame 1 is the query going out. Hmm, this looks kind of funny: we are querying for LTWRE-CHD-MEM1.litwareinc.com, even though the server actually lives in the child domain. That by itself should be harmless, I suppose, since the DNS server should not find the record. But wait: Frame 6 shows that the DNS server responded to the query with 10.10.200.21, and sure enough that is the correct IP address for the target server.

    Step 2 - ping the remote system:

    image

    Yep, the remote system can be pinged. See the Echo request and reply; the system is up and available.

    Step 3 - Negotiate Authentication:

    image

    So now we negotiate the authentication protocol and the remote system responded; the response is the more important part of the packet. We see that it supports MS KRB5, KRB5, and NTLMSSP; it even gave us the principal name of the system.

    image

    Step 4 - Request a Kerberos ticket:

    image

    Alright, now to the meat of Kerberos authentication and viewing it in a network trace. If you remember, we used the KList purge command to clear out all tickets on the system. That means that the server has to get a Ticket Granting Ticket (TGT) first, and this is why you are seeing the AS-REQ and AS-REP frames. If Kerberos ticketing is new to you, I would suggest reviewing the blog on how Kerberos works.

    Next, we see the TGS-REQ in Frame 18; let’s take a closer look at this packet in the details pane.

    image

    You can see that the system is handing its TGT to the Kerberos Key Distribution Center (KDC) under “padata: PA-TGS-REQ” section, and requesting a ticket for server “cifs/LTWRE-CHD-MEM1.litwareinc.com” in the LITWAREINC.COM realm (Windows Domain) under “KDC_REQ_BODY” section.

    OK, so we now know that we are requesting a Kerberos ticket for “cifs/LTWRE-CHD-MEM1.litwareinc.com” in the litwareinc.com domain. This will not work, since the remote system actually lives in the “litware-chld.litwareinc.com” domain. So you see why the KDC responded back with KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN. Again, if you do not understand this, please review the blog on how Kerberos works.

    Step 5 - Perform a SMB “Session Setup AndX request”:

    image

    So we see in the following Frames:

    • Frame 20 shows that, since Kerberos failed due to an unknown service principal name, the NTLMSSP_NEGOTIATE authentication package is selected. Frame 21 shows the remote system sending the NTLMSSP_CHALLENGE back (this is typical).
    • Frame 22 shows that the system sent no NTLM credentials to the remote system. It is authenticating as NT AUTHORITY\Anonymous.
    • Frame 23 shows that the remote system allowed the session to be created.
    • Frames 24 & 25 show that we do a Tree Connect to the IPC$ share and get a response.
    • Frames 26 & 27 show that we connect to the SRVSVC named pipe and get STATUS_ACCESS_DENIED back.

    So where do you think things start to go wrong here in the trace?

    If you answered DNS name resolution, you would be correct. If name resolution is not working properly in the environment, it will cause the application requesting a Kerberos ticket to request a service ticket for the wrong service principal name. Remember, the remote file server I am attempting to connect to lives in the child domain (litware-chld.litwareinc.com); however, the DNS server returned a record for ltwre-chd-mem1.litwareinc.com. Since we “found” the remote file server in the litwareinc.com domain, the Kerberos client requests a service ticket for “cifs/ltwre-chd-mem1.litwareinc.com”, as noted in the Kerberos ticket request, and the KDC responds with KRB5KDC_ERR_S_PRINCIPAL_UNKNOWN.

    I did another net view specifying the FQDN of LTWRE-CHD-MEM1 and WOW, look at the output:
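
    The command was roughly this (using the child domain name from the lab layout above):

    net view \\ltwre-chd-mem1.litware-chld.litwareinc.com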

    image

    That actually worked! So, how can we fix this problem?

    Actually, there are several different ways to “fix” the problem:

    a. Find out why DNS is resolving the machine name incorrectly.

    i. Is there a HOST or CNAME record for this name?
    ii. Did you configure the DNS Zone for WINS lookup?

    b. Configure your application to use the FQDN of the system instead of NetBIOS name.

    c. We could add a Service Principal Name of “CIFS/LTWRE-CHD-MEM1.litwareinc.com” to LTWRE-CHD-MEM1.

    The best way to “fix” the problem is to actually fix DNS name resolution. By the way, the lab was configured with “WINS Lookup” enabled on the litwareinc.com DNS zone, which is why the record resolved incorrectly. If Kerberos authentication is failing for the LocalSystem account, it is more than likely also failing when users go to the remote system. However, they are not getting “Access is denied” because user accounts, unlike machine accounts, can fall back to NTLM and authenticate with credentials rather than as Anonymous.

    If you find that fixing the DNS problem is not possible, then the next best solution would be to make the application use the FQDN of the server. Keep in mind that the application vendor would need to be involved to use this fix.

    The least favorite method to resolve the issue would be to add the SPN to the destination server using the SetSPN.exe tool. This is the least favorite because you are adding another name to the machine account in another domain. What would happen if in the future you bring up a new computer in the root domain with the same name? Now you have a duplicate SPN and this will lead to other Kerberos authentication problems.

    Well, I hope that you have learned a few new things like:

    • How name resolution problems could cause Kerberos authentication to fail.
    • How to easily filter network traces to confidently determine where Kerberos authentication is failing.
    • How the SMB protocol and authentication look in a network trace.

    Please keep in mind that there are several other ways that name resolution could cause Kerberos authentication to fail. You could have static WINS entries in the database, or you could have wrong entries in HOSTS / LMHOSTS files. You could be failing because of a CNAME or “A” (HOST) record within your DNS zone, or simply because the DNS zone is configured for “WINS Lookup”.

    Robert Greene

  • Verifying File Replication during the Windows Server 2008 DFSR SYSVOL Migration – Down and Dirty Style

    Hi, Ned here again. You’ve probably already started reading about how Windows Server 2008 now supports using Distributed File System Replication (DFSR) technology to synchronize SYSVOL. This means that the legacy File Replication Service (FRS) is no longer necessary and all the DFSR advantages for reliability, scalability, and performance can be realized.

    As with any migration, it’s important that you enter and leave in a stable state – we don’t want to fight the domain environment’s issues midstream in the migration after all. So today we will talk about how to validate that replication is working in FRS before you migrate and that DFSR is working when the migration is completed.

    Before I begin – the official TechNet documentation for the migration process is still in progress. Also, when it is released the docs will reference using (a not yet released version of) the Ultrasound tool. This post will not be about Ultrasound. Instead, I am going to use a combination of FRSDIAG and the DFSR Propagation tool. This is not as sophisticated as using Ultrasound, but it’s considerably simpler and easier – both good selling points.

    Understanding the Migration Process

    Before you attempt the migration, there is some mandatory reading. Even though I stated earlier that the SYSVOL TechNet docs are coming, there is an outstanding series of articles from the DFSR Core Development team PM Mahesh Unnikrishnan that covers the process in a systematic manner:

    SYSVOL Migration Series: Part 1 – Introduction to the SYSVOL migration process
    SYSVOL Migration Series: Part 2 – Dfsrmig.exe: The SYSVOL migration tool
    SYSVOL Migration Series: Part 3 - Migrating to the 'PREPARED' state
    SYSVOL Migration Series: Part 4 – Migrating to the ‘REDIRECTED’ state
    SYSVOL Migration Series: Part 5 – Migrating to the ‘ELIMINATED’ state

    These docs cover the entire migration end to end. You will see in Part 3 that it quickly covers verifying your FRS replication health with SONAR, FRSDIAG, or Ultrasound (and I’d throw MOM 2005 or SCOM 2007 in there as well).

    FRS testing prior to setting migration to the Prepared state

    When migrating SYSVOL, the Prepared state is the phase where FRS is still replicating files and the SYSVOL share still points to its old location. We need to make sure that there are no existing replication problems with FRS before we go buck wild with DFSR – even though DFSR will do its own replication once we enter the Prepared state, the data it starts with from FRS needs to be consistent. Not to mention that sensitivity about FRS will be heightened now, and existing issues with FRS are more likely to be:

    1) Noticed.
    2) Blamed on the migration process.

    So let’s do this:

    1. Download FRSDIAG.

    2. Install it on the PDC Emulator.

    3. Create a file called FRSCANARYTEST.TXT in the \\domain\sysvol\domain path

    (ex: \\contoso.com\sysvol\contoso.com\FRSCANARYTEST.txt).

    clip_image002
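
    If you prefer to do step 3 from a command prompt, something like this works (a sketch using the contoso.com example above):

    echo canary > \\contoso.com\sysvol\contoso.com\FRSCANARYTEST.TXT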

    4. Verify there are no other TXT files in this location.

    5. Allow the file enough time to replicate to all DC's in the domain (you need to have an understanding of what expected SYSVOL replication time is in your environment – it could be several days in severely latent networks).

    6. Run FRSDIAG and click Browse under 'Target Servers' - add all DC's in that domain.

    clip_image004

    7. Click: Tools -> Propagation File Tracer

    clip_image006

    8. Change the path to <your fqdn domain>\*.txt

    (ex: contoso.com\*.txt)

    clip_image008

    9. Set 'Expected number of hits' to 1.

    10. Click GO.

    11. Examine the results to verify all servers return green, status Done, results Passed. If not, examine the failed servers directly – are there errors in the server details tab in FRSDIAG or in the FRS event logs, and is SYSVOL shared out?

    clip_image010

    DFSR testing prior to setting migration to the Eliminated state

    So everything was fine with FRS (or not, and we fixed it) and we’ve moved through the Prepared and Redirected phases of the SYSVOL migration. Now we’re ready to move into the Eliminated state and we want to make sure that DFSR is working fine for SYSVOL replication – because there’s no going back! We won’t need any add-on tools now because DFSR gives us its own reporting mechanism:

    1. Start DFSMGMT.MSC.

    2. Expand Replication -> Domain System Volume

    3. Select 'Create Diagnostic Report'

    clip_image012

    4. Select 'Propagation Test'

    clip_image014

    5. Select SYSVOL Share, select the PDCE as the propagation server, click Next.

    clip_image016

    6. Select Create and Close. Wait as long as it should take for replication convergence to occur (only you can know this for your environment).

    7. Select 'Create Diagnostic Report'

    8. Select 'Propagation Report'

    9. Select SYSVOL Share, select the PDCE as the propagation server, click Next.

    10. Create and Close.

    11. Open the propagation report when prompted and verify file replicated to all
    DC's.

    And lookee lookee:

    clip_image018

    clip_image020

    clip_image022

    Pretty slick. It told us where we replicated from, to which DCs, and how long it took. If the propagation file had not shown up yet or there had been errors, we’d see them here.
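
    One more quick sanity check before you pull the trigger on the Eliminated state: dfsrmig.exe (covered in Part 2 of Mahesh’s series above) can tell you whether every DC has actually reached the current global migration state. Roughly (a sketch; run it on a DC such as the PDCE):

    dfsrmig /GetGlobalState
    dfsrmig /GetMigrationState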

    Wrap Up

    As you can see, testing the state of SYSVOL replication doesn’t require complex or expensive tools. This post was not intended to be the final word, of course – testing your AD replication is just as important, since ultimately the settings in the Active Directory database are what FRS and DFSR use to configure replication. I intentionally did not go into detail on what to do if you find file replication problems either, since they’ve been pretty well documented for years now. If you’re interested in that kind of info anyway, please let us know.

    Ned Pyle

  • How to get the most from your FRSDiag…

    Hello all, it’s Randy here again. The File Replication Service (FRS) is a technology used to synchronize data between several data shares on different computers, often in different sites throughout an organization. Any change made to the FRS data is updated on all the partners that share this replication. This is also the technology that manages the contents of SYSVOL on all the domain controllers in the domain. There are a lot of moving parts to FRS, and you may need to troubleshoot why the contents are not consistent on all the SYSVOL servers.

    FRSDiag is a .NET utility that can be used to gather diagnostics and troubleshoot FRS; it can be run with administrative privileges on any computer against any computer running FRS. The utility can be downloaded here. The report gathers data on all replica sets in which the target server is involved. This includes custom replica sets in the DFS namespace as well as SYSVOL replication. The examples in this blog post reference SYSVOL replication, but the tips also pertain to replication in DFS. Below is a screenshot of the FRSDiag GUI interface.

    image

    Be sure to check out tools under the menu bar. This is a great way to do some simple troubleshooting tasks, like forcing FRS replication and querying for the OriginatorGUID of an FRS member server (OriginatorGUID is explained later.)

    IMPORTANT

    In order for any of this discussion to make sense, you first need to know how everything happens. If you are not an expert in FRS, please review the following article before proceeding. How FRS Works.

    SUMMARY

    The output from FRSDiag can look cryptic, so to simplify things, I want to separate the data gathered into three separate areas. The first area is Topology – this information focuses on the connections between servers and the components that make replication work. The second is VersionVector – this information tells replication partners whether their data is current and what needs to be replicated with others. The third area is the Data being replicated. I will separate this blog post into these three areas and show some reports that give us this information. Lastly, I will not be discussing errors in the FRS event logs, as these are well documented. All the event logs are gathered in the FRSDiag report; searching the Microsoft support website for the event codes is a great way to troubleshoot an issue and find solutions.

    TOPOLOGY

    The topology of your FRS environment lays out a map of how the data propagates to the FRS servers. The components of the topology are the Replica Sets, Replica Members, and the Connection Objects. A Replica Set is the replication of files and directories on a specific folder. This can be the SYSVOL folder, or a DFS folder using FRS to copy the contents among multiple targets. The Replica Members are the FRS servers that participate in the Replica Set. The Connection Objects are the paths that data can travel from an Upstream Replica Member to a Downstream Replica Member. The upstream member is where the changes happen and the downstream member receives these changes. When a change is made on the upstream partner, it sends a change notification to its downstream partners. When the downstream partner initiates the replication, it will reply to the upstream partner with a change acknowledgement and then pull the changes.

    The FRSDiag report is run on one server and the reports only detail the components relevant to that server. Therefore, you need to run the report on all the Replica Members in the Replica Set in order to get a full picture of the topology. You can select all the members of the replica set by choosing the Browse option when selecting a target server.

    image

    Some reports that show this information are connstat.txt, ntfrs_config.txt and ntfrs_sets.txt. If you are troubleshooting SYSVOL, you can also use the two repadmin reports. The repadmin reports pertain to Active Directory replication. Because SYSVOL uses the same Connection Objects as Active Directory replication, these reports can be helpful.
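
    The same raw data is also available from the ntfrsutl command-line tool (part of the Windows support tools) if you ever need it without FRSDiag. Roughly (a sketch, run against whichever member you are interested in – ADAR2DC1 here):

    ntfrsutl sets ADAR2DC1
    ntfrsutl ds ADAR2DC1
    ntfrsutl outlog ADAR2DC1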

    Connstat.txt, ntfrs_config.txt and ntfrs_sets.txt all have their own unique bits of information, but they all basically say the same thing. All three reports group information on the Replica Set of which the target server is a member (the target server being the server that FRSDiag was pointed at when the report was run). Ntfrs_sets includes information on each of the Connection Objects used by the target server for that Replica Set. Here is a portion of my SYSVOL Replica Set information from my ntfrs_sets report.

    image

    Grouped beneath this Replica Set information are three Connection Objects that the target server uses.

    image

    This represents the connection to the NTFS Journal. This connection object pulls change orders from the local file system and populates data that originated on this server.

    image

    This represents the upstream member ADAR2DC2 and the downstream member ADAR2DC1. We have two listed because we have two-way replication. They look the same but you can tell which is upstream from another by looking in the ntfrs_ds.txt report. All of these components are objects in Active Directory, and ntfrs_ds.txt is the LDAP output of these objects. If you search this report for the Cxtion GUID: D9BDBC95-DCFC-43FA-8FAA-F6F60020669E, you will see that it is the Connection Object listed under “cn=d9bdbc95-dcfc-43fa-8faa-f6f60020669e, cn=ntds settings, cn=adar2dc1, cn=servers, cn=default-first-site-name, cn=sites,cn=configuration, dc=adatum, dc=com.” If this name sounds familiar, it’s because you see this in the AD sites and services MMC – we just display the connection object as <automatically generated>.

    image

    Now we will look at Connstat.txt. This is the most informative of the reports and where I go first to get a good understanding of the situation. Even though we are discussing it under the context of topology, it also contains a wealth of information on the up-to-dateness of the downstream members and will make a good transition to our next topic, VersionVector information.

    Let’s look at Connstat.txt from a sample report.

    image

    This displays all of your connections grouped by the replica set. In our case, our replica set is SYSVOL and we are reporting on domain controller DC04. We see some log data at the top: OutLogSeqNum: 5053 and OutlogCleanup: 5053. The OutLogSeqNum indicates the number of changes in the local FRS database. The OutlogCleanup indicates what file number has been acknowledged and pulled from all downstream partners. In this case the numbers are the same, so all downstream partners are in sync with this domain controller.

    The next portion of data is registry information pertaining to the replica set. We see the root and staging path, these are local paths on the file server.

    The last portion is a spreadsheet of each of the replica partners defined by each connection object. Each line represents one connection object, so in this scenario we see two entries for each domain controller (because we have a connection replicating in each direction.)

    PartnerName is the name of either the upstream or downstream partner. It is upstream if the next column (I/O) indicates ‘In’, it is a downstream partner if the I/O indicates ‘Out’.

    The Outbound connections are more interesting because they show how up to date the downstream partner is. When troubleshooting, you will find better information in the upstream partner’s Connstat.txt.

    Here is some information on how to read Connstat.txt. It goes into more detail than I do on the values and their meanings, so it is worth a look.

    So let’s take a look at one of these entries. We will look at LITWAREINC\DC01$ (I/O = Out.) This is the connection object for DC04 as the upstream and DC01 as the downstream.

    Rev = 8 – This indicates the revision number of NTFRS being used. A revision of 8 indicates that we are running Windows 2003.

    LastJoinTime = Fri Feb 1 – This will include the date and time; I included just the date portion in order to get the output to fit on one page. It indicates the last time the downstream partner connected, and Last VVJoin is the last time the downstream partner compared version vectors to do a full synchronization. See How FRS Works to compare a join to a vvjoin and when they are performed.

    State = Joined and OLog State = OLP_ELIGIBLE indicate the current status of the connection. Do not be alarmed if it shows unjoined, this is normal. See Connstat remarks for an explanation of the state values.

    In order to verify connection state and how far out of sync a partner is, look at the LeadX and TrailX values and compare them to the OutLogSeqNum of the upstream partner. In my environment, you see that all three equal 5053. This indicates that we are up to date: the upstream partner’s outbound log sequence number is the same as what the downstream partner has been notified of (the LeadX value) and what changes it has acknowledged (the TrailX value).

    In my case, I had a remote DC (Remote01) that appeared to not receive updates to SYSVOL. By using the information in the Connstat reports, I found the topology described in the picture below, and I was also able to determine that our real problem was DC04, not Remote01.

    image

    Our symptom was that Remote01 did not have the latest GPO changes. Looking at the Connstat of Remote01, we appear to be synchronized with DC04, and we are waiting for DC04 to accept two changes. Below is the output.

    image

    Our OutLog Sequence Number is 8396 and the TrailX value for DC04 is 8394, indicating that 2 changes have not been accepted.

    If we look at DC04, there appears to be no problem; this reinforces the lesson that the upstream partner holds the relevant information in its connstat report.

    image

    If we look at a Connstat on DC05, we see that DC01 is behind by 10 changes (not alarming) but DC04 is behind by 127 changes. This led us to the discovery that our real problem was DC04.

    image

    In our case, we just created connection objects from DC05 to Remote01 instead of DC04 and it replicated immediately. This example shows the importance of knowing the complete replication topology. In addition to the topology information, we also looked at how up-to-date the servers are with their partners. This leads us to our next discussion: VersionVector objects.

    VersionVector

    One important component of synchronizing data is the ability for servers to distinguish what changes are needed and what changes have already been received. The VersionVector is a summary of how up-to-date this FRS member is with all the updates made on the other members of the replica set.

    Consider this scenario. You are throwing a huge party and want as many people to come as possible. To do this, you send out invitations to a group of friends and tell them to invite whoever they want. You need each of your friends to keep a list of attendees that is up-to-date regardless of which friend made the invitation. We need a simple way to ensure that all lists are the same and to propagate the changes as quickly as possible. To do this, we will require that each entry on the list require three things:

    1. The person invited

    2. Which friend made the invitation

    3. An invitation number.

    The invitation number is a simple count of how many invites a particular friend has made. So as I invite people I will count them 1,2,3,4,5 and so on. When I want to update my list with another friend, I can just tell them my latest count and they will know how many entries are required from me. Because there is a count associated with each friend, I can update someone's list with my invites, as well as invites from others. I will keep a table indicating how many invites I have received from others, similar to the one below:

    Randy       15
    Bob         18
    Sean        14
    Tim         21
    Jonathan    16

    I run into a friend and his table looks like this:

    Randy       11
    Bob         14
    Sean        17
    Tim         12
    Jonathan    16

    I can quickly see that I need to tell my friend about my invites 12-15, Bob’s invites 15-18, and Tim’s invites 13-21. I also see that he is aware of all invites on my list originating from Sean and Jonathan because his number is equal to or greater than mine.

    This is how VersionVectors work. The VersionVector table is similar to the table above. Each member of the replica set is listed by an OriginatorGuid, and each OriginatorGuid has an associated Version Sequence Number (VSN). When one partner updates another (during a Join), that partner can provide updates that originate from it as well as changes originating from other members in the replica set.

    Good reports for reviewing the VersionVector components are NTFRS_SETS.txt and NTFRS_OUTLOG.txt. The outlog is the outbound log in the NTFRS database and includes all the latest change orders that the server posts for downstream partners. Each entry can be a local change order that originated on the server itself, or one received from its upstream partner that is trickling down to its downstream partners. Here is a sample entry in the OUTLOG.txt:

    image

    In this change order, we see a lot of GUIDs that are represented in the NTFRS_SETS.txt referenced above. We can track the referenced Connection Object by comparing the GUIDs in the change order above. First, we see that the change order above is for the SYSVOL replica set:

    Table Type: Outbound Log Table for DOMAIN SYSTEM VOLUME (SYSVOL SHARE) (1)

    We locate this in NTFRS_SETS.txt:

    ACTIVE REPLICA SETS

    Replica: DOMAIN SYSTEM VOLUME (SYSVOL SHARE) (fac09b1b-fac4-41f2-95d63550da9f09bb)

    We then see in the OUTLOG change order:

    CxtionGuid : 7aa4ee28-09d5-498c-8a68c1bbf7e3c416

    We find this Connection Object in NTFRS_SETS.txt under our Replica Set:

    Cxtion: D9BDBC95-DCFC-43FA-8FAA-F6F60020669E (7aa4ee28-09d5-498c-8a68c1bbf7e3c416)

    Let’s look at the properties of this connection object in NTFRS_SETS.txt:

    image

    Earlier we researched in Active Directory that this connection object has ADAR2DC2 as the upstream partner and ADAR2DC1 as the downstream partner. We see attributes of the partner – in this case it is ADAR2DC2, because the report was run from ADAR2DC1. We also see that it is an inbound connection (meaning that we are the downstream server). We also see other valuable information such as the last Join time and status.

    We can now associate a change order in NTFRS_OUTLOG.txt with the connection object on which the change was received. We can also look back in the change order and see the VersionVector associated with this change, listed as FrsVsn.

    FrsVsn : 01c8a008 5103ee38

    In order to find this information in NTFRS_SETS.txt, we need to look at the Replica Set information rather than the specific connection object. If you scroll up to the beginning of the Replica set information, you will see its summary prior to the listing of the associated connection objects. Below is a screenshot of the Replica Set information

    image

    The last portion of this output is the VersionVector information. The VersionVectorTable consists of VvEntries. These entries are the pairings of the OriginatorGuid and the Version Sequence Number described earlier. The OriginatorGuid is a random number assigned to each of the FRS member servers. This number changes on a member whenever a VVJoin is done or a member is marked as authoritative by setting the BurFlags. The Version Sequence Number (VSN) is a hexadecimal number that increments with each change originating on that member. An easy way to determine the OriginatorGuid of a member is to open the FRSDiag interface and select “Tools > Build GUID to Name for Target Server(s)” from the menu bar. You can search on these VvEntries in the Outlog to locate the last known change on a particular connection. The entries under Replica Version Vector indicate the latest change order originating on that member server, and Outlog Version Vector indicates the last entry purged from the Outbound Table. There is one more place that references the OriginatorGuid, and that is the FileIDTable. We will discuss this in our final topic, the data being replicated.

    Replicated Data

    The FileIDTable is a report that is not selected by default. It is a checkbox in the lower left corner of the tool named “ID Table Parser”. This report is a spreadsheet of every file and folder in the NTFRS database. You will get a warning message indicating that it could take an extremely long time to process. This warning is there because FRS is sometimes used to replicate a DFS link containing hundreds of thousands of files, and parsing an ID table that large can take a very long time. If you are running this utility against a domain controller that does not host a DFS link, then this is typically not an issue. Below is an example of this output; the columns have been shortened to fit within this page, and the GUIDs are much longer than those displayed:

    image

    Every file has a FileGuid that remains the same even if we rename the file. If we delete the file and create one with the same name, it will have a new FileGuid. Each entry also contains a ParentGuid attribute, which tells us the folder where the object exists. With the combination of the FileGuid and ParentGuid, you can construct the entire file/folder hierarchy of the replica set. As you can see in the picture above, the first entry on the list has a Parent ID of all zeros and a file path of “.”; this is the replica root folder. For SYSVOL replication, the SYSVOL share is “.” and all other files stem from this point. If you are comparing FileID tables between different members, you will see that the FileGuid of the root folder is different for each member, but all other files and folders will share the same GUID across all members. The table also includes when and on which member each object was created. You will see the member server’s own OriginatorGuid as the Originator for the replica root folder; this folder always originates on the local server.

    You can see the activities of these objects, as well as all the activity on the FRS member, by looking at the NTFRS logs. There are numerous logs named in the format NTFRS_00001, with the largest number being the most recent. There is a lot of good information on reading these logs in the How FRS Works article; when reading these log files, keep that article open as a reference so you can follow along with what is happening. A good learning exercise is to create a test file and watch it originate on one partner and propagate to another. You can also see how often data is replicating by looking at the time stamps, or why the data is replicating by looking at the USNReason. Be sure to read the article on How FRS Works to see what all the entries mean.

    As you may already know, FRS has been superseded by the DFSR (Distributed File System Replication) service introduced in Windows Server 2003 R2 and updated in Windows Server 2008. So why should we pay attention to something that is being replaced? The answer is SYSVOL replication between your domain controllers. Most of you will adopt DFSR in your distributed file system environment because of its enhanced efficiency, durability and reporting, but FRS will still have its place amongst your domain controllers to replicate SYSVOL content until you start deploying Windows Server 2008 and move to DFSR for SYSVOL. You will be able to migrate SYSVOL replication to DFSR, but it will require your domain to be at the Windows Server 2008 domain functional level with all your domain controllers running Windows Server 2008. For some of you, this may take some time, so in the meantime, hopefully you will find this information helpful.

    See you next time!

    - Randy Turner

  • Default Security Templates in Windows 2008

    Hi, David here again. You might be familiar with Security Templates that we use in Windows 2000 and 2003. The template is sort of the master set of security settings that we apply to a server when you either set it up or configure it using the Security Configuration and Analysis tool. Here in DS we often work with customers who are troubleshooting problems with a security template they’ve applied, or who are trying to build a custom security template for the servers in their environment.

    A few weeks ago, one of my coworkers asked me if we had the same functionality in Windows 2008. He was on the phone with a customer who had modified some settings and needed to restore the defaults because their applications were now not working properly.

    Note:
    this is why we always recommend testing and, more importantly, documenting security changes that you make.

    The answer, of course, was yes. Security templates do still exist in Windows 2008, but, as you might have guessed – they’ve changed.

    If you look at a Windows 2003 server, you’re going to see a bunch of default security templates:

    •    Setup Security.inf
    •    DC Security.inf
    •    Compatws.inf
    •    Secure*.inf
    •    Hisec*.inf
    •    Rootsec.inf
    •    …and so on

    These files can be found at the following location on Windows Server 2003: %systemroot%\security\templates.

    (Here’s a handy list of what those templates all do, by the way)

    However, if you look at a Windows 2008 server, there are actually only 3, and only 2 of them are really used.

    •    Defltbase.inf (not really used)
    •    Defltsv.inf (for servers)
    •    Defltdc.inf (for DCs)

    These files are located at the following location on Windows Server 2008: 
    %systemroot%\inf.

    There are a few additional templates as well, that apply to specific upgrade/installation scenarios. For example, dcfirst.inf is applied when building the first domain controller in a forest. The main difference here is that it sets the account policy as well. However, the three I listed above are the only default templates that should ever be applied manually.

    Here’s a snip of what a security template looks like:

    clip_image002

    That’s just a portion of the service settings – the templates are pretty big and affect a lot of things – user rights, service settings and permissions, file system permissions, and registry permissions, just to name a few. All those character strings you see there are SDDL strings. If you’d like to learn more about how those work, my buddy Jim wrote an excellent blog post (in two parts: Part 1 and Part 2) a little while back that shows how it all fits together.

    Please note: Reapplying the default security templates to a system does have the potential to remove any custom configuration that has been performed. This can be used to resolve problems resulting from incorrect security settings, but it may also break applications that relied on the custom configuration in order to function. Please also note that this is not a panacea – custom security settings in areas that the security template doesn’t cover (such as a data volume that you create after installation) will not be rolled back by this. Proceed with caution.

    If you do need to set a machine back to the default template, here’s how to do it. First, open a new MMC, and add the Security Configuration and Analysis snap-in.

    image

    The console helpfully tells you how to create a new database right there in the middle pane. When you do this, you’ll be asked to import the template you want to use initially. Start with defltsv.inf (if this is a member server) or defltdc.inf (if this is a DC). Once the template is imported, right-click, and choose Analyze Computer Now…

    clip_image002[5]

    This is the real strength of the snap-in. It has the ability to look at the system and tell you what’s different from the template that you loaded. Often this means that you don’t have to apply the template at all – instead you can zero in on the differences to find the source of the problem. You can browse through the different areas and look for red X’s – this will tell you that something on your computer is different from the base template. Here’s an example of how it looks:

    image

    A green checkmark means we’re the same, and a red X means that we’re different. This may not always be a bad thing, so it’s important to evaluate each setting individually. Usually if you are going through this process you already have an idea of where the problem is (file system permissions, registry permissions, etc), so that will help you narrow down the number of things to look at.

    Note: These settings can also be pushed through group policy, so make sure that you’re familiar with any policies applying to the computer before you start trying to apply a security template to it. You can see what policies are applied to the computer and what they’re doing by running gpresult /v from an elevated command prompt.

    Ok, so you’ve gone through all of this now and you still think you need to blow the machine back to default settings – the console can do that for you. All you’d have to do is right-click, and choose Configure Computer Now…

    image

    Use this option with caution, because rolling back the changes it makes is not easy (you should do this only after first using secedit /GenerateRollback in conjunction with the log file – see this link for more details). But once you’ve done this, your server will be back at the default settings, and you’ll have to go back in afterwards and redo any changes that you needed.

    For Server Core installations, or if you just like the command line, you can also accomplish all of this by using the SecEdit command-line tool.

    clip_image002[7]

    We generally recommend using the MMC when possible, as it’s a bit easier to use and gives you a nicer way to browse through settings that may be different, but the command-line tool can do everything that the MMC snap-in can do.
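    For reference, here’s a rough sketch of what that workflow might look like from the command line. The database and log file paths below are just placeholders I’ve picked for illustration, and the exact switch syntax can vary a bit between versions, so check secedit /? before you run anything:

    REM Analyze the server against the default member server template (placeholder paths)
    secedit /analyze /db C:\temp\default.sdb /cfg %systemroot%\inf\defltsv.inf /log C:\temp\analyze.log

    REM Generate a rollback template first, so the change can be undone later
    secedit /generaterollback /db C:\temp\default.sdb /cfg %systemroot%\inf\defltsv.inf /rbk C:\temp\rollback.inf /log C:\temp\rollback.log

    REM Apply the default template (the command-line equivalent of "Configure Computer Now...")
    secedit /configure /db C:\temp\default.sdb /cfg %systemroot%\inf\defltsv.inf /log C:\temp\configure.log

    If you do need to back the change out later, the rollback template that /GenerateRollback produced can be applied with /configure in the same way.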

    The security configuration tools are pretty powerful, and in addition to everything I talked about above they give you the ability to create your own custom security templates, which you can then apply using the MMC or the SecEdit utility, or even through group policy. If you’d like to know more about any of this, there’s a fairly in-depth write up on TechNet, which you can find here.

    One final bit of advice – whatever you do, make sure to test and document those changes. Here in DS, we talk to people every day that are having problems because security changes were made to their servers and those changes were never tested or documented. So if you’re careful and methodical in your approach to securing your servers, you can save yourself a lot of time and frustration down the road in your production environment, as well as make troubleshooting much easier and simpler if problems do occur.

    David Beach

  • The Security Descriptor Definition Language of Love (Part 2)

    Hi, Jim from DS here with a follow-up to my SDDL blog Part 1. At the end of my last post I promised to further dissect the SDDL output returned by running CACLS with the /S switch on the TOOLS share, as follows:

    clip_image002

    Here is the output exported to a .txt file:

    "D:AI(D;OICI;FA;;;BG)(A;;FA;;;BA)(A;OICIID;FA;;;BA)(A;OICIID;FA;;;SY)(A;OICIIOID;GA;;;CO)(A;OICIID;0x1200a9;;;BU)(A;CIID;LC;;;BU)(A;CIID;DC;;;BU)"

    Let’s examine the first segment more closely: "D:AI(D;OICI;FA;;;BG)(A;;FA;;;BA)"

    jim1

    Now the second ACE segment: (A;;FA;;;BA)

    jim2

    jim3
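
    In case the screenshots are hard to make out, here’s a plain-text recap of the same breakdown. The field layout is the standard SDDL ACE string format, and the acronyms are all in the charts at the end of this post:

    ACE string layout: (ace_type;ace_flags;rights;object_guid;inherit_object_guid;account_sid)

    D:AI              DACL follows; AI = auto-inherited (inheritance from the parent is allowed)
    (D;OICI;FA;;;BG)  D = access denied
                      OI/CI = object inherit and container inherit (flows to child files and folders)
                      FA = file all access (Full Control)
                      BG = Built-in Guests
    (A;;FA;;;BA)      A = access allowed
                      no inheritance flags, so it applies directly to this object
                      FA = file all access (Full Control)
                      BA = Built-in Administrators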

    You get the picture. There is a chart provided at the end which contains all the acronyms in addition to the ones illustrated in this output.

    At this point you may be asking why there are two different ACE entries for Built-in Administrators. The first ACE is applied directly to the object (in this case, TOOLS). The second is an ACE that TOOLS picked up via inheritance from its parent, and that continues to flow down to the content beneath TOOLS.
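
    For example, here are the two Built-in Administrators ACEs from the output side by side; the ID flag (see the ACE flags chart below) is what marks the second one as inherited:

    (A;;FA;;;BA)        explicit ACE - Full Control granted directly on TOOLS
    (A;OICIID;FA;;;BA)  inherited ACE - ID marks it as inherited from the parent, while OI/CI
                        let it continue flowing down to child files and folders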

    This is illustrated in the Permissions tab of Advanced Security Settings for the TOOLS share:

    clip_image002[5]

    Now you may well be wondering “Jim, how can I use this SDDL wonderment to make my administrative tasks less tedious?”

    Well here is an example on how you can do just that.

    Scenario: It’s Friday at 3 PM. You have to deploy 10 printers to the call center. Every single printer should have the exact same security settings for access (oversimplified, yes, but you get the point). You need to get this done expediently so as not to miss happy hour. All the printers have IP addresses assigned and are installed on your print server. You have applied the necessary security on one printer as follows:

    clip_image004

    Using the SETPRINTER utility you can view the security applied in SDDL format as follows:

    clip_image006

    Here is the command as well as the output:

    C:\>setprinter -show \\2003dom-member\printer1 3

    pSecurityDescriptor="O:BAG:DUD:(A;;LCSWSDRCWDWO;;;BA)(A;OIIO;RPWPSDRCWDWO;;;BA)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2604)(A;CIIO;RC;;;CO)(A;OIIO;RPWPSDRCWDWO;;;CO)(A;CIIO;RC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;OIIO;RPWPSDRCWDWO;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;LCSWSDRCWDWO;;;PU)(A;OIIO;RPWPSDRCWDWO;;;PU)"

    Now create yourself a .CMD file containing the following commands, remembering of course to substitute your print server name and your printer names where indicated. Also, be sure NOT to wrap your SDDL parameters; each entire command must be on one line (any wrapping you see below is purely for readability):

    setprinter \\”Print_Server_Name”\printer1 3 pSecurityDescriptor="O:BAG:DUD:(A;;LCSWSDRCWDWO;;;BA)(A;OIIO;RPWPSDRCWDWO;;;BA)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2604)(A;CIIO;RC;;;CO)(A;OIIO;RPWPSDRCWDWO;;;CO)(A;CIIO;RC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;OIIO;RPWPSDRCWDWO;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;LCSWSDRCWDWO;;;PU)(A;OIIO;RPWPSDRCWDWO;;;PU)"

    setprinter \\”Print_Server_Name”\printer2 3 pSecurityDescriptor="O:BAG:DUD:(A;;LCSWSDRCWDWO;;;BA)(A;OIIO;RPWPSDRCWDWO;;;BA)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2604)(A;CIIO;RC;;;CO)(A;OIIO;RPWPSDRCWDWO;;;CO)(A;CIIO;RC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;OIIO;RPWPSDRCWDWO;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;LCSWSDRCWDWO;;;PU)(A;OIIO;RPWPSDRCWDWO;;;PU)"

    setprinter \\”Print_Server_Name”\printer3 3 pSecurityDescriptor="O:BAG:DUD:(A;;LCSWSDRCWDWO;;;BA)(A;OIIO;RPWPSDRCWDWO;;;BA)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2604)(A;CIIO;RC;;;CO)(A;OIIO;RPWPSDRCWDWO;;;CO)(A;CIIO;RC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;OIIO;RPWPSDRCWDWO;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;SWRC;;;S-1-5-21-329599412-2737779004-1408050790-2605)(A;;LCSWSDRCWDWO;;;PU)(A;OIIO;RPWPSDRCWDWO;;;PU)"

    exit

    You may add as many similarly configured printers as you like.
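
    If you’d rather not paste that long SDDL string once per printer, the same idea can be written as a small loop in the .CMD file. This is only a sketch; swap in your print server name, your printer names, and the full SDDL string from your own setprinter -show output in place of the placeholder:

    @echo off
    REM Sketch: apply one SDDL security descriptor to several printers in a loop.
    REM Paste the complete SDDL string from "setprinter -show" inside the quotes below.
    set SD="O:BAG:DUD:(...full SDDL string goes here...)"

    for %%P in (printer1 printer2 printer3) do setprinter \\Print_Server_Name\%%P 3 pSecurityDescriptor=%SD%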

    Included below are charts for the acronyms of the SDDL taken directly from MSDN2. These can also be viewed here:

    http://msdn2.microsoft.com/en-us/library/aa374928.aspx

    ACE Type

    The ACE type designates whether the trustee is allowed, denied, or audited.

    Value   Description
    "A"     ACCESS ALLOWED
    "D"     ACCESS DENIED
    "OA"    OBJECT ACCESS ALLOWED: ONLY APPLIES TO A SUBSET OF THE OBJECT(S)
    "OD"    OBJECT ACCESS DENIED: ONLY APPLIES TO A SUBSET OF THE OBJECT(S)
    "AU"    SYSTEM AUDIT
    "AL"    SYSTEM ALARM
    "OU"    OBJECT SYSTEM AUDIT
    "OL"    OBJECT SYSTEM ALARM

    INHERITANCE Flags

    Value   Name                    Description
    "P"     SDDL_PROTECTED          Inheritance from containers that are higher in the folder hierarchy is blocked.
    "AI"    SDDL_AUTO_INHERITED     Inheritance is allowed, assuming that "P" is not also set.
    "AR"    SDDL_AUTO_INHERIT_REQ   Child objects inherit permissions from this object.

    ACE Flags

    The ACE flags denote the inheritance options for the ACE and, if it is a SACL, the audit settings.

    Value   Description
    "CI"    CONTAINER INHERIT: Child objects that are containers, such as directories, inherit the ACE as an explicit ACE.
    "OI"    OBJECT INHERIT: Child objects that are not containers inherit the ACE as an explicit ACE.
    "NP"    NO PROPAGATE: Only immediate children inherit this ACE.
    "IO"    INHERITANCE ONLY: The ACE doesn't apply to this object, but may affect children via inheritance.
    "ID"    ACE IS INHERITED
    "SA"    SUCCESSFUL ACCESS AUDIT
    "FA"    FAILED ACCESS AUDIT

    Permissions

    The Permissions are a list of the incremental permissions given (or denied/audited) to the trustee. These correspond to the permissions discussed earlier and are simply appended together. However, the incremental permissions are not the only permissions available. The table below lists all the permissions.

    Value   Description

    Generic access rights
    "GA"    GENERIC ALL
    "GR"    GENERIC READ
    "GW"    GENERIC WRITE
    "GX"    GENERIC EXECUTE

    Directory service access rights
    "RC"    Read Permissions
    "SD"    Delete
    "WD"    Modify Permissions
    "WO"    Modify Owner
    "RP"    Read All Properties
    "WP"    Write All Properties
    "CC"    Create All Child Objects
    "DC"    Delete All Child Objects
    "LC"    List Contents
    "SW"    All Validated Writes
    "LO"    List Object
    "DT"    Delete Subtree
    "CR"    All Extended Rights

    File access rights
    "FA"    FILE ALL ACCESS
    "FR"    FILE GENERIC READ
    "FW"    FILE GENERIC WRITE
    "FX"    FILE GENERIC EXECUTE

    Registry key access rights
    "KA"    KEY ALL ACCESS
    "KR"    KEY READ
    "KW"    KEY WRITE
    "KX"    KEY EXECUTE

    Object Type and Inherited Object Type

    The ObjectType is a GUID that identifies a type of child object, a property or property set, or an extended right. If present, it limits the ACE to the object the GUID represents. For a more verbose explanation of this, please visit the following link:

    http://www.microsoft.com/technet/prodtechnol/windows2000serv/reskit/distrib/dsce_ctl_iunu.mspx?mfr=true

    The Inherited Object Type contains a GUID that identifies the type of child object that can inherit the ACE. Inheritance is also controlled by the ACE's inheritance flags and by any protection against inheritance placed on the child object in its Security Descriptor Control Flags.

    Trustee

    The Trustee is the SID of the user or group being given access (or denied or audited). Instead of a SID, there are several commonly used acronyms for well-known SIDs. These are listed in the table below:

    Value   Description
    "AO"    Account operators
    "RU"    Alias to allow previous Windows 2000
    "AN"    Anonymous logon
    "AU"    Authenticated users
    "BA"    Built-in administrators
    "BG"    Built-in guests
    "BO"    Backup operators
    "BU"    Built-in users
    "CA"    Certificate server administrators
    "CG"    Creator group
    "CO"    Creator owner
    "DA"    Domain administrators
    "DC"    Domain computers
    "DD"    Domain controllers
    "DG"    Domain guests
    "DU"    Domain users
    "EA"    Enterprise administrators
    "ED"    Enterprise domain controllers
    "WD"    Everyone
    "PA"    Group Policy administrators
    "IU"    Interactively logged-on user
    "LA"    Local administrator
    "LG"    Local guest
    "LS"    Local service account
    "SY"    Local system
    "NU"    Network logon user
    "NO"    Network configuration operators
    "NS"    Network service account
    "PO"    Printer operators
    "PS"    Personal self
    "PU"    Power users
    "RS"    RAS servers group
    "RD"    Terminal server users
    "RE"    Replicator
    "RC"    Restricted code
    "SA"    Schema administrators
    "SO"    Server operators
    "SU"    Service logon user

    I hope you have found this entertaining and informative!

    - Jim Tierney


  • Bulk exporting and importing WMI filters for Group Policy

    Mike again. Here is an updated version of the blog post which was originally published on the Group Policy blog. Check it out!

    Did you know you can import and export WMI filters using GPMC? However, your export is limited to one filter at a time – each filter goes to a single .mof file. You then take the exported .mof files to a different domain and use GPMC to import each file. This is great when you only have one or two WMI filters, but what if you have 15 or 20? No worries! You can do this in bulk using the LDIFDE utility. (This is a long but detailed explanation.)

    First, you need to find the WMI Filter you want to export (and eventually import). GPMC writes WMI filters in the Domain partition at:

    CN=SOM,CN=WMIPolicy,CN=System,DC=contoso,DC=com.

    The LDAP filter you use to return all WMI filters is (objectclass=msWMI-Som). You can narrow the number of returned items if you know the name of the WMI filter by using
    (&(objectclass=msWMI-Som)(msWMI-Name=filtername)). You can learn more about LDAP search filter syntax from MSDN (http://msdn2.microsoft.com/en-us/library/aa746475.aspx). The following sample command line gives you an idea of how to export the WMI filters:

    LDIFDE -f output.txt -d "dc=contoso,dc=com" -r "(objectclass=msWMI-Som)" -p subtree

    In the example above, -f designates the name of the output file that stores the exported WMI filter objects. Next, -d designates the base distinguished name; that is, where the search for objects starts. In this example, it starts at the root of the domain. The -r switch is an inclusive LDAP search filter; in this example we only want objects of the class msWMI-Som returned by the query. Lastly, -p designates the type of search we want to use. A subtree search means the search begins at the designated base distinguished name and searches the entire depth of the tree for objects matching the designated filter, similar to using dir /s on a directory when searching for a file.

    Your options may vary. If you have problems exporting the items, add -j . (one dash, the letter J, a space, and one period) to the command line to create a log file in the current folder. A successful output.txt file looks similar to the following:

    dn: CN={1154EFFC-0090-4F23-8865-C8D555BF696E},CN=SOM,CN=WMIPolicy,CN=System,DC=contoso,DC=com
    changetype: add
    objectClass: top
    objectClass: msWMI-Som
    cn: {1154EFFC-0090-4F23-8865-C8D555BF696E}
    distinguishedName:
    CN={1154EFFC-0090-4F23-8865-C8D555BF696E},CN=SOM,CN=WMIPolicy,CN=System,DC=con
    toso,DC=com
    instanceType: 4
    whenCreated: 20070808151246.0Z
    whenChanged: 20070808151246.0Z
    uSNCreated: 40979
    uSNChanged: 40979
    showInAdvancedViewOnly: TRUE
    name: {1154EFFC-0090-4F23-8865-C8D555BF696E}
    objectGUID:: EPDEbOIaGEWyX3Z/b+eiKw==
    objectCategory: CN=ms-WMI-Som,CN=Schema,CN=Configuration,DC=contoso,DC=com
    msWMI-Author: Administrator@CONTOSO.COM
    msWMI-ChangeDate: 20070618142622.740000-000
    msWMI-CreationDate: 20070618142257.735000-000
    msWMI-ID: {1154EFFC-0090-4F23-8865-C8D555BF696E}
    msWMI-Name: Imported WMIFilter2
    msWMI-Parm1: This is the description for the filter
    msWMI-Parm2:
    1;3;10;45;WQL;root\CIMv2;Select * from win32_timezone where bias =-300;

    Once you successfully export the WMI filters, you then need to prepare the output file for import.

    clip_image002

    To prepare the output file for importing:

    1. First, save the file as another file name.

    2. Then, you need to download the GUIDGEN utility and generate a new GUID (this is not so important when importing the WMI filter into a different domain). As a reference, this is a GUID: {DF380E6C-DB23-44ed-9BF6-435559503347}.

      • Note: You MUST change the GUIDs when importing into the same domain, or the import will NOT work.

    3. Change the GUID (including the opening and closing curly braces) in the dn, cn, distinguishedName, name, and msWMI-ID attributes (use the same GUID in each of these attributes).

    4. If importing into a different domain, change the LDAP path to reflect the new domain in the dn, distinguishedName, and objectCategory attributes. Only change the domain portion of the LDAP path.

    5. Next, you need to comment out the whenCreated, whenChanged, uSNCreated, uSNChanged, objectGUID, msWMI-ChangeDate, and msWMI-CreationDate attributes. Do this by inserting the # character and a space at the beginning of the line for each of the listed attributes.

    6. Optionally, you can change the text displayed in msWMI-Name, msWMI-Author, and msWMI-Parm1 attributes.

      • msWMI-Name is the display name of the WMI filter shown in GPMC.
      • msWMI-Author is the UPN format for the person creating the WMI filter.
      • msWMI-Parm1 is the description text shown for the WMI filter in GPMC.

    The final file should look similar to the following.

    dn: CN={4464D2C2-9063-4953-AE6F-A0D231EBF3CD},CN=SOM,CN=WMIPolicy,CN=System,DC=fabrikam,DC=com
    changetype: add
    objectClass: top
    objectClass: msWMI-Som
    cn: {4464D2C2-9063-4953-AE6F-A0D231EBF3CD}
    distinguishedName:
    CN={4464D2C2-9063-4953-AE6F-A0D231EBF3CD},CN=SOM,CN=WMIPolicy,CN=System,DC=fabrikam,DC=com
    instanceType: 4
    # whenCreated: 20070618142257.0Z
    # whenChanged: 20070618142622.0Z
    # uSNCreated: 26483
    # uSNChanged: 26485
    showInAdvancedViewOnly: TRUE
    name: {4464D2C2-9063-4953-AE6F-A0D231EBF3CD}
    # objectGUID:: 7sA6lK0PVE2fGNOSDTS5Kw==
    objectCategory: CN=ms-WMI-Som,CN=Schema,CN=Configuration,DC=fabrikam,DC=com
    msWMI-Author: Administrator@fabrikam.COM
    # msWMI-ChangeDate: 20070618142622.740000-000
    # msWMI-CreationDate: 20070618142257.735000-000
    msWMI-ID: {4464D2C2-9063-4953-AE6F-A0D231EBF3CD}
    msWMI-Name: Imported WMIFilter2
    msWMI-Parm1: This is the description for the filter
    msWMI-Parm2:
    1;3;10;45;WQL;root\CIMv2;Select * from win32_timezone where bias =-300;

    You’re almost ready to import the WMI filters. However, importing or adding a WMI filter object into AD is a system-only operation, so you need to enable system-only changes on a domain controller for a successful LDIFDE import. To do this, on the domain controller you are using for the import, open the registry editor and create the following registry value:

    Key: HKLM\System\CurrentControlSet\Services\NTDS\Parameters
    Value Name: Allow System Only Change
    Value Type: REG_DWORD
    Value Data: 1
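
    If you’d rather set this from a command prompt than the registry editor, something like the following should do it (run it on the DC you’re using for the import):

    REM Allow system-only changes so LDIFDE can add the WMI filter objects
    reg add "HKLM\System\CurrentControlSet\Services\NTDS\Parameters" /v "Allow System Only Change" /t REG_DWORD /d 1 /f

    REM When the import is finished, remove the value again
    reg delete "HKLM\System\CurrentControlSet\Services\NTDS\Parameters" /v "Allow System Only Change" /f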

    Next, you’ll need to reboot the domain controller to activate the new setting. Once the domain controller is rebooted, you can use LDIFDE to import the file into AD. Use the following command:

    LDIFDE -i -f input.txt

    If you have problems, add -j . (one dash, the letter J, a space, and one period) to the command line to create a log file in the local folder. Once the import is complete, you should delete the Allow System Only Change registry value and reboot the domain controller to deactivate the setting. A successful import looks similar to the following.

    Connecting to "hq-con-dc-01.fabrikam.com"
    Logging in as current user using SSPI
    Importing directory from file "import-wmi.ldf"

    Loading entries
    1: CN={4464D2C2-9063-4953-AE6F-A0D231EBF3CD},CN=SOM,CN=WMIPolicy,CN=System,DC=fabrikam,DC=com
    Entry DN: CN={4464D2C2-9063-4953-AE6F-A0D231EBF3CD},CN=SOM,CN=WMIPolicy,CN=System,DC=fabrikam,DC=com
    changetype: add
    Attribute 0) objectClass:top msWMI-Som
    Attribute 1) cn:{4464D2C2-9063-4953-AE6F-A0D231EBF3CD}
    Attribute 2) distinguishedName:CN={4464D2C2-9063-4953-AE6F-A0D231EBF3CD},CN=SOM,CN=WMIPolicy,CN=System,DC=fabrikam,DC=com
    Attribute 3) instanceType:4
    Attribute 4) showInAdvancedViewOnly:TRUE
    Attribute 5) name:{4464D2C2-9063-4953-AE6F-A0D231EBF3CD}
    Attribute 6) objectCategory:CN=ms-WMI-Som,CN=Schema,CN=Configuration,DC=fabrikam,DC=com
    Attribute 7) msWMI-Author:Administrator@FABRIKAM.COM
    Attribute 8) msWMI-ID:{4464D2C2-9063-4953-AE6F-A0D231EBF3CD}
    Attribute 9) msWMI-Name:Imported WMIFilter2
    Attribute 10) msWMI-Parm1:This is the description for the filter
    Attribute 11) msWMI-Parm2:1;3;10;45;WQL;root\CIMv2;Select * from win32_timezone where bias =-300;

    Entry modified successfully.

    1 entry modified successfully.

    The command has completed successfully

    And there you go: you’ve successfully exported and imported WMI filters.

    -Mike Stephens