Blog - Title

August, 2011

  • Cluster and Stale Computer Accounts

    Hi, Mike here again. Today, I want to write about a common administrative task that can lead to disaster: removing stale computer accounts from Active Directory.

    Removing stale computer accounts is simply good hygiene-- it’s the brushing and flossing of Active Directory. Like tartar, computer accounts have the tendency to build up until they become a problem (difficult to identify and remove, and can lead to lengthy backup times).

    Oops… my bad

    Many environments separate administrative roles. The Active Directory administrator is not the Cluster Administrator. Each role holder performs their duties in a somewhat isolated manner-- the Cluster admins do their thing and the AD admins do theirs. The AD admin cares about removing stale computer accounts. The cluster admin does not… until the AD admin accidentally deletes a computer account associated with a functioning Failover Cluster because it looks like a stale account.

Unexpected deletion of a Cluster Name Object (CNO) or Virtual Computer Object (VCO) is one of the top issues worked by our engineers that support Clustering and High-Availability. Everyone does their job and boom-- clustered servers stop working because CNOs or VCOs are missing. What to do?

    What's wrong here

I'll paraphrase an article posted on the Clustering and High-Availability TechNet blog that solves this scenario. Typically, domain admins key on two different attributes to determine if a computer account is stale: pwdLastSet and lastLogonTimeStamp. Domains that are not configured to a Windows Server 2003 Domain Functional Level use the pwdLastSet attribute. However, domains configured to a Windows Server 2003 Domain Functional Level or later should use the lastLogonTimeStamp attribute. What you may not know is that a Failover Cluster (CNO and VCO) does not update the lastLogonTimeStamp the same way as a real computer.

Cluster updates the lastLogonTimeStamp when it brings a clustered network name resource online. Once online, it caches the authentication token. Therefore, a clustered network name resource working in production for months will never update the lastLogonTimeStamp. This appears as a stale computer account to the AD administrator. Being a good citizen, the AD administrator deletes the stale computer account that has not logged on in months. Oops.

    The Solution

There are a few things that you can do to avoid this situation.

    • Use the servicePrincipalName attribute in addition to the lastLogonTimeStamp attribute when determining stale computer accounts. If any variation of MSClusterVirtualServer appears in this attribute, then leave the computer account alone and consult with the cluster administrator.
    • Encourage the Cluster administrator to use -CleanupAD to delete the computer accounts they are not using after they destroy a cluster.
    • If you are using Windows Server 2008 R2, then consider implementing the Active Directory Recycle Bin. The concept is identical to the recycle bin for the file system, but for AD objects. The following ASKDS blogs can help you evaluate if AD Recycle Bin is a good option for your environment.
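The first bullet is easy to script. Here is a minimal Python sketch of that check (the attribute names are real AD attributes, but the dictionary-shaped input and the 90-day threshold are assumptions for illustration; feed it records pulled from your own directory query):

```python
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(filetime):
    """lastLogonTimeStamp and pwdLastSet store 100-nanosecond intervals
    since January 1, 1601 (UTC)."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)

def is_cluster_object(spns):
    """CNOs and VCOs carry an MSClusterVirtualServer SPN; leave those alone."""
    return any("msclustervirtualserver" in spn.lower() for spn in spns)

def is_stale(computer, now, max_age_days=90):
    """computer: dict with a raw lastLogonTimeStamp integer and a list of
    servicePrincipalName values, as exported from your directory."""
    if is_cluster_object(computer.get("servicePrincipalName", [])):
        return False  # consult the cluster administrator instead of deleting
    last_logon = filetime_to_datetime(computer["lastLogonTimeStamp"])
    return (now - last_logon) > timedelta(days=max_age_days)
```

Anything flagged as a cluster object goes to the cluster admin for review, not to the delete pile.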

    Mike "Four out of Five AD admins recommend ASKDS" Stephens

  • Kerberos and Load Balancing

Hi guys, Joji Oshima here again. Today I want to talk about configuring Kerberos authentication to work in a load-balanced environment. This is a more advanced topic that requires a basic understanding of how Kerberos works. If you want an overview of Kerberos, I would suggest Rob’s excellent post, Kerberos for the Busy Admin. In this post, I will be using a load balanced IIS web farm as the example, but the principle applies to other applications.

    The Basics:

    As you may know, Kerberos relies on Service Principal Names (SPNs). SPNs are associated with objects in Active Directory and registered under the servicePrincipalName attribute.

If you are using Windows Server 2008 or higher, you can view this attribute using Active Directory Users and Computers (ADUC) with Advanced Features enabled and going to the Attribute Editor tab. (Click the View menu and then select Advanced Features.)


    You can also view attributes using ADSI Edit (adsiedit.msc) or the Setspn command line tool.

When an application makes a request for a Kerberos ticket, it makes that request for a specific SPN (like http/server01). The Key Distribution Center (KDC) will search Active Directory for the object that has that principal name registered to it, and encrypt the ticket with that object’s password. The object that is running the service has the same password, so when the ticket arrives, it can decrypt it.
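That shared-secret idea is worth seeing in miniature. This is NOT real Kerberos (no tickets, timestamps, or AES here), just a toy Python sketch of the principle: only a holder of the same password-derived key can read what the KDC sealed.

```python
import hashlib

def derive_key(password):
    # Stand-in for real key derivation; Kerberos uses proper KDFs and AES.
    return hashlib.sha256(password.encode()).digest()

def xor_bytes(data, key):
    # Toy symmetric cipher for illustration only.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def kdc_issue_ticket(ticket_bytes, service_password):
    # The KDC seals the ticket with the service account's key...
    return xor_bytes(ticket_bytes, derive_key(service_password))

def service_open_ticket(sealed, service_password):
    # ...and only a holder of the same password can unseal it.
    return xor_bytes(sealed, derive_key(service_password))
```

If the service's password differs from the one the KDC used (the load-balancing problem described below), the unseal step produces garbage instead of the ticket.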

    The Problem:

If you have a single IIS server, the service is typically running under Local System. The standard SPNs are registered to the computer account (like host/server01 and its FQDN form*), so when a request for http/server01 comes in, the ticket will be encrypted using the computer account’s password. This configuration works well for a single server environment.

*The host SPN works for many services including http. If there is a specific entry for http, it will use that; otherwise it will fall back and use host.
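To make that fallback concrete, here is a small Python sketch of the lookup order (the registry dictionary and the service list are illustrative stand-ins, not the KDC's real data structures):

```python
HOST_COVERED_SERVICES = {"http", "cifs", "rpcss"}  # illustrative subset

def find_spn_account(spn, registry):
    """Return the account that owns `spn`, honoring the host/ fallback.
    `registry` is a stand-in for the directory: SPN -> account name."""
    if spn in registry:
        return registry[spn]  # an exact entry always wins
    service, _, host = spn.partition("/")
    if service.lower() in HOST_COVERED_SERVICES:
        return registry.get("host/" + host)  # fall back to the host SPN
    return None
```

With only host/server01 registered, an http/server01 request still resolves to the computer account; add a specific http entry and it wins instead.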

In a load-balanced environment, users will access the service using a unified name instead of the individual server names. In this scenario, there are two computer accounts, so where do you register the Service Principal Name? One idea would be to register the principal name to both computer accounts. The problem with this idea is that Service Principal Names must be unique. When a request comes in for the unified name’s http SPN, the KDC would not know which object’s password to encrypt the ticket with, so it will return an error. You will also see Event ID 11 populating your event logs if you have any duplicate SPNs in your directory.

    The Solution:

    Instead of running the service under Local System, have each server run the application using a specific service account. In IIS, you can accomplish this by having the application pool run under the service account. Here is how you would set it in IIS7. You could also follow the instructions in this TechNet article.

    1. Open Internet Information Services (IIS Manager) and browse to the Application Pools page


    2. Right click the application pool that is running your site, and choose Advanced Settings


    3. Under Process Model, look for Identity and click the button to the right


    4. This will bring up a dialog box where you can choose what credentials to use for the application pool. Choose Custom account and click the set button.


5. Enter the credentials for your service account and click OK

    6. If you are using IIS7 and have Kernel Mode Authentication set, you will need to do one additional step. Open the ApplicationHost.config file and enable the useAppPoolCredentials setting. IIS7 added the option to authenticate users in Kernel mode to speed up the authentication process. By default, it will use the computer account for authentication requests even if the application pool is set to a service account. By changing this setting, you get the benefits of Kernel Mode Authentication, but still authenticate with the service account.

    Sample ApplicationHost.config change:

       <windowsAuthentication enabled="true" useAppPoolCredentials="true" />

After you have the services running under the same service account, register the http SPN for the unified name to that service account. No matter which server the client is routed to, the service will be able to decrypt the ticket using its password. You can register SPNs using the command line tool Setspn.


    Extra Credit:

Suppose you want the ability to use Kerberos authentication when accessing the servers individually as well as through the unified name. Currently, if you request a ticket for http/server01, the KDC will encrypt the ticket using server01’s computer object password. The service is not running under Local System, so it will not be able to decrypt that ticket. However, you can register additional SPNs to the service account. In this scenario, you could register the individual servers’ http SPNs to the service account as well.

    You can also view the attributes currently registered to an account using the Setspn command line tool.
Syntax: Setspn -l accountname


    This will not interfere with the host SPNs registered to the computer account. When an incoming request comes for http/server01, it will check for the exact string first. If it cannot find it, it will look for host/server01.


Remember that a Service Principal Name can only be registered on one account at a time. If you are using Windows Server 2008 or higher, you can search for duplicate SPNs in your environment by using the command: Setspn -f -q http/myapplication*


    You can also use the command line tool LDIFDE to find duplicate SPNs. The command below will output a text file called SPN.txt that contains all objects with a service principal name that starts with http/myapplication. This file will be located in the same directory you run the command in unless you specify a path in the –f switch.

LDIFDE -f spn.txt -r "(serviceprincipalname=http/myapplication*)" -l serviceprincipalname

• The -f switch determines the output file
• The -r switch determines the search criteria, and the * at the end is a wildcard
• The -l switch chooses which attributes will be listed in the output file

    Final Thoughts:

    There are many benefits to using Kerberos authentication but configuring it properly may feel like a daunting task, especially in a more complex environment. I hope this post makes configuring this a bit easier.

    - Joji “adult swim in the app pool” Oshima

  • Friday Mail Sack: Charlotte Edition

Hiya folks, Ned back with a palate-cleansing Mail Sack after this monstrosity. This week we talk about:

    Let’s get swimmy-headed.


    I'm curious to get your feedback on custom AD Schema extensions – those that are created "in house" for a specific need. What is the overall Microsoft stance on the topic? Should we do it, or use AD LDS? We have an app we’re designing and want to do this the “right” way.


    It’s always “safer” to use AD/LDS - as it’s easier to throw that away and start over - but as long as you follow our best practices, we don’t get too bent out of shape if you extend your schema. That’s what it’s there for. The critical piece is that you get your base OID from ISO, or barring that, generate one using our safe script. You can also use proxy accounts to tie AD and ADLDS together, the way that we always intended but which never really caught on.

    Some of the best practices:

    The tricky part of extending your AD schema is no matter how carefully you do it, some millet-for-brains vendor may not. Then you buy their product and cannot use it, due to duplicate attributes or classes.  It’s a lot simpler to fix an AD/LDS schema than your AD forest schema.

    Hold payment on the vendor’s check until you are sure it works in your lab.


When we deploy Domain Controllers in our environment (whether it's an RWDC or RODC, they all run DNS) we always remove our Root Hints. We do this on our RODCs after the DCPROMO and before the reboot. Is there any way to remove the Root Hints after the RODC becomes read-only?


    There’s nothing special as part of the RODC promotion process itself that would let you do this, as DNS configuration is quite restricted when you let the DC process handle it. You have a couple workarounds:

    1. Configure the root hints via its registry value “isslave”, perhaps run as a batch at the end of promotion. See KB2001154 for more info on configuring this. And no snickering at this Win2008 bug in the comments! Ok, maybe a little.

    2. Don’t have the RODC install and configure DNS (in an unattend install, this is “SkipAutoConfigDNS”). Script the DNS installation using DNSCMD to do everything. This seems like overkill, but answers the greater question of controlling RODC+DNS configuration.

    It’s critically important to note:

Note Microsoft does not support the removal of all root hints from a Microsoft DNS server. A Microsoft DNS server must have at least one root hint. However, you can replace the existing root hints with new root hints. When you replace root hints, the change is permanent, and the old root hints do not reappear. If the DNS server is forwarding, click to select the Do not use recursion for this domain check box on the Forwarders tab in DNS Manager to make sure that the root hints will not be used.


    We are performing a USMT migration. On the source machine we generate the config.xml file by using the genconfig switch and we specify the migapp.xml file in the command line.

    Scanstate.exe /genconfig:config.xml /i:migapp.xml

    This way we can decide if we want to block certain apps from migrating without changing the migapp.xml.

We have iTunes listed in the XML file, and the application is installed on the machine, but it never gets listed in the config.xml file. I’m missing other applications that TechNet says are supported.


    It’s because you have newer, “unsupported” versions of the apps installed – note how the MIGAPP.XML starts each section with a <detect>. For example, here are WinZip and Adobe Reader:



    If Adobe Reader 9.X and WinZip 8.*-10.* are installed, they will manifest and migrate:


    But if I have the latest Adobe Reader (10.*) and WinZip (15.*) installed, they are not detected and therefore, will not manifest or migrate:


    To be more supported by the vendor is to be less supported by USMT as time goes by.

    You have several options:

1. Convince the vendor of the USMT-unsupported apps to give you updated migration XML. They submitted the previous version requirements for their own app during the USMT development process – that’s why so few apps are listed and they are so esoteric: WinZip but not WinRAR?  RealPlayer but not Winamp? Ad-Aware but not a hundred other anti-spyware apps? Microsoft didn’t create the list.

2. Create a copy of the migapp.xml, add detection elements for those newer versions and update any paths necessary, and validate that they seem to migrate the right settings. Then migrate using that updated XML instead of the shipping XML and cross your fingers, because you are hoping you got it all right and your vendor is going to support it (Microsoft does not care – again, these are not our settings).

    3. Migrate older versions of the apps, then update them after the fact (included only for completeness’ sake – this is extremely gross)

You’ll have this issue even if you don’t generate a config.xml file, naturally.
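Option 2 above boils down to widening the version ranges the detection matches. The idea, reduced to a Python sketch with hypothetical version ranges standing in for the XML conditions:

```python
# Sketch of the version-range matching that migapp.xml's detect sections
# effectively perform: an app manifests only if its installed version falls
# in a range the XML knows about. App names and ranges here are hypothetical.
SUPPORTED = {
    "WinZip": [(8, 10)],        # 8.* through 10.*, per the shipping XML
    "Adobe Reader": [(9, 9)],   # 9.* only
}

def will_migrate(app, installed_major, supported=SUPPORTED):
    """True if the installed major version falls inside a known range."""
    return any(lo <= installed_major <= hi for lo, hi in supported.get(app, []))

# "Fixing" it (option 2) is just widening the ranges in your copy of the XML:
PATCHED = {"WinZip": [(8, 15)], "Adobe Reader": [(9, 10)]}
```

With the stock ranges, WinZip 15 silently fails to manifest; with the patched ranges it does, which is exactly what adding detection elements to your copy of migapp.xml accomplishes.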


    I am looking to replicate a folder under the “Windows” folder via DFSR. I found this article

"When replicating a volume that contains the Windows system folder, DFS Replication recognizes the %WINDIR% folder and does not replicate it."

    I was wondering if there is any workaround for this, so we can replicate something under the c:\Windows folder.


    There is no way around this – it is by design and very intentional. Replicating something under %windir% makes me think you want to synchronize things like drivers between servers, which is a no-no. If you try, you get this DFSR event:

    Event ID=6410
The DFS Replication service failed to initialize replicated folder %2 because
the service detected that one of its working folders overlaps a Windows system
folder. This is an unsupported configuration.

    Additional Information:
    Overlapped Folder: %3
    Replicated Folder: %4
    Replicated Folder Name: %5
    Replicated Folder ID: %1
    Replication Group Name: %6
    Replication Group ID: %7
    Member ID: %8
    System Folder:%9

    You cannot use DFSR to replicate %systemroot% folders, except for the special case of SYSVOL on Win2008+ DCs.

    And while we’re on the subject: while this does not also check %programfiles%, %ProgramFiles(x86)%, or the hidden %programdata%, replicating those folders is just as likely to cause massive issues, to possibly include an unbootable server if you are especially unlucky. Move your data elsewhere.
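If you script pre-checks before creating replicated folders, the overlap test DFSR performs for %windir% (and the one you should do yourself for the folders above) looks roughly like this Python sketch. The paths are hard-coded for illustration; on a real system you would resolve them from the environment:

```python
from pathlib import PureWindowsPath

# Illustrative list: %windir% is what DFSR actually checks; the others are
# the folders the post warns about. Resolve these from os.environ in practice.
PROTECTED = [
    r"C:\Windows",
    r"C:\Program Files",
    r"C:\Program Files (x86)",
    r"C:\ProgramData",
]

def overlaps_system_folder(replicated_folder, protected=PROTECTED):
    """True if the proposed replicated folder equals, contains, or sits
    inside any protected system folder."""
    rf = PureWindowsPath(replicated_folder.lower())
    for p in protected:
        pp = PureWindowsPath(p.lower())
        if rf == pp or pp in rf.parents or rf in pp.parents:
            return True
    return False
```

Anything this flags would trip event 6410 (or worse, in the %programfiles% cases, simply break things DFSR never checks for).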


After discussing the "DC DNS A Records and Web Servers" question from Friday's Mail Sack with my co-workers, I have a question about that question. Let’s say someone changed their (same as parent folder) A record to point at the Virtual IP of a hardware load balancer. This VIP would serve as a content switch that looks at traffic like this: Are you destined for port 80 or 443? If YES -> redirect traffic to the web server. If NO -> redirect to 1 of x domain controllers. Is this a viable solution?


    The DNS folks and I discussed this option when I was vetting the previous post, and we ultimately decided that it would need some kind of third party device that Microsoft doesn’t make – so we could not speak to its viability, as we had no visibility. In addition, there are legitimate reasons to connect to a DC over web ports – the AD Management Gateway/AD Web Service uses HTTP/HTTPS traffic in order to allow you to use AD PowerShell, for example. So drawing the line would be tricky.

    Now I answered on the intornotz so I guess the cat’s out of the bag.


    What are the correct settings for DFS Namespace to make client failover occur more quickly? I have tried different cache timeout settings but it always seems to take about 30-45 seconds to get access to files again if a DFS target share goes offline.


The issue isn’t DFSN; it’s the Redirector and SMB. Since you already had a connection to that server, the redirector tries to reconnect to it in case there was only a temporary network outage – it’s just a UNC path to a share at that point. The same happens if you point to a single share on a single server, and take that server offline – Windows Explorer doesn’t instantly give you an error that the server is unavailable. A network capture shows bursts of “retry” SMB traffic from that client until it finally gives up and says the server is not coming back. This behavior dates back to NT:

KB 148950: Changing the Windows NT Redirector Time-Out Value

    (This registry value isn’t applicable to later OSes; we decided allowing adjustment caused too many issues)

    The caching doesn’t change in this scenario either – nothing has changed for the actual link targets in the referral. When SMB gets tired of trying, you move to the next entry in the cache. I can set my client cache timeout to 5 seconds and still see the Redirector sending out retry SMB packets for 30-45 seconds.

    The DFSN client connectivity design isn’t for instant failover; it’s for geographical high availability and closest targeting. If you need instant failover, clustering is the way to go. Since the server and connection never goes away due to cluster magic, your users will not see noticeable delays.

    If you want the Mack of all solutions, cluster your DFSN link targets. At the very least, your hardware sales rep will appreciate it; he can now afford that Virage he’s been eyeing…


    If I search for an object against an AD LDS instance that I know is in AD DS, I get: Error 0x20D6 No superior reference has been configured for the directory service. The AD LDS Server is joined to the AD DS forest that the object I'm searching for is in (I have an application that needs to be restricted to looking at AD LDS, but also have AD LDS send off objects to AD DS that AD LDS cannot find). I found this article, but it doesn't provide any examples of how to configure a crossRef or superior reference, so I'm a little lost.


    You have to configure attribute superiorDnsRoot on the configuration partition crossref object. You can use this technique, but use the AD/LDS config partition path.

    That error in AD/LDS instance can also mean:

    • It doesn’t have a matching schema to your AD DS forest. Use the ADSchemaAnalyzer tool to validate this and sync the differences.
    • The DN specified is wrong
    • You are connecting to an ADAM/ADLDS instance on the wrong port (so lame, I know).

    Very generic as you can see. If the above KB doesn’t work, I suggest opening a support case to let us really dig in.

Not work related

    I’ve been teaching the past few weeks and a number of students commented on my rotating Windows 7 wallpaper of mecha. They were all downloaded from the amazing online art site ConceptRobots. They have thousands of these, in many styles. Check out a small sampling:



    Go there only if you have hours to waste

    We just installed a Coke Freestyle machine at work and it’s seriously cool. I’ve never seen a line at the soda fountain before, and ours are free. You should come see for yourself. Those students admiring my wallpaper were 33 new hires right here in Charlotte.


    Like baseball? Add this to your favorites. Not the prettiest site, but if you want amazing details, statistics, and stories, it’s the best.

    And finally - I was sitting at a light last week when I noticed this fella. It really sums up my eleven years of living in North Carolina and that my wife is right - I’m still just a damyankee:


    Ford F-150, even though he lives in an affluent suburb – check


    Aftermarket U-Haul tow-hitch - check


    Vanity plate with Larry the Cable Guy catchphrase – check


    Bumper sticker affirming that this is not the smallest pickup he will ever own – check

    But the icing on the cake:


    He bought the truck from NASCAR racing driver Dale Jarrett. Hells yeah.


    Have a great weekend folks.

    Ned “the ethernet cable guy” Pyle

  • Friday Mail Sack: Beard-Seconds Edition

    Hiya folks, Ned here again. This week we talk:

    Start the word punching!


    We are interested in removing the A records that are created by Domain Controllers in our Windows 2008 R2 environment so that we can have web servers respond to the FQDN without needing WWW, from our users on the internal network. So for example, if our AD forest name is, we want to add the FQDN to a web server, but can't because the DC A records are currently claimed. Is this possible to do without affecting normal client operations?


    Removing the “same as parent folder” A Records from DCs and blocking their further creation using Group Policy is going to cause issues.


    For example, you cannot create domain DFS namespaces, as it will explode trying to use the A records to find the domain namespace, with the oh-so-helpful “RPC server is unavailable” error:


Any applications that use generic A records as part of a domain controller lookup, instead of SRV records, will fail. Any apps that “ping” - not necessarily with ICMP - the domain to locate a DC will also fail. Some of these apps are Microsoft’s: ADMT has that problem, and there are likely others. Mostly though, they will be third party or homegrown apps and therefore unknowable to me. You will need to inventory all your applications in the entire environment and try them all out in a test lab. That sounds like fun!

    This is the problem with using a public facing domain as the root of your AD forest, rather than building a new root under the FQDN. For example, Microsoft’s main AD forest root is

    The options:

    • Migrate your entire forest into a different FQDN than your current web server’s FQDN. Not gonna happen.
    • Migrate all your web servers into a different FQDN. Not gonna happen.
    • Run the web servers and their sites on your DCs. Not gonna happen.
    • Run just enough website on your DCs to redirect all traffic to the WWW addresses. Could maybe happen.
• Do nothing, tell internal users to go to a WWW address rather than the bare domain itself, and leave it all as-is. Most likely to happen.


    Is it possible to use forwarded event log subscriptions with the DC’s Security event logs? It’s working for other logs, so I know I am doing it correctly in general, but the Security event log specifically never forwards from my DCs to my collector computer.


    By default, the Security logs cannot be forwarded due to permissions ACL’ing the log – the service that does the event collection has no rights to it. Being a DC is not relevant, even member servers have slightly more restrictive access to that log. On the servers you want to collect security event logs from (or in your case, on any DC in that domain), run:

    net localgroup "Event Log Readers" "NT Authority\Network Service" /add

Let that replicate to all other DCs in the domain. Then restart those DCs. Now you can configure your subscription-collector… goo. If you set up all the subscriptions and the service correctly, it will work. Here’s my 01 DC forwarding security events to my 05 member server:



    The steps for doing all this event log subscription stuff are here:


    Our Mainframe requires that user passwords be no longer than 8 characters and can only use a few special characters. Can Windows AD password policy enforce this type of limitation also, so our users don’t have to remember two passwords?


No, you must create or purchase a password filter DLL to run on your DCs. We talked about the password rules in AD here previously. Our only limit is the actual maximum password length – 128 characters – and it’s not configurable. We’re mainly concerned with providing more avenues to complexity, not fewer. As always, I recommend two-factor auth when possible; passwords alone are not safe enough, and third party front-end tools that make the user pick words resistant to dictionary and PII attacks just encourage users to write passwords down and tape them to their monitors.

I highly discourage anyone from trying to write their own password filter DLL, by the way, and you should carefully test any third party ones before purchasing. There are a lot of filter vendors, so use that to your advantage at the negotiating table. The DLL has to run in LSASS.EXE on all your DCs. We have worked many a CritSit here where a custom password filter would cause LSASS to crash, which means no more DCs. Not to mention that you need to trust your vendor enough that they are correctly vetting their code, making sure there are no buffer overruns, injection attacks, etc. – you are giving them the highest possible privilege in your entire company. That vendor is effectively running as the DC itself, so they’d better be careful – and they had better not leave little backdoors in their code. You know, like Siemens did with their nuclear control software.
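Just to illustrate what such a filter encodes (and only that; a real password filter is a native DLL exporting PasswordFilter, loaded by LSASS on every DC, and cannot be written in Python), the mainframe rules from the question might look like this sketch, with a hypothetical set of allowed special characters:

```python
# Hypothetical mainframe-approved special characters -- check your own
# mainframe's documentation for the real list.
ALLOWED_SPECIALS = set("@#$")

def mainframe_password_ok(password, max_len=8):
    """True if the password fits the (hypothetical) mainframe rules:
    at most max_len characters, alphanumerics plus a few specials."""
    if len(password) > max_len:
        return False
    return all(ch.isalnum() or ch in ALLOWED_SPECIALS for ch in password)
```

A native filter DLL would implement exactly this kind of check inside its PasswordFilter callback, returning FALSE to reject the change.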

    Some of these vendors also sell GINA or Credential Provider components. Beware any vendor that wants to sell you just those front-end pieces and not a DC password filter DLL. While helpful to the user, those clients are easily bypassed (often by accident, like when going through an Outlook Web Access page) and should not be the only way that passwords are restricted. The DC is the final password arbiter, so that’s where you need to enforce things.

    Just toss the Mainframe, this all sounds like too much work. :-P


    Does Windows Server 2003 auditing differentiate between NTLM and NTLMv2 as the authentication package recorded in the 528/529 events? I already know about event ID 4624 in Advanced auditing.


    No, it will always just say "ntlm". Because that auditing dates back to NT 3 and is exceptionally gross.


    I would like to programmatically detect if a given UNC path contains a DFS root or not – is that possible?


NetShareGetInfo is one way, as it can return the DFS structure SHARE_INFO_1005 if asked. Alternatively, you can call NetDfsGetInfo with \\server_or_domain\share at level 1: if the call succeeds, the path is DFS. Unfortunately, almost any error returned does not necessarily mean that it is or isn’t DFS, just that something is broken. Also, the API is a blocking call and there is no timeout mechanism.


    Is it possible to use Excel Shared workbooks with DFSR on Windows Server 2008 R2?


    Yes, with the caveats that:

1. Users should only be allowed to access a single copy of the file – i.e. all users use one share for that file, not shares on 20 different DFSR servers. The merging functionality will not work across two or more replicated copies of a shared document (i.e. a shared document accessed by two different users from two different servers), because of replication timing. Data loss will result.
2. As long as at least one person has the shared workbook open with changes of their own that they have not saved, those changes will not replicate. When saved by a user, their changes will commit to the “real” XLSX file and will replicate. Excel creates temporary backing files with tildes that will not replicate with the default DFSR filters. This works with the shared workbook’s passive auto-save as well as clicking the save button actively.
    3. I used Excel 2010 shared workbooks – I have no idea if the older versions will be worse, I leave that to you to test.
    4. This is “practical testing” – I don’t have an official support answer here; only that it works in my repro within these constraints and does not appear to explode. If you want an official answer, you need to open a case with the MS Office Excel team – they ultimately decide supportability of their app, not us in Windows.


From my long understanding, when you host your DFS Namespace on Domain Controllers, you must be a member of the Domain Admins group in order to administer it; you can’t delegate the administration to a standard user. It talks about this in KB258992 and elsewhere. But I just tried delegating a user for a namespace that is hosted on DCs, and they were able to create links, delete them, and even delete the namespace root servers and the namespace itself! He’s not a member of any admin groups.


    If I delegate a user, I can modify and delete all aspects of a namespace except share removal. I cannot create new namespaces though, even if the share I want to use already exists and was created for me by an admin. If I try, “service control manager cannot be opened: access is denied” and kaboom:


    This is the case with both DCs and non-DCs - it makes no difference. The nugget here is that the KB only states the limitation once, and too loosely:

    When you delegate to users the ability to create DFS shares, the configuration of the DFS dictates how delegation must occur. When you configure a stand-alone DFS server, the delegation process involves adding the user who is delegated to the local Administrators group on the DFS server. When you configure a domain DFS, the user who is delegated must be added to the local Administrators group on each of the Root DFS server replicas. If the DFS root is on a domain controller, the user must be added to the Domain Admins group; otherwise, the user will receive an "access denied" error message.

    The UI states “Delegate Management Permissions” so it’s a semantic argument over what that means.  The fact that you cannot connect to Service Control Manager without admin permissions is not DFSN’s fault, though. And that’s why you need to be a member of the Administrators or Domain Admins group on DCs to fully manage DFS. If you were to do some unsupported hackery that let some delegated user operate the service control manager, you might as well make them be an admin – they can kill all your DCs now by shutting off all the services!

    Other Things

    Black Hat USA is this week. When they start posting briefings I’ll do my usual analysis.

    A list of amusing units of measurement. Mention of measure usually drives a European to show up complaining that we still don’t use the metric system in the US. Which is incorrect, but regardless, here’s my offer: we’ll fully mandate the metric system in the US when you abolish the hundreds of languages of Europe and use only English, the language of science, information technology, business, seafaring, aviation, entertainment, radio and diplomacy. Then you will be efficient in a way far more meaningful than hectares and grams. Bwaamp!

    Odd, I usually find that swearing makes me feel better. #%@$@&^*!

    The Car Show is better than Top Gear USA, as long as you start with Episode 3 of TCS. Neither is fit to hold proper Top Gear’s sweaty Stig helmet.

    Any playlist made up of songs from the movie soundtracks of Inception, Blade Runner, and The Abyss is hypnotically awesome. I cannot stop listening to these.

    Finally, my favorite new t-shirt from comic-con 2011…


    … and my favorite new t-shirt vendor, thanks to their booth packing two of the best games of all time, Defender and Battlezone. Now I just need Robots and Spaceships.

    Have a great weekend folks.

    Ned “bring back vector graphics” Pyle

  • AskDS is 12,614,400,000,000,000 shakes old

    It’s been four years and 591 posts since AskDS reached critical mass. You’d hope our party would look like this: 


    But it’s more likely to be:


    Without you, we’d be another of those sites that glow red hot, go supernova, then collapse into a white dwarf. We really appreciate your comments, questions, and occasional attaboys. Hopefully we’re good for another year of insightful commentary.
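A quick sanity check on the title, in case anyone is counting: a shake really is 10 nanoseconds, so four 365-day years works out to exactly that number.

```python
# A "shake" is 10 nanoseconds, i.e. one second is 100,000,000 shakes.
SHAKES_PER_SECOND = 10**8

# Four 365-day years, ignoring leap days (the title's math, not celestial mechanics).
seconds_in_four_years = 4 * 365 * 24 * 60 * 60

shakes = seconds_in_four_years * SHAKES_PER_SECOND
print(f"{shakes:,}")  # → 12,614,400,000,000,000
```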

    Thanks readers.

    The AskDS Contributors

  • Improved Group Policy Preference Targeting by Computer Group Membership

    Hello AskDS readers, it's Mike again talking about Group Policy Preference targeting items. I posted an article in June entitled Targeting Group Policy Preferences by Container, not by Group. This post highlighted the common problems many people encounter when targeting preferences items based on a computer's group membership, why the problem occurs, and some workarounds.

    Today, I'd like to introduce a hotfix released by Microsoft that improves targeting preference items by computer group membership. The behavior before the hotfix potentially resulted in slow computer group policy application. The slowness was caused by the way Security Group targeting applies against a computer account. The targeting item makes multiple round trips to a domain controller to determine group memberships (including nested groups). The slowness is more significant when the computer applying the targeting item does not have a local domain controller and must use a domain controller across a WAN link.

    You can download the hotfix for Windows 7 and Windows Server 2008 R2 through Microsoft Knowledge Base article 2561285. This hotfix changes how the Security Group targeting item calculates computer group membership. During policy application, the targeting item requests a copy of the computer's authentication token. This token is mostly identical to the token created during logon, which means it contains a list of security identifiers (SIDs) for every group of which the computer is a member, including nested groups. The targeting item performs the configured comparison against this list of SIDs in the token rather than making multiple LDAP calls to a domain controller. This aligns computer security group targeting with user security group targeting, and should improve targeting performance.
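As a purely illustrative sketch (the SIDs and function here are made up for the example, not a real Windows API), the difference boils down to this: the token already carries a flattened list of group SIDs, nested groups included, so evaluation becomes a local lookup instead of repeated DC round trips.

```python
# Hypothetical illustration of token-based group targeting: the computer's
# authentication token already contains a flattened set of group SIDs
# (nested groups included), so the targeting check is one local set lookup
# instead of multiple LDAP queries. All SIDs below are fictional.
def targeting_applies(token_sids, target_group_sid):
    """One local comparison replaces several LDAP round trips to a DC."""
    return target_group_sid in token_sids

computer_token_sids = {
    "S-1-5-32-545",                   # BUILTIN\Users
    "S-1-5-21-1111-2222-3333-1105",   # a direct group membership (fictional)
    "S-1-5-21-1111-2222-3333-1200",   # a nested group, already flattened in
}

print(targeting_applies(computer_token_sids, "S-1-5-21-1111-2222-3333-1200"))  # True
```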

    Mike "Try, Try, Try Again" Stephens

  • Friday Mail Sack: Unintended Hilarity Edition

    Hiya folks, Ned here again with another week’s questions, comments, and oddities. This time we’re talking:

    Let’s get it.


    When we change security on our group policies using GPMC, we always get this disturbing message:

    “The permissions for this GPO in the SYSVOL folder are inconsistent with those in Active Directory”


    We remove the “Read” and “Apply Group Policy” checkboxes from Authenticated Users by using the Delegation tab in GPMC, then substitute our own specific groups. The policies apply as expected with no errors even when we see this message.


    It’s because you are not completely removing the Authenticated Users group. Authenticated Users doesn’t have only “Read” and “Apply Group Policy”; it also has “List Object”, which is a “special” permission. The technique you’re using leaves Authenticated Users still ACL’ed, but with an invalid ACE of just “List”, and that’s what GPMC is sore about:


    Instead of removing the two checkboxes, just remove Authenticated Users:


    Better yet, don’t use the Delegation tab at all. The Security Filtering section on the main page sets the permissions for read and apply policy, which I presume is what you want. Just remove Authenticated Users and put in your own specific groups. It gives you the desired resultant policy application, without any errors, and with less effort.


    Delegation is designed for controlling who can manipulate policies. It only coincidentally manages who gets policies.


    Is it possible to set up multiple ADMT servers and allow both the ability to migrate passwords? I know that during setup, the PES service on a source DC consumes a key file generated by the ADMT server. I wasn’t sure if this tie allows only that server the ability to perform password migrations.


    You can always have multiple ADMT copies, as long as they point to the same database; that’s where things tie together, not in ADMT itself. You could use multiple databases, but then you have to keep track of what you migrated in each one and it’s a real mess, especially for computer migration, which works in multiple phases.  You’d need multiple PES servers in the source domain and would have to point to the right one from the right ADMT DB instance when migrating users. This is highly discouraged and not a best practice.


    I was looking at Warren’s post on figuring out how much DFSR staging space to set aside. I have millions of files, how long can I expect that PowerShell to run? I want to schedule it to go once a week or so, but not if it runs for hours and incinerates the server.


    It really depends on your hardware. But for a worst case, I used one of my gross physical test “servers” (it’s really workstation-class hardware) and generated many 1KB files plus 64 1MB files to have something to pick:

    • 500,000+64 files took 1 minute, 45 seconds to calculate
    • 1,000,000+64 files took 3 minutes, 30 seconds to calculate

    The CPU and disk hit was negligible, but memory usage climbed significantly. I would do this off hours if that server is starved for RAM.
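If you're curious what that kind of script is doing under the hood, here's a rough, hypothetical Python sketch of the rule of thumb from Warren's post – sum the sizes of the 32 largest files under the replicated folder. Using a bounded heap keeps memory flat even on million-file volumes, unlike approaches that hold every file object in memory while sorting.

```python
import heapq
import os

def staging_estimate(root, n=32):
    """Rough sketch, not Warren's actual script: sum the sizes of the n
    largest files under root (32 matches the common guidance for DFSR
    staging on Win2008/R2; adjust for your version). A min-heap of at
    most n sizes keeps memory usage flat during the walk."""
    largest = []  # min-heap of the n biggest sizes seen so far
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                continue  # file vanished or is inaccessible; skip it
            heapq.heappush(largest, size)
            if len(largest) > n:
                heapq.heappop(largest)  # discard the smallest
    return sum(largest)
```

The single-threaded walk is the time cost; expect runtimes in the same ballpark as the numbers quoted above on similar hardware.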


    Can USMT migrate files in locations whose paths exceed the MAX_PATH rule of 260 characters?




    Both scanstate and loadstate support paths up to ~32,767 characters, with each “component” (file or folder name) in that path limited to 255 characters.
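If you want to pre-flight your data before a migration, a hypothetical checker mirroring those two limits might look like this (the function and its logic are mine for illustration, not part of USMT):

```python
def usmt_path_ok(path, max_total=32767, max_component=255):
    """Hypothetical pre-flight check mirroring the limits above: total
    path up to ~32,767 characters, and each file or folder name in the
    path up to 255 characters. Not a USMT API."""
    if len(path) > max_total:
        return False
    # Normalize separators so UNC and drive-letter paths both split cleanly.
    components = path.replace("\\", "/").split("/")
    return all(len(c) <= max_component for c in components)

print(usmt_path_ok(r"C:\data\reports\2011\quarterly.xlsx"))  # True
print(usmt_path_ok("C:/" + "a" * 300))  # False: one 300-char component
```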


    According to this article, Windows Server 2008 and 2008 R2 DCs use static port 5722 for DFSR. We mainly use Win2008 R2 member servers, so when choosing a port to set DFSR to, should I choose a different port within the range 49152 – 65535? Or would it be OK to set DFSR to 5722 on member servers too, so that all traffic on 5722 will be DFSR regardless of whether it's a DC or a member server involved in the replication?


    Totally OK to use 5722 on members, and it makes your life easier on the firewall config. Make sure you review:


    What are the most common Active Directory-related support cases Microsoft gets? I’m planning some training and want to make sure I am hitting the most powerful topics.


    In no particular order:

    • Slow Logon (i.e. between CTRL+ALT+DEL and a working, responsive desktop)
    • Group policy not applying
    • Kerberos failures (duplicate/missing SPNs and bloated token)
    • Domain upgrade best practices and failure (i.e. ADPREP, first new DC)
    • AD replication failing (USN rollback, lingering objects, tombstone lifetime exceeded)

    The above five have remained the top issues for 12 years now. Within the rest of Directory Services support, ADFS and PKI have seen the most growth in the past year.


    Other Things

    In case you live on the Mariana Islands and only got your first Internet connection today, we’ve started talking about Windows 8. Shiny pictures and movies too.


    Preemptive strike: I cannot talk about Windows 8.

    The power of inspirational infographics, from the brilliant H57 Design:


    The Cubs were robbed.

    It’s time for IO9 2011 fall previews of science fiction and fantasy:

    We released the Windows 7 theme you’ve been wanting, Jonathan!

    Is this the greatest movie ever created? Certainly one of the most insane. It’s safe for work.

    Unless you work in an anthropomorphic cannibalism outreach center.

    And finally, from an internal email thread discussing some new support case assignment procedures:

    From: a manager
    To: all DS support staff at Microsoft
    Subject: case assignment changes

    For cases that are dispatched to the Tier 3 queue and assigned based on an incorrect support topic or no support topic listed. Engineers will do the following:

    1. Set appropriate Support topic

    2. Update the SR Title-with: STFU\[insert new skill here]

    3. Correct support topic for assignment

    4. Dispatch the case back to the queue for re-assignment

    Five minutes later:

    From: a manager
    To: all DS support staff at Microsoft
    Subject: RE: case assignment changes

    Incidentally, the acronym STFU stands for “Support Topic Field Update” :-)


    Have a nice weekend, folks.

    Ned “the F is for Frak” Pyle

  • The Security Log Haystack – Event Forwarding and You

    Hi. This is your guest writer Mark Renoden. I’m a Senior Premier Field Engineer based in Sydney, Australia and I’m going to talk to you about the use of Event Forwarding to collect security events. This is particularly useful when:

    • You have specific events you’re looking for (e.g. account lockout investigations)
    • You have an aggressive audit policy resulting in rapid security event log roll over
    • You have a lot of servers (and therefore logs) to watch

    Historically, you’d use a tool like EventCombMT to skim the security logs across your servers for the events of interest but in the case where security event logs quickly roll over, it might come too late.

    I'll take the account lockout example. Before I dive into the details of Event Forwarding, there’s some preparation you need to do first. These steps are different for Windows Server 2003 and Windows Server 2008/2008 R2.

    Preparing Windows Server 2003 SP2

    I’ll show you how to prepare your Windows Server 2003 machines so you’re able to collect security events from them.

    1. Make sure you have the Windows Firewall/Internet Connection Sharing (ICS) service started and configured to start automatically.


    This doesn’t mean you need the firewall configured – only that the service is running, which is required for the Windows Event Collector service. For example, your Windows Firewall/Internet Connection Sharing (ICS) service can be running while your firewall is off.


    2. Download and install the Windows Remote Management package from

    3. Grant the Network Service account READ access to the security event log by appending (A;;0x1;;;NS) to the following registry value:

    Key: HKLM\SYSTEM\CurrentControlSet\Services\EventLog\Security

    Value: CustomSD

    For example, the default security descriptor with READ for the Network Service appended is:


    The CustomSD registry value accepts a security descriptor using the Security Descriptor Definition Language (SDDL). You can read more about SDDL here:

    You can deploy this step on a larger scale using Group Policy as detailed in:

    How to set event log security locally or by using Group Policy in Windows Server 2003

    For Windows Server 2008 or later, you can also use Group Policy Preferences to deploy registry settings

    Information about new Group Policy preferences in Windows Server 2008
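If you’re scripting this across many servers, the string manipulation in step 3 is trivial; here’s a hedged Python sketch (the sample SDDL string is illustrative only – read the real CustomSD value from the registry before appending anything):

```python
# Sketch of step 3's append: tack a READ ACE for Network Service onto an
# existing CustomSD SDDL string. The sample descriptor passed in below is
# a made-up placeholder, not the real Windows Server 2003 default.
NS_READ_ACE = "(A;;0x1;;;NS)"  # A = allow, 0x1 = read, NS = Network Service

def grant_network_service_read(custom_sd):
    """Append the ACE unless it is already present, so re-runs are safe."""
    if NS_READ_ACE in custom_sd:
        return custom_sd
    return custom_sd + NS_READ_ACE

sd = grant_network_service_read("O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x7;;;BA)")
print(sd.endswith(NS_READ_ACE))  # True
```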

    Preparing Windows Server 2008 and Windows Server 2008 R2

    Just like Windows Server 2003, you have to prepare your Windows Server 2008/2008 R2 machines for collection of security events. To do this, simply add the Network Service account to the Built-in Event Log Readers group.


    If, instead, you’d like to be more specific and restrict Network Service account READ access to just the security event log, you can modify the security event log’s security descriptor as follows.

    1. Open up a command prompt and run:

    wevtutil gl security

    This command tells you the current security descriptor for the security event log – specifically in the channelAccess value. The default value is:


    Again, you want to append read access for the Network Service. In my example, your new security descriptor will be:


    2. At the same command prompt, run:

     wevtutil sl security /ca:O:BAG:SYD:(A;;0xf0005;;;SY)(A;;0x5;;;BA)(A;;0x1;;;S-1-5-32-573)(A;;0x1;;;S-1-5-20)

    Note: This is all one command on the same line.


    Configure the Member Server to Collect Events

    Now that your servers (in my example, the Domain Controllers) are configured, you need to configure the member server as the collection point.

    1. On the member server that will be collecting the events, open a command prompt and run:

    winrm qc

    wecutil qc

    Answer YES to any prompts you see.

    The first command (winrm qc) configures the member server to accept WS-Management requests from other machines while the second command (wecutil qc) configures the Windows Event Collector service.

    2. At the same command prompt, execute the following command and record the port:

     winrm enumerate winrm/config/listener
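If you’re automating the collector setup, you could scrape the port out of the captured command output. Here’s a hypothetical Python helper (the sample output below is abbreviated and illustrative; real winrm output contains more fields):

```python
# Hypothetical helper: pull the Port value out of captured text from
# "winrm enumerate winrm/config/listener". The sample below is an
# abbreviated, illustrative stand-in for real command output.
def find_listener_port(winrm_output):
    for line in winrm_output.splitlines():
        line = line.strip()
        if line.startswith("Port"):
            return int(line.split("=")[1].strip())
    return None  # no Port line found

sample = """Listener
    Address = *
    Transport = HTTP
    Port = 5985
    Enabled = true"""

print(find_listener_port(sample))  # → 5985
```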


    3. Open the Event Viewer

    4. In the left-hand pane, click Subscriptions, then right-click Subscriptions and then left-click Create Subscription.

    5. Specify a subscription name and then select Source computer initiated.


    6. Click Select Computer Groups…

    7. Click Add Domain Computers… and specify Domain Controllers (or a security group that includes the servers you’re interested in).


    8. Click OK and OK.

    9. Back on the Subscription Properties screen, click Select Events… and specify the events you wish to capture.

    In my example, I’m looking for logon failures leading to account lockouts. These are logged as event 675 on Windows Server 2003 and event 4771 on Windows Server 2008 / 2008 R2.
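For reference, you can also enter the filter on the XML tab of the Select Events dialog; a query covering both event IDs at once would look something like this (a sketch – adjust the IDs to whatever you’re hunting):

```xml
<QueryList>
  <Query Id="0">
    <!-- 675 = Kerberos pre-auth failure (2003); 4771 = the 2008/R2 equivalent -->
    <Select Path="Security">*[System[(EventID=675 or EventID=4771)]]</Select>
  </Query>
</QueryList>
```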


    10. Click OK.

    11. Back on the Subscription Properties screen, click Advanced… and choose Minimize Latency.


    12. Click OK and then OK to close the Subscription Properties screen.

    13. Open a command prompt, run:

    wecutil ss <subscription name> /cm:Custom /dmi:1


    Note: This step is only necessary if event collection is time critical.

    Policy for Event Forwarding

    Having prepared the servers for collection of security events, you now require a Group Policy Object applied to them. This GPO will specify the member server (running Windows Server 2008 or later) where events are collected.

    You must create and edit the GPO from a Windows Vista, Windows Server 2008, Windows 7 or Windows Server 2008 R2 system. These are the only operating systems that provide policy settings for Windows Remote Management and Event Forwarding.

    In my example, I want security events collected from my Domain Controllers. My member server is running Windows Server 2008 R2.

    1. Open the Group Policy Management Console (GPMC), create a new GPO and link it to the Domain Controllers OU.


    2. Right-click the new GPO and open it for editing.

    3. In the GPO Editor, navigate to Computer Configuration | Policies | Administrative Templates | Windows Components | Windows Remote Management (WinRM) | WinRM Service

    4. In the right-hand pane, open Allow automatic configuration of listeners.

    5. Set the policy to Enabled and set the IPv4 and IPv6 filters to *.


    6. Click OK.

    7. In the GPO Editor, navigate to Computer Configuration | Policies | Administrative Templates | Windows Components | Event Forwarding

    8. In the right-hand pane, open Configure the server address, refresh interval, and issuer certificate authority of a target Subscription Manager.

    9. Set the policy to Enabled and click Show….

    10. Add the value Server=<member_server>:<port> where <port> is the port recorded earlier.


    11. Click OK and OK.

    12. Close the GPO editor.

    13. Restart the Windows Remote Management (WS-Management) service on the Domain Controllers.

    14. Wait. You need to be patient. Group Policy has to apply, the Windows Remote Management (WS-Management) service on the Domain Controllers has to pick up those policy settings and the Windows Event Collector service on the member server has to start talking to the DCs.

    Once all of that settles down, you’ll see the events in the Forwarded Events log on the member server.



    Now you’re fully prepared to nail down those troublesome problems in environments with high churn security logs!

    - Mark “If it’s there I’ll find it” Renoden