Blog - Title

June, 2011

  • Fun with the AD Administrative Center

    Hi folks, Ned here again. We introduced the AD Administrative Center in Windows Server 2008 R2 to much fanfare. Wait, I mean we told no one and for good measure, we left the old AD Users and Computers tool in-place. Then we continued referencing it in all our documentation.

    And people say we're a marketing company.

    I've talked previously about using ADAC as a replacement for acctinfo.dll. Today I run through some of the hidden coolness that ADAC brings to the table, as well as techniques that make using it easier. If you've never used this utility, make sure you review the requirements, and if you don't have any Windows Server 2008 R2 DCs, install the AD Management Gateway and its updates on at least one of your older DCs in each domain. ADAC is included in RSAT.

    I am going to demo as much as possible, so I hope you have some bandwidth this month, oppressed serfs Canucks and Aussies. Since this is me, I'll also show you how to work around some ADAC limitations - this isn’t a sales pitch. To make things interesting, I am using one of my more complex forests where I test the ADRAP tools.


    Fire up DSAC.EXE and follow along.

    ADAC isn't ADUC

    The first lesson is "do not fight the interface". Don’t try to make ADAC into AD Users and Computers simply because that's what you’re used to. ADUC makes you click everywhere, expanding trees of data. It also has short-term memory loss - every time you restart it you have to set it up all over again.

    ADAC realizes that you probably stick to a few areas most of the time. So rather than heading to the Tree View tab right away to start drilling down, like this:


    … instead, consider using navigation nodes to add areas you are frequently accessing. In my case here, the Users container is an obvious choice:



    This pins that container in the navigation pane so that I don’t have to click around next time.


    It's even more useful if I use many deeply nested OU structures in the domain. For example, rather than clicking all the way into this hierarchy each time:


    I can instead pin the areas I plan to visit that week for a project:


    Nice! It even preserves the visual hierarchy for me. Notice another thing here - ADAC keeps the last three areas I visited in the recent view list under that domain. Even if I had not pinned that OU, I'd still get it free if I kept returning to it:


    Once you open one of those users, you don't have to dig through a dozen tabs for commonly used attributes. The important stuff is right up front.


    For a real-world example of how this does not suck, see this article. The old tabs are down there in the extensions section still, if you need them:


    A lot of people have a lot of domains

    One thing AD Users and Computers isn’t very good at is scale: it can only show you one domain at a time, requiring you to open multiple dialogs or create your own custom MMC console.


    In ADAC, it’s no sweat - just insert any domains you want using Add Navigation Nodes again:


    I can add other navigation nodes for those domains without adding the domains themselves too. Each domain gets that three-entry "recently used" list too. I'm also free to move the pinned nodes up and down the list with the right-click menu, if I have OCD. For instance, if I want the Users and Computers container from three domains, it's nothing to have them readily available, in the order I want:



    Come on now, you have to admit that is slick, right?

    Always look for the nubbin arrow

    Scattered around the UI are little arrows that allow you to hide and expose various data views. For instance, you can give yourself more real estate by hiding the navigation pane:


    Or see a user's logon information:


    Or hide a bunch of sections in groups that you don't usually care about, leaving the one you constantly examine:


    Note: It's not really called the nubbin arrow except by Mike Stephens and me. Join our cool gang!

    Views and Search are better than Find

    AD Users and Computers is an MMC snap-in: this means a UI designed for NT 4.0. When it lets you search, you are limited to the Find menu, which lets you return data, but not preserve it. After closing each search, ADUC's moron brain forgets what you just asked, like a binary pothead.

    ADAC came after the birth of search and in a time where AD is now ubiquitous and huge. That means everywhere you go, it wants to help you search rather than browse. Moreover, it wants to remember things you found useful. If I am looking at my Users container, the Filter menu is right there beckoning:


    It lets me do quick and reasonable searches without a complicated menu system:


    As well as create complex queries for common attributes:


    Then save those queries for later, for use within any spot in the forest:


    I can also use global search. And I do mean global - for example, I can search all my domains at once and not be limited to Global Catalog lookups that are often missing less-travelled attributes:


    For example here, I use ambiguous name resolution to find all objects called Administrator - note how this automatically wildcards.
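    Under the hood, ambiguous name resolution isn't a single-attribute match - the DC expands it into an OR of initial-substring filters over a set of attributes defined in the schema. Here's a rough Python sketch of what an `(anr=Administrator)` filter roughly becomes; the attribute list below is a representative subset I've chosen for illustration, not the authoritative default ANR set (which also handles splitting multi-word terms across givenName and sn):

```python
# Sketch of how the DC expands an Ambiguous Name Resolution (ANR) filter.
# ANR_ATTRIBUTES here is a representative subset; the real default set is
# flagged in the AD schema, and multi-word terms get extra handling.
ANR_ATTRIBUTES = [
    "displayName", "givenName", "sn", "sAMAccountName", "name",
]

def expand_anr(value: str) -> str:
    """Return an approximation of the LDAP filter that (anr=value) expands to."""
    clauses = "".join(f"({attr}={value}*)" for attr in ANR_ATTRIBUTES)
    return f"(|{clauses})"

# Each clause is an initial-substring match - which is why ANR searches
# behave as if the search term were automatically wildcarded.
print(expand_anr("Administrator"))
```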


    Not bad, but I want only users that are going to have their passwords expire in the next month. Moreover, I've been trying to improve my LDAP query skills when scripting. No sweat, I can do it the easy way then convert it to LDAP:



    Or maybe I let ADAC do the hard work of things like date range calculation:


    Then I take that query:


    And modify it to do what I want. Like only show me groups modified in the past three days:


    Neato - on demand quasi-auditing.
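    The date-range trick ADAC is doing for you is just arithmetic plus LDAP GeneralizedTime formatting. Here's a small Python sketch that builds the same kind of "groups modified in the past N days" filter by hand (the attribute names are real; the cutoff date below is just an example, and remember that whenChanged is maintained per-DC, so results can vary depending on which DC answers):

```python
from datetime import datetime, timedelta, timezone

def generalized_time(dt: datetime) -> str:
    """Format a datetime as an LDAP GeneralizedTime string, e.g. 20110607000000.0Z."""
    return dt.strftime("%Y%m%d%H%M%S") + ".0Z"

def groups_modified_since(days: int, now: datetime) -> str:
    """Build an LDAP filter for groups changed in the last `days` days."""
    cutoff = now - timedelta(days=days)
    return f"(&(objectCategory=group)(whenChanged>={generalized_time(cutoff)}))"

# Groups modified in the three days before June 10, 2011:
print(groups_modified_since(3, datetime(2011, 6, 10, tzinfo=timezone.utc)))
# (&(objectCategory=group)(whenChanged>=20110607000000.0Z))
```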

    A few tricks of the trade

    Return to defaults

    If you want to zero out the ADAC console and get an out of box experience, there's no menu or button. However, if you delete this folder, you delete the whole cache of settings:


    ADAC will be slow to start the next time you run it (just as it was the first time you ever ran it) but it will be quick again after that.

    The Management List

    Have some really ginormous containers? If you navigate into one using ADAC, you will see an error like this:

    "The number of items in this container exceeds the maximum number blah blah blah…"

    The error tells you what to do - just change the "Management List" options. Right! So… ehhh… where is the management list? You have to hit the ALT key to expose that menu. Argh…


    Then you can set the returned object count as low as 2000 or as high as 100000. If you have to do this, though, you need to work on organizing your objects better.

    Just think "Explorer"

    In many ways, we designed ADAC like Windows 7's Explorer. It has a breadcrumb bar, a refresh button, and forward/back buttons.


    It lets you use the address bar to quickly navigate and browse, with minimal real estate usage.


    The buttons offer a history:


    It has an obvious and "international" refresh button - very handy. ADUC made you learn weird habits like F5, which may seem natural to you now but isn't very friendly for new admins.


    That new Explorer probably took some getting used to, but once it clicked, returning to XP seems like visiting the dusty hometown you left years ago: quaint. Inefficient. Boring. If you've used the new Explorer for a few years now, ADAC should feel just as intuitive.

    Sum Up

    I'm not here to argue against AD Users and Computers; it has its advantages (I miss the Copy… menu). And it's certainly familiar after 11 years of use. However, the AD Administrative Center deserves a place at any domain admin's table and can make your life easier once you know where to look. Try it for a week and see for yourself. If you come back to ADUC, it's ok - we already cashed your check.

    Until next time.

    - Ned "Ok, maybe 'fun' was a stretch" Pyle

  • What is the Impact of Upgrading the Domain or Forest Functional Level?

    Hello all, Jonathan here again. Today, I want to address a question that we see regularly. As customers upgrade Active Directory, and they inevitably reach the point where they are ready to change the Domain or Forest Functional Level, they sometimes become fraught. Why is this necessary? What does this mean? What’s going to happen? How can this change be undone?

    What Does That Button Do?

    Before these questions can be properly addressed, it must first be understood exactly what purposes the Domain and Forest Functional Levels serve. Each new version of Active Directory on Windows Server incorporates new features that can only be taken advantage of when all domain controllers (DCs) in either the domain or forest have been upgraded to the same version. For example, Windows Server 2008 R2 introduces the AD Recycle Bin, a feature that allows the Administrator to restore deleted objects from Active Directory. In order to support this new feature, changes were made in the way that delete operations are performed in Active Directory, changes that are only understood and adhered to by DCs running on Windows Server 2008 R2. In mixed domains, containing both Windows Server 2008 R2 DCs as well as DCs on earlier versions of Windows, the AD Recycle Bin experience would be inconsistent, as deleted objects may or may not be recoverable depending on the DC on which the delete operation occurred. To prevent this, a mechanism is needed by which certain new features remain disabled until all DCs in the domain, or forest, have been upgraded to the minimum OS level needed to support them.

    After upgrading all DCs in the domain, or forest, the Administrator is able to raise the Functional Level, and this Level acts as a flag informing the DCs, and other components as well, that certain features can now be enabled. You'll find a complete list of Active Directory features that have a dependency on the Domain or Forest Functional Level here:

    Appendix of Functional Level Features

    There are two important restrictions of the Domain or Forest Functional Level to understand, and once they are understood, these restrictions are obvious. Once the Functional Level has been upgraded, new DCs running on downlevel versions of Windows Server cannot be added to the domain or forest. The problems that might arise when installing downlevel DCs become pronounced with new features that change the way objects are replicated (i.e. Linked Value Replication). To prevent these issues from arising, a new DC must be at the same level, or greater, than the functional level of the domain or forest.

    The second restriction, for which there is a limited exception on Windows Server 2008 R2, is that once upgraded, the Domain or Forest Functional Level cannot later be downgraded. The only purpose that having such ability would serve would be so that downlevel DCs could be added to the domain. As has already been shown, this is generally a bad idea.

    Starting in Windows Server 2008 R2, however, you do have a limited ability to lower the Domain or Forest Functional Levels. The Windows Server 2008 R2 Domain or Forest Functional level can be lowered to Windows Server 2008, and no lower, if and only if none of the Active Directory features that require a Windows Server 2008 R2 Functional Level has been activated. You can find details on this behavior - and how to revert the Domain or Forest Functional Level - here.

    What Happens Next?

    Another common question: what impact does changing the Domain or Forest Functional Level have on enterprise applications like Exchange or Lync, or on third party applications? First, new features that rely on the Functional Level are generally limited to Active Directory itself. For example, objects may replicate in a new and different way, aiding in the efficiency of replication or increasing the capabilities of the DCs. There are exceptions that reach beyond Active Directory itself, such as allowing DFSR to replace NTFRS for SYSVOL replication, but even those depend only on the operating system version of the DCs. Regardless, changing the Domain or Forest Functional Level should have no impact on an application that depends on Active Directory.

    Let's fall back on a metaphor. Imagine that Active Directory is just a big room. You don't actually know what is in the room, but you do know that if you pass something into the room through a slot in the locked door you will get something returned to you that you could use. When you change the Domain or Forest Functional Level, what you can pass in through that slot does not change, and what is returned to you will continue to be what you expect to see. Perhaps some new slots are added to the door through which you pass in different things, and get back different things, but that is the extent of any change. How Active Directory actually processes the stuff you pass in to produce the stuff you get back, what happens behind that locked door, really isn't relevant to you.

    If you carry this metaphor forward into the real world, if an application like Exchange uses Active Directory to store its objects, or to perform various operations, none of that functionality should be affected if the Domain or Forest Functional Level changes. In fact, if your applications are also written to take advantage of new features introduced in Active Directory, you may find that the capabilities of your applications increase when the Level changes.

    The answer to the question about the impact of changing the Domain or Forest Functional Level is there should be no impact. If you still have concerns about any third party applications, then you should contact the vendor to find out if they tested the product at the proposed Level, and if so, with what result. The general expectation, however, should be that nothing will change. Besides, you do test your applications against proposed changes to your production AD, do you not? Discuss any issues with the vendor before engaging Microsoft Support.

    Where’s the Undo Button?

    Even after all this, however, there is a great concern about the change being irreversible, so that you must have a rollback plan just in case something unforeseen and catastrophic occurs to Active Directory. This is another common question, and there is a supported mechanism to restore the Domain or Forest Functional Level. You take a System State backup of one DC in each domain in the forest. To recover, flatten all the DCs in the forest, restore one for each domain from the backup, and then DCPROMO the rest back into their respective domains. This is a Forest Restore, and the steps are outlined in detail in the following guide:

    Planning for Active Directory Forest Recovery

    By the way, do you know how often we’ve had to help a customer perform a complete forest restore because something catastrophic happened when they raised the Domain or Forest Functional Level? Never.

    Best Practices

    What can be done prior to making this change to ensure that you have as few issues as possible? Actually, there are some best practices here that you can follow:

    1. Verify that all DCs in the domain are, at a minimum, at the OS version to which you will raise the functional level. Yes… I know this sounds obvious, but you’d be surprised. What about that DC that you decommissioned but for which you failed to perform metadata cleanup? Yes, this does happen.
    Another good one that is not so obvious is the Lost and Found container in the Configuration container. Is there an NTDS Settings object in there for some downlevel DC? If so, that will block raising the Domain Functional Level, so you’d better clean that up.

    2. Verify that Active Directory is replicating properly to all DCs. The Domain and Forest Functional Levels are essentially just attributes in Active Directory. The Domain Functional Level for all domains must be properly replicated before you’ll be able to raise the Forest Functional level. This practice also addresses the question of how long one should wait to raise the Forest Functional Level after you’ve raised the Domain Functional Level for all the domains in the forest. Well…what is your end-to-end replication latency? How long does it take a change to replicate to all the DCs in the forest? Well, there’s your answer.

    Best practices are covered in the following article:

    322692: How to raise Active Directory domain and forest functional levels

    There, you’ll find some tools you can use to properly inventory your DCs, and validate your end-to-end replication.

    Update: Woo, we found an app that breaks! It has a hotfix though (thanks Paolo!). Make sure you install this everywhere if you are using .NET 3.5 applications that implement the DomainMode enumeration function.

    FIX: "The requested mode is invalid" error message when you run a managed application that uses the .NET Framework 3.5 SP1 or an earlier version to access a Windows Server 2008 R2 domain or forest


    To summarize, the Domain or Forest Functional Levels are flags that tell Active Directory and other Windows components that all DCs in the domain or forest are at a certain minimal level. When that occurs, new features that require a minimum OS on all DCs are enabled and can be leveraged by the Administrator. Older functionality is still supported so any applications or services that used those functions will continue to work as before -- queries will be answered, domain or forest trusts will still be valid, and all should remain right with the world. This projection is supported by over eleven years of customer issues, not one of which involves a case where changing the Domain or Forest Functional Level was directly responsible as the root cause of any issue. In fact, there are only cases of a Domain or Forest Functional Level increase failing because the prerequisites had not been met; overwhelmingly, these cases end with the customer's Active Directory being successfully upgraded.

    If you want to read more about Domain or Forest Functional Levels, review the following documentation:

    What Are Active Directory Functional Levels?

    Functional Levels Background Information

    Jonathan “Con-Function Junction” Stephens

  • Friday Mail Sack: Gargamel Edition

    Hi folks, Ned here again. This week we talk about 10 reasons not to use list object access dsheuristics, USMT trivia nuggets, poor man’s DFSDIAG, how to get network captures without installing a network capture tool, and some other random goo. Oh yeah, and friggin’ Smurfs.


    We’re thinking about using List Object Access dsheuristics mode to control people seeing data in Active Directory. Are there any downsides to this?


    There are a few – here are at least ten in no particular order (thanks to PFE Matt Reynolds for some of these, although he may never realize it):

    1. This can greatly increase the number of access check calls that are made, and can have a significant negative effect on performance.
    2. This will require a huge amount of work and ongoing maintenance. You will need to create and look after – forever - selective “views” for admins, help desks, service accounts, etc.
    3. This was designed more for hosted “multi-tenant solutions” that are very specialized.
    4. Microsoft applications are not generally tested with this setting.
    5. If you can find a third party vendor that tests this, I will have a heart attack and die from shock. If you can then find a vendor that is willing to change their code if you run into problems, I will then rise from the grave and eat my own pants.
    6. It’s very difficult to test how well apps are handling this, as it’s designed to “omit data”. That could have all sorts of weird effects on apps expecting to see certain built-in or “always available” objects.
    7. Active Directory is a… directory. It’s designed to share info. Specific sensitive attribute data can always be marked confidential and that’s probably really what you want here.
    8. Doing this is one of the least useful security measures in a whole litany of things that you probably haven’t implemented – encrypting your LDAP traffic, using IPSEC everywhere, using two-factor smart cards for all user access, encrypting all drives, preventing physical removal of computers. Or making sure your web servers don’t allow ancient SQL injection attacks. Focus!
    9. This makes you unique. You don’t want to be unique.
    10. Just because you can do something does not mean you should do something. We provide an option to format your hard drive as well.

    Strangely, two people asked about this in the past few weeks.


    Can USMT perform “incremental” or “differential” scans into a store? We have a lot of data to capture and it may take a while, especially when going to a remote store. We’d like to do it in phases if possible.


    Sorry, no. USMT completely deletes the destination store contents when you start a scanstate (this is why you have to specify /o if the store already exists). If you perform a hardlink migration though, you are not copying data and it will scan much faster than a classic store.

    If you have to use a remote compressed classic store and you’re worried about reliability, run your scanstate to a local store location on the disk, then copy that store folder to a network location afterwards. Make sure you calculate space estimations to ensure you are not going to run out of disk, naturally.


    I don’t have any Win2008 servers – so I cannot use DFSDIAG.EXE – but I’d like to report on their DFS Namespace health. Are there other tools?


    File Services Management Pack for System Center Operations Manager 2007

    That will monitor health of Win2003 DFSN very well indeed. You can also use DFSDIAG via RSAT on Vista and Win7 clients; why do I suspect that you’re looking for a more… frugal… option, though? ;-P

    The old DFSUTIL.EXE tool will stand in for DFSDIAG in a pinch, but it requires you to both run more commands and interpret the results carefully. It’s not going to spend much time explaining what’s wrong, so much as show you what it thinks is configured and let you decide if that’s wrong or not. Some of the more useful commands:

    dfsutil.exe /root:<dfs name> /view /verbose

    dfsutil.exe /server:<root server> /view

    dfsutil.exe /domain:<domain> /view

    dfsutil /sitename:<root server or dc or target or client>

    dfsutil /root:<dfs name> /sitecosting /display

    dfsutil /root:<dfs name> /insite /display

    dfsutil /root:<dfs name> /targetfailback /display

    dfsutil /root:<dfs name> /targetpriority /server:<target> /display

    dfsutil.exe /root:<dfs name> /checkblob

    dfsutil /viewdfsdirs:<volume name>



    No complaining, we released DFSDIAG two OSes ago and you’re on a dying one. Plus we wrote it for a reason!


    The USMT hotfix KB2023591 only lists downloads for Windows 7/Windows Server 2008 R2.


    Is there a version for older operating systems?


    USMT 4.0 only cares that you run it against a client OS SKU, and that it be XP or later. The download is a CAB file and doesn’t have any OS checking for installation, only scanstate and loadstate enforce the OS. If you dig into the nugget of that main KB at the bottom you will see only:


    The reason it lists the OS on the download page is that it has to say something, and USMT is built from the Windows 7/R2 source tree. So there you go.

    Awesome Technique for Win7/2008 R2 Network Captures

    Not a question, but a cool method that is too small to rate a full blog post: if you need to get a network capture on a Windows 7 or Windows Server 2008 R2 computer and you do not have or want Netmon installed, you can use NETSH.EXE. From an elevated CMD prompt run:

    netsh trace start capture=yes tracefile=c:\yourcapture.etl

    Do whatever you needed to do

    netsh trace stop

    Boom – network capture, written in ETL format.


    Open that file in Netmon 3.4 and you get all the usual capture info, plus other conversation and process info. AND other cool stuff – open the CAB file it created and you find a bunch of useful files with IP info, firewall event logs, applied group policies, driver versions, and more. All the goo I gather manually when I am getting a capture. Sweet!


    Thanks to Tim “Mighty” Quinn for demoing this here.

    Other Stuff

    A few years ago TechNet Magazine stopped printing paper copy and switched to a web-only format. I lost track of them after that, but this weekend, I started going through their online versions from 2010 and 2011. It turns out there’s good stuff I’d been missing. Here are a few cherry picked articles; feel free to point out some other favorites in the Comments:

    Windows Confidential: Testing, Testing (Raymond Chen)

    An interesting explanation of what Beta used to mean, and what it means now, from a Principal SDE who has been developing Windows since the Tithonian age. Heck, his blog is ready to collect Social Security.

    Troubleshooting 201: Ask the Right Questions (Stephanie Krieger)

    How to be an effective troubleshooter. Don’t stop reading just because the author is an Office expert; it’s applicable across all aspects of IT. A truly excellent article that should be required reading for new admins.

    Toolbox (Greg Steen)

    Unlike me, these folks can recommend useful third party utilities. It’s a monthly column and some of these are pretty slick.

    Windows PowerShell: HTML Reports in PowerShell (Don Jones)

    An easy technique to take harsh text output and turn it into fluffy HTML. Perfect for punching up reporting to show your manager with zero extra effort, leaving more time for you to work on real issues. Or, you know, see your children grow up. Cat’s in the cradle and the silvaaaaah spoooon…

    Using Kerberos for SharePoint Authentication (Pav Cherny)

    Yes please! If you have a friend that admins SharePoint, share this with them. In fact, bribe them to follow it. Whatever it takes. NTLM is the Devil and SharePoint feeds him jalapeños.


    The Daily Mail was granted a “rare and remarkable” interview with Bill Gates last week. It’s a very interesting read.

    Remember when I said yesterday that it sucks to use the Internet in Australia and Canada? Well it sucks in other places too… The article isn’t what I’d call “complete” (it misses 98% of the world and doesn’t include my gigantic US ISP, Time Warner, for example – TW doesn’t care if I download 5 TB or 5KB, as fast and as often as I like, as long as I pay on time; I use Sprint for my phone for the very same reason – flat rate unlimited data without metering rules). A nifty piece – I recommend the comments.

    Why, Andalusia? Whyyyyyyyyyyy!?!?!?!?! I mean, I expect this from Belgium… Maybe Platformas knows.

    Have a nice weekend folks.

    - Ned “those dudes totally smurfed their smurf up” Pyle

  • Target Group Policy Preferences by Container, not by Group

    Hello again AskDS readers, Mike here again. This post reflects on Group Policy Preference targeting items, specifically targeting by security groups. Targeting preference items by security groups is a bad idea. There is a better way that most environments can accomplish the same result, at a fraction of the cost.

    Group Membership dependent

    The world of Windows has been dependent on group membership for a long time. This dependency is driven by the way Windows authorizes access to resources. The computer or user must be a member of the group in order to access the printer or file server. Groups are and have been the bane of our existence. Nevertheless, we should not let group membership dominate all aspects of our design. One example where we can move away from using security groups is with Group Policy Preference (GPP) targeting.

    Targeting by Security Group

    GPP Targeting items control the scope of application for GPP items. Think of targeting items as Group Policy filtering on steroids, but they only apply to GPP items included in a Group Policy object. They introduce an additional layer of administration that provides more control over "how" GPP items apply to a specific user or computer.

    Figure 1 - List of Group Policy Preference Targeting items

    The most common scenario we see using the Security Group targeting item is with the Drive Map preference item. IT Professionals have been creating network drive mappings based on security groups since Moby Dick was a sardine-- it's what we do. The act is intuitive because we typically apply permissions to the group and add users to the group.

    The problem with this is that not all applications determine group membership the same way. Also, the addition of Universal Groups and the numerous permutations of group nesting make this a complicated task. And let's not forget that some groups are implicitly added when you log on, like Domain Users, because it’s the designated primary group. Programmatically determining group membership is simple -- until it's implemented, and its implementation's performance is typically inversely proportional to its accuracy. It either takes a long time to get an accurate list, or a short time to get a somewhat accurate list.

    Security Group Computer Targeting

    Using GPP Security Group targeting for computers is a really bad idea. Here's why: in most circumstances, the application retrieves group memberships from a domain controller. This means network traffic from the client to the domain controller and back again. Using the network introduces latency. Latency slows processing, and slow processing is the last thing you want when the computer is processing Group Policy. Also, Preference Targeting allows you to create complex targeting scenarios using Boolean operators such as AND, OR, and NOT. This is powerful stuff and lets you combine one or more targeting items into a single GPP item. However, the power comes at a cost. Remember that network traffic we created by making queries to the domain controller for group memberships? Well, that information is not cached; each Security Group targeting item in the GPO must perform that query again - yes, the same one it just did. Don't hate, that's just the way it works. This behavior does not take into account nested groups. You need to increase the number of round trips to the domain controller if you want to include groups of groups of groups etcetera ad nauseam (trying to make my Latin word quota).

    Security Group User Targeting

    User Security Group targeting is not as bad as computer Security Group targeting. During user Security Group targeting, the Group Policy Preferences extension determines group membership from the user's authentication token. This process is more efficient and does not require round trips to the domain controller. One caveat with depending on group membership is the risk of the computer or user's group membership containing too many groups. Huh - too many groups? Yes, this happens more often than many realize. Windows creates an authentication token from information in the Kerberos TGT. The Kerberos TGT has a finite amount of storage for this information. Users and computers with large group memberships (groups nested within groups…) can exhaust the finite storage available in the TGT. When this happens, the remaining group memberships are truncated, which creates the effect that the user is not a member of those groups. Groups truncated from the authentication token result in the computer or user not receiving a particular Group Policy Preference item.
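    To get a feel for how group bloat eats that finite space, there is a well-known back-of-the-envelope formula from KB 327825 for estimating access token size. The sketch below implements that formula in Python; the example membership counts are made up, and 12000 bytes is the default MaxTokenSize in this era of Windows:

```python
def estimated_token_size(domain_local_and_external_universal: int,
                         global_and_home_universal: int,
                         sid_history: int = 0) -> int:
    """Estimate Kerberos access token size in bytes (KB 327825 formula).

    d: domain local groups + universal groups outside the account's domain
       (+ SIDHistory entries) - each costs 40 bytes.
    s: global groups + universal groups inside the account's own domain -
       each costs 8 bytes.
    """
    d = domain_local_and_external_universal + sid_history
    s = global_and_home_universal
    return 1200 + 40 * d + 8 * s

# A hypothetical user in 150 domain local groups and 200 global groups:
size = estimated_token_size(150, 200)
print(size)  # 8800 - still under the 12000-byte default MaxTokenSize
```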

    You got any better ideas?

    A better choice for targeting Group Policy Preference items is to use Organizational Unit targeting items. It's da bomb!!! Let's look at how Organizational Unit targeting items work.

    Figure 2 Organizational Unit Targeting item

    The benefits of Organizational Unit targeting items

    Organizational Unit targeting items determine OU container membership by parsing the distinguished name of the computer or user. So, they simply use string manipulation to determine which OUs are in scope for the user or computer. Furthermore, they can determine whether the computer or user is a direct member of an OU by simply looking for the first occurrence of OU immediately following the principal's name in the distinguished name.
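    As a rough illustration of that string manipulation (this is not the CSE's actual code; the sample DN and function names are made up):

    ```python
    # Illustrative sketch: determining OU scope by parsing a distinguished
    # name, as Organizational Unit targeting items do.

    def ou_chain(distinguished_name):
        """Return the DNs of every OU the object sits in, innermost first."""
        parts = [p.strip() for p in distinguished_name.split(",")]
        ous = []
        for i, part in enumerate(parts):
            if part.upper().startswith("OU="):
                # Each OU's own DN is that RDN plus everything to its right.
                ous.append(",".join(parts[i:]))
        return ous

    def is_direct_member(distinguished_name, ou_dn):
        """Direct container membership: the first OU after the principal's
        own RDN must be the OU in question."""
        chain = ou_chain(distinguished_name)
        return bool(chain) and chain[0].lower() == ou_dn.lower()

    dn = "CN=WKS01,OU=Workstations,OU=Chicago,DC=contoso,DC=com"
    print(ou_chain(dn))
    print(is_direct_member(dn, "OU=Workstations,OU=Chicago,DC=contoso,DC=com"))
    ```

    Note that no directory round trip is needed once the DN is in hand - the whole evaluation is local string work.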

    Computer Targeting using OUs

    Computer Preference targeting with OUs still has to contact a domain controller. However, it's an LDAP call, and because we are not chasing nested groups, it's quick and efficient. First, the Preference client-side extension gets the name of the computer. The CSE gets the name from the local computer, either from the environment variable or from the registry, in that order. The CSE then uses the name to look up the security identifier (SID) for the computer. Windows performs an LDAP bind to the computer object in Active Directory using the SID. The bind completes and retrieves the computer object's distinguished name. The CSE then parses the distinguished name as needed to satisfy the Organizational Unit targeting item.

    User Targeting using OUs

    User Preference targeting requires fewer steps because the client-side extension already knows the user's SID. The remaining work performed by the CSE is to LDAP bind to the user object using the user's SID and retrieve the distinguished name from the user object. Then, business as usual, the CSE parses the distinguished name to satisfy the Organizational Unit targeting item.

    Wrap Up

    So there you have it. The solution is clean and takes full advantage of your existing Active Directory hierarchy. Alternatively, it could be the catalyst needed to start a redesign project. Understandably, this only works for Group Policy Preferences items; however, every little bit helps when consolidating the number of groups to which computers and users belong, and it makes us a little less dependent on groups. It's also a better, faster, and more efficient alternative to Security Group targeting. So try it.


    We recently published a new article about behavior changes with Group Policy Preferences Computer Security Group Targeting. Read more here.

    - Mike "This is U.S. History; I see the globe right there" Stephens



  • RSA SecurID Do Over

    Ned here. If you are using RSA SecurID, you’re probably aware they were compromised several months ago. You may also have heard that since then, hackers have been using that stolen info to attack or compromise various organizations. What you may not know is RSA is now issuing replacement tokens for their customers. The catch is you need to contact them; they are not necessarily going to contact you. More info from their executive chairman here:

    1-800-782-4362, Option #5 for RSA, Option #1 for the RSA SecurID Remediation Program
    1-800-543-4782, Option #5 for RSA, Option #1 for the RSA SecurID Remediation Program
    +1-508-497-7901, Option #5 for RSA, Option #1 for RSA SecurID Remediation Program

    None of this is directly AD or Microsoft-related, but I’d be remiss if I didn’t spread the word – RSA has a large customer base. That said, if you’re interested in alternatives, here’s some reading on understanding and deploying two-factor smartcard authentication:

    Ned “fobbing any questions off on Jonathan” Pyle

  • Friday Mail Sack: LeBron is not Jordan Edition

    Hi folks, Ned here again. Today we discuss trusts rules around domain names, attribute uniqueness, the fattest domains we’ve ever seen, USMT data-only migrations, kicking FRS while it’s down, and a few amusing side topics.

    Scottie, don’t be that way. Go Mavs.


    I have two forests with different DNS names, but with duplicate NetBIOS names on the root domain. Can I create a forest (Kerberos) trust between them? What about NTLM trusts between their child domains?


    You cannot create external trusts between domains with the same name or SID, nor can you create Kerberos trusts between two forests with the same name or SID. This includes both the NetBIOS and FQDN version of the name – even if using a forest trust where you might think that the NB name wouldn’t matter – it does. Here I am trying to create a trust between and forests – I get the super useful error:

    “This operation cannot be performed on the current domain”


    But if you are creating external (NTLM, legacy) trusts between two non-root domains in two forests, as long as the FQDN and NB name of those two non-root domains are unique, it will work fine. They have no transitive relationship.

    So in this example:

    • You cannot create a domain trust nor a forest trust between and
    • You can create a domain (only) trust between and
    • You cannot create a domain trust between and
    • You cannot create a domain trust between and


    Why don’t the last two work? Because the trust process thinks that the trust already exists due to the NetBIOS name match with the child’s parent. Arrrgh!



    You could still have serious networking problems in this scenario regardless of the trust. If there are two same-named domains physically accessible through the network from the same computer, there may be a lot of misrouted communication when people just use NetBIOS domain names. They need to make sure that no one ever has to broadcast NetBIOS to find anything – their WINS environments must be perfect in both forests and they should convert all their DFS to using DfsDnsConfig. Alternatively they could block all communication between the two root domains’ DCs, perhaps at a firewall level.

    Note: I am presuming your NetBIOS domain name matches the left-most part of the FQDN name. Usually it does, but that’s not a requirement (and not possible if you are using more than 15 characters in that name).


    Is it possible to enforce uniqueness for the sAMAccountName attribute within the forest?


    [Courtesy of Jonathan Stephens]

    The Active Directory schema does not have a mechanism for enforcing uniqueness of an attribute. Those cases where Active Directory does require an attribute to be unique in either the domain (sAMAccountName) or forest (objectGUID) are enforced by other code – for example, AD Users and Computers won’t let you do it:


    The only way you could actually achieve this is to have a custom user provisioning application that would perform a GC lookup for an account with a particular sAMAccountName, and would only permit creation of the new object should no existing object be found.
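    A minimal sketch of that provisioning check, with the GC lookup simulated by an in-memory set (the function and `ProvisioningError` names are invented for illustration; a real tool would search a global catalog over LDAP instead):

    ```python
    # Sketch of a provisioning check: only create the account if no object
    # in the forest already uses the sAMAccountName. The "forest" here is a
    # plain set standing in for a global catalog query.

    class ProvisioningError(Exception):
        pass

    def create_user(existing_samaccountnames, new_samaccountname):
        # sAMAccountName comparisons are case-insensitive in AD, so
        # normalize before checking.
        taken = {name.lower() for name in existing_samaccountnames}
        if new_samaccountname.lower() in taken:
            raise ProvisioningError(
                f"{new_samaccountname} already exists in the forest")
        existing_samaccountnames.add(new_samaccountname)
        return new_samaccountname

    forest = {"saradavis", "jstephens"}
    create_user(forest, "npyle")          # succeeds
    try:
        create_user(forest, "SaraDavis")  # raises: duplicate
    except ProvisioningError as e:
        print(e)
    ```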

    [Editor’s note: If you want to see what happens when duplicate user sAMAccountName entries are created, try this on for size in your test lab:

    1. Enable AD Recycle Bin.
    2. Create an OU called Sales.
    3. Create a user called 'Sara Davis' with a logon name and pre-Windows 2000 logon name of 'saradavis'.
    4. Delete the user.
    5. In the Users container, create a user called 'Sara Davis' with a logon name and pre-Windows 2000 logon name of 'saradavis' (simulating someone trying to get that user back up and running by creating it new, like a help desk would do for a VIP in a hurry).
    6. Restore the deleted 'Sara Davis' user back to her previous OU (this will work because the DNs do not match and the recreated user is not really the restored one), using:

    get-adobject -filter 'samaccountname -eq "saradavis"' -includedeletedobjects | restore-adobject -targetpath "ou=sales,dc=consolidatedmessenger,dc=com"

    (note the 'illegal modify operation' error).

    7. Despite the above error, the user account will in fact be restored successfully and will now exist in both the Sales OU and the Users container, with the same sAMAccountName and userPrincipalName.
    8. Logon as SaraDavis using the NetBIOS-style name.
    9. Logoff.
    10. Note in DSA.MSC how 'Sara Davis' in the Sales OU now has a 'pre-Windows 2000' logon name of $DUPLICATE-<something>.
    11. Note how both copies of the user have the same UPN.
    12. Logon with the UPN name of and note that this attribute does not get mangled.

    Fun, eh? – Ned]


    Which customer in the world has the largest number of objects in a production AD domain?


    Without naming specific companies - I have to protect their privacy - the single largest “real” domain I have ever heard of had ~8 million user objects and nearly nothing else. It was used as auth for a web system. That was back in Windows 2000 so I imagine it’s gotten much bigger since then.

    I have seen two other customers (inappropriately) use AD as a quasi-SQL database, storing several hundred million objects in it as ‘transactions’ or ‘records’ of non-identity data, while using a custom schema. This scaled fine for size but not for performance, as they were constantly writing to the database (sometimes at a rate of hundreds of thousands of new objects a day) and the NTDS.DIT is - naturally - optimized for reading, not writing.  The performance overall was generally terrible as you might expect. You can also imagine that promoting a new DC took some time (one of them called about how initial replication of a GC had been running for 3 weeks; we recommended IFM, a better WAN link, and to stop doing that $%^%^&@).

    For details on both recommended and finite limits, see:

    Active Directory Maximum Limits - Scalability

    The real limit on objects created per DC is 2,147,483,393 (that is, 2^31 minus 255). The real limit on users/groups/computers (security principals) in a domain is 1,073,741,823 (2^30 minus 1). If you find yourself getting close on the latter you need to open a support case immediately!
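    The arithmetic behind those two figures:

    ```python
    # The documented limits are powers of two minus reserved values.
    objects_per_dc = 2**31 - 255                 # objects created per DC
    security_principals_per_domain = 2**30 - 1   # users/groups/computers

    print(objects_per_dc)                  # 2147483393
    print(security_principals_per_domain)  # 1073741823
    ```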


    Is a “Data Only” migration possible with USMT? I.e. no application settings or configuration is migrated, only files and folders.


    Sure thing.

    1. Generate a config file with:

    scanstate.exe /genconfig:config.xml

    2. Open that config.xml in Notepad, then search and replace “yes” with “no” (including the quotation marks) for all entries. Save that file. Do not delete the lines, or think that not including the config.xml has the same effect – that will lead to those rules processing normally.
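    If you'd rather script step 2 than do the search-and-replace in Notepad, the same edit looks like this (a sketch; the sample component line is just an illustration of what a generated config.xml contains):

    ```python
    # Sketch of step 2 as a script: flip every migrate="yes" to migrate="no"
    # in the generated config.xml, leaving the lines themselves in place.

    def disable_all_components(xml_text):
        # Replace only the quoted attribute value, exactly as the manual
        # search-and-replace (including quotation marks) would.
        return xml_text.replace('"yes"', '"no"')

    sample = '<component displayname="Microsoft-Windows-shmig" migrate="yes"/>'
    print(disable_all_components(sample))
    ```

    Run it over the whole file with something like `open("config.xml").read()` and write the result back out before running scanstate.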

    3. Run your scanstate, including config.xml and NOT including migapp.xml. For example:

    scanstate.exe c:\store /config:config.xml /i:migdocs.xml /v:5

    Normally, your scanstate log should be jammed with entries around the registry data migration:

    Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.ico]
    Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jfif]
    Processing Registry HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\AllowedDragImageExts [.jpe]
    Processing Registry HKCU\Control Panel\Accessibility\HighContrast
    Processing Registry HKCU\Control Panel\Accessibility\HighContrast [Flags]
    Processing Registry HKCU\Control Panel\Accessibility\HighContrast [High Contrast Scheme]

    If you look through your log after using the steps above, none of those will appear.

    You might also think that you could just rename the DLManifests and ReplacementManifests folders to get the same effect, and you'd almost be right. The problem is that Vista and Windows 7 also use the built-in %systemroot%\winsxs\manifests folders, and you certainly cannot remove those. Just go with the config.xml technique.


    After we migrate SYSVOL from FRS to DFSR on Windows Server 2008 R2, we still see that the FRS service is set to automatic. Is it ok to disable?


    Absolutely. Once an R2 server stops replicating SYSVOL with FRS, it cannot use that service for any other data. If you try to start the FRS service or replicate with it, it will log events like these:

    Log Name: File Replication Service
    Source: NtFrs
    Date: 1/6/2009 11:12:45 AM
    Event ID: 13574
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    The File Replication Service has detected that this server is not a domain controller. Use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated and therefore, the service has been stopped. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

    Log Name: File Replication Service
    Source: NtFrs
    Date: 1/6/2009 2:16:14 PM
    Event ID: 13576
    Task Category: None
    Level: Error
    Keywords: Classic
    User: N/A
    Replication of the content set "PUBLIC|FRS-REPLICATED-1" has been blocked because use of the File Replication Service for replication of non-SYSVOL content sets has been deprecated. The DFS Replication service is recommended for replication of folders, the SYSVOL share on domain controllers and DFS link targets.

    We document this in the SYSVOL Replication Migration Guide but it’s easy to miss and a little confusing – this article applies to both R2 and Win2008, and Win2008 can still use FRS:

    7. Stop and disable the FRS service on each domain controller in the domain unless you were using FRS for purposes other than SYSVOL replication. To do so, open a command prompt window and type the following commands, where <servername> is the Universal Naming Convention (UNC) path to the remote server:

    Sc <servername> stop ntfrs

    Sc <servername> config ntfrs start= disabled

    Other stuff

    Another week, another barrage of cloud.

    (courtesy of the blog)

    Finally… friggin’ Word. Play this at 720p, full screen.



    Have a great weekend folks.

    - Ned “waiting on yet more email from humor-impaired MS colleagues about his lack of professionalism” Pyle

  • Friday Mail Sack: Wahoo Edition

    Hi folks, Ned here again. This week we talk GUI metadata cleanup, your useless manager (attributes), USMT abandonment and weight issues, the meaning of the DFSR nothing state, and the usual “other stuff.”


    TechNet says if you use DSA.MSC to delete a DCs computer object, the metadata cleanup process is started. Will a metadata cleanup start if you move the DC computer object from the "Domain Controllers" OU to another OU? I was reading this here "...the metadata is automatically cleaned up when a domain controller account is removed from the Domain Controllers organizational unit (OU)." 


    You only trigger the metadata cleanup when the DC computer object is deleted. You can move it to another OU (although we really wish you wouldn’t) and cleanup won’t occur. Here I have a domain with three DCs. I move one, then I force replication between all DCs in the forest, and restart that moved DC.


    He’s fine afterwards – still replicating, still in the DC group, not metadata cleaned. Of course, he’s no longer applying the Default Domain Controller policy and is now getting all kinds of weird OU policy, but that’s a different problem!


    Do the manager and managedBy attributes in AD do anything, other than for the Exchange global address list info?


    For groups, managedBy is an administrative convenience to designate “group admins”. When set like below, whatever principal is listed in managedBy gets permission to update the group’s membership (the actual security is updated on the group’s AD object to allow this).

    So when you populate this:


    This happens under the covers:


    This is done by DSA.MSC, DSAC.EXE, and perhaps other tools; it is not some special function of the DC.

    In Win2008 and later, managedBy also became the way you delegate local administration on an RODC, allowing branch admins to install patches, manage shares, etc.


    Undocumented Bonus Alert:

    On the RODC, this is updating the RepairAdmin registry value within RODCRoles:


    Totally Documented Non-Bonus Not-Alert:

    You can use NTDSUTIL.EXE LOCAL ROLES to add accounts to other roles and they are stored here based on their well-known RID. See this goo.

    ManagedBy is also often used as an inventory marker by companies to denote which business unit runs certain computers. It could perhaps be useful in an ADFS/claims-aware scenario (“everyone who reports to Bob gets to access the team fantasy football league pool”), but I’ve not tried it.

    I don’t know of any pure AD security usage for the manager attribute; I’ve only seen it used for the GAL and HR apps as a way to build organizational chains, like you mentioned earlier.


    The documentation on “Rerouting files and folders” mentions that the XML will migrate the contents of the source folder to the destination folder. What we have observed in our lab is that it also makes a copy of the folder’s contents in the destination folder. For instance, if I have a folder C:\TestFolder with a few files and I tell USMT to migrate them to the CSIDL_PERSONAL (i.e. “My Documents”) folder of each user, it makes a copy of the contents into each user’s Documents folder and also migrates the C:\TestFolder folder to the destination computer in the same C:\ location. Is this the expected behavior, and is there a way to avoid the duplication?


    This is expected, because of migdocs.xml. It is making sure the folder contents on the root of the drive are copied as part of MigXmlHelper.GenerateDocPatterns. To override this, you need additional custom XML that runs in the SYSTEM context and blocks that special folder you are redirecting for all users:

    <component type="Documents" context="System">

        <displayName>Exclude folder and override migdocs.xml</displayName>

        <role role="Data">

            <rules>

                <unconditionalExclude>

                    <objectSet>

                        <pattern type="File">C:\testfolder\* [*]</pattern>

                    </objectSet>

                </unconditionalExclude>

            </rules>

        </role>

    </component>
    That will result in the testfolder contents copying to every user profile Documents folder and not copying to C:\testfolder on the destination.

    It’s very rare for anyone to do this, which is why the behavior isn’t well documented - mainly because it uses up a ton of additional drive space duplicating all those files. This is what miguser.xml used to do by default, and it's why that XML file was deprecated – people kept running out of disk space.


    What does the DFSR replicated folder “Uninitialized” state mean? From: The others seem self-explanatory or are well documented in that article.


    State 0 (Uninitialized) has no real meaning; it is a state placeholder so that we have some point of reference instead of NULL or blank. It is expected when you first configure a replicated folder that has not yet been detected by DFSR polling due to AD replication latency or timing.


    I'm in the process of USMT customization and have run into an issue where I need to block most of a folder’s contents from migrating, but still include one specific file. This is an issue for us because we'd like to use the MigDocs.XML file - our users have a habit of storing data outside of their profiles. This particular case deals with Oracle's tnsnames.ora file, located in C:\Oracle\network\admin. To use the MigDocs.XML file and not migrate C:\Oracle\*, I'd have to use an unconditional exclude. But then I wouldn't be able to migrate the tnsnames.ora file. Any suggestions?


    This is tricky because you’re doing the opposite of what USMT was designed for (it wants to granularly exclude and grossly include). I can think of three options:

    • [Most recommended] Determine the known files/file types that exist in the oracle folder and specifically unconditionalExclude those with [] and [*.bar], leaving only the tnsnames.ora to migrate through “omission of exclusion”. I’d imagine there aren’t too many file types in that folder and that they are fairly predictable. This also has the good side effect of not nuking any non-oracle files someone saved there in a fit of usery’ness.
    • [Sort of recommended] Use a batch file to run USMT. That batch file copies the tnsnames.ora file after you run scanstate, and puts it in the store folder. Another batch file that runs loadstate copies it out of the store folder back to that path on the destination computer.
    • [Not recommended] You can edit the actual migdocs.xml and add an explicit exclude rule in the MigDocSystem component that excludes c:\Oracle\*[*]. The two rules (implicit include generated by GenerateDocPatterns and your explicit exclude rule) have the same specificity and in that case the exclude should win. This negates the include created by GenerateDocPatterns. Now with a clean slate you can have an explicit include in another component that migrates that inner folder with the tnsnames.ora file. For example:
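    A sketch of what that rule pair could look like (the element layout follows the usual USMT custom XML shape; treat the exact placement inside migdocs.xml as illustrative, not gospel):

    ```xml
    <!-- Added to the MigDocSystem component in the edited migdocs.xml:
         explicitly exclude the whole Oracle folder. -->
    <unconditionalExclude>
        <objectSet>
            <pattern type="File">C:\Oracle\* [*]</pattern>
        </objectSet>
    </unconditionalExclude>

    <!-- In a separate custom component: bring back only tnsnames.ora. -->
    <include>
        <objectSet>
            <pattern type="File">C:\Oracle\network\admin [tnsnames.ora]</pattern>
        </objectSet>
    </include>
    ```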




    So even though my oracle folder is like this:


    My actual migration store gets only this:


    I call this “not recommended” because it is very difficult to version control the included XML files in USMT, and you end up with thousands of instances of USMT running dozens of different versions of the factory XML files. Eventually, somebody screws one up, but no one knows that the default XML is now tainted. I’ve seen support cases where the customer had been troubleshooting this for weeks before they finally broke down and called us; because of that, I still recommend the other two options. If you go this route, make sure you carefully track the edited migdocs.xml files and rename them so there is less confusion.

    If possible, use some version control software to check XML in and out – there are plenty of free ones out there, or you can throw us some cash for TFS if you like what you see in the trial. There are also hosting companies that will run TFS for you, for a monthly per-seat fee, if you just want this for a project like your Windows 7 rollout. It may sound like overkill, but trust me – delaying your rollout for a month because some bozo decided to monkey with the XML is not cool. You’re writing migration code; you need to treat it with the same seriousness that you’d give C++.

    Naturally, these all work for any folder/file combination. That was an awesome question.

    Oh, I just thought of a fourth option: switch to SQL server.

    Other Stuff

    Are you new to your organization, or new to the IT field? Maybe your annual review could have gone better? Here’s useful advice from Eric Brechner, an MS veteran and Principal Dev Manager; you may have read his book Hard Code. He usually only posts once a month, but each article is phenomenal, even when you disagree with him. Here’s a sample:

    The new guy

    You're no bargain either

    I messed up

    Individual leadership

    Controlling your boss for fun and profit

    Superfan Mark Morowczynski points out that he already pwned the Internet before last week’s tip on using NETSH.EXE for captures, and that I owe him royalties. In lieu of money, I’ll push his blog a bit. When he can be bothered to write, he generates great stuff. Infrequent PFE bloggers are like corrupt politicians – expected, and full of excuses. Oh, and they both claim way too much on their expense reports.

    A few months back, some of us moved to a different location in the building for a project. I came to say hi, and I found this on the common area whiteboard:

    All done with love, I’m sure. If the Keebler reference doesn’t make sense, go here.

    University of Virginia survived their College World Series elimination game last night, so my wife can breathe easy… until tonight, where they play the Gamecocks, who beat them on Tuesday. She’s a Wahoo Cavalier by way of graduate school. I moved to North Carolina in 2000 and found that Southerners take their college sports very seriously. When UNC loses a basketball game, my sister-in-law acts as if someone died.

    And yes, I said their. My wife tells me that no matter how long I live here, no matter how assimilated I become, no matter how many grits I eat: I will always be a damyankee. Since everyone in Chicago thinks I’m a redneck now, I have no citizenship and I’m thinking of forming my own country. I’ve already picked out my state bird:

    The noble vulture

    Have a nice weekend folks.

    - Ned “carpetbag full of cookies” Pyle

  • Blocking Wallpaper Migration with USMT (or: you are a jerk)

    Hi folks, Ned here again. Do you hate your users? Do you revel in removing the slightest joy they have in their day? Do you wish to crush their hope and dreams, to the point of removing the small shreds of humanity they see while walled into their bleak veal pens?

    If so, this post is for you.

    Today I talk about how to block wallpaper and theme migration from Vista or Windows 7 source computers when running USMT. The actual blocking part here is trivial, so if that's all you care about skip to the end. If you want to learn something, start at the beginning: this is part of an informal series that explains USMT reverse engineering. Once you get good at this, you can figure out any weird little corner-case on your own. Even blocking default behaviors. Like not letting people have pictures of their grandkids.

    How do you live with yourself?

    Understanding how wallpaper works

    If you're migrating from Windows XP, you are already good to go - as you know from previous reading, wallpapers are not migrated automatically from that OS unless you write custom XML. However, if you are migrating from Windows Vista or Windows 7, user backgrounds do transfer over and work fine. It's tricky to turn off though, due to the shell's various personalization options. Let's walk through this.

    Here for example, looking at a Windows Vista computer, you see that the desktop key (which contains a wallpaper registry value) is migrated using an OS built-in manifest:

    C:\Windows\winsxs\Manifests>findstr /i "[wallpaper]" *.manifest



    <pattern type="Registry">HKCU\Control Panel\Desktop [Wallpaper]</pattern>

    A quick Internet search seems to confirm that this is the right key data, as does looking at the registry:


    If you set that Win32k manifest to NO in your config.xml (which is a bad idea, as that manifest migrates quite a few other settings and the users are going to be noticeably affected), then test a migration:

    <component displayname="Microsoft-Windows-Win32k-Settings" migrate="no"

    ... the wallpaper customizations are still migrated.

    This shows the danger of searching for references in the manifests without actually looking at the rules in the XML. Let's zoom out a bit:

    <unconditionalExclude>
        <objectSet>
            <pattern type="Registry">HKCU\Control Panel\Desktop [Wallpaper]</pattern>
        </objectSet>
    </unconditionalExclude>

    Examining that manifest closely shows that it actually already blocks migration of the wallpaper setting. But if you look at an actual user migration you will see that this wallpaper registry key does migrate even with this exclusion. You can search XML all you want for this one but you will only figure it out by the process of elimination: it's coming from:


    … which has no rules and only runs a "SHMIG.DLL" plugin that is completely opaque to you.

    That manifest gets called by this guy in the config.xml:

    <component displayname="Microsoft-Windows-shmig" migrate="yes"

    Which is overruling the Win32K manifest. Gotta love the shell… turning that manifest off is definitely a bad idea, as it does a bazillion other things that you definitely want to migrate.

    So what to do? It turns out there are two places in the user registry settings that control the wallpaper: the wallpaper registry value in the Desktop key, and the user's custom theme:

    Ahhh Vista. We hardly knew ya.


    The theme encapsulates the wallpaper settings as well. That does migrate over, thanks to this component manifest:

    <component displayname="Personalized Settings" migrate="yes" ID="appearance_and_display\personalized_settings">

    <component displayname="Microsoft-Windows-uxtheme" migrate="yes" ID=""/>

    <component displayname="Microsoft-Windows-themeui" migrate="yes" ID=""/>

    Which is the OS-supplied manifest:


    Which only does this:

    <rules context="User">

        <include>

            <objectSet>

                <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Themes\* [*]</pattern>

            </objectSet>

        </include>

    </rules>
    If you block the Themeui settings explicitly by using the config.xml, you block the wallpaper migration implicitly. This works because when the user first logs on and has blank theme settings, the default theme is set by the OS and voila: no customization happened, even if the Control Panel\Desktop\Wallpaper value is set. It sets the user to the default Windows theme. Sweet!

    Nevermind all that, let's block some wallpaper!

    Either of these works on its own; pick one. The second one is the recommended one, the first one is the easier one:

    1. Use the themeui config.xml (this can have unwanted side effects for various manifests, but in this case it's atomic to wallpaper and themes) to set only this change:

    <component displayname="Microsoft-Windows-themeui" migrate="no"

    2. Use custom XML (this is a best practice and generally recommended when blocking default migration behaviors):

    scanstate c:\store /o /i:migapp.xml /i:miguser.xml /i:blockwallpaper.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/blockwallpaper">
        <component type="Documents" context="User">
            <displayName>Block Wallpaper and Themes</displayName>
            <role role="Data">
                <rules>
                    <!-- Blocks wallpaper and themes (which include wallpaper) from migrating from Vista/7 -->
                    <unconditionalExclude>
                        <objectSet>
                            <pattern type="Registry">HKCU\Control Panel\Desktop [WallPaper]</pattern>
                            <pattern type="Registry">HKCU\Software\Microsoft\Windows\CurrentVersion\Themes\* [*]</pattern>
                        </objectSet>
                    </unconditionalExclude>
                </rules>
            </role>
        </component>
    </migration>

    And that's it. I hope you're happy with yourself, you made that nice user cry…

    Ned "they probably had LOLCat pictures, no big loss" Pyle