
May, 2011

  • USMT and U: Migrating only fresh domain profiles

    Hi folks, Ned here again. Frequently someone asks me how to make USMT 4.0 migrate only recently used domain user profiles. This might sound like a simple request for a USMT veteran, but there are some subtleties caused by processing rules and behaviors. Today I go through how this works, talk about pitfalls, and ultimately show how to solve the issue. Forearmed, you can solve quite a few more migration scenarios once you understand the rules and technique.

    The goal and initial results

    The plan here was to migrate only Cohowinery.com domain user profiles that had been accessed in the past three months, while skipping all local user profiles. This test computer had a variety of local and domain profiles, some of them very stale:

    image

    The USMT syntax used was similar to this:

    Scanstate.exe c:\store /uel:90 /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml /c /o

    So far so good. Then they restored the data:

    Loadstate c:\store /uel:90 /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml /c

    This failed and returned the following to the console:

    Starting the migration process

    Processing the settings store

    Selecting migration units

    Failed.

      Unable to create a local account because /lac was not specified

      See the log file for more information.

    LoadState return code: 14

    Looking in the loadstate.log they saw:

    The account 7-x64-rtm-01\admin is chosen for migration, but the target does not have account 7-X64-RTM-01\admin. See documentation on /lac, /lae, /ui, /ue and /uel options.

    Unable to create a local account because /lac was not specified[gle=0x00000091]

    What the? The "admin" user is a local account. Why was that migrated?

    image

    So they went back and examined the scanstate console and noticed something else:

    image

    Huh. All the domain users were migrating, even though BShirley and SDavis had not logged on in more than two years. They could add /LAC to stop the loadstate failure, but that wouldn't accomplish the goal.

    Understanding what happened

    USMT has complex rules around profile precedence and the /UI, /UEL, and /UE command-line switches. Despite the best efforts of our talented TechNet writer Susan, the syntax is inherently nutty. Let's get the rules straight here first:

    • /UI (User Include) always migrates a profile, regardless of /UE or /UEL. This looks simple until you learn that unless you also specify /UE:*, /UI migrates all profiles regardless of the arguments supplied (because every profile is implicitly included). You must always use /UE:* if using /UI.
    • /UE (User Exclude) blocks migrating profiles as expected, but it is always overridden by /UEL and /UI.
    • /UEL (User Exclude based on last Logon) migrates profiles based on "newer than" rules; either a date or a number of days. Meaning that if a user's NTUSER.DAT has been modified within the time or date, that profile will be included in migration. /UEL always supersedes /UE.
    • You don’t have to include /UI if you’re using /UEL or /UE. If you're only blocking one thing - say, blocking local profiles with /UE:%computername%\* - leaving /UI off is sufficient and less confusing.
    • All users are implicitly migrated. So not providing any switches gets everyone.
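
    For instance (my own sketch, not part of the original scenario): if the goal were simply "migrate only Cohowinery domain profiles, regardless of age", the /UI rule means you must pair it with /UE:*, something like:

    Scanstate.exe c:\store /ue:* /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml /c /o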

    Returning to the scenario above, we now know what happened:

    1. /UI was set for all domain users, meaning all domain users will migrate despite /UEL.
    2. /UEL would have blocked local users from migrating, but those accounts had logged on recently.
    3. Because the local users were included and (naturally) did not exist on the destination computer, loadstate required the /LAC switch to recreate them.

    One more insidious point: running scanstate on a computer - even with profile filters specified, and even if you cancel scanstate before the examination phase completes - loads and unloads every profile. That means once you run scanstate even a single time, /UEL no longer works, because every NTUSER.DAT has just been modified to "right now".

    When testing /UEL, be sure to keep one of those handy "file date modification" utilities close. I use an internal one here called FILEDATE.EXE, but there are a zillion similar freebies on the internet (often with the same name). All you need to do is change the modified date on a given NTUSER.DAT to make it "stale" to USMT.

    image
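
    If you don't have a file date tool handy, a one-liner in PowerShell does the same thing (a sketch of mine; the path and the number of days are just examples, and the profile must not be loaded or the file will be locked):

    (Get-Item 'C:\Users\BShirley\NTUSER.DAT' -Force).LastWriteTime = (Get-Date).AddDays(-400)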

    Making it work

    You know how I like to ramble on about how things work and why they do what they do. Many readers just want the fix so they can get back to FaceBook YouTube work. No one says you have to use the same command-line on your scanstate and loadstate. In this case, that's the solution:

    Step 1

    Scanstate the source computer using only /UEL. This meets the goal of gathering only current profiles. It may catch some local user profiles, but a local user is unlikely to have much data to migrate. It's also unlikely that a local user is in regular use in a domain environment, meaning /UEL probably excludes it anyway. For example:

    Scanstate c:\store /uel:90 /i:migdocs.xml /i:migapp.xml

    image

    Even in my example where a local user was gathered above, it added only 25MB to each store when using a Windows 7 source computer (due to the default Windows Mail files created for all users). If I had used a hardlink migration, it wouldn't even have been that. In reality, my local profiles were quite stale as I had not used them since creating the computer - I had to log on to them to make them "fresh" for my example. :)

    Step 2

    Loadstate the destination computer using /UI and /UE. This prevents the restore of any local profiles captured earlier. Since I only catch "fresh" profiles in the scanstate, there's no need to provide /UEL here. For example:

    Loadstate c:\store /ue:* /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml

    image

    With that I finally meet the goal of only migrating domain users that have logged on within the past 90 days.

    Note: If the source and destination computer names are identical (perhaps a wipe and load scenario), you could alternatively use:

                    Loadstate c:\store /ue:%computername%\* /i:migdocs.xml /i:migapp.xml

    Musings

    To wrap up the USMT rules: nothing means everything, something means everything except when it means nothing, and sometimes often means never. Simple!

    Other notes:

    • More detail and examples on all of this here:

    http://technet.microsoft.com/en-us/library/dd560781(v=WS.10).aspx
    http://technet.microsoft.com/en-us/library/dd560804(v=WS.10).aspx

    • I used /C a few times for ease of repro. Microsoft still recommends using a config.xml file with error handling rules rather than arbitrarily bypassing non-fatal errors with /c.

    • There is a paradox Easter egg in my examples. Whoever points it out in the Comments first gets the "AskDS Silverback Alpha Geek Crown", currently worn by Darkseid64.

    Until next time.

    Ned "U-Haul" Pyle

  • Friday Mail Sack: Goat Riding Bambino Edition

    Hi folks, Ned here again. I’m trying to get back into the swing of having a mail sack every week but they can be pretty time consuming to write (hey, all this wit comes at a price!) so I am experimenting with making them a little shorter. This week we talk AD PowerShell secrets, USMT and Profile scalability, a little ADUC and DFSR, and some other random awesomeness.

    Question

    Can you explain how the AD PowerShell cmdlet Get-ADComputer gets IP information? (ex: Get-ADComputer -filter * -Properties IPv4Address). Properties are always AD attributes, but I cannot find an IPv4Address attribute on any computer object, and even after I removed the A records from DNS I still get back the right IP address for each computer.

    Answer

    That’s an excellent question and you were on the right track. This is what AD PowerShell refers to as an ‘extendedAttribute’ internally, but what a human might call a ‘calculated value’. AD PowerShell special-cases a few useful object properties that don’t exist in AD by using other LDAP attributes that do exist, and then uses that known data to query for the rest. In this case, the dnsHostName attribute is looked up normally, then a DNS request is sent with that entry to get the IP address.

    Even if you removed the A record and restarted DNS, you could still be returning the DNS entry from your own cache. Make sure you flush DNS locally where you are running PowerShell or it will continue to “work”.

    To demonstrate, here I run this the first time:

    clip_image002

    Which queries DNS right after the powershell.exe contacts the DC for the other info (all that buried under SSL here, naturally):

    clip_image002[4]

    Then I run the identical command again – note that there is no DNS request or response this time as I’m using cached info.

    clip_image002[6]

    It still tells me the IP address. Now I delete the A record and restart the DNS service, then flush the DNS cache locally where I am running PowerShell, and run the same PowerShell command:

    clip_image002[8]

    Voila! I have broken it. :)
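
    If you want to walk through the same repro without the screenshots, here is a rough sketch (the computer name DC01 is just a placeholder):

    Get-ADComputer DC01 -Properties IPv4Address | Select-Object Name,DNSHostName,IPv4Address
    ipconfig /flushdns
    Get-ADComputer DC01 -Properties IPv4Address | Select-Object Name,DNSHostName,IPv4Address

    With the A record deleted on the DNS server and the local cache flushed, the second call comes back with an empty IPv4Address.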

    Question

    Is there a limit on the number of profiles that USMT 4.0 can migrate? 3.01 used to have problems with many (20+) profiles, regardless of their size.

    Answer

    Updated with new facts and fun, Sept 1, 2011

    Yes and no. There is no real limit, but depending on the quantity of profiles and their contents, combined with system resources on the destination computer, you can run into issues. If possible you should use hardlink migration, as that is as fast as H… well, it’s really fast.

    To demonstrate (and to show erstwhile USMT admins a quick and dirty way to create some stress test profiles):

    1. I create 100 test users:

    image

    image
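
    In case you want to reproduce this yourself, a quick and dirty sketch from a PowerShell prompt (the user names and password are arbitrary examples of mine):

    1..100 | ForEach-Object { net user "usmtuser$_" 'P@ssw0rd1' /add /domain }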

    2. I log them all on and create/load their profiles, using PSEXEC.EXE:

    image

    image
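
    Something along these lines works for the profile-loading step (a sketch; by default PsExec loads the profile of the account you pass with -u, so running a throwaway command under each account is enough - the domain name is a placeholder):

    1..100 | ForEach-Object { psexec.exe -accepteula -u "CONTOSO\usmtuser$_" -p 'P@ssw0rd1' cmd.exe /c exit }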

    3. Copy a few different files into each profile. I suggest using a tool that creates random files with random contents. In my case I added a half dozen 10MB files to each profile’s My Documents folder. You can’t use the same files in each profile, as USMT is smart enough to reuse them and you will not get the real user experience.
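
    A rough way to generate those unique files (my own sketch; it assumes the XP-style profile paths and the test user names from the earlier step, and needs to run elevated):

    $rng = New-Object System.Random
    1..100 | ForEach-Object {
        $docs = "C:\Documents and Settings\usmtuser$_\My Documents"
        1..6 | ForEach-Object {
            # each file gets ~10MB of random bytes so USMT cannot deduplicate them
            $bytes = New-Object byte[] (10MB)
            $rng.NextBytes($bytes)
            [System.IO.File]::WriteAllBytes("$docs\random$_.bin", $bytes)
        }
    }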

    4. I run the harshest, slowest possible migration I can, where USMT writes to a compressed store on a remote file share, with AES_256 encryption, from an x86 Windows 7 computer with only 768MB of RAM, while cranking all logging to the max:

    image
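
    A command along these lines exercises all of that (the server, share, and key are placeholders of mine; /v:13 is the maximum logging level):

    scanstate.exe \\server\usmtshare\store /i:migdocs.xml /i:migapp.xml /encrypt:AES_256 /key:"SomePassphrase" /o /c /v:13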

    This (amazingly, if you ever used USMT 3.01) takes only 15 minutes and completes without errors. Loadstate memory and CPU isn’t very stressful (in one test, I did this with an XP computer that had only 256MB of RAM, using 3DES encryption).

    5. I restore them all to another computer – here’s the key: you need plenty of RAM on your destination Windows 7 computer. If you have 100 profiles that all have different contents, our experience shows that 4GB of RAM is required. Otherwise you can run out of OS resources and receive the error “Close programs to prevent information loss. Your computer is low on memory. Save your files and close your programs: USMT: Loadstate”. More on this later.

    image

    This takes about 30 minutes and there are no issues as long as you have the RAM.

    image

    6. I bask in the turbulence of my magnificence.

    If you do run into memory issues (so far we’ve only seen it with one customer since USMT 4.0 released more than two years ago), you have a few options:

    a. Validate your scanstate/loadstate rules – despite what you may think, you might be gathering all profiles and not just fresh ones. Review: http://blogs.technet.com/b/askds/archive/2011/05/05/usmt-and-u-migrating-only-fresh-domain-profiles.aspx. Hopefully that cuts you down to way fewer than 100 per machine. Read that post carefully, as there are some serious gotchas: for example, once you run scanstate on a computer, all profiles are made fresh for any subsequent scanstate runs. The odds that all 100+ profiles are actually active are pretty slim.

    b. Get rid of old XP profiles with DELPROF before using USMT at all. This is safer than /UEL because, as I mentioned, once you run scanstate that’s it – it has to work perfectly on the first try, as all profiles are then “fresh”. (On Vista+ you instead use http://support.microsoft.com/kb/940017, as I’m sure you remember.)

    c. Get more RAM.

    Question

    Is it possible in DSA.MSC to have the Find: Users, Contacts, and Groups default to finding computers or include computers with the user, contacts, and groups? Is there a better way to search for computers?

    Answer

    The Find tool does not provide for user customization – even starting it over without closing DSA.MSC loses your last setting. ADUC is a cruddy old tool, DSAC.EXE is the (much more flexible) replacement and it will do what you want for remembering settings.

    There are a few zillion other ways to find computers as well. Not knowing what you are trying to do, I can’t recommend one over the other; but there’s DSQUERY.EXE, CSVDE.EXE, many excellent free third-party tools, etc.
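
    For example, something as simple as this finds every computer whose name starts with SALES (the name filter is just an example):

    dsquery computer -name SALES* -limit 0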

    Question

    If I delete or disable the outbound connection from a writable DFSR replicated folder, I get a warning that the “topology is not fully connected”. Which is good.

    image

    But if that outbound connection is for a read-only replica, no errors. Is this right?

    Answer

    It’s an oversight on our part. While technically nothing bad will happen in this case (as read-only servers - of course - do not replicate outbound), you should get this message in all cases (There are also 6020 and 6022 DFSR warning events you can use to track this condition). A read-only can be converted to a read-write, and you will definitely want an outbound connection for that.

    We’re looking into this; in the meantime, just don’t do it anywhere. :)

    Other Things

    Just to make myself feel better: “Little roller up along first. Behind the bag! It gets through Buckner!”

    • If you have parents, siblings, children away at college, nephews, cousins, grandparents, or friends, we have the newest weapon in the war on:
      1. Malware
      2. Your time monopolized as free tech support

    Yes, it’s the all new, all web Microsoft Safety Scanner. It even has a gigantic button, so you know it’s gotta be good. Make those noobs mash it and tell you if there are any problems while you go make a sandwich.

    • Finally: thank goodness my wife hasn’t caught this craze yet. She has never met a shoe she didn’t buy.

    Have a nice weekend folks.

    Ned “86 years between championships? That’s nothing… try 103, you big babies!” Pyle

  • Friday Mail Sack: “Who am I kidding, more like Monthly” Edition

    Hi folks, Ned here again with another tri-weekly Friday Mail Sack. This time we talk service auditing, trust creation, certificates and USMT, SYSVOL migration with RODCs, DFS stuff, RPC and firewalls, virtualization, and the zombie corpse of FRS.

    Shoot it in the head!

    Question

    We’re setting up a trust between two domains in two forests. When we type in the name of the domain, we are immediately prompted for credentials in that domain with the message “to create this trust relationship, you must supply user credentials for the specified domain”. We can enter any credentials here from that domain and it will work – some nobody user works, never mind an admin:

    image

    We are later prompted for administrative credentials like usual when finalizing the trust. Everything works, it’s just weird.

    Answer

    Anyone can reproduce this issue by removing the NullSessionPipes registry entry for LSARPC. NullSessionPipes – along with RestrictNullSessAccess - controls anonymous access to Named Pipes. Very legacy stuff. The list of default allowed protocols varies between OS and server role; for instance, a pure Windows Server 2008 R2 DC has a default list of:

    NETLOGON
    LSARPC
    SAMR

    You’ll find various security documents giving valid (or crazy) advice about messing with these settings but it boils down to “what do you need for your specific server, client, and application workloads to function?” If you get so secure that no one can work, you’ve gone too far.

    In this case, setting up a trust uses the LSARPC protocol to connect to a DC in the other domain and find out basic information about it. If you can’t connect to it anonymously for this “phone book” kind of directory info that dates back to NT, you get prompted for creds. Since the info is public knowledge in that domain, any user is adequate.

    These are often set through security policies and if you have this issue, look there first.

    image

    I’ve also seen it as part of a server image from someone who had too much time on their hands.

    Question

    DFSN is awesome. What is decidedly not awesome is when the requisite antivirus software absolutely kills client-side performance. What can loyal DFSN evangelists do (short of removing the AV or completely disabling network file scanning) on the client-side to prevent our users from suffering a dreaded antivirus performance hit when using DFS Namespaces?

    Answer

    Sort of a sideways approach, but if you are using Windows 7 clients then Offline Files might be an option. As an experiment with some test computers/users, you can configure:

    • Enable Transparent Caching
    • Configure Background Sync
    • Configure Slow-Link Mode

    You could make these computers work as if they are on a “slow network”, working primarily out of their Offline Files cache and trickle synchronizing their data back to the servers in the background continuously.

    I specifically call out Windows 7 as Vista doesn’t support all these features, and XP supports none of them. XP is also gross.

    Ultimately, you can only bandage things in this scenario. Whaling on your vendor (even if it’s us!) to improve performance is the only thing left. Like beer, they are the cause of - and solution to - all of life’s problems…

    Question

    I read your previous post here where you talked about how USMT 4.0 migrates computer certificates without private keys. Generally speaking this has not been an issue, as we have certificate auto-enrollment and the new computers get new valid certs. One application is having problems with these migrated invalid certs though and we need to block them from migrating, is that possible?

    Answer

    Yes. While this should be avoided if possible (a machine cert without a private key might still mean something useful to some strange application), it's simple to block computer certificate migration. Here is sample unconditional exclusion XML named skipmachinecerts.xml that you would run only with scanstate.exe (no need for loadstate to run it):

    scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml /i:skipmachinecerts.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/sampleskipcomputercerts">
     <component type="Documents" context="System">
      <displayName>SkipComputerCertMig</displayName>
      <role role="Data">
       <rules>
        <!-- This override XML prevents computer (not user) certificates from migrating. -->
        <!-- This should ONLY be used if machine certs with no private keys are causing issues -->
        <!-- Nice applications consider these certs invalid and computers request auto-enrollment -->
        <unconditionalExclude>
         <objectSet>
          <pattern type="Registry">HKLM\SOFTWARE\Microsoft\SystemCertificates\My\Certificates\*[*]</pattern>
         </objectSet>
        </unconditionalExclude>
       </rules>
      </role>
     </component>
    </migration>

    You should never block user certificate migration, as user certs have private keys; if users are securing data like EFS-encrypted files, you would be locking them out of their files. If there's no DRA, it would be permanent.

    Question

    What is the event, if any, that is triggered when we perform a D2 on an FRS non-SYSVOL replica set? Is it the same error message we get when we perform it on SYSVOL, but with the new replica set name inserted?

    Answer

    Ha! You wish it were that cool. You get these events (in this order – here I D2’ed just a single custom replica set and did not touch SYSVOL at all):

    clip_image002

    clip_image002[4]

    clip_image002[6]

    clip_image002[8]

    Some old docs also say you should get a 13565 when you BURFLAG a replica – but you do not unless it’s SYSVOL:

    clip_image002[10]

    “Oh, but this is a DC” you are saying. Ok. Here’s a member server getting D2’ed:

    1. 13520 like above
    2. 13553 like above
    3. 13554 like above
    4. Done.

    Question

    We have a server that is part of a simple DFS Namespace and Replication setup. Is there any issue with virtualizing a DFS server, shutting down the old host, and bringing the virtual one online? We would do this during a period of downtime, so data change would be minimal.

    Answer

    That’s pretty much the point of SCVMM so I can’t really say no, can I? :)

    http://technet.microsoft.com/en-us/library/cc764232.aspx

    The important thing (as always with P2V) is that you do a one-to-one change. You cannot have both servers alive at the same time. This is the risk with tools like disk2vhd.exe and other stuff on the internet, and why SCVMM is less risky – it ensures you don’t shoot yourself in the foot. Once the new DFS server looks like it’s working, destroy the old server so there is no chance it can come back up (format the drive – you got a complete bare-metal capable backup of it first. Right???). To the other servers it would just look like that server was rebooted and reappeared no worse for wear.

    Question

    We rolled back a DFSR SYSVOL migration (don’t ask). All the DCs rolled back fine except one – an RODC ended up in an inconsistent state. He is the only one that has entries under DFSR-LocalSettings and he is constantly switching between state 5 and 9.

    The event logs show:

    Log Name:      DFS Replication
    Source:        DFSR
    Date:          5/5/2011 9:00:00 AM
    Event ID:      6016
    Task Category: None
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      rodc1.contoso.com
    Description:
    The DFS Replication service failed to update configuration in Active Directory Domain Services. The service will retry this operation periodically.

    Additional Information:
    Object Category: msDFSR-LocalSettings
    Object DN: CN=DFSR-LocalSettings,CN=rodc1,OU=Domain Controllers,DC=contoso,DC=com
    Error: 2 (The system cannot find the file specified.)
    Domain Controller: writabledc1.contoso.com

    Polling Cycle: 60

    I’m not sure of the recommended way to clean it up.

    Answer

    Run on your PDC Emulator DC:

    DFSRMIG.EXE /DeleteRoDfsrMember <name of the rodc>

    Ensure that AD replication converges to the RODC. Then update the DFSR service with:

    DFSRDIAG.EXE POLLAD /mem:<name of the rodc>

    As you can see, we planned for this eventuality. :)

    Question

    Do you have docs on configuring Advanced Audit Policy granular object access for HIPAA, Sarbanes-Oxley, or other US regulatory acts?

    Answer

    Neither the HIPAA nor SOX acts make any specific mention of actual object access auditing settings in Windows or any OS - only that you must audit… stuff. Your customer should talk to whoever audits them to find out what their (arbitrary) requirements are so they satisfy the audit. There is an entire industry of “compliance” vendors out there that sell solutions and settings recommendations that vary greatly between companies. We even have one, although it wisely makes no mention of HIPAA or Sarbanes and then completely indemnifies itself by saying it’s totally up to the customer to determine the right settings and we have no opinion. I bet our lawyers had a crack at that one :-D.

    Question

    What is the best method for cleaning out the PreExisting folder? I've done quite a bit of searching, but most of the results are cleaning out the Conflict directory or recovering files from the Pre-Existing folder.

    Answer

    If you don’t care about the files anymore (I recommend you at least back them up), you can delete the files and the preexistingManifest.xml file. You don’t need to stop the service or anything, once initial sync is done DFSR no longer cares about those files either. :)

    Question

    When using the netsh.exe command to set the port range for dynamic RPC, what is the minimum number of ports that you recommend be provisioned? We need to set this value for application servers in an Extranet and want to make sure we provision enough ports but satisfy our firewall folks.

    Answer

    There’s no rule; it’s just as many as you find you need with testing. Our recommendation is not to mess with these if you are trying to lower the number of ports open in a firewall, and instead use IPSEC tunnels between computers – this means you only have to open a couple of ports and the traffic is protected regardless. Opening “only 500” ports is not much better than the default of many thousands. Go too low and you will cause mysterious random outages that take forever to figure out.

    Barring that, I usually recommend first leaving the default and evaluating to see what the usage patterns are – then setting it to match, with maybe a +10% fudge factor for unexpected growth. Then document the heck out of it, because when you’re gone and someone else inherits that system, they are going to be fornicated when problems happen. No one will be expecting that sort of restriction.
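
    For reference, this is the netsh syntax involved (the values shown are just the Windows defaults, not a recommendation):

    netsh int ipv4 show dynamicport tcp
    netsh int ipv4 set dynamicport tcp start=49152 num=16384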

    Question

    It’s pretty easy to audit who is starting and stopping services in Windows Server 2003; I just examine the System event log for events 7035 and 7036, sourced to Service Control Manager. The User field shows who stopped and started a service.

    But Windows Server 2008 and later don’t do this. Is there a way to audit their services?

    Answer

    Yes. You will need to decide which services you want to audit as there is no simple way to turn it all on for everything, though. You probably only want to know about some specific ones anyway. Who cares that Ned restarted the Zune Wireless service on his laptop?

    1. Log on as an administrator; make sure to use an elevated CMD prompt if UAC is on.
    2. Run on the affected server:

    SC QUERY > Svcs.txt

    3. Examine the svcs.txt for your service “DISPLAY_NAME” that is being restarted.

    For example in my case, I looked for “Windows Time” (no quotes) and see:

    SERVICE_NAME: W32time
    DISPLAY_NAME: Windows Time
    TYPE : 20 WIN32_SHARE_PROCESS
    STATE : 4 RUNNING
    (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
    WIN32_EXIT_CODE : 0 (0x0)
    SERVICE_EXIT_CODE : 0 (0x0)
    CHECKPOINT : 0x0
    WAIT_HINT : 0x0

    4. Above the display name you will see the SERVICE_NAME. Note that for below.
    5. Run:

    SC SDSHOW <service name> > sd.txt

    Example:

    SC SDSHOW w32time > sd.txt

    6. Open this text file. It will contain SDDL data similar (not necessarily the same as below, do not re-use my example) to this:

    D:(A;;CCLCSWLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSD
    RCWDWO;;;SO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)

    7. Copy the following and add it to the end of the SDDL string in that text file:

    (AU;SAFA;RPWPDT;;;WD)

    So if you had used my example SDDL data and then added the above string, you now
    have all one line:

    D:(A;;CCLCSWLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSD
    RCWDWO;;;SO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)S:(AU;SAFA;RPWPDT;;;WD)

    Note that there is an S: that separates the DACL and SACL sections. If your exported SDDL did not contain an S: you must prepend it to your SACL entry like so:

    S:(AU;SAFA;RPWPDT;;;WD)

    8. Copy and paste that whole new string and run:

    SC SDSET <name of the service> <the big new string>

    Example:

    SC SDSET w32time D:(A;;CCLCSWLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSD
    RCWDWO;;;SO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)S:(AU;SAFA;RPWPDT;;;WD)

    Note: What we are doing is adding an audit SACL to the service so that when the previous auditing steps I gave you are used, the restart of the service will be audited and we’ll know who did what. Remember that if there was no auditing in place on the service already (after the "S:") then you will need to add that to the string.

    9. Enable auditing for the subcategories "Other Object Access Events" (success) and "Handle Manipulation" (success).
    10. Look for event 4656. Object Server will be "SC Manager", Object Name will be the name of the service, and Access Request Information will show the operation (ex: "Stop the Service").

    image
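
    If you prefer to script step 9, auditpol.exe can enable those subcategories from an elevated prompt:

    auditpol.exe /set /subcategory:"Other Object Access Events" /success:enable
    auditpol.exe /set /subcategory:"Handle Manipulation" /success:enable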

    Until next time.

    - Ned “yes, bwaamp is a technical term here” Pyle

  • Friday Mail Sack: Tuesday To You Edition

    Hi folks, Ned here again. It’s a long weekend here in the United States, so today I tell myself about a domain join issue one can only see in Win7/R2 or later, what USMT hard link migrations really do, how to poke LDAP in legacy PowerShell, time zone migration, and an emerging issue for which we need your feedback.

    Question

    None of our Windows Server 2008 R2 or Windows 7 computers can join the domain – they all show error:

    “The following error occurred attempting to join the domain "contoso.com": The service cannot be started, either because it is disabled or because it has no enabled devices associated with it.”

    image

    Windows Vista, Windows Server 2008, and older operating systems join without issue in the exact same domain while using the same user credentials.

    Answer

    Not a very actionable error – which service do you mean, Windows!? If you look at the System event log there are no errors or mention of broken services. Fortunately, any domain join operations are logged in another spot – %systemroot%\debug\netsetup.log. If you crack open that log and look for references to “service” you find:

    05/27/2011 16:00:39:403 Calling NetpQueryService to get Netlogon service state.
    05/27/2011 16:00:39:403 NetpJoinDomainLocal: NetpQueryService returned: 0x0.
    05/27/2011 16:00:39:434 NetpSetLsaPrimaryDomain: for 'CONTOSO' status: 0x0
    05/27/2011 16:00:39:434 NetpJoinDomainLocal: status of setting LSA pri. domain: 0x0
    05/27/2011 16:00:39:434 NetpManageLocalGroupsForJoin: Adding groups for new domain, removing groups from old domain, if any.
    05/27/2011 16:00:39:434 NetpManageLocalGroups: Populating list of account SIDs.
    05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: status of modifying groups related to domain 'CONTOSO' to local groups: 0x0
    05/27/2011 16:00:39:465 NetpManageLocalGroupsForJoin: INFO: No old domain groups to process.
    05/27/2011 16:00:39:465 NetpJoinDomainLocal: Status of managing local groups: 0x0
    05/27/2011 16:00:39:637 NetpJoinDomainLocal: status of setting ComputerNamePhysicalDnsDomain to 'contoso.com': 0x0
    05/27/2011 16:00:39:637 NetpJoinDomainLocal: Controlling services and setting service start type.
    05/27/2011 16:00:39:637 NetpControlServices: start service 'NETLOGON' failed: 0x422
    05/27/2011 16:00:39:637 NetpJoinDomainLocal: initiating a rollback due to earlier errors

    Aha – the Netlogon service. Without that service running, you cannot join a domain. What’s 0x422?

    c:\>err.exe 0x422

    ERROR_SERVICE_DISABLED winerror.h
    # The service cannot be started, either because it is
    # disabled or because it has no enabled devices associated
    # with it.

    Nice, that’s our guy. It appears that the service was disabled and the join process is trying to start it. And it almost worked too – if you run services.msc, it will say that Netlogon is set to “Automatic” (and if you look at another machine you have not yet tried to join, it is set to “Disabled” instead of the default “Manual”).

    The problem here is that the join code is only setting the start state through direct registry edits instead of using Service Control Manager. This is necessary in Win7/R2 because we now always go through the offline domain join code (even when online) and for reasons that I can’t explain without showing you our source code, we can’t talk to SCM while we’re in the boot path or we can have hung startups. So the offline code set the start type correctly and the next boot up would have joined successfully – but since the service is still disabled according to SCM, you cannot start it. It’s one of those “it hurts if I do this” type issues.

    And why did the older operating systems work? They don’t support offline domain join and are allowed to talk to the Service Control Manager whenever they like. So they tell him to set the Netlogon service start type, then tell him to start the service – and he does.

    The lesson here is that a service set to Manual by default should not be set to disabled without a good reason. It’s not like it’s going to accidentally start in either case, nor will anyone without permissions be able to start it. You are just putting a second lock on the bank vault. It’s already safe enough.
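
    If you have inherited images where Netlogon is already disabled, one way to put it back to the default Manual start type before attempting the join is plain old SC.EXE (my suggestion, not part of the original fix; note the required space after start=):

    sc.exe config netlogon start= demand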

    Question

    USMT is always going on about hard link migrations. I’ve used them and those migrations are fast… but what the heck is it and why do I care?

    Answer

    A hard link is simply a way for NTFS to point to the same file from multiple spots, always on the same volume. It has nothing to do with USMT (who is just a customer). Instead of making many copies of a file, you are making copies of how you get to the file. The file itself only exists once. Any changes to the file through one path or another are always reflected on the same physical file on the disk. This means that when USMT is storing a hard link “copy” of a file it is just telling NTFS to make another pointer to the same file data and is not copying anything – which makes it wicked fast.

    Let’s say I have a file like so:

    c:\hithere\bwaamp.txt

    If I open it up I see:

    image

    Really though, it’s NTFS pointing to some file data with some metadata that tells you the name and path. Now I will use FSUTIL.EXE to create a hard link:

    C:\>fsutil.exe hardlink create c:\someotherplace\bwaamp.txt c:\hithere\bwaamp.txt
    Hardlink created for c:\someotherplace\bwaamp.txt <<===>> c:\hithere\bwaamp.txt

    I can use that other path to open the same data (it helps if you don’t think of these as files):

    image

    I can even create a hard link where the file name is not the same (remember – we’re pointing to file data and giving the user some friendly metadata):

    C:\>fsutil.exe hardlink create c:\yayntfs\sneaky!.txt c:\hithere\bwaamp.txt
    Hardlink created for c:\yayntfs\sneaky!.txt <<===>> c:\hithere\bwaamp.txt

    And it still goes to the same spot.

    image

    What if I edit this new “sneaky!.txt” file and then open the original “bwaamp.txt”?

    image

    Perhaps a terrible Visio diagram will help:

    hardlink

    When you delete one of these representations of the file, you are actually deleting the hard link. When the last one is deleted, you are deleting the actual file data.

    It’s magic, smoke and mirrors, hoodoo. If you want a more disk-oriented (aka: yaaaaaaawwwwnnn) explanation, check out this article. Rob and Joseph have never met a File Record Segment Header they didn’t like. I bet they are a real hit at parties…

    Question

    How can I use PowerShell to detect if a specific DC is reachable via LDAP? Don’t say AD PowerShell, this environment doesn’t have Windows 7 or 2008 R2 yet! :-)

    Answer

    One way is going straight to .NET and using the DirectoryServices namespace:

    New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=yourdomaindn")

    For example:

    image
    Yay!

    image
    Boo!

    Returning anything but success is a problem you can then evaluate.
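
    A small sketch of turning that into an explicit pass/fail check (the DC name and domain DN are placeholders):

    try {
        $de = New-Object System.DirectoryServices.DirectoryEntry("LDAP://yourdc:389/dc=contoso,dc=com")
        $de.RefreshCache()   # forces the bind; throws if the DC is unreachable over LDAP
        Write-Host "LDAP bind succeeded"
    } catch {
        Write-Host "LDAP bind failed: $($_.Exception.Message)"
    }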

    As always, I welcome more in the Comments. I suspect people have a variety of techniques (third parties, WMI LDAP provider, and so on).

    Question

    Is USMT supposed to migrate the current time zone selection?

    Answer

    Nope. Whenever you use timedate.cpl, you are updating this registry key:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation

    Windows XP has very different data in that key when compared to Vista and Windows 7:

    Windows XP

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
    "ActiveTimeBias"=dword:000000f0
    "Bias"=dword:0000012c
    "DaylightBias"=dword:ffffffc4
    "DaylightName"="Eastern Daylight Time"
    "DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
    "StandardBias"=dword:00000000
    "StandardName"="Eastern Standard Time"
    "StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00

    Windows 7

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
    "ActiveTimeBias"=dword:000000f0
    "Bias"=dword:0000012c
    "DaylightBias"=dword:ffffffc4
    "DaylightName"="@tzres.dll,-111"
    "DaylightStart"=hex:00,00,03,00,02,00,02,00,00,00,00,00,00,00,00,00
    "DynamicDaylightTimeDisabled"=dword:00000000
    "StandardBias"=dword:00000000
    "StandardName"="@tzres.dll,-112"
    "StandardStart"=hex:00,00,0b,00,01,00,02,00,00,00,00,00,00,00,00,00
    "TimeZoneKeyName"="Eastern Standard Time"

    The developers from the Time team simply didn’t want USMT to assume anything as they knew there were significant version differences; to do so would have taken an expensive USMT plugin DLL for a task that would likely be redundant to most customer imaging techniques. There are manifests (such as "INTERNATIONAL-TIMEZONES-DL.MAN") that migrate any additional custom time zones to the up-level computers, but again, this does not include the currently specified time zone. Not even when migrating from Win7 to Win7.

    But that doesn’t mean that you are out of luck. Come on, this is me! :-)

    To migrate the current zone setting from XP to any OS you have the following options:

    To migrate the current zone setting from Vista to Vista, Vista to 7, or 7 to 7, you have the following options:

    • Any of the three mentioned above for XP
    • Use this sample USMT custom XML (making sure that nothing else has changed since this blog post and you reading it). Woo, with fancy OS detection code!

    <?xml version="1.0" encoding="utf-8" ?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/currenttimezonesample">
     <component type="Application" context="System">
      <displayName>Copy the currently selected timezone as long as Vista or later OS</displayName>
      <role role="Settings">
       <!-- Check as this is only valid for up-level OS >= Windows Vista -->
       <detects>
        <detect>
         <condition>MigXmlHelper.IsOSLaterThan("NT", "6.0.0.0")</condition>
        </detect>
       </detects>
       <rules>
        <include>
         <objectSet>
          <pattern type="Registry">HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation\* [*]</pattern>
         </objectSet>
        </include>
       </rules>
      </role>
     </component>
    </migration>
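
    Assuming you save the sample as currenttimezone.xml (the file name is mine), include it on both ends along with your other XML files, for example:

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /i:currenttimezone.xml

    loadstate c:\store /i:migdocs.xml /i:migapp.xml /i:currenttimezone.xml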

    Question for our readers

    We’ve had a number of cases come in this week with the logon failure:

    Logon Process Initialization Error
    Interactive logon process initialization has failed.
    Please consult the Event Logs for more details.

    You may also find an application event if you connect remotely to the computer (interactive logon is impossible at this point):

    ID: 4005
    Source: Microsoft-Windows-Winlogon
    Version: 6.0
    Message: The Windows logon process has unexpectedly terminated.

    In the cases we’ve seen this week, the problem was seen after restoring a backup when using a specific third party backup product. The backup was restored to either Hyper-V or VMware guests (but this may be coincidental). After the restore large portions of the registry were missing and most of our recovery tools (SFC, Recovery Console, diskpart, etc.) would not function. If you have seen this, please email us with the backup product and version you are using. We need to contact this vendor and get this fixed, and your evidence will help. I can’t mention the suspected company name here yet, as if we’re wrong I’d be creating a legal firestorm, but if all the private emails say the same company we’ll have enough justification for them to examine this problem and fix it.

    ------------

    Have a safe weekend, and take a moment to think of what Memorial Day really means besides grilling, racing, and a day off.

    Ned “I bet SGrinker has the bratwurst hookup” Pyle

  • Vista SP1 End of Support Reminder

    Hi folks, Ned here again with a quick public service announcement:

    Are you running Windows Vista Service Pack 1? SP1 support ends on July 12, 2011 - that's barely two months from now. Once that happens, computers running Vista SP1 do not get further security updates, MS Support will insist on SP2 installation before further troubleshooting, and you are generally in a world of hurt. Service Pack 2 has been out for two years; it's time to make it happen, IT pros.

    Obviously, an even better option is installing Service Pack 7.

    Chop chop!

    - Ned "I still run NT 4.0 SP3 just to be a jerk" Pyle

  • USMT and Invalid User Mapping

    Hi folks, Ned here again. Today I discuss an insidious issue you can see with USMT when you have previously migrated computers between forests with ADMT. This scenario is rare enough that I didn't bother with a KB article - not to mention it's easier to explain with pictures and "wit". If you get into it though, you will need the right answer, as there are some wrong answers floating around the Internet.

    What you see

    You will only have issues here during the loadstate operation, dropping data onto a fresh new computer. At first blush, the error will be generic and useless:

    Error while performing migration

    Failed.

      Software malfunction or Unknown exception

      See the log file for more information.

     

    LoadState return code: 26

    But if you look at the loadstate.log (having made sure to provide /v:5 on the command-line), you see:

    Info [0x000000] Apply started at 4/18/2011 21:17:39
    Error [0x000000] Invalid user mapping: BYAIR\testuser1 -> S-1-5-21-4187903345-2522736461-4139203548-1110
    Info [0x080000] Received request to restart the explorer process from context: SYSTEM
    Info [0x000000] Leaving MigApply method
    Error [0x000000] Error while performing migration
    Warning [0x000000] Internal error 2 was translated to a generic error
    Info [0x000000] Failed.[gle=0x00000091]
    Info [0x000000] Software malfunction or Unknown exception[gle=0x00000091]

    What it means

    An incomplete migration with ADMT causes this issue. Sometime in the past - perhaps distantly enough that it was before your time - this computer and user lived in another AD forest. In my examples below there was once an Adatum.com (ADATUM) domain and currently there's a blueyonderairlines.com (BYAIR) domain. I demonstrate this going from Windows XP to Windows 7, as most people are in that boat currently.

    Let's go backwards through the sands of time and see where I went wrong…

    1. I migrate users with ADMT, making sure to enable SIDHistory migration (this is on by default). See here how my TestUser1 account has its old ADATUM SID added to the sIDHistory attribute after migration between the forests:

    image

    image

    image

    2. I then migrate the users’ computers with ADMT and do not translate profiles during the migration (below should have been checked ON, but was not):

    image

    3. This means that the migrated users’ profilelist registry entries do not translate during migration. Note below how they are still the old domain SID:

    image

    4. Then the user logs on to his migrated computer with his migrated user and ends up with a new profile, as expected. Note how the old profile is still here and the user's new profile is too:

    image

    5. The user – probably confused at this point – calls the help desk. “Where are all my files?” The files are still in the old folder and because I performed sidHistory migration, the user still has full access to that folder and its contents. The user or help desk might copy files over into his new profile (probably just “My Documents to My Documents”). Or they might just leave the files there and the user knows to go access older files out of their old folder. It’s just a folder to them, they don’t understand profiles after all. It’s highly unlikely a user was still accessing this old location from here as it is obfuscated in Windows Explorer, but unlikely isn’t impossible - they could have been shown how by a misguided help desk staff who should have copied the files and alerted me to the real problem.

    At this point, the “good” profile is “c:\documents and settings\testuser1.byair”. The “bad” one with data in it from the ADMT migration is the “c:\documents and settings\testuser1”. Again, someone may have reconciled these file contents by hand or they may not.

    image

    image

    6. Years go by and I retire to Barbados. Along come you and your project to migrate old XP computers to the new Win7 machines. The ADMT migration is ancient history and that old domain probably doesn’t even exist anymore.

    7. USMT scanstate.exe gathers the XP computer and gets all the profiles. If you watch this closely, you see that there are reconciliation problems:

    image

    8. You run USMT loadstate.exe on the Win7 computer, which has none of these user profiles at all. Because the user-to-SID mapping is invalid (no user has that SID set directly anymore; it only exists as a sidHistory belonging to another profilelist SID entry for a “different” user), we finally get the error.

    Uggh. Technically speaking, nothing is broken here: USMT is working by design and ADMT was following my (bad) instructions.

    What you can do

    There are three realistic solutions to this, one overkill solution, and one that you should never use, ever, which I include only for completeness and to stop people from making a mistake through ignorance - five options in all. All SIDs provided below are samples, bien sûr.

    • Option 1: You tell scanstate to ignore the old domain by providing the SID of the new domain during the gather. From my example SIDs and domain names above, you know that the old dead domain SID was S-1-5-21-4187903345-2522736461-4139203548 and the new one is S-1-5-21-1120787987-3120651664-2685341087. You can also use PSGETSID to look up the domain SID. You could choose only to migrate the data from the current domain using this:

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /o /c /v:5 /ue:* /ui:S-1-5-21-1120787987-3120651664-2685341087* /ui:%computername%\*

    image

    That quickly gathers only the current “good domain” user profiles and local user profiles, skipping the old “bad domain” profiles. If you have multiple “good” domains you might need to feed it multiple /ui of those you want to include.

    • Option 2: Conversely, you skip the old domain (this is harder to predict – what if there were many ADMT migrations and I had collapsed various domains? You’d have to go find out all the “bad” SIDs, and this would probably be a trial and error process where you kept discovering old SIDs during your migration). Note this time the syntax does not include /ui.

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /o /c /v:5 /ue:S-1-5-21-4187903345-2522736461-4139203548*

    image

    Note: The issue remains with both option 1 and 2 that the old so-called "bad domain” profile might still contain data, as I outlined in the "What it Means" section. You must do some investigation to find out if these old profiles really are just garbage and can be freely abandoned, or if you need to also gather that data. On the plus side, you are not actually deleting data – you are only opting out of migrating it. The old data is still there and if you’re following the best practices, it’s contained in the backup taken before the migration. You let your users decide if the data is valuable by them asking for it back or not. The backup can be stored locally on that new computer for safekeeping indefinitely.

    • Option 3: You can remove the invalid profilelist registry entries beforehand using a script (a rough sketch of such a script appears after the options, just before Final thoughts). The net is that this has the same risk as above (it will prevent USMT from gathering the file data with that supposedly abandoned profile) and is probably more work (the customer or MCS has to write the script). Again, that profile may not actually be abandoned from a data perspective, only from a user logon perspective. However, if your investigation shows that these profiles truly are abandoned and no longer contain useful data, this is an option.
    • Option 4 (overkill): You can extend the above three options with a custom XML file that gathers up all the important user shell folders (My Documents, Favorites, and Desktop are likely to hold any meaningful user files in an XP profile) – effectively a catchall override that will not leave data behind. This drops the files into a central spot, maintaining relative folder paths so that you can go rescue data as needed. Once you think users have had enough time, you can get rid of the folder to reclaim disk space, or leave it there indefinitely. The main downsides to this technique are that it uses extra disk space, it’s complicated, it makes the migration a lot slower, and it needs some ACLs to ensure you aren’t exposing user data to everyone on the computer (I’d recommend setting it to Administrators and Help Desk-only; if the user has “missing data” problems they call the help desk and you retrieve it. If no one complains for 3 months you can probably just delete it). The main advantage to this is that it leans towards caution and prudence; always a good plan when it comes to a user's data.

    That being said, this option 4 is probably overkill, especially if you are getting backups before scanstate like we recommend. If you do the legwork to find out the status of all these old profiles, find out how people dealt with the new profiles after migration, and determine they truly aren’t being used, I’d not bother with this and stick with Options 1, 2, or 3 above.

    Update: Octavian makes the excellent point that Favorites and Desktop are likely filled with valuable user goo. I updated the sample.

    Sample XML named getdocsandset.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/coallescemydocumentstoasafeplace">
     <component type="Documents" context="System">
      <displayName>copy all the useful shell folders to a hidden folder on destination</displayName>
      <role role="Data">
       <rules>
        <!-- All copied shell folders saved so admins can decide what to do with the data later -->
        <include>
         <objectSet>
          <pattern type="File">C:\documents and settings\*\my documents\* [*]</pattern>
         </objectSet>
        </include>
        <!-- Copies shell folders to a special folder for admin attention or destruction. -->
        <locationModify script="MigXmlHelper.RelativeMove('C:\documents and settings','c:\extramydocsdata')">
         <objectSet>
          <pattern type="File">C:\documents and settings\*\my documents\* [*]</pattern>
          <pattern type="File">C:\documents and settings\*\desktop\* [*]</pattern>
          <pattern type="File">C:\documents and settings\*\favorites\* [*]</pattern>
         </objectSet>
        </locationModify>
       </rules>
      </role>
     </component>
    </migration>

     

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /o /c /v:5 /ue:* /ui:S-1-5-21-1120787987-3120651664-2685341087* /ui:%computername%\* /i:getdocsandset.xml

     

    loadstate c:\store /i:migdocs.xml /i:migapp.xml /c /v:5 /i:getdocsandset.xml

    Just to be sure in this case, you now have all the profiles from the source computer saved to a central location after migration:

    image

    You set permissions to Administrators-only and Hidden, mainly to stop users from mistakenly using this folder as live data. You can do that through a batch file like this sample:

    icacls.exe c:\extramydocsdata\ /grant administrators:(F) /inheritance:r

    icacls.exe c:\extramydocsdata\* /grant administrators:(F) /inheritance:e /t

    attrib.exe +H c:\extramydocsdata

    • Option 5 (DO NOT USE): Remove the sidHistory entry for all these migrated users in AD. This is categorically the worst and most dangerous solution, as you probably have thousands of users accessing files, databases, email, etc. with their SID history alone. I mention this option only to be complete; it is highly discouraged, as you could cause a massive outage that would be very difficult and time-consuming to repair (probably an authoritative restore of all users in the forest; a nightmare scenario for most customers).
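
    For Option 3, here is the kind of script I mean - a rough sketch of mine, not a supported tool - that lists ProfileList entries whose SID no longer resolves to an account, so you can review them before deleting anything:

    $plPath = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList'
    Get-ChildItem $plPath | ForEach-Object {
        $sid = $_.PSChildName
        try {
            # translate the SID to an account name; this throws if no such account exists
            $acct = (New-Object System.Security.Principal.SecurityIdentifier($sid)).Translate([System.Security.Principal.NTAccount])
            Write-Host "$sid -> $acct"
        } catch {
            Write-Host "$sid -> UNRESOLVED (candidate for cleanup)"
        }
    }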

    Final thoughts

    Migrating users and computers without translating profiles is usually the wrong plan. See the ADMT Migration Guide in section "Translate Local User Profiles". Even without USMT in the mix here, the users were unhappy campers after ADMT and Evil Twin Ned finished with them.

    Until next time.

    Ned "I like MT" Pyle

  • Speaking in Ciphers and other Enigmatic tongues…

    Hi! Jim here again to talk to you about Cryptographic Algorithms, SChannel and other bits of wonderment. So, your company purchases this new super awesome vulnerability and compliance management software suite, and they just ran a scan on your Windows Server 2008 domain controllers and lo! The software reports back that you have weak ciphers enabled, highlighted in RED, flashing, with that "you have failed" font, and including a link to the following Microsoft documentation –

    KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll:

    http://support.microsoft.com/kb/245030/en-us

    The report may look similar to this:

    SSL Server Has SSLv2 Enabled Vulnerability port 3269/tcp over SSL

    THREAT:

    The Secure Socket Layer (SSL) protocol allows for secure communication between a client and a server.

    There are known flaws in the SSLv2 protocol. A man-in-the-middle attacker can force the communication to a less secure level and then attempt to break the weak encryption. The attacker can also truncate encrypted messages.

    SOLUTION:

    Disable SSLv2.

    Upon hearing this information, you fire up your browser, read the aforementioned KB 245030 top to bottom, RDP into your DCs, and begin checking the locations specified by the article. Much to your dismay, you notice the locations specified in the article are not correct for your Windows 2008 DCs. On your 2008 DCs you see the following at this registry location:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL

    image

    "Darn you Microsoft documentation!!!!!!" you scream aloud as you shake your fist in the general direction of Redmond, WA….

    This is how it looks on a Windows 2003 Server:

    image

    Easy now…

    The registry keys and their content in Windows Server 2008, Windows 7, and Windows Server 2008 R2 look different from Windows Server 2003 and prior. The referenced article isn't accurate for Windows Server 2008. I am working on getting this corrected.

    Here is the registry location on Windows 7 and Windows Server 2008 R2, and its default contents:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel]
    "EventLogging"=dword:00000001

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\CipherSuites]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Hashes]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001

    Allow me to explain the above content, which is displayed in standard REGEDIT export format (a quick way to check your own server's current values follows this list):

    • The Ciphers key should contain no values or subkeys

    • The CipherSuites key should contain no values or subkeys

    • The Hashes key should contain no values or subkeys

    • The KeyExchangeAlgorithms key should contain no values or subkeys

    • The Protocols key should contain the following sub-keys and value:
        Protocols
            SSL 2.0
                Client
                    DisabledByDefault REG_DWORD 0x00000001 (value)
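
    If you want to compare one of your own servers against these defaults, a quick read-only check is to dump the whole Schannel key recursively from an elevated command prompt (this is just my convenience check, not something the KB calls for):

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel" /s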

    Windows Server 2008, 2008 R2 and Windows 7 support the following Protocols:

    • SSL 2.0

    • SSL 3.0

    • TLS 1.0

    • TLS 1.1

    • TLS 1.2

    Similar to Windows Server 2003, these protocols can be disabled for either the server or the client side. That is, a protocol can be omitted from the list of supported protocols sent in the Client Hello when initiating an SSL connection, or it can be disabled on the server so that even if a client requests SSL 2.0, the server will not negotiate with that protocol.

    Each protocol is designated by its own Client and Server subkeys, so you can disable a protocol for just one side of the conversation; disabling entries under Ciphers, Hashes, or CipherSuites, by contrast, affects BOTH the client and server sides. To disable a protocol per side, you first have to create the necessary subkeys beneath the Protocols key.

    For example:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Server]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]

    This is how it looks in the registry after they have been created:

    image
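
    If you would rather script the creation of those per-protocol subkeys than click through regedit, here is a minimal batch sketch of my own (not from the KB); run it elevated, trim the protocol list to whatever you actually need, and use %p instead of %%p if you paste it at an interactive prompt:

    REM Create the Client and Server subkeys for each protocol under Schannel\Protocols.
    REM reg add with no value name simply creates the key if it does not already exist.
    set SCH=HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols
    for %%p in ("SSL 2.0" "SSL 3.0" "TLS 1.0" "TLS 1.1" "TLS 1.2") do (
        reg add "%SCH%\%%~p\Client" /f
        reg add "%SCH%\%%~p\Server" /f
    )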

    Client SSL 2.0 is disabled by default on Windows Server 2008, 2008 R2 and Windows 7.

    This means the computer will not use SSL 2.0 to initiate a Client Hello.

    So it looks like this in the registry:

    image

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001

    Just like Ciphers and KeyExchangeAlgorithms, Protocols can be enabled or disabled.

    To disable other protocols, pick which side of the conversation you want to disable the protocol on, and add the "Enabled"=dword:00000000 value to that subkey. The example below disables SSL 2.0 for the server in addition to SSL 2.0 for the client.

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001   <Default client disabled as I said earlier>

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
    "Enabled"=dword:00000000 <Disables SSL 2.0 server-side>

    image

    After this, you will need to reboot the server. You probably do not want to disable TLS settings. I just added them here for a visual reference.
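
    If you prefer reg.exe to importing a .reg file, the same server-side change can be made with a one-liner (equivalent to the SSL 2.0\Server example above), followed by the same reboot:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server" /v Enabled /t REG_DWORD /d 0 /f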

    So why would you go through all this trouble to disable protocols and such, anyway? Well, there may be a regulatory requirement that your company's web servers should only support Federal Information Processing Standards (FIPS) 140-1/2 certified cryptographic algorithms and protocols. Currently, TLS is the only protocol that satisfies such a requirement. Luckily, enforcing this compliant behavior does not require you to manually modify registry settings as described above. You can enforce FIPS compliance via group policy as explained by the following:

    The effects of enabling the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" security setting in Windows XP and in later versions of Windows - http://support.microsoft.com/kb/811833

    The 811833 article talks specifically about the group policy setting below, which by default is NOT defined:

    Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options

    image

    When applied, the policy above modifies the following registry locations and their values.

    Be advised that the FipsAlgorithmPolicy information is stored differently depending on the operating system:

    Windows 7/2008 –

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy]
    "Enabled"=dword:00000000 <Default is disabled>

    Windows 2003/XP –

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
    "FipsAlgorithmPolicy"=dword:00000000 <Default is disabled>

    Enabling this group policy setting effectively disables everything except TLS.
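
    A quick, read-only way to see whether the FIPS policy is currently enforced on a given machine (using the locations above) is:

    REM Windows 7 / 2008 R2
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy" /v Enabled

    REM Windows 2003 / XP
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v FipsAlgorithmPolicy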

    Let's continue with more examples. A vulnerability report may also indicate the presence of other ciphers it deems "weak". Below I have built a .reg file that, when imported, will disable the following ciphers:

    56-bit DES
    40-bit RC4

    Behold!

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 128]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 256]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56]
    "Enabled"=dword:00000000

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL]
    "Enabled"=dword:00000000 <We are also disabling the NULL cipher here>

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
    "Enabled"=dword:00000000

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168]

    After importing these registry settings, you must reboot the server.
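
    For reference, applying and then double-checking the result looks like this (the file name is my own example; use whatever you saved the export as):

    reg import weak-ciphers.reg
    reg query "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers" /s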

    The vulnerability report might also mention that 40-bit DES is enabled, but that would be a false positive because Windows Server 2008 doesn't support 40-bit DES at all. For example, you might see this in a vulnerability report:

    Here is the list of weak SSL ciphers supported by the remote server:

      Low Strength Ciphers (< 56-bit key)

        SSLv3

          EXP-ADH-DES-CBC-SHA        Kx=DH(512)    Au=None    Enc=DES(40)      Mac=SHA1   export    

        TLSv1

          EXP-ADH-DES-CBC-SHA        Kx=DH(512)    Au=None    Enc=DES(40)      Mac=SHA1   export    

    If this is reported and it is necessary to get rid of these entries, you can also disable the Diffie-Hellman key exchange algorithm (another component of the two cipher suites shown above, designated by Kx=DH(512)).

    To do this, make the following registry changes:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms\Diffie-Hellman]
    "Enabled"=dword:00000000

    You have to create the Diffie-Hellman subkey yourself. Make this change and reboot the server. This step is NOT advised or required; I am offering it only as an option to make your server pass the vulnerability scan.
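
    If you do go down this road anyway, the reg.exe equivalent creates the subkey and sets the value in one shot:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms\Diffie-Hellman" /v Enabled /t REG_DWORD /d 0 /f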

    Keep in mind, also, that this will disable any cipher suite that relies upon Diffie-Hellman for key exchange.

    You will probably not want to disable ANY cipher suites that rely on Diffie-Hellman. Secure communications such as IPsec and SSL both use Diffie-Hellman for key exchange, and if you are running OpenVPN on a Linux/Unix server you are probably using it as well. The point I am trying to make is that you should not have to disable the Diffie-Hellman key exchange algorithm just to satisfy a vulnerability scan.

    Being secure is a good thing, and depending on your environment it may be necessary to restrict certain cryptographic algorithms from use. Just make sure you do your due diligence in testing these settings. It is also well worth your time to understand how the vulnerability scanning software your company just purchased does its testing. A double-sided network trace will reveal both sides of the client/server hello and which cryptographic algorithms are being offered from each side over the wire.

    Jim “Insert cryptic witticism here” Tierney

  • Does USMT Migrate <This Goo>?

    Hi folks, Ned here again. Below are the two most common USMT questions I get asked these days:

    1. USMT is not migrating setting X for application Y. Is USMT broken or is this expected behavior?
    2. I am in the planning phase and have not yet tried, but will USMT migrate setting X for application Y?

    There are a million little knobs in Windows and its applications, and it's impossible to document them all with much specificity. Today I discuss how to determine the answer for yourself, no matter which registry setting or application is involved. I also explain how USMT decides to migrate settings using component XML files. As long as you're systematic in your investigation, you can't go wrong. Perhaps this saves you a support case or a delayed rollout someday.

    The Sample Scenario

    I’ll approximate a recent real-world example, where the customer is testing migration from Windows XP to Windows 7 and is having trouble with customized Internet Explorer Favorites. The Favorites migrate fine but the user’s customized sort order is not preserved. In my semi-fictionalized example, they are also migrating from IE7 to IE9 and from 32-bit to 64-bit, just to make things interesting.

    image
    The user’s custom IE7 favorites

    The key to solving any problem is a disciplined approach. You must remove distractions and carefully narrow down the genuine error conditions. Sir Arthur Conan Doyle said it best:

    “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

    Let’s walk through solving this USMT behavior case, and therefore, any USMT behavior case.

    Systematic Investigation

    Step 1 – Find the actual settings

    Naturally, you must discover where the settings really live. The easiest way is to let the Internet do the walking: search. Maybe someone has already documented this. Start with trustworthy spots like TechNet, MSDN, and the MS Knowledgebase, and then fan out to the woolier Internet.

    In this case, I’m not finding much, so I move to direct data analysis. There are a couple ways to do this with Microsoft tools:

    1. Process Monitor (download x86/x64)

    2. Windows System State Analyzer, included as part of the Windows Server Logo Tools (download x86, x64)

    These two utilities can monitor for changes and tell you what changed under the covers when you flipped some switch in the UI. They each have their advantages and limitations: WSSA is simple and shows changes without much modification, but is slower and sometimes unstable (I found it crashing on third party virtual hosts, but working fine on Virtual PC and Hyper-V). Process Monitor is fast, but complex and cumbersome. I demonstrate both below and you can decide – remember, my example is with Internet Explorer Favorites customization on XP so your steps will naturally vary. Before you get all bothered, there are plenty of snapshot comparison tools out there too; I can’t discuss their merits obviously or lawyers with sharks for arms will eat me.

    Scanning with Windows System State Analyzer

    1. I install WSSA and start it up (as an Administrator).

    2. I use the Tools menu to modify the options to monitor only the registry and the user profiles directory. You can scope it down as far as the test user's profile, but some settings might exist in the All Users or Public profiles. I avoid scanning the entire drive, drivers, or services.

    image

    3. I start my application. In this case, I run Internet Explorer and open the Favorites menu (I want to be as close as possible to the change to avoid gathering unrelated data).

    4. I start a Baseline Snapshot, allow it to complete, and then move to the "Make the change" section below.

    image

    Scanning with Process Monitor

    1. I install ProcMon and start it up (as an Administrator).

    image

    2. I disable capturing and set the following filters to gather registry changes:

    Operation is RegSetValue

    image

    Note: I may have to do this multiple times for various filters. For example, if I wanted to see file changes I’d switch my filter to instead:

    Operation is CreateFile
    Path begins with c:\documents and settings

    3. I start my application. In this case, I run Internet Explorer and open the Favorites menu (I want to be as close as possible to the change to avoid gathering unrelated data).

    4. I clear the current data then start capturing.

    Make the change

    1. I alter my setting using the application. For example, here I move the AskDS blog link to the top of my favorites menu. Where it belongs. :)

    image

    2. I close the application to flush the change into the registry (there’s no guarantee that my change was immediately stored).

    Examine the Results with Process Monitor

    I stop capturing and look at the filtered results:

    HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\MenuOrder\Favorites\
    Order = <binary goo>

    image

    Examine the Results with Windows System State Analyzer

    I create a Current Snapshot, then compare to the baseline:

    menuorderssa
    click picture to see in large-o-vision

    Evaluate

    Isn’t that interesting? There’s a registry value named “Order” in our “MenuOrder\Favorites” key. I think we have a winner. I can confirm by looking up that specific key and yep, it’s the one. This also tells me that the key has been in use since (at least) Internet Explorer 4.0 and in every operating system since NT 4.0. The odds that this setting exists and works like this in Windows 7 and IE9 have gone way up.

    I take a quick look and yes, the same keys exist on my brand new test destination computer. A couple of red herrings out of the way – the OS and Browser version don’t matter.

    Sometimes you can understand the before and after settings, but this example isn’t pretty:

    image
    Yuck, REG_BINARY 

    Step 2 – Find the XML

    Ok, so now I need to find out if USMT migrates the Favorites customizations (I'm sure it will migrate the Favorites themselves; that's simply a shell folder in the user profile that we gather up). Luckily, much of the data migrated by USMT is defined in human-readable XML files. Since I now know the registry paths, I can search for them with confidence using FINDSTR.EXE within the USMT folder structure.

    Findstr.exe /s /i "currentversion\explorer\menuorder" *.*

    image

    See how FINDSTR returned the .man files containing data about those registry keys. Many applications' settings are migrated using these manifest files rather than migapp.xml (which is mainly for apps not included with Windows). There are two kinds of manifests here: the DLManifests used on Windows XP and the ReplacementManifests used on Windows Vista and Windows 7.

    Since I’m on XP I’ll examine microsoft-windows-ie-internetexplorer-dl.man (using Visual Studio 2010 Express, as I previously described here). Right away, I see that these Favorites customizations absolutely will migrate.

    <include>

    <objectSet>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Internet Explorer\* [*]</pattern>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\* [*]</pattern>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\AutoComplete\* [*]</pattern>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MenuOrder\Favorites\* [*]</pattern>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Webcheck\* [*]</pattern>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Ext\* [*]</pattern>

      <pattern type="Registry">HKCU\SOFTWARE\Microsoft\Search Assistant\* [*]</pattern>

      <pattern type="File">%CSIDL_LOCAL_APPDATA%\Microsoft\Internet Explorer\* [*]</pattern>

      <pattern type="File">%CSIDL_LOCAL_APPDATA%\Microsoft\Windows\History\* [*]</pattern>

      <pattern type="File">%CSIDL_APPDATA%\Microsoft\Windows\Cookies\* [*]</pattern>

      <pattern type="File">%CSIDL_APPDATA%\Microsoft\Internet Explorer\* [*]</pattern>

    </objectSet>

    </include>

    I chose wisely here but it might take you a little digging to find the migrating manifest. For example, the entire HKCU\Software\Microsoft\Windows key and its contents could be migrated and there would be no mention of the much deeper Favorites folder – my search would have failed in that case. If you don’t find it low, search high.

    Important note: Windows Vista and Windows 7 also include XML in the %systemroot%\winsxs\manifests folder, stored as .manifest files. You should examine these files with FINDSTR.EXE if your search of the USMT manifests returns no data and you are using a Windows Vista or Windows 7 computer as the source machine. They are used automatically if there is no replacement manifest.

    There are a lot of manifests here and the ones that have a migration scope that includes “USMT” are the only ones that matter; the latest versioned one will be used if there are multiples:

    <migration scope="Upgrade,MigWiz,USMT">

    To find them all on your test computer, search your windows\winsxs\manifests folder like this:

    C:\Windows\winsxs\Manifests>findstr /m /i "usmt" *.manifest
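
    To narrow that list down further, you can feed the USMT-scoped manifest names back into a second FINDSTR for your registry string. A rough interactive sketch ("menuorder" is just my example search string; in a batch file use %%f instead of %f):

    C:\Windows\winsxs\Manifests>findstr /m /i "usmt" *.manifest > %temp%\usmt-manifests.txt

    C:\Windows\winsxs\Manifests>for /f "delims=" %f in (%temp%\usmt-manifests.txt) do @findstr /m /i "menuorder" "%f"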

    Step 3 – Validate the Processing

    This part should be easy: I migrate and logon as the test user to make sure the settings migrated successfully.

    But what if they didn’t?

    My investigation looked correct and I am confident that USMT will migrate these settings. I need to ensure that nothing else is blocking transfer, despite USMT’s best efforts. Again, the methodical approach works best.

    1. I run scanstate to gather the settings on my test XP computer.

    2. I run loadstate to restore the settings on my test Windows 7 computer.

    Critical note: remember, I am using the USMT manifests for both the scanstate and loadstate. If those manifests are not available, my settings will not migrate. If you look very carefully at the debug logs you will see the indications of failure:

    "Downlevel Manifests Folder is not present. System component settings will not be gathered."

     

    "The ReplacementManifests folder used to service system component manifests is not present. OS settings migration will be done with system component manifests installed onto the system."

    The manifests might be missing because:

      • You forgot to copy them to the computer.
      • The manifest folders are not in the USMT working directory. For example, you are doing this (bad):

        C:\>c:\usmt\x86\scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml

        C:\>\\server\share\scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml

        C:\>c:\usmt\amd64\scanstate.exe c:\store /i:c:\usmt\xml\migdocs.xml /i:c:\usmt\xml\migapp.xml

    Instead of doing this (good):

    C:\usmt\x86>scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml

    C:\>NET USE U: \\server\share

    U:\>scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml

    C:\usmt\AMD64>scanstate.exe c:\store /i:c:\usmt\xml\migdocs.xml /i:c:\usmt\xml\migapp.xml

    Notice how the executable runs from within the USMT folder path in the latter examples. There's no USMT command-line switch to provide a path to the manifest folders! You can run into this same issue with SCCM or MDT as well.
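
    A simple way to stay out of this trap is to change into (or map a drive to) the USMT folder before launching the tool. A quick sketch with made-up paths; pushd picks the drive letter for you:

    C:\>pushd \\server\share\usmt\amd64

    Z:\>scanstate.exe c:\store /i:migdocs.xml /i:migapp.xml

    Z:\>popd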

    3. Before I log on to the test destination computer with my test user, I load his user registry hive with regedit.exe and validate that the settings transferred:

    image

    image

    It looks good so I unload the hive. Don’t forget to do this!
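
    If you prefer the command line to regedit's File > Load Hive, the same spot check can be scripted; the hive name and profile path below are only examples:

    reg load HKU\MigTest "C:\Users\testuser\NTUSER.DAT"
    reg query "HKU\MigTest\Software\Microsoft\Windows\CurrentVersion\Explorer\MenuOrder\Favorites"
    reg unload HKU\MigTest

    The unload will fail if anything still has the hive open, so close regedit or any tool pointing at it first.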

    4. Then I log on as the migrated test user, but don't start Internet Explorer. I examine the registry again and make sure it's still unchanged. Why this step? The settings could be changing due to:

    • A logon script
    • A group policy
    • The act of logging on itself (as the first logon triggers a number of “personalization” steps that might override certain migrated settings).

    5. If I still didn’t know why the data was changing, I’d run Process Monitor in boot logging mode to see when and where the settings changed. That’s a theory though - I’ve never reached this stage using this methodical approach.

    Sir Arthur also said, “It is a capital mistake to theorize before one has data.”  Sherlock Holmes would have made a good Support Engineer.

    Final Note

    These techniques apply throughout USMT, not just to IE settings in a manifest file. I'm often asked about Office and migapp.xml, for example, and the answer is always the same - do the legwork above and you'll know. The same techniques also apply to preventing migration of settings: once you know how USMT migrates a setting, writing an unconditionalExclude rule to block it is easy.

    For those keeping score: the customer’s issue was the insidious missing manifest folders I pointed out in the validation phase. It happens to nearly everyone once… but never twice. :)

    Until next time.

    Ned “The game is afoot!” Pyle