Blog - Title

May, 2011

  • Friday Mail Sack: “Who am I kidding, more like Monthly” Edition

    Hi folks, Ned here again with another tri-weekly Friday Mail Sack. This time we talk service auditing, trust creation, certificates and USMT, SYSVOL migration with RODCs, DFS stuff, RPC and firewalls, virtualization, and the zombie corpse of FRS.

    Shoot it in the head!

    Question

    We’re setting up a trust between two domains in two forests. When we type in the name of the domain, we are immediately prompted for credentials in that domain with the message “To create this trust relationship, you must supply user credentials for the specified domain.” We can enter any credentials from that domain and it will work – some nobody user works, never mind an admin:

    image

    We are later prompted for administrative credentials like usual when finalizing the trust. Everything works, it’s just weird.

    Answer

    Anyone can reproduce this issue by removing the NullSessionPipes registry entry for LSARPC. NullSessionPipes – along with RestrictNullSessAccess - controls anonymous access to Named Pipes. Very legacy stuff. The list of default allowed protocols varies between OS and server role; for instance, a pure Windows Server 2008 R2 DC has a default list of:

    NETLOGON
    LSARPC
    SAMR

    You’ll find various security documents giving valid (or crazy) advice about messing with these settings but it boils down to “what do you need for your specific server, client, and application workloads to function?” If you get so secure that no one can work, you’ve gone too far.

    In this case, setting up a trust uses the LSARPC protocol to connect to a DC in the other domain and find out basic information about it. If you can’t connect to it anonymously for this “phone book” kind of directory info that dates back to NT, you get prompted for creds. Since the info is public knowledge in that domain, any user is adequate.

    These are often set through security policies and if you have this issue, look there first.
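    For reference, NullSessionPipes and RestrictNullSessAccess live under the LanmanServer parameters key; on a default Windows Server 2008 R2 DC the contents look roughly like this (shown descriptively – check your own servers, since roles add and remove entries):

```
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters
    NullSessionPipes        REG_MULTI_SZ    NETLOGON
                                            LSARPC
                                            SAMR
    RestrictNullSessAccess  REG_DWORD       0x00000001
```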

    image

    I’ve also seen it as part of a server image from someone who had too much time on their hands.

    Question

    DFSN is awesome. What is decidedly not awesome is when the requisite antivirus software absolutely kills client-side performance. What can loyal DFSN evangelists do (short of removing the AV or completely disabling network file scanning) on the client-side to prevent our users from suffering a dreaded antivirus performance hit when using DFS Namespaces?

    Answer

    Sort of a sideways approach, but if you are using Windows 7 clients then Offline Files might be an option. As an experiment with some test computers/users, you can configure:

    • Enable Transparent Caching
    • Configure Background Sync
    • Configure Slow-Link Mode

    You could make these computers work as if they are on a “slow network”, working primarily out of their Offline Files cache and trickle synchronizing their data back to the servers in the background continuously.

    I specifically call out Windows 7 as Vista doesn’t support all these features, and XP supports none of them. XP is also gross.

    Ultimately, you can only bandage things in this scenario. Whaling on your vendor (even if it’s us!) to improve performance is the only thing left. Like beer, they are the cause of - and solution to - all of life’s problems…

    Question

    I read your previous post here where you talked about how USMT 4.0 migrates computer certificates without private keys. Generally speaking this has not been an issue, as we have certificate auto-enrollment and the new computers get new valid certs. One application is having problems with these migrated invalid certs, though, and we need to block them from migrating. Is that possible?

    Answer

    Yes. While this should be avoided if possible (a machine cert without a private key might still mean something useful to some strange application), it's simple to block computer certificate migration. Here is sample unconditional exclusion XML named skipmachinecerts.xml that you would run only with scanstate.exe (no need for loadstate to run it):

    scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml /i:skipmachinecerts.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/sampleskipcomputercerts">
     <component type="Documents" context="System">
      <displayName>SkipComputerCertMig</displayName>
      <role role="Data">
       <rules>
        <!-- This override XML prevents computer (not user) certificates from migrating. -->
        <!-- This should ONLY be used if machine certs with no private keys are causing issues -->
        <!-- Nice applications consider these certs invalid and computers request auto-enrollment -->
        <unconditionalExclude>
         <objectSet>
          <pattern type="Registry">HKLM\SOFTWARE\Microsoft\SystemCertificates\My\Certificates\*[*]</pattern>
         </objectSet>
        </unconditionalExclude>
       </rules>
      </role>
     </component>
    </migration>

    You should never block user certificate migration: user certs have private keys, and if users are securing data like EFS-encrypted files you would be locking them out of their files. If there's no DRA, the loss would be permanent.

    Question

    What is the event, if any, that is triggered when we perform a D2 on an FRS non-SYSVOL replica set? Is it the same error message we get when we perform it on SYSVOL, just with the new replica set name inserted?

    Answer

    Ha! You wish it were that cool. You get these events (in this order – here I D2’ed just a single custom replica set and did not touch SYSVOL at all):

    clip_image002

    clip_image002[4]

    clip_image002[6]

    clip_image002[8]

    Some old docs also say you should get a 13565 when you BURFLAG a replica – but you do not unless it’s SYSVOL:

    clip_image002[10]

    “Oh, but this is a DC” you are saying. Ok. Here’s a member server getting D2’ed:

    1. 13520 like above
    2. 13553 like above
    3. 13554 like above
    4. Done.

    Question

    We have a server that is part of a simple DFS Namespace and Replication setup. Is there any issue with virtualizing a DFS server, shutting down the old host, and bringing the virtual one online? We would do this during a period of downtime, so data change would be minimal.

    Answer

    That’s pretty much the point of SCVMM so I can’t really say no, can I? :)

    http://technet.microsoft.com/en-us/library/cc764232.aspx

    The important thing (as always with P2V) is that you do a one-to-one change. You cannot have both servers alive at the same time. This is the risk with tools like disk2vhd.exe and other stuff on the internet, and why SCVMM is less risky – it ensures you don’t shoot yourself in the foot. Once the new DFS server looks like it’s working, destroy the old server so there is no chance it can come back up (format the drive – you got a complete bare-metal-capable backup of it first, right???). To the other servers it would just look like that server rebooted and reappeared no worse for wear.

    Question

    We rolled back a DFSR SYSVOL migration (don’t ask). All the DCs rolled back fine except one – an RODC ended up in an inconsistent state. He is the only one that has entries under DFSR-LocalSettings, and he is constantly switching between state 5 and 9.

    The event logs show:

    Log Name:      DFS Replication
    Source:        DFSR
    Date:          5/5/2011 9:00:00 AM
    Event ID:      6016
    Task Category: None
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      rodc1.contoso.com
    Description:
    The DFS Replication service failed to update configuration in Active Directory Domain Services. The service will retry this operation periodically.

    Additional Information:
    Object Category: msDFSR-LocalSettings
    Object DN: CN=DFSR-LocalSettings,CN=rodc1,OU=Domain Controllers,DC=contoso,DC=com
    Error: 2 (The system cannot find the file specified.)
    Domain Controller: writabledc1.contoso.com

    Polling Cycle: 60

    I’m not sure of the recommended way to clean it up.

    Answer

    Run on your PDC Emulator DC:

    DFSRMIG.EXE /DeleteRoDfsrMember <name of the rodc>

    Ensure that AD replication converges to the RODC. Then update the DFSR service with:

    DFSRDIAG.EXE POLLAD /mem:<name of the rodc>

    As you can see, we planned for this eventuality. :)

    Question

    Do you have docs on configuring Advanced Audit Policy granular object access for HIPAA, Sarbanes-Oxley, or other US regulatory acts?

    Answer

    Neither the HIPAA nor SOX acts make any specific mention of actual object access auditing settings in Windows or any OS - only that you must audit… stuff. Your customer should talk to whoever audits them to find out what their (arbitrary) requirements are so they satisfy the audit. There is an entire industry of “compliance” vendors out there that sell solutions and settings recommendations that vary greatly between each company. We even have one, although it wisely makes no mention of HIPAA or Sarbanes and then completely indemnifies itself by saying it’s totally up to the customer to determine the right settings and we have no opinion. I bet our lawyers had a crack at that one :-D.

    Question

    What is the best method for cleaning out the PreExisting folder? I've done quite a bit of searching, but most of the results are about cleaning out the Conflict directory or recovering files from the PreExisting folder.

    Answer

    If you don’t care about the files anymore (I recommend you at least back them up), you can delete the files and the preexistingManifest.xml file. You don’t need to stop the service or anything, once initial sync is done DFSR no longer cares about those files either. :)
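    If you want to script that cleanup, here's a minimal sketch. It assumes the usual DfsrPrivate\PreExisting layout under the replicated folder root (with the manifest inside that folder); treat it as illustrative, not an official tool:

```python
import shutil
from pathlib import Path

def clean_preexisting(replicated_folder):
    """Delete everything under <replicated folder>\\DfsrPrivate\\PreExisting,
    including PreExistingManifest.xml. Back the files up first if you care."""
    pre = Path(replicated_folder) / "DfsrPrivate" / "PreExisting"
    for item in pre.iterdir():
        if item.is_dir():
            shutil.rmtree(item)   # preserved folder trees
        else:
            item.unlink()         # preserved files and the manifest
```

    As noted above, no service stop is needed; once initial sync is done, DFSR no longer references these files.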

    Question

    When using the netsh.exe command to set the port range for dynamic RPC, what is the minimum number of ports that you recommend be provisioned? We need to set this value for application servers in an Extranet and want to make sure we provision enough ports but satisfy our firewall folks.

    Answer

    There’s no rule; it’s just as many as you find you need with testing. Our recommendation is not to mess with these if you are trying to lower the number of ports open in a firewall, and instead use IPSEC tunnels between computers – this means you only have to open a couple of ports and the traffic is protected regardless. Opening “only 500” ports is not much better than the default of many thousands, and going too low will cause mysterious random outages that take forever to figure out.

    Barring that, I usually recommend first leaving the default and evaluating to see what the usage patterns are – then setting it to match, with maybe a +10% fudge factor for unexpected growth. Then document the heck out of it, because when you’re gone and someone else inherits that system, they are going to be fornicated when problems happen. No one will be expecting that sort of restriction.
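    If you do end up restricting the range, the sizing arithmetic is trivial; here's a sketch of the "measured peak plus ~10%" rule (the peak number is whatever your monitoring showed, and the netsh command in the comment is the standard dynamic-port syntax):

```python
def recommended_port_count(observed_peak, fudge_percent=10):
    """Size a restricted dynamic RPC port range: measured peak concurrent
    port usage plus a headroom percentage, rounded up (integer math)."""
    headroom = -(-observed_peak * fudge_percent // 100)  # ceiling division
    return observed_peak + headroom

# You would then apply it with, for example:
#   netsh int ipv4 set dynamicport tcp start=49152 num=<count>
# (run the matching "int ipv6" command too), and document it loudly.
```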

    Question

    It’s pretty easy to audit who is starting and stopping services in Windows Server 2003; I just examine the System event log for events 7035 and 7036, sourced to Service Control Manager. The User field will show who stopped and started a service.

    But Windows Server 2008 and later don’t do this. Is there a way to audit their services?

    Answer

    Yes. You will need to decide which services you want to audit as there is no simple way to turn it all on for everything, though. You probably only want to know about some specific ones anyway. Who cares that Ned restarted the Zune Wireless service on his laptop?

    1. Log on as an administrator; make sure you use an elevated CMD prompt if UAC is on.
    2. Run on the affected server:

    SC QUERY > Svcs.txt

    3. Examine the svcs.txt for your service “DISPLAY_NAME” that is being restarted.

    For example, in my case I looked for “Windows Time” (no quotes) and see:

    SERVICE_NAME: W32time
    DISPLAY_NAME: Windows Time
    TYPE : 20 WIN32_SHARE_PROCESS
    STATE : 4 RUNNING
    (STOPPABLE, NOT_PAUSABLE, ACCEPTS_SHUTDOWN)
    WIN32_EXIT_CODE : 0 (0x0)
    SERVICE_EXIT_CODE : 0 (0x0)
    CHECKPOINT : 0x0
    WAIT_HINT : 0x0

    4. Above the display name you will see the SERVICE_NAME. Note that for below.
    5. Run:

    SC SDSHOW <service name> > sd.txt

    Example:

    SC SDSHOW w32time > sd.txt

    6. Open this text file. It will contain SDDL data similar (not necessarily the same as below, do not re-use my example) to this:

    D:(A;;CCLCSWLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSD
    RCWDWO;;;SO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)

    7. Copy the following and add it to the end of the SDDL string in that text file:

    (AU;SAFA;RPWPDT;;;WD)

    So if you had used my example SDDL data and then added the above string, you would now
    have this, all on one line:

    D:(A;;CCLCSWLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSD
    RCWDWO;;;SO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)S:(AU;SAFA;RPWPDT;;;WD)

    Note that there is an S: that separates the DACL and SACL sections. If your exported SDDL did not contain an S: you must prepend it to your SACL entry like so:

    S:(AU;SAFA;RPWPDT;;;WD)

    8. Copy and paste that whole new string and run:

    SC SDSET <name of the service> <the big new string>

    Example:

    SC SDSET w32time D:(A;;CCLCSWLOCRRC;;;AU)(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;BA)(A;;CCDCLCSWRPWPDTLOCRSD
    RCWDWO;;;SO)(A;;CCLCSWRPWPDTLOCRRC;;;SY)S:(AU;SAFA;RPWPDT;;;WD)

    Note: What we are doing is adding an audit SACL to the service so that when the previous auditing steps I gave you are used, the restart of the service will be audited and we’ll know who did what. Remember that if there was no auditing in place on the service already (after the "S:") then you will need to add that to the string.
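    The string surgery in steps 6-8 is mechanical enough to script. Here's a small sketch - pure string handling that mirrors the S: rule described above (appending works because the SACL is the final section in these exports):

```python
def add_service_audit_ace(sddl, ace="(AU;SAFA;RPWPDT;;;WD)"):
    """Append the audit ACE from step 7 to an 'SC SDSHOW' string.
    If the export had no S: (SACL) section, create one first."""
    if "S:" in sddl:
        return sddl + ace        # SACL present; the ACE goes on the end
    return sddl + "S:" + ace     # no SACL yet; prepend the S: marker
```

    Feed the result to SC SDSET as in step 8.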

    9. Enable success auditing for the subcategories "Other Object Access Events" and "Handle Manipulation".
    10. Look for event 4656. Object Server will be "SC Manager", Object Name will be the name of the service, and Access Request Information will show the operation (ex: "Stop the Service").

    image

    Until next time.

    - Ned “yes, bwaamp is a technical term here” Pyle

  • USMT and U: Migrating only fresh domain profiles

    Hi folks, Ned here again. Frequently someone asks me how to make USMT 4.0 migrate only recently used domain user profiles. This might sound like a simple request for a USMT veteran, but there are some subtleties caused by processing rules and behaviors. Today I go through how this works, talk about pitfalls, and ultimately show how to solve the issue. Forearmed, you can solve quite a few more migration scenarios once you understand the rules and technique.

    The goal and initial results

    The plan here was to migrate only Cohowinery.com domain user profiles that had been accessed in the past three months, while skipping all local user profiles. This test computer had a variety of local and domain profiles, some of them very stale:

    image

    The USMT syntax used was similar to this:

    Scanstate.exe c:\store /uel:90 /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml /c /o

    So far so good. Then they restored the data:

    Loadstate c:\store /uel:90 /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml /c

    Which failed and returned to the console:

    Starting the migration process

    Processing the settings store

    Selecting migration units

    Failed.

      Unable to create a local account because /lac was not specified

      See the log file for more information.

    LoadState return code: 14

    Looking in the loadstate.log they saw:

    The account 7-x64-rtm-01\admin is chosen for migration, but the target does not have account 7-X64-RTM-01\admin. See documentation on /lac, /lae, /ui, /ue and /uel options.

    Unable to create a local account because /lac was not specified[gle=0x00000091]

    What the? The "admin" user is a local account. Why was that migrated?

    image

    So they went back and examined the scanstate console and noticed something else:

    image

    Huh. All the domain users were migrating, even though BShirley and SDavis had not logged on in more than two years. They could add /LAC to stop the loadstate failure, but that wouldn't accomplish the goals.

    Understanding what happened

    USMT has complex rules around profile precedence and the /UI, /UEL, and /UE command-line switches. Despite the best efforts of our talented TechNet writer Susan, the syntax is inherently nutty. Let's get the rules straight here first:

    • /UI (User Include) always migrates the profiles it matches, regardless of /UE or /UEL. This looks simple until you remember that all profiles are implicitly included, so unless you also specify /UE:*, everything else migrates too. You must always use /UE:* if using /UI.
    • /UE (User Exclude) blocks migrating profiles as expected, but it is always overridden by /UI and /UEL.
    • /UEL (User Exclude based on last Logon) migrates profiles based on "newer than" rules; either a date or a number of days. Meaning that if a user's NTUSER.DAT has been modified within that time or date, the profile will be included in migration. /UEL always supersedes /UE.
    • You don’t have to include /UI if you’re using /UEL or /UE. If you're blocking one thing - like if I only want to block local users using /UE:%computername%\* - leaving /UI off is sufficient and less confusing.
    • All users are implicitly migrated. So not providing any switches gets everyone.
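    To make those rules concrete, here's a toy model in Python - not USMT's actual code, just the precedence logic from the bullets above with made-up helper names:

```python
from datetime import datetime, timedelta
from fnmatch import fnmatchcase

def selected_for_migration(domain, name, last_logon,
                           ui=(), ue=(), uel_days=None, now=None):
    """Toy model of the precedence rules: /UI beats everything, /UEL
    supersedes /UE, and with no switches every profile migrates."""
    now = now or datetime.now()
    full = f"{domain}\\{name}".lower()
    if any(fnmatchcase(full, p.lower()) for p in ui):
        return True                    # /UI always migrates a match
    if uel_days is not None:
        # fresh NTUSER.DAT => included, stale => excluded (supersedes /UE)
        return last_logon >= now - timedelta(days=uel_days)
    if any(fnmatchcase(full, p.lower()) for p in ue):
        return False                   # /UE blocks it
    return True                        # implicit include: everyone migrates
```

    Running the failed scenario through this model reproduces it: /ui:cohowinery\* pulls in stale domain profiles despite /uel:90, and a recently used local profile slips past /UEL.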

    Returning to the scenario above, we now know what happened:

    1. /UI was set for all domain users, meaning all domain users will migrate despite /UEL.
    2. /UEL would have blocked local users from migrating, but those accounts had logged on recently.
    3. Because the local users were included and (naturally) did not exist on the destination computer, loadstate required the /LAC switch to recreate them.

    One more insidious point: when you run scanstate on a computer - even with various profile filters listed, and even if you cancel scanstate before the examination phase completes - all of the profiles are loaded and unloaded. This means that if you run scanstate even once, /UEL will no longer work, because all of your NTUSER.DAT files are modified to "right now".

    When testing UEL be sure to keep one of those handy "file date modification" utilities close. I use an internal one here called FILEDATE.EXE, but there are a zillion similar freebies on the internet (often with the same name). All you need to do is change the date on a given NTUSER.DAT to make it "stale" to USMT.
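    The guts of such a utility are a single call; a quick sketch (FILEDATE.EXE is internal, but any equivalent does this):

```python
import os
import time

def make_stale(path, days):
    """Push a file's last-modified time back by <days>, so /UEL treats the
    owning profile as stale. Point it at a copy of NTUSER.DAT to test."""
    stale = time.time() - days * 86400
    os.utime(path, (stale, stale))   # set (atime, mtime) together
```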

    image

    Making it work

    You know how I like to ramble on about how things work and why they do what they do. Many readers just want the fix so they can get back to FaceBook YouTube work. No one says you have to use the same command-line on your scanstate and loadstate. In this case, that's the solution:

    Step 1

    Scanstate the source computer using only /UEL. This meets the need for only getting current profiles. This may catch some local user profiles but a local user is unlikely to have much data to migrate. It's also unlikely that a local user is in regular use in a domain environment, meaning /UEL probably catches it as well. For example:

    Scanstate c:\store /uel:90 /i:migdocs.xml /i:migapp.xml

    image

    Even in my example, where a local user was gathered above, it added only 25MB to each store when using a Windows 7 source computer (due to the default Windows Mail files created for all users). If I had used a hardlink migration, it wouldn't even have been that. In reality, my local profiles were quite stale, as I had not used them since creating the computer - I had to log on to them to make them "fresh" for my example. :)

    Step 2

    Loadstate the destination computer using /UI and /UE. This prevents the restore of any local profiles captured earlier. Since I only catch "fresh" profiles in the scanstate, there's no need to provide /UEL here. For example:

    Loadstate c:\store /ue:* /ui:cohowinery\* /i:migdocs.xml /i:migapp.xml

    image

    With that I finally meet the goal of only migrating domain users that have logged on within the past 90 days.

    Note: If the source and destination computer names are identical (perhaps a wipe and load scenario), you could alternatively use:

    Loadstate c:\store /ue:%computername%\* /i:migdocs.xml /i:migapp.xml

    Musings

    To wrap up the USMT rules: nothing means everything, something means everything except when it means nothing, and sometimes often means never. Simple!

    Other notes:

    • More detail and examples on all of this here:

    http://technet.microsoft.com/en-us/library/dd560781(v=WS.10).aspx
    http://technet.microsoft.com/en-us/library/dd560804(v=WS.10).aspx

    • I used /C a few times for ease of repro. Microsoft still recommends using a config.xml file with error handling rules rather than arbitrarily bypassing non-fatal errors with /c.

    • There is a paradox Easter egg in my examples. Whoever points it out in the Comments first gets the "AskDS Silverback Alpha Geek Crown", currently worn by Darkseid64.

    Until next time.

    Ned "U-Haul" Pyle

  • Speaking in Ciphers and other Enigmatic tongues…

    Hi! Jim here again to talk to you about Cryptographic Algorithms, SChannel and other bits of wonderment. So, your company purchases this new super awesome vulnerability and compliance management software suite, and they just ran a scan on your Windows Server 2008 domain controllers and lo! The software reports back that you have weak ciphers enabled, highlighted in RED, flashing, with that "you have failed" font, and including a link to the following Microsoft documentation –

    KB245030 How to Restrict the Use of Certain Cryptographic Algorithms and Protocols in Schannel.dll:

    http://support.microsoft.com/kb/245030/en-us

    The report may look similar to this:

    SSL Server Has SSLv2 Enabled Vulnerability port 3269/tcp over SSL

    THREAT:

    The Secure Socket Layer (SSL) protocol allows for secure communication between a client and a server.

    There are known flaws in the SSLv2 protocol. A man-in-the-middle attacker can force the communication to a less secure level and then attempt to break the weak encryption. The attacker can also truncate encrypted messages.

    SOLUTION:

    Disable SSLv2.

    Upon hearing this information, you fire up your browser, read the aforementioned KB 245030 top to bottom, RDP into your DCs, and begin checking the locations specified by the article. Much to your dismay, you notice the locations specified in the article are not correct for your Windows Server 2008 DCs. On your 2008 DCs you see the following at this registry location:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL

    image

    "Darn you Microsoft documentation!!!!!!" you scream aloud as you shake your fist in the general direction of Redmond, WA….

    This is how it looks on a Windows 2003 Server:

    image

    Easy now…

    The registry keys and their contents in Windows Server 2008, Windows 7 and Windows Server 2008 R2 look different from Windows Server 2003 and prior. The referenced article isn't accurate for Windows Server 2008. I am working on getting this corrected.

    Here is the registry location on Windows 7 and Windows Server 2008 R2, and its default contents:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel]
    "EventLogging"=dword:00000001
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Ciphers]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\CipherSuites]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Hashes]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001

    Allow me to explain the above content that is displayed in standard REGEDIT export format:

    • The Ciphers key should contain no values or subkeys
    • The CipherSuites key should contain no values or subkeys
    • The Hashes key should contain no values or subkeys
    • The KeyExchangeAlgorithms key should contain no values or subkeys
    • The Protocols key should contain the following subkeys and value:
        Protocols
            SSL 2.0
                Client
                    DisabledByDefault REG_DWORD 0x00000001 (value)

    Windows Server 2008, Windows Server 2008 R2 and Windows 7 support the following protocols:

    • SSL 2.0
    • SSL 3.0
    • TLS 1.0
    • TLS 1.1
    • TLS 1.2

    Similar to Windows Server 2003, these protocols can be disabled for the server or client architecture. Meaning that either the protocol can be omitted from the list of supported protocols included in the Client Hello when initiating an SSL connection, or it can be disabled on the server such that even if a client requests SSL 2.0, the server wouldn't respond with that protocol.

    Each protocol has Client and Server subkeys. You can disable a protocol for either the client or the server side, but disabling Ciphers, Hashes, or CipherSuites affects BOTH client and server. You have to create the necessary subkeys beneath the Protocols key yourself to achieve this.

    For example:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 3.0\Server]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.0\Server]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.1\Server]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Client]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\TLS 1.2\Server]

    This is how it looks in the registry after they have been created:

    image

    Client SSL 2.0 is disabled by default on Windows Server 2008, 2008 R2 and Windows 7.

    This means the computer will not use SSL 2.0 to initiate a Client Hello.

    So it looks like this in the registry:

    image

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001

    Just like Ciphers and KeyExchangeAlgorithms, Protocols can be enabled or disabled.

    To disable other protocols, select which side of the conversation you want to disable the protocol on, and add the "Enabled"=dword:00000000 value. The example below disables SSL 2.0 for the server in addition to SSL 2.0 for the client.

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Client]
    "DisabledByDefault"=dword:00000001   <Default client disabled as I said earlier>

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\Protocols\SSL 2.0\Server]
    "Enabled"=dword:00000000 <Disables SSL 2.0 server-side>

    image

    After this, you will need to reboot the server. You probably do not want to disable TLS settings. I just added them here for a visual reference.

    So why would you go through all this trouble to disable protocols and such, anyway? Well, there may be a regulatory requirement that your company's web servers should only support Federal Information Processing Standards (FIPS) 140-1/2 certified cryptographic algorithms and protocols. Currently, TLS is the only protocol that satisfies such a requirement. Luckily, enforcing this compliant behavior does not require you to manually modify registry settings as described above. You can enforce FIPS compliance via group policy as explained by the following:

    The effects of enabling the "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing" security setting in Windows XP and in later versions of Windows - http://support.microsoft.com/kb/811833

    The 811833 article talks specifically about the group policy setting below which by default is NOT defined –

    Computer Configuration\ Windows Settings \Security Settings \Local Policies\ Security Options

    image

    When applied, the policy above modifies the following registry locations and their values.

    Be advised that this FipsAlgorithmPolicy information is stored in different ways as well –

    Windows 7/2008 –

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy]
    "Enabled"=dword:00000000 <Default is disabled>

    Windows 2003/XP –

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
    "FipsAlgorithmPolicy"=dword:00000000 <Default is disabled>

    Enabling this group policy setting effectively disables everything except TLS.

    Let’s continue with more examples. A vulnerability report may also indicate the presence of other Ciphers it deems to be “weak”. Below I have built a .reg file that when imported will disable the following Ciphers:

    56-bit DES
    40-bit RC4

    Behold!

    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 128]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\AES 256]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56]
    "Enabled"=dword:00000000

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\NULL]
    ; we are disabling the NULL cipher suite as well
    "Enabled"=dword:00000000

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
    "Enabled"=dword:00000000

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 56/128]

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\Triple DES 168]
    After importing these registry settings, you must reboot the server.
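    Before importing a .reg file like the one above, it's worth confirming exactly which cipher keys it explicitly disables versus merely creates (a key with no "Enabled" value is untouched). This is a quick illustrative sketch of my own for that sanity check, not a Microsoft tool:

```python
# Parse a .reg export and report which SCHANNEL cipher subkeys are
# explicitly disabled ("Enabled"=dword:00000000) vs. merely created.
def disabled_ciphers(reg_text):
    disabled, current_key = [], None
    for line in reg_text.splitlines():
        line = line.strip()
        if line.startswith("[") and line.endswith("]"):
            current_key = line[1:-1].rsplit("\\", 1)[-1]  # leaf key name
        elif line.replace(" ", "").lower() == '"enabled"=dword:00000000':
            disabled.append(current_key)
    return disabled

sample = r'''
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56]
"Enabled"=dword:00000000
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 128/128]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
'''
print(disabled_ciphers(sample))  # ['DES 56', 'RC4 40/128']
```

    Run against the full .reg file above, only DES 56, NULL, and RC4 40/128 should show up as disabled; the other keys exist but keep their defaults.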

    The vulnerability report might also mention that 40-bit DES is enabled, but that would be a false positive because Windows Server 2008 doesn't support 40-bit DES at all. For example, you might see this in a vulnerability report:

    Here is the list of weak SSL ciphers supported by the remote server:

      Low Strength Ciphers (< 56-bit key)

        SSLv3

          EXP-ADH-DES-CBC-SHA        Kx=DH(512)    Au=None    Enc=DES(40)      Mac=SHA1   export    

        TLSv1

          EXP-ADH-DES-CBC-SHA        Kx=DH(512)    Au=None    Enc=DES(40)      Mac=SHA1   export    

    If this is reported and it is necessary to get rid of these entries, you can also disable the Diffie-Hellman key exchange algorithm (another component of the two cipher suites described above, designated with Kx=DH(512)).

    To do this, make the following registry changes:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\Schannel\KeyExchangeAlgorithms\Diffie-Hellman]
    "Enabled"=dword:00000000

    You have to create the Diffie-Hellman sub-key yourself. Make this change and reboot the server. This step is NOT advised or required. I offer it only as an option to make your server pass the vulnerability scan.

    Keep in mind, also, that this will disable any cipher suite that relies upon Diffie-Hellman for key exchange.

    You will probably not want to disable ANY cipher suites that rely on Diffie-Hellman. Secure communications such as IPSec and SSL both use Diffie-Hellman for key exchange. If you are running OpenVPN on a Linux/Unix server you are probably using Diffie-Hellman for key exchange. The point I am trying to make here is you should not have to disable the Diffie-Hellman Key Exchange algorithm to satisfy a vulnerability scan.
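    Scanner findings like the EXP-ADH lines above can be triaged mechanically before you touch the registry. Here's a small sketch of mine (not part of any scanner's toolkit) that parses cipher-suite lines in that format and flags the ones with anonymous authentication or export-grade key lengths:

```python
import re

# Flag suites with anonymous auth (Au=None) or < 56-bit symmetric keys,
# given report lines like:
#   EXP-ADH-DES-CBC-SHA  Kx=DH(512)  Au=None  Enc=DES(40)  Mac=SHA1  export
def weak_suites(report_lines):
    weak = []
    for line in report_lines:
        m = re.search(r"^\s*(\S+)\s+Kx=(\S+)\s+Au=(\S+)\s+Enc=\w+\((\d+)\)", line)
        if not m:
            continue
        name, kx, au, bits = m.group(1), m.group(2), m.group(3), int(m.group(4))
        if au == "None" or bits < 56:
            weak.append((name, kx, bits))
    return weak

lines = ["  EXP-ADH-DES-CBC-SHA        Kx=DH(512)    Au=None    Enc=DES(40)      Mac=SHA1   export"]
print(weak_suites(lines))  # [('EXP-ADH-DES-CBC-SHA', 'DH(512)', 40)]
```

    Knowing exactly which attribute tripped the scanner (the anonymous DH authentication here, not Diffie-Hellman itself) helps you pick the narrowest registry change.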

    Being secure is a good thing and, depending on your environment, it may be necessary to restrict certain cryptographic algorithms from use. Just make sure you do your due diligence about testing these settings. It is also well worth your time to understand how the vulnerability-scanning software your company just purchased does its testing. A double-sided network trace will reveal both sides of the client and server hello exchange and which cryptographic algorithms each side offers over the wire.

    Jim “Insert cryptic witticism here” Tierney

  • USMT and Invalid User Mapping

    Hi folks, Ned here again. Today I discuss an insidious issue you can see with USMT when you have previously migrated computers between forests with ADMT. This scenario is rare enough that I didn't bother with a KB article - not to mention it's easier to explain with pictures and "wit". If you get into it though, you will need the right answer, as there are some wrong answers floating around the Internet.

    What you see

    You will only have issues here during the loadstate operation, dropping data onto a fresh new computer. At first blush, the error will be generic and useless:

    Error while performing migration

    Failed.

      Software malfunction or Unknown exception

      See the log file for more information.

     

    LoadState return code: 26

    But if you look at the loadstate.log (having made sure to provide /v:5 on the command-line), you see:

    Info [0x000000] Apply started at 4/18/2011 21:17:39
    Error [0x000000] Invalid user mapping: BYAIR\testuser1 -> S-1-5-21-4187903345-2522736461-4139203548-1110
    Info [0x080000] Received request to restart the explorer process from context: SYSTEM
    Info [0x000000] Leaving MigApply method
    Error [0x000000] Error while performing migration
    Warning [0x000000] Internal error 2 was translated to a generic error
    Info [0x000000] Failed.[gle=0x00000091]
    Info [0x000000] Software malfunction or Unknown exception[gle=0x00000091]

    What it means

    An incomplete migration with ADMT causes this issue. Sometime in the past - perhaps distantly enough that it was before your time - this computer and user lived in another AD forest. In my examples below there was once an Adatum.com (ADATUM) domain and currently there's a blueyonderairlines.com (BYAIR) domain. I demonstrate this going from Windows XP to Windows 7, as most people are in that boat currently.

    Let's go backwards through the sands of time and see where I went wrong…

    1. I migrate users with ADMT, making sure to enable SIDHistory migration (this is on by default). See here how my TestUser1 account has its old ADATUM SID added to the sIDHistory attribute after migration between the forests:

    image

    image

    image

    2. I then migrate the users’ computers with ADMT and do not translate profiles during the migration (below should have been checked ON, but was not):

    image

    3. This means that the migrated users’ profilelist registry entries do not translate during migration. Note below how they are still the old domain SID:

    image

    4. Then the user logs on to his migrated computer with his migrated user and ends up with a new profile, as expected. Note how the old profile is still here and the user's new profile is too:

    image

    5. The user – probably confused at this point – calls the help desk. “Where are all my files?” The files are still in the old folder and because I performed sidHistory migration, the user still has full access to that folder and its contents. The user or help desk might copy files over into his new profile (probably just “My Documents to My Documents”). Or they might just leave the files there and the user knows to go access older files out of their old folder. It’s just a folder to them, they don’t understand profiles after all. It’s highly unlikely a user was still accessing this old location from here as it is obfuscated in Windows Explorer, but unlikely isn’t impossible - they could have been shown how by a misguided help desk staff who should have copied the files and alerted me to the real problem.

    At this point, the “good” profile is “c:\documents and settings\testuser1.byair”. The “bad” one with data in it from the ADMT migration is the “c:\documents and settings\testuser1”. Again, someone may have reconciled these file contents by hand or they may not.

    image

    image

    6. Years go by and I retire to Barbados. Along come you and your project to migrate old XP computers to the new Win7 machines. The ADMT migration is ancient history and that old domain probably doesn’t even exist anymore.

    7. USMT scanstate.exe gathers the XP computer and gets all the profiles. If you watch this closely, you see that there are reconciliation problems:

    image

    8. You run USMT loadstate.exe on the Win7 computer that has none of these user profiles at all and because the user to SID mapping is invalid (there is no such thing as a user with that SID directly set anymore but there is a sidHistory that belongs to another profilelist SID entry for a “different” user), we finally error.

    Uggh. Technically speaking, nothing is broken here: USMT is working by design and ADMT was following my (bad) instructions.

    What you can do

    There are three realistic solutions to this, one overkill solution, and one that you should never use – I include that last one only for completeness and to stop people from making a mistake through ignorance. Five options in all. All SIDs provided below are samples, of course.

    • Option 1: You tell scanstate to ignore the old domain by providing the SID of the new domain during the gather. From my example SIDs and domain names above, you know that the old dead domain SID was S-1-5-21-4187903345-2522736461-4139203548 and the new one is S-1-5-21-1120787987-3120651664-2685341087. You can also use PSGETSID to look up the domain SID. You could choose only to migrate the data from the current domain using this:

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /o /c /v:5 /ue:* /ui:S-1-5-21-1120787987-3120651664-2685341087* /ui:%computername%\*

    image

    That quickly gathers only the current “good domain” user profiles and local user profiles, skipping the old “bad domain” profiles. If you have multiple “good” domains, you may need to supply multiple /ui arguments, one for each domain you want to include.
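    Under the hood, /ui and /ue are just wildcard matches against the user name or SID, and per the USMT documentation /ui wins when a user matches both. A minimal sketch of that selection logic as I understand it (my model, not USMT's actual code):

```python
from fnmatch import fnmatch

# Sketch of USMT's /ui and /ue selection: a user explicitly matched by a
# /ui pattern is migrated even if a /ue pattern also matches (per the USMT
# docs, /ui takes precedence over /ue); otherwise /ue excludes; otherwise
# users migrate by default.
def is_migrated(user_or_sid, ui_patterns, ue_patterns):
    if any(fnmatch(user_or_sid, p) for p in ui_patterns):
        return True   # explicitly included
    if any(fnmatch(user_or_sid, p) for p in ue_patterns):
        return False  # excluded
    return True       # migrated by default

ui = ["S-1-5-21-1120787987-3120651664-2685341087*"]  # good BYAIR domain SID
ue = ["*"]                                           # /ue:* excludes the rest
print(is_migrated("S-1-5-21-1120787987-3120651664-2685341087-1110", ui, ue))  # True
print(is_migrated("S-1-5-21-4187903345-2522736461-4139203548-1110", ui, ue))  # False
```

    This is why the /ue:* plus /ui:&lt;domain SID&gt;* combination works: everything is excluded except what the /ui prefix explicitly rescues.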

    • Option 2: Conversely, you skip the old domain. This is harder to predict – what if there were many ADMT migrations that collapsed various domains? You’d have to track down all the “bad” SIDs, probably by trial and error, discovering old SIDs as the migration proceeds. Note that this time the syntax does not include /ui.

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /o /c /v:5 /ue:S-1-5-21-4187903345-2522736461-4139203548*

    image

    Note: With both options 1 and 2, the issue remains that the old so-called “bad domain” profile might still contain data, as I outlined in the "What it means" section. You must do some investigation to find out if these old profiles really are just garbage and can be freely abandoned, or if you need to gather that data too. On the plus side, you are not actually deleting data – you are only opting out of migrating it. The old data is still there, and if you’re following best practices, it’s contained in the backup taken before the migration. Your users decide whether the data is valuable by asking for it back or not. The backup can be stored locally on that new computer for safekeeping indefinitely.

    • Option 3: You can remove the invalid profilelist registry entries beforehand using a script. The net is that this has the same risk as above (it will prevent USMT from gathering the file data with that supposedly abandoned profile) and is probably more work (the customer or MCS has to write the script). Again, that profile may not actually be abandoned from a data perspective, only from a user logon perspective. However, if your investigations show that they truly are abandoned and they no longer contain useful data, this is an option.
    • Option 4 (overkill): You can extend the above three options with a custom XML file that gathers up all the important user shell folders (My Documents, Favorites, and Desktop are likely to hold any meaningful user files in an XP profile) – effectively a catchall override that will not leave data behind. This drops the files into a central spot, maintaining relative folder paths so that you can go rescue data as needed. Once you think users have had enough time, you can delete the folder to reclaim disk space, or leave it there indefinitely. The main downsides to this technique are that it uses extra disk space, it’s complicated, it makes the migration slower, and it needs some ACLs to ensure you aren’t exposing user data to everyone on the computer (I’d recommend setting it to Administrators and Help Desk-only; if a user has “missing data” problems, they call the help desk and you retrieve it. If no one complains for 3 months, you can probably just delete it). The main advantage is that it leans towards caution and prudence; always a good plan when it comes to a user's data.

    That being said, this option 4 is probably overkill, especially if you are getting backups before scanstate like we recommend. If you do the legwork to find out the status of all these old profiles, find out how people dealt with the new profiles after migration, and determine they truly aren’t being used, I’d not bother with this and stick with Options 1, 2, or 3 above.
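    If you go with Option 3, the “invalid” ProfileList entries are simply the subkeys whose SID has the dead domain's SID as its domain portion. A hypothetical filter illustrating that check (the actual registry enumeration and deletion are left to your script):

```python
# Hypothetical helper for Option 3: given ProfileList subkey names (SIDs)
# and the dead domain's SID, pick out the stale entries a cleanup script
# would remove. A user SID is the domain SID plus a final RID, so we
# compare everything before the last hyphen-separated component.
def stale_profiles(profile_sids, old_domain_sid):
    return [s for s in profile_sids
            if s.rsplit("-", 1)[0] == old_domain_sid]

profiles = [
    "S-1-5-21-4187903345-2522736461-4139203548-1110",  # old ADATUM-era SID
    "S-1-5-21-1120787987-3120651664-2685341087-1110",  # current BYAIR SID
    "S-1-5-18",                                        # LocalSystem, untouched
]
print(stale_profiles(profiles, "S-1-5-21-4187903345-2522736461-4139203548"))
# ['S-1-5-21-4187903345-2522736461-4139203548-1110']
```

    Whatever form your script takes, dry-run it first and log what it would delete; as noted above, a profile that is stale for logons may not be stale for data.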

    Update: Octavian makes the excellent point that Favorites and Desktop are likely filled with valuable user goo. I updated the sample.

    Sample XML named getdocsandset.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/coallescemydocumentstoasafeplace">
     <component type="Documents" context="System">
      <displayName>copy all the useful shell folders to a hidden folder on destination</displayName>
      <role role="Data">
       <rules>
        <!-- All copied shell folders are saved so admins can decide what to do with the data later -->
        <include>
         <objectSet>
          <pattern type="File">C:\documents and settings\*\my documents\* [*]</pattern>
          <pattern type="File">C:\documents and settings\*\desktop\* [*]</pattern>
          <pattern type="File">C:\documents and settings\*\favorites\* [*]</pattern>
         </objectSet>
        </include>
        <!-- Copies the shell folders to a special folder for admin attention or destruction -->
        <locationModify script="MigXmlHelper.RelativeMove('C:\documents and settings','c:\extramydocsdata')">
         <objectSet>
          <pattern type="File">C:\documents and settings\*\my documents\* [*]</pattern>
          <pattern type="File">C:\documents and settings\*\desktop\* [*]</pattern>
          <pattern type="File">C:\documents and settings\*\favorites\* [*]</pattern>
         </objectSet>
        </locationModify>
       </rules>
      </role>
     </component>
    </migration>

     

    scanstate c:\store /i:migdocs.xml /i:migapp.xml /o /c /v:5 /ue:* /ui:S-1-5-21-1120787987-3120651664-2685341087* /ui:%computername%\* /i:getdocsandset.xml

     

    loadstate c:\store /i:migdocs.xml /i:migapp.xml /c /v:5 /i:getdocsandset.xml

    Just to be sure in this case, you now have all the profiles from the source computer saved to a central location after migration:

    image
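    RelativeMove keeps each file's path relative to the old base, so the per-user folder structure survives intact under the new root. A quick sketch of that path math (my illustration of the behavior, not USMT's implementation):

```python
import ntpath

# Illustrate what a RelativeMove-style remap does to each gathered file:
# re-root the path under a new base while preserving the per-user relative
# layout. ntpath handles Windows-style paths on any platform.
def relative_move(path, old_base, new_base):
    rel = ntpath.relpath(path, old_base)   # e.g. testuser1\my documents\...
    return ntpath.join(new_base, rel)

src = r"C:\documents and settings\testuser1\my documents\budget.xls"
print(relative_move(src, r"C:\documents and settings", r"c:\extramydocsdata"))
# c:\extramydocsdata\testuser1\my documents\budget.xls
```

    Because the per-user subfolders are preserved, an admin rescuing data later can tell at a glance which user each file belonged to.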

    You set permissions to Administrators-only and Hidden, mainly to stop users from mistakenly using this folder as live data. You can do that through a batch file like this sample:

    icacls.exe c:\extramydocsdata\ /grant administrators:(F) /inheritance:r

    icacls.exe c:\extramydocsdata\* /grant administrators:(F) /inheritance:e /t

    attrib.exe +H c:\extramydocsdata

    • Option 5 (DO NOT USE): Remove the sidHistory entry for all these migrated users in AD. This is categorically the worst and most dangerous solution, as you probably have thousands of users accessing files, databases, email, etc. with their SID history alone. I mention this option only to be complete; it is highly discouraged, as you could cause a massive outage that would be very difficult and time-consuming to repair (probably an authoritative restore of all users in the forest; a nightmare scenario for most customers).

    Final thoughts

    Migrating users and computers without translating profiles is usually the wrong plan. See the ADMT Migration Guide in section "Translate Local User Profiles". Even without USMT in the mix here, the users were unhappy campers after ADMT and Evil Twin Ned finished with them.

    Until next time.

    Ned "I like MT" Pyle