Blog - Title

August, 2009

  • The AD Recycle Bin: Understanding, Implementing, Best Practices, and Troubleshooting

    Ned here again. Starting in Windows Server 2008 R2, Active Directory now implements a true recycle bin. No longer will you need an authoritative restore to recover deleted users, groups, OU’s, or other objects. Instead, it is now possible to use PowerShell commands to bring back objects with all their attributes, backlinks, group memberships, and metadata. AD Recycle Bin (ADRB) was a long time coming and it definitely has its idiosyncrasies, but I think you are going to love it.

    Today I am going to talk about a few aspects of this new system:

    • Understanding how ADRB works under the covers.
    • What the requirements are and how to turn ADRB on.
    • Using ADRB, along with some best practices.
    • Troubleshooting common issues people run into with ADRB.

    Armed with this information, you should be able to speak with authority on the AD Recycle Bin and perhaps, save your company from a disaster someday.

    Let’s get cranking, IT super hero.

    How AD Recycle Bin Works

    Simply put, ADRB allows you to recover objects immediately, without the need to use your System State backups, latent sites, or 3rd party add-ons. It does this by implementing two new attributes, and using two existing attributes:

    • isDeleted

      • Has existed since Windows 2000
      • Exists on every object
      • Describes if an object is deleted but restorable
    • isRecycled

      • New to Windows Server 2008 R2
      • Exists on every object once it is recycled
      • Describes if an object is deleted but not restorable
    • msDS-deletedObjectLifetime

      • New to Windows Server 2008 R2
      • Is set on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=COMPANY,DC=COM” container
      • Describes how long a deleted object will be restorable
    • tombstoneLifetime

      • Has existed since Windows 2000
      • Is set on the “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=COMPANY,DC=COM” container
      • Describes how long a deleted object will not be restorable

    Note: I am not mentioning another dozen new attributes that were created for ADRB, just covering the ones that really glue it all together. To see all the new attributes in the 2008 R2 schema for ADRB, review Windows Server 2008 R2: Schema Updates.

    Now pay close attention; this gets complicated:

    1. You have a live object - a user account called SaraDavis that lives in the Sales OU in the Contoso.com domain.

    2. An administrator deletes the SaraDavis object.

    3. SaraDavis is moved into the container CN=Deleted Objects,DC=Contoso,DC=Com.

    4. SaraDavis has its isDeleted attribute set to TRUE.

    Note: At this point, SaraDavis is a logically deleted object that can be recovered by the administrator, and will contain all of its data. The amount of time that SaraDavis can be recovered is controlled by the Deleted Object Lifetime (DOL), which is set on the msDS-deletedObjectLifetime attribute. By default, it will be the same number of days as the Tombstone Lifetime (TSL). The TSL set for a new forest since Windows Server 2003 SP1 has been 180 days*, and since by default DOL = TSL, the default number of days that an object can be restored is therefore 180 days. If tombstoneLifetime is NOT SET or NULL, the tombstone lifetime falls back to the Windows default of 60 days. This is all configurable by the administrator. Stay with me here.

    5. After the Deleted Object Lifetime has been exceeded - remember, 180 days by default - SaraDavis has its isRecycled attribute set to TRUE. Its isDeleted attribute stays set to TRUE. The SaraDavis object stays in the CN=Deleted Objects,DC=Contoso,DC=Com container.

    Note: At this point, SaraDavis is a recycled object that cannot be recovered by an administrator, and no longer contains all of its attribute data. Its only purpose now is to let other DC’s know that the object is gone and that the object is now a normal, run of the mill tombstone.

    6. After the SaraDavis recycled object has existed for the value of the Tombstone Lifetime, it is then physically deleted from the database via garbage collection. At the next online defrag, that whitespace will be recovered from the database.

    * The tombstone lifetime in a new forest is always 180 days.


    That’s a pretty hard read, so here’s a diagram that hopefully fills in the gaps for you:

    image

    By the way: when I use the term “FALSE” for these attributes, I’m simplifying things. A more precise term would be “NOT TRUE”, as if the value is not set, it counts as FALSE. Also, the isRecycled attribute will not exist on an object until the object is actually recycled.

    If you’re wondering why isRecycled actually means ‘not recoverable’, remember that plenty of applications know about isDeleted from the past 10 years. We couldn’t go change what isDeleted meant!

    Requirements and Enabling

    Forest Requirements

    In order to turn on AD Recycle Bin you will need to have implemented the Windows Server 2008 R2 forest functional level. Wait, come back! Despite the terror it seems to inspire in our customers, increasing functional levels is not a big deal. In order to do it, you must:

    1. Have extended your schema to Windows Server 2008 R2.
    2. Have only Windows Server 2008 R2 DC’s in your forest.
    3. Raise your domain(s) functional level.
    4. Raise your forest’s functional level.

    Note: Did you know that in Windows Server 2008 R2, you can actually lower the functional level back to 2008? As long as you have not turned the Recycle Bin feature on, the domain and forest functional levels that are at 2008 R2 can be reverted to 2008 with a simple PowerShell command. Here's an example of lowering it back to 2008 when it was already at 2008 R2:

    Set-ADForestMode -Identity contoso.com -Server dc1.contoso.com -ForestMode Windows2008Forest

    Set-ADDomainMode -Identity child.contoso.com -Server dc2.child.contoso.com -DomainMode Windows2008Domain

    This means you can go to the R2 functional level, make sure your environment is not having any issues, then if you are satisfied you can enable the Recycle Bin. At that point you can no longer revert.

    Ok, back on topic.

    Enabling AD Recycle Bin

    To turn on the Recycle Bin you will use AD PowerShell. Don’t you roll your eyes at me! I know there are some PowerShell haters out there but if you want to recycle, you are going to have to bend a little. Don’t worry, it won’t hurt.

    1. Log on to your “Domain Naming Master” DC as an Enterprise Administrator and start PowerShell.exe - it’s that big blue icon on your taskbar.

    2. Load the AD PowerShell module:

    Import-module ActiveDirectory

    image

    3. Run the following cmdlet to turn on the Recycle Bin:

    Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target <your forest root domain name>

    So for example, where my forest root domain is contoso.com:

    image
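
    In case that screenshot doesn’t come through, the command for my example forest would look something like this (contoso.com stands in for your own forest root name):

    ```powershell
    # Enable the AD Recycle Bin for the forest (the forest name here is an example)
    Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'contoso.com'
    ```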

    4. The command will prompt you for a last chance. Enter “Y” to turn the Recycle Bin on.

    image

    5. That’s it, you’re done.

    Note: If you just can’t be bothered to log on to your domain naming master or type in forest names, you can use the following PowerShell command to do everything for you (and learn a useful technique for using object properties):

    Enable-ADOptionalFeature "Recycle Bin Feature" -server ((Get-ADForest -Current LocalComputer).DomainNamingMaster) -Scope ForestOrConfigurationSet -Target (Get-ADForest -Current LocalComputer)

    I don’t mind if you copy and paste. ;-)

    A final critical point: The AD Recycle Bin is not retroactive. Turning it on after someone has deleted all your users will not help you!

    Controlling the Lifetime of Deleted Objects

    To control the length of time that deleted objects will be recoverable, you will need to modify the msDS-deletedObjectLifetime attribute that lives on the Directory Service container. Microsoft really hopes you won’t mess with it, but I know you will, so here’s how to do it correctly in PowerShell. Remember that you are setting this value in days:

    Set-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=<your forest root domain>" -Partition "CN=Configuration,DC=<your forest root domain>" -Replace:@{"msDS-DeletedObjectLifetime" = <value in days>}

    For example, in my Contoso.com forest I will set my Deleted Object Lifetime to 365 days:

    image
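
    Spelled out, that example would be roughly the following (substitute your own forest root for contoso.com):

    ```powershell
    # Set the Deleted Object Lifetime to 365 days (forest root name is an example)
    Set-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=contoso,DC=com" `
        -Partition "CN=Configuration,DC=contoso,DC=com" `
        -Replace:@{"msDS-DeletedObjectLifetime" = 365}
    ```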

    To see the current Deleted Object Lifetime, I use Get-AdObject:

    image
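
    If that screenshot is unreadable, the query looks something like this (again, contoso.com stands in for your forest root):

    ```powershell
    # Read the current DOL and TSL values from the Directory Service container
    Get-ADObject -Identity "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=contoso,DC=com" `
        -Partition "CN=Configuration,DC=contoso,DC=com" `
        -Properties msDS-DeletedObjectLifetime,tombstoneLifetime
    ```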

    Using the Recycle Bin

    Restoring Single Objects

    We’ve got everything setup, so let’s start with a simple restore scenario. Again, we will use PowerShell to do all the work.

    The Sales OU contains a few users, including SaraDavis:

    image

    Last night one of the administrators was told that an employee named Sarah Davis had left the company and her account needed to be deleted. Unfortunately, the email from HR misspelled her name, and so the Administrator deleted Sara Davis. Let’s look at a few ways to examine the deleted user information with PowerShell:

    Get-ADObject -filter 'isdeleted -eq $true -and name -ne "Deleted Objects"' -includeDeletedObjects -property *

    The command above will list out all deleted objects and all the attribute data on those objects. So I run it:

    image

    image

    Note how all the attribute data has been preserved, including group memberships - SaraDavis was a member of the Sales VPs group. Ouch, deleting an executive is never good for a career.

    There is a ton of data being returned here, and while interesting I’m not sure I care all that much. When restoring users I just need the bare minimum information to make sure that this is the right object I need to get back; I don’t want this whole Sarah/Sara debacle again. So I’ll narrow the output with some pipelining:

    Get-ADObject -filter 'isdeleted -eq $true -and name -ne "Deleted Objects"' -includeDeletedObjects -property * | Format-List samAccountName,displayName,lastknownParent

    image

    By adding the pipeline above, I am feeding the query results to the Format-List cmdlet, returning the user’s logon ID, display name, and last known parent location when it was deleted; much more concise. I can use a couple of methods to restore the user, such as querying for the user and feeding it through a pipeline to Restore-ADObject:

    Get-ADObject -Filter 'samaccountname -eq "SaraDavis"' -IncludeDeletedObjects | Restore-ADObject

    I can also simply call Restore-ADObject directly as long as I have dumped out the user’s distinguished name or GUID:

    image
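
    Spelled out, a direct restore looks something like this; the placeholder would come from your earlier Get-ADObject output:

    ```powershell
    # Restore a single object directly by its objectGUID
    # (placeholder - use the GUID or distinguished name from your own query)
    Restore-ADObject -Identity "<objectGUID of SaraDavis>"
    ```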

    Either way, the user is restored.

    Restoring multiple objects

    In this next scenario it turns out that someone dropped an apple on their keyboard and deleted all the users in the Sales OU (yes, I have heard that reason in real life). Until I get those users back they can’t do their work, and my company can’t sell its products. Knowing what I know now from the first scenario, this starts getting easier and more familiar.

    Since there have been a bunch of users deleted, returning the data in a list format isn’t ideal. This time I’ll use a table format:

    Get-ADObject -filter 'isdeleted -eq $true -and name -ne "Deleted Objects"' -includeDeletedObjects -property * | Format-Table msds-lastKnownRdn,lastknownParent -auto -wrap

    To restore the actual users, I will simply base my query on the last known parent OU. Since all the users in Sales were deleted, my query simply finds all objects in that OU and pipelines them to Restore-ADobject:

    Get-ADObject -filter 'lastKnownParent -eq "OU=Sales,DC=Contoso,DC=com"' -includeDeletedObjects | restore-adobject

    image

    Restoring all objects based on time and date

    In a large complex environment I may not simply want to restore all users deleted from an OU. For example, if I had all my users stored in the default Users container, simply restoring every object could bring back users that were supposed to be deleted. In that case I can use a date and time rule to restore only the objects that were deleted by a provisioning script that went berserk at 1:40 AM. To get these users back, I will first populate a variable that describes the time criteria:

    $changedate = New-Object Datetime(2009, 8, 22, 1, 40, 00)

    This variable stores the output of the Datetime conversion for 1:40:00 AM, August 22, 2009. Now I can search for any objects deleted after that time using a variation on my usual syntax. Part of my filter will now include ‘whenChanged is greater than <date time>’:

    Get-ADObject -filter 'whenChanged -gt $changedate -and isDeleted -eq $true' -includeDeletedObjects -property * | ft samaccountname,lastknownparent -auto -wrap

    image

    Having examined what I could be restoring, I then restore those objects:

    image
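
    That restore is the same date-based query, piped to Restore-ADObject instead of Format-Table; a sketch (note I also exclude the Deleted Objects container itself):

    ```powershell
    # Restore everything deleted after $changedate
    Get-ADObject -filter 'whenChanged -gt $changedate -and isDeleted -eq $true -and name -ne "Deleted Objects"' `
        -includeDeletedObjects | Restore-ADObject
    ```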

    This is one of the powers of AD Recycle Bin over the older authoritative restore system. You never had this much control before, and were stuck restoring extraneous objects even if they were intentionally deleted. Not to mention your backups were usually from yesterday, not 30 minutes ago.

    Restoring all objects in a deleted OU

    A key point to understand and remember with AD Recycle Bin is that you must restore hierarchically; a parent object must be restored before a child object. So if I was to delete an entire OU and all its contents, I must first restore the OU before I can restore its contents.

    This time someone managed to delete the entire Sales OU and its five users. In order to demonstrate how objects are stored in the Deleted Objects container I will be doing a few extra steps here. First, I’ll dump out all deleted objects with their last known parent and last known relative distinguished name (RDN):

    Get-ADObject -filter 'isDeleted -eq $true -and name -ne "Deleted Objects"' -includeDeletedObjects -property * | ft msds-lastKnownRdn,lastKnownParent -auto -wrap

    image

    Neat, I can see all the deleted users, including the Sales OU. Note how the lastKnownParent attribute for the deleted users is actually the deleted Sales OU’s distinguished name. If I tried to restore those child objects right now nothing would happen, as their parent is deleted. So I’d better restore the Sales OU first; to do that I will specify the OU’s last known RDN and its last known parent:

    Get-ADObject -filter 'msds-lastKnownRdn -eq "Sales" -and lastKnownParent -eq "DC=contoso,DC=com"' -includeDeletedObjects | Restore-ADObject

    image

    Just for grins, before I restore the user objects I’ll take a look at the same query I ran originally. Notice how the last known parent has magically changed! Now if I run the restore, pointing the last known parent to the Sales OU, all the child objects are restored.

    image
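
    That final restore, spelled out, is essentially the same last-known-parent query from the earlier scenario, now pointed at the restored OU:

    ```powershell
    # Restore all objects whose last known parent is the (now restored) Sales OU
    Get-ADObject -filter 'lastKnownParent -eq "OU=Sales,DC=Contoso,DC=com"' -includeDeletedObjects | Restore-ADObject
    ```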

    Restoring multiple objects and nested OU’s

    Ok, last one. Consider this complex restore scenario:

    image

    We have multiple child OU’s, each with child user objects. And now someone has gone and deleted the parent Sales OU. Restoring all those parent-child relationships is going to be a bit of a drag, right?

    Not if you keep this script handy: http://technet.microsoft.com/en-us/library/dd379504(WS.10).aspx

    By running this script and specifying a last known RDN and perhaps last known parent, it will walk all child objects and do the work for you. It even has pretty colors. Slick and simple.

    image

    Best Practices

    After the last year of testing experience, here are some of the best practices MS Support has come up with. This list may grow, but I doubt it will shrink. ;-)

    • Keep taking real System State backups – The AD Recycle Bin makes object restores quick and easy. But it is not going to help you if all four of your DC’s are ruined by the same leaky plumbing in your data center. And you won’t be restoring the deleted contents of SYSVOL with the Recycle Bin. And you won’t be using the Recycle Bin to undo the damage from a script that replaces every user’s mailbox attribute with the wrong server name.
    • Lowering the deleted object lifetime is not recommended – There may be a temptation to drop msDS-deletedObjectLifetime down to a few days in order to save space in your AD database. The longer you keep deleted objects around the better; you wouldn’t believe how old some of the restores I’ve walked customers through have been. And when it gets down to it – hard drive disk space is ridiculously cheap. A massive AD database would be 40GB; you cannot buy a platter-based hard drive at a Best Buy that’s smaller than 160GB. In fact, when I looked at the lowest end small business servers offered by HP and Dell, their smallest drives were 160GB too. And this is a sub-$300USD Dell!
    • Practice and train – Don’t just turn on the Recycle Bin. Run through disaster recovery exercises periodically to make sure you and your staff know what they are doing. You don’t want to be figuring out how to recover your CEO’s user account; it should simply be a routine restore operation. Hyper-V makes test labs cheap and easy.
    • Turn on DS change auditing for deletions – The Recycle Bin will not tell you who deleted your users, groups, and computers. Principal deletions are rare operations in AD and should definitely be audited. I can tell you from many, many CritSits here that after the objects are back, the next thing your management will want to know is “what happened?” Make sure you have an answer, or your own account may be deleted. :-)

    And since you are using Windows Server 2008 R2 at this point, you can even make use of the new group policies for granular auditing:

    image

    • Use PowerShell, not LDP.exe – There are a number of limitations within LDP.EXE, such as its clunky interface and need for cryptic optional settings. The biggest one I see though is that when displaying deleted objects, it cannot display all the attribute data, such as the critical memberOf that shows you group memberships.

    Yes, I know – woooo, scary, there’s no GUI! It’s just a little command-line work; I have faith in you.

    Troubleshooting common issues

    Here are a few errors people tend to run into the first time they start using the Recycle Bin. Don’t worry, they happen to everyone and have straightforward solutions:

    Error: An attempt was made to add an object to the directory with a name that is already in use

    Cause: Someone has created an object with the same distinguished name. This may be because someone jumped the gun and tried to ‘fix’ the deleted user by recreating it. Just move it elsewhere temporarily, make sure it has no other attribute duplications as well, finish the restore of the real object, then go figure out what happened.

    Error: The operation could not be performed because the object's parent is either uninstantiated or deleted

    Cause: The object’s parent was also deleted and hasn’t been restored. Usually it’s an OU. Restore that parent first.

    Error: Illegal modify operation. Some aspect of the modification is not permitted

    Cause: Often this is caused by trying to restore an object without having the Recycle Bin enabled. You may see this error in other scenarios though.

    References and Final Notes

    Make sure you bookmark these sites – they cover tons of info about the Recycle Bin and PowerShell:

    Oh yeah – Recycle Bin works with AD Lightweight Directory Services too. That’s for another blog post.

    That’s all for now. Good luck saving your environment, IT super hero.

    - Ned ’10 cent deposit’ Pyle

  • Tracking a Remote File Deletion Back to the Source

    Ned here again. A long time ago, I blogged about how to track down file deletions in FRS and DFSR. At the end I casually mentioned that auditing should be used if you really want to see who deleted a file from a server. It’s not as easy as simply turning on some security policy, so today I will go into the technique.

    Background

    As we’ve discussed previously, Windows Server 2003 (or older) and Windows Server 2008 (or newer) have very different auditing systems. Win2003’s was based on the auditing introduced in Windows NT 3.5 and works at a very macro level. Win2008’s was based on Vista’s system, and features very granular subcategory-based tracking.

    I’m not covering how to enable auditing in great detail here, it’s well-documented:

    The key in Win2003 is that you audit the Logon/Logoff and Object Access categories. In Win2008 you’ll want to audit the Logon, File System, and File Share subcategories.

    For the actual folders, we only need SUCCESS auditing here (who cares if someone can’t delete a file), and it should be done for the built-in EVERYONE group.
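
    On Win2008, if you prefer the command line to the GUI, auditpol.exe can flip on just those subcategories; a sketch (run from an elevated prompt):

    ```powershell
    # Enable success auditing for the three subcategories we care about
    auditpol /set /subcategory:"Logon" /success:enable
    auditpol /set /subcategory:"File System" /success:enable
    auditpol /set /subcategory:"File Share" /success:enable
    ```

    Keep in mind that domain group policy will trump these local settings, so in a domain you would usually configure the equivalent through GPO.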

    image

    image

    Analysis

    So you’ve got your auditing enabled and you get the fateful call – someone has deleted an important file. This is no big deal from a data standpoint because you have a backup to restore (right?), but you need to find out who needs a talking to.

    Here are the important things to understand:

    1. You must work backwards from the deletion.

    2. There is no single event that will tell you everything.

    Windows Server 2003 Audit Trail

    1. First you must find the file being accessed for deletion – it will be an event 560 and contain the full file name and path on the server. On the file server you open eventvwr.exe and filter on ID 560 and provide the deleted file path as part of the description:

    image

    The file to be deleted is accessed with a DELETE flag – but this does not guarantee it is going to be deleted! Note that you now have the user and the unique Logon ID, plus you have a specific file Handle ID, path, and access flag:

    Event Type:     Success Audit
    Event Source:   Security
    Event Category: Object Access
    Event ID:       560
    Date:           7/16/2009
    Time:           3:41:08 PM
    User:           INTRANET\Administrator
    Computer:       2003-X64-04
    Description:
    Object Open:
           Object Server:  Security
           Object Type:    File
           Object Name:    C:\temp\working.cap
           Handle ID:      1924
           Operation ID:   {0,2159437}
           Process ID:     4
           Image File Name:        
           Primary User Name:      2003-X64-04$
           Primary Domain: INTRANET
           Primary Logon ID:       (0x0,0x3E7)
           Client User Name:       Administrator
           Client Domain:  INTRANET
           Client Logon ID:        (0x0,0x20F206)
           Accesses:       DELETE 
                            ReadAttributes 
           Privileges:     -

    2. Next we filter on event ID 564 and a description of the Handle ID. We see that the file is truly deleted. So this Handle ID was our baby, which means the 560’s info is accurate on who did this.

    Event Type:     Success Audit
    Event Source:   Security
    Event Category: Object Access
    Event ID:       564
    Date:           7/16/2009
    Time:           3:41:08 PM
    User:           INTRANET\Administrator
    Computer:       2003-X64-04
    Description:
    Object Deleted:
           Object Server:  Security
           Handle ID:      1924
           Process ID:     4

    3. So knowing all that, now you go backwards to see where the user came from. You have the unique Logon ID from the 560 event. So now if you filter on event 540 and the Logon ID, you get the user, the computer IP address, and the Logon ID:

    Event Type:     Success Audit
    Event Source:   Security
    Event Category: Logon/Logoff
    Event ID:       540
    Date:           7/16/2009
    Time:           3:40:59 PM
    User:           INTRANET\Administrator
    Computer:       2003-X64-04
    Description:
    Successful Network Logon:
           User Name:      Administrator
           Domain:         INTRANET
           Logon ID:               (0x0,0x20F206)
           Logon Type:     3
           Logon Process:  Kerberos
           Authentication Package: Kerberos
           Workstation Name:       
           Logon GUID:     {edaa0263-3264-463b-838a-6b65c3757482}
           Caller User Name:       -
           Caller Domain:  -
           Caller Logon ID:        -
           Caller Process ID: -
           Transited Services: -
           Source Network Address: 10.10.0.159

    Windows Server 2008 Audit Trail

    1. First you must find the file being accessed for deletion – it will be an event 4663 and contain the full file name and path on the server. On the file server you open eventvwr.exe and filter on IDs 4663, 4624, 5140, and 4660.

    image
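
    As an alternative to clicking through eventvwr, the same filter can be expressed in PowerShell with Get-WinEvent; a sketch (the file name pattern is from this example):

    ```powershell
    # Query the Security log for the four event IDs we care about (elevated prompt required)
    Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4663, 4660, 5140, 4624 } |
        Where-Object { $_.Message -match 'repreport' } |
        Format-List TimeCreated, Id, Message
    ```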

    Then in the results you can use the Find command in eventvwr to look for the actual file path, which gives you the 4663 event. The file to be deleted is accessed with a DELETE flag – but this does not guarantee it is going to be deleted! Note that you now have the user and the unique Logon ID, plus you have a specific file Handle ID, path, and access flag:

    Log Name:      Security
    Source:        Microsoft-Windows-Security-Auditing
    Date:          7/16/2009 9:20:30 AM
    Event ID:      4663
    Task Category: File System
    Level:         Information
    Keywords:      Audit Success
    User:          N/A
    Computer:      2008f-x64-01.humongousinsurance.com
    Description:
    An attempt was made to access an object.
    Subject:
            Security ID:            HI\administrator
            Account Name:           Administrator
            Account Domain:         HI
            Logon ID:               0x121467
    Object:
            Object Server:  Security
            Object Type:    File
            Object Name:    C:\temp\repreport.cmd
            Handle ID:      0x754
    Process Information:
            Process ID:     0x4
            Process Name:  
    Access Request Information:
            Accesses:       DELETE
                      

    2. Next we find the Handle ID matching on event ID 4660. We see that the file is truly deleted. So this Handle ID was our baby, which means the 4663’s info is accurate on who did this.

    Log Name:      Security
    Source:        Microsoft-Windows-Security-Auditing
    Date:          7/16/2009 9:20:30 AM
    Event ID:      4660
    Task Category: File System
    Level:         Information
    Keywords:      Audit Success
    User:          N/A
    Computer:      2008f-x64-01.humongousinsurance.com
    Description:
    An object was deleted.
    Subject:
            Security ID:            HI\administrator
            Account Name:           Administrator
            Account Domain:         HI
            Logon ID:               0x121467
    Object:
            Object Server:  Security
            Handle ID:      0x754
    Process Information:
            Process ID:     0x4
            Process Name:   

    3. For more info, we can examine the 5140 event for this Logon ID. That lets us know the share that was used to access the file (this step is optional, obviously – we can likely derive the share from knowing where the file was deleted).

    Log Name:      Security
    Source:        Microsoft-Windows-Security-Auditing
    Date:          7/16/2009 9:20:24 AM
    Event ID:      5140
    Task Category: File Share
    Level:         Information
    Keywords:      Audit Success
    User:          N/A
    Computer:      2008f-x64-01.humongousinsurance.com
    Description:
    A network share object was accessed.
    Subject:
            Security ID:            HI\administrator
            Account Name:           Administrator
            Account Domain:         HI
            Logon ID:               0x121467
    Network Information:    
            Source Address:         10.90.0.102
            Source Port:            56897
    Share Name:                    
    \\*\C$ 

    4. So knowing all that, now you go backwards to see where the user came from. You have the unique Logon ID from the 4660 and 4663 events. So now if you find the 4624 event for that Logon ID, you get the user, the computer IP address, and the Logon ID:

    Log Name:      Security
    Source:        Microsoft-Windows-Security-Auditing
    Date:          7/16/2009 9:20:24 AM
    Event ID:      4624
    Task Category: Logon
    Level:         Information
    Keywords:      Audit Success
    User:          N/A
    Computer:      2008f-x64-01.humongousinsurance.com
    Description:
    An account was successfully logged on.
    Subject:
            Security ID:            NULL SID
            Account Name:           -
            Account Domain:         -
            Logon ID:               0x0
    Logon Type:                     3
    New Logon:
            Security ID:            HI\administrator
            Account Name:           Administrator
            Account Domain:         HI
            Logon ID:               0x121467
            Logon GUID:             {20451c9b-2fcb-ea46-8e09-32a702a1828a}
    Process Information:
            Process ID:             0x0
            Process Name:           -
    Network Information:
            Workstation Name:       
            Source Network Address: 10.90.0.102
            Source Port:            56897
    Detailed Authentication Information:
            Logon Process:          Kerberos
            Authentication Package: Kerberos
            Transited Services:     -
            Package Name (NTLM only):       -
            Key Length:             0 

    Summary

    And now you know who, when, where, and what. All that’s left is to sit down with that user and demand the why. :-)

    - Ned ‘Polygraph’ Pyle

  • RSAT Released for Windows 7

    Remote Server Administration Tools are RTM; come and get 'em.

    http://www.microsoft.com/downloads/details.aspx?FamilyID=7d2f6ad7-656b-4313-a005-4e344e43997d&displaylang=en

    Some things to keep in mind:

    1. This version is only for Windows 7 (Professional, Enterprise, and Ultimate).
    2. Make sure to uninstall any RC/Beta/whatever copies of RSAT for Win7 you had previously installed.
    3. Make sure to remove any hacked up Vista RSAT copies you might have installed.
    4. Make sure to remove any hacked up Win2003 ADMINPAK copies you might have installed.

    Have fun.

    - Ned 'Snap in to a Snap-In' Pyle

  • Extended Validation support for websites using internal certificates

    Hey all, Rob here again. One feature that is new with Windows Server 2008 R2 / Windows 7 is the ability to configure your internal certification authority hierarchy to issue certificates that can show as Extended Validation certificates.

    So for those of you who do not know, this means that you will get a shaded green bar within Internet Explorer proving that a site is ‘extra trustworthy’. If you want to learn more about extended validation click here. The feature works on the following operating system / IE Versions:

    • Windows XP SP3/ 2003 SP2 – Internet Explorer 8.
    • Windows Vista SP1/Windows Server 2008 – Internet Explorer 7 and 8.
    • Windows 7/2008R2 – Internet Explorer 8.

    image

    Enabling this feature is a two-step process:

    Create a new “Issuance Policy” on a certificate template to support EV certificates:

    The below steps require you to be logged in as an Enterprise Admin unless you have modified the permissions on your certificate templates.

    1. Open the Certificate Templates MMC (CertTmpl.msc).

    2. Create a new Version 2 or Version 3 template (or modify an existing v2/v3 template).

    3. Click on the Extensions tab.

    4. Select Issuance Policies, and click on the Edit button.

    5. Click the Add… button.

    6. Click New… button.

    7. Type in a name for the new Extended Validation Policy. The name for the policy can be anything you like. In my example I used “Contoso Extended Validation (EV)” as the name.

    8. Type in the URL to the Certificate Practice Statement (CPS) for your extended validation policy.

    NOTE: When you create a certificate policy you should have a practice statement defining how the certificate type is to be used, how the certificate type is approved to be issued, and what requirements must be fulfilled before issuance. CPSs are beyond the scope of this blog, however, and you should do your due diligence in crafting one.

    9. The Object Identifier field will be filled in automatically. You can of course replace this with a custom OID that you obtained from an Internet authority that manages OIDs. Be sure to document and copy this OID for later use.

    image

    10. Click OK

    11. Highlight the Issuance Policy you just created and click OK.

    12. Do not check “Make this extension critical” and click OK.

    13. Click “OK” to close the certificate template dialog box.

    Create / modify a Group Policy to support the feature:

    It’s actually pretty easy to set up. You will need either a Windows Server 2008 R2 / Windows 7 machine with the RSAT tools (GPMC) installed, or a Windows Server 2008 R2 server with the Group Policy Management feature added.

    image

    It is important to note that a Windows Server 2008 R2 domain controller is not required; you only need the ability to manage group policies from the newer operating system.

    1. Launch Group Policy Management (GPMC.MSC).

    2. Edit an existing policy / create a new policy.

    3. Navigate to the following location: Computer Configuration\Policies\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities

    4. Right click on Trusted Root Certification Authorities and select Import

    5. You need to import your internal Root Certification Authority certificate using the import wizard.

    6. Once the Root Certification Authority certificate has been imported, right click on the certificate and select “Properties”

    7. Click on the Extended Validation tab.

    8. Paste in the OID from the Issuance Policy you created above.

    9. Click the Add OID button.

    10. Click OK.

    image

    Have fun with Extended Validation and enjoy your green validated address bar in Internet Explorer.

    - Rob ‘OID vey!’ Greene

  • BitLocker and Active Directory

    Paul Fragale coming to you again from the digital world we live in. Today, I want to share with you some information about BitLocker and storing the recovery keys in Active Directory (AD). What is actually created in AD? What happens when I decrypt a drive and re-encrypt it? What about additional drives? What if the drive was encrypted before I implemented the Group Policy to copy the recovery information to AD?

    So let’s dive right in. For this post we are going to focus only on the Windows Vista / Windows Server 2008 implementation. Group Policy is required to configure a client to send the BitLocker recovery information to Active Directory. To set this up, please take a look at Configuring Active Directory to Back up Windows BitLocker Drive Encryption and Trusted Platform Module Recovery Information. A key point to remember is that this needs to be done before encrypting any drives. If a drive is encrypted before the policy is applied to the computer, the BitLocker recovery information will not be uploaded to AD. The only solution currently is to decrypt and then re-encrypt the drive after the policy is applied.

    Once Group Policy is configured, you can then perform the encryption process on a computer. Since you will want to assure yourself that the recovery information is stored in Active Directory, you can check manually. Upon encrypting the drive, a new child object is created under the computer object in Active Directory. The name of the BitLocker recovery object incorporates a globally unique identifier (GUID) and date-time information, for a fixed length of 63 characters. The class for the BitLocker recovery object is ms-FVE-RecoveryInformation. Inside this child object are the attributes required for BitLocker recovery. Here is what you should see under the computer object once you have encrypted a drive.

    image

    Here is a view of what is included in that object using LDP:

    image

    A description of these attributes can be found in Configuring Active Directory to Back up Windows BitLocker Drive Encryption and Trusted Platform Module Recovery Information.
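
    The fixed 63-character name breaks down neatly: a 25-character creation timestamp followed by the 38-character recovery GUID in braces. Here is a minimal sketch of parsing such a name; the sample timestamp and GUID below are made up for illustration, not taken from a real directory.

    ```python
    # Sketch: split the CN of an ms-FVE-RecoveryInformation object into
    # its timestamp and recovery-GUID parts. Sample values are invented.
    import re

    def parse_recovery_cn(cn):
        """Split a BitLocker recovery object name into timestamp and GUID."""
        m = re.fullmatch(r"(.{25})(\{[0-9A-Fa-f-]{36}\})", cn)
        if not m:
            raise ValueError("not a BitLocker recovery object name: %r" % cn)
        return m.group(1), m.group(2)

    cn = "2009-08-01T12:30:45-08:00{063EA4E1-220C-4293-BA01-4754620A96E7}"
    stamp, guid = parse_recovery_cn(cn)
    print(len(cn))   # 63 characters, matching the fixed length described above
    print(stamp)     # 2009-08-01T12:30:45-08:00
    print(guid)      # {063EA4E1-220C-4293-BA01-4754620A96E7}
    ```

    The timestamp part is handy when several recovery objects exist under one computer: it tells you which entry is the most recent.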

    Ok, now that you have an idea of what to look for in Active Directory after implementing BitLocker, let us discuss the administration pieces. What happens if I decrypt a drive? Well, first of all, AD is just a storage container. There are zero functions AD will perform to validate, maintain, or update this information; this is completely handled by BitLocker. BitLocker does not notify AD of a drive decryption, so the ms-FVE-RecoveryInformation object does not get removed. If the user re-encrypts the drive, BitLocker will sync new information to AD, so what you will see is two entries for the same drive. Taking that a step further, you will also see a new entry for each drive encrypted on that system.

    Some key things to remember:

    1. Every drive encrypted creates a new child object. This includes additional drives.

    2. The Group Policy for storing the recovery information in Active Directory needs to be configured and applied to any computer before encrypting the first drive.

    3. If you decrypt a drive, the BitLocker recovery information in Active Directory will remain; it is not updated. If the drive is later re-encrypted, a new child object will be created. The existing ms-FVE-RecoveryInformation object is not deleted or modified.

    4. Active Directory is just a storage location for BitLocker recovery information. All functions are handled by BitLocker on the computer where the drive is encrypted.
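
    Putting points 1 through 3 together: recovery objects only ever accumulate, so a drive that has been decrypted and re-encrypted shows up more than once under the computer object. A rough sketch of that bookkeeping, using invented drive letters and CN values (in a real environment you would read the actual child objects of the computer account):

    ```python
    # Group BitLocker recovery objects by drive to spot drives with more
    # than one entry, i.e. drives that were decrypted and re-encrypted.
    # The drive letters and CN values are hypothetical sample data.
    from collections import defaultdict

    entries = [
        ("C:", "2009-03-02T09:15:00-08:00{11111111-1111-1111-1111-111111111111}"),
        ("C:", "2009-08-01T12:30:45-08:00{22222222-2222-2222-2222-222222222222}"),
        ("D:", "2009-08-01T12:40:00-08:00{33333333-3333-3333-3333-333333333333}"),
    ]

    by_drive = defaultdict(list)
    for drive, cn in entries:
        by_drive[drive].append(cn)

    for drive, cns in sorted(by_drive.items()):
        status = "re-encrypted; older entries are stale" if len(cns) > 1 else "single entry"
        print(drive, len(cns), status)
    ```

    Only the newest entry for a drive matches the current encryption; the older ones are exactly the leftovers described in point 3.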

    Well, that is all I have for now. Hopefully this helps answer some of the more common questions about implementing BitLocker in your Active Directory environment.

    Until next time,

    - Paul Fragale

  • Implementing an OCSP Responder: Part VI Configuring Custom OCSP URIs via Group Policy

    Chris here again. If you have read the previous five parts of the series, you are at this point very familiar with the installation and configuration of the OCSP Responder. I covered implementing the OCSP Responder to support a variety of scenarios. One thing I have not covered, however, is the configuration of the OCSP Client.

    If you have read my blog series on Implementing an OCSP Responder, you will be aware that one of the configuration steps is to specify the OCSP URI on the CA so that it is included in issued certificates. This definitely helps with newly issued certificates, but what about certificates that have already been issued? If you could point clients to an OCSP Responder directly, you would be able to use OCSP with previously issued certificates as well.

    After some leg work, my colleague was able to determine that this feature has existed since Windows Vista Service Pack 1. Needless to say, I felt ecstatic and dumb at the same time: ecstatic that the feature was already implemented, and dumb that I was not aware of it. As of Windows Vista Service Pack 1, you can point clients to a specific OCSP server. You will need Windows Server 2008 servers or Windows Vista clients with RSAT installed to implement this setting as a Group Policy. In other words, there is no requirement to have Windows Server 2008 domain controllers, only a requirement to manage the group policy from a Windows Vista SP1 / Windows Server 2008 computer.

    Directing clients to an OCSP URL for certificates

    The first step is to export the Certification Authority certificate from the CA. Log on to the CA and open a command prompt, then type certutil -ca.cert <CA Name>.cer and press Enter.

    1. Open up the Group Policy Management Console. Find the GPO for which you would like to make the change and right click on that policy and select Edit.

    image

    2. In the Group Policy Editor, navigate to Computer Configuration\Windows Settings\Security Settings\Public Key Policies\Trusted Root Certification Authorities. If your issuing CA is not a Root CA, the CA certificate belongs in the Intermediate Certification Authorities container instead; in that case, import the CA certificate to that container in the Group Policy and add the appropriate OCSP URI there.

    image

    3. This will start the Certificate Import Wizard, click Next

    4. Then on the File to Import page of the wizard, click Browse…

    5. Then browse to the CA certificate that was previously exported, select the certificate and then select Open

    6. Then click Next

    7. On the Certificate Store page, verify that Trusted Root Certification Authorities is selected and select Next

    8. Then click Finish to close the wizard.

    9. When prompted that the import was successful, click OK

    10. Then right click on the certificate that was just imported and select Properties.

    11. Then click on the OCSP tab, enter the URL for the OCSP server you want clients to query (FCOCSP.FourthCoffee.com/ocsp) in the text box, and select Add URL. Also, if you want to disable CRL checking, you can check the Disable Certificate Revocation Lists (CRL) check box. Click OK when finished.

    After Group Policy is updated, you will see two CA certificates for the CA in the Trusted Root Certification Authorities store. This is because the CA certificate was already in that store prior to adding it to Group Policy. Regardless, when the chain is built, the OCSP location that was added via the group policy will be incorporated in the revocation checking process. Now clients will check the OCSP URL that you configured for revocation status even if the OCSP URI is not included in certificates.

    image

    Conclusion

    The option to add the OCSP URI via group policy adds flexibility when using the OCSP Client included in Windows Vista. This feature will also be extremely helpful to customers that have isolated networks, as well as those customers that want OCSP support and are not ready to renew their CA hierarchy. It is also useful if you need to change the DNS name of your OCSP Responder, which may occur for many reasons, including transitioning to a load balanced array or adding additional OCSP Responders.

    Implementing an OCSP responder: Part I Introducing OCSP
    Implementing an OCSP responder: Part II Preparing Certificate Authorities
    Implementing an OCSP responder: Part III Configuring OCSP for use with Enterprise CAs
    Implementing an OCSP responder: Part IV Configuring OCSP for use with Standalone CAs 
    Implementing an OCSP Responder: Part V High Availability
    Implementing an OCSP Responder: Part VI Configuring Custom OCSP URIs via Group Policy

    -Chris “TGIOCSP” Delay

  • Implementing an OCSP Responder: Part V High Availability

    Chris here again. In the four previous parts of this series we covered the basics of OCSP, as well as the steps required to prepare the CA and implement the OCSP Responder. In this section I would like to talk about how to implement a High Availability OCSP Configuration.

    There are two major pieces in implementing the High Availability Configuration. The first step is to add the OCSP Responders to what is called an Array. When OCSP Responders are configured in an Array, the configuration of the OCSP responders can be easily maintained, so that all Responders in the Array have the same configuration. The configuration of the Array Controller is used as the baseline configuration that is then applied to other members of the Array.

    The second piece is to load balance the OCSP Responders. Load balancing of the OCSP responders is what actually provides fault tolerance. I am going to demonstrate using the built in Windows Network Load Balancing feature of Windows Server 2008. You can of course use a third party hardware load balancer if you wish. In this example, we are going to deploy two OCSP Servers in a highly available configuration.

    Firewall Exceptions

    In Windows Server 2008 the Windows Firewall is enabled by default. Depending on the requirements of your enterprise, you may have the firewall in its default state, you may have it turned off, or you may have a custom configuration.

    If you are unfamiliar with Windows Firewall with Advanced Security, you may want to review Windows Firewall with Advanced Security and IPSEC, which has links to a variety of sources for learning about, as well as configuring and implementing, the Windows Firewall with Advanced Security. The document includes a link on how to deploy firewall settings with Group Policy.

    In the Windows Firewall with Advanced Security there are three types of profiles:

    • Domain. Windows automatically identifies networks on which it can authenticate access to the domain controller for the domain to which the computer is joined in this category. No other networks can be placed in this category.
    • Public. Other than domain networks, all networks are initially categorized as public. Networks that represent direct connections to the Internet or are in public places, such as airports and coffee shops should be left public.
    • Private. A network will only be categorized as private if a user or application identifies the network as private. Only networks located behind a NAT device (preferably a hardware firewall) should be identified as private networks. Users will likely want to identify home or small business networks as private.

    http://technet.microsoft.com/en-us/library/cc748991(WS.10).aspx

    In a higher security environment you may want to configure this setting for a specific profile. For example inside an enterprise you may want to enable the rule just for the Domain Profile.

    In this example when we configure the rules, we are configuring them for Any profile, which will allow the responder to be managed regardless of which profile is applied.

    When you install the OCSP Role the following Inbound Rules will be configured on the Windows Firewall:

    World Wide Web Services (HTTP Traffic-IN)

    World Wide Web Services (HTTPS Traffic-IN)

    Also, the following Outbound Rule will be enabled:

    Online Responder Service (TCP-Out)

    These rules allow the OCSP Responder to receive the OCSP requests from the client and to respond to the OCSP clients.

    You will also need to enable the following rules to manage the OCSP Responders as well as allowing the OCSP Responder to sync the configuration with the Array Controller:

    Online Responder Service (DCOM-In)

    Online Responder Service (RPC-In)

    To enable the rules, open the Windows Firewall with Advanced Security MMC (WF.msc) and click on Inbound Rules. Find the rule, right click on the rule and select Enable Rule from the context menu.

    image

    You should perform this action on every OCSP Responder that will be a member of the array. A more scalable solution is to place all of the OCSP Responders in a common OU, and use group policy to maintain a consistent configuration.

    CA Preparation

    In Part II of this series we discussed preparing the certificate authorities for use with the OCSP Responder. One of the configuration steps was to configure the Authority Information Access (AIA) extensions with the OCSP extension that includes the URL pointing to the OCSP Responder. When configuring an OCSP Responder in a load balanced configuration, you will need to specify the name of the load balancer. Below is a diagram of the OCSP infrastructure that I will walk through implementing in this blog posting. Notice that the names of the two OCSP Responders are FCOCSP01.FourthCoffee.com and FCOCSP02.FourthCoffee.com. You will also notice that I have decided to assign the name FCOCSP.FourthCoffee.com to the NLB cluster. Since I want clients to access the load balancer and let the load balancer determine which OCSP Responder each OCSP request goes to, I must specify FCOCSP.FourthCoffee.com in the OCSP URI.

    image

    DNS Configuration

    As mentioned above, you will want OCSP clients to send their OCSP requests to the load balancer. This allows the load balancer to balance requests, which is especially important if one of the OCSP Responders is offline. To ensure that clients can resolve the DNS name of the cluster, you will want to register the hostname in DNS.

    To register the A record for the NLB cluster in DNS, perform the following steps:

    1. Open up the DNS Manager MMC (dnsmgmt.msc)

    2. Right click on the appropriate zone and select New Host (A or AAAA)… from the context menu.

    image

    3. In the New Host dialogue box enter the hostname that will be used for the NLB Cluster, and enter the appropriate IP address. You can provide additional configuration such as Create associated (PTR) record if appropriate for your environment.

    image

    Configuring OCSP Responder Array

    In the upcoming section we will configure two OCSP Responders in an array. The purpose of configuring an array is to maintain the same configuration between OCSP Responders. It is important to be aware of which Revocation Configurations you will be supporting with the Array. In the case of Revocation Configurations that support Enterprise CAs and that are configured to automatically enroll for an OCSP signing certificate, the process is somewhat transparent, since the responders added to the array will automatically request the OCSP signing certificate. In the case of Revocation Configurations that support Standalone CAs, an OCSP signing certificate will need to be manually requested, installed, and configured. And of course the OCSP Responder can support both types of Revocation Configurations on the same responder.

    OCSP Responder Array Setup

    Prerequisites: The Windows Firewall has been configured as shown in the Firewall Exceptions section above.

    Steps:

    1. An OCSP Signing certificate template must of course be available on the Enterprise CA that the array is going to provide revocation information for. All OCSP Responders that are going to be members of the Array must have Read and Enroll permissions on that template. Alternatively, if the Array is going to support a Revocation Configuration for a Standalone CA, the OCSP signing certificate will need to be installed manually. Remember to give read permission on the private key for any OCSP signing certificates that are installed manually. If you are unfamiliar with this process, instructions for giving the Network Service read permissions to the private key of the OCSP signing certificate are available in Part I of this series.

    2. Configure the OCSP Responder that will become the Array Controller. For guidance on deploying an OCSP Responder please see Part III and Part IV of this series.

    3. Configure the first OCSP Responder as an Array Controller.

    4. Add additional OCSP Responders to the array.

    Note: If using OCSP Responders on Hyper-V guests, see extra steps here on how to configure NLB on virtual guests.

    I will be covering the final two steps as the other steps are covered elsewhere in this Blog Series.

    1. In the Online Responder Management Console, expand Array Configuration. Select the Responder that you wish to make the Array Controller, right click on the responder name, and select Set as Array Controller from the context menu.

    image

    2. To add an OCSP Responder to the array, right click on Array Configuration, and select Add Array Member from the context menu.

    image

    3. You will then receive the Select Computer dialog box. Click on the Browse… button.

    4. Enter the name of the OCSP Responder that you wish to add, and click on the Check Names button.

    5. Once the computer name of the OCSP Responder has been resolved, click OK.

    6. The Select Computer dialogue box will now be populated with FQDN of the computer that is hosting the Online Responder, click OK.

    7. You will then be prompted to confirm that you wish to add the array member. This dialogue box will give you one last chance to abort before the configuration of the OCSP Responder is overwritten with the configuration of the Array Controller. Click Yes to continue.

    image

    8. To verify the configuration expand Array Configuration in the OCSP MMC and select the name of the Responder that was just added. The Revocation Configuration Status should be the same as illustrated in the figure below.

    image

    Note: If you are using a manually installed certificate, such as from a Standalone CA, you will receive the error in the figure below.

    image

    To rectify this issue you will need to manually assign the certificate after it is installed in the Local Machine store. Expand Array Configuration, click on the name of the OCSP server that was just added to the Array, and right click on the Revocation Configuration that will be using a manually assigned signing certificate. Select Assign Signing Certificate from the context menu.

    image

    Select the appropriate certificate, and click OK.

    image

    You will then get the error listed below. This error simply indicates that the OCSP Responder has not yet retrieved revocation information, so it cannot verify that the configuration is correct.

    image

    If you would like to clear this error, right click on Array Configuration and select Refresh Revocation Data.

    image

    Installing the Network Load Balancing Feature

    Before you install and configure the NLB cluster, there are some key items you will need to know ahead of time:

    • What is the IP address you are going to assign to the NLB cluster?
    • What DNS name are you going to associate with this cluster?

    Before you can configure the NLB Cluster, you must first install the Network Load Balancing feature on all of the OCSP Responders that will be a member of the NLB cluster.

    To install the NLB feature, open a command prompt and type ServerManagerCmd -install NLB, as illustrated below.

    image

    Configuring the NLB Cluster

    1. Once the Network Load Balancing feature is installed, open the Network Load Balancing Manager.

    2. Select Cluster from the Menu Bar, and then select New. This will start the New Cluster Wizard.

    image

    3. Enter the hostname of the first node and click Connect, then click Next.

    4. This will open the Host Parameters page of the New Cluster Wizard. Accept the defaults and click Next.

    5. Next on the Cluster IP address page of the Wizard, click Add…

    6. Here you will add the IP address and subnet mask of the Load Balancer. After you enter the network information, click OK.

    7. Then click Next.

    8. On the Cluster Parameters page add the FQDN of the cluster in the Full Internet Name text box. Configure the Cluster Operation Mode as appropriate for your environment. In this example I have selected Unicast.

    9. On the Port Rules Page click Finish.

    Add Nodes to the cluster

    For each node that you would like to add to the NLB cluster you will need to perform the following steps.

    1. Expand Network Load Balancing Clusters in the Network Load Balancing Manager. Right click on the name of the cluster and select Add Host to Cluster from the context menu. This will start the Add Host to Cluster Wizard.

    image

    2. On the Connect Page of the Wizard, enter the hostname of the node you wish to add to the cluster and click Connect.

    3. On the Host Parameters page click Next.

    4. On the Port Rules page of the Wizard click Finish.

    Conclusion

    In this posting we covered implementing a highly available OCSP Responder. In the next part of this series I will be covering how to configure clients to obtain revocation information from an OCSP Responder that is not listed in the OCSP URI of the certificate.

    Implementing an OCSP responder: Part I Introducing OCSP
    Implementing an OCSP responder: Part II Preparing Certificate Authorities
    Implementing an OCSP responder: Part III Configuring OCSP for use with Enterprise CAs
    Implementing an OCSP responder: Part IV Configuring OCSP for use with Standalone CAs 
    Implementing an OCSP Responder: Part V High Availability
    Implementing an OCSP Responder: Part VI Configuring Custom OCSP URIs via Group Policy

    -Chris ‘Tickle Fight’ Delay

  • Mapping One Smartcard Certificate to Multiple Accounts

    Good morning world, Paul Fragale here to bring you the latest trend in smart card logon requests. Some people have been reading on our TechNet pages, such as Smart Card Authentication Changes, about the ability to allow users to have one smart card, with one certificate on that smart card, mapped to multiple accounts. This one certificate will allow them to authenticate both to a user account and to an account with special privileges (like an administrator). Why would they want to do this, you ask? They do not want to give administrator permissions to the user accounts, but they still need to be able to track who made changes. This effectively reduces the number of administrator accounts on the machine or in the environment.

    However, this comes with a cost to administrative overhead. To set this up correctly, some steps must be done manually by an administrator that has access to the Active Directory Users and Computers Snap-in.

    Also, Windows Server 2008 DCs are required for the smart card authentication. Windows Server 2003 DCs have a strict User Principal Name (UPN) requirement for smart card logon, meaning that a UPN has to be present in the certificate for authentication to succeed. This restriction prevents logging on using the name mapping feature that is required for this scenario.

    The rest of this blog post contains the step-by-step instructions for setting up this environment.
    To enable this ability, the following things will have to be done:

    • Create Smartcard user Certificate Template that does not include the UPN as an alternate subject name.
    • Enable the Group Policy for "User Name Hint"
    • Create smart card certificate for a user using the new template.
    • Export the user smart card certificate.
    • Enable Name Mapping to both accounts.

    Environment Required:
    • Clients: Windows Vista or later
    • Domain Controllers: Windows Server 2008 or later (Any Domain or Forest Functional level.)
    • Certification Authority: Windows Server 2003, Windows Server 2008 or later Enterprise Issuing CA (Smartcard User or Smartcard Logon template is required)

    Note: The CA must be running on an Enterprise Edition of the Operating System to meet this requirement.

    Create Smart card Certificate Template

    Creating a smart card template for this scenario is 90% the same as creating a duplicate template for any other function. The one exception is in step 6 of the procedure: the certificates issued must not reference a UPN, or authentication for the other mapped accounts will fail.

    On the Certificate Authority perform the following tasks:

    1. Open certsrv.msc.
    2. Expand the name of the CA.
    3. Right click Certificate Templates and choose Manage.
    4. Right click the Smartcard User or Smartcard Logon template and choose Duplicate Template:

    image

    Note: If you are using a Windows 2008 CA you will be prompted to select the minimum CA for your new template. You can select either template option.

    5. Provide a name for the smartcard template and set the validity period that you desire for the environment.
    6. On the Subject tab, deselect User Principal Name (UPN):

    image

    7. On the Issuance Requirements tab, do the following:

        a. Select The number of authorized signatures: and set it to 1.
        b. Under Policy type required in signature, select Application Policy.
        c. Under Application Policy, select Certificate Request Agent:

       image

    8. Click Apply and then OK.
    9. Close Certificate Templates console.
    10. In the Certificate Authority snap-in, right click the Certificate Templates folder and select New.
    11. Select “Certificate Template to Issue”:

    image

    12. Select the new template and click OK:

    image

    13. Restart Certificate Services.

    Note: It is important to restart the CA services to ensure the CA is processing all the latest information.

    Enable Group Policy for "User Name Hint"

    Now that we have created and added the smart card certificate template, we need to configure the clients to show the Username Hint upon logon.

    image

    To enable the Allow user name hint Group Policy setting, follow these steps on a domain controller:

    1. Open the Group Policy Management Console.
    2. Right click the domain name and choose Create a GPO in this domain, and Link it here….
    3. Name it something like "Smart card Auth Policy".
    4. Right click the policy and choose Edit:

    image

    5. Expand Computer Configuration >Policies > Administrative Templates > Windows Components, and then expand Smart Card.
    6. Double-click Allow user name hint:

    image

    7. Click Enabled and then click OK:

    image

    8. Run Gpupdate /force to update group policies on the workstations with smart card readers.

    Create smart card certificate for a user using the new template

    1. Log on to a system that has a smart card reader as a user that has an Enrollment Agent certificate.
    2. Start certmgr.msc
    3. Expand Personal, and then right-click on the Certificates folder.
    4. Select All Tasks > Advanced Operations > Enroll on behalf of from the context menu:

    image

    5. Click Next.
    6. When prompted, browse to the signing certificate for the enrollment agent. Click Next:

    image

    7. Select the certificate template you created, and click Next:

    image

    8. Browse and select the user name (This will be the subject of the smartcard certificate.) Click Enroll:

    image

    Export the user smart card certificate

    Ok, so we’ve got a certificate on a smart card; now we have to associate it with the accounts we want the user to be able to use. We first need to export the certificate. You can do this from the client, from Active Directory Users and Computers, or from the Certificate Authority that issued the cert. One way of accomplishing this can be found in the following TechNet article: http://technet.microsoft.com/en-us/library/cc779668(WS.10).aspx

    Enable Name Mapping to both accounts

    Now that we have the certificate file we can map the certificate to our user’s accounts.

    1. Open Active Directory Users and Computers.
    2. Click View and select Advanced Features:

    image

    3. Navigate to the user account.
    4. Right click the user account and choose Name Mappings:

    image

    5. Click Add and select the certificate file that was exported. Click Open:

    image

    6. Click OK.
    7. Click OK.

    image

    8. Repeat steps 3-7 to add the same certificate file to each additional account that the user logs on with.
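
    Under the covers, the Name Mappings dialog stores each mapping in the account's altSecurityIdentities attribute as an "X509:<I>issuer<S>subject" string, with each DN written in reverse (root-first) order. Here is a rough sketch of building that value; the DNs are hypothetical, and the naive comma split is a simplification that does not handle escaped commas in real DNs.

    ```python
    # Sketch: build the issuer-and-subject altSecurityIdentities value
    # that the Name Mappings dialog writes. Example DNs are made up.

    def x509_mapping(issuer_dn, subject_dn):
        """Build an "X509:<I>issuer<S>subject" mapping string.

        Each DN is given in the usual leaf-first form (CN=...,DC=...);
        the mapping string wants the components in reverse order.
        Naive split: does not handle escaped commas inside DN values."""
        def reverse_dn(dn):
            return ",".join(reversed([p.strip() for p in dn.split(",")]))
        return "X509:<I>%s<S>%s" % (reverse_dn(issuer_dn), reverse_dn(subject_dn))

    value = x509_mapping(
        "CN=FourthCoffee-CA,DC=FourthCoffee,DC=com",
        "CN=Paul,CN=Users,DC=FourthCoffee,DC=com",
    )
    print(value)
    # X509:<I>DC=com,DC=FourthCoffee,CN=FourthCoffee-CA<S>DC=com,DC=FourthCoffee,CN=Users,CN=Paul
    ```

    Repeating steps 3-7 for each account simply puts this same value on every account's altSecurityIdentities attribute, which is why one certificate can log on as several users.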

    That is all there is to it. Now when the user inserts their smart card, they will see a Username Hint field. The user simply types the name of the account they want to log on as, along with the PIN for the smart card. The added benefit is that the user does not need to know two different passwords; they only have to know the PIN for the smart card.

    Until next time,

    - Paul ‘One Cert to Rule Them All’ Fragale