Blog - Title

September, 2009

  • Managed Service Accounts: Understanding, Implementing, Best Practices, and Troubleshooting

Ned here again. One of the more interesting new features of Windows Server 2008 R2 and Windows 7 is Managed Service Accounts. MSAs allow you to create an account in Active Directory that is tied to a specific computer. That account has its own complex password and is maintained automatically. This means that an MSA can run services on a computer in a secure, low-maintenance manner, while retaining the capability to connect to network resources as a specific user principal.

    Today I will:

• Describe how MSAs work
• Explain how to implement MSAs
• Cover some limitations of MSAs
• Troubleshoot a few common issues with MSAs

    Let’s be about it.

    How Managed Service Accounts Work

    The Windows Server 2008 R2 AD Schema introduces a new object class called msDS-ManagedServiceAccount. Create an MSA, examine its objectClass attribute, and notice the object has an interesting object class inheritance structure:

    Computer
    msDS-ManagedServiceAccount
    organizationalPerson
    Top
    User

The object is a user and a computer at the same time, just like a computer account. But unlike a typical computer account it does not have an object class of person; instead it has msDS-ManagedServiceAccount. MSAs inherit from a parent object class of “Computer”, but they are also users. MSA objects do not contain any new attributes from the Win2008 R2 schema update.

And this leads me to how MSAs handle passwords – it’s pretty clever. An MSA is a quasi-computer object that uses the same password update mechanism used by computer objects. So, the MSA account password is updated when the computer updates its password (every 30 days by default). This can be controlled - just like a computer’s password - with the following two DWORD values:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NetLogon\Parameters

    DisablePasswordChange = [0 or 1, default if value name does not exist is 0]
    MaximumPasswordAge = [1-1,000,000 in days, default if value name does not exist is 30]
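
For example, here is a minimal PowerShell sketch for inspecting and changing these values on a member computer (run elevated; missing value names simply mean the defaults above are in effect):

# Read the current Netlogon password-change settings
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NetLogon\Parameters' |
    Select-Object DisablePasswordChange, MaximumPasswordAge

# Stretch the automatic password change interval to 60 days
Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NetLogon\Parameters' `
    -Name MaximumPasswordAge -Type DWord -Value 60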

MSAs, like computers, do not observe domain or fine-grained password policies. MSAs use a complex, automatically generated password (240 bytes, which is 120 characters, and cryptographically random). MSAs cannot be locked out and cannot perform interactive logons. Administrators can set an MSA password to a known value, although there’s ordinarily no justifiable reason to do so (and passwords can be reset on demand; more on this later).
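
If you ever do need to roll the password immediately, the AD module provides a cmdlet for it. A quick sketch, using a hypothetical MSA named AskDS:

Import-Module ActiveDirectory

# Force an immediate reset of the MSA's automatically managed password
Reset-ADServiceAccountPassword -Identity AskDS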

    All Managed Service Accounts are created (by default) in the new CN=Managed Service Accounts, DC=<domain>, DC=<com> container. You can see this by configuring DSA.MSC to show “Advanced Features”:

    image

    image

    As you will see later though, there isn’t much point to looking at this in AD Users and Computers because… wait for it… all your administration will be done through PowerShell. You knew that was coming, didn’t you?

MSAs automatically maintain their Kerberos Service Principal Names (SPNs), are linked to one computer at a time, and support delegation. A network capture shows a correctly configured MSA using Kerberos:

    image

    image

    Implementing MSA’s

    Forest and OS Requirements

    To use MSAs you must:

    • Use Active Directory
    • Extend your AD schema to Windows Server 2008 R2
    • Host services using MSAs on Windows Server 2008 R2 and Windows 7 computers (MSAs cannot be installed on down-level operating systems)
• Enable PowerShell, the Active Directory Module for Windows PowerShell (part of the RSAT), and the .NET Framework 3.5.x on any computers using or configuring MSAs

MSAs do not require a specific Forest Functional Level, but there is a scenario where part of MSA functionality requires a Windows Server 2008 R2 Domain Functional Level. This means:

• If your domain is at Windows Server 2008 R2 Domain Functional Level, automatic passwords and SPN management will work
• If your domain is below Windows Server 2008 R2 Domain Functional Level, automatic passwords will work. Automatic SPN management will not work, and SPNs will have to be maintained by administrators

    Deployment

    Using a new MSA always works in four steps:

    1. You create the MSA in AD.

    2. You associate the MSA with a computer in AD.

    3. You install the MSA on the computer that was associated.

    4. You configure the service(s) to use the MSA.

We begin by using PowerShell to create the new MSA in Active Directory. You can run this command on any Windows Server 2008 R2 or Windows 7 computer that has the RSAT feature “Active Directory Module for Windows PowerShell” enabled. Perform all commands as an administrator.

    1. Start PowerShell.

    2. Import the AD module with:

    Import-Module ActiveDirectory

    3. Create an MSA with:

    New-ADServiceAccount -Name <some new unique MSA account name> -Enabled $true

    image

4. Associate the new MSA with a target computer in Active Directory:

    Add-ADComputerServiceAccount -Identity <the target computer that needs an MSA> -ServiceAccount <the new MSA you created in step 3>

    image

5. Now log on to the target computer where the MSA is going to run. Ensure the following features are enabled:

    • Active Directory Module for Windows PowerShell
    • .NET Framework 3.5.1 Feature

    image

    image

    6. Start PowerShell.

    7. Import the AD module with:

    Import-Module ActiveDirectory

    8. Install the MSA with:

    Install-ADServiceAccount -Identity <the new MSA you created in step 3>

    image

Note: Besides being a local administrator on the computer, the account installing the MSA also needs permissions to modify the MSA in AD. For a domain admin this "just works"; otherwise, you need to delegate modify permissions on the service account's AD object.
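
To recap the AD-side steps with concrete values, here is a minimal end-to-end sketch, assuming a hypothetical MSA named AskDS and a hypothetical target computer FILESERVER01:

Import-Module ActiveDirectory

# Step 3: create the MSA in AD (run from any machine with the AD module)
New-ADServiceAccount -Name AskDS -Enabled $true

# Step 4: associate the MSA with the computer that will run the service
Add-ADComputerServiceAccount -Identity FILESERVER01 -ServiceAccount AskDS

# Step 8: on FILESERVER01 itself, install the MSA locally
Install-ADServiceAccount -Identity AskDS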

    9. Now you can associate the new MSA with your service(s).

    The GUI way:

    a. Start services.msc.

    b. Edit your service properties.

    c. On the Log On tab, set “This Account” to the domain\name$ of your MSA. So if your MSA was called “AskDS” in the “contoso.com” domain, it would be:

    contoso\askds$

    d. Remove all information from Password and Confirm password – they should not contain any data:

    image

e. Click Apply and OK to the usual “Log On as a Service right granted” message:

    image

    f. Start the service. It should run without errors.

    image

    image

    The PowerShell way:

    a. Start PowerShell.

    b. Paste this sample script into a text file:

# Sample script for setting a service to run as an MSA through PowerShell
# Provided "AS IS" with no warranties, and confers no rights.
# See http://www.microsoft.com/info/cpyright.mspx

# Edit this section:

$MSA="contoso\askds$"        # The MSA, in domain\name$ form
$ServiceName="'testsvc'"     # The service's short name, kept in single quotes for the WMI filter below

# Don't edit this section:

$Password=$null              # MSAs manage their own passwords, so none is supplied
$Service=Get-WmiObject win32_service -Filter "name=$ServiceName"
$InParams = $Service.psbase.GetMethodParameters("Change")
$InParams["StartName"] = $MSA
$InParams["StartPassword"] = $Password
$Service.InvokeMethod("Change",$InParams,$null)   # Returns 0 on success

c. Modify the two variables in the “Edit this section” block to match your MSA and service name.

    d. Save the text file as MSA.ps1.

    e. In your PowerShell console, get your script policy with:

    Get-ExecutionPolicy

    image

    f. Set your execution policy to remote signing only:

    Set-ExecutionPolicy remotesigned

    image

    g. Run the script:

    image

h. Set your execution policy back to whatever was returned in step e:

    image

Note: Obviously, I made this example very manual; it could easily be automated completely. That’s the whole point of PowerShell, after all. Also, it is OK to shake your fist at us for not having User and Password capabilities in the V2 PowerShell cmdlet Set-Service. Grrr.
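
If you want to verify the result without trusting the GUI, the same WMI class the script uses can confirm which account the service now starts as (hypothetical service name testsvc):

# StartName should show the MSA in domain\name$ form
Get-WmiObject win32_service -Filter "name='testsvc'" |
    Select-Object Name, StartName, State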

    Removal

    Removing an MSA is a simple two-part process. Now that you know all the PowerShell rigmarole, here are the two things you do:

1. Use the following PowerShell cmdlet to uninstall the MSA from the local computer:

Uninstall-ADServiceAccount –identity <your MSA name>

    image

2. Remove the MSA’s association with its computer in Active Directory. This is required before the MSA can be associated with a different computer; if you want to delete the MSA from AD entirely rather than reassign it, also run Remove-ADServiceAccount against it.

Remove-ADComputerServiceAccount –Identity <the computer the MSA was assigned to previously> -ServiceAccount <the MSA>

    image
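
Putting it together with the same hypothetical names as before (MSA AskDS, old computer FILESERVER01, new computer FILESERVER02), moving an MSA might look like this:

Import-Module ActiveDirectory

# On FILESERVER01: uninstall the MSA locally
Uninstall-ADServiceAccount -Identity AskDS

# From any machine with the AD module: break the old association, create the new one
Remove-ADComputerServiceAccount -Identity FILESERVER01 -ServiceAccount AskDS
Add-ADComputerServiceAccount -Identity FILESERVER02 -ServiceAccount AskDS

# On FILESERVER02: install the MSA locally
Install-ADServiceAccount -Identity AskDS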

    Group Memberships

The Set-ADServiceAccount and New-ADServiceAccount cmdlets do not allow you to make MSAs members of groups. To do this you will instead use DSA.MSC or Add-ADGroupMember.

    AD Users and Computers method:

    1. Start DSA.MSC.

    2. Select the group (not the MSA).

    3. Add the MSA through the Members tab:

    image

    PowerShell method:

    1. Start PowerShell.

    2. Run:

    Add-ADGroupMember "<your group>" "<DN of the MSA>"

    So for example:

    image

Note: Use the distinguished name of the MSA; otherwise Add-ADGroupMember returns “cannot find object with identity”. Don’t try to use NET GROUP, as it doesn’t know how to find MSAs.
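
For instance, with the hypothetical AskDS MSA in contoso.com and a hypothetical group named Web Admins:

Import-Module ActiveDirectory

# The second argument must be the MSA's distinguished name, not its SAM name
Add-ADGroupMember "Web Admins" "CN=AskDS,CN=Managed Service Accounts,DC=contoso,DC=com"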

    Limitations

    Managed Service Accounts are useful in most service scenarios. There are limits though, and understanding these up front will save you planning time later.

• MSAs cannot span multiple computers – An MSA is tied to a specific computer. It cannot be installed on more than one computer at once. In practical terms, this means MSAs cannot be used for:
  • Cluster nodes
  • Authenticated load balancing using Kerberos for a group of web servers

The MSA can only exist on one computer at a time, so MSAs are not compatible with cluster fail-over scenarios. And authentication through a load balancer would require a Kerberos SPN for the MSA account, so that won’t work either. Load balancing scenarios include Microsoft software-based and third-party hardware- and software-based load balancing solutions. If you’re clustering or NLB’ing, you are still going to need old fashioned service accounts.

A key clarification: you can have multiple MSAs installed on a single computer. So if you have an application that uses five services, it’s perfectly all right to use one MSA for all five services, or five different MSAs at once.

• The supportability of an MSA is determined by the component, not Windows – Just because you can configure an MSA on a service doesn’t mean that the folks who make that service support the configuration. So if the SQL team here says “we don’t support MSAs on version X”, that’s it. You have to convince them to support their products, not me :-). Some good places to start asking, as we get closer to the general availability of Windows Server 2008 R2 in October:

    TechNet Forums - http://social.technet.microsoft.com/Forums

    MSDN Forums - http://social.msdn.microsoft.com/Forums

    SQL Support Blog - http://blogs.msdn.com/psssql/default.aspx

    Exchange Blog - http://msexchangeteam.com/

    SharePoint Blog - http://blogs.msdn.com/sharepoint/

    Dynamics Blog - https://community.dynamics.com/blogs/

    BizTalk Blog - http://blogs.msdn.com/biztalk_server_team_blog/

    Troubleshooting

For the most part MSAs are straightforward and have easily understandable errors. There are a few issues that people seem to run into repeatedly, though:

    Error: Error 1069: The service did not start due to a logon failure.

    Cause: Typically caused by the MSA being disabled. Use Set-ADServiceAccount to enable your MSA.

    Error: Duplicate Backlink. The service account 'AskDS2' has backlinks to computer 'CN=2008R2-F-05,CN=Computers,DC=contoso,DC=com'. This add operation will potentially disable service accounts installed on the other computer. Cannot install service account. Error Message: 'Unknown error (0xc000005a)

Cause: You are trying to associate an MSA that is already in use by another computer. The error notes the computer (in this case, 2008r2-f-05) currently using the MSA. Un-associate and uninstall the MSA from the old computer before using it on the new one.

    Error: Add-ADComputerServiceAccount : The object could not be found on the server.

    Cause: You gave an incorrect identity for the MSA and PowerShell cannot find it. Either it’s been deleted or you typed in the wrong name.

    Error: Please enter a valid password.

Cause: You did not remove the password information in the service’s Log On properties when editing in services.msc. See the setup steps above.

    Error: The account name is invalid or does not exist, or the password is invalid for the account name specified.

Cause: This is typically caused by not adding the “$” character to the end of the account name used in the Log On tab of the service’s properties in services.msc. It can also be caused by simply mistyping the name of the account or forgetting to add the appropriate domain.

    Final Notes and References

    And there you go – now go forth and tame your environment.

    - Ned ‘120 characters ought to be enough for anyone’ Pyle

  • Designing and Implementing a PKI: Part I Design and Planning

    The series:

    Designing and Implementing a PKI: Part I Design and Planning

    Designing and Implementing a PKI: Part II Implementation Phases and Certificate Authority Installation

    Designing and Implementing a PKI: Part III Certificate Templates

    Designing and Implementing a PKI: Part IV Configuring SSL for Web Enrollment and Enabling Key Archival

    Designing and Implementing a PKI: Part V Disaster Recovery

Chris here again. This is part of a five part series. In Part I, I will cover design considerations and planning for deploying a PKI. Planning is the most important phase of a PKI implementation, and you can prevent a lot of issues by properly planning your deployment.

    I recommend reading the following MSPress books on PKI and Certificate Services before implementing a Windows PKI, or any PKI for that matter. Both books are written by Brian Komar.

    Here is a link to the Windows 2003 Book: http://www.microsoft.com/learning/en/us/Books/6745.aspx

    And a link to the Windows 2008 Book: http://www.microsoft.com/learning/en/us/books/9549.aspx

They are both excellent resources for anyone implementing, managing, or designing solutions that use a Microsoft PKI, and both books go into far more detail than I can here.

    Why Deploy a PKI?

    There are a host of reasons to deploy a PKI; a few are listed here:

    • Control access to the network with 802.1x authentication
    • Approve and authorize applications with Code Signing
    • Protect user data with EFS
• Secure network traffic with IPsec
• Protect LDAP-based directory queries with Secure LDAP
    • Implement two-factor authentication with Smart Cards
    • Protect traffic to internal web-sites with SSL
    • Implement Secure Email

In addition, a number of applications can use certificates in some fashion. Here is a brief list:

    • Active Directory
    • Exchange
    • IIS
    • Internet Security & Acceleration Server
    • Office Communications Server
    • Outlook
    • System Center Configuration Manager
    • Windows Server Update Services

    Another thing to consider is what future applications you may need to support with your PKI. This may not be an answerable question, nor should you be expected to know for sure. In fact some of the applications or technologies that your PKI may be required to support may not have even been conceived of yet. My point here is that your design should incorporate plenty of flexibility. Not only do you want to deploy a PKI solution that supports existing technologies, but one that is scalable, and can support future technologies.

    Costs

    The next thing you want to think about is cost. I understand how difficult it can be to get budgets approved in any business. Despite our wishes as technology professionals that we could implement the appropriate solutions, sometimes we are handicapped by the budget that we have to complete a project. How much money is your business willing to invest in the PKI solution? What are the costs for implementing a PKI? Here are some items that may need to be included in your budget:

    Hardware Costs

    • Servers
    • Hardware Security Modules (HSMs)
    • Backup Devices
    • Backup Media

    Software Costs

    • Windows Server Licenses

    Human Capital

    • Paying someone to design, implement, and manage the PKI infrastructure.

    Cost Savings

    While you are planning your budget, it is important not to forget the cost savings that a Windows PKI solution can provide. The two key areas of savings that I see in a PKI solution are:

    Integration

Microsoft CAs, especially Enterprise CAs, integrate tightly with Microsoft products. That integration makes managing and requesting certificates from Microsoft operating systems and applications fairly straightforward, to the point that you do not really need any PKI experience to request a certificate.

    Automation

The greatest advantage of the Windows PKI solution is automation. An Enterprise CA is tightly integrated with Active Directory. Using autoenrollment, a simple group policy can be configured to automate the deployment of certificates to computers and users. The deployment is so transparent that users do not have to do anything to request a certificate.

    Manageability

In designing your PKI solution you will have to take into account the resources you have to manage it. Day-to-day management is for the most part very limited, but you will need someone to provide the care and feeding of your PKI: someone to issue and revoke certificates, someone to manage the hardware, apply patches, and take backups (in other words, a Server Administrator), and someone to publish Certificate Revocation Lists and manage the CA itself.

    Security

You will need to determine the level of security required for your PKI. To do that, it helps to step back and consider what a Public Key Infrastructure and its certificates can be used for: identification, encryption, non-repudiation, and in some cases authentication. Your organization probably has a standard for how a user receives a user account: when the user was hired, a manager approved a form requesting a domain user account; in other words, the manager vouched for the identity of the user. Since certificates can be used for identification, the same standard should apply when issuing certificates for that purpose. If you are using certificates only for encryption, you may be less concerned with the user’s identity, and the required protection of the private keys will depend on what data they decrypt. If a user is just encrypting his or her recipes, you may not require the same level of private key protection as you would if the user were encrypting top secret government documents. In other words, the level of security is determined by the level of risk. This determination should include any corporate security policies for PKI and certificates, and when creating your PKI security policy you should also consider any industry or government regulations.

    Flexibility and Scalability

The flexibility and scalability of your solution should also be taken into consideration. If you have a high level of confidence that you will not need to change or adapt your PKI solution, you can have a fairly simple design. However, if your solution needs to support a variety of technologies, different levels of security, and a global presence, the design can get much more complicated.

    Physical Design

    When designing your PKI solution you will have to determine the hierarchy that you will use. There are generally three types of hierarchies, and they are denoted by the number of tiers.

    Single/One Tier Hierarchy

    image

    A single tier Hierarchy consists of one CA. The single CA is both a Root CA and an Issuing CA. A Root CA is the term for the trust anchor of the PKI. Any applications, users, or computers that trust the Root CA trust any certificates issued by the CA hierarchy. The Issuing CA is a CA that issues certificates to end entities. For security reasons, these two roles are normally separated. When using a single tier hierarchy they are combined. This may be sufficient for simple implementations where ease of manageability and lower cost outweigh the need for greater levels of security or flexibility. The level of security can be enhanced if the CA’s keys are protected by an HSM, but at the expense of higher equipment and management costs.

    Two Tier Hierarchy

    image

A two tier hierarchy is a design that meets most companies’ needs. In some ways it is a compromise between the One and Three Tier hierarchies. In this design there is a Root CA that is offline and a subordinate Issuing CA that is online. The level of security is increased because the Root CA and Issuing CA roles are separated. More importantly, the Root CA is offline, so its private key is better protected from compromise. The design also increases scalability and flexibility, since multiple Issuing CAs can be subordinate to the Root CA. This allows you to have CAs in different geographical locations, as well as at different security levels. The management burden increases slightly, since the Root CA has to be brought online to sign CRLs. Cost increases marginally; I say marginally because all you need is a hard drive and a Windows OS license to implement an Offline Root. Install the hard drive, install your OS, build your PKI hierarchy, then remove the hard drive and store it in a safe. The hard drive can be attached to existing hardware when CRLs need to be re-signed. A virtual machine could also be used as the Root CA, although you would still want to keep it on a separate hard drive that can be stored in a safe.

    Three Tier Hierarchy

    image

The difference from a Two Tier Hierarchy is that a second tier of CAs is placed between the Root CA and the Issuing CA. This second-tier CA can serve a couple of different purposes. The first is to act as a Policy CA: the Policy CA issues certificates to Issuing CAs and restricts what types of certificates they may issue. The Policy CA can also be used purely as an administrative boundary; in other words, you only issue certain certificates from subordinates of the Policy CA and perform a certain level of verification before issuing them, but the policy is enforced from an administrative rather than a technical perspective.

The other reason to add the second tier is that if you need to revoke a number of CAs due to a key compromise, you can revoke at the second-tier level, leaving the other “branches from the root” available. It should be noted that second-tier CAs in this hierarchy can, like the Root, be kept offline.

Following the paradigm, security increases with the addition of a tier, and flexibility and scalability increase due to the additional design options. On the other hand, the management burden grows, since there are more CAs in the hierarchy to manage. And, of course, cost goes up.

    Security

    Private Key Protection

One of the key aspects of designing a PKI solution is to make sure the proper controls are in place. Security for a PKI solution mostly revolves around protecting the private key pair of the CA. Each CA has a private/public key pair. The private key is used to sign CRLs as well as the certificates issued by the CA. Clients and applications verify the signature so that they can be assured a certificate was issued by a particular CA. If you install a Microsoft CA, the private key is protected by software, specifically the Data Protection API (DPAPI). Although this method does provide protection, it does not prevent a member of the Administrators group on the CA from accessing the private key. This can be a cause for concern: you may have administrators whose job is just to patch the system, and yet they have access to the private key, which violates the principle of least privilege.

There are generally two methods for protecting the private key of a CA. The first is to keep the CA offline and the hard drive stored in a safe. By controlling the conditions under which the hard drive can be used, the opportunities for key compromise are reduced. The second is to use a hardware device to protect the private key. For example, a smart card can be used to store the private key of the CA. This is not the best solution, since the smart card must remain in a reader attached to the CA in order to be used, and a smart card may not be as resilient or provide the level of security required; it is, however, a low cost option. A more standard solution is to use a Hardware Security Module (HSM). HSMs are fairly expensive, but are normally certified for FIPS compliance, a standardized measure of relative security. HSMs are accepted as the most secure way to protect the private key of a CA.

    Role Separation

    Aside from private key protection you will most likely want to have some control as to the level of administrative access to a CA. In some cases you may have administrators that are responsible for performing every function on the CA. But in larger or higher security environments you will want to have some granular control over what access different role holders have. Below is a list of common roles on a CA:

• CA or PKI Administrator, whose role is to manage the CA itself.
• Certificate Manager, who issues and revokes certificates.
• Enrollment Agent, a role typically used in conjunction with smart cards; an Enrollment Agent enrolls for a certificate on behalf of another user.
• Key Recovery Manager, who recovers private keys if key archival is in use. (If you are using EFS, an EFS Recovery Agent role may also be created to recover data encrypted using EFS.)

    In addition to these roles that have direct interaction with the CA, you also will have ancillary roles that support the CA. These include:

    • Backup Operator who is responsible for backing up the CA and restoring data in case of failure.
    • Auditor who is responsible for reviewing audit logs and ensuring policy is not being violated.

    Physical Security

Certificates issued by CAs are in many cases used for very sensitive operations such as establishment of identity, authentication, and encryption. As such, it is important to protect not only the private key but also physical access to the CA. Law #3 of the Ten Immutable Laws of Security states: “If a bad guy has unrestricted physical access to your computer, it's not your computer anymore.” For this reason you will want to protect physical access to the CAs. How you do so will depend on the resources you have available, but typically in larger organizations the CAs are stored in a locked cage in a data center. Only individuals who need physical access to the CA to perform their duties should be given access.

    Policy

    Generally the security requirements, such as those mentioned above, are dictated by a corporate security policy. A security policy usually takes into consideration regulatory and industry requirements as well as unique requirements for the individual company. The policy may also specify technical aspects of the PKI such as the encryption algorithms that must be used as well as operation of the Certificate Authorities.

In addition to security policies, there may be CA-specific policies that need to be developed before implementing the PKI: the Certificate Policy and the Certification Practice Statement. The Certificate Policy explains what methods are used to establish the identity of a subject before issuing a certificate. A Certification Practice Statement outlines the operation of the PKI from a security perspective. Many companies, especially third-party companies that issue certificates, make their Certificate Policies and Certification Practice Statements publicly available. It may be helpful to view one of these public documents when writing your own policy documents.

    Additional Security Considerations

In addition to the topics discussed, it is important to apply relevant security patches to your online CAs in a timely manner. You should also have an anti-malware solution installed on your CAs.

    So far we have covered reasons to deploy a Public Key Infrastructure. We also have covered the various costs involved in a PKI infrastructure, as well as the impact of various design considerations. Now we will dive a little deeper into specific configuration decisions and technical aspects of the Certificate Authorities.

    CA Configuration

    Certificate Validity Period

Digital certificates have a lifetime: a start date and an end date between which they are considered valid. You should determine what lifetimes are appropriate for each CA certificate and for the end-entity certificates issued by your CAs. For CAs, this lifetime is set when the CA is installed and when the private key is renewed. For end-entity certificates there are a number of factors taken into account:

• The validity period of the issuing CA’s own certificate. The CA will not issue certificates that are valid past the CA’s lifetime.
• The validity period specified in the Certificate Template.
• The registry value specified in this KB article: http://support.microsoft.com/kb/254632

The issued certificate will be configured with the shortest of these validity periods.
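
On a Windows CA you can inspect the registry-based cap (the values from the KB article above) with certutil; a sketch, run on the CA itself:

# Show the maximum validity period the CA will stamp into issued certificates
certutil -getreg CA\ValidityPeriodUnits
certutil -getreg CA\ValidityPeriod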

    Key Length

    The length of a key definitely affects security of information protected with that key. Thus, you will need to determine the key lengths you will use with each key pair. First you will need to determine the key lengths that will be used for each of the CA key pairs. Additionally, you will need to determine the key lengths for any certificates issued by the issuing CA. The key lengths for the CA certificates are determined by the key size requested when the CA is installed and when the key pair is renewed. The key length at installation is set during the CA Setup process. The key length for renewal is determined by a value set in the CAPolicy.inf configuration file installed on the CA.

For certificates issued by the Issuing CA, the maximum key size is limited by the CSP that is being used. The specific key size required can be specified in the certificate request, or in the Certificate Template if you are using an Enterprise CA. As a general guideline, the longer the lifetime of the certificate, the longer the key length should be. For applications that will be using certificates, you will need to determine the maximum key length they support. Some applications have limitations on key size not only in the actual certificate they use, but also for any certificate in the CA hierarchy. From a security standpoint, a 4096-bit key is recommended for a Certification Authority's key pair. However, if you want to ensure maximum compatibility with network devices and applications, a 2048-bit key is the better choice.
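
For example, the renewal key length is read from CAPolicy.inf in %windir% on the CA. A minimal sketch (the values shown are illustrative, not recommendations for every environment):

[Version]
Signature="$Windows NT$"

[certsrv_server]
; Key size used the next time the CA certificate is renewed
RenewalKeyLength=4096
RenewalValidityPeriod=Years
RenewalValidityPeriodUnits=10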

    AIA Locations

When a client or application validates a certificate, it needs to validate not only the certificate being used but the entire chain of the certificate. In other words, it needs a certificate from each CA in the chain, beginning with the Issuing CA and ending with the Root CA. If the certificates in the chain are not available locally, the client or application needs a place from which to obtain them. This location is called the Authority Information Access, or AIA. The AIA location is the repository where the CA certificate is stored so that it can be downloaded by clients or applications validating a certificate, and it is included in the AIA extension of a certificate. Before implementing your PKI it is important to think about what types of clients will be validating certificates and where they reside. If your clients are internal, domain-joined Windows machines, then LDAP locations in Active Directory are a good AIA repository. If you have internal non-Windows clients, or internal Windows clients that are not domain members, then an internally hosted web site is the ideal location. However, if clients may need to validate a certificate from outside the network, you will need an AIA repository that is available externally, perhaps on the public network.

    CDP Locations

A CRL Distribution Point (CDP) is where clients or applications validating a certificate download the certificate revocation list (CRL) to obtain revocation status. CAs periodically publish CRLs to allow clients and applications to determine whether a certificate has been revoked. A CRL contains the serial number of each revoked certificate, a timestamp indicating when it was revoked, and the reason for revocation. As with AIA locations, you need to keep in mind what types of clients you are supporting and where they are located.
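
On a Windows CA, both the AIA and CDP publication points are controlled through multi-string registry values. A heavily hedged sketch (the host pki.contoso.com is hypothetical; the numeric prefixes are bit flags, with 2 meaning "include this URL in issued certificates", and the %-sequences are token substitutions; also note that -setreg replaces the whole list, so in practice you would keep the local file-system entry too; verify everything against the AD CS documentation before use):

# Publish an HTTP AIA location into issued certificates
certutil -setreg CA\CACertPublicationURLs "2:http://pki.contoso.com/pki/%1_%3%4.crt"

# Publish an HTTP CDP location into issued certificates
certutil -setreg CA\CRLPublicationURLs "2:http://pki.contoso.com/pki/%3%8%9.crl"

# Restart Certificate Services so the new URLs take effect
net stop certsvc
net start certsvc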

    CRL Validity and Overlap Periods

Like certificates, CRLs have a start date and an end date denoting the period for which they are valid. You will need to consider what the CRL lifetime should be for each CA. In general, the more certificates a CA is expected to issue, the shorter its CRL lifetime should be. Offline CAs that issue relatively few certificates, and those only to other CAs, tend to have CRLs with an extended lifetime, for example six months to a year. This reflects the fact that, in a properly managed PKI, an offline CA rarely revokes a certificate. Issuing CAs, on the other hand, can be expected to issue large numbers of certificates to end entities. It is quite common to revoke an end-entity certificate for any number of reasons, so the lifetime of an Issuing CA’s CRL can be quite short: a few days or even hours.

    Another thing to consider is the overlap period. The overlap period is a short time interval beyond the expiration date of the CRL, and reflects the period between when a new CRL is published, and when the old CRL actually expires. During this time both CRLs are valid. This allows time for the new CRL to replicate to all of the repositories before the old CRL expires.
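
These intervals are also registry-driven on a Windows CA. A sketch for an issuing CA publishing a weekly base CRL with a one-day overlap:

# Base CRL published every week
certutil -setreg CA\CRLPeriodUnits 1
certutil -setreg CA\CRLPeriod "Weeks"

# Keep the old CRL valid for one extra day while the new one replicates
certutil -setreg CA\CRLOverlapUnits 1
certutil -setreg CA\CRLOverlapPeriod "Days"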

    Delta CRLs

Delta CRLs are CRLs that contain revocation information for certificates revoked since the base CRL was last published. For example, suppose Certificate A is revoked. The CA then publishes a new base CRL that includes the revocation information for Certificate A. Shortly thereafter, Certificate B is revoked. At the designated interval, a Delta CRL is published containing the serial number and revocation reason for Certificate B. When a client needs to determine revocation status for Certificate A or B, it downloads both the base CRL and the Delta CRL: it determines that Certificate A is revoked from the base CRL, and that Certificate B is revoked from the Delta CRL.

Delta CRLs exist because of limitations with base CRLs. Base CRLs can grow rather large over time, since they contain the serial number and revocation reason for every unexpired certificate the CA has revoked. Instead of publishing a large CRL over and over again, revocation status can be updated with the smaller Delta CRL, so clients that already have a time-valid base CRL need only download the Delta CRL. As with base CRLs, you will need to determine how often Delta CRLs are published. Note that the use of Delta CRLs is completely optional, and they are not normally used with offline CAs for obvious reasons.
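
The delta publication interval has its own pair of registry values; a sketch for publishing a delta CRL daily (setting the units to 0 disables deltas, e.g. on an offline CA):

certutil -setreg CA\CRLDeltaPeriodUnits 1
certutil -setreg CA\CRLDeltaPeriod "Days"

# Force an immediate publish to test the new schedule
certutil -CRL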

    OCSP URIs

Windows Vista, Windows 7, Windows 2008, and Windows 2008 R2 can obtain revocation information from an Online Responder via the Online Certificate Status Protocol (OCSP). If you are using an Online Responder to provide revocation status, you should include in your issued certificates a URI that points to the Online Responder.
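
On a Windows CA the OCSP URI is added alongside the AIA URLs, using the OCSP bit flag (32). A sketch with a hypothetical responder address; the "+" prefix appends to the existing list rather than replacing it:

certutil -setreg CA\CACertPublicationURLs "+32:http://ocsp.contoso.com/ocsp"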

    Microsoft PKI

    Other than the benefits of the Windows PKI, most of the things I have mentioned so far apply to any Public Key Infrastructure. I am now going to focus on a Microsoft-specific implementation.

    Operating System Version

Currently both Windows 2003 and Windows 2008 are supported, so you need to determine which OS you're going to use for your CAs. To make that decision you will need to know what additional features Windows Server 2008 has over Windows Server 2003. Here are a few of the many new features in Windows Server 2008:

• Online Responder service, which provides revocation status via the Online Certificate Status Protocol (OCSP)
• Network Device Enrollment Service (NDES), included as a role service
• Support for Cryptography Next Generation (CNG) algorithms
• The Enterprise PKI (PKIView) snap-in for monitoring CA health

    Windows Server 2008 R2

    Windows Server 2008 R2 adds a number of new features to Certificate Services. These features include:

• Cross-Forest enrollment - Windows 2008 R2 supports cross-forest enrollment, which allows one or more CAs in a single forest to support clients in multiple forests.
• Certificate Enrollment Web Service and Policy Service - Allow clients to enroll for certificates over web interfaces, even when they are not located on the same physical network as Active Directory and the CA. A client queries the Enrollment Policy Service to determine which certificates it should enroll for; the Enrollment Policy Service contacts Active Directory and responds with CA and Certificate Template information. The client then queries the Enrollment Web Service to enroll; the Enrollment Web Service contacts the CA on behalf of the client and returns the enrolled certificates.
• Non-persistent certificates (not stored in the CA database) - Certificate Templates can be configured so that issued certificates are not stored in the CA database. This is useful for CAs that issue certificates for network authentication, where certificates have a lifetime of hours or days and storing them in the database would impact CA performance unnecessarily.

    Operating System Edition

    There are three editions of the OS on which you can install the Certificate Authority role. Those editions are Standard, Enterprise, and Datacenter. Standard or Enterprise Editions are normally used.

Below are the key features that Enterprise and Datacenter Editions support and Standard Edition does not. It is important to note that Datacenter Edition does not offer any additional Certificate Services functionality over Enterprise Edition.

    • Version 2 Templates
    • The ability to duplicate and modify Certificate Templates
    • Certificate Autoenrollment (requires version 2 templates)
    • Common Criteria Role Separation enforcement
    • Key Archival and Retrieval

    Certificate Authority Type

    Next, you need to consider what type of Windows CA is required. A Standalone CA does not require Active Directory and can be installed on a non-domain member server. Requests for certificate enrollment can be sent through Web Enrollment if installed, or by sending a request file to the CA. The request files are usually generated through the Certreq.exe tool. Also, in Windows Server 2008 and Vista you can use the Certificate Management Console to build custom requests.

An Enterprise CA is integrated with Active Directory and requires AD in order to function. The Enterprise CA supports the same enrollment methods as the Standalone CA; in addition, it can accept requests submitted through the Certificates MMC console. An Enterprise CA also allows for computer or user Autoenrollment, which automates certificate request and issuance through a Group Policy setting. Enterprise CAs also use Certificate Templates, which define what types of certificates a user or computer can request from the CA.

    Additionally, PKI related configuration information is stored in Active Directory. This makes it easy for applications and clients to locate an issuing CA, associated Certificate Templates, CRL’s and AIA information.

    Certificate Templates

Certificate Templates allow you to define the values from which a certificate request is generated. When a user requests a certificate based on a Certificate Template, the request generated by the client is built from the configuration of the template. Certificate Templates are key to Autoenrollment, which requires them. Templates reduce management cost and make the CA easier to use, because the user does not have to rely on complicated methods such as certreq.exe and its associated configuration file to generate requests. Certificate Templates are stored in Active Directory and replicated to all domain controllers in the forest, which makes them highly available to all clients in the forest.

    Enrollment

Another important aspect to consider is how clients will enroll for certificates. There are several enrollment methods, and the method you choose may vary. For the majority of customers, most enrollment will be done through Autoenrollment. For one-off situations and testing, manual enrollment or Web Enrollment will typically be used. Beginning with Windows 2008, the Network Device Enrollment Service can be used for network devices to enroll.

    Autoenrollment

Autoenrollment adds a high level of automation to certificate issuance. To deploy Autoenrollment, you must first configure a certificate template from which clients will generate their requests, and then enable the Group Policy setting that turns Autoenrollment on. Once enabled, the computer or user (depending on how the template is configured) will automatically request a certificate based on the template after Group Policy refreshes.
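
When testing, you do not have to wait for the refresh. A quick sketch for triggering an autoenrollment pass on a client on demand (certutil -pulse is available on Vista and later):

gpupdate /force
certutil -pulse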

    Manual Enrollment

There are several different methods for manual enrollment. The first is to use the Certificates Management Console to enroll for certificates directly. The second is to use the same console to generate a request file that can later be submitted to the Certification Authority. Third, the certreq.exe tool can be used to create requests for later submission; certreq.exe can also submit the request to the CA, and download and install the resulting certificate.
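
A sketch of the certreq.exe round trip, using a hypothetical request.inf containing standard [NewRequest] directives such as Subject = "CN=web01.contoso.com" and KeyLength = 2048:

certreq -new request.inf request.req     # Build the request from the INF file
certreq -submit request.req web01.cer    # Submit it to the CA (prompts for a CA if several exist)
certreq -accept web01.cer                # Install the issued certificate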

    Web Enrollment

Web Enrollment is a web page that can be used to submit requests to, and download issued certificates from, a CA. Web Enrollment has typically been used to generate custom requests; however, with the ability to create custom requests in Windows Vista and Windows 2008, there is less of a need for it.

    Application Specific Enrollment

    Many Microsoft applications, such as Internet Information Services and Office Communication Server have built in wizards that assist with enrolling for certificates that are used by those applications.

    Network Device Enrollment Service (NDES)

NDES is a role service that uses the Simple Certificate Enrollment Protocol (SCEP) to allow network devices to enroll for certificates.

    Additional Considerations

    Key Archival and Retrieval

Key Archival is a feature that allows the CA to archive the private key associated with a certificate in its database. A Key Recovery Manager can then recover the private key for a certificate if required. Although separate from Key Archival, an EFS Recovery Agent can be configured to recover EFS-encrypted files.

    Active Directory and Group Policy

    Aside from Autoenrollment, Active Directory and Group Policy allow the configuration of PKI related settings for clients. This includes EFS-related configuration, automatically publishing Root CA certificates to the Trusted Root Certification Store on clients, revocation checking configuration, and more…

    For more information about PKI-related Group Policy settings, please visit the following URL:

    Configure Public Key Group Policy: http://technet.microsoft.com/en-us/library/cc962057.aspx

    Conclusion

    I attempted to cover many aspects of the PKI implementation you will need to consider before deploying a Public Key Infrastructure. There is a great deal of information out there. As I mentioned before I would strongly recommend reading the Brian Komar books. I would also invite you to review PKI related posts on the Ask Directory Services Team Blog:

    http://blogs.technet.com/askds/archive/tags/PKI/default.aspx

    and the Windows PKI Blog:

    http://blogs.technet.com/pki/default.aspx

    As well as the Best Practices for Implementing a Microsoft Windows Server 2003 Public Key Infrastructure, which is located at:

    http://www.microsoft.com/downloads/details.aspx?familyid=0BC67F4E-4FCF-4717-89E8-D0EE5E23A242&displaylang=en.

    And the Windows Server 2003 PKI Operations Guide that can be downloaded from:

http://www.microsoft.com/downloads/details.aspx?FamilyID=8e25369f-bc5a-4083-a42d-436bdb363e7e&DisplayLang=en.

    - Chris “two-fer” Delay

  • So you have a slow logon…? (Part 2)

    Bob Drake again and welcome back for the second part of the slow logon series. In this next part I want to dive into some simple troubleshooting techniques to assist you in isolating the cause of your slow logon.

    Click here for part 1!

Dissecting where the logon is slow is not that difficult to tackle…

    • Is it slow from when you hit the power button to the point where you get to the login screen?
    • Is it quick to get to the login screen but then hangs for a while to get to the desktop?
    • Is it all users, and not administrators?
• Or is it all of the above?

    Troubleshooting will be dictated by the answers to those questions. We will start off with a slow boot that occurs when the power button is hit and it takes forever to get to the logon screen (even though a slow boot is NOT a slow logon). If the slowness occurs when the machine first boots up before you get to the login splash screen, then typically there is either an issue with the core OS, the applications installed, or a combination of both. A great place to start troubleshooting is to enable verbose startup, shutdown, logon and logoff messages (providing the operating system is XP or higher) according to KB 325376. With this enabled you will receive additional information during the boot/login process:

    Policy location (XP and 2003)

    image

    View of additional messages (XP and 2003)

    image

    image

    image

    Policy location (Windows 7, Windows 2008)

    image

    View of additional messages (Windows 7, Windows 2008)

    image

    image

    image

    image

    image

    image

The first thing to determine is whether the delay happens when the machine is “clean booted” (see the clean-boot KB articles for Windows XP/2003 and for Vista/2008/Win7). With MSCONFIG you can selectively prevent all third party services and applications from loading. Before the bashing begins about how necessary the applications on the machine are: this step is essential to establish whether the OS itself or the applications installed on it are the issue. Here is how it’s done:

    1. Click Start then Run and type “MSCONFIG”

    image

2. Select the “Services” tab as displayed, check “Hide all Microsoft Services”, and click “Disable All”.

NOTE: When you “Hide all Microsoft Services” you will see the applications that are installed on the system. Oftentimes there are applications that are crucial to the boot/logon process (like drive encryption software), and disabling them will cause the machine even more problems. You will need to review the applications and disable what you can (the more the better).

Select the “Startup” tab and click “Disable All” once again. If you find that disabling all the third party applications causes a bigger issue, you can press F8 at startup and select “Last Known Good” or “Safe Mode” to back out the MSCONFIG changes.

    image

3. Once the third party services are disabled you will need to reboot (a window will appear stating that you need to reboot; once the machine comes back up, another window will appear when you log on; just click “OK”).

    4. Test and see if the boot time is the same or not.

When you disabled the third party services, did the computer boot and log on faster? If so, a conflicting piece of software on the machine is causing the delay. Finding which one can be a little laborious; the quickest way is a binary search: enable half the services (keeping a list of which ones you enable) and see if the delay returns. If not, enable half of the remaining services, and repeat until the issue comes back. If you suspect one particular application (due to a recent install), you can take the quicker approach of disabling just that one for confirmation. You will have to reboot several times during this process to be confident you have discovered the cause. When you believe you have identified the application causing the delay, re-enable it and confirm the delay comes back, just to be sure. Once you identify the culprit, try updating that application or its components. A scripted way to inventory the candidates is shown below.
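
If you would rather script the inventory than copy it down from MSCONFIG, WMI can approximate the “Hide all Microsoft Services” view. A rough sketch (the path filter is a heuristic, not an exact match for MSCONFIG’s logic):

# List automatic services whose binaries live outside the Windows directory
Get-WmiObject win32_service -Filter "StartMode='Auto'" |
    Where-Object { $_.PathName -notmatch [regex]::Escape($env:windir) } |
    Select-Object Name, State, PathName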

Once you identify the application with the issue, call the vendor, explain why it has been pinpointed as the cause, and seek guidance. Oftentimes there are updates or hotfixes that will resolve the conflict.

    NOTE: Some antivirus applications will still load filter level drivers even though their services are disabled from starting with MSCONFIG. The only way to truly rule out antivirus as a possible contributor to a slow logon is to uninstall it during the test.

    So it’s still slow!?!

If the boot up is still slow, check your client DNS configuration. DNS servers, along with other hardware (like switches and routers), could also be the source of the problem. If one section of your network has the issue but other portions don’t, there is a good chance you have a network problem. A good way to identify network and DNS issues is to take a network trace using a packet capture application like Netmon. The hard part is capturing a trace while the machine is starting up; this can be accomplished by monitoring the computer from another computer plugged into the same hub or a mirrored switch port (port mirroring). You run the capture utility on the second computer and filter for the first one’s IP address.

    Things to look for in the traces are the following:

    • Valid DNS responses (does the query response match what was queried for?)
• Delayed or unanswered responses (from both DNS servers and domain controllers)
    • Kerberos failures
    • SMB failures
    • Numerous TCP resets or retransmits
    • A specific Domain Controller consistently used when the issue happens (possible issue with the Domain Controller itself)

    If there are any network related issues you should see them stand out without having to be a master at reading traces.

    If you have disabled all the third party software using the above method and still find that you have a lengthy boot process, then the next step is to look at what policies are being applied to the machine. A great place to start troubleshooting group policies is the technical reference.

In most environments there will be numerous policies applied, so how do you rule them out as an issue? The quickest method is to create a new “TEST” organizational unit (OU) in the Active Directory Users and Computers snap-in and block policy inheritance on that OU. Once this is done, you can move the problem computer to the OU. Verify that no policies are “enforced” or set to “no override”; policies with those settings will still apply to an OU where policy inheritance is blocked.

    Before you move the computer you should run the following command to find out exactly what group policy objects are linked to it:
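
A sketch of capturing that list with gpresult (the exact switches may vary by OS version; the output file name gp.txt is the one referenced below):

gpresult /v > gp.txt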

     

    image

    When you open the “gp.txt” you can view the policies as shown:

    image

    image

    You can see that the only policies that are applied to the machine are the “Default Domain Policy” and “Local Group Policy” (The above snip was shortened here to show user and computer).

    Once you have the policies identified you can move forward with creating the test OU. Here is a step by step on creating the OU and blocking the inheritance:

    1. Create a “Test” OU and move the computer account to it.

     

     

    image

    2. Open the Group Policy Management Console and block inheritance to the TEST OU.

    image

    Note: You will know that you are blocking inheritance when the OU icon has a blue exclamation as seen:

    image

    3. Once you have inheritance blocked and the computer moved to the OU, reboot the computer at least two times to clear the previously set policies.

There are times when you cannot remove all policies because some are enforced; at least you will be down to a short subset of policies at this point. Time to test again…

If the computer boots fast now, then a group policy (or combination of policies) IS causing the delay. To find which one, link the policies back one at a time, rebooting in between, and note when the delay returns. This is done by selecting “Link an Existing GPO” as seen in the picture above. Once you have identified the policy, a thorough audit should be done to determine which setting in it is causing the delay.

    So it’s still slow…!?!

    If you have gone through the above steps and were not able to find why the boot up is slow and you were not able to disable all software or policies then the next step to do is enable debug logging. There are a few ways to enable logging depending on which operating system you are using.

For 2000, XP, and 2003 you can enable logging by following the article “How to enable user environment debug logging in retail builds of Windows”: http://support.microsoft.com/default.aspx/kb/221833. Lucky for me, one of my co-workers has already written a blog on how to interpret the output (find more in his two part series: Section 1, Section 2). For Vista, 2008, and Windows 7, Microsoft has changed the debug logging format to what is called “Event Tracing”. The data is essentially the same as the output described in the KB article above once it is converted from the binary format. You will need Microsoft’s assistance with converting these files, since they contain source code (to view the policy portion without the profile, use gpsvc debug logging).

Another great tool to use is “Autoruns”. This utility shows you programs that are configured to run during system start or login. One of the best features of this tool is the “Hide Microsoft Entries” option:

    image

    It also allows you to select the autoruns per user as seen:

    image

    Autoruns may also find items not normally seen with other applications.

To wrap it up: most of the time when a slow boot or slow logon happens, there is a conflict with an application, a restrictive group policy, or a configuration issue. With the above troubleshooting and a little homework you will be able to identify where and why you have a slow logon, and resolve it in minimal time.

    More reading here!

    - Bob ‘Quasi-Manager’ Drake

  • So you have a slow logon…? (Part 1)

    Hi, Bob Drake here again after a short blogging hiatus. I have put this two-part blog post together with hope that it will save you countless hours and a few aspirin when troubleshooting a slow logon. I have had the luxury of working many different slow logon cases and I have to say that these can be the most grueling to handle, depending on how they are approached and what information you have. There are multiple reasons why slow logons can occur and sometimes they are a result of multiple issues masked as one.

    For this first part in the series I want to cover some well-known causes of slow logons, optimizing logon for your environment, and assist you with documenting your baseline to identify when you really have a slow logon issue. But before I do we need to set some expectations.

    The “logon process” (I use this term to encompass both the boot up of the workstation and the user login that is completed with a functioning desktop) has a lot of moving parts. The most important question to address is “What is an acceptable logon time to you?” If you have expectations that your logon should only take 3-5 minutes from the time you turn your computer on to the point you get your desktop, you will have a brief window to perform all tasks. Your business requirements will dictate what you will be able to accomplish during the logon, so a thorough understanding of your goals is needed before moving forward.

    Once you have your logon task list, then you can start testing the logon time frame. If all is configured and you are over your accepted limit, then an adjustment will need to be made by either limiting your tasks or accepting the lengthier time. There is a saturation point that will be reached when you try to accomplish too much in too little time.

    So you want to know what the top items are that will definitely slow your logon process? Here’s a list of configurations that will have an impact on your logon time:

• Outdated drivers: Your network interface card (NIC) should have the latest drivers installed.
• Outdated operating system (OS) patch level: Your operating system should have the latest service pack installed from Windows Update.
• Roaming user profiles: Roaming profiles change the way group policy processing is performed. When roaming profiles are configured, processing changes from “asynchronous” (background, multiple at a time) to “synchronous” (foreground, one at a time). This disables “Fast Logon Optimization”, which delays the user getting the desktop by waiting for the network to initialize first.

Note: It is really important to understand that when roaming profiles are implemented, group policy software installation and folder redirection require that the user is NOT logged on before the network is initialized, so policy is processed synchronously – ONE AT A TIME. This is the default behavior, and changing it can cause inconsistencies with your logon. (A quick way to check whether synchronous processing is being forced appears in the sketch after this list.)

• Home folders: These can impact your logon times because instead of looking in a local location for system DLL’s, the client machine will look for them in the home folder. If that mapped network share is across a slow wide area network (WAN) link, you can bet that your logon time is going to suffer even more.

Note: If home folders are needed with roaming profiles, there is a registry tweak (SafeDllSearchMode) that will change this behavior. If you’re not sure whether this is an issue in your environment, take a network trace at logon and see if DLL’s are being queried across the network to the home folder. There is another tweak on the same page (StartRunNoHOMEPATH) that will assist with applications doing this.

• Startup applications: Applications that are configured to run automatically during startup will slow the logon down.
• Profile scanning: Many antivirus applications will scan profiles at logon, and at their home location if they are roaming. This is not limited to antivirus software; other applications do it as well. (In the troubleshooting section we will review how to discover if this is happening.)
• Excessive group policies: Having a ton of group policies that perform extensive tasks or configurations (like software restrictions) will increase your logon time. A few policies that accomplish everything are better than many policies that each do a handful of things. If possible, consolidate your group policies.
• Excessive startup/logon scripts: Scripts that run at logon or startup can delay the process significantly if they perform a lot of tasks or use inefficient code.
• Excessive WMI filters: Having excessive WMI filters can slow group policy processing.
• No local domain controllers: Not having local domain controllers (users authenticating across a wide area network – WAN) will cause a logon delay.
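As promised in the roaming profiles note above, here is a quick way to check whether synchronous foreground processing is being forced on a client. This is a hedged sketch assuming the standard registry value behind the “Always wait for the network at computer startup and logon” policy:

# SyncForegroundPolicy = 1 means foreground (synchronous) policy processing
# is being forced; if the value is absent, the default behavior applies.
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\CurrentVersion\Winlogon'
Get-ItemProperty -Path $key -Name SyncForegroundPolicy -ErrorAction SilentlyContinue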

Before we get into troubleshooting a slow logon, we need to identify what a slow logon is and where it is slow. To be able to say a logon or boot is slow, you must know what a normal logon or boot time looks like in YOUR environment. With the above expectations set, the next step is to document the time a logon takes under normal conditions and under load (morning and afternoon rush hours). This should be done with all the different operating system builds in your environment (desktops, laptops, servers; XP, Vista, Win 7, 2003, 2008, 2008 R2) so you have a standard baseline to work with.

    Here is a short starter list of things to include in your baseline documentation:

    • Network topology
    • Active Directory Topology
    • User and computer group membership
    • Operating system and service pack level
    • Installed applications
• Network bandwidth and latency
    • NIC driver information
    • “UserEnv” log (from several users who are members of different security groups) from XP or 2003, and ETL logs from Vista, 2008 and Win7
    • Network traces
    • Group Policy information (both computer and user)

Once you have a solid baseline of average times, you will know right away when logon times increase and where to narrow your search for the culprit. With the above documentation in hand the issue will be resolved much more quickly; without it you’re setting yourself up for hours of agony and a costly resolution.
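On Vista, 2008, and Windows 7 you can pull hard numbers for that baseline straight from the event log: the Diagnostics-Performance log records a boot-duration event (ID 100) for every startup. For example:

# List recent boot-duration records (event ID 100) from the
# Diagnostics-Performance operational log (Vista and later).
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Diagnostics-Performance/Operational'
    Id      = 100
} -MaxEvents 5 | Format-List TimeCreated, Message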

    Be sure to check out the next part in the series on slow logon where we actually get into the troubleshooting steps.

    See you then…..

    Click here for Part 2!

    More reading here:

     

    - Bob “My idea of a short hiatus is 18 months” Drake

  • Understanding LDAP Security Processing

    It’s Randy again, here to discuss LDAP security. Lightweight Directory Access Protocol is an interface used to read from and write to the Active Directory database. Therefore, your Active Directory Administration tools (i.e. AD Users and Computers, AD Sites and Services, etc.) as well as third party tools are often going to use LDAP to bind to the database in order to manage your domain. As you can imagine, we rely on Windows security to authorize what users can do when accessing this important database. Access to this database is controlled by the Directory System Agent (DSA), which is represented by Ntdsa.dll on a domain controller. Ntdsa.dll runs as a part of the Local Security Authority (LSA), which runs as Lsass.exe.

The process of granting access is a two-step process: authentication and authorization. This blog post will focus on the authentication portion (verifying the user’s identity). If you would like to learn more about the authorization process, please read my post on security tokens. Objects in the Active Directory database conform to the same rules as other Windows objects: they have permissions and privileges that govern what the authenticated user can do.

I use the LDP.EXE utility in Windows 2008 to reproduce all of the scenarios that follow. This tool is a client GUI used to connect, bind, and administer Active Directory. You will also want to download Network Monitor if you are troubleshooting an LDAP problem or want to follow along in your own network traces.

    If you open up LDP and choose “Connect…” under the Connection menu and do not put in any parameters:

    image

    image

    And choose bind as currently logged on user:

     

    image

The first thing that you will notice is that it just works. We locate a valid domain controller when we do not specify a target DC by leveraging the DC Locator process, and we bind as the currently logged on user by leveraging the security packages included in Windows. So what do we see in a network trace? We will be filtering on LDAP when opening the capture in Network Monitor 3.3.

    Why do we get results in LDP.exe before we even Bind?

    The first set of traffic that we see is a couple of LDAP Search Requests and Search Responses. You can see this in the Description field of the Frame Summary:

    image

So why do we see an LDAP search request before we do the LDAP bind? This is because we first access the “RootDSE” partition of the Active Directory database over an anonymous connection to query information on the rules to bind and authenticate to this DSA. We allow anonymous access to this root, but in Windows Server 2003 and later we deny anonymous access to all other partitions in Active Directory. You can change this behavior via the DSHeuristics value; this is a forest-wide change affecting the behavior of all domain controllers.

You can see the LDAP request parameters as “BaseDN: NULL” if you look at the Frame Details pane of the LDAP search request. Expand the “LDAP: Search Request”, then the “Parser: Search Request”, then the “Search Request”:

    image

“BaseDN” is the container where the search begins in the LDAP query. NULL means the RootDSE of the domain controller. This is why results return in the right-hand pane of the LDP utility before we even bind.

    image

    The “LDAP: Search Result” shows this same data if you expand the “LDAP: Search Result”, then expand the “Parser: Search Result”, then expand the “SearchResultEntry”:

    image

    Notice that the results in the trace are identical to what displays on the LDP.exe result pane. One important piece of information that we get is “supportedSASLMechanisms”. These are the supported security mechanisms that we can use in our LDAP bind. We will discuss this later in the post.
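You can reproduce this pre-bind RootDSE read yourself; here is a quick sketch using the ADSI type accelerator in PowerShell:

# Read RootDSE (the same anonymous query LDP issues before the bind)
# and list the SASL mechanisms the DC advertises.
$rootDse = [ADSI]'LDAP://RootDSE'
$rootDse.supportedSASLMechanisms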

    Why do we log on as ‘Null’ when we select to pass credentials?

    So next we bind and select the default of currently logged on user. In the result pane of LDP it shows that we are logging on as ‘NULL’.

     

     

     

    image

I consider this an idiosyncrasy of the tool rather than a completely accurate description. When we perform the bind, we do present ‘NULL’ credentials, but we specify SASL as the authentication type. At this point, we are passing the authentication process off to the SASL mechanism to complete. We can see evidence of this in the network trace.

    Looking at the Bind Request in the frame details pane, you will see some interesting information as you expand out the LDAP packet:

    image

Here we see that we are passing NULL credentials but negotiating SASL authentication. Simple Authentication and Security Layer (SASL) is a method for adding authentication support to connection-based protocols. So basically, LDAP binds with NULL credentials because we are handing the logon process off to SASL and letting it do all the work. We see details of the negotiation process in the bind request, including where we present the Kerberos session ticket as a result of selecting the GSS-SPNEGO SASL mechanism:

    image

    So how does SASL provide authentication?

For detailed information, you can check out the IETF explanation of SASL. Similar to how the Security Support Provider Interface (SSPI) uses security packages, SASL uses registered SASL mechanisms to complete the authentication process. During the initial LDAP query for supportedSASLMechanisms described earlier, we retrieve the list of mechanisms from which the client and server will choose. Below is a list of the default mechanisms in Windows Server 2008.

     

• GSS-SPNEGO: uses Kerberos or NTLM as the underlying authentication protocol.
• GSSAPI: always uses Kerberos as the underlying authentication protocol.
• EXTERNAL: allows a client to authenticate itself using information provided outside of the LDAP communication; can be used to present certificates to establish TLS or IPSec.
• DIGEST-MD5: encrypts authentication using Digest-MD5.

    The LDP tool allows you to choose various mechanisms and is a great tool to test connections when other tools fail. You can select the appropriate bind specifications in order to closely simulate what your application is trying to perform.

    image

The application decides how it will bind to the database by which functions are used to establish the connection (i.e. LDAP_Simple_bind, LDAP_Sasl_bind, etc.).
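To make that concrete, here is a hedged PowerShell sketch using the .NET System.DirectoryServices.Protocols classes; the server name and credentials are placeholders. The AuthType you pick maps directly to the bind behavior discussed above:

Add-Type -AssemblyName System.DirectoryServices.Protocols

# SASL bind (GSS-SPNEGO) as the currently logged on user:
$sasl = New-Object System.DirectoryServices.Protocols.LdapConnection 'dc1.contoso.com'
$sasl.AuthType = [System.DirectoryServices.Protocols.AuthType]::Negotiate
$sasl.Bind()

# Simple bind - the credentials cross the wire in clear text:
$simple = New-Object System.DirectoryServices.Protocols.LdapConnection 'dc1.contoso.com'
$simple.AuthType = [System.DirectoryServices.Protocols.AuthType]::Basic
$simple.Bind((New-Object System.Net.NetworkCredential('CONTOSO\testuser', 'Passw0rd!')))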

    What about LDAP signing?

If you have ever looked through the security settings in Group Policy, you may have stumbled on a couple of settings related to LDAP:

Domain controller: LDAP server signing requirements
Network security: LDAP client signing requirements

These are both under Computer Configuration \ Windows Settings \ Security Settings \ Local Policies \ Security Options.

Signing LDAP traffic is a way to prevent man-in-the-middle attacks: the signature guarantees that the LDAP response originated from the DC to which the request was made. With these settings enabled, a computer in the middle cannot intercept the traffic and modify the data on the wire. You can see the digital signing value in the network trace below. This value will be on every LDAP response from a DC where signing is done.

    image
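For reference, these two policies are backed by registry values; a quick (hedged) way to inspect the current settings on a DC or client:

# DC side: LDAPServerIntegrity (1 = none, 2 = require signing)
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters' -Name LDAPServerIntegrity -ErrorAction SilentlyContinue

# Client side: LDAPClientIntegrity (0 = none, 1 = negotiate, 2 = require)
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\LDAP' -Name LDAPClientIntegrity -ErrorAction SilentlyContinue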

    What is a Simple Bind?

An LDAP simple bind sends the username and password in clear text to provide credentials to the LDAP server. This is not secure, so if your application is using simple binds it needs to be reconfigured or updated. You can create a simple bind by using this option in LDP.exe or by calling the LDAP_Simple_bind function in your code. In a network trace we can see the username and password in the LDAP bind frame.

    image

    With Signing required in my domain, I was unable to perform a simple bind with LDP.exe. I received the error “LDAP_Simple_Bind failed: Strong Authentication Required”.

    What is an Anonymous Bind?

As mentioned earlier, every LDAP connection performs an anonymous bind to query the RootDSE of the LDAP server. Also as mentioned earlier, by changing the DSHeuristics value on your AD forest you can permit anonymous-bind LDAP searches against all the AD partitions. An anonymous bind is still restricted by the permissions on the directory objects themselves. I did an anonymous bind, performed a search for all the user objects in the domain, and got back only one user, because I had manually granted permissions on that user object.

    image

     

     

     

I also had to grant permissions on the Organizational Unit that contained the user; otherwise the anonymous logon was unable to traverse containers where it had no permissions. I did not have to grant access to the Domain Naming Context at the root. I was able to set permissions in “AD Users and Computers” by selecting the “Advanced Features” option in the View menu.

    image
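If you are curious where the DSHeuristics switch mentioned above lives: it is an attribute on the Directory Service object in the Configuration partition, and its seventh character controls anonymous operations. A hedged sketch with the Win2008 R2 AD module (an example, not a recommendation, since this is a forest-wide change):

# Locate the Directory Service object and read dsHeuristics.
$ds = 'CN=Directory Service,CN=Windows NT,CN=Services,' +
      (Get-ADRootDSE).configurationNamingContext
Get-ADObject $ds -Properties dsHeuristics | Select-Object dsHeuristics

# To permit anonymous LDAP operations (7th character = 2), you would run:
# Set-ADObject $ds -Replace @{dsHeuristics='0000002'}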

    What about LDAPS?

LDAPS uses SSL/TLS to establish an encrypted tunnel between the client and the LDAP server. The tunnel is encrypted with the LDAP server’s PKI certificate, so no one other than the client and the LDAP server can read the traffic; the client is therefore free to perform a simple bind and safely pass credentials in clear text. LDAPS is an entirely different subject that deserves its own post. Luckily, James Carr has already written it; check it out here.

    Developers have a plethora of ways to bind to an Active Directory database. This provides supportability for numerous flavors of LDAP clients, but it does require some configuration on the part of the AD administrator. Farewell for now…

    Randy “Baby Elvis, thank you verah muuuch” Turner

  • O’DFS Shares! Where Art Thou? – Part 1/3

Hello, this is Sabin and Shravan from the Microsoft Directory Services support team. We are here today to discuss a common customer issue: a delay in accessing a DFS namespace. These issues may manifest as a poor user experience on the machine, where redirected folders are slow to access or home drives don’t work, and other troubleshooting may bring you to the point where you are now left to troubleshoot slow DFS access. In essence, users can get to the shares but it’s slower than usual – on the order of minutes – resulting in lower productivity. Accessing “\\contoso.com\DFSroot\share” is slow compared to directly accessing the closest DFS server by UNC path, i.e. “\\DFSserver_netbios Name\share” or “\\DFSserver_fully qualified domain name\share”.

This is Part 1 of the 3-part series:

- Part 1: Understanding the Referral Process for Domain-based Namespaces
- Part 2: Client experiencing slowness even though the DFS server accessed is in the same AD site
- Part 3: Client experiencing slowness because the DFS server accessed is in a different AD site (out of site)

    Part 1: Understanding the Referral Process for Domain-based Namespaces

Before we start troubleshooting slow DFS access, it’s important to understand the following:

1. DFSN terminology
2. Referral process
3. Referral cache (viewed with dfsutil /pktinfo – see the commands below)
4. UseCount and Type

All of these are described in detail on the TechNet site under “How DFS Works”. Understanding the DFS referral process is critical.
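Two commands worth keeping handy while you follow along (this is the Windows 2003-era dfsutil syntax; newer builds expose the same data under “dfsutil cache referral”):

# Show the client's referral cache - the output we analyze below:
dfsutil /pktinfo

# Flush the cache so the next access triggers a fresh referral:
dfsutil /pktflush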

    Simplified Referral Process for Domain-based Namespaces

    1. A user attempts to access a link in a domain-based namespace, such as by typing \\Contoso.com\Public\Software in the Run dialog box.
    2. The client computer sends a query to the active domain controller to discover a list of root targets for the domain-based namespace.
    3. The domain controller returns a list of root targets defined for the requested namespace.
    4. The client chooses the first root target in the domain-based root referral. The client connects to the root server and navigates the subfolders under the root folder. When a client encounters a link folder, the root server sends a STATUS_PATH_NOT_COVERED status message to the client, indicating that this is a link folder that requires redirection.
5. The client checks its referral cache for an existing link referral. If the link referral is in the cache, the client computer connects to the first link target in the list. If the link referral is not in the cache, the client connects to the IPC$ (named pipe) of the root server in the context of the current user on Vista clients (the LocalSystem account on pre-Vista clients) and requests a link referral from the root server. The root server constructs a list of link targets in the referral, depending on the target selection method, and sends the referral list to the client.
    6. The client establishes a connection to the first link target in the list.

NOTE: The referrals in steps 3 and 5 above are grouped as a TARGETSET, which indicates a set of targets of equivalent cost. By default the servers are grouped into two TARGETSETs – in site and out of site.

    All this is true if there are no referrals in the client cache. For a detailed understanding of this process (especially when client has cached the referral), please refer to Referral Process for Domain-Based Namespaces on TechNet site.

    Lab Setup (Ex. 1) - Using Windows 2003 servers for DFS servers and Windows Vista as a client

    The troubleshooting steps in this post are based on the following lab setup:

     

Role | O.S. Version | Name
Domain | Windows Server 2003 | Contoso.com
Domain Controllers | Windows Server 2003 | DC1, DC2
Member Servers hosting DFS roots | Windows Server 2003 | MS1 (root), MS2 (root replica)
Client | Windows Vista | VistaClient
DFS Root | Windows Server 2003 | RootDFSN
DFS Link | N/A | Data
DFS Link target hosted on MS1 | N/A | Data
DFS Link target hosted on MS2 | N/A | DataReplica

The Distributed File System layout is as shown below:

- Root – \\contoso.com\rootdfsn
  Root targets:
  - \\MS1\rootdfsn
  - \\MS2\rootdfsn
- Link – \\contoso.com\rootdfsn\data
  Link targets:
  - \\MS1\Data
  - \\MS2\DataReplica

    Snapshot of DFS Snap-in:

    ROOT  image

    LINK image

    Tools Used:

    1. DFSDiag (for Vista/2008 or later clients only) is part of the Remote Server Administration Tools (RSAT) for Windows Vista.
    2. Netmon 3.3 can be downloaded from here.
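For Vista/2008 clients, DFSDiag can validate the referral path before you break out a network trace. A couple of examples against the lab namespace (check dfsdiag /? for the full syntax on your build):

# Check referral health for the namespace and its folders:
dfsdiag /testreferral /dfspath:\\contoso.com\rootdfsn /full

# Verify site associations for the namespace servers and targets:
dfsdiag /testsites /dfspath:\\contoso.com\rootdfsn /recurse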

    Lab Setup (Ex.2) - Using Windows 2008 servers for DFS Namespace Servers and Windows Vista as a client

NOTE: While the troubleshooting steps in this post are based on lab setup Ex. 1, the method to troubleshoot slow DFSN access in the lab setup Ex. 2 configuration is very similar.

     

Role | O.S. Version | Name
Domain | Windows Server 2003 | Contoso.com
Domain Controllers | Windows Server 2003 | DC1, DC2
Member Servers hosting DFS roots | Windows Server 2008 | 2008-MS1 (root), 2008-MS2 (root replica)
Client | Windows Vista | VistaClient
DFS Namespace (or Root) | Windows Server 2008 | 2008NS
DFS Target (or Link) | N/A | Testfolder
DFS Link target hosted on MS1 | N/A | 2008DFSN
DFS Link target hosted on MS2 | N/A | 2008DFSN

The Distributed File System layout is as shown below:

- DFS Namespace – \\contoso.com\2008NS
  Namespace servers:
  - \\2008-MS1\rootdfsn
  - \\2008-MS2\rootdfsn
- Folder name – \\contoso.com\2008NS\Testfolder
  Folder targets:
  - \\2008-MS1\2008DFSN
  - \\2008-MS2\2008DFSN

    Snapshot of DFS Management Snap-in:

    NAMESPACE image

    FOLDER image

    WORKING SCENARIO ILLUSTRATION:

The following is a detailed analysis of the expected response from DFS when the client is in the same site as the DFS server (MS1) hosting the information. “DFSutil /pktinfo” shows the following:

    Entry: \contoso.com\rootdfsn
    ShortEntry: \contoso.com\rootdfsn
    Expires in 208 seconds
    UseCount: 1 Type:0x81 ( REFERRAL_SVC DFS )
       0:[\Ms1\rootdfsn] State:0x110 ( ACTIVE TARGETSET )
       1:[\Ms2\rootdfsn] State:0x100
    Entry: \Ms1\rootdfsn\Data
    ShortEntry: \Ms1\rootdfsn\Data
    Expires in 1729 seconds
    UseCount: 1 Type:0x1 ( DFS )
       0:[\Ms1\Data] State:0x110 ( ACTIVE TARGETSET )
       1:[\Ms2\DataReplica] State:0x100

    ANALYSIS:

    1. User tries to access \\contoso.com\rootdfsn\Data.
    2. Client sends a referral request for domain controllers in Contoso.com.
    3. Client receives a list of domain controllers (DC1 and DC2) in the Contoso.com domain.

    NOTE:

• The domain controllers in the client’s site are at the top of the list.
• If least-expensive target selection is enabled, domain controllers outside of the target’s site are sorted by lowest cost.
• If same-site target selection is enabled, DFS ignores this setting and lists the remaining domain controllers in random order.

4. Client sends a query to the domain controller (DC2) to discover a list of root targets for the DFS namespace.

5. Domain controller DC2 returns a list of root targets (MS1 and MS2), as seen below:

      0:[\Ms1\rootdfsn] State:0x110 (TARGETSET )
      1:[\Ms2\rootdfsn] State:0x100

NOTE: MS1 and MS2 are listed under a single TARGETSET since they are in the same site/cost boundary.

    6. Client selects the first root target (MS1) in the referral and sends a query for the requested DFS link (Data).

    NOTE: PKTINFO indicates a successful connection to the referral returned by listing it as ACTIVE.

    Entry: \contoso.com\rootdfsn
    ShortEntry: \contoso.com\rootdfsn
    Expires in 208 seconds
    UseCount: 1 Type:0x81 ( REFERRAL_SVC DFS )
       0:[\Ms1\rootdfsn] State:0x110 ( ACTIVE TARGETSET )
       1:[\Ms2\rootdfsn] State:0x100

7. Root server MS1 constructs a list of link targets (Data and DataReplica):

    0:[\Ms1\Data] State:0x110 (TARGETSET )
    1:[\Ms2\DataReplica] State:0x100

    8. The client establishes a connection to the first link target in the list (\MS1\Data).

    NOTE: PKTINFO indicates a successful connection to the referral returned by listing it as ACTIVE.

    Entry: \Ms1\rootdfsn\Data
    ShortEntry: \Ms1\rootdfsn\Data
    Expires in 1729 seconds
    UseCount: 1 Type:0x1 ( DFS )
       0:[\Ms1\Data] State:0x110 ( ACTIVE TARGETSET )
       1:[\Ms2\DataReplica] State:0x100

Now that you have a better understanding of the working scenario, it will be much easier to follow the problem cases where the response from DFS shares is slower than expected. Be sure to check out the next part in the series on slow DFS access, where we actually get into the troubleshooting steps for the different scenarios. Ciao!

    -Sabin Nair and Shravan Kumar

  • Windows 2008 R2: Managing AD LDS using the AD PowerShell Module

Hello, it’s LaNae again. Now that Windows 2008 R2 is available, we get to use the coolness of PowerShell with AD LDS. When you install the AD LDS role on a Windows 2008 R2 server, it also installs the AD PowerShell module.

Unfortunately, the documentation in the help files for each cmdlet does not give an example of the syntax for AD LDS. You can find a list of the cmdlets in “What’s New in AD DS: Active Directory Module for Windows PowerShell”, located at

    http://technet.microsoft.com/en-us/library/dd378783(WS.10).aspx

    Active Directory Cmdlets used with AD LDS

    Below you will find a list of Active Directory cmdlets as well as the syntax that can be used to manage AD LDS instances.

    Enable-ADOptionalFeature: Enable an optional feature.

Example: Enable-ADOptionalFeature "Recycle Bin Feature" -Server servername:port -Scope ForestOrConfigurationSet -Target "CN=Configuration,CN={GUID}"

    Get-ADObject: Gets one or more AD LDS objects.

Example: Get-ADObject -Filter 'objectclass -eq "user"' -SearchBase 'partition DN' -Server servername:port -Properties DistinguishedName | FT Name, DistinguishedName -A

    image

    Get-ADOrganizationalUnit: Gets one or more AD LDS OUs

Example: Get-ADOrganizationalUnit -Filter {Name -Like '*'} -SearchBase "partition DN" -Server 'servername:port' -AuthType Negotiate | FT Name, DistinguishedName -A

    image

    Get-ADUser: Gets one or more AD LDS users

Example: Get-ADUser -Filter 'Name -like "*"' -SearchBase "partition DN" -Server 'servername:port'

    image

    Get-ADGroup: Gets one or more AD LDS groups

Example: Get-ADGroup -Filter 'Name -like "*"' -SearchBase "DN of partition to search" -Server 'servername:port'

    image

    Get-ADGroupMember: Gets the members of an AD LDS group

    Example: Get-ADGroupMember -identity 'DN of group' -server 'servername:port' -partition "DN of partition where group resides" | FT Name,DistinguishedName -A

    image

    New-ADGroup: Creates a new AD LDS group

    Example: New-ADGroup -Name "groupname" -server 'servername:port' -GroupCategory Security -GroupScope Global -DisplayName "group display name" -path "DN where new group will reside"

    image

    New-ADUser: Creates a new AD LDS user

    Example: New-ADUser -name "username" -Displayname "Display Name" -server 'servername:port' -path "DN of where the new user will reside"

    image

    ADD-ADGroupMember: Adds an AD LDS user to a group

    Example: Add-ADGroupMember -identity "DN of group" -member "DN of user" -partition "DN of partition where group resides"

    image

    New-ADOrganizationalUnit: Creates a new AD LDS OU

    Example: New-ADOrganizationalUnit -name "OU Name" -server 'servername:port' -path "DN of OU location"

    image

    Remove-ADGroup: Removes an AD LDS group

Example: Remove-ADGroup -Identity 'SID of group' -Server 'servername:portnumber' -Partition "DN of partition where group resides"

    image

    Remove-ADGroupMember: Removes an AD LDS user from a group.

    Example: Remove-ADGroupMember -identity "DN of group" -member "DN of user" -server 'servername:port' -partition "DN of partition where group resides"

    image

    Remove-ADOrganizationalUnit: Deletes an OU in AD LDS

    Example: Remove-ADOrganizationalUnit -identity "DN of OU" -recursive -server 'servername:port' -partition "DN of partition where OU resides"

    image

    Remove-ADUser: Deletes a user from AD LDS

    Example: Remove-ADUser -identity "DN of user" -server 'servername:port' -partition "DN of partition where user resides"

    image

    -LaNae Wade

  • ADFS – Federated WebSSO with Forest Trust Scenario and its Limitations

    Hi, it's Adam Conkle again. Today I’d like to talk about an ADFS case I had recently where the customer ran into some limitations with their Federated WebSSO with Forest Trust setup. They had their environment set up similar to what is described in the TechNet design article:

    image

    As you can see, they implemented a one-way forest trust between their internal network and the DMZ where the DMZ AD trusts the internal AD. Their ADFS-enabled application was not claims-aware. It was an NT token-based application, so it needed to be protected with the NT token-based Web Agent. The application also did Kerberos delegation to a backend SQL server (not shown). They had user accounts in both the internal AD and the DMZ AD which needed access to this application.

    The forest trust is utilized by ADFS only when both the Account Partner (their internal AD) and Resource Partner (their DMZ AD) ADFS consoles are configured as follows:

    -On the Account Partner:

    image

    -On the Resource Partner:

    image

When ADFS is utilizing the forest trust and a user from the internal AD accesses the application, the user SID from the Account Partner AD is available on the Resource Partner side, where the NT token is built. The NT token-based Web Agent builds the NT token locally on the web server. Once the NT token is built and used to access the application, ADFS authentication is complete. Next, their application needs to access SQL, so we must transition to Kerberos to do delegation to the backend SQL service.

Whenever we need to do Kerberos delegation in a Federated WebSSO with Forest Trust scenario, it must be configured for Kerberos protocol transition and constrained delegation. If you have read the documentation on Kerberos constrained delegation, you will remember that cross-forest authentication requires a two-way forest trust. Having ADFS authentication in the picture does not negate the two-way forest trust requirement.
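For context, here is roughly what that configuration looks like on the account running the web application, sketched with the Windows Server 2008 R2 AD module. The account name and SQL SPN are placeholders; the Delegation tab in AD Users and Computers accomplishes the same thing:

# Allow the web agent's service account to delegate only to the SQL service:
Set-ADUser svcWebAgent -Add @{'msDS-AllowedToDelegateTo'='MSSQLSvc/sql01.contoso.com:1433'}

# "Use any authentication protocol" = protocol transition:
Set-ADAccountControl svcWebAgent -TrustedToAuthForDelegation $true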

During this delegation process, an S4U2Self ticket is requested, and we also need to be able to get a TGT for the Account Partner domain. Without a two-way trust in place, the TGT request fails with KDC_ERR_S_PRINCIPAL_UNKNOWN.

Rather than use Federated WebSSO with Forest Trust, you have two other options:

    Option 1: Create resource accounts in the Resource Partner forest

    • Create an Alternative UPN suffix in the Resource Partner forest that matches the UPN suffix of the Account Partner forest
    • Create user accounts in the Resource Partner forest with the same usernames and UPN suffixes as the Account Partner forest
    • Set your Resource Account option in the ADFS console:

    image

    Disadvantage of using this option: High administrative overhead of managing the resource accounts.

    Option 2: Create a group-to-UPN mapping for the Account Partner users

    • Create a group claim for the Account Partner users
    • Create a single resource account on the Resource Partner
    • Map that group claim to the UPN of the resource account you created:
      • Open ADFS.msc on the Resource Partner
      • Expand Account Partners and select the appropriate Account Partner
      • In the right pane, select the UPN Identity Claim, right-click, Properties
      • Select the Groups tab and configure an Organizational Claim to map to a user account

    image

    Disadvantage of using this option: The identity of individual users is lost. All users from the Account Partner are authenticated to the application with the UPN of the single resource account.

    For more information about resource account mapping methods, check out this TechNet Article:
    http://technet.microsoft.com/en-us/library/cc779214(WS.10).aspx

    To conclude, the Federated WebSSO with Forest Trust scenario will work just fine with a one-way forest trust when there is no Kerberos delegation involved. As soon as you need to delegate Kerberos with your ADFS-enabled application and you are using Federated WebSSO with Forest Trust, you must be using the NT Token-based Web Agent and you must have a two-way forest trust in place. If a two-way trust will not work for your environment, consider the alternative options described above.

    Take care,

    Adam “I wish I had cooler t-shirts” Conkle