Microsoft's official enterprise support blog for AD DS and more
Hi, Steve again. I thought I would work through a series of posts about the knowledge critical to fulfilling the Windows Server Domain Admin role. This topic carries a ton of breadth and depth. To begin, you have to find out where all this knowledge and training is located. My goal is to get you started down this path so you are exposed to the different technologies you will need to understand and master in order to become a Domain or Enterprise administrator.
Building the required depth of knowledge may take years. With some help and guidance I hope to reduce this time to several months. For folks who have already started down this path, or are already fulfilling this role, some of the topics I reference may hold value as well. A lot of great information exists for Windows Server 2003, and I will focus on those resources.
As I go through this blog there will be links to more information. These links constitute required reading to achieve our goal of being domain admin ready. It may take some folks weeks to progress through this material. I intend to develop follow-up posts that discuss certain topics in more detail, especially ideas revolving around the design portions of Active Directory (AD). I would like to present a theoretical company's environment and build an actual AD design from it.
So let’s get started. You have been using a computer for your personal use and you have just been hired to the helpdesk at your company to manage user accounts.
Microsoft has published an enormous library of technical information and other information on TechNet. What is TechNet? Well, that would be a blog unto itself, but as a quick reference you can find the technical libraries for most of our products there. You will also find educational resources, downloads, event information, webcasts and newsgroups. Microsoft also provides a learning portal designed for IT Professionals here.
Let’s talk about design first; each company has to choose an AD design. The simplest is a single forest/single domain, where all of your accounts are stored in one domain. By default the first domain you create is the forest root domain; you can add more domains to your forest as children of the root, or as separate trees in the forest. So how do you decide how many domains you should have? The vast majority of companies can live comfortably within one forest, but may require multiple domains within that forest for a variety of reasons. This link discusses planning an AD deployment and choosing a logical structure. The more complicated your design, the more time and effort is required to manage your environment. Ask yourself whether your company requires more than one group of administrators to manage computer and user accounts, or requires isolation of data or resources for security purposes. If the answer is clearly yes, then you can plan on having more than one domain. As a general rule of thumb, try to match your AD structure to your company structure whenever possible. Domains are quite often used to isolate and group resources, and normally domain administrators don’t have access to resources in another domain. Locality might also play an important role in domain structure; whether due to network isolation or even language differences, several companies have chosen to place different geographic locations in separate domains. To choose the correct design for your company you will need to engage participants from all of your business divisions so they can share their requirements for resources.
AD allows admins to create logical containers within a domain that allow you to group resources for control and/or manageability. These containers are called Organizational Units, or OUs for short. You may decide to create an OU structure for user accounts that is separate from group accounts or computer accounts. You can further refine your collections of accounts based on business function or geography.
AD also supports objects that describe the physical organization of your network. For example, a site is defined by one or more network subnets. AD sites control which AD resources within the domain or forest a client should use. Typically we want clients to use resources within their local site rather than traversing to other sites.
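To make the site concept concrete, here is a minimal sketch of how a client's IP could be matched to a site by its subnets. The site names and CIDR ranges are invented for illustration; real AD stores this mapping in the configuration partition.

```python
import ipaddress

# Hypothetical site-to-subnet table; names and prefixes are invented for illustration.
SITE_SUBNETS = {
    "Redmond": ["10.1.0.0/16"],
    "London": ["10.2.0.0/16", "10.3.0.0/24"],
}

def site_for_client(ip):
    """Return the AD site whose subnet contains this client IP (longest prefix wins)."""
    addr = ipaddress.ip_address(ip)
    best, best_prefix = None, -1
    for site, subnets in SITE_SUBNETS.items():
        for cidr in subnets:
            net = ipaddress.ip_network(cidr)
            if addr in net and net.prefixlen > best_prefix:
                best, best_prefix = site, net.prefixlen
    return best

print(site_for_client("10.2.5.9"))   # London
```

A client whose IP does not fall in any defined subnet has no site, which in real deployments leads to unpredictable DC selection; keeping the subnet table complete is part of the job.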
By now, you are probably getting the picture that AD design is flexible enough to support a wide variety of logical models. As an exercise you might consider an AD design for a large international company named Contoso. Just to make it easy, say we have 30,000 employees in Redmond, WA, USA and 8,000 in North America, South America, Europe and Asia. Right from the start you are going to have questions where you will need to engage other business divisions to get answers. For instance, the first question you might want to answer is: “Is one forest enough, or should I isolate a segment of our business entirely and set up a trust between two forests?” The next question might be “Should I use one domain, or multiple domains to organize and manage my resources?” You should also engage the network experts in your organization to understand how best to map the physical structure of your network into AD.
The steps in choosing a design are critical to the success of a company’s IT infrastructure. Even though it seems like there is no “right” answer, there are definitely going to be “wrong” answers. The best advice I can give is to design a few models and start discovering the pros and cons of each.
There are fewer factors involved in choosing a namespace design. This document covers your choices well. Different business divisions might have some requirements with regards to namespace and they will need to be engaged in this discussion. You will find this discussion loops into the AD design as well and can be considered jointly.
Ok, let’s say we have selected a namespace and we know how many forests, domains and sites fit our company best. What’s our next step in AD? We need to choose a structure for where our accounts in AD will be stored. We want a logical structure that makes our objects in AD easy to find and manage. You do this by creating OUs within your domain. One choice you may make is whether to create separate OUs for user and computer accounts or to combine them; keeping them separate will make manageability easier. You may group similar machines in the same OU, or break out accounts based on business function or geographic location.
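An OU layout ultimately shows up as distinguished names on objects. The sketch below builds DNs for a hypothetical layout with separate Users and Computers OUs subdivided by department; the domain name and OU names are assumptions for illustration.

```python
# Hypothetical domain; separate top-level OUs for Users and Computers,
# each subdivided by department.
DOMAIN_DN = "DC=contoso,DC=com"   # assumed domain name for illustration

def ou_dn(*ous):
    """Build a distinguished name from innermost OU outward, e.g. ('Accounting', 'Users')."""
    return ",".join("OU=%s" % ou for ou in ous) + "," + DOMAIN_DN

print(ou_dn("Accounting", "Users"))
# OU=Accounting,OU=Users,DC=contoso,DC=com
```

A consistent naming scheme like this makes it easy to scope delegation and group policy links later on.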
Having a methodical and tested plan is key here. AD is an important component of the organization and we must maintain its availability. There are several failure scenarios that need to be covered in a recovery plan. Here are points of failure that should be included:
Each one of these events or a combination is possible. You will need to work through these potential events and determine a clear and concise set of steps that need to be completed to resolve the problem.
Regular backup of AD is critical. Testing these backups to make sure you can quickly and easily restore the data is best practice. All too often backup is scheduled but it’s not running or you cannot restore the data.
What are FSMO roles and how should they be distributed in your environment? This varies based on forest size. In the smallest environments we might put all roles on one server. In a large environment we might choose to place each role on a different machine. The FSMO PDC Emulator (PDCe) role carries the highest volume of work. In a large domain you would probably want to make sure the DC hosting the PDCe role is isolated from other duties, such as being targeted by LDAP-based applications, or serving as a deployment server, web server, or file and print server. It is important to make sure the forest and each domain have these FSMO roles assigned to a specific DC.
In addition to server selection, we might also distribute the FSMO role holders by physical location. This may be the most secure site, the site with the best network availability, or the one with the highest number of client requests.
Many companies have time requirements for their computer systems. By default the PDCe FSMO role holder is at the top of the time pyramid for the domain. Other domain members use the W32Time service to synchronize their system clocks. Keeping the PDCe synchronized with an accurate source will help keep your domain members’ time accurate.
As domain admin you will need to know how new accounts are created. Question: “Will HR be creating user accounts within their own software, and does that software create new user accounts in AD?” Some environments have very complex user provisioning scenarios. In simpler scenarios the user account may be created by an administrator, who manually creates the mailbox and configures group membership. In more distributed environments the Account Operators group may be used, and in the most complex scenarios it may be the Human Resources department or the hiring manager who creates the user accounts.
There is a lot of information that falls under the group policy umbrella that a domain admin needs to be familiar with. In a nutshell you can use group policy to configure computer and user configuration settings on machines throughout your domain. You’ll want to familiarize yourself with GPMC, the group policy management tool.
There are approximately 2,400 settings in Windows Vista that can be set through group policy. This gives admins a good deal of flexibility in configuring computer and user settings. You can use group policy not only to restrict the abilities of specific users in your domain, but also to enhance their experience.
There are several general categories that can be controlled, including: application deployment, certificate enrollment and trust, logon/logoff/startup/shutdown scripts, Restricted Groups settings, Internet Explorer configuration, disk quotas, user folder redirection, user rights and security configuration, etc. The list goes on and on depending on your needs.
Group policy is also flexible; you can link group policies at the domain, site or OU level. Clients that are members of these containers, either directly or through inheritance, will receive the linked policies. Therefore, it follows that your AD design can ease the distribution of user and computer configurations.
Here again, collaborating with different business divisions within your organization is a must. Workstations in the accounting department will most likely need different software and access to their local machines than users in the marketing department. Similarly, web servers will have different configurations than domain controllers. You can individually set the configuration as part of an image during the build process, or manually change a machine's configuration, but when you want to change thousands of machines at once, Group Policy is definitely the way to go.
On top of normal GPO settings, we now have Group Policy Preferences, which increase the flexibility and extend the capability of what administrators can do with group policy.
As the domain admin you have the proverbial “keys to the kingdom” for the resources in your domain. Protecting those resources is a big security responsibility. Domain resource access requires authentication, and there are several levels of authentication; you will want to implement the highest level of security where possible.
Windows 2003 implements several authentication protocols: Negotiate, Kerberos, NTLM, Secure Channel, and Digest. It’s also extensible, so other authentication protocols can be added.
NTLM is a challenge-response authentication mechanism. The client attempts to access a resource and is challenged by the server. The client sends the username and a response computed from the challenge and a hash of the user account’s password, and the server attempts to validate the credentials against a domain controller in the user’s domain. Therefore, the server must chain back to the user’s domain for authentication. NTLM has several variations, and this is only one of them. Anonymous access also falls under this category: if the username and password are null, the target machine will attempt to log the user on as anonymous, and if the server resource accepts anonymous authentication, the client will get access.
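To illustrate the shape of a challenge-response exchange, here is a deliberately simplified sketch. This is not the actual NTLM algorithm (NTLM uses its own hash and response formats); it only shows the core property that the password itself never crosses the wire, just a keyed response to a random challenge.

```python
import hashlib
import hmac
import os

# Simplified challenge-response illustration; NOT the real NTLM algorithm.
def password_hash(password):
    """Stand-in for the stored password hash (real NTLM uses an MD4-based hash)."""
    return hashlib.sha256(password.encode()).digest()

def client_response(password, challenge):
    """Client keys the server's challenge with its password hash."""
    return hmac.new(password_hash(password), challenge, hashlib.sha256).digest()

def server_verify(stored_hash, challenge, response):
    """DC recomputes the expected response from the stored hash and compares."""
    expected = hmac.new(stored_hash, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)                     # server's random challenge
resp = client_response("P@ssw0rd", challenge)  # computed on the client
print(server_verify(password_hash("P@ssw0rd"), challenge, resp))  # True
```

Note that the server only needs the stored hash, not the cleartext password, which is exactly why the resource server must reach a DC in the user's domain to complete the check.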
Kerberos is a more secure and efficient form of authentication than NTLM. It is the default authentication package in most cases beginning with Windows 2000. To summarize Kerberos authentication: a client asks for a service ticket for the server resource it wants to access, receives the ticket, and forwards it to the resource server to be authenticated. Wherever possible you would want to configure authentication to use Kerberos.
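A toy model can show why the ticket approach is efficient: the KDC seals the ticket with the service's own key, so the service can validate it locally without contacting a DC for every request. Everything below (key material, ticket fields, the HMAC seal) is an invented simplification, not the real Kerberos wire format.

```python
import hashlib
import hmac
import json
import time

# Toy model of Kerberos ticket issuance; the real protocol encrypts tickets
# and session keys, this sketch only seals them with an HMAC for illustration.
KDC_KEYS = {"web01": b"server-secret"}   # keys the KDC shares with each service

def issue_ticket(client, service):
    """KDC builds a ticket and seals it with the target service's key."""
    ticket = json.dumps({"client": client, "service": service,
                         "issued": int(time.time())}).encode()
    seal = hmac.new(KDC_KEYS[service], ticket, hashlib.sha256).hexdigest()
    return ticket, seal

def service_accepts(service_key, ticket, seal):
    """The service validates the seal with its own key; no DC round-trip needed."""
    expected = hmac.new(service_key, ticket, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, seal)

ticket, seal = issue_ticket("alice", "web01")
print(service_accepts(b"server-secret", ticket, seal))  # True
print(service_accepts(b"wrong-key", ticket, seal))      # False
```

Contrast this with the NTLM flow above, where the resource server had to chain back to a DC to validate each client.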
Certificates can be used for authentication as well. Certificate technologies have grown in scope and complexity over the past several years, and more and more technologies are using certificates to increase security. So even though a certificate is not an authentication protocol, it is used in conjunction with authentication protocols to increase security. For example, smartcard authentication uses a certificate that is installed on a physical card. The card is placed in a smartcard reader, and the user provides a PIN to access the certificates on the card. In this way it's two-factor authentication, because we are using something we have (the smartcard) and something we know (the PIN).
Certificates can also be used to increase security by encrypting network traffic. Secure Sockets Layer (SSL) is a well known method of encrypting traffic that can also provide server identity. S/MIME is another common scenario, where users can encrypt and digitally sign their email.
We are seeing more and more companies implementing their own internal Certificate Authority infrastructure. Having a certificate authority for your domain allows you to assign both user and computer certificates through both automated and manual methods. Using these certificates can significantly increase the security inside and outside your network.
Authentication can be difficult to manage. Two very common scenarios are choosing authentication methods in SQL and IIS. It would be nice if all the applications in your enterprise supported Kerberos so you could worry about just one method, but that’s not realistic. It may be an overwhelming task to determine the configuration of all applications. The scenarios that should concern you are those where plain text or basic authentication is being used. You’ll want to restrict this behavior as much as possible and never use your domain admin credentials to access those applications. If, however, it is the only method available, at the very least the authentication traffic should be encrypted using certificates.
Domain admins must determine whether they will allow a trust to be established with another domain or forest. Moving to the Windows Server 2003 forest functional level allows you to establish a forest-level trust, and thereby inherit trusts for the domains within the other forest.
We can use certificates to provide encrypted sessions to servers. The most common example would be using HTTP over SSL. In this case, we would issue a server certificate to the web server, which confirms the server’s identity and allows users to establish an encrypted session. For internet-facing web servers, you would normally purchase a certificate from a trusted authority.
Another example of using SSL internally within your organization is LDAP over SSL (LDAPS). Typically our domain controllers service client requests over port 389; LDAPS uses port 636. We can support applications that are LDAPS-enabled by installing a server authentication certificate on our DCs.
These two technologies provide file encryption. BitLocker was introduced in the Windows Vista operating system. It provides whole-drive encryption that is seamless to the user, protecting both data and operating system files, and is especially useful on laptops where a user may not be able to maintain physical security of the device. EFS is the technology we use to encrypt specific files on a computer. By default the domain admin is the recovery agent for all EFS files in the domain. These encryption technologies can also remove your access to data, and that data may be lost. Care needs to be taken to design a proper system in which the domain admin decides who can use encryption and for what purposes, as well as who will be the recovery agent in case a client cannot decrypt their files.
Resource access is controlled through an access control list (ACL) in most situations. Fundamentally, we need to determine whether we will build ACLs from individual users or from groups. We recommend securing resources with domain-based groups whenever more than one user will access the resource. Adding a user to the group gives them access to all those resources and, conversely, removing them revokes that access. Change management is much easier when group membership matches resource needs. As a domain admin, securing groups and controlling group membership means striking a balance between people who do not want to use any groups and those who would make a person a member of a thousand groups.
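The group-based model can be sketched in a few lines: the ACL names groups, and membership in a group grants access, so revoking access is a single membership change rather than touching every resource. Group names, the share path, and user names below are hypothetical.

```python
# Group-based access check sketch: ACLs name groups, membership grants access.
GROUPS = {"FinanceShare-Read": {"alice", "bob"}}       # hypothetical domain group
ACL = {r"\\server\finance": {"FinanceShare-Read"}}     # resource -> allowed groups

def has_access(user, resource):
    """A user has access if they belong to any group the resource's ACL allows."""
    return any(user in GROUPS.get(g, set()) for g in ACL.get(resource, set()))

print(has_access("alice", r"\\server\finance"))  # True
GROUPS["FinanceShare-Read"].discard("alice")     # one membership change...
print(has_access("alice", r"\\server\finance"))  # ...revokes access everywhere: False
```

Had the ACL listed users directly, revoking alice's access would mean editing every resource she was granted on, which is exactly the change-management pain the group model avoids.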
Some businesses are required by government regulations to maintain auditing at a certain level. Outside of these bounds, security auditing needs to be controlled closely on the domain controllers. This would include not only capturing the data but also periodically reviewing the audit logs to confirm their content.
Password Policy – This is rather straightforward. You mainly need to determine the level of complexity, the password expiration age, and the lockout threshold. You want passwords that are secure and change on a regular basis, but not a policy so stringent that it costs your company money in lost productivity and helpdesk calls. Complex passwords are the best way to increase security. On the other hand, there is a certain theory that a low account lockout threshold will increase security.
The lower the lockout threshold, the more frequently accounts will be locked out; any threshold below 7 will most likely increase lockouts dramatically.
Choosing the right combination of lockout threshold, duration and complexity will help keep everyone working with an acceptable level of password security.
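As a rough illustration, here is a check along the lines of the Windows complexity rule (minimum length plus characters from at least three character classes). The exact thresholds below are assumptions for the sketch; the real policy also excludes parts of the user's name.

```python
import string

# Sketch of a password complexity check: minimum length plus
# characters from at least three of four character classes.
def meets_complexity(pw, min_len=8):
    classes = [
        any(c.islower() for c in pw),            # lowercase letters
        any(c.isupper() for c in pw),            # uppercase letters
        any(c.isdigit() for c in pw),            # digits
        any(c in string.punctuation for c in pw) # special characters
    ]
    return len(pw) >= min_len and sum(classes) >= 3

print(meets_complexity("Tr0ub4dor!"))  # True
print(meets_complexity("password"))    # False: one class, even at 8 characters
```

A rule like this raises the cost of brute-force guessing, which in turn lets you keep the lockout threshold high enough that ordinary typos do not flood the helpdesk.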
Care should be given to examine specific accounts that handle sensitive data. Not only should the data be closely protected but also the accounts that are used to control that data. This may include company executives, domain administrators, HR and Finance employees and application service accounts.
Delegation of Control – Depending on your organization’s size you may have a highly distributed group of users that modify Active Directory objects. The goal would be to give each person responsible for AD management the least privilege required to perform their responsibilities.
For ease of use, Active Directory provides specific groups for common tasks: Server Operators, Account Operators, Backup Operators, etc. These groups have predefined access to domain resources. Other actions may be non-standard and require specific permissions in Active Directory. Several applications write to Active Directory, and their service accounts will need specific access rights; typically these are applications that need Enterprise Admin permissions to install. In addition, they may create groups particular to the application that allow it to write to Active Directory. Microsoft Exchange is a good example.
ACLs on AD objects – For the most part, the default permissions on an object within Active Directory will be acceptable. It can be very difficult to manage and troubleshoot access problems when you are not using a standard approach to control access, and setting specific restrictions on particular objects is where administration can turn into a nightmare. Make sure strict change control and documentation are enforced whenever making changes to AD. Keep in mind that Active Directory will outlast many of your administrators. Nobody wants to be trying to back out changes that were completed a year ago without documentation of what was changed.
Although these technologies are managed and mostly controlled by our networking group, domain admins need to understand the concepts associated with network design and administration. At the core, TCP/IP is our primary communication protocol suite.
DHCP is how our hosts get dynamically assigned network addresses, and how we configure clients for DNS registration and for WINS and DNS server discovery.
WINS is an older technology for NetBIOS name resolution that is still in use on many networks.
DNS is more critical to a fully functioning and distributed AD environment. Netlogon is used to register the domain controller records in DNS. These records allow clients to discover domain roles and services within their AD site.
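The records Netlogon registers follow a predictable pattern under the `_msdcs` subdomain; a site-aware client queries the site-specific name first and falls back to the domain-wide one. The sketch below builds those names for an assumed domain and site; it is a naming illustration, not a DNS query.

```python
# Build the SRV record names a client would query to locate a DC,
# following the standard _msdcs naming pattern. The domain and site
# names are assumptions for illustration.
def dc_srv_records(domain, site=None):
    """Return DC-locator SRV names in query order: site-specific first."""
    names = []
    if site:
        names.append("_ldap._tcp.%s._sites.dc._msdcs.%s" % (site, domain))
    names.append("_ldap._tcp.dc._msdcs.%s" % domain)
    return names

for name in dc_srv_records("contoso.com", site="Redmond"):
    print(name)
```

If these records are missing or stale, clients cannot find a DC at all, which is why verifying Netlogon's DNS registrations is a standard first troubleshooting step.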
Users are accessing our network with greater frequency from outside the LAN. To work closely with your network counterparts you need to understand some of their technologies, including but not limited to: routing, remote access, VPN, IPSec, wireless, RDP, etc.
There are two types of firewalls that you will encounter: the one at the edge of your network’s boundaries and one installed on workstation/server computers. While the network team will manage the perimeter firewall, the firewall installed on your clients and servers may be managed by Group Policy and, hence, in the domain admins’ realm.
For the firewalls on the perimeter, you will need to be familiar with the ports that are required to be open.
The Schema Master and Domain Naming Master FSMO roles are forest-wide roles. The PDCe, RID Master and Infrastructure Master are domain-specific roles. In a small domain scenario you may have all five roles installed on the same server. In larger environments you will most likely distribute these roles to separate machines, and if you have more than one domain in your forest, you will need to host the domain roles on machines in each domain. Usually the roles are hosted on machines in the data center or hub site. As far as workload goes, the PDCe role handles far more activity than all the other roles combined.
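The arithmetic of role placement falls straight out of the scoping rules: the two forest roles exist once, while the three domain roles repeat per domain. A small sketch of that count:

```python
# The five FSMO roles and their scope: two per forest, three per domain.
FOREST_ROLES = {"Schema Master", "Domain Naming Master"}
DOMAIN_ROLES = {"PDC Emulator", "RID Master", "Infrastructure Master"}

def fsmo_role_count(domains_in_forest):
    """Total role assignments needed: forest roles once, domain roles per domain."""
    return len(FOREST_ROLES) + len(DOMAIN_ROLES) * domains_in_forest

print(fsmo_role_count(1))  # 5 role assignments in a single-domain forest
print(fsmo_role_count(3))  # 11 role assignments across a three-domain forest
```

In the single-domain case all five can sit on one DC; as the count grows with each added domain, so does the value of documenting exactly which DC holds what.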
Global Catalog (GC) server placement is also a concern for efficiency. GCs are required for logon and need to be distributed efficiently. Having a GC present in remote sites will help significantly reduce the amount of logon traffic and the time required for logon. Conversely, a GC in a remote site will consume network bandwidth during replication cycles.
The schema is the base configuration for Active Directory. It defines the types of objects that can be created inside the database. Changes to the schema can be difficult or impossible to undo. As the domain is upgraded to new versions, there is typically a schema update associated with the upgrade. Other applications, such as Exchange or third-party products, may also extend the schema. Before any schema update, ensure a rollback or recovery process is in place: removing a schema extension may not be possible, and corruption of the schema may cause permanent malfunction of AD. You will need to be a member of the Schema Admins group to modify the schema, and the modification must take place on the Schema Master.
The configuration container holds a lot of information, but very little of it will be actively managed. For instance, it contains information about the configuration of the Active Directory forest and its associated partitions, including AD sites and enterprise services such as Certificate Services and Exchange email. Extended rights inside AD, such as changing the FSMO role holders, are defined here. There is just one configuration partition per forest, so whatever is written here is available to all domains in the forest.
This is where all of your users, computers and groups are stored. But if you enable Advanced Features in the AD Users and Computers (ADUC) snapin, you will see that a lot more data is contained in the system container. There is information associated with the AdminSDHolder process, which protects your admin accounts from losing permissions in AD; domain DNS information and Group Policy data are also stored here. Rarely, if ever, will you modify items in the system container using ADUC, and it would be wise to restrict delegation to this container. On the other hand, the AD Delegation of Control wizard makes distributing permissions to other admins or account management staff easy.
You can store and replicate DNS information for the forest and for specific domains on DCs. Active Directory-integrated zones are stored in AD; standard primary zones are stored as files on the DNS server.
Here is a brief description of the core toolset a domain admin may use on a regular basis or under emergency conditions.
This is a small subset of tools that an admin will use but these tools seem to be used regularly. There are frequently other applications with similar or overlapping features. There are even software packages that will monitor and manage Active Directory.
There are many new features and services that released with Windows Server 2008. Some of the key changes for 2008 Active Directory Domain Services are: read-only DCs, disaster recovery improvements, and fine-grained password policy.
It’s been a long road to get to this point if you have been following all the links along the way. This is in no way intended to be the “be all and end all” of domain admin knowledge. If you are a domain admin already, I hope you gained some knowledge in some areas; if you are new to domain administration, hopefully you learned a great deal. You might now look to official Microsoft curriculum or certification; please visit the Microsoft Learning website.
- Steve ‘Milk in the fridge’ Taylor
Ned here. From the ‘holy smokes!’ file, here’s your chance to win a Cray desk-side supercomputer, MSDN subscriptions, or Xbox360 consoles:
Super Duo Cray-Microsoft Contest
Enter by January 31st, no purchase necessary – you just register and download the free datasheet and 180-day eval copy. Sorry fellow Microsofties, we’re not eligible to enter… :(
- Ned ‘640K is NOT enough for anyone’ Pyle
Hi All Rob here again. I thought I would take the time today and expand upon the Kerberos Delegation website blog to show how you can use the web site on IIS 7. Actually, Ned beat me up pretty badly for not showing how to set the site up on IIS 7 [I sure did. Rob’s revenge was to make a blog post so editorially complex that it took me forever to format and publish – Ned].
First thing: I am not going to go over the entire setup to get it working, since all the Kerberos delegation steps are exactly the same. However, if you have looked at IIS 7, the interface is totally different from previous versions.
1. Launch Server Manager and select Roles in the tree view.
2. Next click on the Add Roles link in the right hand pane.
Figure 1 - Adding Roles
You will get the Select Server Roles dialog as shown below.
3. You need to click on Web Server, and immediately you will get another dialog box for the Additional Features that need to be installed for the Web Server to function.
4. Click on the Add Additional Features button and click Next.
Figure 2 - Adding Web Server Role
You will then be shown another dialog box to select Role Services.
5. You need to make sure that you select ASP.NET. You will again be prompted for additional required Features and click on the Add Required Features button.
Figure 3- Selecting ASP.Net Role Services
NOTE: You will also need to add the Authentication methods you want the IIS server to support. For demonstration purposes we are only adding Windows Authentication, and Basic Authentication.
Figure 4 - Authentication Modes
6. Once you have selected all the Role Services, click Next.
7. Just prior to the installation of the Web Server role you are given a screen that lists the role services you are about to install.
Figure 5 - Confirming Role Services
8. Then click the Finish button.
The IIS 7 interface is totally different from previous versions of the IIS MMC snapin. IIS 7 can also perform authentication in kernel mode, which was not possible in previous IIS versions.
1. Launch the Internet Information Services (IIS) Manager snapin.
2. Expand the tree view and highlight the web site.
3. Right click on the web site (in the figure below we used Default Web Site), and select Add Application.
Figure 6 - Adding Web Application
4. Type in the alias name you want to use for the application, and the file path to the application directory for the web site.
5. Make sure that you are using the Classic .NET AppPool as the application pool for the web application.
Figure 7 - Configuring Web Application Settings
6. After you have added the web application, select the application directory.
7. Select Authentication as shown in the figure below.
Figure 8 - DelegConfig Authentication settings
8. Double click on Authentication.
9. Highlight Windows Authentication, then right click and select Enable. If you want to support other authentication methods, you can enable those and disable the ones you do not want to support.
10. Now highlight Anonymous Authentication, then right click and select Disable.
Figure 9 - Enabling Windows Authentication
Now that we have installed IIS and the application, you need to decide what account will be used as the Application Pool identity. The configuration is drastically different depending on the account used: if you use Network Service you configure the system one way, and if you use a domain-based account you configure it another way.
I will cover both methods. For the most part, the simple configuration is to use Network Service as the Application Pool identity; it works most of the time, except in cases where you have multiple web servers in a load-balanced configuration.
1. In the Internet Information Service (IIS) Manager snapin select Application Pools in the tree view.
Figure 10 - Verifying AppPool Identity
2. Verify that the identity being used is NetworkService.
3. Next navigate to the web application. In my lab it is the DelegConfig application.
4. Double click on Authentication while you have the web application node selected in the tree view.
Figure 11 - Web Application Authentication mode
5. Make sure that Windows Authentication is enabled. This should have already been done while installing the web application.
6. Right click on Windows Authentication and select Advanced Settings…
Figure 12 - Advanced Settings
7. In the Advanced Settings… dialog box you want to make sure that Enable Kernel-mode authentication is checked.
Figure 13 - Enable Kernel-mode Authentication
8. After this, complete all the normal steps in the domain to support Kerberos delegation.
9. Then reboot the server.
10. Test the application and it should work.
Figure 14 - Verifying AppPool Identity
2. Verify that the identity being used is the domain-based account you want.
3. If it is not, right click on the Classic .NET AppPool application pool and select Set Application Pool Defaults…
Figure 15 - Setting the Application Pool Defaults
4. You will get a dialog box like the one shown below. Change the identity being used, as highlighted.
Figure 16 - Application Pool Defaults
5. Select Custom account and click on the Set button.
Figure 17 - Setting custom account identity
6. Type in the domain name and user account that will be used; the password will need to be entered twice.
Figure 18 - Typing in the credentials
7. Click the OK button on the Set Credentials dialog box.
8. Click the OK button on the Application Pool Identity dialog box.
9. Click the OK button on the Application Pool Defaults dialog box.
10. Next, navigate to the web application. In my lab it is the DelegConfig application.
11. Double-click Authentication while you have the web application node selected in the tree view.
Figure 19 - Web Application Authentication mode
12. Make sure that Windows Authentication is enabled. This should already have been done in the section on installing the web application.
13. Right-click Windows Authentication and select Advanced Settings…
Figure 20 - Windows Authentication Advanced Settings
14. In the Advanced Settings dialog box, make sure that Enable Kernel-mode authentication is unchecked.
Figure 21 - Disable Kernel-mode authentication
15. Add the application pool identity account to the following local computer groups: Administrators and IIS_IUSRS.
Figure 22 - Add the account to the proper groups
16. After this, complete the normal domain configuration needed to support Kerberos delegation.
17. Reboot the server.
18. Test the application; it should work.
If you need to understand how to set up the delegation, please visit my previous blog post about the DelegConfig website here. There you will see how to add and delete service principal names (SPNs), as well as how to configure delegation within Active Directory Users and Computers (ADUC).
I hope that you have found this blog helpful in getting your first IIS7 server configured to use the DelegConfig website, and that it got your feet wet on configuring IIS7 to support Kerberos authentication.
- Rob ‘ScreenShot’ Greene
New KB articles related to Directory Services for the week of 1/18-1/25.
After you re-add a member server to a DFS replication group in Windows Server 2003 R2, initial replication does not occur on the member server, and changes in the replicated folder are replicated unexpectedly to other replication partners
SMB functions time out earlier than expected on a Windows 2003-based computer that has multiple processors installed
Error message when you try to browse a DFS namespace from the DFS Management tool: "An item with the same key has already been added"
Certificate store ACL change supportability
How to enable Access-based Enumeration for a Distributed File System (DFS) share in Windows Server 2008
The User Profile Service (ProfSvc) crashes in Windows Server 2008 or Windows Vista SP1 systems when the service loads or unloads user profiles that contain compressed files or folders
Error message when you try to access a network share in a private network: "There are currently no logon servers available to service the logon request"
Error message when you try to unlock a Windows Vista-based or a Windows Server 2008-based computer that has the Fast User Switching feature disabled: "The password for this account has expired"
Error message when you try to perform certain operations on the files that are located on a network share: "Access is Denied"
Ned here again. The cat is out of the bag now and we're a little more free to talk about DFSR features that are planned (not guaranteed - planned) to release with Windows Server 2008 R2. Our friends at the File Cabinet blog have posted an excellent writeup - definitely worth a look:
DFS Replication: What’s new in Windows Server™ 2008 R2
Here's the short and sweet list of areas that were added or improved:
You can try all these out in a test environment right now - hurry up and grab the ISOs before it's too late.
- Ned 'The Short Simpson' Pyle
Hi all, Mark from Directory Services again. This time I would like to talk about one of the many tools that we use in troubleshooting network issues. At times you may see errors such as "The RPC server is unavailable" or "There are no more endpoints available from the endpoint mapper" (these error messages can be DNS-related at times). You may also see an error code such as 1722 or 1753. So you ask, "What does the error code mean?" If you get an error such as 1722 or 1753, open a command prompt and type in the following:
net helpmsg <error number>
An example would be net helpmsg 1722; after hitting Enter you will get "The RPC server is unavailable". If the error is a hex number, it will not resolve using the net helpmsg command.
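If you script these lookups, the same mapping can be table-driven. Here is a tiny illustrative sketch; the two messages are the ones quoted in this post, and net helpmsg remains the authoritative source:

```python
# A hand-copied subset of Win32 error codes and the text that
# "net helpmsg" returns for them (taken from the examples in this post).
RPC_ERRORS = {
    1722: "The RPC server is unavailable.",
    1753: "There are no more endpoints available from the endpoint mapper.",
}

def help_msg(code):
    """Return the known message for a decimal error code, if we have it."""
    return RPC_ERRORS.get(code, "Unknown - run 'net helpmsg {0}'".format(code))

print(help_msg(1722))  # The RPC server is unavailable.
```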
Before we get into using portqry we need to understand a little bit more about RPC (Remote Procedure Call). When a server starts up, applications and services that are going to listen for connecting clients may register with the RPC Endpoint Mapper (EPM). EPM keeps a database of all these registrations and what ports they are listening on. Ports are used so that applications can communicate between client and server. By default EPM listens on port 135. When a client needs to know what port an application is listening on, it queries EPM to find out. This is much like going to the hospital to see a sick friend. The address of the hospital is the IP address of the server. Going in the door to the information desk to find out what room your friend is in is the same as querying EPM on port 135. Once you are in the room with your friend, you are connected to the application you wanted to talk to. Some ports, such as LDAP (389) and SMB (445), are hard coded so they are consistent across all Windows clients and servers; to see a list of common ports go to the KB article here. Applications may also register on what are referred to as ephemeral ports. An ephemeral port is a port above 1024 and less than 65536. Take LSASS (Local Security Authority Subsystem Service) for example: it will usually listen on a couple of ports just above 1024, such as 1025, 1026 or 1027. An example of the output is below:
Note: I am querying port 135 against a Domain Controller in my examples.
UUID stands for Universally Unique Identifier; no two applications should ever have the same UUID. The UUID is followed by an annotated name if one exists. On the second line is the protocol the application uses and the IP address of the server. Then in the brackets is the port number the application is listening on. The example above was taken from the LSASS service, but you can see multiple entries for a single application as well, such as this:
Now note that the UUIDs are the same and the annotated name is the same if one is present, but the protocol is different and now we have the server name path, as this is the named pipe instance. So how do we query the Endpoint Mapper to find out what applications are registered? Glad you asked! First we need to download the portqry command line tool from here, or the GUI tool from here. I prefer using the command line tool and piping the results to a text file. Once you have the tool downloaded and installed, open a command prompt. Let's say you have issues connecting to another server (call it Server1) across a WAN where there is a firewall in place. At the command prompt run the following command:
portqry -n server1 -e 135 > port135.txt & port135.txt
This will query the RPC EPM service (-e switch) on server1 (-n switch) and pipe the results to a text file. The use of the "&" and referring to the text file again will open the text file after the command is done. Now look at the text file. The first portion simply takes the server name that you are querying and resolves it to an IP address:
Querying target system called:
Attempting to resolve name to IP address...
Name resolved to 192.168.0.30
The next line is very critical:
TCP port 135 (epmap service): LISTENING
Querying Endpoint Mapper Database...
If something other than LISTENING is returned then there could be a problem with a port being blocked somewhere. If the port shows as FILTERED then a firewall or VLAN could be blocking that port; if the port returns NOT LISTENING then we got to the machine, but the machine is not listening on that port number. If you get a "LISTENING or FILTERED" response, check whether TCP or UDP was being queried; most likely it was UDP, and this is a normal response because UDP is connectionless. An example of this would be querying port 88 for Kerberos against a DC using the following syntax:
portqry -n server1 -e 88 -p both
TCP port 88 (kerberos service): LISTENING
UDP port 88 (kerberos service): LISTENING or FILTERED
By default we will only query the port on TCP. By using the -p switch we can tell the portqry tool which protocol we want to use; specifying both after -p tells the utility to query both TCP and UDP. Kerberos uses UDP by default, which is why I wanted to check both protocols in the example above.
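On the TCP side, the three results portqry reports can be approximated with plain socket code. This is only a rough sketch of the idea, not a reimplementation of portqry: a completed connection maps to LISTENING, an active refusal (a RST) to NOT LISTENING, and silence (a firewall dropping the SYN) to FILTERED.

```python
import socket

def check_tcp_port(host, port, timeout=3):
    """Rough analogue of portqry's TCP results:
    LISTENING     - the connection completed
    NOT LISTENING - the target answered but refused the connection (RST)
    FILTERED      - no answer at all, typically a firewall dropping the SYN
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "LISTENING"
    except ConnectionRefusedError:
        return "NOT LISTENING"
    except socket.timeout:
        return "FILTERED"
    finally:
        s.close()

# Demo against a throwaway local listener standing in for "server1":
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: the OS picks a free ephemeral port
listener.listen(1)
demo_port = listener.getsockname()[1]
print(check_tcp_port("127.0.0.1", demo_port))  # LISTENING
listener.close()
```

Note that UDP cannot be probed this way, which is exactly why portqry has to report "LISTENING or FILTERED" for UDP ports.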
Now back to querying the endpoint mapper. Let's say you have two Domain Controllers with a firewall separating them and they are not replicating. You believe the networking admin has all the ports open on the firewall. From one DC, query the other DC on port 135 and look at the output. Search the text file for lsass and note the UUID. Scroll up or down in the text file until you find the port number in brackets. Now run portqry again using that port:
portqry -n server1 -e 1025
Does it return LISTENING? If so, good; if it shows FILTERED then the port is blocked on the firewall. Keep searching for lsass, as you will find it listens on more than one port, usually two different ones. Once you find the second port, run the command again using the second port number and verify it is listening as well. You can also search the text file for FRS (File Replication Service) and query its port. If you query port 389 (LDAP) then you will get a stream of information from the DC, such as below:
currentdate: 12/16/2008 23:37:23 (unadjusted GMT)
subschemaSubentry: CN=Aggregate,CN=Schema,CN=Configuration,DC=domain,DC=com
dsServiceName: CN=NTDS Settings,CN=server1,CN=Servers,CN=Corp,CN=Sites,CN=Configuration,DC=domain,DC=com
namingContexts: DC=domain,DC=com
defaultNamingContext: DC=domain,DC=com
schemaNamingContext: CN=Schema,CN=Configuration,DC=domain,DC=com
configurationNamingContext: CN=Configuration,DC=domain,DC=com
rootDomainNamingContext: DC=domain,DC=com
supportedControl: 1.2.840.113518.104.22.1689
supportedLDAPVersion: 3
supportedLDAPPolicies: MaxPoolThreads
highestCommittedUSN: 1541417
supportedSASLMechanisms: GSSAPI
dnsHostName: server1.domain.com
ldapServiceName: domain.com:server1$@DOMAIN.COM
serverName: CN=SERVER1,CN=Servers,CN=Corp,CN=Sites,CN=Configuration,DC=domain,DC=com
supportedCapabilities: 1.2.840.113522.214.171.1240
isSynchronized: TRUE
isGlobalCatalogReady: TRUE
domainFunctionality: 2
forestFunctionality: 0
domainControllerFunctionality: 2
You can use the -p both switch with this command and it will query both UDP and TCP, returning the same information each time. Note that UDP will show up as "LISTENING or FILTERED" as discussed before. If you want to query multiple ports you can use a switch such as -o 88,389,445; all you have to do is separate the port numbers with commas and it will run against all the ports you specify, one right after another. If you want to do a range of ports, say between 1100 and 1105, use -r 1100:1105.
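If you wrap portqry in a script, expanding those -o and -r style port specifications is straightforward. A small hypothetical helper (not part of portqry itself):

```python
def ports_from_spec(spec):
    """Expand portqry-style port specs:
    "88,389,445" -> [88, 389, 445]     (like -o, comma-separated list)
    "1100:1105"  -> [1100, ..., 1105]  (like -r, inclusive range)
    """
    if ":" in spec:
        lo, hi = spec.split(":")
        return list(range(int(lo), int(hi) + 1))
    return [int(p) for p in spec.split(",")]

print(ports_from_spec("88,389,445"))  # [88, 389, 445]
print(ports_from_spec("1100:1105"))   # [1100, 1101, 1102, 1103, 1104, 1105]
```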
If you prefer to use a GUI-based tool instead of the command line, download the GUI version from the link above or here. Extract the files, then run portqueryui.exe to open the tool. Enter the IP address or server name in the first field; under Query Type you will notice a drop-down box for the service to query. These ports are defined in the config.xml file located in the same directory, which makes it easy to gather information against multiple commonly queried ports; the results are output to a window you can simply copy and paste into a text file for review later. The tool also allows you to manually enter the port numbers and protocols that you want to query.
You can use portqry to troubleshoot several different issues. I had a case where checking the NTFS permissions on a folder took up to 20 seconds to resolve the SID to a name. In a network trace we saw the traffic going from the server to a DC. Following the trace, we queried port 135 to find out what ports LSASS was listening on; it returned two different ports. In the trace we could see that the server would send a SYN packet to the first port and never get a response. It kept trying that port for almost 20 seconds, then failed over to the second port on the list and resolved the SID to a name. We ran portqry against the first port number and showed the customer that the port was being filtered on the VLAN, which he later confirmed was true. Another time we used portqry to confirm a problem with a trust between two domains, which is a common scenario for us.
Another note: you cannot run portqry on a server back to itself. Instead, run the following command on the server itself:
netstat -ano > netstat.txt & netstat.txt
This will list out the protocol (TCP or UDP), the local address, the foreign address (a remote server that has a connection to your server), its state, and the PID of the process that is listening locally. Here is a sample:
Proto Local Address Foreign Address State PID
TCP 0.0.0.0:53 0.0.0.0:0 LISTENING 1424
TCP 0.0.0.0:88 0.0.0.0:0 LISTENING 400
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 748
TCP 0.0.0.0:389 0.0.0.0:0 LISTENING 400
TCP 0.0.0.0:445 0.0.0.0:0 LISTENING 4
TCP 0.0.0.0:464 0.0.0.0:0 LISTENING 400
TCP 0.0.0.0:593 0.0.0.0:0 LISTENING 748
TCP 0.0.0.0:636 0.0.0.0:0 LISTENING 400
TCP 0.0.0.0:1025 0.0.0.0:0 LISTENING 400
TCP 0.0.0.0:1027 0.0.0.0:0 LISTENING 400
To find what process is listening, open Task Manager, go to View – Select Columns, put a check mark in the box for PID, then click OK. You can click on the PID column to sort it numerically, which makes looking up a number easier. In my example above, port 53 (DNS) is owned by PID 1424, which resolves to dns.exe; 400 is lsass.exe; 748 is svchost.exe (one of many svchost.exe instances); and so on.
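If you do this lookup often, parsing the netstat -ano output is easy to script. The sketch below runs against a few lines in the shape of the sample table above; mapping a PID to a process name is still a job for Task Manager or tasklist.

```python
# A few lines shaped like "netstat -ano" output (from the sample above).
netstat_output = """\
TCP 0.0.0.0:53 0.0.0.0:0 LISTENING 1424
TCP 0.0.0.0:88 0.0.0.0:0 LISTENING 400
TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 748
TCP 0.0.0.0:389 0.0.0.0:0 LISTENING 400
"""

def listening_ports_by_pid(text):
    """Return {pid: [ports]} for TCP entries in the LISTENING state."""
    result = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) == 5 and parts[0] == "TCP" and parts[3] == "LISTENING":
            port = int(parts[1].rsplit(":", 1)[1])  # strip the address, keep the port
            result.setdefault(int(parts[4]), []).append(port)
    return result

print(listening_ports_by_pid(netstat_output))
# {1424: [53], 400: [88, 389], 748: [135]}
```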
I hope I have not confused anyone too much, and I hope this helps in troubleshooting network issues. Thanks!
816103 HOW TO: Use Portqry to Troubleshoot Active Directory Connectivity Issues
Troubleshooting networks without Netmon (Ned Pyle has some good information here as well)
- Mark Ramey
Ned here. We’ve been asked a few times to show a picture of the DS Support teams that you work with everyday. I’m finally able to oblige you for my team courtesy of the outstanding SimpsonizeMe website, run by the good folks at Burger King. See if you recognize any names from support cases you’ve worked or blogs you’ve read.
The back row, from left to right: David Beach, David Fisher, Gary Mudgett, Jonathan Stephens, Rob Greene, Sean Ivey, Steve Taylor, Tait Neville. The front row, from left to right: Mike Stephens, Ned Pyle, Adam Conkle, Chris Cassidy.
And don’t worry, this didn’t affect any SLAs – our manager put it together on his own time. We tried to get him in here too, but his face was too hideous for the tool to read; it simply rejected him. Make sure you give the SimpsonizeMe site a try, it’s fun stuff.
- Ned “Mmmmm… Whopperrssss.” Pyle
Greetings DS blog readers, Todd here. I wanted to talk a little about the Negotiate security support provider (SSP) and how there are times when it will intentionally use NTLM rather than Kerberos. [And if that’s not interesting, keep reading anyway because there is a slick trick in here for network captures - Editor]
In a properly configured and functioning domain, when the Negotiate SSP is utilized and the client application resides on the same server it is accessing, Negotiate will choose NTLM instead of Kerberos. Microsoft Negotiate acts as an application layer between the Security Support Provider Interface (SSPI) and the other SSPs. When an application calls into SSPI to log on to a network, it can specify an SSP to process the request. If the application specifies Negotiate, Negotiate analyzes the request and picks the best SSP to handle it based on customer-configured security policy. Currently, the Negotiate security package selects between Kerberos and NTLM. Negotiate selects Kerberos unless it cannot be used by one of the systems involved in the authentication, the calling application did not provide sufficient information to use Kerberos, or the client and server are the same machine. For efficiency and security reasons, NTLM is chosen over Kerberos when the client and the server are the same machine. This behavior is by design.
How to reproduce the behavior
Note: The test computer must be part of a domain and on a routed network for the reproduction to work.
1. On your test computer, create a share.
2. Install a network capture utility (Netmon, Wireshark, etc.).
3. Add a network route to the test machine so that traffic whose source and destination are the same machine can be seen on the wire. Run the following command on the machine where you need to see the to-and-from traffic to itself: route add <IP Address of the server that you are on> <IP Address of default gateway of the server you are on>
This causes the server to send internal packets over the network that would ordinarily stay completely local and not be viewable in a network trace. The packets will just return to the test computer itself.
PLEASE NOTE: To remove the route when you are done, issue the command "route delete <IP Address of the server that you are on>"
Note that each packet going from the server to itself will appear twice ... once exiting the server on its way to the router, once returning from the router on its way back to the server. You can ‘post-process’ the capture to eliminate the duplicate packets (i.e. don't display the packets where the source Ethernet address matches the router).
In Netmon 3.x you can use a Display Filter constructed similar to:
HTTP and Ethernet.SourceAddress == 0x010203040506
For Wireshark you can use the following display filter:
(tcp.port == 80) && (eth.src == 01:02:03:04:05:06)
You will need to replace the 0x010203040506 or the 01:02:03:04:05:06 with the MAC address of your web server.
4. Start the network capture.
5. Access the share on the test computer from the test computer (i.e. from itself, to itself).
6. Stop the network capture.
7. Review the network trace.
You should see the following sequence in the trace:
i. A GET request
ii. An HTTP 401 Unauthorized response with:
…in the HTTP header
iii. A second GET request with NTLM authentication
Here is the workaround. In order to allow a client application to utilize Kerberos in this scenario, the following steps can be enacted.
1. Create an alias (CNAME) host record for the machine with a name that is unique in the forest. This can be accomplished using the DNS snap-in: expand the forward lookup zone for the domain which contains the web server, right-click on the domain name, and select New Alias (CNAME)…
2. Register SPNs for the alias on the computer object. You can use Setspn to register an SPN, formatted similar to: setspn -A HTTP/<alias_you_registered_in_DNS> <webservername>
3. When accessing the machine with a browser, use the alias. This will allow Kerberos to be utilized even if the client resides on the server.
References
http://msdn.microsoft.com/en-us/library/aa378748(VS.85).aspx
I hope this has been enlightening.