Blog - Title

August, 2010

• New DNS and AD DS BPAs released (or: the most accurate list of DNS recommendations you will ever find from Microsoft)

    Hi folks, Ned here again. We’ve released another wave of Best Practices Analyzer rules for Windows Server 2008 / R2, and if you care about Directory Services you care about these:

    AD DS rules update

    Info: Update for the AD DS Best Practices Analyzer rules in Windows Server 2008 R2
    Download: Rules Update for Active Directory Domain Services Best Practice Analyzer for Windows Server 2008 R2 x64 Editions (KB980360)

This BPA update for Active Directory Domain Services includes seven rule changes and updates, some of which are well known and a few that are not.

    DNS Analyzer 2.0

    Operation Info: Best Practices Analyzer for Domain Name System – Ops
    Configuration info: Best Practices Analyzer for Domain Name System - Config
    Download: Microsoft DNS (Domain Name System) Model for Microsoft Baseline Configuration Analyzer 2.0

    Remember when – a few weeks back – I wrote about recommended DNS configuration and I promised more info? Well here it is, in all its glory. Despite what you might have heard, misheard, remembered, or argued about, this is the official recommended list, written by the Product Group and appended/vetted/munged by Support. Which includes:

    Awww yeaaaahhh… just memorize that and you’ll win any "Microsoft recommended DNS" bar bets you can imagine. That’s the cool thing about this ongoing BPA project: not only do you get a tool that will check your work in later OS versions, but the valid documentation gets centralized.

    - Ned “Arren hates cowboys” Pyle

  • Fine-Grained Password Policy and “Urgent Replication”

    Hi folks, Ned here again. Today I discuss the so-called “urgent replication” of AD, specifically around Fine-Grained Password Policies.

    Some background

    If you’ve read the excellent guide on how AD Replication works, you have probably come across the section around so-called “urgent replication”:

    Certain important events trigger replication immediately, overriding existing change notification. Urgent replication is implemented immediately by using RPC/IP to notify replication partners that changes have occurred on a source domain controller. Urgent replication uses regular change notification between destination and source domain controller pairs that otherwise use change notification, but notification is sent immediately in response to urgent events instead of waiting the default period of 15 seconds.

• Assigning an account lockout, which a domain controller performs to prohibit a user from logging on after a certain number of failed attempts.
• Changing a Local Security Authority (LSA) secret, which is a secure form in which private data is stored by the LSA (for example, the password for a trust relationship).
• Changing the password on a domain controller computer account.
• Changing the relative identifier (known as a “RID”) master role owner, which is the single domain controller in a domain that assigns relative identifiers to all domain controllers in that domain.
• Changing the account lockout policy.
• Changing the domain password policy.

So as long as the connection between the DCs has Change Notification enabled, changing one of these special data types "urgently" replicates that change to immediately connected partners. Ordinarily this just means DCs in your own site, unless you have configured Inter-Site Change Notification on your Site Links. This is the part that confuses most folks: urgent replication isn't so much for security as for consistency. By default, these "urgent" changes might take a few hours or days to transitively reach outlying DCs, but maybe you don't care, because the end user experience will be consistent within every AD site.

Suspiciously absent from the documentation, though: Fine-Grained Password Policies. Does this mean that we didn't update this old article, or that FGPPs don't count for urgent replication? After all, FGPP has account and password policies out the wazoo; that's the whole point of them.

    Figuring it out

When I first thought about writing this article I figured I'd just look at the source code, get an answer, make a three-line blog post, and be on my way. Except that unlike me, you don't have that source code privilege, so that's not super helpful. Instead I'll show you how to determine the behavior yourself; it may be helpful in other scenarios someday.

    Let’s do this.

1. You will be making changes on the PDC Emulator DC. You will also need to pick out a DC that directly replicates inbound from the PDCE within the same AD Site. Obviously, you'd better create an FGPP in this test domain you are using; it doesn't need to be assigned to anyone. If you're using Windows Server 2008 R2 you can load up PowerShell and quickly create a password settings object with:

    import-module activedirectory

New-ADFineGrainedPasswordPolicy -Name "DomainUsersPSO" -Precedence 500 -ComplexityEnabled $true -Description "Test Domain Users Password Policy" -DisplayName "Domain Users PSO" -LockoutDuration "0.12:00:00" -LockoutObservationWindow "0.00:15:00" -LockoutThreshold 10

    2. Turn on Active Directory Diagnostic Event Logging for replication events on that downstream partner DC.

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics
    “5 Replication Events” [REG_DWORD] = 5
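
    If you'd rather script it than poke around in Regedit, here's a quick sketch you can run from an elevated PowerShell prompt on the downstream DC (remember to set it back when you're done):

    # Crank NTDS replication event logging up to maximum
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "5 Replication Events" /t REG_DWORD /d 5 /f

    # When testing is finished, return it to the default of 0
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics" /v "5 Replication Events" /t REG_DWORD /d 0 /f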

    3. Pick some trivial object on the PDCE to modify (I change a user’s "Description" attribute). Use repadmin.exe /showmeta to see what its current USN is for that description attribute:

    image
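
    If you haven't used /showmeta before, it just takes the object's distinguished name; the user below is a made-up example from a hypothetical contoso.com test domain:

    # Show per-attribute replication metadata (version, originating DC, USN) for a test user
    repadmin /showmeta "CN=Test User,OU=Staff,DC=contoso,DC=com"

    # Or narrow the output down to the attribute you care about
    repadmin /showmeta "CN=Test User,OU=Staff,DC=contoso,DC=com" | findstr /i description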

    4. Change the description. After 15 seconds the change replicates to the downstream partner DC:

    image

5. If you look in the Directory Services event log on my downstream server, you can also see the USN update as a 1364 event, from the old USN to the new one. So in my example above, the old USN was 692689 and the new one is 692706. There is also a 1412 event; more on that later. The event log also shows my USN vector rising exactly 15 seconds after the originating time:

    image

    Note: I am using Hyper-V guests here so I have perfect time sync. You may not be this fortunate in your lab. :)
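
    By the way, if you'd rather not click through Event Viewer for these, you can pull the same events on the downstream DC with PowerShell (a sketch; 1364 and 1412 are the event IDs discussed above):

    # Show recent 1364 and 1412 replication events from the Directory Service log
    Get-WinEvent -LogName "Directory Service" -MaxEvents 100 |
        Where-Object { $_.Id -eq 1364 -or $_.Id -eq 1412 } |
        Format-List TimeCreated, Id, Message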

    6. Now you change one of the known “urgent replication” settings. For example, the account lockout threshold:

    image

    7. Neato. This time you don’t get a 1364 event. You still get a 1412 event that has the right USN (so did the Description change previously, not that it matters). But where is the 1364?

    image

    You don’t get that event because the normal change notification process was bypassed and you’re in the “urgent replication” code path. This is the key indicator that you are using urgent replication, as there is no instrumentation for it. If you choose any of the various “urgent replication” data types and try this, they will all behave the same way.

    8. So now we’re pretty confident that getting a 1364 means normal replication and not getting one means urgent replication. So back to the original question – does FGPP follow urgent change replication? Find out: change an FGPP PSO to alter its settings for account lockout threshold.
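
    Since we created the PSO with PowerShell back in step 1, we can tweak it the same way (a sketch, assuming the same "DomainUsersPSO" name used earlier):

    # Bump the lockout threshold on the existing PSO and watch how the change replicates
    Set-ADFineGrainedPasswordPolicy -Identity "DomainUsersPSO" -LockoutThreshold 20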

    image

    image

As you can see, FGPP does not use urgent replication. It is treated just like everyone else and shows up roughly 15 seconds later.

    But whhhhyyyyy?

Back when Windows 2000 Active Directory was released we were paralyzed with fear that replication traffic would overwhelm networks with massive RPC data storms and everyone would hate us. So Win2000 DCs took 5 minutes for even intra-site replication to catch up between DCs. What we found was that replication was low bandwidth already and customers weren't changing that much data – but when they did change data, they wanted it on all DCs faster. So intra-site replication became 15 seconds in Win2003 and we started telling everyone through Support cases, MCS engagements, and PFE ADRAPs to turn on change notification inter-site also. This so-called "urgent replication" mechanism was designed to quickly catch up servers for "more important" changes. But since everything now happens in a few seconds it's mostly pointless overkill and urgent replication no longer gets any new lovin'.

    So there you go.

    - Ned “don’t forget to turn that logging off when you’re done” Pyle

  • AskDS is 0.03 Centuries Old Today

    Three years ago today the AskDS site published its first post and had its first commenter. In the meantime we’ve created 455 articles and we’re now ranked 6th in all of TechNet’s blogs, behind AskPerf, Office2010, MarkRussinovich, SBS, and HeyScriptingGuy. That’s a pretty amazing group to be lumped in with for traffic, I don’t mind saying. Especially Mark, he has incredible hair.

    Without your visits we wouldn’t be here to celebrate another weirdly composed Office Clipart birthday.

    image

    Thanks everyone,

    - Ned “and the rest of the AskDS contributors” Pyle

  • Friday Mail Sack: Scooter Edition

    Hi folks, Ned here again. It’s that time where we look back on the more interesting questions and comments the DS team got this week. Today we talk about FRS, AD Users and Computers, Load-Balancers, DFSR, DFSN, AD Schema extension, virtualization, and Scott Goad.

    Let’s ride!

    Question

    If you get a journal wrap when using FRS, there is an event 13568 like so:

    Event Type: Warning
    Event Source: NtFrs
    Event Category: None
Event ID: 13568
    Date: 12/12/2001
    Time: 2:03:32 PM
    User: N/A
    Computer: DC-01
    Description:
    The File Replication Service has detected that the replica set " 1 " is in JRNL_WRAP_ERROR.
    <snipped out>
    Setting the "Enable Journal Wrap Automatic Restore" registry parameter to 1 will cause the following recovery steps to be taken to automatically recover from this error state.

    But when I review KB292438 (Troubleshooting journal_wrap errors on Sysvol and DFS replica sets) it specifically states:

Important: Microsoft does not recommend that you use this registry setting, and it should not be used post-Windows 2000 SP3. Appropriate options to reduce journal wrap errors include:

    • Place the FRS-replicated content on less busy volumes.
    • Keep the FRS service running.
    • Avoid making changes to FRS-replicated content while the service is turned off.
    • Increase the USN journal size.

    So which is it?

    Answer

    The KB is correct, not the event log message. If you enable the registry setting you can get caught in a journal wrap recovery “loop” where the root cause keeps happening and getting fixed, but then happens again immediately and gets fixed, and so on: replication may sort of work – inconsistently – and you are just masking the greater problem. You should be fixing the real cause of the journal wraps.

    As to why this message is still there after 10 years and four operating systems? Inertia and our unwillingness to incur the test/localization cost of changing the event. When you have to rewrite something in all these regions and languages, the price really adds up. I am way more likely to get a bug fix from the product group that changes complex code than one that changes some text.

    Question

    I was wondering if it is intentional that the "attribute editor" tab is not visible when you use "Find" on an object in AD Users and Computers?

    Answer

    Ughh. Nope, that’s a known issue. Unfortunately for you, the business justification to fix it was not convincing. This happens in Win2008/Vista also and no Premier customer has ever put up a real struggle.

    However, you have another option: Use the “Find” in ADAC (aka AD Admin Center, aka DSAC.EXE). This lets you find and when you open those users, you will see the attribute editor property sheet. If everyone here hasn’t already figured it out, ADAC is the future due to its PowerShell integration and ADUC doesn’t appear to be getting any further love.

    Question

    Are there any issues with putting DC’s behind load-balancers?

    Answer

If you put a domain controller behind a load balancer you will often find that LDAP/S or Kerberos authentication fails. Keep in mind that an SPN can only be associated with one computer account, so Kerberos is going to go kaput. You will have to issue certificates manually to the domain controllers if you are trying to do LDAP/S connectivity, because the subject and subject alternative name need to match the DNS name of the load-balanced address.

    Domain controllers are load balanced already in that there are multiples of them. If you need to find a domain controller correctly your application should do a DCLocator or LDAP SRV record lookup like a proper citizen.
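
    For example, here are two quick ways to do that lookup yourself; nltest exercises the DCLocator process and nslookup queries the SRV records directly (contoso.com is just a placeholder domain):

    # Ask DCLocator for a domain controller
    nltest /dsgetdc:contoso.com

    # Query DNS for the domain controller SRV records
    nslookup -type=SRV _ldap._tcp.dc._msdcs.contoso.com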

    Answer courtesy of Rob “Sasquatch” Greene, our tame authentication yeti.

    image

    Question

    The documentation on DFSR's cross-file RDC is pretty unclear – do I need two Enterprise Edition servers or just one? Also, can you provide a bit more detail on what cross-file RDC does?

    Answer

    Just one of the two servers in a given partnership – i.e. replicating with DFSR connections – needs to be running Enterprise Edition in order to have both servers use cross-file RDC. Proof. There is no difference in DFSR in Standard Edition versus Enterprise Edition code; once the servers agree that at least one of them is Enterprise, both will use cross-file RDC. Otherwise, anytime you got a hotfix from us there’d be one for each edition, right? But there never are: http://support.microsoft.com/kb/968429 (and yes, this article has gotten a bit out of sync with reality, we’re working on that.)

    As for what Cross-File RDC does: if you are already familiar with normal Remote Differential Compression, you understand that it takes a staged and compressed copy of a file and creates MD-4 signatures based on “chunks” of files:

    image

    This means that when a file is altered (even in the middle), we can efficiently see which signatures changed and then just send along the matching data blocks. So a doc that’s 50MB that changes one paragraph only replicates a few KB. An overall SHA-1 hash is used for the entire file - to include attributes, security info, alternate data streams etc. - as a way to know that two files match perfectly or not. DFSR can also make signatures of signatures, up to 8 levels deep, to more efficiently handle very large changes in a big file.

    Cross-file RDC takes this slightly further: by using a special hidden sparse file (located in <drive>:\system volume information\dfsr\similaritytable_1) to track all these signatures, we can use other similar files that we already have to build our copy of a new file locally. Up to five of these similar files can be used. So if an upstream server says “I have file X and here are its RDC signatures”, we the downstream server can say “ah, I don’t have that file X. But I do have files Y and Z that have some of the same signatures, so I’ll grab data from them locally and save you having to transmit it to me over the wire.” Since files are often just copies of other files with a little modification, we gain a lot of over-the-wire efficiency and minimize bandwidth usage.

    Slick, eh?

    Question

    I’m seeing DFS namespace clients going out of site for referrals. I’ve been through this article “What can cause clients to be referred to unexpected targets.” Is there anything else I’m missing?

    Answer

    There has been an explosion of so-called “WAN optimizer” products in the past few years and it seems like everyone’s buying them. The devices can be very problematic to DFS namespace clients, as the devices tend to use Network Address Translation (NAT). This means that they change the IP header info on all your SMB packets to match the subnets of the appliance endpoints – and that means that when DFS tries to figure out your subnet to give you the nearest targets, it gets the subnet of the WAN appliance, not you. So you end up using DFS targets in a totally different site, defeating the purpose of DFS in the first place – a WAN de-optimizer. :)

    A double-sided network capture will show this very clearly – packets that leave one computer will arrive at your DFS root server with a completely different IP address. Reconfigure the WAN appliance not to do this or contact their vendor about other options.
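
    If you want a quick check from the client side before (or instead of) taking a double-sided capture, look at the referral the client actually received (a sketch; on older clients the command is dfsutil /pktinfo, while the Win7/2008 R2 dfsutil uses "dfsutil cache referral"):

    # Show the client's cached DFS referrals and which target is active
    dfsutil /pktinfo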

    Question

    I have created/purchased a product that will extend my active directory schema. Since it was not made or tested by Microsoft, I am understandably nervous that I am about to irrevocably destroy my AD universe. How can I test out the LDF file(s) that will be modifying my schema to ensure it is not going to ruin my weekend?

    Answer

    What you need is the free AD Schema Extension Conflict Analyzer. This script can be run anywhere you have installed PowerShell 2.0 and does not require you to use AD PowerShell (for all you late bloomers that have not yet rolled out Win7/R2).

    All you do is point this script at your LDF file(s) and your AD schema and let it decide how things look:

    set-executionpolicy unrestricted

    C:\temp\ADSchemaExtensionConflictAnalyzer.ps1 -inputfile D:\scratch\FooBarExtend-ned.ldf -outputfile results.txt

    image

    image

    image

    It will find syntax errors, mismatched attribute data types, conflicting objects, etc. plus give advice. Like here it warned me that my new attributes will be in the Global Catalog (in the “partial attribute set”). The script makes no changes to your production forest at all, but if you’re nervous anyway you can export your production schema with:

ldifde.exe -f myschema.ldf -d cn=schema,cn=configuration,dc=contoso,dc=com

    … and have the script just compare the two files (if you’re paying attention you’ll see it call LDIFDE in a separate console window already though. You big baby.).

    Question

    I <blah blah blah> Windows <blah blah blah> running on VMWare.

    Answer

    You must be made of money, Jack. You’re already paying us for the OS you’re running everywhere. Then instead of using our free hypervisor and way less expensive management system you’re paying someone else a bunch of dough.

    “But Ned, we want dynamic memory usage, Linux support, and instantaneous guest migration between hosts”.

    Ok:

    If you really want to give your CFO a coronary, try this link: http://www.microsoft.com/virtualization/en/us/cost-compare-calculator.aspx

    Then while the EMT’s are working on him to start his ticker back up, take out your CIO with this:

    Support policy for Microsoft software running in non-Microsoft hardware virtualization software
    http://support.microsoft.com/kb/897615/

    … Microsoft will support server operating systems subject to the Microsoft Support Lifecycle policy for its customers who have support agreements when the operating system runs virtualized on non-Microsoft hardware virtualization software. This support will include coordinating with the vendor to jointly investigate support issues. As part of the investigation, Microsoft may still require the issue to be reproduced independently from the non-Microsoft hardware virtualization software.

This is more common than you might think; we find VMware-only issues all the time and our customer is now up a creek. There are troubleshooting steps - especially with debugging - that we simply cannot do at all due to the VMware architecture. Hence why you will need to reproduce on physical hardware or Hyper-V, where we can gather data. Although when we find that it no longer repros off VMware… now what?

    And of course, when all those VMware ESX servers stopped working for 2 days last year, their workaround could not be performed on DCs as it involved rolling back time. I know that sounds like schadenfreude, but when a customer’s DCs all go offline, we get called in even if it’s nothing to do with us - just ask me how it was when McAfee and CA decided to delete core Windows files. Spoiler alert: it blows.

    I feel strongly about this…

     

    Finally, I want to welcome Scott Goad to our fold – you have probably noticed that the KB/Blog aggregations have started again. If you look carefully you’ll see that Scott has taken that over from Craig Landis, who has moved on to getting us better equipped to support ADFS 2.0. Scott used to be a cop and he also has been working on those podcast pieces with Russ.

    image
    Naturally, Office has clipart for that precise scenario

    Welcome Scooter and thanks for all the hard work Craig!

    - Ned “I’ll let you try my Clip-Tang style!” Pyle

  • Moving Your Organization from a Single Microsoft CA to a Microsoft Recommended PKI

    Hi, folks! Jonathan here again, and today I want to talk about what appears to be an increasingly common topic: migrating from a single Windows Certification Authority (CA) to a multi-tier hierarchy. I’m going to assume that you already have a basic understanding of Public Key Infrastructure (PKI) concepts, i. e., you know what a root CA is versus an issuing CA, and you understand that Microsoft CAs come in two flavors – Standalone and Enterprise. If you don’t know those things then I recommend that you take a look at this before proceeding.

    It seems that many organizations had installed a single Windows CA in order to support whatever major project that may have required it. Perhaps they were rolling out System Center Configuration Manager (SCCM), or wireless, or some other certificate consuming technology and one small line item in the project’s plan was Install a CA. Over time, though, this single CA began to see a lot of use as it was leveraged more and more for purposes other than originally conceived. Suddenly, there is a need for a proper Public Key Infrastructure (PKI) and administrators are facing some thorny questions:

    1. Can I install multiple PKIs in my forest without them interfering with each other?
    2. How do I set up my new PKI properly so that it is scalable and manageable?
    3. How do I get rid of my old CA without causing an interruption in my business?

    I’m here to tell you that you aren’t alone. There are many organizations in the same situation, and there are good answers to each of these questions. More importantly, I’m going to share those answers with you. Let’s get started, shall we?

    Important Note: This blog post does not address the private key archival scenario. Stay tuned for a future blog post on migrating archived private keys from one CA to another.

    Multiple PKIs In The Forest? Isn’t That Like Two Cats Fighting Over the Same Mouse?

    Uh….no.

    (You know, I actually considered asking Ned to find some Office clip art that showed two cats fighting over a mouse, and then thought, “What if he found it?!” I decided I didn’t really want to know and bagged the idea.)

    To be clear, there is absolutely no issue with installing multiple Windows root CAs in the same forest. You can deploy your new PKI and keep it from issuing certificates to your users or computers until you are good and ready for it to do so. And while you’re doing all this, the old CA will continue to chug along oblivious to the fact that it will soon be removed with extreme prejudice.

    Each Windows CA you install requires some objects created for it in Active Directory. If the CA is installed on a domain member these objects are created automatically. If, on the other hand, you install the CA on a workgroup computer that is disconnected from the network, you’ll have to create these objects yourself.

    Regardless, all of these objects exist under the following container in Active Directory:

    CN=Public Key Services, CN=Services, CN=Configuration, DC=<forestRootPartition>

    As you can see, these objects are located in the Configuration partition of Active Directory which explains why you have to be an Enterprise Admin in order to install a CA in the forest. The Public Key Services Container holds the following objects:

    CN=AIA Container

    AIA stands for Authority Information Access, and this container is the place where each CA will publish its own certificate for applications and services to find if needed. The AIA container holds certificationAuthority objects, one for each CA. The name of the object matches the canonical name of the CA itself.

    CN=CDP Container

    CDP stands for CRL Distribution Point (and CRL stands for Certificate Revocation List). This container is where each CA publishes its list of revoked certificates to Active Directory. In this container, you’ll find another container object whose common name matches the host name of the server on which Certificate Services is installed – one for each Windows CA in your forest. Within each server container is a cRLDistributionPoint object named for the CA itself. The actual CRL for the CA is published to this object.

    CN=Certificate Templates Container

    The Certificate Templates container holds a list of pKICertificateTemplate objects, each one representing one of the templates you see in the Certificate Templates MMC snap-in. Certificate templates are shared objects, meaning they can be used by any Enterprise CA in the forest. There is no CA-specific information stored on these objects.

    CN=Certification Authorities Container

    The Certification Authorities container holds a list of certificationAuthority objects representing each root CA trusted by the Enterprise. Any root CA certificate published here is distributed to each and every member of the forest as a trusted root. A Windows root CA installed on a domain server will publish its certificate here. If you install a root CA on a workgroup server you’ll have to publish the certificate here manually.

    CN=Enrollment Services

    The Enrollment Services container holds a list of pKIEnrollmentService objects, each one representing an Enterprise CA installed in the forest. The pKIEnrollmentService object is used by Windows clients to locate a CA capable of issuing certificates based on a particular template. When you add a certificate template to a CA via the Certification Authority snap-in, that CA’s pKIEnrollmentService object is updated to reflect the change.

Other Containers

There are a few other objects and containers in the Public Key Services container, but they are beyond the scope of this post. If you're really interested in the nitty-gritty details, post a comment and I'll address them in a future post.
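
    If you want to poke around these containers yourself, the AD PowerShell module (Win7/2008 R2 RSAT) makes it easy. A sketch:

    # Build the DN of the Public Key Services container from the Configuration partition
    Import-Module ActiveDirectory
    $configNC = (Get-ADRootDSE).configurationNamingContext
    $pkiPath  = "CN=Public Key Services,CN=Services,$configNC"

    # List everything beneath it - the AIA, CDP, Certificate Templates, Certification Authorities, and Enrollment Services objects
    Get-ADObject -SearchBase $pkiPath -SearchScope Subtree -Filter * |
        Sort-Object DistinguishedName |
        Format-Table Name, ObjectClass -AutoSize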

    To summarize, let’s look at a visual of each of these objects and containers and see how they fit together. I’ve diagrammed out an environment with three CAs. One is the Old And Busted CA, which has been tottering along for years ever since Bob the network admin put it up to issue certificates for wireless authentication.

    Now that Bob has moved onto new and exciting opportunities in the field of food preparation and grease trap maintenance after that unfortunate incident with the misconfigured VLANs, his successor, Mike, has decided to deploy a new, enterprise-worthy PKI.

    To that end, Mike has deployed the New Hotness Root CA, along with the More New Hotness Issuing CA. The New Hotness Root CA is an offline Standalone root, meaning it is running the Windows CA in Standalone mode on a workgroup server disconnected from the network. The New Hotness Issuing CA, however, is an online issuing CA. It’s running in Enterprise mode on a domain server.

    Let’s see what the AD objects for these CAs look like:

    clip_image001

    Figure 1: Sample PKI AD objects

    We’ve come an awful long way to emphasize one simple point. As you can see, each PKI-related object in Active Directory is uniquely named, either for the CA itself or the server on which the CA is installed. Because of this, you can install a (uniquely named) CA on every server in your environment and not run into the sort of conflict that some customers fear when I talk to them about this topic. You could also press your tongue against a metal pole in the dead of winter. Of course, it would hurt, and you’d look silly, but you could do it. Same concept applies here.

    So what’s the non-silly approach?

    The Non-Silly Approach

    If you need to migrate your organization from the Old And Busted CA to the New Hotness PKI, then the very first thing you should do is deploy the new PKI. This requires proper planning, of course; select your platform, locate your servers, that sort of thing. I encourage you to use a Windows Server 2008 R2 platform. WS08R2 CAs are supported with a minimum schema version of 30 which means you do not need to upgrade your Windows Server 2003 domain controllers. More details are here.

    Once your planning is complete, deploy your new PKI. Actual step-by-step guidance is beyond the scope of this blog post, but it is pretty well covered elsewhere. You should first take a look at the Best Practices for Implementing a Microsoft Windows Server 2003 Public Key Infrastructure. Yes, I realize this was for Windows Server 2003, but the concepts are identical for Windows Server 2008 and higher, and the scripts included in the Best Practices Guide are just as useful for the later platforms. It is also true that the Guide describes setting up a three tiered hierarchy, but again, you can easily adapt the prescriptive guidance to a two tiered hierarchy. If you want help with that then you should take a look at this post.

The major benefit to using Windows Server 2008 or higher is a neat little addition to the CAPolicy.INF file. When you install a new Enterprise CA, it is preconfigured with a set of default certificate templates for which it is immediately ready to start issuing certificates. You don't really want the CA to issue any certificates until you're good and ready for it to do so. If the Enterprise CA isn't configured with any templates by default, it won't issue any certificates after it starts up; when you're ready to switch over to the new PKI, you just configure the issuing CA with the appropriate templates. As of Windows Server 2008, you can install an Enterprise issuing CA so that the default certificate templates are not automatically configured on the CA. You accomplish this by adding a line to the CAPolicy.inf file:

    LoadDefaultTemplates=False
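
    For context, that line lives in the [certsrv_server] section. Here is a minimal sketch of what such a file might look like (the guide and blog post linked below cover the full syntax):

    [Version]
    Signature="$Windows NT$"

    [certsrv_server]
    LoadDefaultTemplates=False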

    Now, if at this point you’re wondering, “What is a CAPolicy.INF file, and how is it involved in setting up a CA,” then guess what? That is your clue that you need to read the Best Practices Guide linked above. It’s all in there, including samples.

    “Oh…but the samples are for Windows Server 2003,” you say, accusingly. Relax; here’s a blog post I wrote earlier fully documenting the Windows Server 2008 R2 CAPolicy.INF syntax. Again, the concepts and broad strokes are all the same; just some minor details have changed. Use my earlier post to supplement the Best Practices Guide and you’ll be golden.

    I Have My New PKI, So Now What?

    So you have your new PKI installed and you’re ready to migrate your organization over to it. How does one do that without impacting one’s organization too severely?

The first thing you'll want to do is prevent the old CA from issuing any new certificates. You could just uninstall it, of course, but that could cause considerable problems. What do you think would happen if that CA's published CRL expired and it wasn't around to publish a new one? Depending on the application using those certificates, they'd all fail to validate and become useless. Wireless clients would fail to connect, smart card users would fail to authenticate, and all sorts of other bad things would occur. The goal is to prevent any career-limiting outages, so you shouldn't just uninstall that CA.

    No, you should instead remove all the templates from the Certificate Templates folder using the Certification Authority MMC snap-in on the old CA. If an Enterprise CA isn’t configured with any templates it can’t issue any new certificates. On the other hand, it is still quite capable of refreshing its CRL, and this is exactly the behavior you want. Conversely, you’ll want to add those same templates you removed from the Old And Busted CA into the Certificate Templates folder on the New Hotness Issuing CA.
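
    If you prefer the command line to the snap-in, certutil can manage the template list too. A sketch, where the template name is just a placeholder:

    # On the Old And Busted CA: list the templates it currently offers
    certutil -CATemplates

    # Remove a template from the old CA (repeat for each template it offers)
    certutil -SetCATemplates -WirelessUser

    # On the New Hotness Issuing CA: add the same template
    certutil -SetCATemplates +WirelessUser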

    If you modify the contents of the Certificate Templates folder for a particular CA, that CA’s pKIEnrollmentService object must be updated in Active Directory. That means that you will have some latency as the changes replicate amongst your domain controllers. It is possible that some user in an outlying site will attempt to enroll for a certificate against the Old And Busted CA and that request will fail because the Old And Busted CA knows immediately that it should not issue any certificates. Given time, though, that error condition will fade as all domain controllers get the new changes. If you’re extremely sensitive to that kind of failure, however, then just add your templates to the New Hotness Issuing CA first, wait a day (or whatever your end-to-end replication latency is) and then remove those templates from the Old And Busted CA. In the long run, it won’t matter if the Old And Busted CA issues a few last minute certificates.

    At this point all certificate requests within your organization will be processed by the New Hotness Issuing CA, but what about all those certificates issued by the Old And Busted CA that are still in use? Do you have to manually go to each user and computer and request new certificates? Well…it depends on how the certificates were originally requested.

    Manually Requested

    If a certificate has been manually requested then, yes, in all likelihood you’ll need to manually update those certificates. I’m referring here to those certificates requested using the Certificates MMC snap-in, or through the Web Enrollment Pages. Unfortunately, there’s no automatic management for certificates requested manually. In reality, though, refreshing these certificates probably means changing some application or service so it knows to use the new certificate. I refer here specifically to Server Authentication certificates in IIS, OCS, SCCM, etc. Not only do you need to change the certificate, but you also need to reconfigure the application so it will use the new certificate. Given this situation, it makes sense to make your necessary changes gradually. Presumably, there is already a procedure in place for updating the certificates used by these applications I mentioned, among others I didn’t, as the current certificates expire. As time passes and each of these older, expiring certificates are replaced by new certificates issued by the new CA, you will gradually wean your organization off of the Old And Busted CA and onto the New Hotness Issuing CA. Once that is complete you can safely decommission the old CA.

    And it isn’t as though you don’t have a deadline. As soon as the Old And Busted CA certificate itself has expired you’ll know that any certificate ever issued by that CA has also expired. The Microsoft CA enforces such validity period nesting of certificates. Hopefully, though, that means that all those certificates have already been replaced, and you can finally decommission the old CA.

    Automatically Enrolled

Certificate Autoenrollment was introduced in Windows XP, and it allows the administrator to assign certificates based on a particular template to any number of forest users or computers. Triggered by the application of Group Policy, this component can enroll for certificates and renew them when they get old. Using Autoenrollment, one can easily deploy thousands of certificates very, very quickly. Surely, then, there must be an automated way to replace all those certificates issued by the previous CA?

    As a matter of fact, there is.

    As described above, the new PKI is up and ready to start issuing digital certificates. The old CA is still up and running, but all the templates have been removed from the Certificate Templates folder so it is no longer issuing any certificates. But you still have literally thousands of automatically enrolled certificates outstanding that need to be replaced. What do you do?

    In the Certificates Templates MMC snap-in, you’ll see a list of all the templates available in your enterprise. To force all holders of a particular certificate to automatically enroll for a replacement, all you need to do is right-click on the template and select Reenroll All Certificate Holders from the context menu.

    clip_image002

    What this actually does is increment the major version number of the certificate template in question. This change is detected by the Autoenrollment component on each Windows workstation and server prompting them to enroll for the updated template, replacing any certificate they may already have. Automatically enrolled user certificates are updated in the exact same fashion.

Now, how long it takes for each certificate holder to actually finish enrolling will depend on how many there are and how they connect to the network. For workstations that are connected directly to the network, user and computer certificates will be updated at the next Autoenrollment pulse.

    Note: For computers, the autoenrollment pulse fires at computer startup and every eight hours thereafter. For users, the autoenrollment pulse fires at user logon and every eight hours thereafter. You can manually trigger an autoenrollment pulse by running certutil -pulse from the command line. Certutil.exe is installed with the Windows Server 2003 Administrative Tools Pack on Windows XP, but it is installed by default on the other currently supported versions of Windows.

    For computers that only connect by VPN it may take longer for certificates to be updated. Unfortunately, there is no blinking light that says all the certificate holders have been reenrolled, so monitoring progress can be difficult. There are ways it could be done -- monitoring the certificates issued by the CA, using a script to check workstations and servers and verify that the certificates are issued from the new CA, etc. -- but they require some brain and brow work from the Administrator.

    There is one requirement for this reenrollment strategy to work. In the group policy setting where you enable Autoenrollment, you must have the following option selected: Update certificates that use certificate templates.

    clip_image003

    If this policy option is not enabled then your autoenrolled certificates will not be automatically refreshed.

    Remember, there are two autoenrollment policies -- one for the User Configuration and one for the Computer Configuration. This option must be selected in both locations in order to allow the Administrator to force both computers and users to reenroll for an updated template.

    But I Have to Get Rid of the Old CA!

As I've said earlier, once you've configured the Old And Busted CA so that it will no longer issue certificates, you shouldn't need to touch it again until all the certificates issued by that CA have expired. As long as the CA continues to publish a revocation list, all the certificates issued by that CA will remain valid until they can be replaced. But what if you want to decommission the Old And Busted CA immediately? How could you make sure that your outstanding certificates would remain viable until you can replace them with new certificates? Well, there is a way.

All X.509 digital certificates have a validity period, a defined interval of time with fixed start and end dates between which the certificate is considered valid unless it has been revoked. Once the certificate has expired there is no need to check with a certificate revocation list (CRL) -- the certificate is invalid regardless of its revocation status. Revocation lists also have a validity period during which time it is considered an authoritative list of revoked certificates. Once the CRL has expired it can no longer be used to check for revocation status; a client must retrieve a new CRL.

    You can use this to your advantage by extending the validity period of the Old And Busted CA’s CRL in the CA configuration to match (or exceed) the remaining lifetime of the CA certificate. For example, if the Old And Busted CA’s certificate will be valid for the next 4 years, 3 months, and 10 days, then you can set the publication interval for the CA’s CRL to 5 years and immediately publish it. The newly published CRL will remain valid for the next five years, and as long as you leave that CRL published in the defined CRL distribution points -- Active Directory and/or HTTP -- clients will continue to use it for checking revocation status. You no longer need the actual CA itself so you can uninstall it.
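
    Mechanically, that means raising the CRL publication interval on the old CA and publishing one final CRL before you uninstall. Roughly (a sketch, run on the Old And Busted CA):

    # Set the base CRL publication interval to 5 years
    certutil -setreg CA\CRLPeriodUnits 5
    certutil -setreg CA\CRLPeriod "Years"

    # Restart Certificate Services so the new interval is read, then publish a fresh CRL
    net stop certsvc
    net start certsvc
    certutil -CRL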

    One drawback to this, however, is that you won’t be able to easily add any certificates to the revocation list. If you need to revoke a certificate after you’ve decommissioned the CA, then you’ll need to use the command line utility certutil.exe.

    Certutil.exe -resign “Old And Busted CA.crl” +<serialNumber>

    Of course, this requires that you keep the private keys associated with the CA, so you’d better back up the CA’s keys before you uninstall the role.

    Conclusion

Wow…we've covered a lot of information here, so I'll try to boil all of it down to the most important points. First, yes, you can have multiple root CAs and even multiple PKIs in a single Active Directory forest. Because of the way the objects representing those CAs are named and stored, you couldn't possibly experience a conflict unless you tried to give more than one CA the same CA name.

    Second, once the new PKI is built you’ll want to configure your old CA so that it no longer issues certificates. That job will now belong to the issuing CA in your new PKI.

Third, the ease with which you can replace all the certificates issued by the old CA with certificates issued by your new CA will depend mainly on how the certificates were first deployed. If all of your old certificates were requested manually then you will need to replace them in the same way. The easiest way to do that is to replace them all gradually as they expire. On the other hand, if your old certificates were deployed via autoenrollment then you can trigger all of your autoenrollment clients to replace the old certificates with new ones from the new PKI. You can do this through the Certificate Templates MMC snap-in.

    And finally, what do you do with the old CA? Well, if you don’t need the equipment you can just keep it around until it either expires or all the old certificates have been replaced. If, however, you want to get rid of it immediately you can extend the lifetime of the old CA’s CRL to match the remaining validity period of the CA certificate. Just publish a new CRL and it’ll be good until all outstanding certificates have expired. Just keep in mind that this route will limit your ability to revoke those old certificates.

    If you think I missed something, or you want me to clarify a certain point, please feel free to post in the comments below.

    Jonathan “Man in Black” Stephens

    PS: Don’t ever challenge my Office clip art skills again, Jonathan.

    image

    - Ned

  • Friday Mail Sack: Mostly Edge Case Edition

    Hello all, Ned here again with this week’s conversations between AskDS and the rest of the world.  Today we talk Security, ADWS, FSMO upgrades, USMT, and why “Web 2.0 Internet” is still a poisonous wasteland of gross.

    Let’s do it to it.

    Question

    I am getting questions from my Security/Compliance/Audit/Management folks about what security settings we should be applying on XP/2003/2008/Vista/7. Are there Microsoft recommendations? Are there templates? Are there explanations of risk versus reward? Could some settings break things if I’m not careful? Can I get documentation in whitepaper and spreadsheet form? Do you also have these for Office 2007 and Internet Explorer? Can I compare to my current settings to find differences?

    [This is another of those “10 times a week” questions, like domain upgrade – Ned]

    Answer

    Yes, yes, yes, yes, yes, yes, and yes. Download the Microsoft Security Compliance Manager. This tool has all the previously scattered Microsoft security documentation in one centralized location, and it handles all of those questions. Microsoft provides comparison baselines for “Enterprise Configuration” (less secure, more functional) and “Specialized Security-Limited Functionality” (more secure, less usable) modes, within each Operating System. Those are further distinguished by role and hardware – desktops, laptops, domain controllers, member servers, users, and the domain itself.

    image

    So if you drill down into the settings and tabs of a given setting, you see more details, explanations, and reasoning on why you might want to choose something or not.

    image  image  image

    It also has further docs and allows you to completely export the settings as GPO, DCM, SCAP, INF, or Excel.

image  image

    It’s slick stuff. I think we got this right and the Internet’s “shotgun documentation” gets this wrong.

    Question

    Is it ok to have FSMO roles running on a mixture of operating systems? For example, a PDC Emulator on Windows Server 2003 and a Schema Master on Windows Server 2008?

    Answer

    Yes, it’s generally ok. The main issue people typically run into is that the PDCE is used to create special groups by certain components and if the PDC is not at that component’s OS level, the groups will not be created.

    For example, these groups will not get created until the PDCE role moves to a Win2008 or later DC:

• SID: S-1-5-21-<domain>-498
  Name: Enterprise Read-only Domain Controllers
  Description: A Universal group. Members of this group are Read-Only Domain Controllers in the enterprise
• SID: S-1-5-21-<domain>-521
  Name: Read-only Domain Controllers
  Description: A Global group. Members of this group are Read-Only Domain Controllers in the domain
• SID: S-1-5-32-569
  Name: BUILTIN\Cryptographic Operators
  Description: A Builtin Local group. Members are authorized to perform cryptographic operations.
• SID: S-1-5-21-<domain>-571
  Name: Allowed RODC Password Replication Group
  Description: A Domain Local group. Members in this group can have their passwords replicated to all read-only domain controllers in the domain.
• SID: S-1-5-21-<domain>-572
  Name: Denied RODC Password Replication Group
  Description: A Domain Local group. Members in this group cannot have their passwords replicated to any read-only domain controllers in the domain
• SID: S-1-5-32-573
  Name: BUILTIN\Event Log Readers
  Description: A Builtin Local group. Members of this group can read event logs from local machine.
• SID: S-1-5-32-574
  Name: BUILTIN\Certificate Service DCOM Access
  Description: A Builtin Local group. Members of this group are allowed to connect to Certification Authorities in the enterprise.

    And those groups not existing will prevent various Win2008/Vista/R2/7 components from being configured. From the most boring KB I ever had to re-write:

    243330  Well-known security identifiers in Windows operating systems - http://support.microsoft.com/default.aspx?scid=kb;EN-US;243330
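
    A quick way to check whether these groups exist yet in your own domain, using the AD PowerShell module (a sketch; note that the Enterprise RODC group lives in the forest root domain, so run the check there):

    Import-Module ActiveDirectory

    # The domain-relative RIDs that get created once the PDCE runs Win2008 or later
    $domainSid = (Get-ADDomain).DomainSID.Value
    foreach ($rid in 498, 521, 571, 572) {
        # Get-ADGroup throws an error for any group that has not been created yet
        Get-ADGroup -Identity "$domainSid-$rid" | Select-Object Name, SID
    }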

    I hesitate to ask why you wouldn’t want to move these FSMO roles to a newer OS though.

    Question

    Every time I boot my domain controller it logs this warning:

    Log Name:      Active Directory Web Services
    Source:        ADWS
    Date:          6/26/2010 10:20:22 PM
    Event ID:      1400
    Task Category: ADWS Certificate Events
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      mydc.contoso.com
    Description:
    Active Directory Web Services could not find a server certificate with the specified certificate name. A certificate is required to use SSL/TLS connections. To use SSL/TLS connections, verify that a valid server authentication certificate from a trusted Certificate Authority (CA) is installed on the machine. 
    Certificate name: mydc.contoso.com

    It otherwise works fine and I can use ADWS just fine. Do I care about this?

    Answer

    Only if you:

1. Think you have a valid Server Authentication certificate.
    2. Want to use SSL to connect to ADWS.

    By default Windows Server 2008 R2 DC’s will log this warning until they get issued a valid server certificate (which you get for free once you deploy an MS Enterprise PKI, by getting a Domain Controller certificate through auto-enrollment). Once that happens you will log a 1401 and never see this warning again.

    If you think you have the right certificate (and in this case, the customer thought he did - it had EKU of Server Authentication (1.3.6.1.5.5.7.3.1), the right SAN, and chained fine), compare it to a valid DC certificate issued by an MS CA. You can do all this in a test lab even if you’re not using our PKI by just creating a default PKI “next next next” style and examining an exported DC certificate. When we compared the exported certificates, we found that his 3rd-party issued cert was missing a Subject entry, unlike my own. We theorized that this might be it – the subject is not required for a cert to be valid, but any application can decide it’s important and it’s likely ADWS does.
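
    If you want to do the same comparison, certutil will dump what's in the DC's machine store so you can eyeball the Subject, SAN, and EKU fields (a sketch):

    # List the certificates in the local computer's Personal store
    certutil -store my

    # Dump one of them in detail (0 is the index shown in the listing above)
    certutil -store my 0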

    Question

    Seeing this error when doing a USMT 4.0 migration:

    [0x080000] HARDLINK: cannot find distributed store for d - cee6e189-2fd2-4210-b89a-810397ab3b7f[gle=0x00000002]
    [0x0802e3] SelectTransport: OpenDevice failed with Exception: Win32Exception: HARDLINK: cannot find all distributed stores.: There are no more files. [0x00000012] void __cdecl Mig::CMediaManager::SelectTransportInternal(int,unsigned int,struct Mig::IDeviceInitializationData *,int,int,int,unsigned __int64,class Mig::CDeviceProgressAdapter *)

    We have a C: and D: drive and when we run the migration we use these steps:

    1. Scanstate with hard-link for both drives.
    2. Delete the D: drive partition and extend out C: to use up that space.
    3. Run the loadstate.

    If we don’t delete the D: partition it works fine. I thought all the data was going into the hard-link store on “C:\store”?

    Answer

    Look closer. :) When you create a hard-link store and specify the store path, each volume gets its own hard-link store. Hard-links cannot cross volumes.

    For example:

    Scanstate /hardlink c:\USMTMIG […]

    Running this command on a system that contains the operating system on the C: drive and the user data on the D: drive will generate migration stores in the following locations:

    C:\USMTMIG\
    D:\USMTMIG\

    The store on C: is called the “main store” and the one on the other drive is called the “distributed store”. If you want to know more about the physicality and limits of the hard-link stores, review: http://technet.microsoft.com/en-us/library/dd560753(WS.10).aspx.

    Now, all is not lost – here are some options to get around this:

    1. You could not delete the partition (duh).

2. You could move all data from the other partition to your C: drive and then get rid of that partition before running scanstate.

    3. You could run the scanstate as before, then xcopy the D: drive store into the C: drive store, thereby preserving the data. For example:

    a. Scanstate with hard-link.

    b. Run:

      xcopy /s /e /h /k d:\store\* c:\store
      rd /s /q d:\store <-- this step optional. After all, you are deleting the partition later!

c. Delete the D: partition and extend C: like you were doing before.

    d. Run loadstate.

    There may be other issues here (after all, some application may have been pointing to files on D: and is now very angry) so make sure your plan takes that into consideration. You may need to pay a visit to <locationModify>.

    ===

    Finally

    The Black Hat Vegas USA 2010 folks have published their briefings and this one by Ivan Ristic from Qualys really struck me:

    State of SSL on the Internet: 2010 Survey, Results and Conclusions
    https://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf

    Some mind-blowingly disappointing interesting nuggets from their survey of 867,361 certificates being used by websites:

    • Only 37% of domains responded when SSL was attempted (the rest were all totally unencrypted)
    • 30% of SSL certificates failed validation (not trusted, not chained, invalid signature)
    • 50% of the certs supported insecure SSL v2 protocol
    • 56% of servers supported weak (less than 128-bit) ciphers

    Definitely read the whole presentation, it’s worth your time. Any questions, ask Jonathan Stephens.

    image
    Wooo, fancy hat. Looking sharp, Jonathan!

     

    That’s all folks, have a nice weekend.

    - Ned “I’m gonna pay for that one” Pyle

  • Friday Mail Sack: Delicious Ranch Dressing Edition

    Hello Earthlings, Ned here again with this week’s interesting conversations within Microsoft DS support. Today we talk DFSR, ILM, DFSN + VPNs, DNS, group expansion, Applocker, and friggin’ huge files.

    OG OG OG!

    Question

    I am running dsget.exe –user –memberof –expand on some new Windows Server 2008 R2 DC’s. It seems to be running very slowly in my very large domain. Win2003 has no issues doing the same exact commands.

    Answer

    Buggity. Grab this:

    980254  The "dsget user -memberof -expand" command returns incorrect results in Windows Server 2008 R2 and in Windows 7
    http://support.microsoft.com/default.aspx?scid=kb;EN-US;980254

    The bug makes it return all users and groups, no matter if they were related to your query. The “very large domain” made that slow.

    Question

    I am considering using DFSR on Windows Server 2008/R2 to replicate some really huge files – hundreds of GB. I know that this isn’t supported [the supported limit is 64GB – Ned]  but will it work? Some early testing shows that just copying these giant files manually with robocopy is faster than DFSR.

    Answer

    There’s a good reason not to go that large on files in DFSR: DFSR has a default RPC context inactivity timeout setting of 30 minutes. If replication of a file is  “inactive” in RPC after that 30 minutes, the session is torn down and DFSR starts over. Which means it may never finish if the file is big enough and the network is small enough. In Win2008/R2 you can modify this timeout to wait as long as 4 hours, by setting:

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\Settings
    RpcContextHandleTimeoutMs=14400000

    (This is a DWORD value and the number should be entered in decimal, not hex; it’s in milliseconds)
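
    If you want to set it from a script instead of Regedit, a quick sketch (the value is in milliseconds, entered in decimal; a DFSR service restart is typically needed for it to be read):

    # Allow DFSR RPC context handles to stay inactive for up to 4 hours
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\DFSR\Parameters\Settings" /v RpcContextHandleTimeoutMs /t REG_DWORD /d 14400000 /f

    # Restart DFSR so the new value takes effect
    net stop dfsr
    net start dfsr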

    Important note: We fixed this in Windows Server 2012, and the registry value is no longer needed. Now we correctly handle the situation of huge files without any intervention.

But how could the session be inactive, you ask? Because DFSR's RPC worker considers the period where RDC signatures are generated to be part of the timer – many minutes or even hours might be spent creating all the RDC signatures, and in the meantime DFSR will keep resetting itself. If you want to see how long signatures took on a ginormous file, examine the DFSR debug log for entries like this:

    20090924 13:42:51.421 2616 RDCX   467 StreamToIndex RDC generate begin: (0..6), uid:{01F12108-393C-49B3-835B-4B1214755EAF}-v20 gvsn:{01F12108-393C-49B3-835B-4B1214755EAF}-v34 fileName:D_SEBLDWIN7BLD22.vhd csId:{B1758DF8-51F6-4BB1-B5D7-A7256BF88CD1}
    20090924 16:16:09.880 2616 RDCX   509 StreamToIndex RDC generate end: (0..6), uid:{01F12108-393C-49B3-835B-4B1214755EAF}-v20 gvsn:{01F12108-393C-49B3-835B-4B1214755EAF}-v34 fileName:D_SEBLDWIN7BLD22.vhd csId:{B1758DF8-51F6-4BB1-B5D7-A7256BF88CD1}

    That was a 240GB file and it took roughly 1.5 hours. Yeowza.

    As a side note, robocopy is not always the fastest way to copy these huge files either. You can use tools that send data as unbuffered I/O which will make very large files fly over a network. In fact you have to be careful as you will quickly reach true network saturation (even in 1Gbps LAN’s) with these tools. Some Microsoft options would be ESEUTIL.EXE /Y and XCOPY.EXE /J (starting in Windows 7 and R2). Starting in Windows Server 2012, robocopy adds the /J option as well finally - woot.
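
    For example, an unbuffered copy of a big file looks like this (the /j switch only exists starting in Windows 7 / 2008 R2, and the paths are placeholders):

    # Copy a very large file over the network using unbuffered I/O
    xcopy /j /y D:\Images\HugeBuild.vhd \\branchserver\staging\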

    Let me also take a moment to get on my soapbox around DFSR and file copy speeds with a “30 second SME rant”:

It is a common misapprehension that DFSR is designed to be faster than raw file copying. It is not. It is designed to be very resilient on high-loss, error-prone networks and very efficient on low-bandwidth networks, all while keeping files in sync with loose convergence in a multi-master topology. On a LAN it will underperform compared to just copying files around with robocopy or xcopy. On a WAN with poor bandwidth and files using RDC/cross-file RDC it will greatly outperform file copy tools. Within a LAN you will see some performance improvement by turning off RDC on those connections, as it is faster to stage and copy the files without RDC signature computation or cross-file sparse file computation. But it will never, ever be faster than just copying specific files by hand on a LAN with no RDC involved. That’s not what it was designed to do. The same way that robocopy and xcopy were not designed to replicate 60KB of a 2GB file that was just modified, saving 99.997% of the bandwidth, all automatically between 1000 servers.

    If you just want to sync two computers on a LAN, robocopy /mot /mir <etc> is a faster solution. There are also a hundred other free and vendor tools out there. DFSR is a branch office product, not a shared disk cluster. If you have gigabit LAN connectivity, using DFSR is sometimes the wrong solution. Don’t try to make it be everything in your main office.
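For the curious, a basic mirror-and-keep-watching robocopy job between two shares might look like this (the share names are made up):

robocopy \\hub-fs01\data \\branch-fs01\data /MIR /MOT:5 /R:2 /W:5

/MIR mirrors the tree, deletions included, so point it carefully; /MOT:5 re-checks the source for changes every 5 minutes, and /R and /W keep the retries sane.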

    /end rantsmission

    Question

I’ve run into issues where VPN users stop being able to connect to DFS paths; they get “The network path was not found”. They can ping the servers without issues.

    Answer

One thing we have seen with VPN clients is cached credentials causing issues. You can check by logging on as the affected user and running this in a CMD prompt:

    CMDKEY.EXE /LIST

    If there are cached creds, you can use CMDKEY.EXE /DELETE to remove them individually. You can also flush the cache and disable it with:

    HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa
    Value Name: disabledomaincreds
    Value Type: REG_DWORD
    Value: 1

(And reboot. This is just for testing to isolate the issue, but some folks leave it like this forever to avoid any possibility of the issue recurring. It can also be configured via security policy under Computer Configuration \ <policies> \ Windows Settings \ Security Settings \ Local Policies \ Security Options \ Network access: Do not allow storage of credentials or .NET Passports for network authentication = ENABLED.)
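Putting it together, on the affected test client you might run something like this (the target name in the delete is just an example - use whatever CMDKEY /LIST actually showed you):

CMDKEY.EXE /LIST
CMDKEY.EXE /DELETE:fileserver01.contoso.com
REG ADD HKLM\System\CurrentControlSet\Control\Lsa /v disabledomaincreds /t REG_DWORD /d 1 /f

Then reboot and retest the DFS path.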

At some point the user saved mapped drive credentials in there with an old password, and there those creds have remained, unable to be updated thanks to the lack of frequent network access that led you to use a VPN in the first place. Bleh. Go use DirectAccess, VPN’s are gross.

    Question

    I have an ILM question around---

    Answer

    Stop right there bub! Despite the fact that AD and ILM/FIM are both identity products, the latter is supported by a different group. They haven’t gotten an official blog off the ground, but there are some good sites here:

    http://blogs.technet.com/b/doittoit
    http://blogs.msdn.com/b/imex
    http://blogs.technet.com/b/shawnrab

    And forums are here:

    http://social.technet.microsoft.com/Forums/en-US/identitylifecyclemanager/threads

    And you can open a support case too, naturally.

    Question

    I’m using a third party monitoring tool that states that my _msdcs zone has an SRV record for a global catalog that does not correspond with any of the known global catalogs that serve the forest. I've looked up the GC SRV records in the root domain and found a pretty long list (the forest contains 100 global catalogs). Unfortunately my tool doesn’t state which SRV record appears to be stale.

    How can I get a list of all the GC SRV records and compare them to my list of GC’s?

    Answer

    To figure this out, I’d:

1.    Dump out the list of all the GC SRV records using DNSCMD.EXE /ZoneExport <zonename> <somefile>.
2.    Dump out a list of all the DC’s in the forest (lots of ways to do this; for example, in DSA.MSC you can just right-click the Domain Controllers OU in each domain and choose Export List. You could also use a variety of command-line tools, AD PowerShell, joeware, etc. - see the sketch after these steps).
3.    Drop both of those lists into their own columns in Excel, so I’m left with all the servers in DNS and all the servers in AD side by side. Then I can sort and easily spot any missing or extra records.

[screenshot: DNS records and DC list compared side by side in Excel]
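By the way, if you’d rather skip the GUI export in step 2, a quick alternative from a DC or a box with the AD PowerShell module is below (the zone name is just an example; the zone export file typically lands in %systemroot%\system32\dns):

dnscmd.exe /ZoneExport _msdcs.contoso.com msdcs-export.txt
Import-Module ActiveDirectory
(Get-ADForest).GlobalCatalogs | Sort-Object | Out-File .\gcs-from-ad.txt

Then compare the SRV record targets in the export against gcs-from-ad.txt in Excel as described above.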

    Long term, you should ask the vendor to alter this monitor so that it returns the record(s) that tripped its rule. A monitoring tool should never raise more questions than it answers.

    Question

I’ve been using AppLocker to control which programs my users can run on some terminal servers. I just found that one of the applications I need to allow/block is an MMC snap-in not included in Windows. I can’t just block/allow MMC.EXE, because some of the other snap-ins get used by everyone. What can I do here?

    Answer

Group Policy supports allowing and restricting snap-ins, but only for the specific set that ships in the OS. You can use this to your advantage here.

    1. For all non-“special snap-in” users, you could configure this GP:

    User Configuration->Administrative Templates->Windows Components->Microsoft Management Console
Restrict users to the explicitly permitted list of snap-ins = ENABLED

2. Then, for that same large population of users, configure the list of snap-ins you want them to be able to load; everything else will be implicitly denied, including the third-party one (most standard users do not need access to many snap-ins anyway; they are mainly for admin use). To add those exceptions for your users, you would modify here as needed:

    User Configuration->Administrative Templates->Windows Components->Microsoft Management Console -> Restricted/Permitted Snapins
    <all the goo in here>

[screenshot: Restricted/Permitted snap-ins policy list]

    Here I turned on the restriction. When I try to run this ‘3rd party’ snap-in that’s not in my list, I get:

[screenshot: MMC refusing to load the restricted snap-in]

    Nice.
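If you want to confirm the restriction actually landed on the terminal server, log on as one of those users and run gpresult, or peek at the registry location the policy writes to (value name quoted from memory, so verify it in your own environment):

gpresult /h mmc-policy.html
reg query HKCU\Software\Policies\Microsoft\MMC /v RestrictToPermittedSnapins

A data value of 1 means the “explicitly permitted list” restriction is in effect for that user.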

    Other 

    Mike O’Reilly took umbrage with my iceberg comments last week. Not because it wasn’t true, but because he’s sad over the limited amount of Newfoundland iceberg vodka made this year. I bet it goes great with Jiggs dinner, fish and brewis, and seal flipper pie. Newfoundland’s motto: “All of our cuisine is based on a drunken bet.”

    Finally, this is what happens when you leave a nice vegetable tray out for a minute in my house. Note how the ranch dressing container appears to have been professionally cleaned. Also note how the healthy veggie bits were left completely unmolested.

[photo: the ravaged snack tray]

    It’s a mystery…

[photo]

    Have a great weekend

    - Ned “Hidden Valley” Pyle

  • New Directory Services Content 8/8-8/14

    KB Articles

We have quite a few KB articles this week, so I would like to point out that some of these articles may be updates to existing content rather than brand new. We are working to shore up the reporting for a better view of new content in a given period of time.

968257 - How to upgrade Windows Vista to Windows 7 if you have AD LDS installed

973678 - Replication between the ADAM database and Active Directory Lightweight Directory Services (AD LDS) fails in a workgroup that contains a Windows Server 2003 SP2-based computer

977377 - Microsoft Security Advisory: Vulnerability in TLS/SSL could allow spoofing

978886 - MS10-058: Vulnerabilities in TCP/IP could allow elevation of privilege

980436 - MS10-049: Vulnerabilities in SChannel could allow remote code execution

981852 - MS10-047: Vulnerabilities in Windows Kernel could allow elevation of privilege

981997 - MS10-050: Vulnerability in Movie Maker could allow remote code execution

982214 - MS10-054: Vulnerabilities in SMB Server could allow remote code execution

982381 - MS10-035: Cumulative security update for Internet Explorer

982665 - MS10-055: Vulnerability in Cinepak codec could allow remote code execution

982799 - MS10-059: Vulnerabilities in the Tracing Feature for Services could allow an elevation of privilege

2115168 - MS10-052: Vulnerability in Microsoft MPEG Layer-3 codecs could allow remote code execution

2160329 - MS10-048: Vulnerabilities in Windows kernel-mode drivers could allow elevation of privilege

2171571 - You incorrectly receive an error message when you join a computer that is running Windows 7 or Windows Server 2008 R2 to a Samba 3-based domain

2183461 - MS10-053: Cumulative security update for Internet Explorer

2215778 - The RODCs are not included in a response to a DFS referral request from a computer that is running Windows Server 2003 SP2

2254265 - The "500" error code is returned when you send an HTTP SOAP request to the "/adfs/services/trust/mex" endpoint on a computer that is running Windows Server 2008 R2 or Windows Server 2008

2254754 - You experience a GPO report-generation issue in the GPMC window when you try to generate the report in a localized version of Windows 7 or of Windows Server 2008 R2

2257912 - The Lsass.exe process crashes on a computer that is running a 64-bit version of Windows Server 2003 SP2

2258620 - You cannot find the "Find Now," "Stop," and "Clear All" buttons in the GPMC snap-in on a computer that is running Windows 7 or Windows Server 2008 R2

2261826 - You cannot find a network drive in the "Browse For Folder" dialog box in the GPMC MMC snap-in on a computer that is running Windows Server 2008 or Windows Vista

2264072 - Microsoft Security Advisory: Elevation of privilege using Windows service isolation bypass

2274102 - An application that uses DES encryption for Kerberos authentication cannot run on a Windows XP-based client computer in a Windows Server 2008 domain

2275315 - You cannot read the GPO in the SYSVOL directory in Windows 7 or in Windows Server 2008 R2 if you enable the "Deny write" permission of the GPO

2275950 - An error occurs when you try to establish SSL connections to the nodes by using the alias name from an LDAPS client computer that is running Windows 7 or Windows Server 2008 R2

2276597 - "LDAP_AUTH_UNKNOWN (0x56)" error code occurs when you call the "ldap_set_option" function in Windows 7 or in Windows Server 2008 R2 if you use the "LDAP_OPT_SASL_METHOD" session option

2280699 - All remote PowerShell operations fail together with the "E_ACCESSDENIED" error message when you use the CredSSP in a remote PowerShell session in Windows 7 or in Windows Server 2008 R2

2282241 - An error occurs when you use the alias name from an LDAP client computer that is running Windows Vista or Windows Server 2008 to try to establish SSL connections to nodes that host the LDAP service

2284538 - "Apply once and do not reapply" Group Policy setting is never applied after the first GPO deployment fails on a client computer that is running Windows 7 or Windows Server 2008 R2

2285823 - The DFS Namespace service becomes inaccessible if the domain controller that plays the Inter-Site Topology Generator (ISTG) role is down on a Windows Server 2008 R2-based computer

2285835 - An outgoing replication backlog occurs after you convert a read/write replicated folder to a read-only replicated folder in Windows Server 2008 R2

2286715 - A SYSVOL share migration from FRS to the DFS Replication service stops responding at the Start state in Windows Server 2008

2288059 - The Net Logon service does not start in Windows Server 2003 after you restart the computer

    Blogs

    Friday Mail Sack: Mostly Edge Case Edition

    Using AD Recycle Bin to restore deleted DNS zones and their contents in Windows Server 2008 R2

    Using Group Policy to Deploy a Windows PowerShell Logon Script

    Using PowerShell to Transfer FSMO Roles