Blog - Title

April, 2012

  • Friday Mail Sack: Drop the dope, hippy! edition

Hi all, Ned here again with an actual back-to-back mail sack. This week we discuss:


    I was reading an article that showed how to update the computer description every time a user logs on. A commenter mentioned that people should be careful as the environment could run out of USNs if this was implemented. Is that true?


This was a really interesting question. The current USN is a 64-bit counter maintained by each Active Directory domain controller as the highestCommittedUsn attribute on rootDSE. Being an unsigned 64-bit integer, that means 2^64 - 1, which is 18,446,744,073,709,551,615 (i.e. 18 quintillion). Under normal use that is never going to run out. Even more, when AD reaches that top number, it would restart at 1 all over again!

Let's say I want to run out of USNs though, so I create a script that makes 100 object write updates per second on a DC. It would take me nearly four months to hit the first billionth USN. At that rate, I am adding ~3.2 billion USN changes a year, which means it would take almost 6 billion years to run out on that DC. Which is probably longer than your hardware warranty.

My further thought was around version metadata, which we don't document anywhere I can find. That is an unsigned 32-bit counter for each attribute on an object and, again, so huge that it is simply not feasible it would run out in anything approaching normal circumstances. If you were to update a computer’s description every time a user logged on and they only had one computer, at 2^32 - 1 that means they would have to log on 4,294,967,295 times to run out. Let’s say they log on in the morning and always log off for bathroom, coffee, meeting, and lunch breaks rather than locking their machines – call it 10 logons a day and 250 working days a year. That is still 1.7 million years before they run out and you need to disjoin, rename, and rejoin their computer so they can start again.
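If you want to check the arithmetic yourself, the whole back-of-the-envelope exercise fits in a few lines of Python (the write rate and logon habits are the hypothetical figures from above):

```python
# Sanity-check the counter-exhaustion math above.
USN_MAX = 2**64 - 1        # highestCommittedUsn is an unsigned 64-bit counter
VERSION_MAX = 2**32 - 1    # per-attribute version metadata is unsigned 32-bit

writes_per_second = 100    # hypothetical abusive script
writes_per_year = writes_per_second * 86400 * 365
years_to_exhaust_usn = USN_MAX / writes_per_year
print(f"~{writes_per_year / 1e9:.1f} billion USNs per year")          # ~3.2
print(f"~{years_to_exhaust_usn / 1e9:.1f} billion years to run out")  # ~5.8

logons_per_year = 10 * 250  # 10 logons a day, 250 working days a year
print(f"~{VERSION_MAX / logons_per_year / 1e6:.1f} million years of logons")  # ~1.7
```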

    That said - the commenter was a bit off about the facts, but he had the right notion: not re-writing attributes with unchanged data is definitely a good idea. Less spurious work is always the right answer for DC performance and replication. Figure out a less invasive way to do this, or even better, use a product like System Center Config Manager; it has built in functionality to determine the “primary user” of computers, involving auditing and some other heuristics. This is part of its “Asset Intelligence” reporting (maybe called something else in SCCM 2012).

Interesting side effect of this conversation: I was testing all this out with NTDSUTIL auth restores and setting the version artificially high on an object with VERINC. Repadmin /showmeta gets upset once your version crosses the 2^31 line. :) See for yourself (in a lab only, please). If you ever find yourself in that predicament, use LDP's metadata displayer, it keeps right on trucking.

    Maybe a li'l ol' casting issue here

    Ahh, that's better. Get out the hex converter.
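The symptom is consistent with a signed 32-bit cast somewhere in the display path; here's a quick Python illustration of what happens to a version number past 2^31 (an illustration of the symptom only, not repadmin's actual code):

```python
import struct

version = 2**31 + 5  # an artificially VERINC'd version past the signed boundary

# Reinterpret the unsigned 32-bit value as signed, the way a display
# routine using a signed integer format would show it.
signed_view = struct.unpack("<i", struct.pack("<I", version))[0]

print(hex(version))  # 0x80000005
print(signed_view)   # -2147483643 (hence the scary negative number)
```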


    I find replication to be faster with RDC disabled on my LAN connected servers (hmmm, just like your blog said), so I have it disabled on the connections between my hub servers and the other servers on the same LAN. I have other servers connected over a WAN, so I kept RDC enabled on those connections.

    By having some connections with RDC enabled and others disabled, am I making my hub server do ‘twice’ the work? Would it be better if I enabled it on all connections, even the LAN ones?


    You aren’t making your servers do things twice, per se; more like doing the same things, then one does a little more.

Consider a change made on the hub: it still stages the same file once, compresses it in staging once, creates RDC signatures for it once, and sends the overall calculated SHA-1 file hash to each server once. The only difference is that one spoke server then receives the whole file while the other spoke does the RDC version vector and signature chunk dance to receive part of the file.

    The non-RDC LAN-based communication will still be more efficient and fast within its context, and the WAN will still get less utilization and faster performance for large files with small changes.


I'm trying to get Network Policy Server (RADIUS) to work in my environment to enable WPA2 authentication from a slick new wireless device. I keep getting the error "There is no domain controller available for domain CONTOSO.COM" in the event log when I try to authenticate; CONTOSO.COM is our legacy dotted NetBIOS domain name. On a hunch, I created a subdomain without a dot in the NetBIOS name and was able to authenticate right away with any user from that subdomain. Do you have any tricks or advice on how to deal with NPS in a dotted domain running in native Windows 2008 R2 mode, other than renaming it (yuck)?


    I don't even know how to spell NPS (it's supported by our Networking team) but I found this internal article from them. You are not going to like the answer:

Previous versions of IAS/NPS could not perform SPN lookups across domains because they treated the SPN as a string and not an FQDN. Windows Server 2008 R2 corrected that behavior, but now NPS treats a dotted NetBIOS name as an FQDN and performs a DNS lookup on the CONTOSO.COM name. This fails because DNS does not host a CONTOSO.COM zone.

    That leaves you with three main solutions:

    • Rename your domain using rendom.exe
    • Migrate your domain using ADMT
    • Use a Windows Server 2008 NPS

    There might be some other workaround - this would be an extremely corner case scenario and I doubt we've explored it deeply.

    The third solution is an ok short-term workaround, but Win2008 isn’t going to be supported forever and you might need some R2 features in the meantime. The first two are gnarly, but I gotta tell ya: no one is rigorously testing dotted NetBIOS names anymore, as they were only possible from NT 4.0 domain upgrades and are as rare as an honest politician. They are ticking time bombs. A variety of other applications and products fail when trying to use dotted NetBIOS domain names and they might not have a workaround. A domain rename is probably in your future, and it's for the best.


    We are using USMT 4.0 to migrate data with the merge script sourcepriority option to always overwrite data on the destination with data from the source. No matter what though, the destination always wins and the source copy of the file is renamed with the losing (1) tag.


    This turned out to be quite an adventure.

    We turned on migdiag logging using SET MIG_ENABLE_DIAG=migdiag.xml in order to see what was happening here; that's a great logging option for figuring out why your rules aren’t processing correctly. When it got to the file in question during loadstate, we saw this weirdness:

    <Pattern Type="File" Path="C:\Users\someuser\AppData\Local\Microsoft\Windows Sidebar [Settings.ini]" Operation="DynamicMerge,&lt;unknown&gt;"/>

    Normally, it should have looked like:

    <Pattern Type="File" Path="C:\Users\someuser\AppData\Roaming\Microsoft\Access\* [*]" Operation="DynamicMerge,CMXEMerge,CMXEMergeScript,MigXmlHelper,SourcePriority"/>

More interestingly, none of us could reproduce the issue here using the customer's exact same XML file. Finally, I had him reinstall USMT from a freshly downloaded copy of the WAIK, and it all started working perfectly. I've done this a few times in the past with good results for these kinds of weirdo issues; since USMT cannot be installed on Windows XP, it just gets copied around as folders. Sometimes people start mixing in various versions and DLLs from Beta, RC, and hotfixes, and you end up with something that looks like USMT - but ain't.


    Is teaming network adapters on Domain Controllers supported by Microsoft? I found KB


    (Updated) Maybe! :-D We're still in beta and need to get a final word. Sharp-eyed readers know I was already asked this before. However, I have a new answer for Windows Server: yes, if you use Windows Server "8" Beta.


    Whoa, we joined the 1990s! Seriously though, NIC teaming is the bane of our Networking Support group's existence, so hopefully by creating and implementing our own driver system, we stop the pain customers have using third party solutions of variable quality. At least we'll be able to see what's wrong now if it doesn’t work.

    For a lot more info, grab the whitepaper. I'm confirming the whole DC-specific aspect here as well. I have heard several stories now and I want to be nice and crisp; check back later. :)


What are the DFSR files $db_dirty$, $db_normal$, and $db_lost$ mentioned in the KB article Virus scanning recommendations for Enterprise computers that are running currently supported versions of Windows? I only see $db_normal$ on my servers (presumably that's a good thing).


    $Db_dirty$ exists after a dirty database shutdown and acts as a marker of that fact. $Db_normal$ exists when there are no database issues and is renamed to $db_lost$ if the database goes missing, also acting as a state marker for DFSR between service restarts.


    Where is the best place to learn more about MaxConcurrentAPI?


    Right here, and only quite recently:

    Not a question (new DFSR functionality in KB 2663685)

    If you missed it, we released a new hotfix for DFSR last month that adds some long-sought functionality for file server administrators: the ability to prevent DFSR from non-authoritatively synchronizing replicated folders on a volume where the database suffered a dirty shutdown:

    Changes that are not replicated to a downstream server are lost on the upstream server after an automatic recovery process occurs in a DFS Replication environment in Windows Server 2008 R2 -

    DFSR now provides the capability to override automatic replication recovery of dirty shutdown-flagged databases. By default, the following registry DWORD value exists:


    StopReplicationOnAutoRecovery = 1

    If set to 1, auto recovery is blocked and requires administrative intervention. Set it to 0 to return to the old behavior.
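For reference, the KB places that value under the DFSR service's Parameters key; the expected layout looks like this (double-check the path against KB 2663685 before deploying):

```
HKLM\SYSTEM\CurrentControlSet\Services\DFSR\Parameters
    StopReplicationOnAutoRecovery  (REG_DWORD) = 1
```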

DFSR writes a warning event 2213 to the DFSR event log:




    The DFS Replication service stopped replication on volume %2.

    This occurs when a DFSR JET database is not shut down cleanly and Auto Recovery is disabled. To resolve this issue, back up the files in the affected replicated folders, and then use the ResumeReplication WMI method to resume replication.

    Additional Information:

    Volume: %2

    GUID: %1


    Recovery Steps


    1. Back up the files in all replicated folders on the volume. Failure to do so may result in data loss due to unexpected conflict resolution during the recovery of the replicated folders.


    2. To resume the replication for this volume, use the WMI method ResumeReplication of the VolumeConfig class.

    For example, from an elevated command prompt, type the following command:


    wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="%1" call ResumeReplication


    For more information, see

You must then decide whether to resume replication, weighing that decision against your environment:

    • Are there originating files or modifications on this server? You can use the DFSRDIAG BACKLOG command with this server as the sending member and each of its partners as the receiving member to determine if this server had any pending outbound replication.
• Do you need an out-of-band backup? You can check your latest backup logs and compare them to the file contents to see if you should back up the RFs first.
    • Are the replicated folders read-only? If so, there is little reason to examine the server further and you can resume replication. It is impossible for the RO RFs to have originated changes in that case.

    You then have several options:

• Resume replication. By executing the WMI method listed in the event, the database rebuild commences for all replicated folders on that volume. If the database cannot be rebuilt gracefully, DFSR deletes the database and performs an initial non-authoritative sync. All local data in those replicated folders is fenced to lose conflict resolution: any files that do not match the SHA-1 hash of the upstream servers move to the circular ConflictAndDeleted folder and are, potentially, lost forever.


    Wmic /namespace:\\root\microsoftdfs path dfsrVolumeConfig where volumeGuid="<some GUID>" call ResumeReplication

    • Reconfigure replication on RFs to be authoritative. If the data is more up to date on the non-replicating RFs or the RFs are designed to originate data (such as Branch servers replicating back to a central hub for backups), you must manually reconfigure replication to force them to win.

    Other Stuff

    Holy crap! is a great way to find new music; I highly recommend it. It can get a little esoteric, though. Real radio will never find you a string duo that plays Guns and Roses songs, for example.


    AskDS reader Joseph Moody sent this along to us:

    "Because I got tired of forwarding the Accelerating Your IT Career post to techs in our department, we just had it printed poster size and hung it on an open wall. Now, I just point to it when someone asks how to get better."


    My wife wanted to be a marine biologist (like George Costanza!) when she was growing up and we got on a killer whale conversation last week when I was watching the amazing Discovery Frozen Planet series. She later sent me this tidbit:

    "First, the young whale spit regurgitated fish onto the surface of the water, then sank below the water and waited.

    If a hungry gull landed on the water, the whale would surge up to the surface, sometimes catching a free meal of his own. Noonan watched as the same whale set the same trap again and again. Within a few months, the whale's younger half brother adopted the practice.

    Eventually the behavior spread and now five Marineland whales supplement their diet with fresh fowl, the scientist said."

    It's Deep Blue Sea for Realzies!!!


    Have you ever wanted to know what AskDS contributor Rob Greene looks like when his manager 'shops him to a Shrek picture? Now you can:


    Have a nice weekend folks,

    - Ned “image” Pyle

  • How to NOT Use Win32_Product in Group Policy Filtering

    Hi all, Ned here again. I have worked many slow boot and slow logon cases over my career. The Directory Services support team here at Microsoft owns a sizable portion of those operations - user credentials, user profiles, logon and startup scripts, and of course, group policy processing. If I had to pick the initial finger pointing that customers routinely make, it's GP. Perhaps it's because group policy is the least well-understood part of the process, or maybe because it's the one with the most administrative fingers in the pie. When it comes down to reality though, group policy is more often not the culprit. Our new changes in Windows 8 will help you make that determination much quicker now.

    Today I am going to talk about one of those times that GPO is the villain. Well, sort of... he's at least an enabler. More appropriately, the optional WMI Filtering portion of group policy using the Win32_Product class. Win32_Product has been around for many years and is both an inventory and administrative tool. It allows you to see all the installed MSI packages on a computer, install new ones, reinstall them, remove them, and configure them. When used correctly, it's a valuable option for scripters and Windows PowerShell junkies.

    Unfortunately, Win32_Product also has some unpleasant behaviors. It uses a provider DLL that validates the consistency of every installed MSI package on the computer - or off of it, if using a remote administrative install point. That makes it very, very slow.

    Where people trip up usually is group policy WMI filters. Perhaps the customer wants to apply managed Internet Explorer policy based on the IE version. Maybe they want to set AppLocker or Software Restriction policies only if the client has a certain program installed. Perhaps even use - yuck - Software Installation policy in a more controlled fashion.

    Today I talk about some different options. Mike didn’t write this but he had some good thoughts when we talked about this offline so he gets some credit here too. A little bit. Tiny amount, really. Hardly worth mentioning.

    If you have no idea what group policy WMI filters are, start here:

    Back? Great, let's get to it.

    Don’t use Win32_Product

The Win32_Product WMI class is part of the CIMV2 namespace and implements the MSI provider (msiprov.dll and the associated msi.mof) to list and validate installed Windows Installer packages. You will see MsiInstaller event 1035 in the Application log for each application queried by the class:

    Source: MsiInstaller
    Event ID: 1035
    Windows Installer reconfigured the product. Product Name: <ProductName>. Product Version: <VersionNumber>. Product Language: <languageID>. Reconfiguration success or error status: 0.

    And constantly repeated System events:

    Event Source: Service Control Manager

    Event ID: 7035


    The Windows Installer service was successfully sent a start control.


    Event Type: Information

    Event Source: Service Control Manager

Event ID: 7036


The Windows Installer service entered the running state.

That validation piece is the real speed killer. So much so, in fact, that it can lead to group policy processing taking many extra minutes on Windows XP when you use this class in a WMI filter - or even cause processing to time out and fail altogether. This is even more likely when:

    • The client contains many installed applications
    • Installation packages are sourced from remote file servers
• Install packages use certificate validation and the user cannot access the certificate revocation list for that package
    • Your client hardware is… crusty.

Furthermore, Windows Vista and later Windows versions cap WMI filter execution time at 30 seconds; if a filter fails to complete by then, it is treated as FALSE. On those OS versions, it will often appear that Win32_Product just doesn’t work at all.


    What are your alternatives?

    Group Policy Preferences, maybe

    Depending on what you are trying to accomplish, Group Policy Preferences could be the solution. GPP includes item-level targeting that has fast, efficient filtering of just about any criteria you can imagine. If you are trying to set some computer-based settings that a user cannot change and don’t mind preferences instead of managed policy settings, GPP is the way to go. As with all software, make sure you evaluate our latest patches to ensure it works as desired. As of this writing, those are:

    For instance, let's say you have a plotting printer that Marketing cannot correctly use without special Contoso client software. Rather than using managed computer policy to control client printer installation and settings, you can use GPP Registry or Printer settings to modify the values needed.


    Then you can use Item Level Targeting to control the installation based on the specialty software's presence and version.



    Alternatively, you can use the registry and file system for your criteria, which works even if the software doesn't install via MSI packages:


    An alternative to Win32_Product

    What to do if you really, really need to use a WMI filter to determine MSI installed versions and names though? If you look around the Internet, you will find a couple of older proposed solutions that - to be frank - will not work for most customers.

    1. Use the Win32reg_AddRemovePrograms class instead.
2. Use a custom class (like the one described here and frequently copied/pasted on the Interwebz).

The Win32reg_AddRemovePrograms class is not present on most client systems though; it is a legacy class, first delivered by the old SMS 2003 management WMI system. I suspect one of the reasons the System Center folks discarded it years ago for their own native inventory system is the same reason the custom class in #2 doesn’t work: it doesn't return 32-bit software installed on 64-bit computers. The class has not been updated since its initial release 10 years ago.

    #2 had the right idea though, at least as a valid customer workaround to avoid using Win32_Product: by creating your own WMI class using the generic registry provider to examine just the MSI uninstall registry keys, you can get a fast and simple query that reasonably detects installed software. Armed with the "how", you can also extend this to any kind of registry queries you need, without risk of tanking group policy processing. To do this, you just need notepad.exe and a little understanding of WMI.
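Conceptually, the registry provider class below does nothing more exotic than enumerate the Uninstall subkeys and read two values from each. Here's a rough Python sketch of that same query logic, with a plain dictionary standing in for the registry (the key and product names are made up; the real thing would use winreg against HKLM):

```python
# Hypothetical stand-in for HKLM\...\CurrentVersion\Uninstall:
# subkey name -> registry values. On a real Windows box you would
# enumerate this with winreg.EnumKey and winreg.QueryValueEx.
uninstall_key = {
    "{90140000-0011-0000-0000-0000000FF1CE}": {
        "DisplayName": "Microsoft Office Professional Plus 2010",
        "DisplayVersion": "14.0.4763.1000",
    },
    "7-Zip": {"DisplayName": "7-Zip 9.20", "DisplayVersion": "9.20.00.0"},
}

def find_product(name_substring):
    """Mimic the WMI filter: match on DisplayName, return KeyName + version."""
    return [
        (key, vals.get("DisplayVersion"))
        for key, vals in uninstall_key.items()
        if name_substring.lower() in vals.get("DisplayName", "").lower()
    ]

print(find_product("office"))
# [('{90140000-0011-0000-0000-0000000FF1CE}', '14.0.4763.1000')]
```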

    Roll Your Own Class

Windows Management Instrumentation uses Managed Object Format (MOF) files to describe Common Information Model (CIM) classes. You can create your own MOF files and compile them into the CIM repository using a simple command-line tool called mofcomp.exe.

You need to be careful here. Once you write your MOF, validate it by using the mofcomp.exe -check argument on your standard client and server images. You should also test it on those same machines using the -class:createonly argument (and without the -autorecover argument or #PRAGMA AUTORECOVER pre-processor directive) to ensure the class doesn't already exist. The last thing you want to do is break some other class.

When done testing, you're ready to give it a go. Here is a sample MOF, wrapped for readability. Note the highlighted sections that describe what the MOF examines and what the group policy WMI filter can use as query criteria. Unlike the oft-copied sample, this one understands both the normal native-architecture registry path as well as the Wow6432Node path that covers 32-bit applications installed on a 64-bit system.

    Start copy below =======>

// "AS-IS" sample MOF file for returning the two uninstall registry subkeys
// Unsupported, provided purely as a sample
// Requires compilation. Example: mofcomp.exe sampleproductslist.mof
// Implements sample classes: "SampleProductsList" and "SampleProductsList32"
//   (for 64-bit systems with 32-bit software)

[dynamic, provider("RegProv"),
ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]
class SampleProductsList {
[key] string KeyName;
[read, propertycontext("DisplayName")] string DisplayName;
[read, propertycontext("DisplayVersion")] string DisplayVersion;
};

[dynamic, provider("RegProv"),
ClassContext("local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Wow6432Node\\Microsoft\\Windows\\CurrentVersion\\Uninstall")]
class SampleProductsList32 {
[key] string KeyName;
[read, propertycontext("DisplayName")] string DisplayName;
[read, propertycontext("DisplayVersion")] string DisplayVersion;
};

    <======= End copy above

    Examining this should also give you interesting ideas about other registry-to-WMI possibilities, I imagine.

    Test Your Sample

    Copy this sample to a text file named with a MOF extension, store it in the %systemroot%\system32\wbem folder on a test machine, and then compile it from an administrator-elevated CMD prompt using mofcomp.exe filename. For example:


To test whether the sample is working, you can use WMIC.EXE to list the installed MSI packages. For example, here I am on a Windows 7 x64 computer with Office 2010 installed; that suite contains both 64-bit and 32-bit software, so I can use both of my custom classes to list out all the installed software:


Note that I did not specify a namespace in the sample MOF, which means it updates the \\root\default namespace instead of the more commonly used \\root\cimv2 namespace. This is intentional: the Windows XP implementation of the registry provider is in the Default namespace, so this makes your MOF OS-agnostic. It will work perfectly well on XP, 2003, 2008, Vista, 7, or even the Windows 8 family. Moreover, I don’t like updating the CIMv2 namespace if I can avoid it - it already has enough classes and is a bit of a dumping ground.

    Deploy Your Sample

    Now I need a way to get this MOF file to all my computers. The easiest way is to return to Group Policy Preferences; create a GPP policy that copies the file and creates a scheduled task to run MOFCOMP at every boot up (you can change this scheduling later or even turn it off, once you are confident all your computers have the new classes).





    You can also install and compile the MOF manually, use psexec.exe, make it part of your standard OS image, deploy it using a software distribution system, or whatever. The example above is just that - an example.

    Now that all your computers know about your new WMI class, you can create a group policy WMI filter that uses it. Here are a couple examples; note that I remembered to change the namespace from CIMv2 to DEFAULT!




You're in business with a system that, while not optimal, is certainly far better than Win32_Product. It’s fast and lightweight, relatively easy to manage, and like all adequate solutions, designed not to make things worse in its efforts to make things different.

    And another idea (updated 4/23)

    AskDS contributor Fabian Müller had another idea that he uses with customers:

    1. Define environment variables using GPP based on Registry Item-Level targeting filters or just deploy the variables during software installation phase, e.g. %IEversion%= 9

    2. Use this environment variable in WMI filters like this: Root\CIMV2;SELECT VARIABLEVALUE FROM Win32_Environment WHERE NAME='IEversion' AND VARIABLEVALUE='9'

Disadvantage: the first computer start or user logon will not pass the WMI filter, since the environment variable has to be created first (if set by GPP). It would be better to have the environment variable created during the software installation/deployment phase (whatever the software being deployed).

Advantage: the environment-variable WMI query is very fast by comparison. And you can use the variable for multiple purposes - for example, as part of CMD-based startup and logon scripts.

    An aside

    Software Installation policy is not designed to be an enterprise software management solution and neither are individual application self-update systems. SI works fine in a small business network as a "no frills" solution but doesn’t offer real monitoring or remediation, and requires too much of the administrator to manage. If you are using these because of the old "we only fix IT when it's broken" answer, one argument you might take to management is that you are broken and operating at great risk: you have no way to deploy non-Microsoft updates in a timely and reliable fashion.

Even though the free Windows Update and Windows Server Update Services support Windows, Office, SQL, and Exchange patching, it’s probably not enough; anyone with more than five minutes in the IT industry knows that all of your software should be receiving periodic security updates. Does anyone here still think it's safe to run Adobe, Oracle, or thousands of other vendor products without controlled, monitored, and managed patching? If your network doesn't have a real software patching system, it's like a building with no sprinklers or emergency exits: nothing to worry about… until there's a fire. You wouldn’t run computers without anti-virus protection, but the number of customers I speak to that have zero security patching strategy is very worrying.

It's not 1998 anymore, folks. A software and patch management system is no longer optional if you have a business with more than a hundred computers; those days are done for everyone. Even for Apple, although they haven't realized it yet. We make System Center, but there are other vendors out there too, and I’d rather you bought a competing product than have no patch management at all.

    Until next time,

    - Ned "pragma-tism" Pyle

  • Exclusive! Shocking New Windows Names Revealed!!!

    Ok, that might have been a slightly inflammatory and misleading title.

    • Windows 8 is now officially called... Windows 8. The full set of edition names are Windows 8, Windows 8 Pro, Windows RT (that's WOA), and Windows 8 Enterprise. Brandon Leblanc has the full breakout.
    • Windows Server "8" is now officially called... Windows Server 2012. You can read more about the strategy from Brad Anderson here. Editions to follow at a later time.

That server name also tells you two things: One, if you had bet against that name in the office pool, you are a born loser. Two, that we may make radical changes in OS capabilities, but when it comes to server branding, we are more conservative than a prom chaperone. Who is also a nun. And voted libertarian. In Switzerland.

    Back to work, you!

     - Ned "Ned Pyle" Pyle

  • Saturday Mail Sack: Because it turns out, Friday night was alright for fighting edition

    Hello all, Ned here again with our first mail sack in a couple months. I have enough content built up here that I actually created multiple posts, which means I can personally guarantee there will be another one next week. Unless there isn't!

    Today we answer your questions around:

    One side note: as I was groveling old responses, I came across a handful of emails I'd overlooked and never responded to; <insert various excuses here>. People who know me know that I don’t ignore email lightly. Even if I hadn't the foggiest idea how to help, I'd have at least responded with a "Duuuuuuuuuuurrrrrrrr, no clue, sorry".

Therefore, I'll make you a deal: if you sent us an email in the past few months and never heard back, please resend your question and I'll answer it as best I can. That way I don’t spend cycles answering something you already figured out later, but if you’re still stuck, you have another chance. Sorry about all that - what with Windows 8 work, writing our internal support engineer training, writing public content, Jonathan having some kind of South Pacific death flu, and presenting at internal conferences… well, only the usual insane Microsoft Office clipart can sum up why we missed some of your questions:


    On to the goods!


    Is it possible to create a WMI Filter that detects only virtual machines? We want a group policy that will apply specifically to our virtualized guests.


    Totally possible for Hyper-V virtual machines: You can use the WMI class Win32_ComputerSystem with a property of Model like “Virtual Machine” and property Manufacturer of “Microsoft Corporation”. You can also use class Win32_BaseBoard for the Product property, which will be “Virtual Machine” and property Manufacturer that will be “Microsoft Corporation”.


    Technically speaking, this might also capture Virtual PC machines, but I don’t have one handy to see, and I doubt you are allowing those to handle production workloads anyway. As for EMC VMWare, Citrix Xen, KVM, Oracle Virtual Box, etc. you’ll have to see what shows for Win32_BaseBoard/Win32_ComputerSystem in those cases and make sure your WMI filter looks for that too. I don’t have any way to test them, and even if I did, I'd still make you do it out of spite. Gimme money!
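Put together as a WQL filter against the root\CIMv2 namespace, the Hyper-V detection above would look something like this (a sketch; verify the Model and Manufacturer strings on your own virtual machines before relying on it):

```sql
SELECT * FROM Win32_ComputerSystem
WHERE Manufacturer = "Microsoft Corporation" AND Model = "Virtual Machine"
```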

    Which reminds me - Tad is back:



    The Understand and Troubleshoot AD DS Simplified Administration in Windows Server "8" Beta guide states:

    Microsoft recommends that all domain controllers provide DNS and GC services for high availability in distributed environments; these options default to on when installing a domain controller in any mode or domain.

    But when I run Install-ADDSDomainController -DomainName -whatif it returns that the cmdlet will not install the DNS Server (DNS Server: No).

    If Microsoft recommends that all domain controllers provide DNS, why do I need to specify -InstallDNS argument?


The DNS Server: No output is a cosmetic issue with -whatif. It should say Yes, but doesn't unless you specifically pass -InstallDns:$true. You don't have to specify -InstallDns; the cmdlet will automatically* install the DNS server unless you specify -InstallDns:$false.

    * If you are using Windows DNS on domain controllers, that is. The UTG isn't totally accurate in this version (but will be in the next). The logic is that if that domain already hosts the DNS, all subsequent DCs will also host the DNS by default. So to be very specific:

    1. New forest: always install DNS
    2. New child or new tree domain: if the parent/tree domain hosts DNS, install DNS
    3. Replica: if the current domain hosts DNS, install DNS
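    For example, a pair of -WhatIf runs should reflect that logic (the domain name here is a placeholder):

```powershell
# Replica in a domain that already hosts DNS: DNS installs by default
Install-ADDSDomainController -DomainName corp.contoso.com -WhatIf

# Explicitly opting out of the DNS Server role
Install-ADDSDomainController -DomainName corp.contoso.com -InstallDns:$false -WhatIf
```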


    How can I disable a user on all domain controllers, without waiting for (or forcing) AD replication?


    The universal in-box way that works in all operating systems would be to use DSMOD.EXE USER and feed it the DC names in a list. For example:

    1. Create a text file that contains all the DCs in your forest, in a line-separated list:


    2. Run a FOR loop command to read that list and disable the specified user against each domain controller.

    FOR /f %i IN (some text file) DO dsmod user "some DN" -disabled yes -s %i

    For instance:
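    With a hypothetical dclist.txt and user DN, that might look like:

```
FOR /f %i IN (dclist.txt) DO dsmod user "CN=Jane Doe,OU=Sales,DC=contoso,DC=com" -disabled yes -s %i
```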


    You also have the AD PowerShell option in your Win2008 R2 DC environment, and it’s much easier to automate and maintain. You just tell it the domain controllers' OU and the user and let it rip:

    get-adcomputer -searchbase "your DC OU" -filter * | foreach {disable-adaccount "user logon ID" -server $_.dnshostname}

    For instance:
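    Using placeholder names (the DC OU distinguished name and user logon ID are assumptions), something like:

```powershell
Get-ADComputer -SearchBase "OU=Domain Controllers,DC=contoso,DC=com" -Filter * |
    ForEach-Object { Disable-ADAccount -Identity jdoe -Server $_.DNSHostName }
```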


    If you weren't strictly opposed to AD replication (short circuiting it like this isn't going to stop eventual replication traffic) you can always disable the user on one DC then force just that single object to replicate to all the other DCs. Check out repadmin /replsingleobj or the new Windows Server "8" Beta sync-adobject cmdlet.


     The Internet also has many further thoughts on this. It's a very opinionated place.


    We have found that modifying the security on a DFSR replicated folder and its contents causes a big DFSR replication backlog. We need to make these permissions changes though; is there any way to avoid that backlog?


    Not the way you are doing it. DFSR has to replicate changes, and you are changing every single file; after all, how can you trust a replication system that does not replicate? You could consider changing permissions "from the bottom up" - modifying perms on lower-level folders first, in some sort of staged fashion - to minimize the amount of replication that has to occur, but that sounds like a recipe for getting things wrong or replicating things twice, making it worse. You will just have to bite the bullet in Windows Server 2008 R2 and older DFSR. Do it on a weekend and, next time, treat this as a lesson learned: plan your security design so that all of your user base fits into the model using groups.


    It is a completely different story if you switch to Windows Server "8" Beta - well really, the RTM version when it ships. There you can use Central Access Policies (similar to Windows Server 2008 R2's global object access auditing). This new kind of security system is part of the Dynamic Access Control feature and abstracts the user access from NTFS, meaning you can change security using claims policy and not actually change the files on the disk (under some but not all circumstances - more on this when I write a proper post after RTM). It's amazing stuff; in my opinion, DAC is the first truly huge change in Windows file access control since Windows NT gave us NTFS.


    Central Access Policy is not a trivial thing to implement, but this is the future of file servers. Admins should seriously evaluate this feature when testing Windows Server "8" Beta in their lab environments and thinking about future designs. Our very own Mike Stephens has written at length about this in the Understand and Troubleshoot Dynamic Access Control in Windows Server "8" Beta guide as well.


    [Perhaps interestingly to you the reader, this was my question to the developers of AD PowerShell. I don’t know everything after all… - Ned]

    I am periodically seeing error "invalid enumeration context" when querying the Redmond domain using get-adcomputer. It’s a simple query to return all the active Windows 8 and Windows Server "8" computers that were logged into since February 15th and write them to a CSV file:


    It runs for quite a while and sometimes works, sometimes fails. I don’t find any well-explained reference to what this error means or how to avoid it, but it smells like a “too much data asked for over too long a period of time” kind of issue.


    Enumeration contexts have a finite, hardcoded lifetime, and you get this error when one expires. You might see it when executing searches that walk a huge quantity of data using few indexed attributes, only to return a small result set. If you hit a DC that is not very busy, the query runs faster and may have enough time to complete even for a big dataset like this one; server hardware is also a factor. You can also try starting the search at a deeper level, or tweak the attribute indexes (although obviously not in this case).
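    One hedged workaround along those lines - narrow the search base and do the date comparison client-side, so the LDAP query itself stays simple (the OU path here is a placeholder):

```powershell
$cutoff = Get-Date '2012-02-15'
Get-ADComputer -SearchBase "OU=Clients,DC=redmond,DC=corp,DC=microsoft,DC=com" `
    -Filter 'OperatingSystem -like "Windows 8*"' `
    -Properties OperatingSystem,LastLogonTimeStamp |
    Where-Object { [DateTime]::FromFileTime($_.LastLogonTimeStamp) -gt $cutoff } |
    Export-Csv .\Win8Machines.csv -NoTypeInformation
```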

    [For those interested, when the query worked, it returned roughly 75,000 active Windows 8 family machines from that domain alone. Microsoft dogfoods in production like nobody else, baby - Ned]


    Is there any chance that DFSR could lock a file while it is replicating outbound and prevent user access to their data?


    DFSR uses the BackupRead() function when copying a file into the staging folder (i.e. any file over 64KB, by default), so that should prevent any “file in use” issues with applications or users; the file "copying" to the staging folder is effectively instantaneous and non-exclusive. Once staged and marshaled, the copy of the file is replicated and no user has any access to that version of the file.

    For a file under 64KB, it is simply replicated without staging, and that operation of making a copy and handing it to RPC is so fast there’s no reasonable way for anyone to ever see any issues there. I certainly have never seen it, and I should have by now after six years.


    Why does TechNet state that USMT 4.0 offline migrations don’t work for certain OS settings? How do I figure out the complete list?


    Manifests that use migration plugin DLLs aren’t processed when running offline migrations. It's a by-design limitation of USMT, not a bug. To see which manifests you need to examine and consider creating custom XML to handle, review the complete list at Understanding what the USMT 4.0 CONFIG manifests migrate (Part 1: Introduction).


    One of my customers has found that the "Everyone" group is added to the below folders in Windows 2003 and Windows 2008:

    Windows Server 2008



    Windows Server 2003

    C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\DSS\MachineKeys

    C:\Documents and Settings\All Users\Application Data\Microsoft\Crypto\RSA\MachineKeys

    1. Can we remove the "Everyone" group and give permissions to another group like - Authenticated users for example?

    2. Will replacing that default cause issues?

    3. Why is this set like this by default?


    [Courtesy of:



    These permissions are intentional. They are intended to allow any process to generate a new private key, even an Anonymous one. You'll note that the permissions on the MachineKeys folder are limited to the folder only. Also, you should note that inheritance has been disabled, so the permissions on the MachineKeys folder will not propagate to new files created therein. Finally, the key generation code itself modifies the permissions on new key container files before the private key is actually written to the container file.

    In short, messing with these permissions will probably lead to failures in creating or accessing keys belonging to the computer. So please don't touch them.

    1. Replacing Everyone with Authenticated Users probably won't cause any problems. Microsoft, however, doesn't test cryptographic operations after such a permission change; therefore, we cannot predict what will happen in all cases.

    2. See my answer above. We haven't tested it. We have, however, been performing periodic security reviews of the default Windows system permissions, tightening them where possible, for the last decade. The default Everyone permissions on the MachineKeys folder have cleared several of these reviews.

    3. In local operations, Everyone includes unidentified or anonymous users. The theory is that we always want to allow a process to generate a private key. When the key container is actually created and the key written to it, the permissions on the key container file are updated with a completely different set of default permissions. All the default permissions allow are the ability to create a file, read and write data. The permissions do not allow any process except System to launch any executable code.


    If I specify a USMT 4.0 config.xml child node to prevent migration, I am still seeing the settings migrate. But if I set the parent node, those settings do not migrate. The consequence being that no child nodes migrate, which I do not want.

    For example, on XP the Dot3Svc service is set to Manual startup.  On Win7, I want the Dot3Svc service set to Automatic startup.  If I use this config.xml on the loadstate, the service is set to manual like the XP machine and my "no" setting is ignored:

    <component displayname="Networking Connections" migrate="yes" ID="network_and_internet\networking_connections">
      <component displayname="Microsoft-Windows-Wlansvc" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-VWiFi" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-RasConnectionManager" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-RasApi" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-PeerToPeerCollab" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-Native-80211" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-MPR" migrate="yes" ID="<snip>"/>
      <component displayname="Microsoft-Windows-Dot3svc" migrate="no" ID="<snip>"/>
    </component>



    Two different configurations can cause this symptom:

    1. You are using a config.xml file created on Windows 7, then running it on a Windows XP computer with scanstate /config

    2. The source computer was Windows XP and it did not have a config.xml file set to block migration.

    When coming from XP, where downlevel manifests were used, loadstate does not process those differently-named child nodes on the destination Win7 computer. So while the parent node set to NO would work, the child nodes would not, as they have different displayname and ID.

    It’s a best practice to use a config.xml in scanstate if going from x86 to x64; otherwise, you end up with damaged COM settings. Beyond that, you only need to generate per-OS config.xml files if you plan to change default behavior. All the manifests run by default if there is a config.xml with no modifications or if there is no config.xml at all.

    Besides being required for XP to block settings, you should also definitely lean towards using config.xml on the scanstate rather than the loadstate. If using Vista to Vista, Vista to 7, or 7 to 7, you could use the config.xml on either side, but I’d still recommend sticking with the scanstate; it’s typically better to block migration from adding things to the store, as it will be faster and leaner.

    Other Stuff

    [Many courtesy of our pal Mark Morowczynski -Ned]

    Happy belated 175th birthday Chicago. Here's a list of things you can thank us for, planet Earth; where would you be without your precious Twinkies!?

    Speaking of Chicago…

    All the new MCSE and certification news reminded me of the other side to that coin.

    Do you know where your nearest gun store is located? Map of the Dead does. Review now; it will be too late when the zombies rise from their graves, and I don't plan to share my bunker, Jim.


    If you call yourself an IT Pro, you owe it to yourself to visit right now and buy… everything. They make great alpha geek conversation pieces. To get things started, I recommend these:

    Sigh - there is never going to be another Firefly

    And finally…

    I started re-reading Terry Pratchett, picking up where from where I left off as a kid. Hooked again. Damn you English writers, with your understated awesomeness!

    Ok, maybe not all English Writers…


    Until next time,

    - Ned "Jonathan is seriously going to kill me" Pyle

  • New USMT 5.0 Features for Windows 8 Consumer Preview

    Hi all, Ned here again. Frequent readers know that I’ve written many times about the User State Migration Tool; it’s surprising to some, but the Directory Services team owns supporting this tool within Microsoft in the United States (our European colleagues wisely made sure the Deployment team owns it there). With Windows 8 Consumer Preview, we released the new tongue twisting Windows Assessment and Deployment Kit for Windows 8 Consumer Preview (Windows ADK), which replaces the old WAIK and contains the updated User State Migration Tool 5.0 (binary version 6.2.8250). The new tool brings a long sought capability to the toolset: corrupt store detection and extraction. There are also various incremental supportability improvements and bug fixes.

    Store verification and recovery

    USMT 4.0 introduced usmtutils.exe, a simple command-line tool mainly used to delete hardlink folders that were in use by some application and no longer removable through normal measures. The new usmtutils.exe includes two new command-line arguments:

    /verify[:reportType] <filePath> [/l:logFile] [/decrypt[:<AlgID>]] [/key:keyString] [/keyfile:fileName]

    /extract <filePath> <destinationPath> [/i:<includePattern>] [/e:<excludePattern>] [/l:logFile] [/decrypt[:<AlgID>]] [/key:keyString | /keyfile:fileName] [/o]

    You use the /verify option after gathering a scanstate compressed store. This checks the store file’s consistency and reports whether it contains corrupted files or a corrupted catalog. It’s just a reporting tool, with options for the verbosity of the report as well as the optional encryption key info used to secure a compressed store. In Microsoft's experience, hardware issues typically cause corrupt compressed stores, especially when errors are not reported back from USB devices.
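    For example, a summary verification of a compressed store might look like this (paths are hypothetical):

```
usmtutils /verify:summary D:\MigStore\USMT.MIG /l:C:\Logs\verify.log
```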


    You use the /extract option if you want to simply restore certain files, or cannot restore a compressed store with loadstate. For example, you’d use it if the store was later partially corrupted after validation, if loadstate cannot operate normally on a destination computer, or if a user deleted a file shortly after loadstate restoration but before their own backups were run. This new capability can restore files based on patterns (both include and exclude). It doesn’t restore settings or registry data, just files.
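    And to pull selected files out of a store that loadstate can no longer process (paths and the include pattern below are hypothetical):

```
usmtutils /extract D:\MigStore\USMT.MIG C:\RecoveredFiles /i:"*\*.docx" /o
```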


    Changes in capabilities

    USMT also now includes a number of other less sexy - but still important - changes. Here are the high points:

    • Warnings and logging – Scanstate and loadstate now warn you at the console with "…manifests is not present" if they cannot find the replacement and downlevel manifest folders:


    USMT also warns about the risks of using the /C option (rather than /VSC combined with ensuring applications are not locking files), and how many units were not migrated:


    Remember: you cannot use /vsc with /hardlink migrations. Either you continue to use /C or you figure out why files are in use and stop the underlying issue.

    To that point, the log contains line items for each /C skipped file as well as a summary error report at the bottom:

    ----------------------------- USMT ERROR SUMMARY ------------------------------
    * One or more errors were encountered in migration (ordered by first occurence)
    | Error Code | Caused Abort | Recurrence | First Occurrence
    | 33         | No           | 18         | Read error 33 for D:\foo [bar.pst]. Windows error 33 description: The process cannot access the file because another process has locked a portion of the file.[gle=0x00000012]
    18 migration errors would have been fatal if not for /c. See the log for more information

    • Profile scalability – USMT 4.0 can fail to migrate if there are too many profiles and not enough memory. It takes a perfect storm but it’s possible and you would see error: “Close programs to prevent information loss. Your computer is low on memory” during loadstate. USMT 5.0 now honors an environmental variable of:


    When set, loadstate trims its memory usage much more aggressively. The tradeoff is slower restoration, so don’t set this variable willy-nilly.

    • Built-in Variables - USMT now supports all of the KNOWNFOLDERID types. Previously, some (such as FOLDERID_Links) were not supported and required some hacking.

    • Command-line switches – the legacy /ALL switch was removed. The ALL argument was implicit and therefore pointless; it mainly caused issues when people tried to combine it with other arguments. 

    • /SF Works - the undocumented /SF switch that used to break things no longer breaks things. 
    • Scanstate Administrator requirements – Previously, loadstate required your membership in the Administrators group, but bizarrely, scanstate did not. This was pointless and confusing, as migration does not work correctly without administrative rights. Now they both require it.

    • "Bad" data handling - Certain unexpected file data formats used to lead to errors like "Windows error 4317 description: The operation identifier is not valid". Files with certain strings in alternate data streams would fail with "Windows error 31 description: A device attached to the system is not functioning". USMT handles these scenarios now.

    • NTUSER.DAT load handling - The NTUSER.DAT last modified date no longer changes after you run scanstate, meaning that /UEL now works correctly with repeated migrations.

    • Manifests and UNC paths - Previously, USMT failed to find its manifest folders if you ran scanstate or loadstate through a UNC path. Now it looks in the same folder as the running executable, regardless of that path's form.

    • Orphaned profiles - When USMT cannot load a user profile as described here, it tries 19 more times (waiting 6 seconds between tries) just like USMT 4.0. However, USMT skips any subsequent profiles that fail to load after one attempt. Therefore, no matter how many incorrectly removed profile entries exist, the most delay you can see is 2 minutes.

    • UEL and UE - In USMT 4.0, a /UEL exclusion rule would override the processing of a /UE exclusion rule, even though if you were setting /UE, you likely had a specific need for it. USMT now returns to the USMT 3.01 behavior of /UE overriding /UEL.

    USMT 5.0 still works with Windows XP through Windows 7, and adds Windows 8 x86 and AMD64 support as well. All of the old rules around CPU architecture and application migration are unchanged in the beta version (USMT 6.2.8250).

    Feedback and Reminder about the Windows 8 Consumer Preview

    The place to send issues is the IT Pro TechNet forums. That engages everyone from our side through our main conduits and makes your feedback noticeable. Not all developers are readers of this blog, naturally.

    Furthermore, Windows 8 Consumer Preview is a pre-release product and is not officially supported by Microsoft. In general, we do not recommend using pre-release products in production environments. For more information on the Windows 8 Consumer Preview, read this blog post from the Windows Experience Blog.

    Until next time,

    Ned “there are lots of new manifests too, but I just couldn’t be bothered” Pyle

  • Your 24 Month XP Warning

    Hi all, Ned here again with a public service announcement:

    On April 8th 2014, Windows XP support ends

    For the temporally challenged, that’s exactly two years from today. Hopefully, some of you don’t care because you’ve already gotten off XP. After all, Windows 7 now holds a 41% share of Windows desktop distributions. Here’s their March 2012 take:


    What that number also means though is that roughly 51% of the remaining desktops are still on XP. Hundreds of millions of computers that, two years from today, will stop getting security updates and lose support from third party software vendors.

    If you have not started migrating your Windows XP environment to Windows 7 and begun evaluating Windows 8 Consumer Preview, you are probably late. According to our own customer deployment data, enterprise desktop replacement projects average 18-32 months. As someone who writes a lot about USMT, I can say that a customized PC migration undertaking is no joke. There are loads of moving parts in mass PC replacements, and every company is different, even within the common areas of desktop, mobile, and work-from-home machines. If you’re prudent, you’ll spend months planning and testing before you get anywhere near your first end user. That means if you’re a company with 50,000 XP desktops, you’ll have to average around 2,100 desktops migrated a month before support ends. If you use the more realistic measure of 250 working days in a year, you must average 100 migrated computers per working day, starting this minute.
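    For the skeptical, the arithmetic in that paragraph checks out; here is a quick sketch using the figures from above:

```python
# Back-of-the-envelope XP migration math: 50,000 desktops, 24 months,
# 250 working days per year (figures from the paragraph above).
fleet = 50000
months_left = 24
working_days_per_year = 250

per_month = fleet / months_left                        # ~2,083 -> "around 2,100"
per_working_day = fleet / (2 * working_days_per_year)  # 100 per working day

print(round(per_month), per_working_day)
```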

    The fiscal year is drawing to a close and the 24 month clock is running. Do you know where your XP clients are?

    Until next time,

    - Ned “like the Cubs, it’s a rebuilding year” Pyle


    PS: Oh, and Vista mainstream support ended April 10th (today, as I wrote this). That means now it only gets security updates for the next 5 years, no further QFEs or service packs.

    Like you care.

  • Group Policy Management Improvements in Windows Server "8" Beta

    Hi all, Ned here again. If you've been supporting group policy for years, you’ve grown used to its behaviors. For something designed to manage an enterprise, its initial implementation wasn’t easy to manage itself. The Group Policy Management Console improved this greatly after Windows Server 2003, but there was room for enhancement.

    Windows Server "8" Beta introduces a number of interesting Group Policy management changes to advance things. These include detecting overall replication consistency as well as remote policy refresh and easier resultant set of policy troubleshooting. Windows 8 Consumer Preview benefits from some of these changes as well.

    Let's dig in.

    Infrastructure Status

    Once upon a time, someone wrote a Windows 2000 resource kit utility called gpotool.exe (no longer supported). It was supposed to tell you if the SYSVOL and AD portions of a group policy were synchronized on a given domain controller and between DCs in a domain. If it returned message "Policies OK", you were supposed to be golden.

    Unfortunately, gpotool is not very bright or honest, which is why we do not recommend customers use it. It only checks the gpt.ini files in SYSVOL. Anyone who manages group policy knows that each GP GUID folder in SYSVOL contains many files critical to applying group policy; the gpt.ini existing is immaterial if the registry.pol does not exist or is some heinous stale version. Furthermore, gpotool bases everything on the gpt.ini version matching between AD and SYSVOL, alerting you if they don't. Except that version matching alone hasn't mattered since Windows 2000, and checking actual file consistency is what's important.

    Enter Windows Server "8" Beta. When you fire up GPMC from a server or RSAT, then navigate to a domain node, you now see a new Status tab (more properly called the Group Policy Infrastructure Status tool). GPMC sets the DC it connected to as a baseline source of comparison. By default, that would be the PDC emulator, which GPMC tries to connect to first.


    If you click Detect Now, the computer running GPMC directly reaches out to all the domain controllers in that domain using the LDAP and SMB protocols. It compares all the SYSVOL group policy file hashes, file counts, ACLs, and GPT versions against the baseline server. It also checks each DC's AD group policy object count, versions, and ACLs against the baseline. If everything is copacetic, you get the good news right there in the UI.


    If it's not, you don't:


    Note how the report renders above. If the Active Directory and SYSVOL columns are blank, the GPT versions match between SYSVOL and AD, which means the file hashes or security are out of sync (an indication of latency, at the least); otherwise, you will see version messages. If the FRS or DFSR service isn't running on a DC other than the baseline, or SYSVOL is not shared, the SysVol message changes to Inaccessible. If you turn off a DC or the NTDS service, the Active Directory field changes to Inaccessible. If you just deleted or added a group policy, the Active Directory field changes to Number of GPOs for comparison. It's all straightforward.

    This new tool doesn’t grant permission to turn off your brain, of course. It's perfectly normal for AD and SYSVOL to be latent and out of sync between DCs for periods of time. Don't assume that servers showing replication in progress are in an error state - that's why it specifically doesn't say “error” in GPMC. Finally, keep in mind that this functionality in the public Beta is naturally a bit unstable; feel free to report issues to the Windows Server 8 Beta Forums along with detailed repro steps, and we can chat about whether your issue is unknown. For example, stopping the DFSR service on the PDCE and then clicking Detect Now to use that DC as the baseline terminates the MMC. Don’t take it too hard - work in progress, right? We'd love your feedback.

    Moving right along…

    Remote Policy Refresh

    You can now use GPMC to target an OU and force group policy refresh on all of its computers and their currently logged on users. Simply right click any organizational unit and click Group Policy Update. The update occurs within 10 minutes (randomized on each targeted computer) in order to prevent crushing some poor DC in a branch office.




    Windows Server "8" Beta Group Policy also updates the GroupPolicy PowerShell module to include a new cmdlet named Invoke-GpUpdate. If you examine its help, you see that it is very much like the classic gpupdate.exe. If you -force using invoke-gpupdate, you do the same as /force in gpupdate.exe, for instance.




    Invoke-GPUpdate [[-Computer] <string>] [[-RandomDelayInMinutes] <int>] [-AsJob] [-Boot] [-Force] [-LogOff] [-Target <string>] [<CommonParameters>]

    Obviously, this cmdlet gives you much more control over the remote policy refresh process than GPMC. For instance, you can target a particular computer:

    Invoke-gpupdate -computer <some computer>

    Moreover, unlike the "within 10 minutes" pseudo-random behavior of GPMC, you can make the policy refresh happen right now and force group policy to update regardless of version changes. I don't know about you, but if I am interactively invoking a policy update for a given computer, I am not interested in waiting!
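    For example, this refreshes policy on one machine immediately (the computer name is a placeholder):

```powershell
Invoke-GPUpdate -Computer CLI-W8-01 -RandomDelayInMinutes 0 -Force
```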


    Since this is PowerShell, you have a great deal of flexibility compared to a purpose-built graphical or command-line tool. For example, you can get a list of computers with an arbitrary description, then invoke against each one using a pipeline to ForEach-Object, regardless of OU:
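    A sketch of that pipeline (the description filter is just an example):

```powershell
Get-ADComputer -Filter 'Description -like "Kiosk*"' |
    ForEach-Object { Invoke-GPUpdate -Computer $_.Name -RandomDelayInMinutes 0 }
```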


    If you’re interested, this tool works by creating remote scheduled tasks, which is how it handles logged-on users and randomized refresh times. Another good reason to ensure the Task Scheduler service is running.


    New RSOP Logging Data

    I saved the best for last. The group policy resultant set of policy logs include a number of changes designed to make troubleshooting and policy analysis easier. Just like in the last few versions of Windows, you can still use GPMC Group Policy Results or GPRESULT /H to gather an HTML log file showing how and what policy applied to a user and computer.

    When you open that resulting HTML file, you now see an updated Summary section that provides better "at a glance" information on whether policy worked and the type of network speeds detected. Even better is the new Component Status area, which shows you the time each element of group policy processing took to complete.


    It also stores the associated operational event log activity under View Log, which used to require running gplogview.exe. Rather than parsing the event log with an Activity ID for the computer and user portions of policy processing, you just click the link to see it all unfold before you.


    Finally, there is a change to the HTML result file for the applied policies. After 12 years, we’ve reached a point where there are thousands of individual Administrative template entries; far more than anyone could possibly remember or reliably discern from their titles. To make this easier, the Windows 8 version of the report now includes explanatory hotlinks to each of those policy entries.


    By clicking the links in the report, you get the full Explanation text included with that policy entry - in this case, the new Primary Computer policy for roaming profiles (which I’ll discuss in a future post).



    Key Point

    Remote RSOP logging and Group Policy refresh require that you open firewall ports on the targeted computers. This means allowing inbound communication for RPC, WMI/DCOM, event logs, and scheduled tasks. You can enable the built-in Windows Advanced Firewall inbound rules:

    • Remote Policy Update
      • Remote Scheduled Tasks Management (RPC)
      • Remote Scheduled Tasks Management (RPC-EPMAP)
      • Windows Management Instrumentation (WMI-in)
    • Remote Policy Logging
      • Remote Event Log Management (NP-in)
      • Remote Event Log Management (RPC)
      • Remote Event Log Management (RPC-EPMAP)
      • Windows Management Instrumentation (WMI-in)

    These rules are part of the “Remote Scheduled Tasks Management”, “Remote Event Log Management”, and “Windows Management Instrumentation” groups. Under the covers, that means TCP RPC port 135, named pipe port 445, and the dynamic ports associated with the endpoint mapper, like always.
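    If you script your deployments, enabling those rule groups with netsh on the targeted computers should cover it (group names as listed above; test in a lab before deploying broadly):

```
netsh advfirewall firewall set rule group="Remote Scheduled Tasks Management" new enable=yes
netsh advfirewall firewall set rule group="Remote Event Log Management" new enable=yes
netsh advfirewall firewall set rule group="Windows Management Instrumentation (WMI)" new enable=yes
```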

    Feedback and Beta Reminder

    The place to send issues is the IT Pro TechNet forums. That engages everyone from our side through our main conduits and makes your feedback noticeable. Not all developers are readers of this blog, naturally.

    Furthermore, remember that this article references a pre-release product. Microsoft does not support Windows 8 Consumer Preview or Windows Server "8" Beta in production environments unless you have a special agreement with Microsoft. Read that EULA you accepted when installing!

    Until next time,

    Ned “I used a fancy arrow!” Pyle

  • Gimme Some Sugar

    Hi all, Ned here again. Like Bruce Campbell, we’ve been away for a while, but you can always count on us to return for the sequel. Some of the Windows Server “8” Beta blogging rules have been relaxed and we’re ready to begin firing our boomstick. Look for the first one here in a few minutes.

    Besides that, I’ve had plenty of inspiration in the past month from some of your questions and have some other non-8 posts in the quench tub that should be ready to go out soon; I’m thinking new USMT tricks, WMI filtering coolness, AD forest recovery gotchas, and some others. I might even find time for a Friday Mail Sack next week, who knows?

    It’s a dirty job here, but someone has to get the backend of the pony.

    Enough with metaphor mixing – on to the goods. The next post is a doozy: group policy management changes in Windows Server “8” Beta.

    - Ned “Honey, you got reeeal ugly” Pyle