Friday Mail Sack: “Who am I kidding, more like Monthly” Edition


Hi folks, Ned here again with another tri-weekly Friday Mail Sack. This time we talk service auditing, trust creation, certificates and USMT, SYSVOL migration with RODCs, DFS stuff, RPC and firewalls, virtualization, and the zombie corpse of FRS.

Shoot it in the head!


We’re setting up a trust between two domains in two forests. When we type in the name of the domain, we are immediately prompted for credentials in that domain with the message “to create this trust relationship, you must supply user credentials for the specified domain”. We can enter any credentials here from that domain and it will work – some nobody user works, never mind an admin:


We are later prompted for administrative credentials like usual when finalizing the trust. Everything works, it’s just weird.


Anyone can reproduce this issue by removing the NullSessionPipes registry entry for LSARPC. NullSessionPipes – along with RestrictNullSessAccess - controls anonymous access to Named Pipes. Very legacy stuff. The list of default allowed protocols varies between OS and server role; for instance, a pure Windows Server 2008 R2 DC has a default list of:


You’ll find various security documents giving valid (or crazy) advice about messing with these settings but it boils down to “what do you need for your specific server, client, and application workloads to function?” If you get so secure that no one can work, you’ve gone too far.

In this case, setting up a trust uses the LSARPC protocol to connect to a DC in the other domain and find out basic information about it. If you can’t connect to it anonymously for this “phone book” kind of directory info that dates back to NT, you get prompted for creds. Since the info is public knowledge in that domain, any user is adequate.
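As a quick illustration (this is my own sketch, not Microsoft tooling), here is a minimal Python snippet that compares a configured NullSessionPipes list against the pipes a stock Windows Server 2008 R2 DC allows anonymously. The default set shown (netlogon, samr, lsarpc) is an assumption for illustration – verify it against your own servers:

```python
# Assumed default NullSessionPipes on a stock Windows Server 2008 R2 DC.
# Confirm against your own builds before relying on this list.
DEFAULT_DC_PIPES = {"netlogon", "samr", "lsarpc"}

def missing_pipes(configured):
    """Return the default pipes absent from a configured NullSessionPipes list."""
    return DEFAULT_DC_PIPES - {pipe.lower() for pipe in configured}

# A server stripped of lsarpc triggers the credential prompt described above.
print(missing_pipes(["netlogon", "samr"]))  # {'lsarpc'}
```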

These are often set through security policies and if you have this issue, look there first.


I’ve also seen it as part of a server image from someone who had too much time on their hands.


DFSN is awesome. What is decidedly not awesome is when the requisite antivirus software absolutely kills client-side performance. What can loyal DFSN evangelists do (short of removing the AV or completely disabling network file scanning) on the client-side to prevent our users from suffering a dreaded antivirus performance hit when using DFS Namespaces?


Sort of a sideways approach, but if you are using Windows 7 clients then Offline Files might be an option. As an experiment with some test computers/users, you can configure:

  • Enable Transparent Caching
  • Configure Background Sync
  • Configure Slow-Link Mode

You could make these computers work as if they are on a “slow network”, working primarily out of their Offline Files cache and trickle synchronizing their data back to the servers in the background continuously.

I specifically call out Windows 7 as Vista doesn’t support all these features, and XP supports none of them. XP is also gross.

Ultimately, you can only bandage things in this scenario. Whaling on your vendor (even if it’s us!) to improve performance is the only thing left. Like beer, they are the cause of - and solution to - all of life’s problems…


I read your previous post here where you talked about how USMT 4.0 migrates computer certificates without private keys. Generally speaking this has not been an issue, as we have certificate auto-enrollment and the new computers get new valid certs. One application is having problems with these migrated invalid certs, though, and we need to block them from migrating. Is that possible?


Yes. While this should be avoided if possible (a machine cert without a private key might still mean something useful to some strange application), it's simple to block computer certificate migration. Here is sample unconditional exclusion XML named skipmachinecerts.xml that you would run only with scanstate.exe (no need for loadstate to run it):

scanstate.exe c:\store /i:migapp.xml /i:migdocs.xml /i:skipmachinecerts.xml

<?xml version="1.0" encoding="UTF-8"?>
<migration urlid="http://www.microsoft.com/migration/1.0/migxmlext/skipmachinecerts">
  <!-- This override XML prevents computer (not user) certificates from migrating. -->
  <!-- This should ONLY be used if machine certs with no private keys are causing issues. -->
  <!-- Nice applications consider these certs invalid and computers request auto-enrollment certs. -->
  <component type="Documents" context="System">
    <displayName>SkipMachineCerts</displayName>
    <role role="Data">
      <rules>
        <unconditionalExclude>
          <objectSet>
            <!-- Targets the machine Personal certificate store; adjust the pattern for your environment. -->
            <pattern type="Registry">HKLM\SOFTWARE\Microsoft\SystemCertificates\MY\* [*]</pattern>
          </objectSet>
        </unconditionalExclude>
      </rules>
    </role>
  </component>
</migration>

You should never block user certificate migration: user certs migrate with their private keys, and if users are securing data like EFS-encrypted files, you would be locking them out of their files. If there’s no DRA, it would be permanent.


What is the event, if any, that is triggered when we perform a D2 on an FRS non-SYSVOL replica set? Is it the same error message we get when we perform it on SYSVOL, but with the new replica set name inserted?


Ha! You wish it were that cool. You get these events (in this order – here I D2’ed just a single custom replica set and did not touch SYSVOL at all):





Some old docs also say you should get a 13565 when you BURFLAG a replica – but you do not unless it’s SYSVOL:


“Oh, but this is a DC” you are saying. Ok. Here’s a member server getting D2’ed:

  1. 13520 like above
  2. 13553 like above
  3. 13554 like above
  4. Done.


We have a server that is part of a simple DFS Namespace and Replication setup. Is there any issue with virtualizing a DFS server, shutting down the old host, and bringing the virtual one online? We would do this during a period of downtime, so data change would be minimal.


That’s pretty much the point of SCVMM so I can’t really say no, can I? :)

The important thing (as always with P2V) is that you do a one-to-one change. You cannot have both servers alive at the same time. This is the risk with tools like disk2vhd.exe and other stuff on the internet, and why SCVMM is less risky – it ensures you don’t shoot yourself in the foot. Once the new DFS server looks like it’s working, destroy the old server so there is no chance it can come back up (format the drive – you got a complete bare-metal-capable backup of it first, right???). To the other servers it would just look like that server was rebooted and reappeared no worse for wear.


We rolled back a DFSR SYSVOL migration (don’t ask). All the DCs rolled back fine except one – an RODC ended up in an inconsistent state. He is the only one that has entries under DFSR-LocalSettings and he is constantly switching between state 5 and 9.

The event logs show:

Log Name:      DFS Replication
Source:        DFSR
Date:          5/5/2011 9:00:00 AM
Event ID:      6016
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
The DFS Replication service failed to update configuration in Active Directory Domain Services. The service will retry this operation periodically.

Additional Information:
Object Category: msDFSR-LocalSettings
Object DN: CN=DFSR-LocalSettings,CN=rodc1,OU=Domain Controllers,DC=contoso,DC=com
Error: 2 (The system cannot find the file specified.)
Domain Controller:

Polling Cycle: 60

I’m not sure of the recommended way to clean it up.


Run on your PDC Emulator DC:

DFSRMIG.EXE /DeleteRoDfsrMember <name of the rodc>

Ensure that AD replication converges to the RODC. Then update the DFSR service with:

DFSRDIAG.EXE POLLAD /mem:<name of the rodc>

As you can see, we planned for this eventuality. :)


Do you have docs on configuring Advanced Audit Policy granular object access for HIPAA, Sarbanes-Oxley, or other US regulatory acts?


Neither the HIPAA nor SOX Acts make any specific mention of actual object access auditing settings in Windows or any OS - only that you must audit… stuff. Your customer should talk to whoever audits them to find out what their (arbitrary) requirements are so they satisfy the audit. There is an entire industry of “compliance” vendors out there that sell solutions and settings recommendations that vary greatly between each company. We even have one, although it wisely makes no mention of HIPAA or Sarbanes and then completely indemnifies itself by saying it’s totally up to the customer to determine the right settings and we have no opinion. I bet our lawyers had a crack at that one :-D.


What is the best method for cleaning out the PreExisting folder? I've done quite a bit of searching, but most of the results are cleaning out the Conflict directory or recovering files from the Pre-Existing folder.


If you don’t care about the files anymore (I recommend you at least back them up), you can delete the files and the PreExistingManifest.xml file. You don’t need to stop the service or anything; once initial sync is done, DFSR no longer cares about those files either. :)
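If you want to script that cleanup, here is a hedged Python sketch. The DfsrPrivate\PreExisting folder and PreExistingManifest.xml names follow the standard DFSR private-folder layout, but the replicated-folder path is yours to supply – and again, back everything up first:

```python
import shutil
from pathlib import Path

def clean_preexisting(replicated_folder):
    """Delete PreExisting content and its manifest under DfsrPrivate.

    Assumes the standard DFSR private-folder layout; back the files up
    first if you may ever want them again.
    """
    private = Path(replicated_folder) / "DfsrPrivate"
    preexisting = private / "PreExisting"
    if preexisting.is_dir():
        shutil.rmtree(preexisting)  # remove the stale pre-existing copies
    manifest = private / "PreExistingManifest.xml"
    if manifest.is_file():
        manifest.unlink()  # remove the matching manifest
```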


When using the netsh.exe command to set the port range for dynamic RPC, what is the minimum number of ports that you recommend be provisioned? We need to set this value for application servers in an Extranet and want to make sure we provision enough ports but satisfy our firewall folks.


There’s no rule; it’s just as many as you find you need with testing. Our recommendation is not to mess with these if you are trying to lower the number of ports open in a firewall, and instead use IPsec tunnels between computers – this means you only have to open a couple of ports and the traffic is protected regardless. Opening “only 500” ports is not much better than the default of many thousands. Go too low and you will cause mysterious random outages that take forever to figure out.

Barring that, I usually recommend first leaving the default and evaluating to see what the usage patterns are – then setting it to match, with maybe a +10% fudge factor for unexpected growth. Then document the heck out of it, because when you’re gone and someone else inherits that system, they are going to be fornicated when problems happen. No one will be expecting that sort of restriction.
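To make the sizing rule concrete, here is a trivial Python sketch of the “observed peak plus ~10% fudge” calculation; the numbers are examples, not recommendations:

```python
import math

def recommended_port_count(observed_peak, fudge_percent=10):
    """Size a restricted RPC dynamic port range: observed peak plus ~10%."""
    # Integer math first, so 500 becomes exactly 550 rather than 551.
    return math.ceil(observed_peak * (100 + fudge_percent) / 100)

# If monitoring showed ~500 concurrent dynamic ports at peak:
print(recommended_port_count(500))  # 550
```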


It’s pretty easy to audit who is starting and stopping services in Windows Server 2003: I just examine the System event log for events 7035 and 7036, sourced to Service Control Manager. The User field shows who stopped or started a service.

But Windows Server 2008 and later don’t do this. Is there a way to audit their services?


Yes. You will need to decide which services you want to audit as there is no simple way to turn it all on for everything, though. You probably only want to know about some specific ones anyway. Who cares that Ned restarted the Zune Wireless service on his laptop?

1. Log on as an administrator; make sure you use an elevated CMD prompt if UAC is on.
2. Run on the affected server:

SC QUERY > Svcs.txt

3. Examine svcs.txt for the “DISPLAY_NAME” of the service that is being restarted.

For example, in my case I looked for “Windows Time” (no quotes) and see:

SERVICE_NAME: W32Time
DISPLAY_NAME: Windows Time
WIN32_EXIT_CODE : 0 (0x0)

4. Above the display name you will see the SERVICE_NAME. Note that for below.
5. Run:

SC SDSHOW <service name> > sd.txt


For example:

SC SDSHOW w32time > sd.txt

6. Open this text file. It will contain SDDL data similar (not necessarily the same as below, do not re-use my example) to this:


7. Copy the following and add it to the end of the SDDL string in that text file:


So if you had used my example SDDL data and then added the above string, you would now have it all on one line:


Note that there is an S: that separates the DACL and SACL sections. If your exported SDDL did not contain an S: you must prepend it to your SACL entry like so:


8. Copy and paste that whole new string and run:

SC SDSET <name of the service> <the big new string>



Note: What we are doing is adding an audit SACL to the service so that when the previous auditing steps I gave you are used, the restart of the service will be audited and we’ll know who did what. Remember that if there was no auditing in place on the service already (after the "S:") then you will need to add that to the string.

9. Enable auditing for the subcategory "Other Object Access Events" for success and "Handle Manipulation" for success.
10. Look for event 4656. Object Server will be "SC Manager", Object Name will be the name of the service, and Access Request Information will show the operation (ex: "Stop the Service").
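The string surgery in steps 6-8 is easy to fumble, so here is a small Python sketch of the append logic. The ACE shown – (AU;SAFA;RPWPDT;;;WD), success/failure auditing of the start (RP), stop (WP), and pause/continue (DT) rights for Everyone – is an example I picked for illustration, not necessarily what your security policy calls for:

```python
def add_service_audit_ace(sddl, ace="(AU;SAFA;RPWPDT;;;WD)"):
    """Append an audit ACE to a service SDDL string's SACL.

    If the string has no S: (SACL) section yet, one is created first,
    per the note above; otherwise the ACE lands in the existing SACL,
    which is always the last section of the string.
    """
    if "S:" in sddl:
        return sddl + ace
    return sddl + "S:" + ace

# Feed the result to: SC SDSET <service> "<new string>"
dacl_only = "D:(A;;CCLCSWRPWPDTLOCRRC;;;SY)"
print(add_service_audit_ace(dacl_only))
```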


Until next time.

- Ned “yes, bwaamp is a technical term here” Pyle

  • I've got a question about GPO, and I know it's not related to this week's topic. If I apply password policy at the domain level (DFL/FFL Windows 2008) and configure block inheritance on one of the OUs, will the password policy still apply to the users in that OU, or will they be excluded? Also, if I previously applied password policy to them, will they be excluded after using block inheritance on the GPO?

    I haven't seen a GPO edition of the Friday Mail Sack (covering cross-forest GPO application, parent-child GPOs, trees in the same or different forests, loopback, block inheritance, different PSOs in Windows 2008) and I wish for one in a coming edition. :) Maybe I'm greedy in my expectations, but we have very little info available on GPOs applied across multiple domains, how they work, and cross-forest GPO best practices.



  • BWAAMP indeed!  ...can you put in a feature request that Event Viewer start making sound effects like that?  Something similar to old school Adam West Batman.  It would give more impact to the events when you open them.

  • Hi Awnish,

    Password policy does not apply to users directly - only to computers. Users of that computer become 'consumers' of password policy. Additionally, domain-based password policy is a special case and must be applied at the domain level itself to take effect.

    So in that case, when you are setting policy on OUs at levels lower than the domain, you are setting policy for the *local* user accounts on *member* computers only. For example, if you created a local user on an XP workstation, that local user would have to follow that policy on that XP computer. Domain users would not be affected.

    It sounds like your needs require Fine Grain Password Policy:

  • Good idea Steve. I found quite a few batman sound themes just now but I was too chicken to download one. :)