Friday Mail Sack: I Don’t Like the Taste of Pants Edition


Hi all, Ned here again. After a few months of talking about Windows Server 2012 to other ‘softies from around the globe, I’m back with the sack. It was great fun – and not over yet, it turns out – but I am finally home for a bit. The only way you don’t know that the next version of Windows is nearly done is if you live in a hobbit hole, so I’ll leave all those breathless announcements to the rest of the internet.

This week we talk DFSR pre-existing data recovery, DFS Namespaces, USMT, the new Server Manager, Offline Files, and group Managed Service Accounts.

Let’s get to it.


I accidentally chose the wrong source replicated folder when setting up DFSR and now I have a few terabytes of data in the preexisting folder. I found your RestoreDfsr script to pull out the intentionally-mangled data, but it’s taking a long time to put everything back. Is there an alternative?


The script just wraps xcopy and makes copies rather than moving files, so it does not scale well once you get into the multi-TB realm (heck, you might even run out of disk space). If it’s reaaaallly slow (compared to another server just copying some similar files), I’d worry that your disk drivers, firmware, SAN or NAS, anti-virus, or third-party backup agents are contributing to the performance issues.


Only the files and folders at the root of the pre-existing RF are mangled and require PreExistingManifest.xml for the heavy lifting; everything deeper than the root is untouched and doesn’t require any special scripts to copy out. Therefore, a quicker fix is to figure out the original folder names at the root by examining the pre-existing manifest file with your eyeballs. Rename the folders to their original names and then use Windows Explorer MOVE (not copy) to move them back into the original folder. That leaves only the mangled files in the root of the pre-existing folder, which you can then restore with the script – presumably with far less data to restore, so the slower xcopy performance no longer matters.
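
If you want to script the eyeball step, here’s a rough PowerShell sketch. The manifest path is a placeholder, and it assumes the layout the RestoreDfsr script consumes – <Resource> entries holding the original <Path> and the mangled <NewName> – so check it against your actual PreExistingManifest.xml before trusting it:

```powershell
# Sketch: list mangled-name -> original-name pairs from the manifest,
# so you can rename and MOVE the root-level folders by hand.
# Assumes <Resource> entries with <Path> and <NewName> child elements.
[xml]$manifest = Get-Content 'D:\Data\DfsrPrivate\PreExistingManifest.xml'

$manifest.SelectNodes('//Resource') | ForEach-Object {
    New-Object PSObject -Property @{
        MangledName  = $_.NewName
        # Last component of the original path is the original name
        OriginalName = Split-Path -Leaf $_.Path
    }
}
```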


When I run dfsutil diag viewdfsdirs c: verbose on this Win2008 R2 server, I see errors like this:

Unable to open file by ID
Unable to open file by ID

This volume (C:\) contains 5 DFS Directories.
Done processing this command.

What is the ID in the error? How can I tell which two folders it’s missing?


Dfsutil.exe uses the reparse point index to find DFS links on a volume.


Due to some error, dfsutil.exe failed to open some of them. We definitely need a better error message here – one that tells you the return code and the failed path. Sorry.

First, look in c:\dfsroots. The two link folders missing from the output above are probably in there. If they are not in c:\dfsroots at all, use:

DIR c:\ /S /AL

That returns all reparse points on the volume. Any besides the default ones (in user profiles, ProgramData, and SYSVOL) are probably your bad guys. You’d want to make sure they still show up correctly in fsutil, that chkdsk reports no errors, that they have not been intercepted by some wacky third party, and so on.
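
If you prefer PowerShell for the hunt, this rough equivalent of DIR /S /AL checks the ReparsePoint attribute bit directly (it avoids the newer -Attributes parameter, so it works on down-level PowerShell too):

```powershell
# Sketch: find every reparse point on the volume, DIR /S /AL style.
# Junctions, symlinks, and DFS link folders all carry this attribute.
Get-ChildItem -Path C:\ -Recurse -Force -ErrorAction SilentlyContinue |
    Where-Object { $_.Attributes -band [IO.FileAttributes]::ReparsePoint } |
    Select-Object FullName, Attributes
```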

You can also use (if you have later OSes):

Dfsdiag /testreferral /dfspath:\\&lt;domain&gt;\&lt;namespace&gt; /full > output.txt


I am using USMT 4.0 to migrate users that are members of the Administrators group and using a config.xml to make those users only be members of the Users group on the destination computer.  I am running these USMT scripts as the users themselves, so they are already administrators on both the source and destination computer when scanstate and loadstate run.

I am finding that the users are still members of administrators after loadstate. Am I doing something wrong or does this XML not work?



<Configuration>
  <ProfileControl>
    <localGroups>
      <mappings>
        <changeGroup from="administrators" to="Users" appliesTo="MigratedUsers">
          <include>
            <pattern>*</pattern>
          </include>
        </changeGroup>
      </mappings>
    </localGroups>
  </ProfileControl>
</Configuration>


Long answer, deep breath:

1. USMT 4.0 requires that the user running loadstate.exe is a member of the built-in Administrators group and holds privileges SeBackupPrivilege, SeRestorePrivilege, SeTakeOwnershipPrivilege, SeDebugPrivilege, SeSecurityPrivilege.

2. It is not a best practice that you log on as the end user being migrated or that end users run their own migrations:

  • From a security perspective, it’s bad if USMT migration users have to know the end user’s domain password.
  • From a USMT perspective, it’s bad because the end user’s more complex profile and settings are more likely to be in use and fight the migration, unlike a simple migration user that exists only to run USMT.
  • If end users run it themselves, it’s bad because they have no way to tell whether USMT is working correctly.
  • Therefore, you should always use separate migration user accounts.

It’s easy to misinterpret the results of using this XML, though. It is not retroactive – if the group memberships already exist on the destination before running loadstate, USMT does not alter them. USMT is designed to copy, skip, or manipulate source groups, not to destroy existing destination groups.

Since your design requires destination administrator group memberships before running loadstate, this XML cannot work as you desire. If you switch to using separate migration accounts, MDT, or SCCM, then it will work correctly.


I am using the new Server Manager in Windows Server 2012 Release Candidate to manage the remote machines in my environment. If I right-click a remote server, I see a list of management tools for the roles I have installed. When I run GUI tools like LDP or Computer Management, they target the remote servers automatically. However, the command-line tools just show their help. Is this intentional, or should they run in the context of the remote server?



This is by design. All of the command-line tools run in this fashion, even when they support remote servers (like repadmin or dcdiag) and even when targeting the local server. We can’t get into a design that deals out a million command-line arguments – imagine trying to provide menus to support all the various remote scenarios with NETDOM, for example. :-D

Oy vey

Since providing a remote server name alone isn’t enough to make most tools work – dcdiag alone has several dozen other arguments – we just went with “let’s get the admin a prompt and some help, and let them have at it; they’re IT pros and smart”.

If you haven’t used Server Manager yet, get to it. It’s a great tool that I find myself missing in my Win200L environments.

The “L” is for legacy. YeeaaaahhinyourfaceolderproductsthatIstillsupport!!!


Does USMT 4.0 migrate the Offline Files cache from Windows XP to Windows 7? My testing indicates no, but I find articles implying it should work.


Unfortunately not. Through an oversight, the migration manifest and down-level plugin DLL were never included. The rules of USMT 4.0 are:

  • USMT 4.0 does not migrate CSC settings and the "dirty" (unsynchronized to server) file cache from Windows XP source computers
  • USMT 4.0 does migrate CSC settings and the "dirty" (unsynchronized to server) file cache from Windows Vista and Windows 7 source computers

In order to migrate the CSC dirty cache, USMT needs plugin DLLs provided by Offline Files. The Offline Files changes from XP to Windows 7 were huge, but even Win7 to Win7 and Vista to Win7 need the plugins for the path conversions.

To work around this issue, just ensure that users manually synchronize so that all offline files are up to date on the file server. Then migrate.

If you are migrating from Vista (why?! it’s great!!!) or Windows 7 and you want to grab the entire cache of dirty and synchronized files, you can set a registry DWORD value to force the cscmig.dll plugin to grab everything:

MigrationParameters = 1
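
For reference, this is a per-machine registry value on the source computer. From my recollection of the Offline Files migration documentation it lives under the NetCache key – verify the path against the current USMT docs before deploying – e.g. as a .reg fragment:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\NetCache]
"MigrationParameters"=dword:00000001
```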

This is rather silly in a LAN environment, as it will take a long time and increase the size of your compressed store for little reason; these files are going to sync back to the user anyway after the migration. Maybe useful for remote users though; your call.


I'm using Windows Server 2012 Release Candidate and I'm trying to create a new Managed Service Account. I'm running the following from an elevated PS Active Directory module:

New-ADServiceAccount -name Goo

The command fails with error:

New-ADServiceAccount : Key does not exist


There are new steps required for managed service accounts in Win2012. We created an object class called a group Managed Service Account (gMSA). GMSA supersedes the previous standalone Managed Service Account (sMSA) functionality introduced in Windows Server 2008 R2. It adds support for multiple computers to share the same service account with the same password automatically generated by a domain controller. This makes server farms using MSAs possible for the first time. SQL 2012, Win2012 IIS app pools, scheduled tasks, custom services, etc. all understand it. It’s very slick.

The new Microsoft Key Distribution Service on Win2012 DCs provides the mechanism to obtain the latest specific key for an Active Directory account. The KDS also continuously creates epoch keys for use with any accounts created in that epoch period. For a gMSA, the domain controller computes the password from the keys provided by the KDS, in addition to other attributes of the gMSA. Administrator-authorized Windows Server 2012 and Windows 8 member hosts obtain the password values by contacting a Win2012 DC through Netlogon and cache the password for use by applications. The administrator never knows the password and it’s as secure as it can be – just like a computer account password, it’s the maximum 240 bytes of randomly generated goo, changed every 30 days by default. The new KDS is used for other features besides gMSA as well.

And this finally brings me to your issue – you have to first create the root key, using the Add-KdsRootKey cmdlet. This root key is then used as part of all the subsequent gMSA work.
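
As a minimal sketch of the whole sequence on a Win2012 DC (the account name comes from your question; the DNS host name and the group of authorized hosts are placeholders, and the backdated EffectiveTime trick is for isolated labs only, where you can’t wait the ~10 hours for the key to replicate):

```powershell
# One time per forest: create the KDS root key.
Add-KdsRootKey -EffectiveImmediately

# Lab-only shortcut: backdate the key so it is usable right away.
# Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Create the gMSA and authorize member hosts to retrieve its password.
New-ADServiceAccount -Name Goo -DNSHostName goo.contoso.com `
    -PrincipalsAllowedToRetrieveManagedPassword "GooFarmServers"

# On an authorized Win2012 / Win8 member host:
Install-ADServiceAccount Goo
Test-ADServiceAccount Goo
```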

If you want to see some preliminary step-by-step documentation, check out Getting Started with Group Managed Service Accounts. I’ll be coming back to this new feature in detail after we RTM Windows Server 2012.


What does the /disabledirectoryverification option do in DFSRadmin.exe membership new?


If the folder you specified for the RF with /localpath does not exist when you run membership new, dfsradmin.exe creates it for you with the correct permissions by default. If it does exist already, dfsradmin.exe modifies the existing permissions to let the DFSR service use that folder. If the folder already exists with the correct permissions, it does nothing. Using this argument prevents all of these convenient actions.

You would use this argument mainly if you were a crazy person.

Other Stuff

I didn’t get to go to Comic-Con this year, but thanks to the InterWaste, you can at least see some of the footage sans the special trailers. The best ones I’ve found are… well, pretty obvious.

There are often some heavy-duty spoilers in these panels – no whining if you find out that Superman is from Krypton and Batman’s parents were killed by the Joker. They also get a little sweary sometimes.

Naturally, if Comic-Con is your thing, you need your fill of cosplay.

This is how David Fisher sees himself when he looks in the mirror.

Via tumblr

It’s summer, which means good reading. IO9 has their annual list of the best Sci-Fi and Fantasy to check out. Redshirts was pretty good and I am starting Ready Player One shortly as no one can shut up about it (Ernest Cline will give you a Delorean if you are an ultrageek). Charles Stross is an automatic if you’re in IT or a British person; in that vein I recently enjoyed The Rook and am revisiting William Gibson this month.

And finally:

Hey, look it’s Ned in a restaurant, grab a picture!


And look, he’s with Satan!


Until next time,

- Ned “I always feel like someone’s watching me” Pyle

  • BTW, if you really want to run some command-line tools on a remote server, you should definitely try a Windows PowerShell remoting session. It’s worth a try, but beware that old-school interactive utilities (like diskpart, netsh, or heck, ntdsutil) won’t work.