
February 2009

  • Understanding (the Lack of) Distributed File Locking in DFSR

    Ned here again. Today’s post is probably going to generate some interesting comments. I’m going to discuss the absence of a multi-host distributed file locking mechanism within Windows, and specifically within folders replicated by DFSR.

    Some Background

    • Distributed File Locking – this refers to the concept of having multiple copies of a file on several computers and when one file is opened for writing, all other copies are locked. This prevents a file from being modified on multiple servers at the same time by several users.
    • Distributed File System Replication – DFSR operates in a multi-master, state-based design. In state-based replication, each server in the multi-master system applies updates to its replica as they arrive, without exchanging log files (it instead uses version vectors to maintain “up-to-dateness” information). No one server is ever arbitrarily authoritative after initial sync, so the system is highly available and very flexible across various network topologies.
    • Server Message Block - SMB is the common protocol used in Windows for accessing files over the network. In simplified terms, it’s a client-server protocol that makes use of a redirector to have remote file systems appear to be local file systems. It is not specific to Windows and is quite common – a well known non-Microsoft example is Samba, which allows Linux, Mac, and other operating systems to act as SMB clients/servers and participate in Windows networks.
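    The “version vector” bookkeeping mentioned above can be sketched roughly like this (a toy illustration in Python; the class and method names are mine, not DFSR's):

```python
class VersionVector:
    """Toy version vector (illustrative; not DFSR's actual format)."""

    def __init__(self):
        # Maps a server ID to the highest update sequence number
        # this replica has seen originate from that server.
        self.seen = {}

    def record(self, server_id, version):
        self.seen[server_id] = max(self.seen.get(server_id, 0), version)

    def missing_from(self, other):
        """Updates 'other' has seen that this replica has not."""
        return {s: v for s, v in other.seen.items()
                if self.seen.get(s, 0) < v}


a = VersionVector()
a.record("A", 5)          # server A has made updates 1..5

b = VersionVector()
b.record("B", 3)          # server B has made updates 1..3
b.record("A", 4)          # ...and has replicated A's updates up to 4

print(a.missing_from(b))  # {'B': 3}  -> A still needs B's changes
print(b.missing_from(a))  # {'A': 5}  -> B still needs A's update 5
```

    By exchanging these small summaries instead of logs, two partners can work out exactly which updates to replicate.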

    It’s important to make a clear delineation of where DFSR and SMB live in your replicated data environment. SMB allows users to access their files, and it has no awareness of DFSR. Likewise, DFSR (using the RPC protocol) keeps files in sync between servers and has no awareness of SMB. Don’t confuse distributed locking as defined in this post and Opportunistic Locking.

    So here’s where things can go pear-shaped, as the Brits say.

    Since users can modify data on multiple servers, and since each Windows server only knows about a file lock on itself, and since DFSR doesn’t know anything about those locks on other servers, it becomes possible for users to overwrite each other’s changes. DFSR uses a “last writer wins” conflict algorithm, so someone has to lose and the person to save last gets to keep their changes. The losing file copy is chucked into the ConflictAndDeleted folder.
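    A minimal sketch of “last writer wins” (illustrative only; real DFSR compares richer file metadata than a single timestamp):

```python
from datetime import datetime

# Toy "last writer wins" resolution: the copy with the latest write time
# is kept, and the loser lands in a stand-in for ConflictAndDeleted.

def resolve_conflict(copy_a, copy_b):
    """Each copy is (server, last_write_time); returns (winner, loser)."""
    winner, loser = sorted((copy_a, copy_b), key=lambda c: c[1], reverse=True)
    return winner, loser

conflict_and_deleted = []
winner, loser = resolve_conflict(
    ("BRANCH-01", datetime(2009, 2, 10, 9, 15)),   # saved first
    ("BRANCH-02", datetime(2009, 2, 10, 9, 42)),   # saved last -> wins
)
conflict_and_deleted.append(loser)

print("kept:", winner[0])        # kept: BRANCH-02
print("discarded:", loser[0])    # discarded: BRANCH-01
```
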

    Now, this is far less common than people like to believe. Typically, true shared files are modified in a local environment; in the branch office or in the same row of cubicles. They are usually worked on by people on the same team, so people are generally aware of colleagues modifying data. And since they are usually in the same site, the odds are much higher that all the users working on a shared doc will be using the same server. Windows SMB handles the situation here. When a user has a file locked for modification and his coworker tries to edit it, the other user will get an error like:


    And if the application opening the file is really clever, like Word 2007, it might give you:


    DFSR does have a mechanism for locked files, but it is only within the server’s own context. As I’ve discussed in a previous post, DFSR will not replicate a file in or out if its local copy has an exclusive lock. But this doesn’t prevent anyone on another server from modifying the file.

    Back on topic, the issue of shared data being modified geographically does exist, and for some folks it’s pretty gnarly. We’re occasionally asked why DFSR doesn’t handle this locking and take care of everything with a wave of the magic wand. It turns out this is an interesting and difficult scenario to solve for a multi-master replication system. Let’s explore.

    Third-Party Solutions

    There are some vendor solutions that take on this problem, which they typically tackle through one or more of the following methods*:

    • Use of a broker mechanism

    Having a central ‘traffic cop’ allows one server to be aware of all the other servers and which files they have locked by users. Unfortunately this also means that there is often a single point of failure in the distributed locking system.


    • Requirement for a fully routed network

    Since a central broker must be able to talk to all servers participating in file replication, this removes the ability to handle complex network topologies. Ring topologies and multi-hub-and-spoke topologies are not usually possible. In a non-fully routed network, some servers may not be able to directly contact each other or a broker, and can only talk to a partner that can in turn talk to another server – and so on. This is fine in a multi-master environment, but not with a brokering mechanism.


    • Are limited to a pair of servers

    Some solutions limit the topology to a pair of servers in order to simplify their distributed locking mechanism. For larger environments this may not be feasible.

    • Make use of agents on clients and servers
    • Do not use multi-master replication
    • Do not make use of MS clustering
    • Make use of specialty appliances

    * Note that I say typically! Please do not post death threats because you have a solution that does/does not implement one or more of those methods!

    Deeper Thoughts

    As you think further about this issue, some fundamental issues start to crop up. For example, if we have four servers with data that can be modified by users in four sites, and the WAN connection to one of them goes offline, what do we do? The users can still access their individual servers – but should we let them? We don’t want them to make changes that conflict, but we definitely want them to keep working and making our company money. If we arbitrarily block changes at that point, no users can work even though there may not actually be any conflicts happening! There’s no way to tell the other servers that the file is in use and you’re back at square one.


    Then there’s SMB itself and the error handling of reporting locks. We can’t really change how SMB reports sharing violations, as we’d break a ton of applications, and clients wouldn’t understand new extended error messages anyway. Applications like Word 2007 do some undercover trickery to figure out who is locking files, but the vast majority of applications don’t know who has a file in use (or even that SMB exists. Really.). So when a user gets the message ‘This file is in use’ it’s not particularly actionable – should they all call the help desk? Does the help desk have access to all the file servers to see which users are accessing files? Messy.

    Since we want multi-master for high availability, a broker system is less desirable; we might need to have something running on all servers that allows them all to communicate even through non-fully routed networks. This will require very complex synchronization techniques. It will add some overhead on the network (although probably not much) and it will need to be lightning fast to make sure that we are not holding up the user in their work; it needs to outrun file replication itself - in fact, it might need to actually be tied to replication somehow. It will also have to somehow distinguish outages that are network-related from actual server crashes.


    And then we’re back to special client software for this scenario that better understands the locks and can give the user some useful info (“Go call Susie in accounting and tell her to release that doc”, “Sorry, the file locking topology is broken and your administrator is preventing you from opening this file until it’s fixed”, etc). Getting this to play nicely with the millions of applications running in Windows will definitely be interesting. There are plenty of OS’s that would not be supported or get the software – Windows 2000 is out of mainstream support and XP soon will be. Linux and Mac clients wouldn’t have this software until they felt it was important, so the customer would have to hope their vendors made something analogous.

    The Big Finish

    Right now the easiest way to control this situation in DFSR is to use DFS Namespaces to guide users to predictable locations, with a consistent namespace. By correctly configuring your DFSN site topology and server links, you force users to all share the same local server and only allow them to access remote computers when their ‘main’ server is down. For most environments, this works quite well. As an alternative to DFSR, SharePoint is an option because of its check-out/check-in system. BranchCache (coming in Windows Server 2008 R2 and Windows 7) may be an option for you as it is designed for easing the reading of files in a branch scenario, but in the end the authoritative data will still live on one server only – more on this here. And again, those vendors have their solutions.

    We’ve heard you loud and clear on the distributed locking mechanism though, and just because it’s a difficult task does not mean we’re not going to try to tackle it. You can feel free to discuss third party solutions in our comments section, but keep in mind that I cannot recommend any for legal reasons. Plus I’d love to hear your brainstorms – it’s a fun geeky topic to discuss, if you’re into this kind of stuff.

    - Ned ‘Be Gentle!’ Pyle

  • Machine Account Password Process


    Hi, this is Manish Singh from the Directory Services team and I am going to talk about the machine account password process. Ever wondered what goes on with your machine account in Active Directory? Here is a brief set of question and answers to clear things up.


    How often does the machine account password change in AD (and is it different for various Windows operating systems)?


    The machine account password change is initiated by the computer every 30 days by default. Since Windows 2000, all versions of Windows have used the same value. This behavior can be modified to a custom value using the following group policy setting in Active Directory.

    Domain member: Maximum machine account password age
    You can configure this security setting by opening the appropriate policy and expanding the console tree as such:

    Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options


    If a workstation does not change its password, will it not be allowed to log onto the network?        


    Machine account passwords as such do not expire in Active Directory; they are exempt from the domain's password policy. It is important to remember that machine account password changes are driven by the CLIENT (the computer), not by AD. As long as no one has disabled or deleted the computer account, or tried to add a computer with the same name to the domain (or taken some other destructive action), the computer will continue to work no matter how long it has been since its machine account password was last changed.

    So if a computer is turned off for three months, nothing expires. When the computer starts up, it will notice that its password is older than 30 days and will initiate action to change it; the Netlogon service on the client computer is responsible for this.
    Before setting the new password locally, the client ensures it has a valid secure channel to a DC. If the client has never been able to connect to a DC (that is, at no point before the attempt has it been able to refresh the secure channel), it will not change the password locally.

    The relevant Netlogon parameters that come into play and we can think about changing here are:

    • ScavengeInterval (default 15 minutes)
    • MaximumPasswordAge (default 30 days)
    • DisablePasswordChange (default off)

    DisablePasswordChange would prevent the client computer from changing its computer account password.

    Key = HKLM\SYSTEM\CurrentControlSet\Services\NetLogon\Parameters
    Value = DisablePasswordChange REG_DWORD
    Default = 0

    Group policy setting:
    Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options
    Domain member: Disable machine account password changes

    Warning: If you disable machine account password changes, there are security risks because the secure channel is used for pass-through authentication. If someone discovers a password, they can potentially perform pass-through authentication to the domain controller. Here is the article that discusses disabling automatic machine account password changes: KB 154501

    ScavengeInterval controls how often the workstation scavenger thread runs - the workstation scavenger is responsible for changing the machine password if necessary:

    Key = HKLM\SYSTEM\CurrentControlSet\Services\NetLogon\Parameters
    Value = ScavengeInterval REG_DWORD
    Default = 900 (15 minutes)
    Range = 60 to 172,800 seconds (48 hours)

    MaximumPasswordAge determines when the computer password needs to be changed.

    Key = HKLM\SYSTEM\CurrentControlSet\Services\NetLogon\Parameters
    Value = MaximumPasswordAge REG_DWORD
    Default = 30
    Range = 1 to 1,000,000 (in days)

    Group policy setting:
    Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options
    Domain member: Maximum machine account password age

    To clear things up: the default is 7 days on Windows NT and 30 days on Windows 2000 and up. Trust passwords follow the same setting, so a trust between two NT 4.0 domains is 7 days, and a trust between Windows 2000 (or later) and anything else is 30 days. What this means is:

    • 2000 and NT4 trust password is 30 days
    • 2000 to 2000 is 30 days
    • 2000 to 2003 is 30 days
    • 2003 to 2003 is 30 days

    After the Netlogon service starts, the Workstation service scavenger thread wakes up. If the password is not older than MaximumPasswordAge, the scavenger thread goes back to sleep and sets itself to wake up when the password will reach that age. Otherwise, the scavenger thread will attempt to change the password. If it cannot talk to a DC, it will go back to sleep and try again once the ScavengeInterval has elapsed.
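    The scavenger-thread decision described above can be sketched like this (an illustration; the real logic lives inside Netlogon):

```python
from datetime import datetime, timedelta

# Toy model of the scavenger-thread decision; names and structure are mine.

MAXIMUM_PASSWORD_AGE = timedelta(days=30)
SCAVENGE_INTERVAL = timedelta(seconds=900)    # default: 900 seconds (15 min)

def next_action(password_set, now, dc_reachable):
    age = now - password_set
    if age < MAXIMUM_PASSWORD_AGE:
        # Not due yet: sleep until the password reaches MaximumPasswordAge.
        return ("sleep", MAXIMUM_PASSWORD_AGE - age)
    if dc_reachable:
        return ("change_password", timedelta(0))
    # Due, but no DC available: go back to sleep for one scavenge interval.
    return ("retry", SCAVENGE_INTERVAL)

now = datetime(2009, 2, 1)
print(next_action(now - timedelta(days=10), now, True))    # sleeps 20 more days
print(next_action(now - timedelta(days=45), now, True))    # changes the password
print(next_action(now - timedelta(days=45), now, False))   # retries in 15 minutes
```
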

    The ScavengeInterval setting can be modified to a custom value using the group policy setting in Active Directory.

    Group policy setting:
    Computer Configuration\Administrative Templates\System\Netlogon\Scavenge Interval

    Here is some further clarification regarding the behavior described above:


    How do computers actually use passwords?


    Each Windows-based computer maintains a machine account password history containing the current and previous passwords used for the account. When two computers attempt to authenticate with each other and a change to the current password has not yet been received, Windows relies on the previous password. If the sequence of password changes exceeds two changes, the computers involved may be unable to communicate, and you may receive error messages.

    When a client determines that the machine account password needs to be changed, the client first changes the password locally and then attempts to update it on a domain controller for the domain of which it is a member. If the domain controller is configured with the security policy "Domain Controller: Refuse machine account password changes" (i.e. RefusePasswordChange, see here and here), the client rolls back locally to the previous password. If the update simply fails, however, the client keeps the new password locally and keeps trying to set it in Active Directory on the scavenge interval until it succeeds.
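    The change-then-update flow with its rollback and retry cases can be sketched as follows (a toy model; the DCResult names are mine, not a Windows API):

```python
from enum import Enum

# Toy model of the client-side machine password change flow.

class DCResult(Enum):
    OK = "ok"               # DC accepted the new password
    REFUSED = "refused"     # "Refuse machine account password changes" policy
    FAILED = "failed"       # e.g. DC unreachable

def change_machine_password(local, new_password, dc_result):
    """local holds 'current' and 'old' passwords; returns what happened."""
    snapshot = dict(local)
    # Step 1: set the new password locally first.
    local["old"], local["current"] = local["current"], new_password
    # Step 2: try to set it on a domain controller.
    if dc_result is DCResult.OK:
        return "changed"
    if dc_result is DCResult.REFUSED:
        # The DC's policy refused the change: roll back the local copy.
        local.clear()
        local.update(snapshot)
        return "rolled back"
    # A plain failure: keep the new password locally and keep retrying
    # on the scavenge interval until it succeeds.
    return "kept locally, will retry"

acct = {"current": "A", "old": None}
print(change_machine_password(acct, "B", DCResult.OK), acct)
```
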

    The local copy of the machine password is stored under:


    We store the current password and the previous password under the CurrVal and OldVal keys respectively. In Active Directory, we store the password in unicodePwd and lmPwdHistory. We also store the timestamp in the pwdLastSet attribute; the method to convert it into a readable format is:

    • Convert the value in the attribute from decimal to hex (using calc.exe)
    • Split the result into two equal parts (8 hex characters for each part)
    • Run nltest /time: rightsidehex leftsidehex
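    The same conversion can be done programmatically; pwdLastSet is a 64-bit FILETIME, i.e. the number of 100-nanosecond intervals since January 1, 1601 (UTC), and the calc.exe/nltest steps above just split that number into its low and high 32-bit halves. A sketch in Python (the sample value is hypothetical):

```python
from datetime import datetime, timedelta, timezone

# pwdLastSet is a 64-bit FILETIME: 100-nanosecond intervals since
# January 1, 1601 (UTC).

def pwdlastset_to_datetime(value):
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(
        microseconds=value // 10)

def nltest_halves(value):
    """The two hex halves nltest /time expects: (low 32 bits, high 32 bits)."""
    return hex(value & 0xFFFFFFFF), hex(value >> 32)

ts = 128482574290249890          # hypothetical sample value
print(pwdlastset_to_datetime(ts))    # a date in February 2008
print(nltest_halves(ts))
```
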

    The resultant value is the date and time the password was set on this computer object in AD. A case where you could run into the problem that KB 260575 describes: if you use System Restore after the password change interval has expired and restore the computer to a point before the password change, the next password change may not occur when it is due, because the operating system treats the restore as if the password had been changed.

    Now consider the scenario, when a machine is not connected to the network for a long period. Supposing on the client:

    • Old password = null
    • Current password = A
    • New random password = B

    And on the machine account in AD:

    • unicodePWD = A

    After 30 days when the Scavenger thread runs, the value would be:

    • Old Password = A
    • Current Password = B

    At 60th day the same process happens again. So now the newly generated password is C and the values are:

    • Old password = B
    • Current Password = C

    Now when the client connects to AD, it will try the current password to authenticate, and if that fails it will fall back to the old password. In practice, though, because the client does not change its password locally without a valid secure channel to a DC (see above), the machine should still be able to reset its password once it boots – even after, say, 90 days.
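    The current/previous password fallback can be sketched like this (illustrative only):

```python
# Toy model of the fallback: authentication succeeds if the presented
# password matches either the current or the immediately previous one,
# so one missed change is tolerated but two are not.

def dc_accepts(presented, current, previous):
    return presented in (current, previous)

# One change behind: AD still holds A; the client keeps A as 'previous'.
print(dc_accepts("A", current="B", previous="A"))    # True

# Two changes behind: the client has rolled on to C (previous is B).
print(dc_accepts("A", current="C", previous="B"))    # False
```
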

    Further reading:

    How to detect and remove inactive machine accounts (KB 197478)

    How to disable automatic machine account password changes (KB 154501)

    Effects of machine account replication on a domain (KB 175468)

    Domain member: Disable machine account password changes

    Domain member: Maximum machine account password age

    Account Passwords and Policies

    -       Manish Singh

    [Updated with a strict clarification of the client local password rollback behavior 7-19-2012; thanks for asking, Joe - Ned]

  • Headache Prevention: Install Hotfix 953317 to Prevent DNS Records from Disappearing from Secondary DNS Zones on Windows Server 2008 SP1

    Craig here. We’ve had some nasty cases related to this bug, so it seemed prudent to do our best to increase the awareness of this issue. In a nutshell, the DNS Server service in Windows Server 2008 has a bug that can result in a large number of DNS records disappearing. When those records go missing, you will start seeing problems with anything that depends on name resolution, which in an Active Directory environment is pretty much everything. Note this hotfix only applies to standard secondary zones. Active Directory-integrated zones are not affected by this issue because they use AD replication, not zone transfers, to stay synchronized.

    For this reason, we recommend that you take a look at the following KB article and consider applying the hotfix to your environment.

    953317 A primary DNS zone file may not transfer to the secondary DNS servers in Windows Server 2008

    If you are hitting this issue, you may see the following event logged:

    Event Type: Error
    Event Source: DNS
    Event Category: None
    Event ID: 6527
    Date: 8/21/2008
    Time: 3:20:34 PM
    User: N/A
    Description: Zone expired before it could obtain a successful zone transfer or update from a master server acting as its source for the zone. The zone has been shut down.

    The problem is specific to Windows Server 2008 SP1 (meaning the original release of Windows Server 2008). The 953317 hotfix version of DNS.EXE is 6.0.6001.22218. The problem may occur on both secondary DNS servers that were upgraded from Windows Server 2003 and also new installs of Windows Server 2008. For this issue to reproduce, a master server must be hit with enough changes that it cannot service an IXFR request, and so will respond to IXFR with an AXFR.

    What you will see is that most of the records in the DNS zone will appear to have disappeared, expired, or been deleted. The zone itself continues to exist but virtually all records in the zone are deleted except for the Start of Authority (SOA) records. Often a handful of host “A” records will also remain present in the zone.

    Because DNS servers affected by this condition continue to host a copy of the zone, they will continue to respond to queries from clients. The typical response returned by DNS servers with deleted zone contents is that the records queried do not exist in the zone (this assumes that the DNS server role is otherwise functional). Windows clients will continue to direct queries to responsive DNS servers instead of failing over to an alternate DNS server that hosts a complete copy of the zone.

    Keywords: Windows Server 2008 secondary master primary zone transfer zone axfr ixfr incremental zone transfer full zone transfer delete deleted disappear disappeared missing expired expire

    - Craig Landis

  • Renaming a computer using WMIC and how to get around that aggravating “Invalid Global Switch” error

    I’m guessing Ned will insert something like “Hello, Warren here again”. [Nope – Ned]

    The other day I was working with Server Core and wanted to rename my server. Recently I have been working more with WMIC to get data from the DFSR service, so I thought, why not try WMIC?

    Renaming a computer with WMIC

    To rename a computer we can use the “Rename” method of the “Win32_ComputerSystem” WMI class. The command looks like this:

    WMIC ComputerSystem where Name=COMPUTERNAME call Rename Name=NewName

    Don't forget to reboot as you will not be prompted to restart the system.

    Netdom can do the same thing

    Don’t forget that you can also rename a computer using Netdom. You may find the Netdom method more user-friendly if you are just renaming one system. Using WMIC may come in handy if you need to script the renaming of several machines or just want to show off your “skills” to coworkers.

    325354 How To Use the Netdom.exe Utility to Rename a Computer in Windows Server 2003

    Netdom is part of the Support Tools for Windows 2003. In Windows 2008 it is included in the OS install.

    “Invalid Global Switch” error

    I noticed while renaming computers with WMIC that I would sometimes get an “Invalid Global Switch” error. After digging a bit, I found that WMIC does not handle special characters in unquoted values. If there are dashes in any of the names used, the command will fail unless the names are enclosed in quotes. It just so happens I always use dashes in my computer names. So if the command was formatted as shown below, it would fail:

    WMIC ComputerSystem where Name=COMPUTER-NAME call Rename Name=NewName

    To get this command to run, you would need to enclose the name in quotes:

    WMIC ComputerSystem where Name="COMPUTER-NAME" call Rename Name=NewName

    This command also takes environment variables, so if you have been using SYSPREP and have computer names that are gross, you can save time by using the %computername% variable:

    WMIC ComputerSystem where Name="%computername%" call Rename Name=NewName
    Happy computer renaming!

    - Warren Allen Williams

  • New Directory Services KB Articles 2/14-2/21

    New KB articles related to Directory Services for the week of 2/14-2/21.


    A Windows Server 2003-based computer becomes unresponsive when a high volume of traffic runs through a network adapter that has a large bandwidth


    A Windows Server 2003-based or Windows Server 2008-based terminal server stops accepting new connections, and existing connections stop responding


    During user logon or logoff, you receive stop error code 0x00000050, and the system restarts automatically on a computer that is running Windows Server 2008 or Windows Vista SP1


    Error message when you create a RODC IFM or RODC Sysvol IFM on a Windows Server 2008-based domain controller


    Windows Vista and Windows Server 2008 do not correctly audit all the privilege use events


    A Terminal Server smartcard logon using RDP 6.0 may fail with error code 0x507


  • Are you backing up ADAM?

    Hi, it's Adam Conkle from the Microsoft Directory Services team. I recently encountered a disaster recovery situation with an ADAM instance where objects were accidentally deleted, and the customer required help performing the restore of those objects from backup. We ran into a major problem that I feel many administrators may be overlooking, so I thought I should write about it.

    When I began working with the customer, she had performed a restore of the Microsoft ADAM directory to an alternate location using her enterprise 3rd party backup software. She then showed me the contents of the restore directory, and I saw that neither the database (adamntds.dit) nor the transaction log (edb.log) was present. She stated that their backup job was configured to back up the entire Microsoft ADAM directory, and she was concerned about why the database and transaction log were not included in the restore.

    Here are some things to think about:

    • Some 3rd party backup applications are described as being "Active Directory aware". Being Active Directory aware means that the software knows how to deal with System State data, which allows you to create backup jobs that specifically target the System State. The System State includes the Active Directory database as well as various other data. The data included in System State for Windows 2000/2003 is described in more detail here.
    • When the System State is backed up, we utilize the Volume Shadow Copy Service (VSS) to take a snapshot (shadow copy) of the data while it remains online so that system activity is not interrupted. We take a backup of this snapshot instead of the live data to be sure that we are backing up valid data. VSS is described in this overview.
    • The contents of the Microsoft ADAM directory are not included in a System State backup. Since the ADAM database is an online database we still need to rely on VSS to take a valid backup of the entire directory. When an ADAM instance is installed, a new registry value is created here:


    Value name: ADAM (<instance_name>) Writer

    Value Data: <path_to_adamntds.dit>


    This registry value tells NTBackup not to back up these files as part of the normal file backup. Instead, NTBackup will skip these files during the normal file backup and then invoke VSS to take a snapshot of them so that a proper backup can be taken.

    In NTBackup, if you select the entire Microsoft ADAM directory for backup and then drill down to the data directory you can see that the database and transaction logs are not checked. This tends to be confusing because one would assume that this means the files will not be backed up. Rest assured that as long as your VSS writer is functional and the registry value described above is in place, the files will be captured in your backup.
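    The skip-and-hand-to-VSS behavior described above can be sketched as follows (a toy model; the real FilesNotToBackup entries live under HKLM\SYSTEM\CurrentControlSet\Control\BackupRestore, and the paths and patterns below are made up for illustration):

```python
from fnmatch import fnmatch

# Toy model: during the plain file pass, a backup tool that honors the
# FilesNotToBackup registry key skips matching files and captures them
# through a VSS snapshot instead.

def file_backup_split(all_files, not_to_backup_patterns):
    """Return (files backed up normally, files left to the VSS writer)."""
    normal, skipped = [], []
    for f in all_files:
        if any(fnmatch(f.lower(), p.lower()) for p in not_to_backup_patterns):
            skipped.append(f)     # captured via a VSS snapshot instead
        else:
            normal.append(f)
    return normal, skipped

files = [
    r"C:\Program Files\Microsoft ADAM\instance1\data\adamntds.dit",
    r"C:\Program Files\Microsoft ADAM\instance1\data\edb.log",
    r"C:\Program Files\Microsoft ADAM\instance1\readme.txt",
]
patterns = [r"C:\Program Files\Microsoft ADAM\instance1\data\*"]

normal, skipped = file_backup_split(files, patterns)
print(len(normal), "file(s) in the normal backup;", len(skipped), "left to VSS")
```

    A third-party tool that ignores the key backs up nothing for those files at all – which is exactly the failure described in this post.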


    When NTBackup encounters the Microsoft ADAM directory data, you can see in the interface that VSS has been invoked:


    In the customer's scenario, the 3rd party backup software could back up the System State, but it was not aware of the FilesNotToBackup registry key, and thus did not back up the ADAM database and transaction log files successfully. Without a valid backup of ADAM we were not able to restore those objects.

    In conclusion, for disaster recovery purposes, I highly recommend that administrators verify that they are getting good backups of ADAM, especially if they are using 3rd party backup software. If your ADAM database and transaction logs are not being backed up, you should schedule NTBackup to back up your ADAM instances or consult your 3rd party backup software vendor to add this functionality.

    *Note - This posting does not apply to AD LDS on Windows Server 2008. Windows Server 2008 contains Windows Server Backup which backs up whole volumes of data using the Volume Shadow Copy Service. A System State backup using Windows Server Backup captures the entire volume for all critical volumes containing operating system files. If your AD LDS instance(s) are stored on a non-critical volume, you will need to ensure that the volume(s) on which they reside are being backed up correctly as they will not be captured in a System State backup.

    Take care,

    Adam "Wheatgrass" Conkle

    [What are the odds of an ADAM SME being named Adam? I think I'll change my name to DFSR - Ned]

  • HOW TO: Export the Configuration Container in ADAM & AD LDS Using LDIFDE

    Hi, Russell here. I’m a member of the Microsoft Texas Directory Services Team. I specialize in all things LDAP, with particular focus on 3rd Party LDAP Client interop, ADAM & AD LDS, Directory Service Schemas, Indexing, and LDAP Query Performance Tuning.

    We recently had a customer who had "inherited" an ADAM infrastructure. He called concerning replication failures between ADAM instances. Trouble was, he had no documentation explaining the configuration. Fortunately, AD LDS and ADAM have many tools to help you sort out the confusion after the fact. One of them is LDIFDE, the Microsoft implementation of a tool that imports and exports data in the LDAP Data Interchange Format (LDIF), specified in RFC 2849.

    To assist the customer, we asked for an LDIFDE export of his ADAM Configuration Partition to view the ADAM NTDS Settings Objects and Site configurations.

    Problem – the command-line help leaves a bit to be desired. While export is the default mode of operation for ldifde, we did not need a full export of all ADAM partitions, #1; nor would the macro expansion feature give us the desired results, #2:

    1. LDIFDE -m -f output.ldf

    2. LDIFDE -f export.ldif -c "#configurationNamingContext" "cn=configuration,dc=x"

    Complicating matters, if the machine is joined to a domain, the export will occur against the first DC to respond – not against ADAM – if ADAM is listening on any port other than 389. See the fine print at the end.

    To obtain just the Configuration Container for analysis, we'll need to supply LDIFDE more information:

    • -d Specifies the root container of our search and export
    • -s Specifies the server we want to connect to (localhost can be used if running locally on ADAM)
    • -t Specifies the ADAM port you want to connect to (use the dsdiag.exe “List Instances” sub-command to determine the port if not known)
    • -f Specifies the file name where you want to write the output of the export

    Order is important. Use the -d switch first, then the server, port, and an output file name.


    LDIFDE -d CN=Configuration,CN={43B6F689-F8B3-47B5-BB75-5B56BB5A55} -s localhost -t 50000 -f ServerConfig.ldif

    NOTES – The CN=GUID value is from a sample machine; each configuration container will have a unique GUID, which replica members share. Possible errors you might encounter when the syntax is incorrect:

    "The default naming context cannot be found. Using NULL as a search base."
    "No entries found."

    Fine print on the above error – this is actually an LDIFDE and ADAM interop issue: ADAM does not populate defaultNamingContext in RootDSE by default. The error shows that you connected to the ADAM RootDSE, but without a search base, nothing gets exported.

    Hasta luego,

    -Russell “SpaniardR2” Despain

  • A pair of useful AD books released

    Ned here again. William Stanek has released a couple of new books through Microsoft Press that any AD administrator will find indispensable:

    Active Directory Administrator's Pocket Consultant

    Group Policy Administrator's Pocket Consultant

    In William's own words, the "book is organized by job-related tasks rather than by features. Speed and ease of reference is an essential part of this hands-on guide. The book has an expanded table of contents and an extensive index for finding answers to problems quickly. Many other quick reference features have been added as well. These features include step-by-step instructions, lists, tables with fast facts, and extensive cross-references."

    These are excellent task-oriented handbooks that are especially useful at 3AM when the brain gets fuzzy and the chips are down. You can read more details on what the books include at William's website. They each list at thirty bucks, but a quick look at Amazon shows they are currently a steal at $19.79 plus free shipping. Hustle on over there before they come to their senses. :)

    - Ned "Just flew back from Redmond and boy are my arms tired" Pyle