A co-worker of mine overheard me talking with a customer the other day and said he thought the conversation would make a nice quick blog, so I told him I would write it up. Basically, the customer and I were discussing how servicing works and what all of the moving parts are. I do this fairly often at Microsoft, so that's nothing new. What was new was the way I related the concepts, and I think that's what my colleague keyed off of. Honestly, it might have been the first time I explained things in this manner, and my co-worker said it finally made a little more sense to him. So, even though I know I've gone over the servicing concepts here in the past, I figured I would throw this out there and see what those of you who read my blog think of it.
The servicing mechanisms in Windows are built around a few core concepts: packages, components, and payloads.
Packages, which are defined by manifests, can be thought of as grocery lists. They detail all of the different components that make them up. Most of the time when we talk about packages, we're speaking about feature packages (.mum files in \Windows\servicing\packages) and the components that make them up.
Components are also defined by manifests; if the package is the grocery list, a component is the more detailed list of ingredients for one specific recipe. Components are defined by .manifest files located in the \Windows\winsxs directory.
Payloads are the groceries themselves. These are the files that you're looking to have installed on your machine when you install a role/feature/update.
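The three layers above can be modeled as simple nested data. This is purely an illustrative sketch of the layering — the names, fields, and file names are hypothetical, not the real CBS schema:

```python
# Illustrative model of package -> component -> payload layering.
# All names here are made up for the example.

# Payloads are just the files that end up on disk.
payloads = ["joscon.dll", "joscon.exe"]

# A component manifest (a .manifest file in \Windows\winsxs)
# is the detailed recipe: it lists the payload files it carries.
component = {
    "name": "Microsoft-Windows-Joscon-Component",
    "version": "6.1.7601.17514",
    "files": payloads,
}

# A package manifest (a .mum file in \Windows\servicing\packages)
# is the grocery list: it names the components that make it up.
package = {
    "name": "Package_for_KB123456",
    "components": [component],
}

# Walking the package yields every file the update would deliver.
all_files = [f for c in package["components"] for f in c["files"]]
print(all_files)
```

Walking the list this way is essentially what happens when the package is parsed: the package points at components, and the components point at the actual files.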
So, how does this all come together? Using the grocery/recipe analogy, let's say you're installing a new update named joscon.msu on your machine. The .msu itself is a cabinet file in a wrapper; inside it are the manifest for the package and the payload. What happens when you run it is:
1. Windows parses the package manifest to determine what's in the package.
2. Once we know what's in the package, we determine whether it applies to the current system.
3. If the update supersedes what's in the component store, we stage the update in the component store and schedule it for installation.
4. After the update has been staged, we change the state of any currently installed components to staged and then change the state of the new package from staged to installed.
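The four steps above can be sketched as a toy state machine. The component store here is just a dict of (component name, version) → state, and the state labels simply mirror the staged/installed transitions described — this is an invented model for illustration, not the real CBS implementation:

```python
# Toy sketch of the 4-step servicing flow described above.
# Names, versions, and state labels are all hypothetical.

def parse_package(package):
    """Step 1: read the package manifest to learn what's inside."""
    return package["components"]

def install_update(package, store):
    for comp in parse_package(package):
        name, ver = comp["name"], comp["version"]
        current = {v for (n, v) in store if n == name}
        # Steps 2-3: the component only applies if it supersedes (is
        # newer than) every version already in the store.  Plain string
        # comparison works here only because the parts are equal-width.
        if any(v >= ver for v in current):
            continue
        # Step 3: stage the new version in the component store.
        store[(name, ver)] = "staged"
        # Step 4: demote the previously installed version(s) to staged,
        # then flip the new version from staged to installed.
        for v in current:
            store[(name, v)] = "staged"
        store[(name, ver)] = "installed"
    return store

store = {("Microsoft-Windows-Joscon-Component", "6.1.7600.16385"): "installed"}
update = {"name": "Package_for_KB123456",
          "components": [{"name": "Microsoft-Windows-Joscon-Component",
                          "version": "6.1.7601.17514"}]}
install_update(update, store)
print(store)
```

Note that the superseded version is demoted to staged rather than deleted — which is one reason the real component store keeps multiple versions of a component around.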
This is all a high-level overview, but if you look at these operations like buying groceries, it might put things into perspective. Before you go to the store, you figure out what groceries you need based on what you already have. You compile a list of groceries, then head to the store and fill that list. In a servicing operation, we basically do the same thing: we check what you're attempting to install, determine whether we need it, and then unpackage and install those files.
Hopefully that helps.....it sounds better when I say it than when I write it.
As an end user, here's how I feel about the NT 6 OSes: servicing absolutely sucks - every aspect of it sucks. The performance and the I/O it takes to apply a single update suck, and the bloat of WinSxS sucks. The slowdowns and waits it causes at logon and logoff suck. The inability to slipstream sucks. The time it takes to install or uninstall an update sucks. And the very thing it was designed for - reliable servicing - sucks. Many of my Vista systems get stuck in an endless loop while installing updates, and more often than not service packs fail to install and roll back at 99%. Even the Windows 7 SP1 beta and RC fail on my new RTM installation after 99%. With XP, I could install 10+ updates in less than 2 minutes using a simple batch script, and I could install a service pack in about 10 minutes. Servicing is one of the key reasons why, in spite of migrating to Windows 7, I absolutely hate the product and curse the people who designed it. If you didn't have to uninstall updates, the Windows 2000/XP INF update.exe-based servicing was problem-free and very reliable. Sorry for ranting again, but this is how I feel about the product.
And mark my words: around the time Windows 7 SP1 releases, there are once again going to be far too many users whose systems get botched while installing the service pack, or who will be unable to install the SP at all - and unable to obtain media with SP1 slipstreamed. Around the time a service pack releases, it has now become customary to clean install the RTM copy (for those who aren't fortunate enough to get access to a slipstreamed image).
"Please wait while Windows configures..." - but I don't want to wait. I don't want Windows gobbling up my disk space every few days. I want to be able to install service packs in half an hour at most, like I can on the decade-old Windows XP. I want cumulative service packs. I want /nobackup back. I also want the /passive switch back. I want the speed of update.exe. How can a single component degrade so much in a single release, from XP to Vista?
Thanks for the feedback.
Joseph, regarding servicing component corruption, and other servicing related problems, how much of it is ultimately due to running updates online, and inherent difficulties related to this, as opposed to all other problems? I hope that sentence is meaningful.
Put it this way - suppose Windows offered the system administrator the option of running all MSU updates offline. When CBS determines it's time to reboot the system, the boot order is temporarily changed so that the system boots into a PE/RE environment (installed during setup). From there, updates are applied offline by DISM or related APIs. On completion, the system reboots into Windows. Would that hypothetical scenario conceivably make updating more reliable, faster, or both?
I see what you're getting at, Drew, and the answer to whether that would be better is... maybe. And that's an honest answer. Ideally, I would like to see all updates applied online, because I can track what's happening more easily when they fail, as opposed to offline installations where my only indication of a problem comes after the fact.
The nicest thing about offline installation is that it keeps other code from getting in the way of the update; neither Microsoft nor third-party code ever gets a chance to interfere. Realistically, though, the servicing-related problems I see tend to come down to a few things:
1. Bad image deployment. This is the most common and the easiest to spot, because updates simply never install on the systems. It's a little harder when only pieces of the image have been corrupted.
2. Unsupported methods of installation. This one is a little tough for me to explain because, as a Microsoft engineer, I know what I can and can't do when you call me. As someone who understands what those who work in the "real world" have to deal with, I also know what you have (or don't have) at your disposal. Usually, though, we'll see issues where someone is scripting installations of updates, using old methods that worked in XP but don't work now, and so on - most of which come down to being unsupported on our end and result in problems. If you can, use something like WSUS or another product to manage your environment; it's a little less problematic.
3. Third-party code. These are the usual suspects: anti-virus, encryption software, etc. Finding and resolving these isn't always hard, but the problems they cause can be tough to figure out.
1. How are people botching deployment so badly that updating never works? Are they disabling services or doing something wrong with permissions? Bad WSUS config?
2. Understood, although I'm struggling to determine what these XP-compatible methods are that people are still using, given that so much has changed.
3. Understood. theitbros.com/sysprep-windows-7-third-party-anti-virus
To answer #1 above, a lot of people do unsupported things with their images in terms of how they actually build them. That tends to cause a lot of issues down the line.