I can tell just starting out on this topic that it’s going to be a long one.  For those of you who attended the session that Johan and I presented at MMS 2010, we talked about various ways of managing drivers, with Johan focusing on MDT 2010 Deployment Workbench (Lite Touch) and me focusing on ConfigMgr 2007 (Zero Touch).  We focused on the whole range of possibilities, ranging from “total chaos” through “complete control freak”, mapping those into the capabilities provided by the product.

Since I focused on the ConfigMgr side of things in the session, I’ll focus on the same thing here.  Basically, here’s how it maps out:

  • “Total Chaos”
    • In this scenario, you basically just import all the drivers into the ConfigMgr driver store, then let the standard “Auto Apply Drivers” step figure out what drivers are needed on each machine.
    • Pros:
      • This is really easy to set up: Create a physical driver store folder structure (arrangement doesn’t matter much), copy all your drivers into it, import them all into the ConfigMgr driver store.
    • Cons:
      • Predicting which driver will be used on each machine will be challenging.  (It will follow the standard driver ranking rules, but those aren’t always obvious.)
      • Testing on each model of machine is essential to make sure they all work as expected.
      • Each driver imported may affect other machines, so you need to re-test each time you import a new driver.
  • “Added Predictability”
    • In this scenario, you still import all the drivers into the ConfigMgr driver store, but along the way you assign categories to each driver, with the categories reflecting the computers that need the drivers, typically identified by the OS and the model of computer to be used (e.g. “XPx86-Latitude E6410”).  You can then specify on the “Auto Apply Drivers” task sequence step which categories should be used on each machine.  (That process can be made dynamic using something like Ben Hunter’s script posted at http://blogs.technet.com/deploymentguys/archive/2008/04/18/configuration-manager-dynamic-driver-categories.aspx.  Otherwise, you would need multiple “Auto Apply Driver” steps each with a condition that limits when that step runs.)
    • Pros:
      • You have much more control over what drivers are used on each machine.
      • Importing a new driver doesn’t affect all models, only those associated with the specified categories.  Testing can then be limited to those models.
      • Driver packages are used only for content distribution, so they can be set up however you like.
    • Cons:
      • Importing duplicate drivers presents a challenge.  When ConfigMgr detects a driver that has the same hash value as an existing driver, it generates an error saying the driver is already present.  But it doesn’t assign that driver to the category and package that you specified in the wizard – it just drops it.  So you need to go back through all the drivers, figure out which one was the duplicate, and manually add the needed category.  (That’s where the PowerShell scripts come in, more on that later.)
      • Arrangement of the driver directories in the physical driver store (which you build manually before importing the drivers) is important because you’ll need to import them one folder at a time.  (See Johan’s blog post at http://www.deployvista.com/Home/tabid/36/EntryID/82/language/en-US/Default.aspx for a suggested folder structure.)
  • “Complete control”
    • In this scenario, you’ll use driver packages to group drivers (so categories don’t matter), with one driver package created for each OS and model combination (e.g. “Win7x86-ThinkPad W500”).  You then add multiple conditional “Apply Driver Package” steps into each task sequence, specifying when to use which package (again, typically by OS and model).  (There’s no way to make this dynamic like you can with driver categories.)
    • Pros:
      • You explicitly specify which drivers you want on each machine – never any question about what will be injected (everything in the package).
      • You can add drivers even if PnP detection says they aren’t needed (e.g. two-phase drivers or drivers for devices not currently on or connected).
    • Cons:
      • Pretty much all the “cons” from the previous “Added Predictability” scenario still apply here: duplicate drivers are a challenge, physical store arrangement is very important.
      • The task sequence will only run when all the packages are available on a distribution point.  So you need to be careful when you refresh any of the packages as you won’t be able to run the task sequence until the DP update is complete.  (That’s really true for any package referenced by the task sequence, not just driver packages.)
      • If you have lots of models to support (e.g. hundreds), you might run into task sequence size limitations requiring modifications to the WMI provider memory allocation on your ConfigMgr site server.  You’ll also put an extra load on your management points, as a new task sequence advertisement will generate a policy download for the task sequence as well as for each referenced package (so 100 driver packages equals 100 policy downloads, even though only one will actually be downloaded later and used) – make sure your management points can handle it.
  • “Johan’s Control Freak Method”
    • This scenario, like the previous one, uses “Apply Driver Package” with unique packages for each OS and model.  But unlike the previous scenario, you only import the drivers that are needed for boot images (e.g. NIC and mass storage drivers), leaving the driver store pretty bare.  When you create the driver packages, instead of pointing to an empty folder you point to a folder in your physical driver store – basically, you’ve already built the contents you want in the package and you “trick” ConfigMgr into using it.  (Apparently Johan ignored documentation like the TechNet article at http://technet.microsoft.com/en-us/library/bb632329.aspx that says to specify “an empty directory share”.)  Johan goes through the full details in his posting at http://www.deployvista.com/Home/tabid/36/EntryID/82/language/en-US/Default.aspx.
    • Pros:
      • Very easy to set up.  Once you’ve organized your physical driver store by OS and model, you just create one new driver package for each OS and model.
      • Otherwise, the pros are the same as the previous “Complete control” scenario.
    • Cons:
      • Probably stretches the “supportability” boundaries, since I don’t think this was supposed to work this way.
      • Package availability, task sequence size limitations, and MP load are the same as with the “Complete control” scenario.
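To make the category-based scenario a little more concrete, the dynamic approach that Ben Hunter’s script takes can be sketched roughly like this in PowerShell.  This is only an illustration of the idea, not his actual script – the OS tag and the task sequence variable name here are placeholders I made up, and the Microsoft.SMS.TSEnvironment COM object is only available inside a running task sequence:

```powershell
# Sketch: build a driver-category name matching the "OS-Model" naming
# convention used above (e.g. "Win7x86-ThinkPad W500"), then hand it to
# the task sequence so a later "Auto Apply Drivers" step can use it.
$model = (Get-WmiObject -Class Win32_ComputerSystem).Model.Trim()
$osTag = "Win7x86"                      # hypothetical; set per task sequence
$category = "$osTag-$model"

# Only works inside a running task sequence; the variable name
# "ModelCategory" is illustrative (Ben Hunter's script actually resolves
# the category to its GUID before handing it to ConfigMgr).
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$tsenv.Value("ModelCategory") = $category
```

The alternative, as noted above, is multiple “Auto Apply Drivers” steps, each hard-coded to a category and gated by a condition on the model.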

As we mentioned in the session, you will probably pick one of these scenarios as the “primary” one for your organization, but these can also be combined into “hybrid” approaches.

Historically I’ve suggested that people use one of the first three mechanisms, but because of the way ConfigMgr handles duplicate drivers, the second (“Added Predictability” with categories) and third (“Complete Control” with driver packages) are challenging, especially if you want to set them up by hand.  Various people have proposed workarounds for that over the years, in articles like these and various newsgroup postings:

http://wbracken.wordpress.com/2009/09/26/adding-duplicate-drivers-to-sccm-sp1/

http://myitforum.com/cs2/blogs/jsandys/archive/2010/04/05/duplicate-drivers-helper-script.aspx

In fact, this is the same solution that Dell has implemented in their driver CABs available for download from http://www.delltechcenter.com/page/Dell+Business+Client+Operating+System+Deployment+-+The+.CAB+Files:  They put a “release.dat” file into each driver folder (with different content in each, to change the directory hash) so that ConfigMgr doesn’t see any of the drivers as duplicates.  That certainly solves the problem, but it does so by defeating ConfigMgr’s duplicate-detection process altogether.  As a result, the ConfigMgr driver store will also end up being much bigger.  That “release.dat” file also defeated the duplicate detection in MDT 2010’s Deployment Workbench, so we added explicit logic to look for that file and ignore it when calculating hashes.  That way we see the duplicates again (which keeps our driver store as small as possible too).
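For illustration only (I’m not recommending it, for the reasons just given), the “release.dat” trick amounts to something like this – a sketch that assumes a physical driver store at a hypothetical UNC path, with one folder per driver:

```powershell
# Sketch of the "release.dat" trick: drop a file with unique content into
# each driver folder so no two folders hash the same, which keeps ConfigMgr
# from flagging any driver as a duplicate on import.
$driverStore = "\\server\DriverStore$"   # hypothetical path

Get-ChildItem -Path $driverStore -Recurse -Filter *.inf | ForEach-Object {
    # A fresh GUID per folder guarantees the content (and thus the hash)
    # differs everywhere; overwriting is harmless if the file exists.
    $datFile = Join-Path $_.DirectoryName "release.dat"
    Set-Content -Path $datFile -Value ([Guid]::NewGuid().ToString())
}
```

Note that every driver then imports as “new”, so the driver store grows with every duplicate you feed it.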

OK, so how do you implement the “Added Predictability” and “Complete Control” scenarios without either going to Johan’s approach or defeating the duplicate detection (or doing it all manually, which is painful since finding each duplicate by hand isn’t easy)?  That’s where the PowerShell scripts come in.  As I discovered early last year when researching this topic, ConfigMgr knows which driver is the duplicate (obviously) and will tell you which one it is when you make the request via the ConfigMgr WMI provider (not so obviously) – you just don’t see that information through the UI.  But with a script, you should be able to do something like this:

  1. Try to import a new driver.
  2. If the import fails, get the existing driver from the error reported.
  3. Assign the driver to the specified category (whether new or existing).
  4. Add the driver to the specified package (whether new or existing).

So the next posting will focus on that part:  Using PowerShell to import drivers while handling the duplicates along the way.

Enough “driver babble” for now.