There is a story that goes around PFE circles, and I have told it so many times I’m starting to wonder if it is true or legend. The story is about an SCCM admin who is testing a failing Windows 7 OSD deployment against his test machines. Tired and frustrated, he kicks off another test run and heads home, only to get a call from his boss shortly thereafter. The boss wonders why his machine is about to run an OS deployment, and the admin realizes he mistakenly targeted the “All Systems” collection and not the “All Test Systems” collection. Seeing as how his SCCM environment managed both workstations and servers, this is super bad, so he goes rushing back to fix things, and a call comes in to PFE asking what the fastest way is to stop an OSD deployment in progress.
True or not, there are lessons to be learned from this story, and I tend to share them to help folks be careful in their OS deployments. Here are some points to consider so that this never happens to you (or so you can stop it if it does).
SCCM 2007 R3 released today. I don’t see an RTM download yet, but it sounds like it should be on the web this afternoon.
http://blogs.technet.com/b/systemcenter/archive/2010/10/14/system-center-configuration-manager-2007-r3-unleashed.aspx
Working in the System Center space, I occasionally get asked about management of environments that don’t really fit the world that SCCM focuses on. SCCM is not a cost-effective solution for many small businesses, and with a top scale of 200,000 clients (soon to increase with the R3 release) there are a few very large environments that go beyond what SCCM can handle effectively. For the smaller environments there is a related product called System Center Essentials. This is a product aimed at companies with 50-500 client machines and is essentially a combination of SCCM, SCOM, SCVMM and Hyper-V all rolled into one nice and affordable package. For now, that is what I recommend to smaller IT shops. However, there is a new offering coming online that may be appropriate for some organizations, called Microsoft Intune. Intune is less a hybrid of things, like Essentials is, and more of an SCCM-lite or WSUS-lite offering. It has some of the same features as SCCM but is hosted in the cloud, so small businesses don’t have to pony up the money for server hardware and can more easily get at the key things they need. It doesn’t have nearly the deep and rich feature set that SCCM has, but for some companies it might be the right answer. Check it out if you think it might be the right thing for you!
For those that may not have seen the news, you may want to take a look at the SCCM V.next Community Evaluation program. If you are accepted, you will get some early builds of the product as well as an opportunity to provide feedback to the product group. Program nominations close Sept. 24th.
I have encountered several customers where something I thought was well understood wasn’t as clear as I assumed: relative paths and working directories. In an effort to help the SCCM community I will try to explain it all here.
When you are writing a batch file to control some kind of software installation you obviously need to reference the files of that install. The trick is how the execution of that batch file finds those files. As an example for this discussion I will keep it simple and create a batch file (I will call it copyfiles.cmd) with the following line in it:
Xcopy /y trace32.exe c:\
For my example I will place copyfiles.cmd and trace32.exe in c:\packages\copyfiles.
When the batch file executes it will go searching for trace32.exe, and if it can be found it will be copied to the root of the C drive. How does it get found? Well, the directory that you are in when you execute the batch file is called your working directory, and that is the first place the OS looks to find trace32.exe. If I open a command prompt and navigate to the root of the C drive, then the root of the C drive is my working directory. From my command prompt I might try this:
c:\packages\copyfiles\copyfiles.cmd
This fails to find trace32 to copy. Why? Because when that batch file executes it looks for trace32.exe in the root of the C drive, my working directory, and fails to find it. I could solve this by defining an absolute path, such as:
xcopy /y \\Myserver\c$\packages\copyfiles\trace32.exe c:\
Awesome. Now that I have an absolute path I can run my batch file from any directory on the machine and it will work. But wait..., what happens when I distribute this with SCCM or SMS? If I want trace32.exe in the root of the C drive on all my SMS/SCCM clients, and not just this one, I would create a package with a package source of \\Myserver\packages\copyfiles, and for that package I would create a program that calls copyfiles.cmd. I advertise that out to all my clients and everyone gets trace32.exe in the root of their C drive, right?
Maybe. There are a few catches. Yes, this would run, and yes, it would copy trace32 into the root of every C drive IF every computer has permissions to access \\myserver\packages. That is its own catch, but what I am discussing here is the fact that it has to go to \\myserver at all. When you set up that SMS/SCCM package you probably set it up to copy all the files from \\myserver\packages\copyfiles out to your distribution points. The clients then probably downloaded all (two) of those files down into the local cache on the hard drive of the SCCM/SMS client. The SCCM/SMS client then looked in that local folder, found copyfiles.cmd and executed it per the program. The second catch is that even though we went through all the trouble to move trace32.exe from the site server to the distribution point and down into the client cache, the batch file had a hard-coded path back to the server. When the batch file ran it ignored the copy of trace32.exe sitting next to it and instead reached across your WAN to copy the file from the server. That somewhat defeats the usefulness of SMS/SCCM for minimizing bandwidth usage for software deployments.
In this example the first version of the batch file would have been the right one to use. Unlike my manual process outlined above, when SCCM runs a package it sets the working directory to the root of the SCCM/SMS package that was copied into the client cache. Thus it would have found the already local copy of trace32.exe.
Ok, now I told you all that to tell you this. SCCM/SMS sets the working directory (by default; it can be modified) to the top level of the package you set up in the source location. To make this clear, consider these two similar, yet different, package directories and files:
Package1:
\packages\Copyfiles\Copyfiles.cmd
\packages\Copyfiles\trace32.exe
Package2:
\packages\Copyfiles\Copyfiles.cmd
\packages\Copyfiles\trace\trace32.exe
For Package1 our original batch file would work fine. That same batch file in Package2 would fail because trace32.exe is in a subdirectory, not in the working directory that SMS/SCCM sets up. To solve this we need to give a relative path to trace32.exe. For Package2 the batch file would need to look like:
Xcopy /y .\trace\trace32.exe c:\
This gives us the best answer. We don’t have a hardcoded absolute path. Instead we are telling the computer that runs the batch file to look in the trace directory under the working directory (wherever that may be) to find trace32.exe.
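The same working-directory rule can be seen outside of cmd.exe. Here is a small Python sketch (the directory layout is a throwaway one built in a temp folder, just for illustration) showing that a relative path only resolves once the working directory is the package root, which is exactly what SCCM sets up before running your program:

```python
# Demonstrate how a relative path resolves against the working directory.
import os
import tempfile

# Build a throwaway package layout: <pkg>\trace\trace32.exe
pkg = tempfile.mkdtemp()
os.makedirs(os.path.join(pkg, "trace"))
open(os.path.join(pkg, "trace", "trace32.exe"), "w").close()

# From an unrelated working directory the relative path fails to resolve...
os.chdir(tempfile.mkdtemp())
print(os.path.exists(os.path.join("trace", "trace32.exe")))  # False

# ...but once the working directory is the package root, it works.
os.chdir(pkg)
print(os.path.exists(os.path.join("trace", "trace32.exe")))  # True
```

The batch file in the example behaves the same way: .\trace\trace32.exe is found only because SCCM made the package root the working directory first.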
For more information you can do a Bing search on “relative paths” or start with http://en.wikipedia.org/wiki/Path_%28computing%29.
I was with a customer recently who found management of their Forefront updates to be problematic, and they were looking for an alternative to the general recommendation (http://technet.microsoft.com/en-us/library/dd185652.aspx). They had actually come to this idea on their own and then asked for my input, but if they had asked me first, this is the solution I would have proposed.
Set up a script to download the updates (see http://support.microsoft.com/kb/935934 to get you started) and run that script as a scheduled task (say…, every 4 hours). In SCCM create a package that points to the source location your updates are downloading to. Set a schedule to update your distribution points on a regular interval (such as every 4 hours, about 10 minutes after your download is kicked off). Create a program that silently installs the update. Advertise that update with a recurring schedule that runs the update program on the client on a regular interval, such as every 4 hours and about 45 minutes after your initial download via your script (depending on your DP replication times).
Tada…, all your clients now have up-to-date Forefront definitions, all done through the bandwidth-controlled mechanism of SCCM.
NOTE: The time interval I gave was just for discussion and example purpose. Depending on your environment, size and latency of your SCCM hierarchy, etc. you may need to adjust that time interval and/or set up separate downloads and packages for down level child sites.
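The staggering in the steps above can be sketched out to sanity-check the offsets. This is just the example cadence from the post expressed as code; the values are illustrative, not a recommendation:

```python
# The example 4-hour cadence: download, then DP refresh, then client install.
from datetime import timedelta

cycle = timedelta(hours=4)                     # every stage repeats on this interval
download = timedelta(minutes=0)                # scheduled task downloads definitions
dp_update = download + timedelta(minutes=10)   # DP refresh shortly after the download
client_run = download + timedelta(minutes=45)  # clients install after DP replication

# Each stage must start after the previous one and fit inside one cycle.
assert download < dp_update < client_run < cycle
print("stagger fits inside one 4-hour cycle")
```

If DP replication in your hierarchy takes longer than 35 minutes, push the client run offset out accordingly.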
Inventory settings are site-wide, yet there are times when you want a class of hardware inventory collected for all machines EXCEPT a few (for example, some kiosk machines where many folks log in with new and separate profiles). To handle that, there is a handy write-up at http://myitforum.com/cs2/blogs/skissinger/archive/2009/07/03/selectively-disable-ccm-recentlyusedapps-per-client.aspx that will allow you to disable certain inventory classes only on select client machines while leaving them enabled for all other machines in that site.
Handy, eh?
It looks like the open beta for the next version of SCCM is coming soon. The Microsoft Connect website has a section where you can sign up to get it once it comes out. See https://connect.microsoft.com/ConfigurationManagervnext and sign up. Rumor is it will be available sometime this month.
For those that may not have yet heard, the SCCM 2007 R3 beta is available for folks to look at if they are interested. I’m in the process of setting up my own test box to play with it. It is an open beta program and an eval version, so you can only apply it to an SCCM eval install, not to a full install as most folks have. The good news is that you can download a VHD of the SCCM 2007 R2 eval, apply the R3 beta to that, and play around. The VHD and other R3 info is in the open beta section on Microsoft Connect (https://connect.microsoft.com/site16).
One of the hardest things to tackle in SCCM these days is client health. It is an ongoing issue because it is hard to diagnose and hard to programmatically fix. SCCM’s client is much improved over older versions, but it still has occasional issues, and its dependencies such as WMI and the Windows Update Agent still have theirs as well.
While looking into this for one customer I came up with a trick that won’t solve all client health problems, but it moves one step closer. This trick is for some of the Windows Update Agent (WUA) issues. If anyone uses this and finds issues or improvements please let me know and I will follow-up or correct this post as needed.
The first step is to identify the machines having WUA issues. There are probably several ways, but what I found useful was to look for clients sending 11416 status messages. Creating a status message query was easy, but creating a collection based on status messages takes a little more work. Here is one I put together that seems to do the trick:
select distinct SYS.Name,SYS.Client from SMS_StatusMessage as stat join sms_r_system as SYS on stat.machinename = SYS.name where stat.ModuleName = "SMS Client" and stat.MessageID = 11416 and DateDiff(dd,stat.Time, GetDate()) <1
This query gets all the machine names that have sent an 11416 status message in the last day and cross-references them with the system object for each machine so that a collection of machines can be put together.
Once you have your collection of machines identified the next step is to send those machines something to repair WUA. KB971058 has a nice Fix It script that will do this and you can download it from the KB. It is an MSI and in my testing using the default settings seemed to be enough to fix most machines. As an MSI you can have SCCM create your package and program by creating a package from definition and pointing at the MSI file itself. This should give you a silent run option.
Once you have the package in place advertise it to your collection created based on the query above and see if that solves your WUA health issues. For my customer we saw a 92% reduction in WUA issues using this method.
** Correction** I had previously posted this as a WMI fix, when this is really a WUA fix. I just had WMI on my brain. My apologies for any confusion.
This is one of those little known tricks of SMS/SCCM, but it can be handy. I often run into customers who have SMS 2003 and their collection management is out of control for various reasons. When going to SCCM they want to clean up their collections. Deletion is the easy way to do that, but doesn’t work in all situations. Here is how you can move a collection (or collection hierarchy) under another collection.
Let us say you have two root-level collections named “parent” and “child”. You want to put “child” under “parent”. Child may or may not have other collections under it already. To move child, you would start by accessing the right-click menu on “parent” and choosing “Link to collection”. In the dialog that comes up, choose “child” and complete the dialog. This will now put a copy of “child” under “parent”.
Notice the collection ID for the sub-node “child” is the same as the root-level “child.” They are, essentially, one and the same, so any change made to one, such as membership rules, will affect the other. There is one exception, however: deletion of the collection. This is where your “collection copy” turns into a “collection move.” Now that “child” is under “parent” you can delete the original root-level “child” and the new sub-node “child” will not be deleted.
TaDa… collection moved!
Today’s blog post again comes from a fellow PFE and I can’t take any credit. However, I thought it was a worthwhile topic and I wanted to make sure the knowledge and information is out there for other folks.
When you create a collection and decide to populate it, you have two options: direct membership and query-based. Many folks use direct membership because it is very easy to do compared to a query. Pick a machine, and it is in your collection. This works fine if you are setting up a quick package test or something similar, but there is a pitfall most folks are not thinking about. When you do this you may pick a machine/user based on its name, but under the covers SCCM is actually assigning the resource based on an ID number.
Where this can come back to haunt folks is if a machine should change its ID number. To the SCCM administrator the machine still shows in the admin console, but it would no longer be a member of any collections where it was directly assigned. A machine can change its ID number for various reasons, such as an OS upgrade (hopefully via the OSD feature), because it is accidentally deleted from SCCM and then repopulated from a new DDR, or if maintenance tasks are set too aggressively and delete the current record.
The solution here is slightly more work in the short term, but can save you some work in the long run. When you create that collection membership, make it query-based, and in that query you can specify to return all machines/users with a specific name. Now no matter what happens to the machine and its ID in SCCM, it will continue to be a member of the collection as long as it keeps the same name.
One more catch, this time in relation to the solution I mentioned. I have heard (but not witnessed myself) that collections with a large number of query rules can perform poorly. I would suggest that rather than making one query for each machine you want, you instead make a single query that uses a “list of values” instead of the more standard “simple value” and then just add all your machines to that one query.
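To make the "list of values" idea concrete, here is a small sketch of building one such query rule instead of one rule per machine. The machine names are made up, and this simply assembles the query text you would paste into the query rule editor:

```python
# Build a single "list of values" membership query covering many machines.
names = ["WKS001", "WKS002", "WKS003"]  # hypothetical machine names
value_list = ", ".join('"{0}"'.format(n) for n in names)
query = ("select * from SMS_R_System "
         "where SMS_R_System.Name in ({0})".format(value_list))
print(query)
```

One query rule like this keeps the collection evaluating a single statement no matter how many machines you list.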
I can’t claim credit for this idea, but it is one that I think is handy enough, and not well known enough, that I am trying to spread the word. One of my co-workers has a great way to capture OSD logs when a failure occurs so they can be easily read and are not lost simply because you weren’t in front of the machine to hit the function key and sniff around via a command prompt.
The simple concept is this: you take your entire task sequence and place it under a new high-level folder. If any step in your TS should fail, control will be returned up to this top-level folder. You set this top-level folder to continue on error, leading to the next part.
Also at the top level, but below the folder you previously created, you create a log-capturing folder. Under this folder you have some tasks which capture the SMSTS log and other related files and copy them to a file share on your network.
The end result of all this is that if any task in your TS fails, no matter which stage, you don’t have to hunt for the logs and you don’t lose them when the machine reboots into a failed deployment. You just go to the file share, take a look at what failed, then go fix it.
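The copy step itself can be sketched roughly like this. This is a hypothetical Python illustration, not the actual TS step; a real task sequence would read the _SMSTSLogPath variable for the log directory and use your own share name:

```python
# Sketch: copy SMSTS .log files to a per-machine folder on a network share.
import os
import shutil
import socket

def capture_logs(log_dir, share_root):
    """Copy every .log file in log_dir to a per-machine folder on the share."""
    dest = os.path.join(share_root, socket.gethostname())
    os.makedirs(dest, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(log_dir)):
        if name.lower().endswith(".log"):
            shutil.copy2(os.path.join(log_dir, name), dest)
            copied.append(name)
    return copied
```

Keying the destination folder on the machine name means multiple failing machines can land their logs on the same share without clobbering each other.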
All the details are on Steve Rachui’s blog at http://blogs.msdn.com/steverac/archive/2008/07/15/capturing-logs-during-failed-task-sequence-execution.aspx
As I am working with customers, oftentimes there is a discussion about Active Directory discovery (usually systems, sometimes users). People often do not want to discover EVERYTHING in AD, only a subset. If they already have a specific OU or two to aim at, that’s great, and SCCM can do an LDAP query to just those few OUs. However, if there are a lot of separate OUs, this becomes a pain to set up, plus you may miss out if a new OU is added by the AD folks.
A common example of all this is where a company wants to discover and manage all their workstations, but none of their servers. Even though discovery of servers doesn’t mean they will be managed, folks do not want to take that chance, so they want to limit their discovery so no servers are discovered. Oftentimes servers are in their own OU while workstations are spread across many OUs.
So the very simple trick here is to simply grant a DENY permission for the SCCM Site server machine account on the OU you do not want discovered and then point SCCM to discover everything in the domain. This allows discovery of everything in AD except specific OUs. SCCM uses the machine account context to query AD during discovery and if it has deny permissions on an OU it simply skips over it, finding everything else and including, by default, all new OUs your AD team makes in the future.
Many folks may know this, but for those that do not, I’m making this post.
Microsoft has posted on http://connect.microsoft.com a program for TAP nominations for “Configuration Manager vNext.” If you have a Passport account you can subscribe to the group without committing to anything. Once you have applied to this group you can see a link to download a slide deck on vNext of SCCM. There is some good info in there, as well as some recommendations on how to prepare your existing SCCM infrastructure for vNext. I highly recommend taking a look if you want to know what is coming down the pipe.
Something that I recently learned from a colleague of mine was how to create a search folder in SCCM to show all the software updates required by client machines. It is non-intuitive, but in my testing it does seem to work.
Under Software Updates –> Update Repository –> Search Folders you should create a new search folder. For the criteria choose “required,” and for the search text type 1 and hit Add, then 2 and hit Add, then 3 and hit Add, and so on through 4, 5, 6, 7, 8, and 9. For the search options, select to search all folders under this feature. Give it a name and save it.
This doesn’t make sense to me, but as an example, by adding a 7 any number starting with a 7 will be picked up (7, 70, 700, 723, 72, etc.). Since you add every digit from 1 through 9, every possible required count matches the query, and all required updates show up.
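The reason this works can be sketched in a few lines: the search text is a "starts with" match, and every positive required count starts with one of the digits 1 through 9. This is just an illustration of the matching logic, not SCCM code:

```python
# Why seeding the search text with 1-9 catches every required count:
# the filter is a prefix ("starts with") match.
digits = [str(d) for d in range(1, 10)]

def matched(required_count):
    """True if the count would match at least one of the seeded digits."""
    return any(str(required_count).startswith(d) for d in digits)

print(all(matched(n) for n in range(1, 10000)))  # True
```

Note that a count of 0 matches nothing, which is exactly what you want: updates required by no clients stay out of the search folder.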
Hi and Hello. I’m a Premier Field Engineer here at Microsoft specializing in Systems Management Server (SMS) and System Center Configuration Manager (SCCM). I’m starting up this blog to share some of the many tips and tricks I learn about while working here at Microsoft, specifically with SMS and SCCM. I hope to help others through sharing these things.