Driver management has never been a fun aspect of Configuration Manager. It is one of those necessary evils behind a great OS deployment solution. Done according to product design, it consists of three general steps:
Adding drivers to the driver catalog can, if not done with foresight, become a big mess. Some advice floating around the net recommended avoiding this mess by skipping step 1 and just creating a flat directory structure and pointing to it as the source for your driver package. To my knowledge this was never recommended by Microsoft, or supported as a proper driver management technique, but it worked, and it is hard to argue with something that works and saves you time… until now.
Under System Center 2012 Configuration Manager this method is blocked in the UI. If you try to make a driver package and point to a directory that already has files in it, you will get an error. However, if you migrate what you had under SCCM 2007 to 2012, the driver packages will migrate just fine and all will seem good… until you run your first task sequence that uses those driver packages.
When you run that task sequence, it will fail to find content for the driver package. If you look into the smsts.log you will find a 404 error being raised when trying to find the driver package contents. This is because the new single-instance storage model on the DP is not compatible with the unsupported manner in which the driver packages were made. The solution is to go back to the supported method of driver management, using step 1 from above, and manage the “mess” of drivers as best you can.
Sorry for the bad news, folks. The testing for the new features in ConfigMgr 2012 apparently didn’t cover all the unsupported scenarios out there, no matter how popular.
Today’s tip is one which many folks already know, but surprisingly many folks, even those who have used the product for a long time, have somehow missed. Due to some work I put into this many years ago, it holds a special little place in my ConfigMgr admin heart.
If you use any version of System Center Configuration Manager (SMS, ConfigMgr, SCCM) then you have probably created some packages and programs to deploy software. Have you ever noticed the option (it varies based on product version) for creating a package from definition? That option was originally placed there so that software makers, such as Microsoft, could provide an easier way for you, the ConfigMgr admin, to deploy the product. Along with the binaries of the product they could also supply a file, called a package definition file, which would auto-populate some fields in ConfigMgr such as product name, version, proper command lines, etc. Originally these files had a .PDF extension but, for reasons you can probably figure out, we changed that to a .SMS extension.
So… great concept. You grab the files from the software maker, import the package definition file to create the package and program details in ConfigMgr, then you point the newly created package at your source files and start distributing software. The catch is… that it didn’t catch. Most companies and products did not bother with the creation of package definition files. Then along came our friend, the MSI.
With MSI technology picking up we saw the opportunity to help the ConfigMgr admin use this package definition concept. Code changes were made and now you can reference an .MSI as well as a .SMS or .PDF file for package and program creation. No longer must you depend on the software maker to create a special file. If they have an MSI then you can reference that and ConfigMgr will extract all the necessary data out to create the package and programs you need.
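To make the concept concrete, a package definition file is just an INI-style text file. A minimal sketch might look like the following; the product, section names, and command line here are hypothetical, so check a real vendor-supplied .SMS file for the exact fields your version expects:

```ini
[PDF]
Version=2.0

[Package Definition]
Name=Contoso Widget
Version=1.0
Publisher=Contoso
Language=English
Programs=Silent Install

[Silent Install]
Name=Silent Install
CommandLine=setup.exe /quiet
CanRunWhen=AnyUserStatus
```

Import a file like this through the package-from-definition option and the package and program fields come pre-populated; you then only need to point the package at your source files.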
Next time you need to deploy software, check this out and see if it helps you take a few steps out of the deployment process.
10/13/14 - updated to add restart info
Many of my customers are in similar situations where they do not include application deployment in their OS imaging process but instead rely on Configuration Manager to deploy apps after the OS is up and running. The biggest complaint I hear is about how slow ConfigMgr is to do this, and thus people try to speed it up through things like faster collection update intervals and more aggressive client policy polling intervals. The downside of these more aggressive practices is that there is more churn and load on the network and server infrastructure, just to support these few new machines as they are initially built out.
In a discussion with one of my customers I hit upon a solution to this situation using the new capabilities of the ConfigMgr 2012 product. The general idea is to have aggressive schedules only for machines in the process of being set up, but less aggressive schedules for the rest of the machines.
Overview:
Outcome:
A new machine will get the aggressive default policy polling interval of 5 minutes and keep checking for new software deployments. Once AD discovery picks up the machine via delta discovery, it will be added to the collections via incremental updates and start getting software. It will also join the “setup” collection and get a policy that keeps its policy polling interval at 5 minutes. It will also have its reboot notification schedule shortened to allow for faster deployments when multiple reboots are necessary. After a set time (1 day in my example below), when setup should be complete, it will move out of the setup collection and get the standard policy, which puts it back on a normal 60-minute policy polling interval and regular reboot notifications, causing less churn and giving more notification time to the end user.
Details:
The key to this setup is the collection query rule for the setup collection. The following query should give you all machines added to Configuration Manager in the last day:
select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,
SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,
SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client
from SMS_R_System where DateDiff(dd,SMS_R_System.CreationDate, GetDate()) <1
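One thing to keep in mind (an assumption worth verifying against your own data): DateDiff with the dd datepart counts midnight boundaries crossed, so < 1 effectively means “created since midnight” rather than “created in the last 24 hours.” If you want a true rolling window, a variant using hours might look like this:

```sql
select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType,
SMS_R_SYSTEM.Name, SMS_R_SYSTEM.SMSUniqueIdentifier,
SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System where DateDiff(hh, SMS_R_System.CreationDate, GetDate()) < 24
```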
There are several tips out there on how to stop accidental deployment of task sequences (one of my favorites is a blog post by Frank Rojas). One of my customers came up with a fairly good idea on this topic that I wanted to share as well.
Like programs, a task sequence can be set to only run on certain supported platforms. If you have a scenario where you only want a task sequence to run on a machine which has done a PXE boot or booted from media, you can use this to your advantage. On the task sequence, set the supported platforms to some OS you don’t have, and don’t expect to have, in your environment. Now, I know that everyone reading this has likely deployed every Microsoft OS shortly after we release them, and thus it might be hard to find an unused OS. I suggest checking your inventory and seeing if, oh, Vista is in much use in your enterprise.
Whatever you pick, it will limit the task sequence to only allowing execution on that OS and from WinPE. This means you can advertise (only by accident, of course) that task sequence to all systems and have less risk of it accidentally running on machines you don’t want it to. Cool, eh?
Today’s post isn’t a tip or trick per se, but rather an issue that is not well documented that I hit with a customer.
When you install ConfigMgr 2012 you will notice that .NET 4.0 is a prerequisite. If, after installing the site, you decide you want to put the Application Catalog website and Application Catalog web service on the site server, you will most likely need to install WCF, a sub-component of .NET 3.5.1.
This will cause you the first issue, as documented in KB2015129. Apparently the WCF install messes up .NET 4. The easy fix to stop the flood of status messages is to run aspnet_regiis.exe /iru, and then everything looks good, the error status messages stop, and all is good, right? Maybe.
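For reference, aspnet_regiis.exe ships with the .NET Framework itself and needs to be run from an elevated command prompt out of the .NET 4 directory. A sketch, assuming a default 64-bit install path:

```shell
cd /d %windir%\Microsoft.NET\Framework64\v4.0.30319
aspnet_regiis.exe /iru
```

(On a 32-bit system the path would be Framework rather than Framework64.)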
For one customer of mine this seemed to do the trick, but for my other customer the results were mixed. The status messages stopped and all seemed fine. Domain admins could access the http://<server>/CMApplicationCatalog website just fine, but other users would get a 401 error, “Unauthorized: access is denied due to invalid credentials,” after launching the Software Catalog. We tried various things but in the end found another article on the net with the final solution (sorry I can’t give credit; I can’t seem to find it again). The final solution to all this was:
If you are new to System Center 2012 Configuration Manager and learning the new Role-Based Administration (RBA) model, you may not initially grasp the concept that you grant a user a role and a scope to define their security access. I find this gets people a little confused sometimes. The role is the set of abilities a user is given. To compare to something people are more familiar with: in AD, the Administrator role means you can do things like create accounts, stop services, etc. That’s fine, but then the question is WHERE you can do these actions. A local administrator has a different set of objects they can affect compared to a domain administrator. This is their scope (local or domain). In ConfigMgr you grant a user a scope to define what objects in the hierarchy the user is allowed to exercise their actions against.
Said one more time for clarity, a security role defines the actions you can take, the scope defines on what objects you can take those actions.
Now, the scenario I recently hit with a customer was one where they had a CAS and a primary site. They created a scope, called Pri1, and tagged the primary site object as part of this scope. They then granted a user the Full Administrator security role, but only on the Pri1 scope. This let the user administer and run the primary site but not touch the configuration on the CAS. When we got down to setting client settings from the primary, we couldn’t see them; they are considered part of the CAS site, where no rights were granted. Now, how do we let the user at the primary access these client settings but not have full permissions over the CAS? If we simply added them to the CAS scope, they would combine that with their Full Administrator permissions and be able to do far more than desired. The answer is in this screen shot:
The names used in this screen shot are different, but the key is the use of the 3rd radio button, and not the 1st or 2nd. We want to Associate the assigned security roles with SPECIFIC security scopes. To follow on from my earlier example, we need to add the read permission to the CAS site, leaving the Full Admin permission attached only to the Pri1 scope and specific collections.
For some of you this might be enough for the “light bulb to go on,” but in case you weren’t so lucky, here are the steps you should be taking to set up this user in this scenario:
One of the things I often get asked to do is to look over ConfigMgr 2007 installs and provide feedback on any issues I see. One of the things I look at is the folders in \Microsoft Configuration Manager\Inboxes. These folders carry all the activity of ConfigMgr and typically consist of files which are coming and going. However, sometimes things go wrong and a bunch of files can start accumulating in these folders, eventually running you out of disk space. This is never good, and you should look into the root cause and fix it before worrying too much about the files themselves. However, some things fix themselves, or some files are left behind after fixing the problem, and those leftover files are a nuisance that needs to be cleaned up and removed. I will try to provide a list of some of the folders I look at and recommend cleaning up if there are old files in them.
I should make a note here that while you can just delete the files, and will be fine in most cases, it is a good practice to instead move the files to a temp folder long enough (a day or week) to make sure there are no negative repercussions from the file removals. It is just smart to play it safe.
There are lots of inboxes, and I’m sure many of them also get file backlogs for various reasons; these are just the ones I see most often. As I see and validate them I will update this list as appropriate.
For a good list of all the inboxes and their purpose, see this technet article.
7/29 update - Added CEP files
I was working with a customer once and we were looking for a good way to separate laptops from desktops within Configuration Manager when we hit upon this little known “memory” trick that I wanted to share.
Oftentimes people use the Win32_SystemEnclosure class and look up the chassis type. While this is a good and accurate way to do it, the fact that you have to parse all the possible outcomes and then group them into “laptop” and “not laptop” is a bit of a pain. An alternative method that we found is to use the Win32_PhysicalMemory class instead. In there is a property called FormFactor that can be put to use. We found that a value of 12 indicates a laptop. 12 means that the memory in the machine is SODIMM, a memory type used almost (but not totally) exclusively by laptops. While this isn’t as foolproof as the system enclosure, I suspect that it will work for most of you out there who are trying to differentiate laptops from desktops in your ConfigMgr inventory.
There is one downside, which is that the Win32_PhysicalMemory class is not collected by default as part of your hardware inventory. Adding it is easy enough, though: just modify your SMS_DEF.MOF and off you go.
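Once that class is being inventoried, a collection query along these lines should pull the likely laptops. This is a sketch: the inventory class name SMS_G_System_PHYSICAL_MEMORY assumes you enabled the class with its default naming, so verify it against what your SMS_DEF.MOF edit actually created:

```sql
select SMS_R_SYSTEM.ResourceID, SMS_R_SYSTEM.ResourceType,
SMS_R_SYSTEM.Name, SMS_R_SYSTEM.SMSUniqueIdentifier,
SMS_R_SYSTEM.ResourceDomainORWorkgroup, SMS_R_SYSTEM.Client
from SMS_R_System
inner join SMS_G_System_PHYSICAL_MEMORY
on SMS_G_System_PHYSICAL_MEMORY.ResourceID = SMS_R_System.ResourceID
where SMS_G_System_PHYSICAL_MEMORY.FormFactor = 12
```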
All kinds of news today. A piece I find interesting… ConfigMgr is no longer going to be available as a standalone product, but instead as part of a suite. A good write-up is linked below.
http://www.zdnet.com/blog/microsoft/microsoft-details-new-licensing-plans-for-its-cloud-management-suite/11673?tag=content;feature-roto
What I like about Configuration Manager is that there are always new things to learn. I had one of those learning points recently. I was with a customer, and after explaining that during a PXE boot WDS would detect the architecture of the client machine and send the x86 or x64 WinPE that matched, we did a demo. We booted an x64 machine and, after seeing the x64 detection, we saw it download the x86 WinPE, proving me wrong. As I swallowed my pride, they agreed to do some quick testing for me and we found out how it really works. I took that back and confirmed with some other smart folks at Microsoft.
As it turns out, if you watch the SMSPXE.log you will see that there is a look-up that occurs when the PXE client connects. It finds the newest OSD task sequence advertisement targeted to that machine, looks up the boot WIM architecture, then sends that. This would seem like a reasonable thing to do, as it is most likely the same WinPE architecture about to be needed by the task sequence. If, however, you are set up with several task sequences to, say, the Unknown Computers collection, then you may or may not get the expected architecture. PXE doesn’t know which task sequence you are going to run, so this “best guess” gets you something to work from when picking your actual task sequence.
So, if you have an x64-capable client you will get an initial WinPE that is x86 or x64, depending on what the boot WIM is for the newest task sequence for that machine. I didn’t test, but for an x86 machine you should only get an x86 boot WIM; I have never seen otherwise for that architecture.
I learned something new, and hopefully you have as well.
While the ConfigMgr product has matured over the years, the general category of client health continues to make admins curse. For those that don’t know what I’m referring to (lucky folks), “client health” generally refers to making sure all the components, drive space, network connectivity, etc. are in place so ConfigMgr can monitor and change the client machines as you intend.
There are several tools out there which have been built to try to identify the myriad of possible causes for a client to go “belly up” on you. I know several companies which have built their own home-grown networks of scripts and detection logic as well. Nothing was perfect, however, as I don’t know of any common standardization within the ConfigMgr community. Seeing this, some fellow PFEs decided to give it a shot and put together a framework to help tackle this issue. It is available as an offering through Microsoft Premier services because it does a lot and needs some good understanding to get it set up and used correctly. We call it the Microsoft System Center Configuration Manager Client Health and Remediation Service.
Notice I called it a framework, not a tool. The folks that built this did so in a way that is very adaptable. It has within it a set of rules on things to check and determine client problems, but it is done in such a way that every company can adapt it to their needs. For instance, while one customer might say that a machine with less than 1 GB of free disk space is a problem, others may not care until it gets down to less than 100 MB.
The toolset is still being developed, with more improvements planned, but it can do a lot to help admins today. The biggest critique I have heard about it is that it is mostly a problem identification framework, and less about remedies. We do have some sample code for some suggested remedies, but the thought is that many of the folks out there already have remedies for most things, they just don’t know where to apply those remedies. You could easily apply your remedies to this framework.
I am now accredited to provide this service to customers with Microsoft Premier contracts, and I am excited to start helping folks get this “hurdle” in ConfigMgr under control. I hope to see it go away, just as the old duplicate GUID problem that used to haunt us all did.
UPDATE 7/3 - Corrected the hyperlink to point to the SCCM 2012 datasheet.
A co-worker of mine pointed out a large number of 30102 and 30103 status messages being generated by FEP 2010 clients. If you run a status message query on your system you can see them, but nothing in the UI brings them to your attention by default. They aren’t the most useful status messages.
I don’t normally advocate hiding things from yourself, but unless someone can point out a use to me, I will make an exception for these. If you don’t want to bloat your DB with them, I suggest you create a status filter rule to block them from processing. Remember, status filter rules process top to bottom, so put your new rule before the one that writes to the DB, have it block further processing, and all should be good.
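A sketch of what such a rule might look like (the rule name is made up, and the exact action labels vary slightly by product version, so treat this as a rough guide rather than the literal dialog text):

```
Status Filter Rule: "Discard FEP 30102 noise"   (create a matching rule for 30103)
  Matching criteria:
    Message ID: 30102
  Actions:
    [ ] Write to the site database
    [x] Do not process lower-priority status filter rules
```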
If you spend much time around Configuration Manager you become aware that a lot of it runs through WMI. WMI is “Windows Management Instrumentation” and is essentially Microsoft’s implementation of an industry standard called Web-Based Enterprise Management (WBEM).
If you are building task sequences and want to provide intelligent branching, digging into hardware inventory to possibly extend it, or working with the ConfigMgr SDK, you are playing around with WMI/WBEM. One tool I like to show users who are new to WMI is one which is built right into every Windows OS, called WBEMTEST. As you read this blog, feel free to follow along, as I’m betting most readers are on a Windows box. Go to Start and type “WBEMTEST” into the search or run box.
Before I get too far, let me note that there are many WMI tools out there. On a regular basis I don’t use any of them. Why? Many are friendlier, with better-designed UIs, but the big downside is that they have to be downloaded every time you want to use them. WBEMTEST is at your fingertips on any Windows machine and OS; learning to use it will speed your troubleshooting compared to going in search of your other favorite WMI tool.
When you launch WBEMTEST, different OSes work slightly differently. Some will automatically connect to a namespace; others (like Win7) will not. If you aren’t connected, you can hit the Connect button, make sure “root\cimv2” is selected, then hit Connect again. Now you are back in the main UI with everything “lit up” and ready to go. A namespace is, to relate it to something most folks are familiar with, a directory within WMI. You can change directories to all kinds of namespaces. CIMV2 is where a good amount of hardware information is kept.
From here you have lots of options. If you are already a WMI expert and know what you are after, you could hit the Query button and type your WMI query to look at the results. For the beginner, just exploring WMI for the first time, I suggest you hit the “Enum Classes” button. In the pop-up choose “Recursive” and hit OK. You have just done the equivalent of a DIR to list all the contents of the namespace. Everything with underscores (__) at the front of the name is what I call WMI overhead; this is what helps WMI be WMI. In most cases you will skip over that and look at the other stuff. For this discussion I suggest going to Win32_Service and double-clicking.
You have now opened up this “file” in WMI, which lists all the collected info about services on your machine. The file/directory analogy starts to break down at this point. Know that this is some definition information and skip past it by clicking the Instances button. You now get a list of all the services on your machine. Pick a service, such as “RemoteRegistry,” and double-click it. You now get to see all the info about that specific service. This view is kind of a pain to look at, however, so I recommend you click the “Show MOF” button to get a nicer view of it all. Here you can see the service state, description, start mode, etc.
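Once you know where the data lives, the Query button mentioned above gets you there directly. For example, a WQL query like this (the property list is just a sample) returns the same RemoteRegistry details without all the clicking:

```sql
select Name, State, StartMode, Description from Win32_Service where Name = "RemoteRegistry"
```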
Snoop around in WMI and see what you can find. Know that once you find something you could write queries in your task sequence to use that data as decision points, you could collect it via hardware inventory, or you could script against it using the SDK. All kinds of things open up for you. A few other namespaces you might be interested in are
Have fun exploring!
OK, I will reveal one of my own skeletons today. I have set up the FEP definition update tool several times for customers, but in my own lab I was banging my head against the wall trying to figure out why it would not run correctly. For those that don’t know, this is a tool run via Task Scheduler to automate the deployment of FEP definitions via ConfigMgr without ongoing admin interaction. No matter what I did, it just would not run, and I would get an error of 1 back. Finally, with the help of my fellow PFE, Richard Balsley, I figured out that this was due to my use of System Center Updates Publisher (SCUP) on my test bench, and a bug in the interaction between the two tools.
Just recently this bug was fixed and a new version of the tool was placed on the website. There were some other updates to the tool as well which Jason Lewis has nicely documented in a blog post as well. If you have Forefront Endpoint Protection 2010 and aren’t using the tool yet I highly recommend you read the blog post and get it set up.
Part 2
On a separate, but related note, I had another cause for the 0x1 error with one of my customers with a not-so-obvious cause. We had copied the command line options from the published technet article, which turned out to be the problem. In that article the quotes around the article ID are actually smart quotes and will not interpret correctly from the command line or task scheduler. If you replace the quotes with normal ones it should work. I believe the article is going to be corrected so hopefully by the time you read this it will no longer be an issue.
Heads-up to everyone out there. The RC of System Center Configuration Manager 2012 is now available for public download. You can find it at http://www.microsoft.com/download/en/details.aspx?id=27841. Notice how Forefront Endpoint Protection 2012 RC is also included.
One of my customers asked me to build this example query and I figured I would share it with everyone, not just one company. It can be handy when you are doing OS deployment testing and you need to look up a machine based on its MAC address to delete it so it becomes an “unknown computer” again. Keep in mind that in the ConfigMgr database, the MAC address is in the form ZZ:ZZ:ZZ:ZZ:ZZ:ZZ.
select distinct SMS_R_System.Name, SMS_R_System.MACAddresses from SMS_R_System where SMS_R_System.MACAddresses = ##PRM:SMS_R_System.MACAddresses## order by SMS_R_System.MACAddresses
“This software (or sample code) is not supported under any Microsoft standard support program or service. The software is provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the software and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the software be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the software or documentation, even if Microsoft has been advised of the possibility of such damages.”
Before System Center Configuration Manager was known as “ConfigMgr 2007” or “SCCM” it was Systems Management Server (SMS). You have probably all heard the joke about what SMS stood for; Slow Moving Software. Well, those that have not adjusted their lifestyles to suit the speed (and I will add, power) of the product now have something to keep them busy. Next time you are waiting for some OS deployment to complete, why not do a few ConfigMgr quizzes to see how smart you really are. Who knows…, maybe you will learn something while you wait.
Last night I decided to tackle a minor, but annoying, problem in my household. I have a Zune and I use my 10 credits every month to buy music that I like. I don’t have physical media for these songs and, perhaps because I’m just old school, I wanted physical copies of my music as a backup. The secondary purpose was to be able to play the CDs in my car. My 3rd purpose was to write the music not as a typical music CD but as data, so I could fit more songs per CD. My 4th purpose was to start building my PowerShell skills, which I’m lagging behind much of the world on developing.
When I purchase my music I have been adding it to a playlist, for tracking. The physical files are in various directories and it was going to be a pain to track them all down to copy to the CD to burn. Instead I decided it was time to brush up on my PowerShell, and I wrote the script below one night.
You will notice there are some TODO items. There are a few “fit and finish” pieces to handle, but the script functions as is. I will be cleaning up my personal copy but I wanted to get this quickly posted for my friends who wanted to see it.
The script accesses a saved ZUNE playlist, finds all the files from that playlist, and copies them all to one single directory which you can then burn to CD.
##------------------------------------------------------------------------------------------------------------
## Created by Michael Griswold on 8-16-11
## Last updated: 8-17-11
##
## This script will parse the XML from a Zune playlist file (ZPL) and then copy each listed file to a specific
## directory. You can then burn to CD as native MP3 files for playback on MP3-aware CD systems.
##
## Reminder: Run this to allow execution of this script on your PC: set-executionpolicy remotesigned
##
##------------------------------------------------------------------------------------------------------------

# TODO: Add in ability to read in values from the command line.
[string]$ZPLFilepath = "C:\Users\Mike\Music\Playlists\To Burn.zpl"
[string]$Outputpath = "c:\temp\ZPLOutput\"

## Prompt for ZPL file, with full path
"ZPL file path is hard coded"
"Output directory is hardcoded (and must be pre-created) to c:\temp\ZPLOutput"

## Parse XML file
[System.Xml.XmlDocument]$xd = New-Object System.Xml.XmlDocument
$xd.Load($ZPLFilepath)

## Fetch the music files and paths
$nodelist = $xd.SelectNodes("/smil/body/seq/media")
Write-Output $nodelist
# $nodelist | Get-Member | more

## Parse all the media tags to find the file name
foreach ($mediaNode in $nodelist)
{
    # "Entering foreach"
    # $mediaNode | Get-Member | more
    $path = $mediaNode.src
    Write-Output $path

    ## Copy the files to a final location
    Copy-Item $path $Outputpath
}

# TODO: Add validation on user inputs
Today’s post falls into the category of things I feel a little guilty about posting, but I will anyway. I’m not saying anything new here, just trying to spread the knowledge because I keep seeing customers hit the same problem, which has a simple solution. If you want to skip reading my blog you can go to http://blogs.technet.com/b/configurationmgr/archive/2008/11/12/configmgr-2007-osd-task-sequence-fails-with-unspecified-error-80004005-and-setupact-log-indicates-invalid-product-key.aspx and get the details.
What I see with customers is that they are building a new OSD task sequence and they enter their company product key into the task sequence, only to have it fail, usually with an 80004005 error code. This occurs because the product key they entered was a MAK (Multiple Activation Key) and not a standard product key, which ConfigMgr just doesn’t know how to handle. The solution is easy for newer (Vista and higher) OSes: leave the key blank and add a step to your task sequence, after OS install, to run the following command line and set the product key:
SLMGR.VBS -ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx
I had one customer encounter this with an older OS, Windows XP. We didn’t actually dig into the issue, so if that is your situation I can only offer you the advice I offered them, which I have not yet had validated (if it works or not, please let us all know in the comments below): add the key to a custom unattend.txt file and then reference that text file in the “Apply Operating System” step of your task sequence. This might pass it along correctly.
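For reference, in an XP-era unattend.txt the key lives in the [UserData] section. A sketch, untested for this scenario, with placeholder values:

```ini
[UserData]
ProductKey="xxxxx-xxxxx-xxxxx-xxxxx-xxxxx"
FullName="Your Name"
OrgName="Your Company"
```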
I had an interesting case a few weeks back. I was on-site going over various aspects of SCCM with a customer and we wanted to do some patch deployments in their test lab, which they previously setup. I quickly became aware that many of their clients were not communicating with the WSUS machine to scan for updates. After a little troubleshooting we saw that the WindowsUpdate.log was showing a 0x80244021 error, which indicates a proxy problem typically. We checked the proxy server and saw that the client was indeed hitting the proxy and getting denied. The odd thing was that, being an internal client, it should have never been hitting the proxy in the first place.
Searching around for known workarounds, the simple solution seems to be to add a rule to the proxy to allow traffic redirection to the WSUS machine. For my customer this wasn’t acceptable because (and I agree with them) it puts unnecessary load on the proxy and just should not be needed. I called upon the great experts within Microsoft and found a little-known problem related to WPAD files. It turns out there is a case sensitivity issue. In the proxy they were setting an exception for “wsus.company.domain.com”, but in SCCM they had set the FQDN for the WSUS/SUP as “WSUS.COMPANY.DOMAIN.COM”, hence the case mismatch. We first changed the proxy to use the all-capital version, but found that the WPAD file the client receives from the proxy rendered it lowercase again. We then changed SCCM to use the lowercase FQDN and saw WUA connect to WSUS correctly and stop hitting the proxy.
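If you suspect the same mismatch, you can read the WSUS server value the WUA client is actually using from the standard Windows Update policy registry key and compare it, character for character, against the proxy exception list entry:

```
reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer
```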
My hat is off to Richard Balsley (PFE) and David Chamizo (MCS) for helping identify this very odd behavior.
When I talk with customers new to SCCM there are two things I advise them to not mess with: Heartbeat discovery and the “All Systems” collection.
Heartbeat discovery is configurable, and it is OK to adjust the interval to meet your company’s needs, but don’t turn it off. It isn’t obvious to the new SCCM admin, but several maintenance tasks key off that heartbeat DDR. Disabling heartbeat discovery may cause systems to be deleted from SCCM earlier than you expected or intended. I don’t ever recommend turning it off.
The All Systems collection is a different but similar issue. That collection has a default object ID of SMS00001. If you delete that collection you can, of course, re-create a collection with the same membership and the same name. It will, however, not get the same ID. I have seen odd behavior when that collection is missing. I have never had the time to investigate the exact problem, but in general I just suggest not deleting it. If you do delete it, it can be restored; see http://blogs.msdn.com/b/vinpa/archive/2010/03/17/how-to-restore-the-all-systems-collection.aspx for an example.
File this under the category “won’t kill ya, but best not to mess with it”.
For those who have been using SMS and SCCM for a while this post will tell you nothing new, so check out the new picture at http://bing.com and then go search for something else interesting to read.
I’m a big advocate for avoiding duplicate work. To that end there are many things which I work on or encounter in SCCM that can be shared, and should be in my personal opinion. One area where such sharing has done well is around collection of information via hardware inventory.
Changing the SMS_DEF.mof and Configuration.mof to collect additional information from your SCCM clients is a key concept, but for folks new to SCCM there is a learning curve on how to modify MOF files, what they are, and how to get the information they want. Thanks to Sherry Kissinger, and many of the folks she has worked with, there is a handy tool to help you collect the information you seek without yet being an SCCM guru. It is the Mini Monster MOF builder (http://myitforum.com/cs2/blogs/skissinger/archive/2008/10/28/mini-monster-mof-builder.aspx). A simple download, search, and click of a button gets you the info you need; then copy and paste to your SCCM site and wait for the info to start rolling in.
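To give a feel for what such edits look like, reporting classes in SMS_DEF.mof are toggled with SMS_Report qualifiers. Here is a simplified sketch (not a complete or exact class definition, just the general shape of an inventory reporting entry):

```mof
// Simplified sketch of an SMS_DEF.mof reporting class.
// SMS_Report (TRUE) on the class and on individual properties
// turns on hardware inventory collection for them.
[ SMS_Report     (TRUE),
  SMS_Group_Name ("Services"),
  SMS_Class_ID   ("MICROSOFT|SERVICE|1.0") ]
class Win32_Service : SMS_Class_Template
{
    [ SMS_Report (TRUE), key ] string Name;
    [ SMS_Report (TRUE)      ] string StartMode;
    [ SMS_Report (TRUE)      ] string State;
};
```

The Mini Monster generates blocks like this for you, so you only need to know which class and properties you want, not the qualifier syntax.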
Next time you want to extend the inventory collection capabilities of your SCCM infrastructure, check out the Mini Monster MOF and see if you can save yourself some time and effort.
Hi folks. For those who have been using SCCM and SCUP for a while, you may recall that Adobe had a limited catalog available for distribution via SCUP. Adobe was removed from the “catalog of catalogs” which SCUP accesses. Then they offered a direct download of their Flash catalog. Now they are coming around to include many more of their products. They are not yet back in the “catalog of catalogs”, but I’m told that is coming soon. For more info see the blog post and links at http://blogs.technet.com/b/configmgrteam/archive/2011/02/11/announcement-adobe-acrobat-and-reader-x-scup-catalogs-are-here.aspx.
Setting up different levels of Admin UI permissions for SMS or SCCM is not always straightforward and obvious. It is, however, fairly flexible and granular. I’m not going to cover how to get console connectivity, as I think that is well covered elsewhere on the internet; things like membership in the SMS Admins group are assumed to already be in place. Instead I’m going to focus on a few scenarios which, hopefully, will help folks build out their security model in the UI itself.
One quick point of clarification before we dive in: class vs. instance level permissions. Class level permissions are set at the top level of objects. Classes are things like Collections, Packages, Queries, etc. Instances of those classes can have their own permissions in most (but not all) cases. So “All Systems” is an instance, as is the “Office 2010” package. Class level permissions flow down to instances of those classes. Instance level permissions, however, do NOT flow down to the instances beneath them. As an example, take the following collection structure:
Collections
    All Systems
    Department1
        Laptops
        Desktops
Granting a user class level permissions on “Collections” will give that same permission to every collection. Granting a permission on the “Department1” instance will not give them any permissions to “Laptops” or “Desktops”.
The typical groups I see companies want to utilize when setting up permissions are: 1) Uber-admins, 2) Department admins, and 3) Helpdesk operators. Uber-admins have full permissions to everything and are the easiest to configure. When SMS or ConfigMgr is installed, the uber-admin permissions are granted to the account that did the install and to the local system account. You can clone either of these accounts to grant uber-permissions to another person or group. I usually clone the system account, simply because it has all the class level rights but none of the clutter of instance level rights that accumulates for a user account that has been active in the console.
Helpdesk operators are the next easiest to set up. What helpdesk folks may use ConfigMgr for tends to vary. For my example I will assume they need to look at machine inventory information, put machines into pre-created collections to get software delivered to them (pre-advertised), and have Remote Tools access to assist end users. The first step is permissions to collections. Being the helpdesk, I’m assuming they need to assist all managed machines, so we would grant them class level permissions: Read (to see the collections), Read Resource (to see details of those machines, such as inventory), Use Remote Tools (to remotely connect to machines), Modify (to allow adding machines to collections), and perhaps View Collected Files (to see anything brought back by software inventory). At a minimum, that is all that is needed. Permissions to queries might let them do a little more self-investigation of issues, but most companies I work with escalate that kind of activity to a higher support tier.
Now for the hard, yet common, request: a company with several departments, each wanting a level of autonomy from the others. The root of it all is to allow each group to touch only their own machines and not the machines of others. From there you either grant permissions to shared objects or keep them carefully separated. Shared objects are things like packages, advertisements, patch deployments, etc. For my example I will go for total isolation of each department, but allow each department to do its own software and patch deployments.
Step 1 – Collections. Start by creating a collection for each department. In that collection make a query for the machines that belong to that department (hopefully based on OU membership, machine naming convention, or something similar). At the collection class level, grant each department Create, Delegate, and Advertise. Yes, this means they can create collections anywhere, so you have to instruct them to only create under their department-specific collection to avoid confusion. To help with this, withhold the class level Read permission so they can only see the collections you allow. On their department-specific collection, grant instance permissions of Read, Read Resource, Advertise, Modify, Modify Resource, Modify Collection Settings, View Collected Files, and Use Remote Tools. The department admins will have full control over any collections they create. There is a catch: only they will have permissions on the collections they create, so they will need to get in the habit of granting permissions to their admin group every time they create a new collection.
Step 2 – SWDist. We can’t keep people from different departments from seeing each other’s packages, but we can isolate which ones they can use. Note that you can use folders to visually organize things, but folders are simply UI display conveniences and do not carry or isolate permissions in any way. At the package class level, grant Read, Distribute, Delegate, and Create. Now when a user creates a package they will get full rights to use and distribute it, but only for the packages they create (again, they need to grant permissions to their department group). For advertisements, grant Read, Create, and Delegate. This will unfortunately let dept. 1 advertise packages made by dept. 2, though only to dept. 1 machines, and at the risk of dept. 2 admins changing something in those packages unexpectedly.
Step 3 – Site. Up to now we have kept folks out of all the site settings. Unfortunately, they need to get access to those settings to enumerate the distribution points for software distribution. This means that at the site level we need to grant Read permission to each dept. group.
OS deployments, patch management, etc. have similar permission structures that need to be set up. I can cover those in a future blog post if folks express an interest. If you take this, try it in your environment, and do some testing, you may need to fine-tune things to get your desired results. Hopefully this will help many folks get started and help you understand a little better how the security model works. If you find I messed anything up, let me know and I will clarify or correct it for the benefit of others.
Today’s trick is one of those things that those that know think everyone knows, and those that don’t know never get told about. It is the ability to do auto-complete from a command prompt.
The command shell (you know, cmd.exe) has an awareness of the file and folder structure. You can use this to your advantage when getting around. For example, if you are in c:\users and want to change directories to \windows\system32, all you need to type is “cd \wi” and then hit the TAB key. The line should lengthen to “cd \windows” thanks to auto-complete (unless you have some other root directory starting with the letters “wi”). Now continue the command line by typing “\s” and hit TAB multiple times until “system32” comes up, then hit Enter to actually change directories. This little trick can be very handy for finding directories when you can’t recall their exact names. Try typing “cd \prog” to get “program files” to come up, then type “\” and just hit TAB until you get to the directory you want. If you go past the directory you wanted, you can either keep hitting TAB to loop through everything or hold Shift and hit TAB to go through the directories and files in reverse order.
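The sequence above looks something like this pseudo-transcript (where <TAB> marks a press of the TAB key; the exact matches cycled through depend on what is in your directories):

```
C:\Users> cd \wi<TAB>
C:\Users> cd \Windows              (auto-completed)
C:\Users> cd \Windows\s<TAB>
C:\Users> cd \Windows\System32     (keep pressing TAB to cycle other "s" matches)
```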
This doesn’t only work with CD commands; it works for most command-line tasks. Next time you are in a directory full of logs, try typing “notepad ” and then hitting TAB until you find the log you want to open.
Spread the word and don’t look down on anyone who didn’t know the trick!