Michael Griswold's SCCM Tips and Tricks

Things I have learned and want to share

  • Block accidental task sequence execution

    There are several tips out there on how to stop accidental deployment of task sequences (one of my favorite links is a blog post by Frank Rojas).  One of my customers came up with a fairly good idea on this topic that I wanted to share as well.

    Like programs, a task sequence can be set to run only on certain supported platforms.  If you have a scenario where you only want a task sequence to run on a machine that has done a PXE boot or booted from media, you can use this to your advantage. On the task sequence, set the supported platforms to some OS you don’t have, and don’t expect to have, in your environment.  Now, I know that everyone reading this has likely deployed every Microsoft OS shortly after we release them, so it might be hard to find an unused one. I suggest checking your inventory and seeing if, oh, Vista is in much use in your enterprise.

    Whatever you pick, it will limit the task sequence to executing only on that OS and from WinPE.  This means you can advertise (only by accident, of course) that TS to all systems and have less risk of it accidentally running on machines you don’t want it to.  Cool, eh?

  • ConfigMgr, OSD, and MAK keys

    Today’s post falls into the category of things I feel a little guilty about posting, but I will anyway.  I’m not saying anything new here, just trying to spread the knowledge because I keep seeing customers hit the same problem, which has a simple solution.  If you want to skip reading my blog you can go to http://blogs.technet.com/b/configurationmgr/archive/2008/11/12/configmgr-2007-osd-task-sequence-fails-with-unspecified-error-80004005-and-setupact-log-indicates-invalid-product-key.aspx and get the details.

    What I see with customers is that they build a new OSD task sequence and enter their company product key into the task sequence, only to have it fail, usually with an 80004005 error code.  This occurs because the product key they entered was a MAK (Multiple Activation Key) and not a standard product key, which ConfigMgr just doesn’t know how to handle.  The solution is easy for newer (Vista and higher) OSes.  Leave the key blank and add a step to your task sequence, after OS install, to run the following command line and set the product key:

    SLMGR.VBS -ipk xxxxx-xxxxx-xxxxx-xxxxx-xxxxx

    I had one customer encounter this with an older OS, Windows XP.  We didn’t actually get into the issue, so if that is your situation I can only offer the advice I offered them, which I have not yet validated (whether it works or not, please let us all know in the comments below): add the key to a custom unattend.txt file and then add that text file to your task sequence in the “Apply Operating System” step.  This might pass it along correctly.

  • WUA can’t contact WSUS

    I had an interesting case a few weeks back.  I was on-site going over various aspects of SCCM with a customer and we wanted to do some patch deployments in their test lab, which they had previously set up.  I quickly became aware that many of their clients were not communicating with the WSUS machine to scan for updates.  After a little troubleshooting we saw that the WindowsUpdate.log was showing a 0x80244021 error, which typically indicates a proxy problem.  We checked the proxy server and saw that the client was indeed hitting the proxy and getting denied.  The odd thing was that, being an internal client, it should never have been hitting the proxy in the first place.

    Searching around for known workarounds, the simple solution seems to be to add a rule to the proxy to allow traffic redirection to the WSUS machine.  For my customer this wasn’t acceptable because (and I agree with them) it is unnecessary load on the proxy and just should not be needed.  I called upon the great experts within Microsoft and found a little-known problem related to WPAD files.  It turns out that there is a case sensitivity issue.  For the proxy they had set an exception for “wsus.company.domain.com”, but in SCCM they had set the FQDN for the WSUS/SUP as “WSUS.COMPANY.DOMAIN.COM”, hence the case mismatch.  We first changed the proxy to use the all-capital version, but found that the WPAD file the client receives from the proxy had it in lower case again.  We then changed SCCM to use the lower-case FQDN and saw WUA connect to WSUS correctly and stop hitting the proxy.
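    To see why the mismatch matters, here is a throwaway illustration (plain Python, nothing SCCM runs): a literal string comparison, which is effectively what the exception matching was doing, treats the two spellings as different hosts, while normalizing the case makes them match.

```python
# Demonstration only: the proxy exception and the SUP FQDN as the customer
# had them configured. A case-sensitive comparison sees two different hosts.
proxy_exception = "wsus.company.domain.com"
sup_fqdn = "WSUS.COMPANY.DOMAIN.COM"

print(sup_fqdn == proxy_exception)          # case-sensitive: no match, proxy gets hit
print(sup_fqdn.lower() == proxy_exception)  # normalized: match
```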

    My hat is off to Richard Balsley (PFE) and David Chamizo (MCS) for helping identify this very odd behavior.

  • How to manually clean your SCCM server roles off the box

    Occasionally it becomes necessary to manually clean SCCM components off a server.  When that happens I usually tell customers to just flatten and rebuild the box, but that is not always an option.  In those cases I have a mental list of things I go through to remove all the traces that SCCM/SMS could leave on the machine.  Not every machine will have all of these locations populated. Every scenario has its nuances, so don’t blindly follow this if you want a clean box; consider whether each item is relevant to what you are trying to accomplish. If you think I missed anything, comment below and I’ll update as appropriate.

    • File System
      • \Program Files\Microsoft Configuration Manager
      • \Program Files\SMS_CCM
      • \sms
      • \windows\ccm
      • \windows\ccmsetup
      • \windows\ccmcache
    • Registry
      • HKLM\software\Microsoft\sms
      • HKLM\software\Microsoft\ccm
      • HKLM\software\Wow6432Node\Microsoft\sms
      • HKLM\software\Wow6432Node\Microsoft\CCM
    • Services
      • SMS_Executive
      • SMS_Site_Component_Manager
      • SMS_Site_Backup
      • SMS_Site_SQL_Backup
      • SMS_SITE_VSS_Writer
      • SMS Agent Host
    • WMI
      • root\sms
      • root\ccm
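    If you work through this list often, it can help to script it.  Below is a sketch (Python, dry-run only) that just prints the corresponding removal commands for review rather than deleting anything itself.  The C: drive letter and the wbemtest reminder for the WMI namespaces are my own assumptions; sanity-check every line of the output against your scenario before running any of it.

```python
# Dry-run sketch: print the cleanup commands for the locations listed above.
# Nothing is deleted by this script itself; review the output first.
# Assumptions: system drive is C:, and WMI namespaces are removed separately
# (e.g. with wbemtest), so those lines are emitted as reminders only.

FOLDERS = [
    r"C:\Program Files\Microsoft Configuration Manager",
    r"C:\Program Files\SMS_CCM",
    r"C:\sms",
    r"C:\windows\ccm",
    r"C:\windows\ccmsetup",
    r"C:\windows\ccmcache",
]
REG_KEYS = [
    r"HKLM\software\Microsoft\sms",
    r"HKLM\software\Microsoft\ccm",
    r"HKLM\software\Wow6432Node\Microsoft\sms",
    r"HKLM\software\Wow6432Node\Microsoft\CCM",
]
SERVICES = [
    "SMS_Executive",
    "SMS_Site_Component_Manager",
    "SMS_Site_Backup",
    "SMS_Site_SQL_Backup",
    "SMS_SITE_VSS_Writer",
    "SMS Agent Host",
]
WMI_NAMESPACES = [r"root\sms", r"root\ccm"]

def cleanup_commands():
    """Build (but do not run) the removal commands for each location."""
    cmds = [f'rmdir /s /q "{f}"' for f in FOLDERS]
    cmds += [f'reg delete "{k}" /f' for k in REG_KEYS]
    cmds += [f'sc delete "{s}"' for s in SERVICES]
    cmds += [f"REM remove WMI namespace {ns} (e.g. via wbemtest)"
             for ns in WMI_NAMESPACES]
    return cmds

if __name__ == "__main__":
    print("\n".join(cleanup_commands()))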
  • Which Prestart command will I get?

    For those of you doing OS deployments with System Center Configuration Manager 2012 (I bet the same info below applies to the 2007 version, but I haven't tested it), you may have gotten into prestart commands to run a custom script, or perhaps an HTA, to better control and customize the deployment process for your environment.  I have a customer doing this, and in the process they noted that there are actually two different places you can set the prestart command (also known as a pre-execution hook).  Depending on how you deploy your OS, you may only have one place that will work for you, but what happens if you set different scripts in both locations?

     

    Setting in WinPE

    The first place you can set it is on the boot WIM itself.  You specify a script source location and a command line.


    This ends up making it into the TSCONFIG.ini in the root of the WinPE drive when it boots up, and looks something like this (the command line is different because of the timing of my screen shots, but you get the point I hope):

    [CustomHook]
    CommandLine=dumpvar.vbs
    Source=SMS10000

    When the task sequence process runs, it finds this TSCONFIG.ini and launches the script as desired.

     

    Setting during Boot Media Creation

    The other option is to add the hook during creation of your boot media.

     

    In this case, the package gets pulled from the DP and put into the boot media alongside WinPE, and the TS variables are modified so that on startup, when the variables are read, the value is seen and the hook is executed.

     

    So what happens if you set both?  Take your guess before reading on and see how smart you are.  I’ll give you a hint… I originally thought both scripts would execute..., and I was wrong.

     

    The process SCCM goes through is to check the variables to see if the media set them to have a hook.  If so, that is what executes, and the fact that TSCONFIG.ini has a value is ignored.  Only when the variable is unpopulated will the INI be checked and that hook executed.

    8/6 update - Fixed an image display problem.  Screen shots and logs may not line up correctly on script names and such.  Sorry.

  • Get more from your queries

    Every SCCM admin has figured out how to create a collection with a query rule to dynamically update its membership (you don't just do direct rules, I hope).  Most admins have also learned that the columns a collection can return are limited, but if you create a query in the Query node of the UI instead of under a collection, you have more control over which columns get returned.  Because of this, it can be handy to run a query to bring back the info you are after, like a machine name and MAC.  You can't get the MAC under a collection, but you can as part of a query.

    A piece of frustration kicks in at this point for most folks, and that is the fact that your right-click options on a query can be… missing a few things:

    While I can't promise a solution to get every option to appear (I have been trying to figure out how to get a delete option to show up), I can share that the proper columns returned will provide more right-click options:

    The UI can only offer options if it has the necessary data returned to make those options work.  Here is the query syntax to get more options to show up in your Configuration Manager console:

    select distinct SMS_R_System.ResourceId,
        SMS_R_System.ResourceType,
        SMS_R_System.Client,
        SMS_R_System.Name,
        SMS_R_System.OperatingSystemNameandVersion,
        SMS_R_System.IPAddresses,
        SMS_R_System.SMSAssignedSites,
        SMS_R_System.SMSInstalledSites,
        SMS_R_System.Active,
        SMS_R_System.SMSResidentSites,
        SMS_R_System.MACAddresses,
        SMS_R_System.ClientVersion,
        SMS_R_System.NetbiosName,
        SMS_R_System.SystemRoles
    from SMS_R_System

  • Getting started with RBA in System Center Configuration Manager 2012

    RBA, or Role-Based Administration (also known as RBAC, Role-Based Access Control), is new with the SCCM 2012 product.  Many customers I encounter are excited by the separation and flexibility it can provide, but daunted by the work of getting it set up initially.  There are a few handy tricks I share time and again with customers at this stage, and I thought it might be nice to share them with everyone.

    1 - Create a security role template

    If you get into RBA you will find that you can copy an existing role and then modify it for your needs.  However, this means you have to go and remove all those perms that came over from your copy…, and you have to do this for each role you make.  Save yourself some time: the very first time, make a copy of the Remote Tools Operator, which has the fewest existing perm lines that need to be cleared.  I like to change all the existing lines to ‘no’ and then add a single perm at the very top, under Alert Subscription.  Save this as “_Template” and from then on you can just copy it as your starting point, making things a little easier.


    2 - Think ahead about your potential use of scopes

    If today you manage desktops only, and that is all you will ever have in SCCM, then stick to the default scope.  If, however, you think that some day you may want to add servers and have management of them separated out, then the time to create the “desktop” and “server” scopes is right after SCCM finishes installation.  Sure, you can do it any time, but the problem is that any object you create going forward is tagged with the scopes you are in… which means they can all be tagged as part of the “desktop” scope, or they can be tagged as part of the “default” scope and you can go change them all to “desktop” later (not simple, but possible).

    3 – Use RBAViewer

    This tool is free as part of the SCCM 2012 R2 toolkit. It will make visualizing and exploring the different perms necessary to reach your end goals easier and clearer.  There is also a helpful spreadsheet made by Brent Dunsire that many people find useful.

     

    As a little side note, I was asked to figure out the minimum permission necessary to allow an admin to block/deny devices from communicating with SCCM.  That requires the “Read Resource” perm under collections.  Of course, if you can’t see the collections or connect to the admin console you will probably need an additional perm of some kind.  “Read” under collections was the easy one I used.

  • 404 for Driver packages, can’t find content.

    Driver management has never been a fun aspect of Configuration Manager.  It is one of those necessary evils that enables the bigger goal of great OS deployment.  Done according to product design, it consists of 3 general steps:

    1. Import drivers into driver catalog
    2. Add drivers to driver packages
    3. Add boot critical drivers to WinPE images

     

    Adding drivers to the driver catalog can, if not done with foresight, become a big mess.  Some advice floating around the net recommended avoiding this mess by skipping step 1 and just creating a flat directory structure and pointing to it as the source for your driver package.  To my knowledge this was never recommended by Microsoft, or supported as a proper driver management technique, but it worked, and it is hard to argue with something that works and saves you time… until now.

    Under System Center 2012 Configuration Manager this method is blocked in the UI.  If you try to make a driver package and point to a directory that already has files in it, you will get an error.  However, if you migrate what you had under SCCM 2007 to 2012, the driver packages will migrate just fine and all will seem good..., until you run your first task sequence that uses those driver packages.

    When you run that task sequence, it will fail to find content for the driver package.  If you look in the smsts.log you will find a 404 error being raised when trying to find the driver package contents.  This is because the new single-instance storage model on the DP is not compatible with the unsupported manner in which the driver packages were made.  The solution is to go back to the supported method of driver management, using step 1 from above, and manage the “mess” of drivers as best you can.

    Sorry for the bad news, folks.  The testing for the new features in ConfigMgr 2012 apparently didn’t cover all the unsupported scenarios out there, no matter how popular.

  • All Requested Software Updates

    Something that I recently learned from a colleague of mine was how to create a search folder in SCCM to show all the software updates required by client machines.  It is non-intuitive, but in my testing it does seem to work.

    Under Software Updates –> Update Repository –> Search Folders, create a new search folder.  For the criteria choose “Required”, and for the search text type 1 and hit Add, then 2 and hit Add, then 3, 4, 5, 6, 7, 8, and 9.  For search options, select to search all folders under this feature.  Give it a name and save it.

    It isn’t intuitive, but the criteria act as prefix matches: adding a 7, for example, picks up any number starting with 7 (7, 70, 700, 723, 72, etc.).  Since every non-zero number starts with one of the digits 1 through 9, adding all nine digits means every update required by at least one client matches the query and shows up.
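    A quick way to convince yourself the nine criteria cover everything (a throwaway Python check, nothing SCCM-specific): every positive number, written out, starts with one of the digits 1 through 9, so prefix-matching on all nine digits matches any non-zero value.

```python
# Each search-folder criterion acts as a prefix match on the number, so the
# nine criteria together cover every positive value.
def matches_search_folder(value: int) -> bool:
    return any(str(value).startswith(digit) for digit in "123456789")

# Arbitrary sample values all match at least one of the nine criteria.
print(all(matches_search_folder(n) for n in (7, 70, 723, 5, 999, 120345)))  # True
```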

  • Search through everything!!!!!

    With Configuration Manager 2012 came a new UI.  It took some getting used to, but I, for one, have come to like it over the old MMC interface.  There is one frustration with it that I hear from customers, or see them attempting to work around, all the time.  ConfigMgr admins everywhere like to organize their applications, software updates, etc. into nice folders for easy administration.  The downside is that when they or a co-worker go looking for a given application, they can't figure out which folder it is in.  Many folks ignore the search ability at the top of the UI window because it only searches the current folder, or root.

    The solution is to look a little higher and to the left.  Up there, hidden among all the icons, is a little button called "All subfolders".  If you are on applications, collections, software updates, and probably several other locations, you should be able to find it.  The trick is that it doesn't show up initially; you must click in the search box first, and then you will see it.  Some quick screen shots below, since a picture is worth a thousand words.

  • To hotfix or not to hotfix, that is the question

    Let me start this blog post by making it clear that this is my opinion, not an official stance.  Everyone is entitled to their opinion and this is just mine.  Feel free to ignore it if you like.

    I often have discussions with customers about applying hotfixes and, more recently, cumulative updates.  The discussion revolves around whether these things should be applied proactively or not.  There are many folks in the support organizations here at Microsoft who want to make sure they are looking after our customers and thus recommend applying all the latest and greatest fixes. The thought is that doing so will avoid fighting already known and fixed issues.  This can be a big time and frustration saver compared to battling for days trying to get something to work, only to find out there was already a fix for the issue.

    Change is risk.  This is why most medium and large companies with mature IT processes have some form of change control in place.  Any change means a deviation from the status quo, and while that may improve things, it has the potential to cause problems as well.  Standard change mitigation is to test changes before committing to them in production.  This is a good practice for IT groups, as it is internally at Microsoft.  Everything Microsoft releases is tested before being let out “into the wild”.  As some of you may have noticed, occasionally that testing has missed a scenario or two and had unforeseen side effects.  Fortunately, I think such scenarios are getting fewer and fewer as process and diligence improve.  Unfortunately, we aren't at a zero occurrence rate just yet.

    I was a test lead/manager for several years in the SCCM development group and was part of many hard discussions on how much testing is the right amount of testing for a given problem.  With only a few quick tests the possibility of having a regression or unforeseen problem caused by the fix was high.  On the other end of the spectrum I could sit down and dream up an infinite amount of potential tests, meaning that the product could never be released.

    To give you an off-the-wall example of how far these tests can go, think of a screwdriver.  It is a fairly simple and straightforward thing that most of us use without really thinking about it. Various tests are: can it turn to the right, turn to the left, fit screw size A, fit screw size B, is it comfortable in my hand, comfortable in my kid's hand, does it not break under normal use, not break in a deadly way under more extreme use, not melt in my hand, not melt out in sunlight, not melt in my garage with the heater left on during a 100 degree day, not melt in the sun, can it go to space, not wear down too quickly, look good on a store shelf to sell better, etc.

    So, how much testing is enough testing?  Well, the balance point changes.  The more critical and time-sensitive something is, the more risk we take by doing less testing.  Hotfixes are generally at the higher end of the risk scale.  We test them in the lab, maybe with some internal folks, and usually with at least a few customers before they become available for everyone to download.  The testing is limited because we want to be able to get them out fairly quickly.  Cumulative updates get a little more testing, especially of the interaction between the multiple fixes.  The full development, testing, and release happens in only a few months (and is in addition to the testing that some included hotfixes may have already been through) with a limited number of people, so while this is more coverage than a typical hotfix, it isn’t really what I personally would consider “low risk” just yet (although it is, arguably, getting close).  SCCM CUs hold less risk than a typical standalone hotfix would, and typically only include items with a clearly understood risk that was covered by internal testing. Service packs get much more rigorous testing across many different in-house and external scenarios.  The chances of a problem arising from a service pack are very low (or at least well understood and documented), and thus on a personal level I consider it a “low risk” type of deployment.

    So…, why do I write all this up?  The advice I have always heard is “if it isn’t broke, don’t fix it”, and in general I think that applies to software patching.  Don’t apply a hotfix or a CU unless you are experiencing the symptoms it is meant to address.  Yes, you might waste a weekend battling a problem only to find out that a fix already existed.  Compared to wasting a weekend applying the fix and then battling an issue caused by it, I think it is a good trade-off to have not applied the fix if it wasn’t truly needed.  There is one caveat I make to this statement, however, and that is for “invisible” problems.  These are problems you may actually be having, but not know about.  A good example is a memory leak.  Sure, you might have a leak in your admin console (as an example), but if you close it at least once per day then you never realize it.  The fact that every Monday, after you left it open all weekend, it is sluggish until you restart it has just become a habit you have never bothered to investigate.  A fix that solves admin UI memory leaks might help, or might be completely unrelated and do nothing for you, but it is worth considering applying proactively.

    So now I shall get down off my soapbox.  There are many smart people whom I respect who disagree with me on this stance, and in the end what works best for one company may not work best for all companies. Make the choice you deem appropriate for your company and your role.  I hope it works out well for you in any case.

     

    8/15 - Minor updates to clarify CU

  • Memory trick to finding your laptops

    I was working with a customer once and we were looking for a good way to separate laptops from desktops within Configuration Manager when we hit upon this little-known “memory” trick that I wanted to share.

    Oftentimes people use the Win32_SystemEnclosure class and look up the chassis type.  While this is a good, and accurate, way to do it, the fact that you have to parse all the possible outcomes and then group them into “laptop” and “not laptop” is a bit of a pain.  An alternative method we found is to use the Win32_PhysicalMemory class instead.  It has a property called FormFactor that can be put to use.  We found that a value of 12 indicates a laptop.  12 means the memory in the machine is SODIMM, a form factor used almost (but not totally) exclusively by laptops.  While this isn’t as foolproof as the system enclosure, I suspect it will work for most of you out there trying to differentiate laptops from desktops in your ConfigMgr inventory.
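    The grouping logic this buys you is trivial compared to parsing chassis types.  Here is a small sketch of it (illustrative Python, not something ConfigMgr runs; the value 12 = SODIMM is the one fact from above, and the function name is my own):

```python
# Heuristic from the post: a machine whose RAM reports FormFactor 12 (SODIMM)
# in Win32_PhysicalMemory is almost certainly a laptop.
SODIMM = 12

def probably_laptop(form_factors):
    """form_factors: the FormFactor values of a machine's memory modules."""
    return any(ff == SODIMM for ff in form_factors)

print(probably_laptop([12, 12]))  # typical laptop -> True
print(probably_laptop([8, 8]))    # typical desktop DIMMs -> False
```

    Remember it is a heuristic: some small-form-factor desktops also use SODIMMs, as the post notes.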

    There is one downside, which is that the Win32_PhysicalMemory class is not collected by default as part of your hardware inventory.  Adding it is easy enough, though: just modify your SMS_DEF.MOF and off you go.

  • Digging in on application deployment

    I will admit, this is a bit of a lame post on my part.  I just wanted to help spread some awareness of a post my fellow PFE, Jamie Moyer, put up.  When I’m trying to troubleshoot some kind of SCCM 2012 application deployment failure it is useful to reference what a good deployment looks like so I can find the point of difference and likely failure.  Jamie pulled some things together and posted a run through of the client side logs and what “good” looks like.  Keep this one bookmarked and handy for the day you need it:

    http://www.moyerteam.com/2013/10/troubleshooting-configmgr-application-deployments-detailed-log-file-analysis/

  • Active Directory Discovery..., or not

    As I work with customers, there is often a discussion about Active Directory discovery (usually systems, sometimes users).  People often do not want to discover EVERYTHING in AD, only a subset.  If they already have a specific OU or two to aim at, that’s great, and SCCM can do an LDAP query to just those few OUs.  However, if there are a lot of separate OUs, this becomes a pain to add, plus you may miss out if a new OU is added by the AD folks.

    A common example of all this is where a company wants to discover and manage all their workstations, but none of their servers.  Even though discovering servers doesn’t mean they will be managed, folks do not want to take that chance, so they want to limit their discovery so no servers are discovered.  Often the servers are in their own OU while the workstations are spread across many OUs.

    So the very simple trick here is to grant a DENY permission for the SCCM site server machine account on the OUs you do not want discovered, and then point SCCM to discover everything in the domain.  This allows discovery of everything in AD except those specific OUs.  SCCM uses the machine account context to query AD during discovery, and if it has deny permissions on an OU it simply skips over it, finding everything else and including, by default, all new OUs your AD team makes in the future.

  • Disabling RBAC during custom report creation

    A much-anticipated feature of System Center Configuration Manager 2012 (SCCM) was RBA (Role-Based Administration, also known as RBAC or Role-Based Access Control).  When this functionality was added to reports it was very welcome, but it also made life a little difficult.  When most people, myself included, are working out the syntax of a report, we want to work in SQL Management Studio.  If you take an existing report and try to customize it, you will find that it makes a call to an RBAC function based on the user's SID.  You may not have a user SID easily available, so you could remove that from the query syntax, or look up a test SID, or something.  There is an easier way.

    If you look at the RBAC function you will notice that there are some exemptions if the passed in value is "disabled".  If you simply switch to use "disabled" in your testing you can keep the function but get full results, then when you are done working out your SQL syntax change it back to "@UserSIDs".

    As an example, if I want to mess around with a report I already have, I start with the following query, which I can grab from Report Builder but which won't work in SQL Management Studio:

    select SYS.Netbios_Name0, TCU.SystemConsoleUser0, SF.FileName, SF.FileDescription, SF.FileVersion, SF.FileSize, SF.FileModifiedDate, SF.FilePath
    From v_GS_SoftwareFile  SF
    join fn_rbac_R_System(@UserSIDs)  SYS on SYS.ResourceID = SF.ResourceID join v_GS_SYSTEM_CONSOLE_USER TCU on SYS.ResourceID = TCU.ResourceID
    Where SF.FileName LIKE @variable
    ORDER BY SYS.Netbios_Name0

    For SQL Management Studio I use this variation to work from (also hard coding for another variable used in the query that is normally a prompted input):

    select SYS.Netbios_Name0, TCU.SystemConsoleUser0, SF.FileName, SF.FileDescription, SF.FileVersion, SF.FileSize, SF.FileModifiedDate, SF.FilePath
    From v_GS_SoftwareFile  SF
    join fn_rbac_R_System('disabled')  SYS on SYS.ResourceID = SF.ResourceID join v_GS_SYSTEM_CONSOLE_USER TCU on SYS.ResourceID = TCU.ResourceID
    Where SF.FileName LIKE 'cmtrace.exe'
    ORDER BY SYS.Netbios_Name0

  • DP converts, but content fails

    I hit an odd issue with a customer recently that was not easy to figure out or troubleshoot so I would like to share the problem and solution in hopes that others can avoid the pain we had to go through.

     

    The issue is, in general, very straightforward.  The customer had several existing System Center Configuration Manager 2007 distribution points (DPs) which they wanted to upgrade to System Center 2012 Configuration Manager DPs as part of their migration from the old product to the new.  The DP migration job ran and converted the DP, but none of the content converted successfully, which was really the point of the conversion in the first place.

     

    Before I get into the gory details let me say that I have had numerous customers convert DPs and content, so why this failure occurred I don’t know.  It might have been something in the environment, it might have been a product defect, or it might have been a solar flare at the wrong time causing an electrical disruption.  In any case, the solution was so easy once we figured it out that we never bothered to spend time trying to find root cause.

     

    The early symptom of the problem was a 2389 status message indicating a failure to connect to the DP.  The logs seemed to indicate some kind of WMI failure, which may not really be the problem.  After a few other troubleshooting steps, we managed to resolve it with these fairly easy steps:

    1. Open up SQL Server Management Studio
    2. Run the following query against your ConfigMgr database, where <ServerName> is the FQDN of the DP you are having problems with:
      1. select * from DistributionPoints where ServerName = '<ServerName>'
    3. Make note of the corresponding DPID for your problem DP (in this example I will call it 122389)
    4. Using the DPID returned from the query, create an empty DPU file with the DPID as the file name
      1. example:  122389.dpu
    5. Put the file in the \Microsoft Configuration Manager\inboxes\distmgr.box
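    Steps 4 and 5 boil down to dropping a zero-byte trigger file into the inbox.  A minimal sketch (the helper name is mine, and the DPID and install path in the usage comment are examples; use the DPID your query returned and your actual install path):

```python
import os

def drop_dpu_file(dpid, distmgr_box):
    """Create the empty <DPID>.dpu trigger file in the distmgr.box inbox."""
    path = os.path.join(distmgr_box, f"{dpid}.dpu")
    open(path, "w").close()  # zero-byte file; only the name matters
    return path

# Example with the DPID from above and a default install path:
# drop_dpu_file(122389, r"C:\Program Files\Microsoft Configuration Manager\inboxes\distmgr.box")
```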

     

    That’s it.  You should start seeing the content converted on the DP.  One query… one DPU file… problem solved!  Why it happens I’m still curious to know, but I need another customer who is experiencing it to hire me to investigate.  If you are a premier customer who wants to spend their premier hours on me and you have this issue, contact your TAM and I will be there in 1-4 months. :)

  • Fixing the 0x1 error with the Forefront Definition Update Tool?

    OK, I will reveal one of my own skeletons today.  I have set up the FEP definition update tool several times for customers, but in my own lab I was banging my head against the wall trying to figure out why it would not run correctly.  For those that don’t know, this is a tool run via Task Scheduler to automate the deployment of FEP definitions via ConfigMgr without ongoing admin interaction.  No matter what I did, it just would not run and I would get an error of 1 back.  Finally, with the help of my fellow PFE, Richard Balsley, I figured out that this was due to my use of System Center Updates Publisher (SCUP) on my test bench, and a bug in the interaction between the two tools.

    Just recently this bug was fixed and a new version of the tool was placed on the website.  There were some other updates to the tool as well, which Jason Lewis has nicely documented in a blog post. If you have Forefront Endpoint Protection 2010 and aren’t using the tool yet, I highly recommend you read the blog post and get it set up.

    Part 2

    On a separate, but related, note, I had another instance of the 0x1 error with one of my customers, with a not-so-obvious cause.  We had copied the command-line options from the published TechNet article, which turned out to be the problem.  In that article the quotes around the article ID are actually smart quotes and will not be interpreted correctly by the command line or Task Scheduler.  If you replace the quotes with normal ones it should work.  I believe the article is going to be corrected, so hopefully by the time you read this it will no longer be an issue.

  • Are your new clients working?

    I have set up many SCCM installations in my time, and once I have all the server-side pieces in place I like to kick off a few quick tests to make sure it is all working as expected.  Nothing too heavy and cumbersome, but enough to see if you have a good foundation to build from.  Depending on time constraints, how much is set up, etc., I have “option 1” and “option 2”.  I prefer option 2 because it exercises just a little bit more of the system.

     

    Option 1

    1. Install a client
    2. After the install completes, go to the control panel and kick off a hardware inventory cycle
    3. Check resource explorer on the site server in about 5 minutes and make sure there is inventory listed for that client.
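    If you would rather script step 2 than click through the control panel applet, the hardware inventory cycle can also be triggered via WMI.  A sketch for a Windows client (run it elevated; the GUID is the well-known hardware inventory schedule ID):

```
wmic /namespace:\\root\ccm path sms_client CALL TriggerSchedule "{00000000-0000-0000-0000-000000000001}"
```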

    This validates that the admin UI is talking to SQL correctly, that the client can correctly communicate with the management point, that the MP can correctly communicate back to the database, and that the site itself is able to process files and load them into the database.  A good, solid foundation.

    Option 2

    1. Set up a test package with something in it, like calc.exe.  What you use really isn't terribly important as long as there is at least one file, preferably a fairly small one.
    2. Make a program for this package that runs calc.exe.  Note that calc.exe doesn’t need to be in the source files but should be found on the local system.
    3. Install a client, then create an optional (non-mandatory) deployment of the above program and package to your new clients

    In addition to all the things that option 1 validates above, this also validates proper communication with the distribution point, a key part of the base infrastructure.  Even though calc.exe isn’t in your package, if there was a failure to get the package content down then the execution should fail.

    These tests will work on anything from SMS 2.0 to SCCM 2007 and SCCM 2012.  The new application model in SCCM 2012 opens up some other fancy possibilities, but they can’t really validate more than the application catalog roles.

    There are fancy variations on these things, to be sure.  The “next level” is to use option 2 to deploy a simple script and copy cmtrace.exe or smstrace.exe onto the client for future troubleshooting needs.  If anyone has their favorite “quick test” please add it in the comment section.

  • Configuration Management for the non SCCM admin

    DCM (Desired Configuration Management) was a feature in SCCM 2007 that got renamed to Compliance Settings in System Center 2012 Configuration Manager.  In 2007 it saw little use outside of FEP.  I encourage customers to take another look at it in SCCM 2012. Now it not only monitors settings and reports back on drift, but there is auto-remediation built in, so you can have machines monitor settings for corporate compliance and reset themselves to standards if something gets changed that shouldn’t have.  I will be honest, it has a bit of a learning curve to get it working the way you like, but there are several examples out there to help get you started.  The latest example was done by one of our platform PFEs, and I encourage you all to take a look and see if you might be interested in doing something similar to get more value out of your SCCM installation.

  • So you think you can be me (a.k.a Job opening in PFE)?

    I have all kinds of quotes running through my head today.  “To every season, turn, turn, turn” and “it’s so hard to say goodbye”.  I have come to another one of those points in life where I am making a good change, but I will miss what I leave behind.  I have made the choice to change my job at Microsoft.  To those who I have had the pleasure of working with in the past I regretfully say that I will no longer be a PFE that is available to assist you.  For those who only read this blog, I hope to continue providing you good insights and tricks.  I am taking on the role of a dedicated PFE, also called a dedicated support engineer or DSE.  For me the role is very similar to what I have done except I will be focused all year on only a handful of customers instead of a new customer every week with some repeat customers occasionally.  After almost 5 years it is time for me to do something a little different.

    This takes me to the point of this post: we need to replace me.  I think I’m hot stuff sometimes, but the reality is that many folks out there are just as smart as or smarter than I am.  We need to hire an SCCM PFE to fill the void I’m leaving behind.  Anyone can apply; although I’m not sure if the exact job posting is up yet, it will be something like https://careers.microsoft.com/jobdetails.aspx?ss=&pg=0&so=&rw=9&jid=114388&jlang=EN&pp=SS but based out of the Pacific Northwest, ideally Seattle, though there is some flexibility.  If I have worked with you and know you, I would be willing to answer any questions you might have about the PFE role and submit your resume with a personalized recommendation.  Send me a private message and your resume if you want to pursue a PFE job at Microsoft.

    A small plug about the position.  I have been with Microsoft for over 14 years, with many managers and many teams.  I have worked in support, and I have worked in product development.  I have met all kinds of great people.  My PFE manager, Mark Edwards, has been one of the best managers around.  I have worked with him the entire time I have been a PFE and it has been a great relationship.  He holds you accountable to do the right thing, and he has your back when working through the political morass that can sometimes occur.  He is a great guy to work under and was a large reason why I stayed a PFE for so long.  He and I had many talks about my career and he was very supportive of my change, even though he didn’t like losing me.  If you get to work for him, count yourself lucky.

  • Troubleshooting OSD without being there

    I can’t claim credit for this idea, but it is one that I think is handy enough, and not well known enough, that I am trying to spread the word.  One of my co-workers has a great way to capture OSD logs when a failure occurs so they can be easily read and are not lost simply because you weren’t in front of the machine to hit the function key and sniff around via a command prompt.

    The simple concept is this: you take your entire task sequence and place it under a new top-level group.  If any step in your TS should fail, control will be returned up to this top-level group.  You set this top-level group to continue on error, leading to the next part.

    Also at the top level, but after the group you previously created, you create a log-capturing group.  Under this group you have some tasks which capture the SMSTS log and other related files and copy them to a file share on your network.

    The end result of all this is that if any task in your TS fails, no matter at which stage, you don’t have to hunt for the logs and you don’t lose them when the machine reboots into a failed deployment.  You just go to the file share, take a look at what failed, then go fix it.
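    For illustration, here is roughly what the log-capturing step does, sketched in Python rather than the actual task sequence command line.  The share path is made up, and in a real task sequence the source directory would come from the _SMSTSLogPath variable:

```python
import os
import shutil
import socket

def capture_smsts_logs(log_dir, share_root):
    """Copy the *.log files from the local SMSTS log directory to a
    per-machine folder on a network share (both paths illustrative)."""
    dest = os.path.join(share_root, socket.gethostname())
    os.makedirs(dest, exist_ok=True)
    copied = []
    for name in os.listdir(log_dir):
        if name.lower().endswith(".log"):
            shutil.copy2(os.path.join(log_dir, name), dest)
            copied.append(name)
    return sorted(copied)
```

    On a real client this would run as the last step of the log-capturing group, after a failure has bubbled up to the top level.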

    All the details are on Steve Rachui’s blog at http://blogs.msdn.com/steverac/archive/2008/07/15/capturing-logs-during-failed-task-sequence-execution.aspx

  • Permissions to Create a collection

    I had a customer doing some custom permission setup in System Center 2012 Configuration Manager the other week, and after adding folks to a few existing security roles all was good, except that these limited-rights users could not create collections.  If you have looked at all the possible permissions in ConfigMgr 2012 you may have seen that there are a lot, and while you could test and figure this out yourself, I figured I would share the answer to save everyone else a bit of time.

    To create collections a user needs the following permissions:

    • Collection
      • Read
      • Create
      • Modify Folder
  • Are you Intune?

    Working in the System Center space, I occasionally get asked about management of environments that don’t really fit the world that SCCM focuses on.  SCCM is not a cost-effective solution for many small businesses, and with a top scale of 200,000 clients (soon to increase with the R3 release) there are a few very large environments that go beyond what SCCM can handle effectively.  For the smaller environments there is a related product called System Center Essentials.  This is a product aimed at companies with 50-500 client machines and is essentially a combination of SCCM, SCOM, SCVMM and Hyper-V, all rolled into one nice and affordable package.  For now, that is what I recommend to smaller IT shops.  However, there is a new offering coming online that may be appropriate for some organizations, called Microsoft Intune.  Intune is less a hybrid of things, like Essentials is, and more of an SCCM-lite or WSUS-lite offering.  It has some of the same features as SCCM but is hosted in the cloud, so small businesses don’t have to pony up the money for server hardware and can more easily get at the key things they need.  It doesn’t have nearly the deep and rich feature set that SCCM has, but for some companies it might be the right answer.  Check it out if you think it might be the right thing for you!

  • Command shell auto-complete

    Today’s trick is one of those things that those who know it think everyone knows, and those who don’t know it never get told about: the ability to do auto-complete from a command prompt.

    The command shell (you know, cmd.exe) has an awareness of file and folder structure.  You can use this to your advantage when you are getting around.  For example, if you are in c:\users and you want to change directories to \windows\system32, all you need to type is “cd \wi” and then hit your TAB key.  The line should lengthen to “cd \windows” thanks to auto-complete (unless you have some other root directory starting with the letters “wi”).  Now continue the command line by typing “\s” and hit TAB multiple times until “system32” comes up, then hit Enter to actually change directories.  This little trick can be very handy for finding directories when you can’t recall their exact names.  Try typing “cd \prog” to get “Program Files” to come up, then type “\” and just hit TAB until you get to the directory you want.  If you go past the directory you wanted, you can either keep hitting TAB to loop through everything or you can hold Shift and hit TAB to go through the directories and files in reverse order.
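    On current Windows versions TAB completion is on by default, but on older systems (or if someone has turned it off) it is controlled by a pair of registry values.  A .reg fragment that enables it for the current user, assuming the usual Tab key (0x9):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Command Processor]
"CompletionChar"=dword:00000009
"PathCompletionChar"=dword:00000009
```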

    This doesn’t only work with cd; it works for most command-line things.  Next time you are in a directory of logs or such, try typing “notepad ” and then hitting TAB until you find the log you want to open.

    Spread the word and don’t look down on anyone who didn’t know the trick!

  • Simple, but important, things

    When I talk with customers new to SCCM there are two things I advise them to not mess with:  Heartbeat discovery and the “All Systems” collection.

    Heartbeat discovery is configurable, and it is OK to adjust the interval to meet your company's needs, but don’t turn it off.  It isn’t obvious to the new SCCM admin, but there are several maintenance tasks that key off that heartbeat DDR.  Disabling heartbeat discovery may cause systems to be deleted from SCCM before you expected or intended.  I don’t ever recommend turning it off.

    The All Systems collection is a different, but similar, issue.  That collection has a default object ID of SMS00001.  If you delete that collection you can, of course, re-create a collection with the same membership and the same name.  It will, however, not get the same ID.  I have seen odd behavior when that collection is missing.  I have never had the time to investigate the exact problem, but in general I just suggest not deleting it.  If you do delete it, it can be restored.  An example is http://blogs.msdn.com/b/vinpa/archive/2010/03/17/how-to-restore-the-all-systems-collection.aspx.

    File this under the category “won’t kill ya, but best not to mess with it”.