There is a point of confusion in the System Center 2012 Configuration Manager product, and it revolves around fallback distribution points (DPs) and how they get used.
The scenario is this: you set up a content boundary group associated with two distribution points. You put the content on both of them. You associate the boundary of a machine with that content boundary group. What most people think is that the machine will get content from either of the two DPs.
Normally, this would be the case. However, if some of those DPs are marked as fallback DPs, they will be "bucketized" differently in the DP selection process. If the application in question is not enabled for fallback, the fallback DP will not be offered to the requesting client, and all traffic will be limited to only the remaining DPs.
Here is an example scenario to help illustrate the current behaviour:
"We have two DPs: DP1 and DP2. DP1 has the option "Allow fallback source location for content" checked. We have only one boundary group: BG1. BG1 has DP1 and DP2 as "assigned" site systems. The deployment type doesn't have fallback enabled (the default option).
In that case, clients which "belong" to BG1 will not use DP1 at all. It won't be "offered" by the MP."
Put another way: watch out which DPs you mark for fallback. A DP will be either a fallback DP or a preferred DP, but not necessarily both at the same time.
With Configuration Manager 2012 came a new UI. It took some getting used to, but I, for one, have come to like it over the old MMC interface. There is one frustration with it that I hear from customers, or see them attempting to work around, all the time. ConfigMgr admins everywhere like to organize their applications, software updates, etc. into nice folders for easy administration. The downside of this is that when they or some other co-worker goes looking for a given application, they can't figure out which folder it is in. Many folks ignore the search ability at the top of the UI window because it only searches the current folder (or the root).
The solution is to look a little higher and to the left. Up there, hidden among all the icons, is a little button called "All Subfolders". If you are in Applications, Collections, Software Updates, and probably several other locations, you should be able to find it. The trick is that it doesn't show up initially: you must click in the search box first, and then you will see it. Some quick screen shots are below, since a picture is worth a thousand words.
Every SCCM admin has figured out how to create a collection with a query rule to dynamically update its membership (you don't just do direct rules, I hope). Most admins have also learned that the columns a collection can return are limited, but if you create a query in the Queries node of the UI, instead of under a collection, you will have more control over what columns get returned. Because of this it can be handy to run a query to bring back the info you are after, like a machine name and MAC address. You can't get the MAC under a collection, but you can as part of a query.
Some frustration kicks in at this point for most folks, and that is the fact that the right-click options on a query can be missing a few things:
While I can't promise a solution to get every option to appear (I have been trying to figure out how to get a delete option to show up), I can share that returning the proper columns will provide more right-click options:
The UI can only offer options if it has the necessary data returned to make those options work. Here is the query syntax to get more options to show up in your Configuration Manager console:
select distinct SMS_R_System.ResourceId,
    SMS_R_System.ResourceType,
    SMS_R_System.Client,
    SMS_R_System.Name,
    SMS_R_System.OperatingSystemNameandVersion,
    SMS_R_System.IPAddresses,
    SMS_R_System.SMSAssignedSites,
    SMS_R_System.SMSInstalledSites,
    SMS_R_System.Active,
    SMS_R_System.SMSResidentSites,
    SMS_R_System.MACAddresses,
    SMS_R_System.ClientVersion,
    SMS_R_System.NetbiosName,
    SMS_R_System.SystemRoles
from SMS_R_System
A much anticipated feature of System Center Configuration Manager 2012 (SCCM) was RBA (Role Based Administration, also known as RBAC or Role Based Access Control). When this functionality was added to reports it was very welcome, but it also made life a little difficult. When most people, myself included, are working out the syntax of a report, we want to work in SQL Management Studio. If you take an existing report and try to customize it, you will find that it is making a call to an RBAC function based on the user's SID. You may not have a user SID easily available, so you could remove that from the query syntax, or look up a test SID, or something. There is an easier way.
If you look at the RBAC function you will notice that there are some exemptions if the passed-in value is "disabled". If you simply switch to using "disabled" in your testing, you can keep the function but get full results; then, when you are done working out your SQL syntax, change it back to "@UserSIDs".
As an example, if I want to mess around with a report I already have, I start with the following query, which I can grab from Report Builder but which won't work in SQL Management Studio:
select SYS.Netbios_Name0, TCU.SystemConsoleUser0, SF.FileName, SF.FileDescription,
       SF.FileVersion, SF.FileSize, SF.FileModifiedDate, SF.FilePath
from v_GS_SoftwareFile SF
join fn_rbac_R_System(@UserSIDs) SYS on SYS.ResourceID = SF.ResourceID
join v_GS_SYSTEM_CONSOLE_USER TCU on SYS.ResourceID = TCU.ResourceID
where SF.FileName LIKE @variable
order by SYS.Netbios_Name0
For SQL Management Studio I use this variation to work from (also hard coding another variable used in the query that is normally a prompted input):
select SYS.Netbios_Name0, TCU.SystemConsoleUser0, SF.FileName, SF.FileDescription,
       SF.FileVersion, SF.FileSize, SF.FileModifiedDate, SF.FilePath
from v_GS_SoftwareFile SF
join fn_rbac_R_System('disabled') SYS on SYS.ResourceID = SF.ResourceID
join v_GS_SYSTEM_CONSOLE_USER TCU on SYS.ResourceID = TCU.ResourceID
where SF.FileName LIKE 'cmtrace.exe'
order by SYS.Netbios_Name0
For those of you doing OS deployments with System Center Configuration Manager 2012 (I bet the same info below applies to the 2007 version, but I haven't tested it), you may have gotten into prestart commands to run a custom script, or perhaps an HTA, to better control and customize the deployment process for your environment. I have a customer doing this, and in the process they noted that there are actually two different places you can set the prestart command (also known as a pre-execution hook). Depending on how you deploy your OS you may only have one place that will work for you, but what happens if you set different scripts in both locations?
Setting in WinPE
The first place you can set is on the boot WIM itself. You specify a script source location and a command line.
This ends up making it into the TSCONFIG.ini that is in the root of the WinPE drive when it boots up, looking something like this (the command line is different because of the timing of my screen shots, but you get the point, I hope):
[CustomHook]
CommandLine=dumpvar.vbs
Source=SMS10000
When the task sequence process runs, it finds this TSCONFIG.ini and launches the script as desired.
Setting during Boot Media Creation
The other option is to add the hook during creation of your boot media.
In this case, the package gets pulled from the DP and put into the boot media alongside WinPE, and the task sequence variables are modified so that on startup, when the variables are read, the value is seen and the hook is executed.
So what happens if you set both? Take your guess before reading on and see how smart you are. I'll give you a hint… I originally thought both scripts would execute, and I was wrong.
The process SCCM goes through is to check the variables to see if the media had set them to have a hook. If so, that is what executes, and the fact that the TSCONFIG.ini has a value is ignored. Only when the variable is unpopulated will the INI be checked and that hook executed.
8/6 update - Fixed an image display problem. Screen shots and logs may not line up correctly on script names and such. Sorry.
Let me start this blog post by making it clear that this is my opinion, not an official stance. Everyone is entitled to their opinion and this is just mine. Feel free to ignore it if you like.
I often have discussions with customers about applying hotfixes and, more recently, cumulative updates. The discussion revolves around whether these things should be applied proactively or not. There are many folks in the support organizations here at Microsoft who want to make sure they are looking after our customers, and thus recommend that they apply all the latest and greatest fixes. The thought is that doing so will avoid fighting already known and fixed issues. This can be a big time and frustration saver compared to battling for days trying to get something to work, only to find out there was already a fix for the issue.
Change is risk. This is why most medium and large companies with mature IT processes have some form of change control in place. Any change means a deviation from the status quo, and while that may improve things, it has the potential to cause problems as well. Standard change mitigation is to test the changes before committing to them in production. This is a good practice for IT groups, as well as internally at Microsoft. Everything that Microsoft releases is tested before being let out "into the wild". As some of you may have noticed, occasionally that testing has missed a scenario or two and had unforeseen side effects. Fortunately, I think such scenarios are becoming less and less common as process and diligence improve. Unfortunately, we aren't at a zero occurrence rate just yet.
I was a test lead/manager for several years in the SCCM development group and was part of many hard discussions on how much testing is the right amount for a given problem. With only a few quick tests, the possibility of having a regression or unforeseen problem caused by the fix was high. On the other end of the spectrum, I could sit down and dream up an infinite number of potential tests, meaning that the product could never be released.
To give you an off-the-wall example of how far these tests can go, think of a screwdriver. It is a fairly simple and straightforward thing that most of us use without really thinking about it. Various tests are: can it turn to the right, turn to the left, fit screw size A, fit screw size B, sit comfortably in my hand, sit comfortably in my kid's hand, not break under normal use, not break in a deadly way under more extreme use, not melt in my hand, not melt out in sunlight, not melt in my garage with the heater left on during a 100 degree day, go to space, not wear down too quickly, look good on a store shelf to sell better, etc.
So, how much testing is enough testing? Well, the balance point changes. The more critical and time sensitive something is, the more risk we take by doing less testing. Hotfixes are generally at the higher end of the risk scale. We test them in the lab, maybe with some internal folks, and usually with at least a few customers before they become available for everyone to download. The testing is limited because we want to be able to get them out fairly quickly. Cumulative updates get a little more testing, especially of the interaction of the multiple fixes. The full development, then testing, then release happens in only a few months (and is in addition to the testing that some included hotfixes may have already been through) with a limited number of people, so while this is more coverage than a typical hotfix, it isn't really what I personally would consider "low risk" just yet (although it is, arguably, getting close). SCCM CUs hold less risk than a typical standalone hotfix would, and typically only include items that have a clearly understood risk which was covered by internal testing. Service packs get much more rigorous testing across many different in-house and external scenarios. The chances of a problem arising from a service pack are very low (or at least well understood and documented), and thus on a personal level I consider it a "low risk" type of deployment.
So why do I write all this up? The advice I have always heard is "if it isn't broke, don't fix it", and in general I think that applies to software patching. Don't apply a hotfix or a CU unless you are experiencing the symptoms it is meant to address. Yes, you might waste a weekend battling a problem only to find out that a fix already existed. Compared to wasting a weekend applying the fix and then battling an issue caused by it, I think it is a good trade-off to have not applied the fix if it wasn't truly needed. There is one caveat I make to this statement, however, and that is for "invisible" problems. These are problems that you may actually be having but not know about. A good example is a memory leak. Sure, you might have a leak in your admin console (as an example), but if you close it at least once per day then you never notice it. The fact that every Monday, after you left it open all weekend, it is sluggish until you restart it has just become a habit that you have never bothered to investigate. A fix that solves admin UI memory leaks might help, or might be completely unrelated and do nothing for you, but it is worth considering applying proactively.
So now I shall get down off my soapbox. There are many smart people whom I respect who disagree with me on this stance, and in the end what works best for one company may not work best for all companies. Make the choice you deem appropriate for your company and your role. I hope that it works out well for you in any case.
8/15 - Minor updates to clarify CU
Occasionally it becomes necessary to manually clean SCCM components off a server. When that becomes necessary I usually tell customers to just flatten and rebuild the box, but that is not always an option. In those cases I have a mental list of things I go through to remove all the traces that SCCM/SMS could leave on the machine. Not every machine will have all of these locations populated. Every scenario has its nuances, so don't blindly follow this if you want a clean box; consider whether each item is relevant to what you are trying to accomplish. If you think I missed anything, comment below and I'll update as appropriate.
The other week I had a customer asking me how they could keep clients from using a management point, yet still have it installed and functional to interact with some 3rd party software they wanted to use. That question didn't have a simple answer. By default, an SCCM 2012 client will randomly choose from any available MP in a site. The key things that control the choice of one MP over another are whether an HTTPS MP is specifically required, and forest membership: clients prefer to use an MP in the same forest they are in, if several MPs are available. For my customer, all the MPs were HTTP and all in the same forest, so each of their three MPs had the same possibility of being chosen.
I tried an idea that turned out to work, which is to "hide" one of the MPs. By "hide" I mean that it is still in AD and seen in an MPList call, but will not be returned to clients that call their current MP and request other MPs to communicate with. This means that normal client processes would randomize between two of the MPs, but the third MP would be used only when specified or hard coded, such as during client installation. That third MP is still there and running as normal, but it takes something like 3rd party software, boot media, or a client command line parameter for it to be used. Un-publishing the MP means that normal location requests will not return it as an option. Screen shots of where this un-publishing can be done are below. The change can be seen by watching the clientlocation.log on the clients and looking for a line similar to the following, never changing to the "hidden" MP:
Assigned MP changed from <MP1> to <MP2>.
There is a desire in the SCCM community to allow clients to have an affinity to a specific MP, similar to the use of boundaries and distribution points (DPs). To be clear, this will not provide that affinity. It simply removes one or more MPs from the normal client use processes. It cannot be used to selectively make one MP serve a subset of clients. If you were to set a client to use this "hidden" MP, it would, for a time. Various processes in the SCCM client would eventually ask for a list of available MPs, the results returned would be the other MPs, and thus the client would switch away from using the "hidden" MP. The MP would serve a limited purpose, until such a switch occurred.
Thanks go to Jason Sandys and Adam Meltzer for helping me provide clarity on this post.
6/27/2014 UPDATE - To provide better clarity I changed the post to reflect that MPList will still show the "hidden" MP and its object will still be in AD
RBA, or Role Based Administration (often called RBAC, Role Based Access Control), is new with the SCCM 2012 product. Many customers I encounter are excited about the separation and flexibility it can provide, but daunted by the activity of getting it set up initially. There are a few handy tricks I share time and again with customers at this stage, and I thought it might be nice to share them with everyone.
1 - Create a security role template
If you get into RBA you will find that you can copy an existing role, then modify it for your needs. However, this means that you have to go and remove all those perms that came over from your copy, and you have to do this for each role you have to make. Save yourself some time: the very first time, make a copy of the Remote Tools Operator role. This has the least number of existing perm lines that need to be cleared. I like to change all the existing lines to 'No', then add one single perm at the very top, under Alert Subscription. Save this as "_Template" and from this time forward you can just copy it as your starting point, making things a little easier.
2 - Think ahead about your potential use of scopes
If today you manage desktops only, and that is all you will ever have in SCCM, then stick to the default scope. If, however, you think that some day you may want to add servers and have management of them separated out, then the time to create the "desktop" and "server" scopes is right after SCCM finishes installation. Sure, you can do it any time, but the problem is that any object you create going forward is tagged with the scopes you are in, which means they can all be tagged as part of the "desktop" scope, or they can be tagged as part of the "default" scope and you can go change them all to "desktop" later (not simple, but possible).
3 – Use RBAViewer
This tool is free as part of the SCCM 2012 R2 toolkit. It will make your visualization and exploration of the different perms necessary to reach your end goals easier and clearer. There is also a helpful spreadsheet made by Brent Dunsire that many people find useful.
As a little side note, I was asked to figure out the minimum permission necessary to allow an admin to block/deny devices from communicating with SCCM. That requires the "Read Resource" perm under Collections. Of course, if you can't see the collections or connect to the admin console, you will probably need an additional perm of some kind; "Read" under Collections was the easy one I used.
I was recently asked to pull together a general starting point guideline on how to troubleshoot content deployment issues with SCCM 2012. There is a lot that could go wrong, and the specifics for each scenario are different, but here is what I pulled together as a general starting point.
· Check Status in the UI
· Head to the logs
While I can't say I have to do it all that often, there are occasional times when I need to test connectivity from a machine back to SQL. The question then comes up: how to easily get that done? Installing the SQL tools is the most obvious answer, but not always handy, and I really don't want to be installing SQL tools on every machine I do this from. The easy answer is to use a UDL file.
While you can use a UDL file to test various connectivity scenarios, as a System Center Configuration Manager admin I'm mostly concerned with connections back to the SQL database, so I set it up on the Provider tab to use the "Microsoft OLE DB Provider for SQL Server."
The next step is to fill out the necessary information on the connection tab.
Test the connection, and troubleshoot as necessary if it fails to connect.
I'm sure you can do all kinds of other fancy things with the tool... heck, we only used two of the four tabs, but I'm an SCCM guy and not a SQL guru, so I leave that for other folks to blog about.
I saved the best for last. This cool tool… you already have it. It is part of every Windows OS that I have used. Create a simple text file, rename the extension from TXT to UDL, and TADA… you've got yourself a handy utility. Delete the file when you are done, to clean up after yourself.
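For reference, a saved UDL file is nothing more than a text file holding an OLE DB connection string. After you configure and save one pointed at a ConfigMgr site database, its contents look something like the following (the server name and database name here are placeholders for illustration, not anything from a real environment):

```ini
[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=CM_P01;Data Source=SQLSERVER01.contoso.com
```

Opening the file with a text editor like Notepad is a quick way to double-check exactly what server, database, and authentication mode your test used.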
Credit goes to Faruk Celik for clueing me into this from his blog post: http://blogs.msdn.com/b/farukcelik/archive/2007/12/31/basics-first-udl-test.aspx
I have set up many SCCM installations in my time, and once I have all the server-side pieces in place I like to kick off a few quick tests to make sure it is all working as expected. Nothing too heavy and cumbersome, but enough to see if you have a good foundation to build from. Depending on time constraints, how much is set up, etc., I have "Option 1" and "Option 2". I prefer Option 2 because it exercises just a little bit more of the system.
Option 1
This validates that the admin UI is talking to SQL correctly, that the client can correctly communicate with the management point, that the MP can correctly communicate back to the database, and that the site itself is able to process files and load them into the database. A good solid foundation.
Option 2
In addition to all the things that option 1 validates above, this also validates proper communication with the distribution point, a key part of the base infrastructure. Even though calc.exe isn’t in your package, if there was a failure to get the package content down then the execution should fail.
These tests will work on anything from SMS 2.0 to SCCM 2007 and SCCM 2012. The new application model in SCCM 2012 opens up some other fancy possibilities, but they can’t really validate more than the application catalog roles.
There are fancy variations on these things, to be sure. The "next level" is to use Option 2 to deploy a simple script and copy cmtrace.exe or smstrace.exe onto the client for future troubleshooting needs. If anyone has a favorite "quick test", please add it in the comment section.
I will admit, this is a bit of a lame post on my part. I just wanted to help spread some awareness of a post my fellow PFE, Jamie Moyer, put up. When I’m trying to troubleshoot some kind of SCCM 2012 application deployment failure it is useful to reference what a good deployment looks like so I can find the point of difference and likely failure. Jamie pulled some things together and posted a run through of the client side logs and what “good” looks like. Keep this one bookmarked and handy for the day you need it:
http://www.moyerteam.com/2013/10/troubleshooting-configmgr-application-deployments-detailed-log-file-analysis/
I got a question from one of my customers the other day that had an easy, but not obvious, answer. They had SCCM 2012 set up in forest A but wanted to discover machines in forest B. They supplied alternative credentials with the correct username and password for this other domain\forest, but kept getting back a 0x8007052E error, which translates to "Logon failure: unknown user name or bad password."
That error is, unfortunately, misleading. There was nothing wrong with the username or the password. The real problem was in the formulation of their LDAP query. They needed to add a named DC to the query for it to run correctly, which was not an obvious thing to do. The solution was to formulate an LDAP query that looked similar to this:
LDAP://RemoteDC.remotedomain.com/DC=remotedomain,DC=com
I have all kinds of quotes running through my head today. “To every season, turn, turn, turn” and “it’s so hard to say goodbye”. I have come to another one of those points in life where I am making a good change, but I will miss what I leave behind. I have made the choice to change my job at Microsoft. To those who I have had the pleasure of working with in the past I regretfully say that I will no longer be a PFE that is available to assist you. For those who only read this blog, I hope to continue providing you good insights and tricks. I am taking on the role of a dedicated PFE, also called a dedicated support engineer or DSE. For me the role is very similar to what I have done except I will be focused all year on only a handful of customers instead of a new customer every week with some repeat customers occasionally. After almost 5 years it is time for me to do something a little different.
This takes me to the point of this post: we need to replace me. I think I'm hot stuff some times, but the reality is that many folks out there are just as smart as or smarter than I am. We need to hire an SCCM PFE to fill the void I'm leaving behind. Anyone can apply; although I'm not sure if the exact job posting is up yet, it will be something like https://careers.microsoft.com/jobdetails.aspx?ss=&pg=0&so=&rw=9&jid=114388&jlang=EN&pp=SS, but based out of the Pacific Northwest, ideally Seattle, though there is some flexibility. If I have worked with you and know you, I would be willing to answer any questions you might have about the PFE role and submit your resume with a personalized recommendation. Send me a private message and your resume if you want to pursue a PFE job at Microsoft.
A small plug about the position: I have been with Microsoft for over 14 years, with many managers and many teams. I have worked in support, and I have worked in product development. I have met all kinds of great people. My PFE manager, Mark Edwards, has been one of the best managers around. I have worked with him the entire time I have been a PFE and it has been a great relationship. He holds you accountable for doing the right thing, and he has your back when working through the political morass that can sometimes occur. He is a great guy to work under and was a large part of why I stayed a PFE for so long. He and I had many talks about my career and he was very supportive of my change, even though he didn't like losing me. If you get to work for him, count yourself lucky.
I was recently working with a customer to set up management of their Mac and Linux machines with System Center Configuration Manager 2012, and once we had things up and running we wanted to validate the client communication. In the Windows world I often set up a simple software distribution of something like CMTrace to do this, but in the Linux world I wasn't sure what to do. I work for Microsoft, so my interactions with Linux are… very limited. That said, I worked with my customer to come up with a simple app that I thought others might want to make use of in their Linux testing as well.
The goal was to be simple, generic, and easy. We came up with the idea of launching a shell file (similar to a Windows batch file) that simply echoes text to a text file. By making a file, instead of just running a command line, we would be sure to exercise DP access and communication for the *nix client. Here is what we did:
Keep in mind the command line syntax. Similar to how you would sometimes call a VBScript (cscript blah.wsf) or a command prompt function (cmd.exe /c copy blah.dll c:\foo\bar\blah.dll), you need to call the shell for Linux to properly run the command. I'm not a Linux expert, so while we tried a few different shells, using the sh shell seemed to work better than the other things we tried.
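To make this concrete, here is a minimal sketch of the kind of shell file we used. The file name and output path are my own choices for illustration, not anything the ConfigMgr client requires:

```shell
#!/bin/sh
# sccmtest.sh - simple validation script for the ConfigMgr *nix client.
# Because the client must download this file as package content before it
# can run it, a successful run proves DP access as well as client execution.
echo "ConfigMgr Linux client test ran at $(date)" >> /tmp/sccmtest.log
```

The program command line in the deployment would then be something like `sh sccmtest.sh`, mirroring the cscript/cmd.exe pattern above. Checking for /tmp/sccmtest.log on the client confirms the whole chain worked.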
My thanks to M.C. in California and his co-worker for helping me improve my Linux knowledge.
One of the topics that comes up with a lot of my SCCM customers is that their management is looking at reports of things like update deployments or software deployments and seeing low success rates. Getting 100% success is rare in any large deployment, but each company should know a reasonable target number, say 90% or 95%.
For any given deployment there are many things which can and do go wrong, but there are many items which could be considered "false alarms" and really serve only to bring down your metrics and distract you from focusing on the real problem clients. I like to start by splitting all your machines into two categories: manageable vs. non-manageable. A non-manageable machine could be corrupted, offline, or thrown away. There are many possibilities, which is its own series of discussions and leads to a discussion about the PFE client health and remediation service.
Tip 1 – Manageable machines only
There are several ways to do this:
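As one sketch of the kind of thing I mean (this uses the standard SMS_R_System discovery attributes, but treat it as a starting point and tune the criteria to your environment), a collection query rule can limit a deployment collection to resources that actually have an active client installed:

```sql
select SMS_R_System.ResourceId, SMS_R_System.ResourceType,
       SMS_R_System.Name, SMS_R_System.Client
from SMS_R_System
where SMS_R_System.Client = 1
  and SMS_R_System.Active = 1
```

Deploying against a collection like this keeps clientless and inactive records from dragging down your success numbers, without touching the records themselves.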
The above helps you deal with machines that were manageable at one time but are no longer in a manageable state. Of course, that is just separating out the bad stuff from the good stuff in your SCCM database. Keeping the bad stuff out of the SCCM DB in the first place is also helpful, and that means not discovering things from AD that can't be managed.
Tip 2 – Avoid the “garbage” from AD
In SCCM 2007 or 2012
In SCCM 2012 only
Using those two tips, you should be able to get most deployments hitting above 90% success. If not, then you likely have a more systemic issue, like missing content or a bad command line.
DCM (Desired Configuration Management) was a feature in SCCM 2007 that was renamed to Compliance Settings in System Center 2012 Configuration Manager. In 2007 it saw little use outside of FEP. I encourage customers to take another look at it with SCCM 2012. Now it not only monitors settings and reports back on drift, but there is auto-remediation built in, so you can have machines monitor settings for corporate compliance and reset themselves to standards if something gets changed that shouldn't have been. I will be honest, it has a bit of a learning curve to get your head wrapped around it and get it working the way you like, but there are several examples out there to help get you started. The latest example was done by one of our platform PFEs, and I encourage you all to take a look and see if you might be interested in doing something similar to get more value out of your SCCM installation.
With System Center 2012 Configuration Manager Service Pack 1 came new ways to manage new OS types. Mac and Unix/Linux management were big additions, but perhaps equally big was the expansion of mobile device management to manage things like iOS devices, Android devices, Windows Phone 8, and Windows RT devices. This is richer management than we had for ActiveSync devices in SCCM 2012 RTM, and different from what we have for older devices like WinCE and Windows Mobile 6.x devices in SCCM 2007 and SCCM 2012 RTM. This new functionality comes via a connection with Intune, Microsoft's cloud solution for device management.
What is Intune?
Intune is a fully standalone solution for managing devices from the cloud. It is a subscription service you can use, and it will manage full OS machines as well as mobile OSes like those I mention above. Rather than duplicate efforts for mobile device management, the SCCM 2012 product leverages Intune's communications and functionality to manage these devices, but moves the management responsibility back to the SCCM admin console so all management of devices can be done in one place.
How to hook SCCM and Intune together?
There are many settings and specifics to setting up Intune and configuring it to work with SCCM. Craig Morris from the SCCM product group put together a great blog post on the subject, so I'm not going to try to duplicate that here, but rather give a few key pointers to be aware of.
The first step to hooking SCCM and Intune together is to have SCCM and Intune. You need SCCM 2012 SP1 at a minimum, and then you need to set up an Intune account. There is a 30-day free trial, so clear your calendar for the next month and give it a shot to see if it is right for you. TIP 1 - When you set it up, you probably don't want to use an existing account you already use for Hotmail, MSDN, SkyDrive, etc. Intune will be tied to that account, so you probably want to set up or use some kind of generic account for your company rather than one tied to you personally. It makes it easier for you to retire down the road when that day comes.
TIP 2 - Once you set up Intune and start poking around, DO NOT set the Mobile Device Management Authority. This setting can be made once, and only once. If you get it wrong, you have to "tear up" your Intune site, throw it away, and make another one. Let SCCM set it for you when you are ready.
The next part of integration is up to you. In an ideal world you would set up Single Sign-On (SSO) so your users can use their domain credentials. In my test lab I was limited by the lack of an Internet-resolvable domain name, so I couldn't do it and had to use some workarounds. If you do want to set it up, you will need to look into setting up Active Directory Federation Services (AD FS) to aid in the syncing. This leads to TIP 3 - Make sure the User Principal Name (UPN) of your domain user accounts can be resolved by the Microsoft Azure cloud. Put another way, if your users have accounts like user@domain.internal while your company is externally reachable as user@company.com, you are going to have some hurdles to jump to get things working correctly. Once you have your UPN figured out, use DirSync to get accounts into Intune and then activate them in Intune.
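Since UPN routability trips up so many labs, here is a minimal sketch of the kind of sanity check you could run over a user list before attempting DirSync. The helper name and suffix list are my own inventions, not part of any Microsoft tooling; it simply flags UPN suffixes that public DNS (and therefore the Azure cloud) will never resolve.

```python
# Hypothetical sanity check (my own helper, not Microsoft tooling): flag
# UPN suffixes that public DNS, and therefore the Azure cloud, cannot resolve.
NON_ROUTABLE_SUFFIXES = (".internal", ".local", ".lan")

def upn_is_cloud_friendly(upn: str) -> bool:
    """Return True when the UPN suffix looks publicly routable."""
    try:
        _, domain = upn.rsplit("@", 1)
    except ValueError:
        return False  # no '@' at all: not a valid UPN
    domain = domain.lower()
    if "." not in domain:
        return False  # single-label domains never resolve publicly
    return not domain.endswith(NON_ROUTABLE_SUFFIXES)

if __name__ == "__main__":
    for upn in ("user@company.com", "user@domain.internal", "user@corpnet"):
        verdict = "OK" if upn_is_cloud_friendly(upn) else "needs a routable suffix"
        print(f"{upn}: {verdict}")
```

This only catches the obvious offenders; a suffix can pass this check and still need to be verified in your tenant.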
Once you have all that in place, head to the SCCM 2012 admin console to complete the last bits. You need to set up an Intune subscription to link things together, and you need to set up an Intune connector site system role (similar to a distribution point) so you can publish content for devices into the Intune cloud. In this process you should also create a user-based collection as your control point for which users will be allowed to use devices managed via Intune. You might want to start with only a test user or two (with the correct UPN) and eventually expand to all the users in your organization. TIP 4 - Keep the users in this collection set as activated users in Intune for best results.
Now what?
That gets you a link between Intune and SCCM, so now you are ready to manage some devices. I'll be adding future blog posts about how to set up a connection with each type of mobile device and distribute software to it… so stay tuned.
This week Microsoft rolled out a BIG hotfix rollup (90 hotfixes) for Windows 7 SP1 and Windows Server 2008 R2 SP1. To better understand all the goodness it gives you, check out the AskPFEPlat blog or go straight to one of the guys who helped put it together, my fellow PFE Jeff Stokes. It is being distributed as an enterprise hotfix, so I think it likely that a lot of you running SCCM to manage your enterprise will want to roll it out. One big advantage that comes to mind is including it in your OSD image capture to cut down on patch install time for future deployments. The trick is that "out of the box" you can't deploy this: it does not sync to WSUS and your SCCM software update point (SUP) automatically. There are some simple steps you can take to get it there, however.
3-14-13 update - Added links to AskPFEPlat and Jeff Stokes' blogs along with warning about using WSUS admin console
3-14-13 update #2 - added clarity about fix classification
There are all kinds of new features in System Center 2012 Configuration Manager (SCCM 2012), and you have probably read about, watched videos on, or made use of many of them. As I visit customers I find there is one very cool feature that most folks don't realize is actually there: distribution point groups.
Distribution point groups aren't really a new feature per se. They existed in SCCM 2007, but their usefulness there was limited: they were only a convenience in the package distribution UI. In SCCM 2012 they are much more than that. They are a very useful tool.
The great thing about DP groups is that you can target software deployments at the DP group, and content will then be added dynamically to any DP that is added to that group. Because of this I recommend to all my customers that even if they only have one DP, they should make a DP group, put that one DP in it, and then target all software distributions at the group, not the named DP. I tell them this to make the future better and easier. Compare this future scenario with and without a DP group:
Your SCCM 2012 site has been up and running for a year. You have 100 packages already deployed to your single remote DP but the hardware is old and failing so you want to rebuild it into a play toy for your new hire to train on. You finally got the funds for new hardware and built it out to be your new remote DP. Now you have to get your packages deployed to it.
Non-group option:
DP group option:
How is that not cool?!?!?
To be fair, I kept my scenarios to "in-product functionality," and there are some scripts to help handle this kind of thing in SCCM 2007… but why mess with scripts when the product has such a nice feature built right in and fully supported?
I should also note that content is not removed from a DP just by removing that DP from the DP group. You either have to delete the content from the DP via the SCCM UI before removal, or format the ex-DP if you don't care. I modified this post to make that more clear.
10/22/14 update - Added archive_reports.sms section
Back in the SMS 2.0 days I was doing phone support, and I would often hear disbelief when I told customers to create a Dial_Me_In_Baby.sms file in order to enable logging to troubleshoot their issues. While that file is no longer needed (all server logs are turned on by default), I thought I would do a short post on the magical files of SCCM: the files whose mere existence affects the behavior of our product.
Dial_Me_In_Baby.sms
In the old SMS 2.0 days server logging was not enabled by default. It could be enabled component by component, but that was a pain. The other option was to create an empty text file in the root of the C drive called Dial_Me_In_Baby.sms then stop and start the SMS Executive service. On startup it would see the file and enable logging for all components, with default log sizes.
Die_Evil_Bug_Die.sms
This file was similar to Dial_Me_In_Baby except that it changed the logging level to include debug-level logging. Not many folks realize that there are multiple levels of server logging in SMS/SCCM, and use of this file was one way to get additional detail in the logs (as if any of you want MORE detail in the logs). For some background on these two files see http://www.myitforum.com/articles/1/view.asp?id=3918
UPDATE: My co-worker Larry Mosley pointed out that this file is also used to disable SCCM's exception handler. Normally when SCCM crashes we "catch it" and handle it with some special logging and such. Using this file bypasses that so crashes can be caught by various debugging tools.
SkpSwi.dat
This file is in use today and is used by Configuration Manager to avoid inventorying itself. When a software inventory runs and the client agent recursively walks the directory structures on the hard drive, it checks each folder for the existence of this file. If the file is found, inventory of that directory and all subdirectories is skipped. It is usually hidden, but if you look for the hidden file on one of your clients you should find it in a few places, the cache directory being one of them.
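To make the skip behavior concrete, here is a small Python sketch (my own illustration, not ConfigMgr code) of a recursive walk that prunes any directory containing the marker file, the way software inventory skips a folder and everything beneath it:

```python
import os
import tempfile

SKIP_MARKER = "skpswi.dat"  # compared case-insensitively, as NTFS names are

def inventory_walk(root):
    """Collect files the way software inventory does: any directory that
    contains the skip marker is pruned, along with all its subdirectories."""
    collected = []
    for dirpath, dirnames, filenames in os.walk(root):
        if any(name.lower() == SKIP_MARKER for name in filenames):
            dirnames[:] = []  # stop os.walk from descending further
            continue          # and skip this directory's own files too
        collected.extend(os.path.join(dirpath, f) for f in filenames)
    return collected

if __name__ == "__main__":
    # Build a throwaway tree: a normal folder and a "cache" folder with the marker.
    with tempfile.TemporaryDirectory() as root:
        os.makedirs(os.path.join(root, "data"))
        os.makedirs(os.path.join(root, "cache", "sub"))
        for parts in (("data", "app.exe"), ("cache", "SkpSwi.dat"),
                      ("cache", "payload.bin"), ("cache", "sub", "more.bin")):
            open(os.path.join(root, *parts), "w").close()
        # Only data\app.exe is collected; the marked cache tree is skipped.
        print([os.path.relpath(p, root) for p in inventory_walk(root)])
```

Clearing `dirnames` in place is what tells `os.walk` not to descend, which mirrors the "this directory and all subdirectories" scope described above.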
Archive_reports.sms
This file is used to retain inventory reports on the client. When it exists in \ccm\inventory\temp then all hardware and software inventory XML files will be retained on the client after they are sent to the MP.
No_SMS_On_Drive.sms
Ideally this would never be needed, but the ideal and the real are not always the same, and thus it exists. There are several points in the deployment of SCCM server roles, such as a distribution point, when you want to keep SCCM from using a particular drive partition. In most cases the default SCCM behavior is to choose the NTFS-formatted drive with the most free disk space. If you don't want a specific drive to be used, create a No_SMS_On_Drive.sms file in the root of the partition to be skipped, and it will be removed from consideration by the SCCM process.
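As an illustration of that selection logic, here is a Python sketch; the `Drive` type and `pick_install_drive` helper are my own inventions, not an SCCM API. It picks the NTFS drive with the most free space while excluding any drive whose root contains the marker file:

```python
import os
import tempfile
from typing import NamedTuple, Optional

MARKER = "No_SMS_On_Drive.sms"

class Drive(NamedTuple):
    root: str          # e.g. "C:\\" (temp directories stand in below)
    filesystem: str    # e.g. "NTFS", "FAT32"
    free_bytes: int

def pick_install_drive(drives) -> Optional[Drive]:
    """Default-style selection: the NTFS drive with the most free space,
    excluding any drive whose root contains the No_SMS_On_Drive.sms marker."""
    candidates = [
        d for d in drives
        if d.filesystem == "NTFS"
        and not os.path.exists(os.path.join(d.root, MARKER))
    ]
    return max(candidates, key=lambda d: d.free_bytes, default=None)

if __name__ == "__main__":
    # Two temp directories stand in for drive roots; the bigger one is marked.
    with tempfile.TemporaryDirectory() as small, tempfile.TemporaryDirectory() as big:
        open(os.path.join(big, MARKER), "w").close()
        drives = [Drive(small, "NTFS", 50 * 2**30),
                  Drive(big, "NTFS", 500 * 2**30)]
        chosen = pick_install_drive(drives)
        # The smaller drive wins because the bigger one carries the marker.
        print("picked the unmarked drive:", chosen.root == small)
```

The point of the sketch is the filter order: the marker removes a drive from consideration entirely, before free space is ever compared.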
As much as I love it, it may occasionally be necessary to remove the System Center Configuration Manager client. I recently had a customer who could not get a machine to upgrade from 2007 to 2012, and after attempting many things we did a manual cleanup of the client, after which the 2012 client installed just fine. If you need to remove the client you can do so fairly easily by running ccmsetup.exe /uninstall. Ccmsetup should exist on all clients, usually under the Windows folder. In the event that the command line doesn't work, here is the list of things I usually check and remove to manually clean up all traces of the client so I can try a fresh install.
1. SMS Agent Host Service
2. CCMSetup service (if present)
3. \windows\ccm directory
4. \windows\ccmsetup directory
5. \windows\ccmcache directory
6. \windows\smscfg.ini
7. \windows\sms*.mif (if present)
8. HKLM\software\Microsoft\ccm registry keys
9. HKLM\software\Microsoft\CCMSETUP registry keys
10. HKLM\software\Microsoft\SMS registry keys
11. root\cimv2\sms WMI namespace
12. root\ccm WMI namespace
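The checklist above lends itself to a scripted dry run. Here is a small Python sketch (my own, not a supported Microsoft tool) that expresses the filesystem items as data and reports which are present. It deletes nothing: actually removing them, along with the registry keys and WMI namespaces, requires admin rights and Windows-specific APIs.

```python
import os

# The manual-cleanup checklist, expressed as data. This is a dry run
# only: the hypothetical report() helper inspects, it never deletes.
WINDIR = os.environ.get("SystemRoot", r"C:\Windows")

FILESYSTEM_ARTIFACTS = [
    os.path.join(WINDIR, "ccm"),
    os.path.join(WINDIR, "ccmsetup"),
    os.path.join(WINDIR, "ccmcache"),
    os.path.join(WINDIR, "smscfg.ini"),
    # the sms*.mif files would need a glob; omitted from this sketch
]
REGISTRY_ARTIFACTS = [          # removal needs winreg and admin rights
    r"HKLM\SOFTWARE\Microsoft\CCM",
    r"HKLM\SOFTWARE\Microsoft\CCMSETUP",
    r"HKLM\SOFTWARE\Microsoft\SMS",
]
WMI_NAMESPACES = [r"root\cimv2\sms", r"root\ccm"]  # removal needs WMI APIs

def report():
    """Return one line per filesystem artifact saying whether it exists."""
    return [
        f"{'PRESENT' if os.path.exists(p) else 'absent '}  {p}"
        for p in FILESYSTEM_ARTIFACTS
    ]

if __name__ == "__main__":
    print("\n".join(report()))
    print(f"(also check {len(REGISTRY_ARTIFACTS)} registry keys and "
          f"{len(WMI_NAMESPACES)} WMI namespaces, plus the two services, by hand)")
```

Running this on a machine before and after a cleanup gives a quick sanity check that the filesystem pieces are really gone.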
* Updated 9/30 to add WMI info
I had a customer doing some custom permission setting in System Center 2012 Configuration Manager the other week, and after adding folks to a few existing security roles all was good, except that these limited-rights users could not create collections. If you have looked at all the possible permissions in ConfigMgr 2012 you may have seen that there are a lot, and while you could test your way to the answer, I figured I would share it to save everyone else a bit of time.
To create collections a user needs the following permissions:
I hit an odd issue with a customer recently that was not easy to figure out or troubleshoot so I would like to share the problem and solution in hopes that others can avoid the pain we had to go through.
The issue is, in general, very straightforward. The customer had several existing System Center Configuration Manager 2007 distribution points (DPs) which they wanted to upgrade to System Center 2012 Configuration Manager DPs as part of their migration from the old product to the new. The DP migration job ran and converted the DPs over, but none of the content converted successfully, and the content was really the point of the conversion in the first place.
Before I get into the gory details let me say that I have had numerous customers convert DPs and content, so why this failure occurred I don’t know. It might have been something in the environment, it might have been a product defect, or it might have been a solar flare at the wrong time causing an electrical disruption. In any case, the solution was so easy once we figured it out that we never bothered to spend time trying to find root cause.
The early symptom of the problem was a 2389 status message indicating a failure to connect to the DP. The logs seemed to indicate some kind of WMI failure, which may not really have been the problem. After a few other troubleshooting steps, we landed on these fairly easy steps to resolve it:
That's it. You should start seeing the content converted on the DP. One query… one DPU file… problem solved! Why it happens I'm still curious to know, but I need another customer who is experiencing it to hire me to investigate. If you are a Premier customer who wants to spend your Premier hours on me and you have this issue, contact your TAM and I will be there in 1-4 months.