This blog is owned and operated by the ANZ ConfigMgr Premier Field Engineer team.
Contributors
Ian Bartlett, Matt Shadbolt, George Smpyrakis
In CM12 we have a number of changes in Software Updates. One of the most anticipated is Automatic Deployment Rules.
Yes, finally, I hear you say…
Well, let's run through creating an Automatic Deployment Rule, and one little gotcha to keep your eye on.
Software Library > Software Updates > Automatic Deployment Rules
Choose Create Automatic Deployment Rule from the Ribbon, or right-click.
In the first screen we can choose a Template
(Templates are no longer a node in the console. They are now created when creating an Automatic Deployment Rule or manually deploying updates, and are saved at the Summary screen. I'll point this out later in the post.)
You can Select to Add to an Existing Software Update Group or Create a new Software Update Group.
If you select Add to an Existing Software Update Group, a brand new group will be created the first time the Automatic Deployment Rule is run, and every time the rule runs after that the new updates are added to that group.
(NOTE: You cannot create a Software Update Group manually and then create an Automatic Deployment Rule to add new updates to that group. Even if you give it the same name and description, the Automatic Deployment Rule will still create a new group. See the figure below. The group created at 6:02 pm was created manually. I then ran the Automatic Deployment Rule at 6:07 pm, and you can see that it creates a group with a duplicate name and description.)
If you select Create a new Software Update Group every time the rule is run a new Software Update Group is created.
You can also choose to Enable the deployment after the rule is run.
Here you can choose to use Wake On LAN, and also decide whether to automatically deploy all updates and approve any license agreements, or deploy only updates that do not include license agreements.
This is where you select the requirements to select the updates to auto approve.
Here you can set a Schedule for the Rule to run. Potentially every Patch Tuesday or Daily for Forefront updates.
Or you can run the rule manually.
Similar to CM07 we can set the deployment schedule and whether the Deployment will be Mandatory.
Set the User Experience, deadline behaviour and reboot suppression.
We can now generate alerts if compliance falls below a certain threshold after a certain period of time. As before, we can select to disable alerts for Operations Manager.
Set your Deployment options
Either select an existing package or create a new one for the new updates
Select a DP or DP Group
Where to download the updates from
Choose a language
On the Summary screen you can Choose to Save your settings as a Template for future use
We now see the new Rule in the console and we can choose to Run Now from the ribbon.
The log file for troubleshooting is Ruleengine.log
We can see the Auto Deployment Rule is kicked off
Evaluating and downloading updates
Here we see it looking for an existing update group and not finding one therefore creating a new Software Update Group then adding the updates to that Group.
Back to the console. If we select Software Update Groups we now see the newly created Windows 7 Automatic Deployment group, and the deployment (yet to be enabled) on the tab below.
When we select Show Members we can see the updates applied.
And there you have it.
For full details, download the following file
Microsoft have recently released a Windows 8 and Server 2012 cumulative update KB2770917.
http://support.microsoft.com/kb/2770917
One of the important features of this update is the ability to customize the Windows 8 lock screen with corporate branding and set this across your domain joined computers using Group Policy.
From the KB:
This cumulative update includes the following performance and reliability improvements:
After installing the update, you get four new Group Policy settings
Force a specific default lock screen image: Provide a UNC or local path to your corporate lock screen logo, and all of your users will receive that as their lock screen.
Prevent changing lock screen image: After setting the corporate lock screen image, enable this option if you don’t want your users to have the ability to personalize the lock screen image.
Prevent changing start menu background: Use this option to stop your users from changing the Start Menu background colour. Whatever the colour of the Start Menu background was when the machine was deployed will not be changed.
Do not display the lock screen: Enabling this setting will remove the lock screen for any user who isn’t required to press CTRL+ALT+DEL to log in.
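If you want to stage the same settings on a single test machine without waiting for a GPO refresh, the policies land as registry values under the Personalization policy key. Here is a sketch of an equivalent .reg import; the UNC path is a placeholder for your own logo, and you should verify the value names against what a GPO-applied machine actually shows before relying on them:

```reg
Windows Registry Editor Version 5.00

; Sketch only: value names assumed for the Personalization policy key.
; Confirm against a machine where the GPO has applied before deploying.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\Personalization]
"LockScreenImage"="\\\\contoso.com\\netlogon\\lockscreen.jpg"
"NoChangingLockScreen"=dword:00000001
"NoChangingStartMenuBackground"=dword:00000001
"NoLockScreen"=dword:00000001
```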
After configuring all the settings and applying the GPO, my corporate machines lock screen now looks like this, and my users are stuck with it!
Matt Shadbolt
Firstly, locate your most up-to-date image and make a copy of it. This is so we can inject the newest Windows updates into the mounted WIM without risk of damaging a working WIM. I suggest copying the WIM to a temp location. Also, put the Windows update that you want to apply into an Updates folder.
Next, mount your image in the temp location.
DISM /Mount-Wim /WimFile:C:\TempMount\install.wim /index:1 /Mountdir:C:\TempMount\Mount
Now inject the Windows Update you need to apply
DISM /image:C:\TempMount\Mount /Add-Package /Packagepath:C:\Updates\
Finally, save and unmount the image
DISM /Unmount-Wim /Mountdir:C:\TempMount\Mount /commit
DISM /Cleanup-Wim
While running updates manually like this is an easy way to apply a few updates, hundreds of updates require more work. Here’s how you would apply the updates using PowerShell.
$UpdatesPath = "C:\Updates\*"
$MountPath = "C:\TempMount\Mount"
$WimFile = "C:\TempMount\install.wim"
DISM /Mount-Wim /WimFile:$WimFile /index:1 /Mountdir:$MountPath
$UpdateArray = Get-Item $UpdatesPath
ForEach ($Updates in $UpdateArray)
{
    DISM /image:$MountPath /Add-Package /Packagepath:$Updates
    Start-Sleep -s 10
}
Write-Host "Updates Applied to WIM"
DISM /Unmount-Wim /Mountdir:$MountPath /commit
DISM /Cleanup-Wim
Using SCCM 2007 Deployment Packages makes getting these updates really simple. Package up the updates like you would normally, then set the $UpdatesPath variable above to the SMS package location.
Happy patching!
If you need to distinguish whether or not a site has been upgraded to ConfigMgr 2012 SP1, here is the process and version numbers.
1. Open the ConfigMgr console
2. Browse to Administration > Site Configuration > Sites
3. Right-click on the site you need information for, and select Properties
4. You’ll find the site version and build number here
ConfigMgr 2012 RTM
Version: 5.00.7711.0000 Build number: 7711
ConfigMgr 2012 SP1
Version: 5.00.7804.1000 Build number: 7804
I have been doing a number of customer engagements recently around Windows 8 deployments through ConfigMgr 2012 SP1 and one question I often ask our customers during the planning phase is “Will you be integrating MDT 2012 Update 1 into your ConfigMgr 2012 SP1 environment?” The general response I get is “What are the benefits…?” Well the short answer is A LOT!!, but one of the cool new reasons is MDT 2012 Monitoring and the ability to use this to monitor your ConfigMgr 2012 SP1 OSD deployments.
There are a few prerequisites required to get the FULL functionality of MDT 2012 Monitoring, in particular the option to use DaRT Remote Control to your client machine during the build, even while in PXE. This requires a custom boot image to be created with the DaRT 8 utility embedded. As DaRT is part of the Microsoft Desktop Optimization Pack (MDOP), you will need an MDOP subscription.
However, if you do not have an MDOP subscription you can still utilise the MDT 2012 Monitoring feature for your ConfigMgr 2012 SP1 deployments.
In this session I will step through both configuring MDT 2012 Update 1 Monitoring for ConfigMgr 2012 SP1 OSD deployments as well as how to create a DaRT 8 embedded boot image to get the full power of MDT 2012 Monitoring.
- Open the MDT management MMC
- Right Click Deployment Share \ New Deployment Share
- Complete the Wizard
- Right Click your Deployment Share and select Properties
- Select the Monitoring Tab
- Enable Monitoring for this Deployment Share
- Navigate to the source directory that you set for your MDT Settings Package
- If you are not sure where it is check your ConfigMgr Package
- Open your CustomSettings.ini file using notepad
- Add the following text to the end of the file: EventService=http://<server>:9800
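For context, the tail of your CustomSettings.ini might end up looking something like this once the line is added (the server name here is a placeholder; use your own MDT server's FQDN):

```ini
[Settings]
Priority=Default

[Default]
; Point clients at the MDT monitoring service (port 9800 by default)
EventService=http://MDT01.contoso.com:9800
```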
- Update your Distribution Point to ensure the Settings Package is updated.
NOTE: If you want to confirm your DP has been updated you can follow the steps outlined in one of my previous blogs – ConfigMgr 2012 Content Library Overview
- Open the MDT 2012 Update 1 Management Console
- Expand your MDT Deployment Share
- Select the Monitoring Node
- Select the build you want to monitor and select Properties
Note: You will not see your deployment appear until after the first “GATHER” has run during the Task Sequence.
In the next section I will show you how to take monitoring further by using DaRT 8…
You will need to have integrated MDT 2012 Update 1 with your ConfigMgr 2012 SP1 environment and have a MDT 2012 Deployment Share configured before proceeding.
Note: After Integrating MDT 2012 Update 1 with your ConfigMgr 2012 SP1 environment you will have the option to create a new MDT Boot Image directly out of the ConfigMgr UI Management console. However you will not have the option to select DaRT 8. The following steps will be required to make this option available.
The image above is what options you have out of the box when creating a custom MDT Boot Image in ConfigMgr 2012 SP1.
NOTE that DaRT 8 is not an available option.. YET!!
This is only available for DaRT 8
- Complete the DaRT 8 Installation wizard
- Using File Explorer, navigate to the C:\Program Files\Microsoft DaRT 8\v8 folder.
- Copy the Toolsx86.cab file to C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x86
- Copy the Toolsx64.cab file to C:\Program Files\Microsoft Deployment Toolkit\Templates\Distribution\Tools\x64
- Open the ConfigMgr 2012 Management Console
- Select Software Library \ Operating Systems \ Boot Images
- Right Click Boot Images and select “New MDT Boot Image”
- Complete the wizard
- You will now notice we have a DaRT 8 option..
- As we have deployed with a DaRT 8 embedded Boot Image, we now have the option to connect to the client machine using DaRT Remote Control
You can now view your deployment status for any machine from start to finish even while it is in WinPE..
I hope you have found this information useful and will consider the benefits of integrating MDT 2012 Update 1 into your ConfigMgr 2012 SP1 environment, even if it is just for the monitoring components.
Until next time…
Dynamic Collection Updates and Delta Discovery were features introduced in SCCM 2007 R3 and promised a lot. Finally, admins thought, they could limit their full Collection updates to 1 per day, their AD System Group Discovery to 1 per day, and use Dynamic Updates/Delta Discovery running every 5 minutes to pick up changes. Problem is, Dynamic Collection Updates/Delta Discovery did not work as we had hoped. In the end, Dynamic Collection Updates only worked on *newly* discovered systems… whether that’s a first-time AD System Discovery, OSD or manual client install. And hey, that’s great if you’re doing OSD!
If you want your Advertisements to be available as soon as a machine has been built, then Dynamic Collection Updates are for you. But for alleviating the need to run your collection schedules every 15 minutes (against MS Best Practice) for software distribution, Dynamic Updates didn’t hit the mark.
I’m happy to say that ConfigMgr 2012 now does Dynamic Updates and Delta Discovery in the way we had all hoped!
And here’s the proof.
Firstly, let's look at enabling CM12 Delta Discovery, creating a new group, adding a computer to the group and letting ConfigMgr 2012 do its thing:
CM12 AD Group Discovery Settings
SCCM 2007 AD System Group Discovery Settings
Before Delta Discovery is run:
SELECT * FROM v_R_System_SystemGroupName
From the Computer Object in SCCM
After Delta Discovery is run:
And for final proof – the adsgdis.log
So that’s great news! The Delta Discovery now works with System Group membership!
One last thing we need to check: whether or not Dynamic Collection Membership now picks up changes to existing machines, not just new objects created by DDRs. To check this, we're going to create a Collection with a membership query looking for an AD group. If all goes well, within 10 minutes at most, the computer should become a member of the new Collection.
This time, we’re going to watch two logs – adsgdis.log and colleval.log and we should see the updates pretty soon.
Adsgdis.log
Colleval.log
And now, within about 6 minutes we can see the Collection is now populated with the object!
The ConfigMgr 2012 Package Conversion Manager (PCM) tool allows administrators to convert their legacy SCCM 2007 packages and programs into the new ConfigMgr 2012 Applications and Deployment Types. This applies to all legacy packages apart from virtual App-V packages, which are automatically converted to the new application model during the migration process. Once your legacy packages have been migrated and you have installed the Package Conversion Manager utility, it is just a matter of analysing your packages to determine which readiness state each is in, and then converting them.
There are still a number of bugs in the RTM version of the PCM tool but all in all it works well.
It is worth noting that any legacy packages configured to run another program first will only create the CM2012 Application Dependency if the program is in a different source Package ID, i.e. "Package with program dependent upon another program in the SAME package will not have that dependency converted" (from TechNet). To get around this you can just manually create the dependencies once you have converted your packages to Applications.
For this demonstration I am using an Adobe Reader X Package/Program with Package ID CEN0001A
We need to Analyse a package to determine whether our package is in an appropriate state to be migrated to the new Application model or whether some remediation work will be required to prepare the package to be converted to an Application and more importantly into a Deployment Type.
Expect to see either of these results after you have analysed your legacy package:
1. Automatic – Your package is in a state that will allow for a successful migration to an Application
2. Manual – Some remediation work on your legacy package is required before it can be successfully migrated to an Application Deployment Type.
Once the analysis is done you will see the readiness state under the Summary tab of your Package
If your legacy package returns an Automatic readiness state, then you can simply select the Convert Package option (this option will be GREYED out if Automatic is not returned) to convert your package to an Application.
If your legacy package returns a Manual readiness state, then some work needs to be done to fix some issues before you can convert it to a CM 2012 Application.
So we can see that my Adobe Reader package is not quite ready to be converted to a CM 2012 Application due to the Readiness state indicating Manual and the issue provided is because no Detection Method has been defined.
NOTE: Application Detection Methods are mandatory for any CM 2012 Application so expect this as a common issue.
So how do we fix this issue so my Adobe Reader package can be converted to an Application? We use the Fix & Convert tool to add the required information.
Wizard Screen Shots…
THATS IT, WE’RE DONE!!
So if we now have a look under Applications we will see that we have an Adobe Reader X Application with a Deployment Type created for us.
AS SIMPLE AS THAT… GOOD LUCK!!
ConfigMgr hardware inventory is a very important and widely used feature. We use it to collect all sorts of information: hardware info such as disk configuration, amount of RAM, etc. Hardware Inventory is also used to query Add/Remove Programs to see what software is installed on a machine.
All hardware being queried by default can be found in your SMS_Def.mof file (%ConfigMgrInstallDir%\Inboxes\clifiles.src\hinv). There are heaps of WMI classes that are enabled by default, and even more that you can manually enable. But sometimes we need to collect hardware information for a non-standard piece of hardware, or a device that isn’t included in the SMS_DEF.mof file.
To cater for this type of inventory, we provide a function to extend the classes using special files called IDMIFs and NOIDMIFs.
Creating custom Hardware Inventory classes
By default the IDMIF/NOIDMIFS collection is not enabled, but to enable it open your Hardware Inventory Client Agent and tick the MIF collection boxes.
There's an interesting warning here – Enabling these settings increases your security risk by permitting ConfigMgr to collect and process unvalidated client data.
IDMIFs/NOIDMIFs are unlike the SMS_DEF.mof file in that they are unvalidated. This means they aren’t checked against any standard and can contain pretty much anything. The problem is that any user who is a local administrator on their machine can add a NOIDMIF file to their CCM directory, and ConfigMgr will process it directly into the database and create custom tables. While this isn’t a huge problem, it would not be hard for a rogue user to create hundreds or thousands of extra tables in your ConfigMgr database.
In my demo, I’m going to create a custom Hardware Inventory class called Wide World Asset Numbers. A company could use this class as a way to track asset numbers of their clients.
First of all, from one of my clients I’m going to browse to %Windir%\System32\CCM\Inventory\Noidmifs
To create this class, we create a text file in the Noidmifs directory, give it a relevant name (addclass1.mif) and insert something like this:
Start Component
  Name = "System Information"
  Start Group
    Name = "Wide World Asset Numbers"
    ID = 1
    Class = "wideWorldAssetNumbers"
    Key = 1
    Start Attribute
      Name = "Computer Asset Number"
      ID = 1
      Type = String(10)
      Value = "414207"
    End Attribute
  End Group
End Component
This would create a class called wideWorldAssetNumbers in your Hardware Inventory. For more info, see http://technet.microsoft.com/en-us/library/cc181210
If I now run a Hardware Inventory on the client I’ll see in the InventoryAgent.log that it was processed (I’m running this on a Site Server by the way, that's why the path is SMS_CCM\Inventory)
And if I check the hardware inventory on the client, I’ll now see a new class called Wide World Asset Numbers
You can also see a new table in the ConfigMgr database
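If you'd rather pull those values straight out of SQL, you can query the new table. The exact table and view names are generated by ConfigMgr from the MIF class, so treat this as a sketch and confirm the real name in SQL Management Studio first:

```sql
-- Table name assumed from the MIF class name; verify the actual
-- generated table/view name in your own ConfigMgr database
SELECT * FROM dbo.wideWorldAssetNumbers
```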
And that’s how easy it is to create a new Hardware Inventory class, and a new ConfigMgr database table!
With all of the excitement surrounding the Beta 1 release of ConfigMgr 2012 SP1, there’s been a fairly big feature that's been rarely spoken about… Deploying iOS/Android apps to ‘un-managed’ devices!
This post won't be a deep dive into creating and deploying iOS/Android apps; it will just take you through the high-level announcements and give you something to start thinking about before SP1 hits RTM.
Firstly, you’ll want to check out the iOS app demo from the presentation made by Brad Anderson at TechEd Europe. The demo shows an iOS device using the Intune service to access and install a custom LOB (Line of Business) application.
http://channel9.msdn.com/Events/TechEd/Europe/2012/KEY01 (1 hour 27 into the video for those playing at home)
Secondly, you’ll want to download and test the Beta 1 release of ConfigMgr 2012 SP1
http://www.microsoft.com/en-us/download/details.aspx?id=34607
NOTE: This is full installation media. You don’t need to build an RTM server and upgrade. You do however, need to install the new WADK separately, otherwise the prereq check will fail.
ConfigMgr 2012 makes the deployment a snap, because all we need to do is add a new Deployment Type to our current applications. This means we can have a user receive the LOB app via MSI if they’re in Windows, and the iOS app if they request it on their iPhone. You can really see the benefit of the new ConfigMgr 2012 App Model when you think about all your users utilizing different devices and different deployment types… it’s really rather exciting.
The process flow would be:
In terms of installing the application, the application would be delivered via a pull function. That is, we can’t forcibly install the application on a target iOS device, but instead we make the application Available for our users to install.
Finally, I wanted to add a couple of screenshots – one from the iOS client and one from the ConfigMgr console.
1. Creating a new iOS Deployment Type
2. Installing the App on an iOS device
There will be HEAPS more details coming soon… so stay tuned!
Matt
In ConfigMgr 2007 we had a registry key called CacheExpire which would allow a client to start a new PXE session after 60 minutes. The idea is, if a build is mandatory the cache will hold onto the session to ensure it doesn’t get into a rebuild loop.
It was a bit of a pain when you were developing/testing your OSD builds because if it failed, you had to wait for the cache to run out or clear it manually. So many engineers would change the key to 120 seconds or so.
HKLM\Software\Microsoft\SMS\PXE
CacheExpire=120
I’ve recently been testing a new build in ConfigMgr 2012 using PXE as my boot method, and went in search of the CacheExpire value to speed up the process, and found it's no longer there. In fact, the entire PXE key is missing.
The reason is quite simple, in ConfigMgr 2012 the PXE Service Point is no longer a separate role – it’s now integrated with the Distribution Point role. So I checked in the DP registry key for a CacheExpire… no luck.
HKLM\Software\Microsoft\SMS\DP
So I decided to try and create one and see how it went… bingo, my PXE Cache is now 120 seconds.
For the lazy, here’s the exported key and value
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SMS\DP] "CacheExpire"="120"
Heaps of PFEs speaking at this year's TechEd Australia!
Ian Bartlett & Mohnish Chaturvedi – Microsoft Application Virtualization 5.0 Introduction (Wed 12 Sep, 9:45-11:00)
Grant Holliday – From Server to Service: How Microsoft Moved Team Foundation to Azure (Thu 13 Sep, 11:30-12:45)
Matt Shadbolt & George Smpyrakis – Implementing SCM for Compliance in SCCM 2012 (Thu 13 Sep, 1:45-3:00)
Alex Pubanz & Jesse Suna – Kick Starting your Migration to Windows Server 2012 (Friday 14 Sep, 8:15-9:30)
Chad Duffy & Tristan Kington – DirectAccess: Your Next Gen Remote Access Experience (Fri 14 Sep, 11:30-12:45)
Shyam Narayan – Application Hosting Models in SharePoint 2013 (Fri Sept 14, 11:30-12:45)
So I posted this on my mattlog.net blog ages ago, but for some reason it didn’t make the migration over to our new Technet ConfigMgrDogs blog, so thought I should repost as a new entry.
I still find it staggering how many customers I visit who had no idea about these features!
One of the more important but commonly missed features is the Error Lookup option in Trace32’s Tools > Error Lookup (Ctrl + L) menu.
The Error Lookup tool will return descriptions of cryptic error codes from:
Windows error codes
WMI error codes
Winhttp error codes
When trying to troubleshoot specific issues such as site replication issues, it’s sometimes necessary to open more than one log file at once. Windows 7’s window snap feature makes viewing two logs side-by-side really easy, but sometimes a more accurate timeline is needed across log files.
If you select Open in Trace32, Ctrl-click multiple log files in the open dialog box, and tick Merge selected files, all the selected log files will merge into one large super log. The log entries are automatically sorted by time, so it’s super easy to watch ConfigMgr process certain things and log the progress across multiple logs.
In this quick example I’m just viewing the process for finding a clients default management point
As you can see, the client is logging to both LocationServices.log and ClientLocation.log and it’s quite easy to read the timeline of what is going on.
Lastly, a minor but handy tip: Trace32 by default will save the last log location that you opened. This is really handy as you don’t have to browse to the logs path every time you want to read SMS logs. It is a bit of a pain though when you use Trace32 on a client, because the first time you launch Trace32 on a given machine, it defaults to the %userprofile%\Desktop directory. The Last Directory registry value found at HKCU\Software\Microsoft\Trace32 is the key that controls the default open location. If you add a GPO that updates your clients to %windir%\System32\CCM\Logs\, every machine you jump on will automatically open Trace32 at the client log location.
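If you'd rather not click through regedit, here's a sketch of that tweak as a .reg file you could push out. Note it's per-user under HKCU, so it needs to apply in the user context, and the literal path assumes Windows is installed to C:\Windows, since plain .reg string values won't expand %windir%:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Trace32]
"Last Directory"="C:\\Windows\\System32\\CCM\\Logs"
```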
The Product Group have recently announced the Beta of ConfigMgr 2012 SP1 at TechEd 2012 North America.
You can view the announcement at http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/MGT309
Enhancements include:
Hi All, it’s been a while since my last post, and the hit count on one of Matt's posts just passed my article on Auto Deployment Rules (unashamed plug of my old blog post to try and get the numbers up). So I decided, okay, it’s time to get another one done.
NOTE: This is not meant to be a step-by-step guide, merely some general advice and a run-through. I always recommend doing any upgrade process in your test lab environment before even considering upgrading in production. If you don't have one, create one.
The environment I'm working with is a lab that was set up when ConfigMgr 2012 RTM came out. It sits on Server 2008 R2 SP1 and SQL 2008 R2 SP1 CU4, probably not too dissimilar to most 2012 RTM environments. I have one CAS, one Child Primary and one Secondary Site.
The first thing I want to do is run the prerequisite checker. On my ISO, Prereqchk.exe sits in the folder SC2012SP1_RTM_SCCM_SCEP\SMSSETUP\BIN\X64.
The advantage of running this instead of going through the setup GUI and then seeing what prerequisites are required is that you avoid having to exit setup and run through the entire setup process again. To get the command line switches, simply run prereqchk /?
Let's run it on each server separately to see what is needed.
CAS Server (CM12CAS.Contoso.com)
Open an administrative command prompt and run the following command
prereqchk /CAS /SQL <FQDN of the SQL Server> /SDK <FQDN of the SDK Server>
Feel free to add any other switches that you would like, but this will serve our current purpose.
OK, so we can see a couple of warnings, which I expected in our lab environment, and a number of things we need to fix. Anything that is a warning is usually just a best practice alert, such as the WSUS on site server warning, or something for you to be aware of as a potential issue, such as verifying site server permissions to publish to AD. Let's go through the errors.
1) We can ignore the two errors Existing Configuration Manager server components on site server and Dedicated SQL Server Instance, because prereqchk is assuming we are trying to do a fresh install on this site server, not an upgrade.
2) User State Migration Tool (USMT) installed, Windows Deployment Tools installed and Windows PreInstallation Environment installed are all part of the latest Windows Assessment and Deployment Kit, which you can download from here: http://www.microsoft.com/en-us/download/details.aspx?id=30652
3) SQL Server version: I need to upgrade SQL from 2008 R2 SP1 CU4 to either CU6 or SP2. (See Supported Configurations for Configuration Manager.)
NOTE: If you are going to upgrade from SQL 2008 RTM/R2 to SQL 2012 after upgrading to ConfigMgr 2012 SP1, you need to upgrade in the following order: 1st - CAS, 2nd - Secondary Site, 3rd - Child Primary of that secondary site.
From Technet - http://technet.microsoft.com/en-us/library/gg682077#BKMK_SupConfigUpgradeDBSrv
For Configuration Manager SP1 only: Configuration Manager with SP1 supports the in-place upgrade of SQL Server 2008 or SQL Server 2008 R2 to SQL Server 2012 with the following limitations:
• Each Configuration Manager site must run service pack 1 before you can upgrade the version of SQL Server to SQL Server 2012 at any site.
• When you upgrade the version of SQL Server that hosts the site database at each site to SQL Server 2012, you must upgrade the SQL Server version that is used at sites in the following order:
o Upgrade SQL Server at the central administration site first.
o Upgrade secondary sites before you upgrade a secondary sites parent primary site.
o Upgrade parent primary sites last. This includes both child primary sites that report to a central administration site, and stand-alone primary sites that are the top-level site of a hierarchy.
Important
Although you upgrade the service pack version of a Configuration Manager site by upgrading the top-tier site first and then upgrading down the hierarchy, when you upgrade SQL Server to SQL Server 2012, you must use the previous sequence, upgrading the primary sites last. This does not apply to upgrades of SQL Server 2008 to SQL Server 2008 R2.
4) If you haven't already done so, the following WSUS updates may also need to be applied:
An update for Windows Server Update Services 3.0 Service Pack 2 is available (KB2734608)
An update for Windows Server Update Services 3.0 Service Pack 2 is available (KB2720211)
PLEASE read and understand what may occur in your environment before applying these hotfixes, and of course test in your lab environment first.
Install WADK
OK, so let's install the WADK components on my CAS. Download the WADK from the link above, run adksetup.exe, and choose to download the kit to a network location so it is available for installation. (Note: this may take a while.)
From that location on your server share, run adksetup.exe. Specify your installation directory and click Next
Select Yes or No depending on your preference and click Next
Accept the license agreement
All you should need are the Deployment Tools, Windows Preinstallation Environment (Windows PE) and User State Migration Tool (USMT). Select these and click Install
Again, this may take a while to install.
NOTE: Check your build via PXE after this is done as you may potentially need to remove and redistribute your x64 Boot Image. Ensure you refer to the SMSPXE.log file for any errors.
SQL Upgrade
I'm going to upgrade to SQL 2008 R2 SP2. (If you wish to do the same, you can get SP2 from here: How to obtain the latest service pack for SQL Server 2008 R2)
Run the SP2 executable
Click Next
After the file check click Next
Click Update
Click Close
Click OK and restart the server. I'd suggest checking the SQL logs after the reboot to ensure there are no errors that you may need to look into.
Upgrade ConfigMgr to SP1
Now we can start the SP1 Upgrade
Double click on splash.hta to bring up the splash screen
You'll see a familiar screen. Click Install.
Select Upgrade this Configuration Manager Site and Click Next
Enter your product key and click Next
Accept the license agreement and click Next
You can either download the latest prerequisite files from the internet and save them on a network location, or use an already downloaded copy. In this case they are available on my copy of the ISO so I'll just grab them from there. Obviously over time I'd suggest you download the latest version from the internet.
Click OK and Next
Select your Server Language and click Next
Select your client language and click Next.
Click Next again at the Summary screen
Finally we reach the prerequisite check screen. (You can now see the value in using prereqchk.exe.) For more detail you can look at the ConfigmgrPrereq.log file in the root of C:.
Click Begin Install
You should now see a log file called C:\ConfigmgrSetup.log; open this up to watch how the upgrade process is going.
We can see that after a successful connection to the database we are about to Upgrade the CAS Server
After about 40 minutes in my lab the upgrade process is complete. See the entry in the setup log file above
We can now click Close on the splash screen
Now let's check and see if the Site has upgraded successfully. Open the ConfigMgr console and select Administration > Site Configuration > Sites > CAS right click and select Properties
As per Matt’s previous blog we can now see that our CAS is at Version 5.00.7804.1000 and Build number 7804
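If you'd rather check the version without clicking through the console, the SMS Provider exposes it via WMI. A hedged sketch — the namespace uses your site code ('CAS' here is this lab's site code; substitute your own), run on the site server:

```powershell
# Query the site version and build number via the SMS Provider.
# 'CAS' is an example site code - replace with yours.
Get-WmiObject -Namespace 'root\sms\site_CAS' -Class SMS_Site |
    Select-Object SiteCode, Version, BuildNumber
```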
I'm also going to check my database replication and ensure everything is functioning correctly. One other interesting thing to note is the change in the Client Installation Settings, configured under Administration > Sites and then Hierarchy Settings in the ribbon.
RTM
SP1
We now no longer have a choice to select the latest version for the Automatic Client Upgrade option.
Child Primary
OK, so let's now move onto the Child primary
Open an administrative command prompt and run the following command
prereqchk /PRI /SQL <FQDN of the SQL Server> /SDK <FQDN of the SDK Server>
So we can see the exact same prereqs as the CAS, so I will run through the same process as per the CAS of installing the latest WADK and upgrading SQL.
Upgrade to SP1
The screen shots are exactly the same for the child primary so I won't bore you with those
Once we start again, we can see in the Setup log file after the SQL connections are successful the upgrade will begin
Then after about 30 minutes we can see that the setup is now complete
We can also see a few extra tasks have been done on the Child Primary.
Let’s check some of the logs and the console to ensure it has upgraded successfully.
To see if the components have reinstalled without issue, we can check the sitecomp.log under <SCCMInstallFolder>\Logs. We can see where the bootstrapper starts successfully after SP1 finishes installing. You can also see it successfully reinstall each component.
See the figure below for all of the entries filtered in the log file.
You may also notice a few new components being installed.
Mpcontrol.log shows us that the management point is communicating successfully.
Now jump into the console and check the site version and database replication. We also implicitly confirm that the provider is functioning, since we need it to be working to get into the console at all.
All looks good. Also the picture of the Cloud is a bit of a giveaway.
DB Replication also looks nice and healthy.
Secondary Site
OK, so now let's look at our secondary site
prereqchk.exe /SECUPGRADE
We can see that there are 2 points I need to fix before attempting an upgrade.
1) The upgrade process will not automatically install a supported version of SQL, so we need to do it manually first.
2) SQL Express does not have a static port set, so we will need to go into SQL to set a static port of 1433.
We have already run through the SQL upgrade, so I will just go through setting the static port. On the secondary server, open up SQL Server Configuration Manager.
Ensure the local Secondary server is selected (Or remote server name if you've started it from another server)
Expand SQL Server Network Configuration and select Protocols for CONFIGMGRSEC Then double click on TCP/IP under protocol name
You will notice that SQL has both Dynamic and a static port set for IPAll
Let's delete the Dynamic entry and click Apply.
Click OK and restart the appropriate services. After I have upgraded SQL and changed the port, I run my prereq check again.
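You can also confirm the port change without the GUI by reading the instance's registry keys. A hedged sketch — the instance ID key name varies by SQL version and instance, so verify the path on your own server:

```powershell
# Resolve the instance ID for the CONFIGMGRSEC instance, then read its TCP settings.
# The registry layout shown is typical for SQL 2008/2008 R2 - check it on your box.
$inst = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL').CONFIGMGRSEC
$tcp  = Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\$inst\MSSQLServer\SuperSocketNetLib\Tcp\IPAll"
"Static port: '$($tcp.TcpPort)'  Dynamic ports: '$($tcp.TcpDynamicPorts)'"
```

After the change you would expect a static port of 1433 and an empty dynamic ports value.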
We can now see that the errors are gone and we should be able to upgrade our secondary site successfully. You also may have noticed that I have warnings on all of the servers for SQL Server process memory allocation. That is because SQL requires a minimum of 8 GB of RAM for a CAS and primary site, and 4 GB for a secondary site. It will still run with less, as per my lab VMs, but you will take a performance hit.
As most of you will know by now, we install and upgrade a secondary site via the ConfigMgr console and not directly on the box itself. So we can open the console on either our CAS or our child primary. I'll do it from the child primary just to speed things up.
Go to Administration > Sites and select the secondary site. You will now see an Upgrade option available on the ribbon. When you're upgrading, you have two choices for monitoring what is happening. 1) Click on Show Install Status
This brings up a step-by-step guide to let you know what stage the installation is at. Here we can see that the prereqs have already run and the Bootstrap service has already been installed ready for the upgrade.
2) Look at the local log files sitting in the root of C:\
As with the CAS and the Child primary, we have the exact same prereq Wizard and setup log files that go into much more depth should there be any issues with the installation. Although we will need to watch the logs initially on the Child Primary before the setup and bootstrap begins on the secondary site.
I will click on Upgrade, get the above warning, then click Yes.
This log file is from the Child Primary
With each action we can see the corresponding action in the appropriate log file if we wish. Once prereqs are finished we can see that the bootstrapper begins its process
And on the secondary site we can see the upgrade process begin
And we can see that now the installation is complete
I'm going to check the SMSEXEC.log to see if my components have started.
I'm also going to check MPControl.log to see if my management point is functioning as expected.
Now I'll check the version and database replication status from the console.
We can see that we are on build 7804
And we can also see that database replication is working as expected.
I would suggest checking that all of your components on each server are functioning correctly. Keep your eye on the status messages and alerts, in case any of them fail and need further attention. You can do this from Monitoring > Site Status and Component Status. Below we can see an example of an issue with my Software Update Point after the SP1 update that needs attention.
Looking at the WCM.LOG you can see that I haven't applied the WSUS updates I mentioned earlier in this blog.
I will download and apply these updates.
After a restart of the WSUS Configuration Manager component (this isn't strictly necessary; the next time it polls it would do this anyway) you can see that it now has a supported version and is running through setting up the updated component.
Conclusion and next steps.
So what we have seen here are various methods to run through the upgrade process and various log files and GUI settings in the console that we can use to follow the process. If you plan and get the prerequisites setup correctly before you begin you should have a fairly smooth SP1 upgrade.
Next steps. 1) Update your ConfigMgr Client package to all of your Distribution points and plan out the client upgrade. 2) Think about and potentially plan an upgrade to SQL 2012.
George Smpyrakis
One of the many tasks an SCCM admin faces is checking for ConfigMgr duplicate computer records. There are many ways that we can get duplicate records, but here are the three most common:
Duplicate MAC addresses
I normally see this in environments where VDI is common. Generally, a VDI admin has duplicated machines without giving each a unique MAC address. This is bad and should be avoided. If your VDI admin has created a bunch of computers with the same MAC, we need to delete them from SCCM. To find these objects, run the following TSQL query:
SELECT dbo.v_RA_System_MACAddresses.MAC_Addresses0,
       Count(dbo.v_R_System.Name0) AS SystemCount
FROM dbo.v_R_System
RIGHT OUTER JOIN dbo.v_RA_System_MACAddresses
    ON dbo.v_R_System.ResourceID = dbo.v_RA_System_MACAddresses.ResourceID
GROUP BY dbo.v_RA_System_MACAddresses.MAC_Addresses0
ORDER BY SystemCount DESC
Duplicate names with different SMSBIOSGUIDs
These are the most common duplicate objects. Most likely this occurs due to OSD or conflicting discovery cycles. We basically end up with two computer objects with the same name/hardware/MAC but different SMSBIOSGUIDs. This causes much confusion for ConfigMgr because it often doesn't know how to process the inventory for the phantom object. To find these objects, create an SCCM query (or query-based collection) with the following query statement:
select R.ResourceID, R.ResourceType, R.Name, R.SMSUniqueIdentifier,
       R.ResourceDomainORWorkgroup, R.Client
from SMS_R_System as r
full join SMS_R_System as s1 on s1.ResourceId = r.ResourceId
full join SMS_R_System as s2 on s2.Name = s1.Name
where s1.Name = s2.Name and s1.ResourceId != s2.ResourceId
Duplicate Hardware IDs
One other way to look for dupes is to use the HardwareID. Use the following TSQL to find the objects with duplicate HardwareIDs:
SELECT Name0, Hardware_ID0, Count(Hardware_ID0) AS SystemCount
FROM dbo.v_R_System
GROUP BY Hardware_ID0, Name0
ORDER BY SystemCount DESC
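If you'd rather run these T-SQL checks from PowerShell instead of SQL Server Management Studio, Invoke-Sqlcmd (from the SQL Server PowerShell tools) works. A sketch only — 'SQLSERVER' and 'CM_PRI' are placeholders for your SQL server and site database:

```powershell
# Requires the SQL Server PowerShell tools (sqlps / SQLServer module).
# 'SQLSERVER' and 'CM_PRI' are placeholders - substitute your own server and site DB.
$query = @"
SELECT Name0, Hardware_ID0, Count(Hardware_ID0) AS SystemCount
FROM dbo.v_R_System
GROUP BY Hardware_ID0, Name0
ORDER BY SystemCount DESC
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER' -Database 'CM_PRI' -Query $query |
    Where-Object { $_.SystemCount -gt 1 }
```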
Had an annoying one this afternoon. While trying to install the new(ish) SCCM 2012 Beta 2, the Updated Prerequisite Components wizard wouldn't download the prereqs. I was able to run setupdl.exe manually, but when providing the prereq path to the Updated Prerequisite Components wizard it would still fail.
<04-28-2011 14:38:50> INFO: setupdl.exe: Start <04-28-2011 14:38:50> INFO: setupdl.exe: Finish
Really easy fix in the end. You just need to make sure the prereqs are stored in a directory without any spaces in the name, and bingo. :/
When you distribute content to a distribution point in ConfigMgr 2012, a check is done against the Content Library to see whether any of the source files referenced by your application/package already exist in the Content Library. I.e. only one copy of every file is stored on a DP, regardless of how many packages reference that file.
The ConfigMgr 2012 Content Library (SCCMContentLib) consists of 3 subfolders:
1. PkgLib
2. DataLib
3. FileLib
In the Package Library folder you will find a PackageID.ini file for every package. The PackageID.ini file for each package will give you the details for the Data library content and what file you will need to look for.
In the Data Library folder you will find a PackageID.ver.ini file for each package and a PackageID.ver folder. The folder is a "copy" of the exact package folder structure but without any of the real files. For each file in the original package you will find a corresponding .INI file with information about the original file. To find the HASH value for the file you are interested in, open the relevant .INI file and make a note of the first 4 digits of the HASH value; this will be needed to find the data in the FileLib directory.
The File Library folder is where you will find the actual files that are used in the different packages. Files are grouped into folders, and the folder for each file is named with a 4-digit prefix of the HASH value of that file found in the DataLib. The FileLib is where the nuts and bolts of the application are stored. You will see three files under each folder:
1. A file with no extension (FILE); the file name is the HASH value. This file contains the actual source data for that part of the application. If the application only had a single .EXE or .MSI file to execute, then you could copy the file to a location such as the Desktop, rename it to the original file name, i.e. .EXE or .MSI, and start the installation.
2. A Hash.INI file, which contains a link between the file and the packages that use it. If you have multiple packages referring to the same file, you will see an entry for each package ID.
3. A Hash.SIG file, which contains the original package signature.
For this example I will use my Microsoft Office 2010 package which has Package ID IBC0000B.
1. Navigate to the SCCMContentLib directory on your DP.
2. Open the PkgLib directory.
3. Open the configuration file referencing your Package ID, ie IBC0000B, and make a note of the ContentID it is referencing, ie Content_30b777f9-be…………..
4. Navigate to SCCMContentLib\DataLib and open the directory that was referenced in your package configuration file, ie Content_30b777f9_30b777f9-be51-484b-8327-77bf2ef75169.1
Notice that folder Content_30b777f9_30b777f9-be51-484b-8327-77bf2ef75169.1 holds reference to all files in your source location, ie you will see all the Office 2010 files being referenced. NOTE: These files do not contain any of the actual source data, only references to the HASH values for each
5. Open any of the configuration files referencing a file in your package to gain the HASH value to the actual content. For this example I will use the setup.exe file as the example.
6. Take note of the first 4 digits in the hash, so for the setup.exe file it will be “97E6”
7. Navigate to SCCMContentLib\FileLib\<4 digit hash> - eg SCCMContentLibrary\FileLib\97E6
8. The file with extension 'FILE' is the actual content for the setup.exe file in our source content. To confirm this, we can see the size of this file is 1075KB, the same as our setup.exe file from our source location.
9. As this Office package is an application made up of many components (i.e. Word, Excel, etc.), they all share common files. To see which packages require the setup.exe file, you could open the "Configurations Settings" file to see the package IDs of all packages that require it. For applications that are compiled into one executable, e.g. Adobe Reader, you could follow this process, simply copy the FILE file to a desktop, set the file extension accordingly (i.e. .EXE), and run it; it would execute and install, as it contains all of the source files…
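The walkthrough above can also be reversed: hash a source file yourself and check it is in the FileLib. A sketch only — it assumes the Content Library hashes files with SHA-256, requires PowerShell 4.0 or later for Get-FileHash, and the drive letters/paths are examples, not your real ones:

```powershell
# Hash a source file and look for the matching FileLib entry.
# Paths are examples - adjust to your own DP and source share.
# Assumption: the Content Library uses SHA-256 for file hashes.
$hash   = (Get-FileHash -Algorithm SHA256 'D:\Source\Office2010\setup.exe').Hash
$prefix = $hash.Substring(0, 4)                      # e.g. "97E6" for our setup.exe
Test-Path "E:\SCCMContentLib\FileLib\$prefix\$hash"  # True if the file is in the library
```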
THAT’S IT… GOOD LUCK…
My last post on ConfigMgr 2012 Collections (I promise!). I wanted to go over Include and Exclude Collection Rules.
Include and Exclude collection rules are a new feature in ConfigMgr 2012 and I think you’ll find them most useful. They basically allow us to add or remove resources from a collection without having to write a complex membership rule.
Let’s say we have five collections: Finance Accountants, Finance Banking, Finance Bookkeeping, Finance Insurance and Finance Interns. Each of these collections will have their own Deployments targeted at the users in the collections. If we wanted to target an application at all of these collections, we would need to have 5 deployments.
Or we could create a collection with an Include Collection setting
Now we can just target the application at the Finance – All collection and we'll get the members of all five collections.
Exclude collections are even cooler. Let’s say we have a collection for all computers who need Office 2010. Normally we would just deploy our application to the collection and hope for the best. We can use an Exclude Collections rule to be a little smarter in our deployments and remove all clients who are in an ‘unhealthy’ state.
First, we'll create our Unhealthy Clients collection. Our membership rule will be looking for any client whose client status is NULL or whose client activity is NULL. This should get any computer that hasn't got the client installed, or any computer with the client installed that isn't active. (This is a *really* basic query for unhealthy clients. You can write a more complex one to find those clients who are truly unhealthy.)
I’ve got 4 systems in my lab hierarchy – only one of which is healthy (oh, and the two Unknown Computer objects – but they don’t count)
So when I update the membership of my Office 2010 collection, I will only get the healthy client.
Using this exclusion method has a few benefits. Firstly, you get a ready-made collection of systems on which you need to repair the ConfigMgr client. Secondly, your software deployment statistics should be a lot cleaner, as you're not getting failures on the unhealthy clients. Finally, as the collection is dynamic, as soon as you remediate the unhealthy clients, they will receive the Office 2010 deployment!
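The same Include/Exclude rules can be added from PowerShell with the ConfigMgr 2012 SP1 module. A hedged sketch — the site code 'PRI' and the collection names are examples matching this post, and the module path is the default console install location:

```powershell
# Load the ConfigMgr module and connect to the site ('PRI' is an example site code).
Import-Module 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1'
Set-Location 'PRI:'

# Exclude the unhealthy machines from the Office 2010 target collection.
Add-CMDeviceCollectionExcludeMembershipRule -CollectionName 'Office 2010' `
    -ExcludeCollectionName 'Unhealthy Clients'
```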
“The World Wide Web Publishing Service (WWW Service) did not configure logging for site X. The data field contains the error number.”
This warning will show up in your System event logs and your custom IIS logging to a SQL database won't work. Obviously the message is fairly cryptic; however, in a facepalm sort of moment I worked out what wasn't working.
Two words. Custom Logging.
Although I had enabled the ODBC logging feature for IIS, I hadn’t enabled custom Logging. Once you enable the custom logging feature your SQL DB will start to fill with log entries. I’ve included the full warning message for those of you unsure.
I promised in my last post to provide you all with my scripts for modifying all your package and application source paths… well that was over two months ago now!
http://blogs.technet.com/b/configmgrdogs/archive/2013/02/18/moving-your-package-source-after-migration.aspx
Note: These scripts are provided “as-is” and no guarantees are provided. Please TEST these in a non-production environment beforehand.
First is my script that will modify the source paths for all of the Deployment Types within all Applications that are Script or MSI installers (you can modify this to do your App-V Deployment Types too).
Write-Host "#######################################################################" -f Green
Write-Host "## Matts ConfigMgr 2012 SP1 Application Source Modifier ##" -f Green
Write-Host "## blogs.technet.com/b/ConfigMgrDogs ##" -f Green
Write-Host "## ##" -f Green
Write-Host "## ##" -f Green
Write-Host "## Please ensure your package source content has been moved to the ##" -f Green
Write-Host "## new location *prior* to running this script ##" -f Green
Write-Host "## ##" -f Green
Write-Host "#######################################################################" -f Green
Start-Sleep -s 2

Write-Host ""
Write-Host ""
## Import ConfigMgr PS Module
Import-Module 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1'

## Connect to ConfigMgr Site
$SiteCode = Read-Host "Enter your ConfigMgr Site code (XXX)"
$SiteCode = $SiteCode + ":"
Set-Location $SiteCode
Write-Host ""

## Set old Source share
Write-Host "NOTE: This is the location your 2007 packages are stored. It must be correct"
$OriginalSource = Read-Host "Enter your source ConfigMgr share (\\2007Server\Source$)"

## Set new Source share
Write-Host ""
Write-Host "NOTE: This is the location your Applications are stored. It must be correct"
$DestinationSource = Read-Host "Enter your destination ConfigMgr Source share (\\2012SERVER\Source$)"
Write-Host ""
Write-Host "Working.."
Write-Host ""

## Get your Applications and their Deployment Types
$ApplicationName = Get-CMApplication
$ApplicationName = $ApplicationName.LocalizedDisplayName

ForEach ($x in $ApplicationName)
{
    $DeploymentTypeName = Get-CMDeploymentType -ApplicationName $x

    ForEach ($DT in $DeploymentTypeName)
    {
        ## Change the directory path to the new location
        $DTSDMPackageXML = $DT.SDMPackageXML
        $DTSDMPackageXML = [XML]$DTSDMPackageXML
        ## Get Path for Apps with multiple DTs
        $DTCleanPath = $DTSDMPackageXML.AppMgmtDigest.DeploymentType.Installer.Contents.Content.Location[0]
        ## Get Path for Apps with a single DT
        IF ($DTCleanPath -eq "\")
        {
            $DTCleanPath = $DTSDMPackageXML.AppMgmtDigest.DeploymentType.Installer.Contents.Content.Location
        }
        $DirectoryPath = $DTCleanPath -replace [regex]::Escape($OriginalSource), "$DestinationSource"

        ## Modify DT path
        Set-CMDeploymentType -ApplicationName "$x" -DeploymentTypeName $DT.LocalizedDisplayName -MsiOrScriptInstaller -ContentLocation "$DirectoryPath"
        ## Write Output
        Write-Host "Application " -f White -NoNewline
        Write-Host $x -f Red -NoNewline
        Write-Host " with Deployment Type " -f White -NoNewline
        Write-Host $DT.LocalizedDisplayName -f Yellow -NoNewline
        Write-Host " has been modified to " -f White -NoNewline
        Write-Host $DirectoryPath -f DarkYellow
    }
}
My second script is much simpler, as we are changing only the Package source location, with no need to cycle through each Deployment Type
Write-Host "#######################################################################" -f Green
Write-Host "## Matts ConfigMgr 2012 SP1 Package Source Modifier ##" -f Green
Write-Host "## blogs.technet.com/b/ConfigMgrDogs ##" -f Green
Write-Host "## ##" -f Green
Write-Host "## ##" -f Green
Write-Host "## Please ensure your package source content has been moved to the ##" -f Green
Write-Host "## new location *prior* to running this script ##" -f Green
Write-Host "## ##" -f Green
Write-Host "#######################################################################" -f Green
Start-Sleep -s 2

$SiteCode = Read-Host "Enter your ConfigMgr Site code (XXX)"
$SiteCode = $SiteCode + ":"
Set-Location $SiteCode

$PackageArray = Get-CMPackage
$OldPath = "\\2007SERVER\source$"
$NewPath = "\\2012SERVER\cmsource$"
ForEach ($Package in $PackageArray)
{
    $ChangePath = $Package.PkgSourcePath.Replace($OldPath, $NewPath)
    Set-CMPackage -Name $Package.Name -Path $ChangePath
    Write-Host $Package.Name " has been changed to " $ChangePath
}
Your ConfigMgrDogs team will be speaking at TechEd Australia 2012!
Come and check out our sessions & buy us a beer!
Matt Shadbolt and George Smpyrakis – Configuration Manager 2012 and Security Compliance Manager
Ian Bartlett and Mohnish Chaturvedi – Introduction to App-V 5.0
Session time and room information will be posted shortly.
I have been working with a number of customers recently that have had issues running their monthly Software Update compliance reports, due to a high number of "DETECTION STATE UNKNOWN" results reporting back long after the update deployment has successfully run.
As usual the first thing we want to identify is whether it is on the client side or server side.
State Message IDs are used to define specific state messages for each topic type. For our issue, a state message for a software update has TopicType=500 and a State Message ID of 0, 1, 2 or 3, which depicts the actual state of the given update on a client machine, as below:
Topic Type   State Message ID   State Message Description
500          0                  Detection state unknown
500          1                  Update is not required
500          2                  Update is required
500          3                  Update is installed
To determine what information your clients are sending back to your Management Point we can use WMI queries to see what is happening on the client.
1. Open wbemtest with elevated permissions
2. Connect to the WMI Namespace: root\CCM\StateMsg
3. Select Query and run the query SELECT * FROM CCM_StateMsg
Find any software update deployment, which can be identified by looking for "TopicType=500". What we want to check are the values below in yellow, as these determine whether the client has indeed sent a message back to the MP and, if so, what it sent back. If we see it sent back a "0" and can confirm that the KBs are installed, then we know it is something on the client side; we would expect to see 1, 2 or 3, per the states listed above.
Example below:
instance of CCM_StateMsg
{
    Criticality = 0;
    MessageSent = TRUE;                          // the message was sent
    MessageTime = "20101027211908.749000+000";   // UTC time
    ParamCount = 1;
    StateDetails = "";
    StateDetailsType = 0;
    StateID = 2;                                 // Update is required
    TopicID = "9d4681d5-46fa-4250-bedc-480ac7bce3aa";
    TopicIDType = 3;
    TopicType = 500;                             // Update Detection
    UserFlags = 0;
    UserParameters = {"102"};
};
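The wbemtest steps above can also be done in one line of PowerShell on the client, which is handy if you need to check a lot of machines (run from an elevated prompt):

```powershell
# Pull all software update state messages (TopicType 500) from the client's WMI store.
Get-WmiObject -Namespace 'root\CCM\StateMsg' -Query 'SELECT * FROM CCM_StateMsg WHERE TopicType = 500' |
    Select-Object TopicID, StateID, MessageSent, MessageTime
```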
Hope this helps..
So I’m getting my preparation done for TechEd 2013 on the Gold Coast and needed to fill my ConfigMgr hierarchy with some dummy computer objects. My session being PowerShell for ConfigMgr 2012 SP1, of course I went straight to PowerShell to do the work for me.
I’m not looking for anything too special; 1000 laptops, 1000 desktops and 500 servers for my demo domain contoso.com.
ConfigMgr can be a little picky when it comes to AD System Discovery, such as requiring a matching DNS record and a valid Operating System value. All of the options below are required; otherwise you get errors in the ADSysDis.log.
Here's my script (note: you must have the Active Directory PowerShell module installed on the local machine, and the DnsServer module for the DNS records)
Import-Module ActiveDirectory
$Count = 1
$LaptopCount = 1001
$DesktopCount = 1001
$ServerCount = 501

# Create Laptops
While ($Count -lt $LaptopCount)
{
    New-ADComputer -Name "CON-LAP-$Count" -DNSHostName "CON-LAP-$Count.contoso.com" -OperatingSystem "Windows 7 Enterprise" -OperatingSystemVersion "6.1 (7600)"
    Add-DnsServerResourceRecord -ZoneName contoso.com -Name "CON-LAP-$Count" -IPv4Address "192.168.169.123" -A
    $Count = $Count + 1
}

$Count = 1
# Create Desktops
While ($Count -lt $DesktopCount)
{
    New-ADComputer -Name "CON-DSK-$Count" -DNSHostName "CON-DSK-$Count.contoso.com" -OperatingSystem "Windows 7 Enterprise" -OperatingSystemVersion "6.1 (7600)"
    Add-DnsServerResourceRecord -ZoneName contoso.com -Name "CON-DSK-$Count" -IPv4Address "192.168.169.123" -A
    $Count = $Count + 1
}

$Count = 1
# Create Servers
While ($Count -lt $ServerCount)
{
    New-ADComputer -Name "CON-SVR-$Count" -DNSHostName "CON-SVR-$Count.contoso.com" -OperatingSystem "Windows Server 2012 Enterprise" -OperatingSystemVersion "6.2 (9200)"
    Add-DnsServerResourceRecord -ZoneName contoso.com -Name "CON-SVR-$Count" -IPv4Address "192.168.169.123" -A
    $Count = $Count + 1
}
Active Directory Computer accounts
DNS A Records
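And when the demos are done, the dummy objects are easy to remove again. A sketch using the same Active Directory module (it simply matches the CON-* naming convention used above, so make sure no real machines share that prefix):

```powershell
# Remove the dummy computer accounts created for the demo.
# Matches only the CON-LAP/CON-DSK/CON-SVR naming convention used above.
Import-Module ActiveDirectory
Get-ADComputer -Filter 'Name -like "CON-LAP-*" -or Name -like "CON-DSK-*" -or Name -like "CON-SVR-*"' |
    Remove-ADComputer -Confirm:$false
```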