Okay Readers – we have the next post ready to go!
Here is the fourth of six posts in the MVP Spotlight Series for the Automation Track…
We consider SMA to be a platform for executing our PowerShell workflows. To this end we author our workflows to be executable outside of SMA environments, which lets us debug and author using normal local development tools. We check all of our work into TFS (and actually track it against User Stories / Tasks) and use the TFS continuous integration solution described previously to sync our work to our SMA environments. We run our SMA environment in a low-privilege model, which means the service account that our Runbook service executes as has no permissions beyond domain user outside of the SMA environment. We use InlineScript blocks to elevate portions of code to run as a different user as needed.
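As a minimal sketch of that elevation pattern (the credential asset name and the AD lookup are illustrative placeholders, not the solution's actual code):

```powershell
workflow Invoke-ElevatedStep
{
    param([string]$UserName)

    # Assumption: 'Domain Automation Cred' is a credential asset defined in SMA
    $DomainCred = Get-AutomationPSCredential -Name 'Domain Automation Cred'

    InlineScript
    {
        # Everything in here runs as $DomainCred instead of the
        # low-privilege Runbook service account
        Import-Module ActiveDirectory
        Get-ADUser -Identity $Using:UserName
    } -PSComputerName 'localhost' -PSCredential $DomainCred
}
```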
Treat workflow authoring much like you would web development. We recommend three versions of each script (Dev / QA / Production), living in three different SMA environments. To facilitate moving between these environments, structure your workflows to leverage global variables for any information that changes when they are promoted from Dev to QA to Production, so that the workflow itself never needs to be changed (which would, in theory, invalidate the testing you have done).
If all this code is a bit too much and you just want a walkthrough of how to import this solution into your SMA environment, stay tuned for the next blog post, where we will walk through taking the export files, importing them into an SMA environment, and configuring the global settings to work in your environment! The next post will also expand on what the variable values are set to in our environment for the scripts below.
The solution contains two PowerShell scripts and one approval action in SharePoint.
Oh, and here (color portion) is a look at where this post fits into the overall example Solution Architecture:
The basic idea of a monitor is rather simple: every so often, go out and look at something and see if there is work to be done. In Orchestrator we had activities that were labeled as monitors and provided this sort of functionality for us. As a part of our migration to SMA we have carried this concept forward and created a basic reusable pattern for 'monitors'. The pattern consists of two sections: a polling section that looks for new work, and a re-launch section that restarts the monitor after a defined timeout.
Since this is a very generic pattern we create a 'shared' Runbook in SMA that can be referenced by all of the other Runbooks that need this functionality. We have called this shared Runbook Monitor-SharePointList. In this way, if we have multiple things that need to monitor list items, they can all share the basic pattern.
This is the generic pattern for monitoring a SharePoint list. It contains two main sections: a section related to 'looking' for new SharePoint list items in a given status, and a section that re-launches the monitor Runbook after a defined period of time.
This action occurs inside of a while block. The while block ends after a timeout period (the length of time our monitors are active before they refresh themselves). Inside this loop we query whatever data source has the information we need to determine if there is work to be done, and gather the requisite information from that data source to initiate the worker Runbook. In the sketch below we are querying a SharePoint list for items with a status field set to a particular value ('New' in this case). If a request is found we start a new instance of the worker Runbook using Start-SmaRunbook. This causes the worker Runbook to execute independently of the monitor Runbook, which is a very important concept: having the worker logic separate from the monitor logic allows us to constantly look for new work to do without waiting for any individual request to fully process. This means we are not creating any relationships between requests; each request is handled independently. After polling is complete we calculate the remaining cycle time and sleep for that amount of time, so if we set DelayCycle to 30 seconds each poll happens in a 30-second window.
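Here is a minimal sketch of that polling loop. Get-SharePointListItem is a hypothetical helper standing in for the solution's SharePoint query functions, and the parameter names are illustrative:

```powershell
workflow Monitor-SharePointList
{
    param(
        [string]$SharePointSiteURL,
        [string]$ListName,
        [string]$FilterField,
        [string]$FilterValue,
        [string]$WorkerRunbookName,
        [string]$WebServiceEndpoint,
        [int]$DelayCycle = 30,        # seconds per polling window
        [int]$MonitorLifespan = 240   # minutes before the monitor refreshes itself
    )

    $MonitorActiveTime = $MonitorLifespan * 60
    $ElapsedTime = 0
    while($ElapsedTime -lt $MonitorActiveTime)
    {
        $LoopStart = Get-Date

        # Hypothetical helper: return list items whose $FilterField equals $FilterValue
        $NewItems = Get-SharePointListItem -SiteURL $SharePointSiteURL -ListName $ListName `
                                           -Field $FilterField -Value $FilterValue

        foreach($Item in $NewItems)
        {
            # Launch the worker as an independent SMA job so the monitor keeps polling
            Start-SmaRunbook -WebServiceEndpoint $WebServiceEndpoint `
                             -Name $WorkerRunbookName `
                             -Parameters @{ 'ListItemID' = $Item.ID }
        }

        # Workflows restrict method calls, so do the time math in an InlineScript:
        # sleep only for whatever remains of this DelayCycle window
        $SleepSeconds = InlineScript {
            $Remaining = $Using:DelayCycle - ((Get-Date) - $Using:LoopStart).TotalSeconds
            if($Remaining -gt 0) { [int]$Remaining } else { 0 }
        }
        if($SleepSeconds -gt 0) { Start-Sleep -Seconds $SleepSeconds }
        $ElapsedTime = $ElapsedTime + $DelayCycle
    }

    # Re-launch section goes here (covered next)
}
```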
Initially we had our monitor Runbooks running indefinitely. This caused a number of issues with the built-in SMA grooming jobs and was not ideal for shutting down our environment (if you have Runbooks that run forever, the SMA Runbook Worker service will time out on its stop operation and ungracefully kill the monitor). Furthermore, it complicated our TFS continuous integration strategy: since monitor Runbooks never restarted, they would never pick up newly deployed code, and we would have to go out to the environment and stop and then start them by hand. This section of code runs after the polling section and is rather simple, as the sketch below shows.
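The re-launch can be as simple as starting a fresh instance of the monitor and then letting the current job complete so SMA can groom it normally (continuing the sketch above; in practice the Runbook name to restart might itself be passed in as a parameter):

```powershell
# Start a new copy of this monitor, then fall out of the workflow so the
# current job finishes and can be groomed by SMA
Start-SmaRunbook -WebServiceEndpoint $WebServiceEndpoint `
                 -Name 'Monitor-SharePointList' `
                 -Parameters @{
                     'SharePointSiteURL'  = $SharePointSiteURL
                     'ListName'           = $ListName
                     'FilterField'        = $FilterField
                     'FilterValue'        = $FilterValue
                     'WorkerRunbookName'  = $WorkerRunbookName
                     'WebServiceEndpoint' = $WebServiceEndpoint
                     'DelayCycle'         = $DelayCycle
                     'MonitorLifespan'    = $MonitorLifespan
                 }
```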
Now that we have the common pattern in our environment we can call a unique instance of it to monitor our 'DFS Share' list, which contains requests for a new DFS share. To facilitate this we store the unique information about the SharePoint list that we will be monitoring in SMA's asset store, so we can easily change it if we want to move the scripts between SMA environments (we only need to change the asset values, not the workflows). Once we have pulled the variable values from the store we simply invoke the shared Monitor-SharePointList Runbook.
In this section we poll the SMA environment for all assets we will use during execution. These assets commonly include global variables (items which are pulled out into configuration and changed when we migrate from Dev to QA to Prod – very akin to the sorts of variables pulled out into a web.config in web development) and credentials from the credential store. All 'magic numbers' should be evaluated as candidates for global configuration. After we pull the information we output it to the debug stream for debugging purposes.
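As a sketch, the asset pull at the top of the monitor looks something like this (the asset names here are hypothetical; substitute whatever names you defined in your SMA environment):

```powershell
# Pull global variables and credentials from the SMA asset store
$SharePointSiteURL  = Get-AutomationVariable -Name 'SharePoint Site URL'
$DelayCycle         = Get-AutomationVariable -Name 'DFS Monitor Delay Cycle'
$MonitorLifespan    = Get-AutomationVariable -Name 'DFS Monitor Lifespan'
$WebServiceEndpoint = Get-AutomationVariable -Name 'SMA Web Service Endpoint'
$SPCred             = Get-AutomationPSCredential -Name 'SharePoint Access Cred'

# Echo everything to the debug stream so failed jobs are easy to diagnose
Write-Debug "SharePointSiteURL:  [$SharePointSiteURL]"
Write-Debug "DelayCycle:         [$DelayCycle]"
Write-Debug "MonitorLifespan:    [$MonitorLifespan]"
Write-Debug "WebServiceEndpoint: [$WebServiceEndpoint]"
```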
Now that we have all of the unique values, starting the shared monitor code is very simple.
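For example, continuing the hypothetical names from the sketches above:

```powershell
# Hand the unique 'DFS Share' list values to the shared monitor
Monitor-SharePointList -SharePointSiteURL $SharePointSiteURL `
                       -ListName 'DFS Share' `
                       -FilterField 'Status' `
                       -FilterValue 'New' `
                       -WorkerRunbookName 'New-DFSShare' `
                       -WebServiceEndpoint $WebServiceEndpoint `
                       -DelayCycle $DelayCycle `
                       -MonitorLifespan $MonitorLifespan
```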
This is the script that will actually set up our DFS share. This script is split into multiple regions, each with its own purpose. As usual the first region is settings related: we define the parameters that calling workflows must supply (providing default values for optional parameters), access SMA's asset store to pull out global variables, and access SharePoint to retrieve the rest of the information needed to carry out the task. Once all of this information is gathered we create an Active Directory (AD) group which will be used to secure the share, then create the share on a file server, and finally send out communications.
We access a number of settings for this script.
We first access two sets of credentials: one for accessing SharePoint and a different one for carrying out the automation tasks (creating the AD group and share). In this way we can use a low-privilege account for accessing SharePoint and a higher-privilege account for carrying out automation tasks – remember, one huge advantage of PowerShell workflow is the ease of switching user contexts, making running in a low-privilege mode a real possibility.
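In code this is just two pulls from the credential store (asset names hypothetical):

```powershell
# Low-privilege account for talking to SharePoint
$SPCred     = Get-AutomationPSCredential -Name 'SharePoint Access Cred'
# Higher-privilege account for the AD and file server work
$DomainCred = Get-AutomationPSCredential -Name 'Domain Automation Cred'
```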
This is a list of Windows file servers for us to build the share on. We have logic to choose the server with the most free space available, sketched below. RestrictedDrives contains a list of drives that we should not use for creating the share (the C drive, temporary drives, etc.).
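A sketch of that selection logic, assuming WMI access to the candidate servers; the variable names are illustrative:

```powershell
$TargetServer = InlineScript
{
    $FileServers      = $Using:FileServers
    $RestrictedDrives = $Using:RestrictedDrives
    $Best = $null
    foreach($Server in $FileServers)
    {
        # DriveType 3 = local fixed disks; skip any restricted drive letters
        $Drives = Get-WmiObject -Class Win32_LogicalDisk -ComputerName $Server `
                                -Filter 'DriveType = 3' |
                  Where-Object { $RestrictedDrives -notcontains $_.DeviceID }

        foreach($Drive in $Drives)
        {
            if(($Best -eq $null) -or ($Drive.FreeSpace -gt $Best.FreeSpace))
            {
                $Best = New-Object PSObject -Property @{
                    'Server'    = $Server
                    'Drive'     = $Drive.DeviceID
                    'FreeSpace' = $Drive.FreeSpace
                }
            }
        }
    }
    $Best
} -PSComputerName 'localhost' -PSCredential $DomainCred

Write-Debug "Selected server/drive: [$($TargetServer.Server)][$($TargetServer.Drive)]"
```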
This is the domain that the AD group will be built in. For Dev and QA we build the automation in a non-production domain; storing the domain as a variable allows us to change it without modifying the code as we promote the workflow.
The SharePoint list name and the field name that contains the status property. These are generic constructs used by our functions that access SharePoint. The list corresponds to the list name for this request, and the property corresponds to the field name that contains the request's current status. This field is used to update the request as we take action on it.
We also pull additional information from the SharePoint list.
This section is rather simple. For illustrative purposes we run this whole section as the domain credential accessed from the store above; we could also have passed the credential object to each AD command individually. PSPersist is set to true, which causes a checkpoint to occur after this action has completed. This means that if the workflow encounters an error later, it will not re-attempt to create the group on resume if this step was successful. For more information check out TechNet.
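A sketch of that region (the group name, OU path, and domain values are illustrative):

```powershell
InlineScript
{
    Import-Module ActiveDirectory
    # Create the group that will be granted access to the new share
    New-ADGroup -Name $Using:GroupName `
                -GroupScope DomainLocal `
                -Path $Using:GroupOUPath `
                -Server $Using:Domain
} -PSComputerName 'localhost' -PSCredential $DomainCred -PSPersist $true
# -PSPersist $true checkpoints the workflow once this activity succeeds
```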
We take the necessary actions to create a DFS share inside an InlineScript block that runs as the DomainCred credential accessed from the SMA credential store, as sketched below. After this action completes we checkpoint to ensure we do not attempt to create the share again if this workflow suspends and is resumed.
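A sketch of this step (the variable names are illustrative, and it assumes the DFS management tools are installed on the target file server):

```powershell
InlineScript
{
    # Runs remotely on the selected file server as $DomainCred.
    # Create the folder and share it, granting the new AD group access
    New-Item -Path $Using:SharePath -ItemType Directory -Force | Out-Null
    New-SmbShare -Name $Using:ShareName -Path $Using:SharePath `
                 -FullAccess $Using:GroupName

    # Publish the new share into the DFS namespace
    $DfsPath    = '\\' + $Using:DfsNamespace + '\' + $Using:ShareName
    $TargetPath = '\\' + $Using:TargetServer + '\' + $Using:ShareName
    New-DfsnFolder -Path $DfsPath -TargetPath $TargetPath
} -PSComputerName $TargetServer -PSCredential $DomainCred

# Checkpoint so a suspended-and-resumed job does not recreate the share
Checkpoint-Workflow
```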
After everything is complete we update SharePoint as appropriate. This is also where you could add in additional notifications on errors. In our production environment this section contains information on emailing support teams in the case of error to initiate manual resolution steps.
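As a sketch of that final status update, in the same hypothetical SharePoint helper style as the monitor (Update-SharePointListItem is illustrative, not a real cmdlet):

```powershell
# Mark the request complete so the monitor will not pick it up again
Update-SharePointListItem -SiteURL $SharePointSiteURL -ListName $ListName `
                          -ItemID $ListItemID `
                          -FieldValues @{ $StatusField = 'Completed' } `
                          -Credential $SPCred
```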
And now a few notes from me (Charles)…
Be sure to check out Ryan’s session from TechEd North America 2014!
In this session, see a real-world implementation of a fully automated IT service catalog developed by a Fortune 500 company for supporting self-service requests. This service catalog is based in Microsoft SharePoint and utilizes the newly released Service Management Automation (SMA) engine. During the session we look at how the solution is architected, cover integration between SMA and SharePoint, build a new service offering from the ground up, and share the best practices we have developed for doing work with SMA along the way. So what’s the best part? You get access to the solution we create, so you leave with access to a working solution to help get you started!
Speakers: Ryan Andorfer, Mike Roberts
Link to the TechEd NA 2014 Channel 9 recording: DCIM-B363 Automated Service Requests with Microsoft System Center 2012 R2
And finally - As always, for more information, tips/tricks, and example solutions for Automation within System Center, Windows Azure Pack, Windows Azure, etc., be sure to check out the other blog posts from Building Clouds in the Automation Track (and http://aka.ms/IntroToSMA), the great work over at the System Center Orchestrator Engineering Blog, and of course, Ryan’s Blog over at http://opalis.wordpress.com!