Jim Britt's [MSFT] Blog

This blog will contain lessons learned related to automation, specific to Opalis / System Center Orchestrator.


  • My New Home for Automation Blogging

    Hey Readers!  I’ve been fairly quiet out here as of late and really “Jonesing” to get some blog posts out on automation.  So the good news is, I’ve transitioned to a new team (Technical Enablement and Delivery, or TED) and hence have a new location for my blogging.  I’m really excited about the new location (and my new team is awesome).

    Some information

    My introduction post on what is coming out on the Building Clouds TechNet Page!

    The Automation Track that I am providing input to along with my good friend Charles Joy.

    Something fun my team did with 4 Surface Pros (a must watch)!

    By the way, the entire team is going to be at MMS this year, so come by our booth if you are at MMS 2013.  Look for posts on the Building Clouds Blog for sessions presented by the team in April.

     

    I’ll continue to post other items out here, but probably not as frequently as on the Building Clouds site.  Actually, I’m looking forward to potentially posting a Windows 8 Theme Pack of my pictures :)  www.jimbrittphotography.com (shameless plug) in the near future!

    Until next time, Happy Automating!

  • Evaluating the "Reach" of Our Opalis Infrastructure at Microsoft

    Hello readers. This is my first technical post of hopefully many on the topic of all things Opalis and Orchestrator!  This product keeps me up at night (in a good way :)) so I should have plenty of interesting content as time goes on.

    Challenge and Initial Questions

    So considering the vast collection of systems we support across the world here at Microsoft, it was clear that some analysis needed to be done on what our initial Opalis 6.3.3 environment needed to look like to support our diverse environment. With various sites and bandwidths, some with high-latency links, we needed a way to determine how far our Opalis Action Server could reach into our infrastructure and still perform certain actions.  The goal was to leverage a single Management Server co-hosting the SQL DB for Opalis, and a single / separate Action Server to manage our policy execution.

    Answering the Challenge

    So what better way to answer the question of reach than to use a workflow within Opalis to evaluate scenario-based tests?

    Main Orchestration Workflow

    [Screenshot: main orchestration workflow]

    The main orchestration workflow (shown above) breaks out into the following components:

    • Scheduler: We set up a schedule for our workflow to fire every 4 hours.  Having a schedule applied to our workflow provided us the ability to run this automatically, on a scheduled basis, collecting historical data that could be correlated later on for more interesting trend analysis.
    • Table Creation: This task creates a status table (if it doesn’t exist) to hold the reach data we are gathering as part of this analysis workflow (a rough sketch of what this step could run follows this list).
    • Get Computers: This activity is reading in an array of computer systems for processing by pulling in a list of systems from a text file sitting on a share.
    • Get Ping and Service Status: This activity is triggering the sub workflow for gathering our analysis data as well as logging that information in the status table we created in activity 2 above.
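
    To make the Table Creation step a little more concrete, here is a minimal sketch of what that activity could run from a Run.Net object.  The table name, column names, and connection string are illustrative assumptions, not the exact schema used in the downloadable workflow; the same connection pattern also covers the INSERT the sub workflow performs later.

    # Hypothetical sketch of the "Table Creation" activity (table name, columns, and server are assumptions)
    $connectionString = "Server=SQLSERVER01;Database=OpalisReach;Integrated Security=SSPI"

    $createTableSql = "
    IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'ReachStatus')
    CREATE TABLE ReachStatus (
        ComputerName  nvarchar(255),
        PingResult    nvarchar(20),
        LatencyMs     int NULL,
        ServiceState  nvarchar(50),
        CollectedOn   datetime DEFAULT GETDATE()
    )"

    # System.Data is loaded by default in PowerShell, so SqlClient is available without extra setup
    $connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
    $connection.Open()
    $command = $connection.CreateCommand()
    $command.CommandText = $createTableSql
    [void]$command.ExecuteNonQuery()
    $connection.Close()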

    Ping and Service State Sub Workflow

    [Screenshot: ping and service state sub workflow]

    Now to break out the sub workflow components (shown above).  This is where the heavy lifting happens!

    • Initiate Worker: This activity is a custom start object that holds a computer name from the list of computers gathered in the main workflow above.
    • Get Ping Data: This activity is a PowerShell script that initiates a ping and stores the result, as well as the latency measured during that ping, for the host we are evaluating (a rough sketch of this logic follows the list).
      • If the ping fails, it goes directly to Log Data into Status Table and then moves to the next system.
      • If the ping is successful, it moves to Get Service Status.
    • Get Service Status: This activity uses the computer name pulled from the initiate activity above and checks the service state of a predefined service we are interested in – in our case, SMS Agent Host :).  The service state is then logged by the Log Data into Status Table activity.
    • Log Data into Status Table: This activity essentially takes the computer name, ping status (success/failure), latency data, and service state, and inserts them as an entry in the status table for this machine.
    • Update Variance info: This activity takes the data from the previous run of this workflow for the computer being analyzed and calculates the variance (+ / –) in latency since that run.  Essentially this tells you whether the ping latency was higher or lower than the last run, potentially giving you an idea of trends for your network connectivity.
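
    For readers who want a feel for what the Get Ping Data and Get Service Status activities might look like inside a Run.Net object, here is a minimal PowerShell sketch.  The service shown (CcmExec, the SMS Agent Host service) matches our scenario, but the variable names and error handling are illustrative assumptions rather than the exact script shipped in the download.

    # Hypothetical sketch of the ping / service check logic (not the exact script from the download)
    $computer    = "{Published Data}"   # computer name passed in from the Initiate Worker activity
    $serviceName = "CcmExec"            # SMS Agent Host

    $ping = Get-WmiObject -Class Win32_PingStatus -Filter "Address='$computer'"
    if ($ping.StatusCode -eq 0)
    {
        $pingResult   = "Success"
        $latencyMs    = $ping.ResponseTime
        $service      = Get-Service -ComputerName $computer -Name $serviceName -ErrorAction SilentlyContinue
        if ($service) { $serviceState = $service.Status } else { $serviceState = "NotFound" }
    }
    else
    {
        $pingResult   = "Failure"
        $latencyMs    = $null
        $serviceState = "Unknown"
    }
    # $pingResult, $latencyMs, and $serviceState would then be published for the logging activity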

    Results

    So what do we get with all of this?  We get a table.  However, that table contains historical data that can be analyzed over time for trends and success / failure of activities, and potentially leveraged for decisions regarding how far your Opalis Action Servers can reach within your organization.

    Example Data

    [Screenshot: example status table data]

    For us, it showed we had quite a bit of reach from our Opalis Action Server, even over high latency.  The fine print on this is that “your mileage may vary” and likely will, depending upon the health of your network and what you are attempting to do over the links at the far end of your network from your core Opalis Action Server.  The scenario I walked through above can certainly be modified: grab the attachment provided and update it according to your own needs.  Take out the service check and add a file copy, event log combing, etc.  The rest is up to you.

    Note: A huge thank you goes out to Benjamin Reynolds (our local SQL guru within the MPSD Platform and Service Delivery teams) for helping me with the variance data query provided in this workflow.  A final obligatory note – use at your own risk, under your own support, and only after testing in your environment – and have fun building automation!

    Download Workflow Here ReachFiles.zip

    A final note: The workflow in the above download has logging turned on for the purpose of showing logging information during execution.  If you decide to implement this in production, it is best practice to remove these options to avoid the excessive logging that is possible given the run frequency and the number of servers you may run this against.  If you leave these settings as is, the sub workflow (Ping and Service Check) will eventually lock up when viewed in the OIS client due to the logs being populated at the bottom of the designer.

    [Screenshot: logging options in the policy]

  • Sending Email with PowerShell and the Run.Net Object

    Hello readers. Now that TechEd NA 2011 is over (what a great event), I thought I’d take this opportunity to share a very simple solution I have leveraged in my workflow development within Opalis for communicating workflow success / failure status information and details.  Sometimes it is as simple as “Your xyz workflow has successfully completed” or “A failure has occurred with xyz application @5:45 PM on Server server01 at triggered workflow X and activity y”.  Of course these details can be included in a ticket within System Center Service Manager as an example, but sometimes it is really valuable to be notified when things are done or have gone wrong.

    Why use PowerShell?

    Why use PowerShell you ask?  After all, there is already an activity natively available in Opalis (two, in fact) that will allow you to send emails without any custom coding.  Well, for me, I like to have the ability to leverage PowerShell.  It gives me the flexibility to modify things on the fly.  To give some context, I used to code webpages in Notepad :).  That hit a nerve with some of you – you know who you are haha.

    Let's get to the solution

    $PSEmailServer = "{Published Data}"
    Send-MailMessage -to "{Published Data}" `
    -from "ActionServerActionAccount@contoso.com" `
    -subject "Testing email from start object (body as html)" `
    -body "<B>This is a test with Body as HTML http://www.contoso.com</B>" `
    -BodyAsHtml

    The code example is above (and is also shown directly below in the Run.Net object leveraging PowerShell).

    [Screenshot: Run.Net activity containing the PowerShell code]

    Taking a look at the highlighted sections in the screen shot above, you need to update the following:

    • Update your SMTP server (shown as published data above)
    • Update the “To:” to the alias you are sending the email to (shown as published data above)
    • The “From” in this case will be the Action Server Action Account
    • Then format your email with published data and HTML code as you see fit.  You can also just type in text with published data where appropriate without worrying about HTML code at all.

    Note: The Action Server Action Account is being leveraged in this example (and would be the authenticated user by default for the Run.Net object).  The assumption is that this account will have an email account on your mail server.

    Hint: If any of you are wondering how you get the larger window for editing your PowerShell (as shown above), right click in the code window and select “expand”.  Candidly, it took me a bit to figure that one out so I wanted to share that tidbit for those of you new to Opalis / Orchestrator.

    That’s it.  Just a quick simple post on using PowerShell to send notifications.  Happy Automating!

  • Using System Center Orchestrator to Gather Client Logs at Microsoft

    Hello again readers.  I wanted to share with you a solution we are leveraging within Microsoft for log collection utilizing System Center Orchestrator.  This will entail a few posts, as this Log Copy solution is somewhat complex, yet simple in its architecture and implementation.  This initial blog post will merely introduce the solution; I will continue to discuss it in future blog posts, fully explaining the design and moving parts, and providing code you can leverage to implement this Orchestrator Runbook in your own environment (at your own discretion / testing / risk and under your own support, but I am happy to respond to questions as you have them, of course :)).

    Some Background

    To give you a little background on the group I currently work in at Microsoft: MPSD (Managed Platforms and Service Delivery) supports ~300,000 client systems within Microsoft.  On occasion we are required to pull logs from client systems to troubleshoot various items related to the System Center products we dogfood and support.  Previously, our go-to mechanisms for collecting logs were numerous, all called “Log Copy” but each utilizing a different approach.

    • ConfigMgr Software Distribution – static log collection script targeted by advertisement
    • PSEXEC scripts to run remotely against machines and gather these logs
    • VBSCRIPTS, PowerShell scripts, batch files

    You get the idea…lots of ways, but all somewhat limited in one way or another.

    Negatives to the Above Solutions

    • Log collection not possible if ConfigMgr client is broken (if using the Software Distribution method)
    • File compression wasn’t always possible and likely not used to keep complexity down between different OSs
    • Multiple solutions and multiple folks supporting
    • Difficult to maintain and keep up to date
    • Not scalable – single threaded grabbing one log from one machine at a time
    • Manually driven and cumbersome in most cases
    • Restricted to few users due to rights required

    Introducing the System Center Orchestrator Log Copy Workflow

    The Log Copy Workflow, in its simplest form, is a front end interface built on PowerShell and rolled into an EXE that instructs Orchestrator on the back end to gather logs on behalf of the requestor.  The UI accepts requests for scenario-based logs (predefined logs) or custom logs (logs you choose, with restrictions) and runs these requests against the machines that are provided.  When the requestor hits the “Submit” button, the UI generates an XML request file that gets copied to a file share on our System Center Orchestrator Runbook (Action) Server.  Once there, our Runbook automatically picks up this file and processes its contents as a request submitted into the system.  The request is inserted into a backend database table, which Orchestrator uses to know what “work” it has to do.  This same table also serves as a retry queue, processed every 2 hours by another Runbook executing on a timed interval.
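
    To illustrate the hand-off between the front end and the Runbook, here is a rough sketch of the kind of request file the UI might drop on the share.  The element names, share path, and scenario value are purely illustrative assumptions; they are not the internal format of our Log Copy solution.

    # Hypothetical sketch only - the XML layout and share path below are assumptions, not our internal format
    $requestXml = '
    <LogCopyRequest>
      <Requestor>someone@contoso.com</Requestor>
      <Description>ConfigMgr client troubleshooting</Description>
      <RetryHours>24</RetryHours>
      <Scenario>ConfigMgrClientLogs</Scenario>
      <Computers>
        <Computer>WKS0001</Computer>
        <Computer>WKS0002</Computer>
      </Computers>
    </LogCopyRequest>'

    # Drop the request where the monitoring Runbook picks it up
    $requestId = [guid]::NewGuid().ToString()
    $requestXml | Out-File -FilePath "\\RunbookServer01\LogCopyRequests\$requestId.xml" -Encoding UTF8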

    Front End

    The front end, shown below, is essentially the portal to request logs.  You add machines from an input file or from the clipboard.  You can choose a scenario from the drop down by selecting check boxes as shown, and they show up in the list.  A custom log request is also available.  By default, the front end is launched and prepopulates most of its look and feel from a XAML file stored on our Runbook (Action) server.  The email address is used to identify who is requesting the logs (pulled from the logged-on user ID).  The email field can additionally be semicolon-separated to include multiple recipients to be notified each time an email is sent on the status of log collection.  A description field is provided and required in order to note what the log request was for (it shows up in the subject line of the email you receive).  Finally, you can set a retry interval (maximum of 99), which will allow you to request a retry to occur, by default every 2 hours, for a maximum of 99 hours to capture machines that are offline at the time of your initial request.

    [Screenshot: Log Copy front end UI]

    Notifications

    As mentioned above, an email notification is sent to provide the status of your log copy request.  By default this happens at the beginning and completion of each request (grouped by request, so you receive only one email for all machines you have requested), and if you select the retry copy option, you will receive an email by default every 2 hours until all systems have been gathered successfully or the retry window has expired.  An example is below.

    [Screenshot: example notification email]

    What’s Next?

    Well, this was just an introduction to this solution.  Stay tuned for a more in depth analysis of the Log Copy Workflow within Orchestrator.  For now, I hope you’ve gotten a taste of one potential scenario that can leverage the power of System Center Orchestrator for IT Process Automation.  This solution has definitely provided impact and value for our organization and has shown how automation can shine when applied to repetitive tasks to save time and effort.

    Giving “Props”

    I love giving credit where credit is due and ensuring the folks I work with get noticed for their hard work.  I have a very talented developer on my team who works with me to make these workflows happen.  David Wei deserves credit for his hard work and effort in turning my design, architecture, and overall ad hoc requests into our finished products.  Great teams make great end results – thank you David for all your hard work on this project and your continued support as we move forward with more automation!

    That’s it for now – Happy Automating!

  • The Photographers@Microsoft are at it Again!

     

    [Image: Explore | Photographers@Microsoft book]

     

    Hello again readers!  I wanted to take a brief tangent from my technical content to let you all know about a fantastic book that Microsoft has been publishing for the last four years.  This book is an amazing representation of the talented photographers inside Microsoft and, even better, the proceeds go to the United Way to support a great cause!

    OK, so give me the details, right? :)

    • Price is $50 which gets you a 300+ page book with 453 images!
    • $37.50 of that goes to charity. 
    • AND if you are a US-Based Microsoft employee you are matched by Microsoft so a $50 purchase results in $75 going to charity!

    Call to Action (GO GET YOUR BOOKS!)

    I encourage you to check out the details of this book here: http://www.photographersatmicrosoft.com/

    It goes on sale starting October 1st, 2012.  This book is truly beautiful. 

    Details below from the site:

    Explore | Photographers@Microsoft Book is a 300+ page collection of images taken worldwide by individuals at Microsoft® in support of the annual GIVE campaign. Each photo in the book is accompanied by a narrative written by the photographer as well as technical information which helps the reader learn how the photograph was taken. All proceeds from the sale of the book are donated to the United Way Worldwide. Purchases made by eligible Microsoft employees qualify for the Matching Gifts Program at Microsoft, effectively doubling the gift.

    Explore | Photographers@Microsoft Book makes a perfect gift for yourself, family or friends whether they are just getting started in the photographic journey or are seasoned shutterbugs.

    Shameless Plug :)

    For the second year in a row since joining Microsoft, I’ve been lucky enough to be featured along with the serious talent represented in this amazing book.

     

    This year’s photo

    [Photo]

    Last year’s photo

    [Photo]

     

    Check out the United Way stats!  What an amazing organization! 

    [Images: United Way statistics]

  • Building Scalable Workflows in Opalis

    Building on my first blog post about how we established “reach” in our environment for our Opalis infrastructure, I thought it would be fitting to fill you all in on some lessons learned from my first workflow development effort.  I believe this post is an important one: I have discussed this exact topic more than once, and it resonates with the folks I share it with as a critical way of thinking about how workflow development should be considered and laid out.

    Let’s look at version 1.0 of the Reach workflow.

    Looks simple enough, right?  We start with creating a status table to store our results.  Next, we gather a list of machines from an input file, and then finally we do a round-trip assessment and ping to see if the machine is in fact online; if it is, we check the status of a service.  To wrap this workflow up, we store our results in the status table we initialized at the very beginning.  This is a very real-world scenario that I have handled in the past with batch files or a VBScript to determine if a server/workstation is online to patch, or to run some automated task from the command line.  We’re just pulling this directly into a workflow to bring it to life a bit more.

    Version 1.0 of Reach Workflow (sequential processing)

    [Screenshot: version 1.0 of the Reach workflow]

    So what’s wrong with the above approach?

    Absolutely nothing on the surface.  In fact, for 30 machines this is perfectly scalable.  Even 50 machines is not that big of a deal.  However, as you start to add more and more machines, this workflow literally falls flat on its face :).  In fact, when I ran this against 250 machines as a test, it started out just fine.  Then, as I watched each result insert into the table one by one, I could see the timestamps getting further and further apart.  This workflow (as architected above) took approximately 1 hour and 15 minutes, consumed all the memory on the server (~4 GB), and crashed the client, terminating the workflow before it completed.  Essentially, setting up our workflow in the configuration above kept all running processes within one long-running execution that didn’t release memory until it was complete.  So each iteration of the workflow consumed a little more memory, inching consumption up slowly until it completely ran the server out of memory and crashed the process.

    Let’s look at version 2.0 of the Reach workflow (redone for scalability)

    With Version 2.0 I took a slightly different approach.  Rather than running everything from a single workflow, I thought it might be better to trigger sub tasks that would execute, do what they needed to do, and then exit and return to the main workflow.  I like to call those triggered sub tasks “workers”.  So the concept is: fire up the main orchestration (long-running) workflow, and then trigger multiple worker workflows that can do the job much faster and with little or no impact on memory or CPU (at least for the task I was doing).

    Version 2.0 of Reach Workflow (concurrent processing)

    [Screenshot: version 2.0 of the Reach workflow – main orchestration]

    So looking at the above, I added a schedule to let it run every 4 hours (instead of just once) so we could establish trends over time.  Then we create the table, pull in the machines we want to analyze, and then trigger worker(s).

    [Screenshot: the worker sub workflow]

    From the custom start, we are gathering the computer names.  I still do the ping and service status checks and log into the status table.  One other thing I added was the variance information from the last time the machine was analyzed, to see if the latency data is going up or down.  Finally, I’m publishing the computer name from the workflow (ironically, that is not necessary – I know that now :)).  In fact, this workflow is completely self contained; there is no need to publish anything back to the main workflow in this case.  One final thing I did to speed the whole process up was to allow this workflow to run multiple times (concurrent, parallel executions).  For my initial testing I set it to 50 from the default of 1.  Leveraging this approach allows you to execute multiple concurrent threads to reduce the time it takes to gather your information.

    [Screenshot: the policy’s concurrent execution setting]

    Note: For Opalis, the default maximum number of policies running at one time is 50.  I wouldn’t recommend setting yours to 50 unless you follow the recommendations and process in this link: http://support.microsoft.com/kb/2102398.

    So what were the results?

    The new execution time (total) for my policy was 4.5 minutes (instead of 1.25 hours).  Monitoring the memory on the Action Server, the memory stayed happy and constant without ramping up and maxing out.  Not even a blip on the radar!

    Lessons learned

    It is better to build out your workflows in a way that provides for scalability rather than simplicity.  Simply put, avoid placing everything you need to do into a single policy.  Instead, break your policies up into task-oriented policies and trigger them from a main orchestration workflow.  This will allow you to manage memory, performance, and scalability while allowing those workers to be leveraged as function-based executions that fire and return.

    Thanks for reading.  Happy Automating!

  • First Post–About the Blogger

    As my first post, I wanted to give you an idea of what keeps me up at night and is motivating me to blog. I am very fortunate to work in the group responsible for Dogfooding (among other things) System Center Configuration Manager and System Center Orchestrator (Opalis). My role is to work with the various teams within the Management Platforms and Service Delivery (MPSD) Team at Microsoft to integrate past, current, or future automation into Opalis Workflows as part of our Dogfooding efforts for the System Center Orchestrator Product Group. Our environment here at Microsoft provides us the ability to "flex the Opalis muscles" against our collective count of client systems reaching close to 300,000 worldwide in addition to our various flavors and revisions of ConfigMgr within our realm of support and responsibility. That's a lot of systems and a very complex environment!

    Thanks for reading.  I look forward to providing some of my experiences as I am going through the development of Opalis workflows for Microsoft.  My goal is to illuminate some useful techniques and scenarios that we have faced in our environment within Microsoft to hopefully remove roadblocks and generate ideas for your environment.

  • Using an Excel XML File for Input to a Runbook

    Hello readers.  I thought it would be a good idea to quickly discuss a topic that came up during a recent Runbook development effort.  I was faced with the challenge of how to do the following:

    1. Provide an easy way to request data from an end user
    2. Make it familiar and intuitive
    3. Provide the data in a format that could be easily consumed by Orchestrator via PowerShell.

    Solution?

    I opted for a quick and dirty Excel XML file.  It looks exactly like an Excel XLS file but is based on an XML data format.  The benefits of this approach were twofold: I didn’t have to program a fancy front end interface (more on this in a future blog post, however :)), and PowerShell could consume this data and immediately turn it into Published Data for the Runbook processing within Orchestrator.

    STEP 1

    So where do you start?  Start with a basic Excel XML file.  The easiest thing to do is put columns together as shown below that frame up what you are looking to gather.  Highlights can be leveraged as shown to call attention to required data.  In addition, you can even put in some data validation (three letters required, etc.) as well as “hover text” to provide your end user the ability to review helpful tips as they are filling out the required data.

    [Screenshot: example Excel XML input template]

    STEP 2

    Now that you have a basic template file put together, you can either have the user drop it into a predetermined (Orchestrator-monitored) input directory and let automation pick up and process the file appropriately, or do what we did: attach it to a service ticket (in our case Team Foundation Server) and let Orchestrator collect that file from the ticket, process the data inside, and act accordingly (updating the ticket with progress along the way).

    STEP 3

    So far so good, right?  Next, once you have an idea of where you are going to place the file for processing, leverage a simple Run.NET object within Orchestrator to process the file according to row and cell analysis.  The code snippet below basically sets variables in PowerShell according to the text values located within the called-out row and cell.  Then you set returned data within your Runbook to the PowerShell variables.

    Example: line 9 below evaluates $table.Row[1].Cell[1].Data."#text"

    The value shown in row(1) and cell(1) is “SEC”.  In contrast, row(0) cell(1) is “Values”.  So for each row and cell combination containing data you need, you set an appropriate variable that you can leverage within your Runbook inside Orchestrator.

     1: #Set up the input file (the path comes in as published data)
     2: $file1 = "{Published Data}"
     3:  
     4: #Read in XML Data
     5: $template  = [xml](Get-Content "$File1")
     6: $table = $template.Workbook.Worksheet[0].Table
     7:  
     8: #Read values and set variables
     9: $DeployType        = $table.Row[1].Cell[1].Data."#text"
     10: $DecomSvr           = $table.Row[2].Cell[1].Data."#text"
     11: $DecomVar           = $table.Row[3].Cell[1].Data."#text"
     12: $NewSvr               = $table.Row[4].Cell[1].Data."#text"
     13: $NewSite              = $table.Row[5].Cell[1].Data."#text"
     14: $ParentSvr           = $table.Row[6].Cell[1].Data."#text"
     15: $CNTSVR              = $table.Row[7].Cell[1].Data."#text"
     16: $NetworkBin         = $table.Row[8].Cell[1].Data."#text"
     17: $LocalBin              = $table.Row[9].Cell[1].Data."#text"
     18: $dpGroupOSD      = $table.Row[10].Cell[1].Data."#text"
     19: $Install_directory = $table.Row[11].Cell[1].Data."#text"
     20: $PKGID                 = $table.Row[12].Cell[1].Data."#text"
     21: $DP_Drive            = $table.Row[13].Cell[1].Data."#text"
     22: $emailAddress      = $table.Row[14].Cell[1].Data."#text"
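
    If you want to experiment with the parsing logic outside of Orchestrator, here is a minimal, made-up SpreadsheetML fragment you can feed through the same property path.  The worksheet name and values are placeholders; the real template is in the download at the end of this post.  Note that with only one worksheet in the sample, the [0] index used above is not needed.

    # Made-up SpreadsheetML sample to exercise the row/cell parsing shown above
    $sample = [xml]'
    <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
              xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
      <Worksheet ss:Name="Input">
        <Table>
          <Row><Cell><Data ss:Type="String">Setting</Data></Cell><Cell><Data ss:Type="String">Values</Data></Cell></Row>
          <Row><Cell><Data ss:Type="String">Deployment Type</Data></Cell><Cell><Data ss:Type="String">SEC</Data></Cell></Row>
        </Table>
      </Worksheet>
    </Workbook>'

    $table = $sample.Workbook.Worksheet.Table
    $table.Row[1].Cell[1].Data."#text"   # outputs "SEC"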

    STEP 4

    The final step is to set Published Data within your Runbook to the variable data you have set in your Run.NET object.

    [Screenshot: setting returned data in the Runbook]

    That’s it!  I’ve provided an example input file and PowerShell script for you to review and play around with.  If you have any questions – as always please don’t hesitate to ask!  Thanks for stopping by and till next time, Happy Automating!

    Process XML Example Files Process-XML.zip

  • Opalis @ Microsoft

    Please check out the following post for information on an upcoming Webcast I'm participating in on Tuesday June 28th, 2011 from 9:30 am - 10:30 am PST where we will be discussing how Opalis is being leveraged within Microsoft.  Should be a great discussion!

    http://blogs.technet.com/b/system_center_in_action/archive/2011/06/24/opalis-microsoft.aspx?wa=wsignin1.0

    Happy Automating!

  • Opalis and the Internal Microsoft Adoption Story (On Demand Webcast)

    On Tuesday June 28th, Charlie Satterfield and I had a great webcast presentation with a lot of content related to how Microsoft is leveraging Opalis internally.  Please click the inline link or the graphic below to view the content.

    Details

    This webcast concentrated specifically on the following details:

    • Real world scenarios we are currently leveraging Opalis for
      • Service Request Automation (overview)
      • ConfigMgr Role Deployment (Demo)
      • Patch Tuesday Automation (Demo)
      • Reach Analysis for Opalis at Microsoft (overview)
    • Architecture considerations
      • Single Management Server and Single Action Server
      • Single Management Server and Multiple Action Servers
    • Lessons Learned
      • Moving from simple to scalable / componentized workflows
      • Leveraging paper specs to build out initial workflow design before development efforts

    This webcast was well received and we believe is a solid hour of great information on how you may look at implementing Opalis and Orchestrator into your own environment.  To view the webcast on demand click on the following link: http://bit.ly/iuZz2h

     

    Q & A from the presentation


    Q: How far along in your process did you realize you needed to go into componentization?

    A: Almost immediately. Our reach workflow showed that if we hadn’t componentized our workflows, they wouldn’t be very scalable or efficient.
    (Note: see next Q/A for more information on this).


    Q: What kind of requirements did you find concerning the number of action servers required to run concurrent policies? Meaning did you need to deploy a certain number of action servers to run the necessary number of simultaneous policies?

    A: The answer to this question is: it depends.  To put a finer point on it, the Desktop Management side within MPSD is essentially supporting ~280,000 systems worldwide on a single Opalis Management Server and a separate Action Server (both running on Hyper-V as guest VMs).  The reason the answer is “it depends” is mostly that there are many ways to slice how you design and execute runbooks to accomplish automation in your environment.  Do we hit all 280,000 systems directly from Opalis – no.  We indirectly manage those systems through the use of System Center Configuration Manager (and the System Center suite as a whole, for that matter, in one way or another).  A key detail in designing your architecture is what you are doing and how you are going to do it.  If you have lots of long running tasks that take several hours to complete, you may be leveraging your Action Servers in a way that will prevent you from scaling efficiently (long running tasks take up memory and run over long periods of time, reducing your ability to push more policies into the queue).  This is an important piece of understanding how to optimize your Action Server load.  A final note on this: mileage may and will vary depending upon what you are doing.  Analyze how you are executing and what you are executing in regards to your IT Process Automation efforts.  If you can break your runbook tasks into sub-runbooks that execute quickly and return back to a main orchestration workflow, you will spend less time waiting on many tasks to complete in a larger single threaded workflow, allowing Opalis to complete work more quickly and efficiently and letting you run more workflows on less infrastructure.


    Q: Would you say that action server architecture decisions are more location-based than concurrent policy based? So, in deploying additional action servers in a high volume location, then deploying an action server in a location that might have a high latency link with the main datacenter/Opalis Management Server location?

    A: The answer is both :).  It depends on how you want to spread the load of your policies and whether different locations are sensitive to certain infrastructure needs (non-trusted domains, etc.).  Action Server build-out can be useful to allow you to have more policies running concurrently, but more often it is geographical (as long as your runbooks are built in a scalable fashion).


    As always, any questions – hit me up!  :)

    And of course – Happy Automating!

  • System Center Orchestrator 2012 Beta–Runbook Import Issue

    Hello readers. My group is already starting to “dogfood” the beta of System Center Orchestrator 2012.  We came up against an issue that is definitely worth talking about in case anyone sees this issue in evaluation.  This information could change in the near future so keep in mind that this is specific to the beta at this point.  Let’s get on to the info!

    Setting the Stage

    To set the stage for the issue, I need to give you a little background on how the problem surfaced to begin with.  This is an interesting one for sure.  There are two things I really like about Opalis and Orchestrator.

    1. You can “tell a story” with your activities and runbooks.  Each individual activity (ex: a Run.Net object) is like a paragraph in a chapter.  The runbooks, I guess, would then be chapters.  Following this analogy through to the end, multiple runbooks combined together give you a book (or novel, if you prefer :)).
    2. In addition to being very descriptive on how you put the pieces together on the designer palette, you can also put descriptions on your links in between these activities which can provide additional information on what may be happening next, etc.

    If you notice in the example below, the links between the objects can have descriptive names such as “Success” or “Failure” or “Go” – essentially anything you want in there.  The default is the word “Link” or whatever you have changed it to.  You cannot simply have a “blank” by entering a blank in the GUI and hitting Enter; it will default back to the previous text that was there.  Being the type of person that wants to ensure my policies are visually appealing, I leveraged a “hack” (CTRL+SHIFT+6) to make the link text disappear, leaving essentially a single “underscore”.  This was something I picked up in a training I attended, and it may have proliferated since it was a great way to clean up your policies so they were more visually appealing.  See below for an example.

    [Screenshot: example policy with link labels]

    So what’s the issue?

    So the issue comes into play when you want to import your exported Opalis 6.3 workflows into Orchestrator 2012 Beta.  The XML processing logic leveraged by Orchestrator in the new version does not allow CHAR(1) through CHAR(31) to be processed.  Long story short, CTRL+SHIFT+6 is in this list of non-printable (non-allowable) characters.
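
    If you want to check an export before attempting the import, one quick way is to scan the exported file for the characters in question.  This is not from the original troubleshooting steps – it is a small helper sketch, and the file path is an assumption.  The pattern below skips tab, carriage return, and line feed, since those are legal XML whitespace and will appear in any export.

    # Hypothetical helper: scan an exported .ois_export file for non-printable characters before import
    $exportFile = "C:\Exports\MyPolicies.ois_export"   # path is an assumption
    $content = [System.IO.File]::ReadAllText($exportFile)
    $hits = [regex]::Matches($content, "[\x01-\x08\x0B\x0C\x0E-\x1F]")
    if ($hits.Count -gt 0)
    {
        "Found $($hits.Count) non-printable character(s) - clean these up in Opalis before importing into Orchestrator."
    }
    else
    {
        "No offending characters found."
    }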

    How do you know if you have this issue?

    So, working with Charles Joy, we (mostly Charles) created this SQL query that will find all offending CHARs that Orchestrator has an issue with and display them in the results so you can take action on them.

    [Screenshot: SQL query to find the offending characters]
     

    The above query yields something like what is displayed below.  It will show you the offending CHAR value (in this case CTRL+SHIFT+6) along with the Policy Name, so you can find the problem child.

    [Screenshot: query results showing the offending CHAR values and policy names]

    So how do I update this you ask?

    Recommendation 1:

    Manually.  Find all occurrences of the unsupported non-printable CHARs with the query above, then chase them down and update your Opalis workflows before you export them for re-import into Orchestrator.

    Recommendation 2:

    Don’t like recommendation 1 above?  Well, there are a couple of other queries that Charles Joy and I worked out while updating our MPSD Opalis lab environment.  Two options are presented below because we were trying to account for scenarios where the special character is at the beginning of a string, or where it is the only thing on a line.

    Note: Run this against a backup DB before running it in production.  This is a “use at your own risk” type of thing.  It isn’t reversible, so make sure you are testing, retesting, and validating your tests.

    [Screenshot: SQL update queries]

    That’s it.  The SQL queries are provided in a download for your review and testing.  And I mean it…..test!

    Happy Automating!  Enjoy Orchestrator!

  • Opalis 6.3 Installation Guide

    Hello Readers.  This is likely one of my last Opalis-related posts as we move forward with Orchestrator as the main production platform for my group.  However, as there are still likely many of you starting on the trek of installing your first Opalis instance for evaluation or production deployment, I thought it would make sense to post the installation guide we have used internally at Microsoft for building out Opalis instances for our group.  I’ve shared this internally with other groups inside Microsoft that have had success with it, so why not give it freely to you all – my readers? :)

    So….What’s in it Anyway?

    Provided for your convenience, here is a sneak peek at what content is in this guide.  It is chock full of information gathered during our initial deployments internally.  Some of this follows along with Charles Joy’s great videos available on his blog, so go check those out as well!

    Table of Contents

    1. Infrastructure Requirements for Opalis

    2. Installing the Opalis Core Components

    2.1. Installing the Management Server

    2.2. Creating and Configuring the Database for Opalis

    2.3. Importing the License information for Opalis

    2.4. Upgrading to 6.3

    2.5. Deploying Action Servers from Deployment Manager

    2.6. Deploying Clients from Deployment Manager

    3. Installing the Opalis Operator Console

    3.1. Downloading the Pre-Requisites

    3.2. Preparing for the Console Install

    3.2.1. Extract the Opalis Operator Console Installer

    3.2.2. Install and Configure JDK 6 Update 4

    3.2.3. Extract and Install JBOSS Application Server

    3.3. Setup and Configure the Operator Console

    4. Running the Operator Console

    4.1. Running the Operator Console Interactively

    4.2. Running the Operator Console as a Service

    5. Manual Installation of Opalis Client

    That’s it! Short and simple.  If you have any questions, please do not hesitate to reach out and I will do my best to answer or get additional information for you.

    And as always,

    Happy Automating!

    Download File: Opalis Installation Guide

  • Orchestrator Migration–Locating Checked Out Policies

    A topic came up recently on an internal distribution list that I have been planning on blogging about.  I figured it’s time to get it out to you because it is timely with the release of System Center Orchestrator Beta and future versions coming. 

    What is so important and why?

    We will all need a way to report on which policies in Opalis are currently in the process of development and are not yet checked in.  Even more important, where they were last checked out and who checked them out last!  If they are checked out, they are not saved in the database and will be lost on migration.

    Why is this particular topic so important, you ask?  Well, as part of your migration strategy from Opalis to Orchestrator, your plan should include exporting all “key” workflows and sub workflows so they can be imported into your new Orchestrator environment.  You can browse through your Opalis Client and look at workflows one by one and see a padlock on them indicating they are checked out remotely, or a pencil if they are checked out from your own machine.  The padlock ones are the tricky ones.

    So what can we do to find all checked out Policies?

    It just so happens that some of this data is stored directly in the Opalis Database.  Once you get this data, you need to do a little manipulation and then extract it.  So what better way to do this than…you got it :) – create another workflow to automate it.  I’m going to keep this simple and straightforward, and you can take it from here and make it as complex as you want.  It is just three simple activities linked together, as shown below.

    [Screenshot: the three activities linked together]

    Query Database Activity

    1. Start simply by creating a new Policy and drop a Query DB activity on the designer palette.  The query DB object I am using is one of the foundation objects that came with Opalis found under the “Utilities” section on the right hand side of the designer.
    2. Open the Query Database activity and select the “Connection” tab.
    3. Ensure you are selecting the appropriate DB type and set your SERVER and CATALOG (Opalis would be the default catalog).
    4. Next, select the “Details” tab and type in the following query
      [Screenshot: the SQL query – included in the downloadable export at the end of this post]

    Get Account from SID Activity

    1. Next, I call this the “Get Account from SID” activity but it is doing a bit more than that.  This activity is setting published data for each item you have above and doing some extra PowerShell manipulation.  Drag a “Run.net” object from the “System” section of the foundation objects on the right hand side of the client.
    2. Link the two objects you have together and place the PowerShell code you see below into the details section (ensuring you are setting it to PowerShell for the type). 
      [Screenshot: the PowerShell script inside the Run.Net activity]
      An “eye chart” I know – I provided the workflow at the end of this blog so you can easily recreate and modify.  :)
    3. One very key step that I want to note above is the gathering of the UserID from the SID (see below).

         if ($lastModifiedBySid -ne "NULL")
         {
             $LastModifiedBy = (New-Object System.Security.Principal.SecurityIdentifier($lastModifiedBySid)).Translate([System.Security.Principal.NTAccount])
         }
         $CheckOutUser = (New-Object System.Security.Principal.SecurityIdentifier($checkedOutUsersid)).Translate([System.Security.Principal.NTAccount])

    4. Finally, we need to establish published data objects for each variable we have defined within our PowerShell script.  This gets us all our published data that can be leveraged in the next step.
      [Screenshot: published data definitions]

    Append Line Activity

    1. Next, this is the step that probably lacks the coolness that the rest of the process above clearly has ;).  I grabbed the “Append Line” object from the foundation objects list, out of the “Text File Management” section (sometimes a hard one to find).
    2. Type in the path you want to save your file to (I picked the temp folder on my Action Server) and then populate the published data you care about.
      [Screenshot: Append Line configuration]

    That’s it!  Check it in and run it. 

    Your results will look something like the following (hopefully you don’t name all your policies “New Policy”).

    [Screenshot: example output file]

    Final note: The download is provided as an example – use at your own discretion and test first in a lab environment or sandbox to see results.

    Downloadable Content Checked-Out-Policies.ois_export

    Thanks for following along – if you want to cut to the chase, just download the provided file and import it into your environment, configure the appropriate pieces, and let it fly.  I hope it helps with the migration.

    Till next time…

    Happy Automating!


  • System Center 2012 Orchestrator - Changing User or Group Rights Post Install

    I’ve seen a topic come up from time to time that I think is worth getting out to those that can use it :).  This is already published on TechNet (referenced within my blog here), but I wanted to put some finer notes on it to assist.

    Issue –  Updating the Primary Administrator and Orchestrator User Group Post Install

    So consider the scenario where you have installed and set up Orchestrator, selecting some initial defaults you were comfortable with for access to the designer and for who the primary administrator for Orchestrator would be, and then you want to update your environment to support a new group or primary administrator.  In order to change the primary group and / or primary administrator after the fact, you need to leverage a tool called PermissionsConfig.exe, located on the Management server.  You can follow the process below to accomplish this.

     

    Changing User / Group Rights After Install

    The following process is documented as a supported method to change the admin user and administrator group used by Orchestrator.  On occasion, you may need to update one of these (for example, you picked the wrong group during installation, or the user that set up Orchestrator is not going to be supporting it going forward).

    • Open an elevated (Administrator) cmd prompt
    • Go to the installation directory for System Center 2012 Orchestrator on the Management Server

    Example: C:\Program Files (x86)\Microsoft System Center 2012\Orchestrator\Management Server>

    • Execute the following, ensuring you are putting in both your Group and “primary user” to manage things.  Both need to be on this command line, or you will end up overwriting the existing settings with only what you put here (leaving only a group and no user, or vice versa).

    PermissionsConfig.exe -OrchestratorUsersGroup "DOMAIN\GROUPNAME" -OrchestratorUser "DOMAIN\USERNAME" -remote

    Note: The above command wraps across lines for readability – it is executed all on one line.  Also, be careful not to “copy / paste” the example from the TechNet article below (or even from this blog, for that matter).  Ensure that the “-” before each parameter is in fact a plain dash and not a “non-ASCII” character, or the command will fail.

    Additional Note: For additional troubleshooting information on this command (success / failure) navigate to the following directory on the Management Server and review the permissionsconfig.exe*.logs.

    %SystemDrive%\programdata\Microsoft System Center 2012\Orchestrator\PermissionsConfig.exe\logs
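
    As a small convenience (not from the TechNet article), the snippet below pulls up the most recent log from that folder after you run PermissionsConfig.exe:

    # Show the newest PermissionsConfig log - the folder path comes from the note above
    $logFolder = "$env:SystemDrive\programdata\Microsoft System Center 2012\Orchestrator\PermissionsConfig.exe\logs"
    Get-ChildItem -Path $logFolder -Filter "permissionsconfig.exe*" |
        Sort-Object LastWriteTime -Descending |
        Select-Object -First 1 |
        Get-Content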

     

    http://technet.microsoft.com/en-us/library/hh463588.aspx

    Below is an abstract of what the command-line options mean, for your reference:

    • OrchestratorUsersGroup – The name of the group to use for Orchestrator permissions.
    • OrchestratorUser – If this parameter is specified with a user name, the user is granted immediate access to Orchestrator whether a member of the specified group or not. This prevents the requirement for the user to log off and on if the group has just been created.
    • Remote – Indicates that the Runbook Designer can be run from a computer other than the management server.

    Short, simple, and potentially very useful!  As always –  Happy Automating!