As you saw in last week's post (Automation–An Introduction to Service Management Automation), we will very soon be providing even more guidance on the next generation of Automation for System Center: SMA. Until then, I still have a couple more Orchestrator Tips/Tricks up my sleeve, and I wanted to follow up on my most recent post (Automation–Orchestrator Tip/Trick–Run .Net Script Activity + PowerShell + Published Data Arrays) with some more recently discovered "interesting" behavior.
So, let’s get to it!
As we saw in the previous post, the Run .Net Script activity handles the output of array data very differently than it did in previous versions of Orchestrator/Opalis Integration Server. In fact, if you are interested, you can review its previous behavior here (my “8-Minute-Demo - OIS 6.3 - Best Practice Video Shorts - Multi-Value Data Handling”).
I recently reproduced this behavior while helping one of our internal SC 2012 Orchestrator customers here at MSFT. Because I was surprised by the results, I felt it necessary to make more information available, and especially to list the workaround options.
For the Run .Net Script activity, publishing a mix of multi-value and single value data will result in data flattening (with a single space as the delimiter). This is not the same as the configurable Flatten option that exists on the Run Behavior tab of all activities. It is a separate (by design) behavior that occurs only when the published data output mixes multi-value and single value data. Because the Run .Net Script activity is 100% user configurable (published data included), its output is handled differently than that of other activities. For instance, if you use the Query Database activity, you know that you can have both multi-value and single value published data output (e.g., Number of Rows [single value data] vs. Full line as a string with fields separated by ';' [multi-value data]). Unfortunately, this is no longer the case for the Run .Net Script activity (from System Center 2012 Orchestrator to present).
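To make the behavior concrete, here is a minimal sketch (with hypothetical values; the variable names are mine, not from the product) of what effectively happens to a published array once a single value is published alongside it:

```powershell
# Hypothetical illustration of the by-design behavior described above:
# once the Run .Net Script activity publishes BOTH multi-value and
# single value data, the array is flattened with a single space.
$ArrayValues = @('Server01', 'Server02', 'Server03')   # multi-value data
$ArrayCount  = $ArrayValues.Count                      # single value data

# Expected on the databus: three separate items for $ArrayValues.
# Actual result once $ArrayCount is also published - one flat string:
$Flattened = $ArrayValues -join ' '
$Flattened   # 'Server01 Server02 Server03'
```

Publishing only $ArrayValues (and nothing single-valued) would leave the three items separate on the databus.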
This only matters if you want to publish both multi and single value data to the databus (like the Query Database activity scenario described above). This is often the case when the published single value data is used as a semaphore in the adjacent link's filter (e.g., If Count > 0 Continue, Else Stop). After the link filter on the single value data, the multi-value data would be leveraged accordingly. Because of the unexpected data flattening behavior in the Run .Net Script activity, link filtering on single value data is either not possible or requires multiple activities to accomplish (see the Workarounds section below).
REMINDER: Because the unexpected data flattening leverages a single space as the delimiter during flatten, if your data contains spaces, Option 2 Workaround is not valid.
The following set of screen captures will illustrate what the behavior looks like during configuration/execution.
NOTE: The flattening of the “Array Values” [multi-value data] field did not occur until the “Array Count” field [single value data] was added to the activity's configuration.
NOTE: More information about working with “Jagged” Published Data Arrays can be found here.
NOTE: This would have been my suggested workaround, but obviously it suffers from the same data flattening behavior.
There are essentially three options to avoid inadvertently flattening your data in the Run .Net Script activity. Two of these options will allow for single value data link filtering for later use of the multi-value data.
REMINDER: Because the unexpected data flattening leverages a single space as the delimiter during flatten, if your data contains spaces, the “expand” workaround is not valid.
NOTE: Activity Two directly leverages the tip/trick from the previous blog post (Automation–Orchestrator Tip/Trick–Run .Net Script Activity + PowerShell + Published Data Arrays).
I list Option 2 as [nearly full functionality] on purpose. It still relies on a " " as a delimiter – which is less than ideal. So then I thought, “What if I use the built-in configurable Flatten option which exists on the Run Behavior tab of all activities?”
Unfortunately it is ignored when dealing with this type of “mixed” published data behavior. Here are my test activity configuration and output results (I slightly modified the above example for the test):
NOTE: The #DELIM# delimiter configured to Flatten the activity data is ignored, and " " remains the delimiter. So my attempt at splitting on "#DELIM#" will fail in Activity Two.
While the [nearly full functionality] Option 2 workaround will not work for everyone, it is one way to have both single value filtering AND multi-value output as published data. Yes, it requires two activities and a link filter. Yes, it is only valid for data that doesn’t already contain spaces.
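As a sketch of the "expand" step in Activity Two (assuming the flattened string arrives as subscribed published data from Activity One; here it is mocked as a local variable):

```powershell
# Option 2 sketch: Activity One published mixed data, so the array arrived
# flattened with single spaces. Activity Two "expands" it by splitting.
# $FlattenedData stands in for the subscribed published data from Activity One.
$FlattenedData = 'Server01 Server02 Server03'

# Split on the single-space delimiter the flatten behavior imposed.
# Only valid when the original items themselves contain no spaces.
$ArrayValues = $FlattenedData -split ' '

# Publish ONLY $ArrayValues from this activity (no single value data),
# so each element lands on the databus as a separate multi-value item.
$ArrayValues
```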
Of course! It is Orchestrator – there are many different ways to do the same thing. And luckily, I have one for this situation that makes it a bit more reliable…
3. [full functionality] Three Activities (with link filtering):
NOTE: Activity Three directly leverages the tip/trick from the previous blog post (Automation–Orchestrator Tip/Trick–Run .Net Script Activity + PowerShell + Published Data Arrays).
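Since the screenshots are not reproduced here, the following is only a plausible sketch of the three-activity pattern, under the assumption that the key to Option 3 is never mixing the two kinds of published data in one activity: one activity publishes only the single value count (for the link filter), and a separate activity publishes only the multi-value data.

```powershell
# Activity One sketch: publish ONLY single value data (the count),
# which the adjacent link filters on (e.g., Continue if Count > 0).
$Results = Get-Service |
           Where-Object { $_.Status -eq 'Running' } |
           Select-Object -ExpandProperty Name
$ResultCount = $Results.Count    # the only published data from this activity

# Later activity sketch (after the link filter passes): publish ONLY
# multi-value data. With no single value data in the mix, nothing
# gets force-flattened, and each element is a separate databus item.
$Results
```

The cost is re-producing (or re-passing) the data across activities, which is why this option needs three activities rather than two.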
If any of this information fits your scenario, great. If not, then at the very least you won’t have to continue wondering why your data is being flattened.
For more information and tips/tricks for Orchestrator, be sure to watch for blog posts in the Automation Track!
Nice! I've run into this once or twice, and your solution is very elegant.
Some time ago I encountered this phenomenon as well. In my case, only one of the multi-valued published data items got flattened, with spaces instead of the commas I had selected; I dismissed it as weird and built a workaround.
After reading this blog post and some more tests I think this is a bit more difficult:
In fact, the Run .Net Script activity behaves somewhat randomly when publishing multi-valued data.
I tested with simple scripts with arrays (containing string or integers) and strings.
The outcome was that when you mix single-valued and multi-valued published data, the "forced" flatten phenomenon (with space delimiters) sometimes happens, sometimes does not happen at all, and sometimes affects only some of the multi-valued published data.
And the "forced" flatten phenomenon sometimes even happens when all the published data is multi-valued... leaving Option 3 as NOT bulletproof.
Within a given runbook, the respective activity behaves the same for all runs, but you cannot determine how it will behave before running it for the first time (at least I couldn't find a pattern).
One more thing: it seems to me that the phenomenon is connected with the variables used in the script. I tested several combinations, changing the script; e.g., introducing a new variable that was not published while leaving the rest of the script the same.
So I suspect it has something to do with extracting the (right) variables from PowerShell and forwarding them to the published databus.
There is a "collection" checkbox, but it is greyed out for PowerShell... wouldn't it be a good idea to (re-)introduce the option for PowerShell (perhaps renamed to "multi-valued", in one of the future Update Rollups)?
The designer of a runbook could then configure whether the respective published data (and the respective PowerShell variable) has to be treated as multi-valued or not.
Here are the test scripts (A and B) and the scenarios with their respective outcomes in my environment(s) (SC 2012 Orchestrator SP1 UR2); if somebody has the time to test, it would be great to hear whether this is reproducible in other environments:
[System.string] $strsingle1 = 'abc'
[System.string] $strsingle2 = 'xyz'
[System.object] $arrmultiple1 = @() #strings will be inserted
[System.object] $arrmultiple2 = @() #strings will be inserted
[System.object] $arrmultiple3 = @() #integers will be inserted
[System.object] $arrCount = @() #integers will be inserted
$arrmultiple1 += 'def'
$arrmultiple1 += 'ghi'
$arrmultiple1 += 'jkl'
#$arrCount += $arrmultiple1.Count
$arrmultiple2 += 'mno'
$arrmultiple2 += 'pqr'
$arrmultiple2 += 'stu'
#$arrCount += $arrmultiple2.Count
$arrmultiple3 += 123
$arrmultiple3 += 456
$arrmultiple3 += 789
#$arrCount += $arrmultiple3.Count
$strtype = $arrmultiple3.gettype().name
$arrCount += $arrmultiple1.Count
$arrCount += $arrmultiple2.Count
$arrCount += $arrmultiple3.Count
(the only differences are the lines adding entries to the $arrCount that are commented out in script A)
Script: Script A
Name: single1, Type: string, Variable name: strsingle1
Name: single2, Type: string, Variable name: strsingle2
Name: Type, Type: string, Variable name: strtype
Name: multiple1, Type: string, Variable name: arrmultiple1
Name: multiple2, Type: string, Variable name: arrmultiple2
Name: multiple3, Type: integer, Variable name: arrmultiple3
---> multiple1, multiple2, and multiple3 are all published multi-valued (no "force" flatten); the other three are published correctly as well. Expected behavior, but contradicting the blog post
---> All the multiple are published multi-valued (no "force" flatten), expected behavior
Script: Script B
---> All the multiples are published "force" flattened; unexpected behavior, and contradicting the blog post
(Even with only multiple1 as published data, the "force" flatten happens here...)
Name: Count, Type: integer, Variable name: arrCount
---> Count is published multi-valued, and multiple1 and multiple2 are published "force" flattened; unexpected behavior, and contradicting the blog post
Another alternative for processing complex output from PowerShell with the Run .Net Script activity is to use the PowerShell cmdlet ConvertTo-Xml in conjunction with Orchestrator's Query XML activity.
e.g., in the PowerShell:
$Published = Get-Process | Select-Object -First 3 | ConvertTo-Xml -As String -NoTypeInformation
$Published is then published to the databus and fed into the Query XML activity, where we can use the full power of XPath/XQuery to process the complex data.
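A sketch of the round trip this describes; in a real runbook the consuming side would be the Query XML activity, but inline PowerShell shows the same XPath idea (the property names queried are just examples):

```powershell
# Serialize complex objects into a single XML string for the databus.
# One string avoids the mixed single/multi value flattening entirely.
$Published = Get-Process | Select-Object -First 3 |
             ConvertTo-Xml -As String -NoTypeInformation

# Downstream (Query XML activity in Orchestrator; inline here for
# illustration), XPath picks the data apart again.
$xml = [xml]$Published
$names = $xml.SelectNodes("//Object/Property[@Name='Name']") |
         ForEach-Object { $_.'#text' }
$names   # the three process names, recovered as separate values
```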
I hope that makes sense
@PIfM - I will remove the word "bulletproof". But in my defense, I said "more bulletproof". ;) The fact that it acts differently under different conditions is concerning. I will find time to test your scenarios and message back (probably next week).
@Richard Catley - Wow. I never thought about this. This is a great option. Even if you output the data as a "blob" of XML only to be used as needed later on and extracted with PowerShell, it makes sense as a "flatten" then "expand" option. Leveraging the Query XML activity is a fine solution as well, and will natively get things back on the databus. Actually, you could even pop the XML into the Query Database activity and leverage SQL to "expand" the data...
Thanks for your feedback!