Kevin Holman's System Center Blog

Posts in this blog are provided "AS IS" with no warranties, and confer no rights. Use of included script samples is subject to the terms specified in the Terms of Use.

OpsMgr: Public release of the Alert Update Connector


 

The Alert Update Connector for SCOM 2012 is now public: 

http://www.microsoft.com/en-us/download/details.aspx?id=34783

 

The AUC is a tool that many large enterprises used in OpsMgr 2007 to integrate OpsMgr alerts with an upstream alerting, ticketing, or incident management system.  It helped tremendously by offering the following solutions:

  • The ability to insert specialized data into custom fields on alerts from rules or monitors, after they are generated, which a ticketing system can interpret to assign tickets to the correct queues or to take special actions.
  • A simple-to-use *filter* with a user interface – to quickly and simply pick and choose which alerts are sent to a ticketing system, rather than relying on “All Critical” or “All Critical with High priority”.  Oftentimes customers had to generate WAY too many overrides when using standard alert criteria as the filter for which alerts would go to a connected system.

The only way to do this without the AUC was to write PowerShell scripts, or to code and develop your own “pre-connector” service.
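    For illustration, a scheduled “pre-connector” script of that sort might look something like the sketch below.  This is NOT the AUC’s implementation – just a rough example using the OpsMgr 2012 OperationsManager module; the server name, rule name, field values, and resolution state are placeholders:

```powershell
# Illustrative sketch only: a scheduled "pre-connector" script that stamps
# custom fields on NEW alerts so an upstream ticketing connector can route them.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName "MS01.domain.com"  # any management server

# Grab NEW alerts (resolution state 0) raised by a specific rule
$alerts = Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.Name -eq "Test Alert" }

foreach ($alert in $alerts) {
    # Stamp routing data for the upstream system, then hand the alert off
    # by moving it to a custom resolution state a product connector subscribes to
    Set-SCOMAlert -Alert $alert -CustomField1 "Core Infra Team" -ResolutionState 252
}
```

    The drawback of this approach is exactly what the AUC solves: you have to schedule it, maintain it, and pay the cost of spinning up PowerShell on a timed basis.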

    ***Note: I don't recommend that customers deploy the AUC unless you have a good business need for it.  It becomes a link in the chain of events required to get alerts into your incident management system, and could be one more thing to break.  However, for customers who are already using it, or for customers who didn't realize it was available – it is an awesome tool.  It has been updated for OpsMgr 2012 to help customers who depended on it make their 2007 R2 > 2012 upgrade a little easier.

     

    Long term, System Center Orchestrator is going to be the solution for these tools, where Orchestrator runbooks can handle the alert modification, enrichment, and forwarding to an upstream system.

     

    Let’s have a deeper look.

     

    When you extract the MSI download – you have the following files:

    image

     

    AlertUpdateConnector.exe is the file used for the service that will be installed.

    AlertUpdateConnector.exe.config is an ASCII configuration file you edit for making changes.

    AlertUpdateConnector.rtf is a readme.

    AlertUpdateConnector.tmf is used for troubleshooting when needing to perform an ETL trace.

    AlertUpdateConnector.xml is a management pack for monitoring the AUC service.  (note – I don't recommend you use this – better to create a service monitor for the service and add a recovery to restart it on your own – this MP could create too many state changes in some situations).

    ConnectorConfiguration.exe is the User Interface used to pick which rules and monitors to make changes to.

    TracingGuidsAlertUpdateConnector.txt is used for ETL tracing/troubleshooting.

     

     

    Installing the connector

     

    A good location to install the connector is any management server.  The connector is USUALLY pretty lightweight, so it isn't a big concern.  It just needs to be installed on a server that also has the console components installed as we need to have access to the SDK.  We can point the connector at ANY management server after install, since all management servers run the SDK service.  This does not need to be pointed at the RMSe.

    We need to create a folder on this server, and copy all the files to it.  I like C:\Program Files\AlertUpdateConnector

    Follow the instructions VERY carefully when installing the connector.  Be methodical – ensure you are using the 64-bit install, the correct path to InstallUtil.exe, and the correct .NET folder.  Here is my full command line:

    C:\Program Files\AlertUpdateConnector>c:\Windows\Microsoft.NET\Framework64\v2.0.50727\InstallUtil.exe AlertUpdateConnector.exe

    This step above installed the Service.  Open the Services.msc applet and find the service:

    image

    Change the logon credentials of the service to the SDK account, or any account that has SCOM admin rights.  DO NOT start the service yet.

    image

     

     

    The next step is to install the connector into SCOM.  Here is my command line for that:

    C:\Program Files\AlertUpdateConnector>AlertUpdateConnector.exe -InstallConnector

    You will now see a new connector show up in the UI:

    image

     

    Next up – we need to configure the connector config file.  Look in C:\Program Files\AlertUpdateConnector and open the AlertUpdateConnector.exe.config file in notepad.

    Under <appSettings> modify the following:

    <add key="RootManagementServerName" value="localhost" />

    You should change localhost to your management server name – ONLY IF you did not install the connector service on a management server.  It just needs to point to any server running the SDK (DAS) service.

    <add key="PollingIntervalInSeconds" value="10" />

    You shouldn’t change this in most cases.  We want the alert subscription module to grab these alerts quickly.

    <add key="ConfigurationFilePath" value="C:\Connector_Configuration.xml" />

    Modify the path to where you will store the Alert configuration file.  Mine looks like:

    <add key="ConfigurationFilePath" value="C:\Program Files\AlertUpdateConnector\Connector_Configuration.xml" />
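    For reference, after my edits the three <appSettings> keys discussed above read as follows (localhost is left alone because I installed the service on a management server; the shipped file contains other keys, which I left at their defaults):

```xml
<appSettings>
  <add key="RootManagementServerName" value="localhost" />
  <add key="PollingIntervalInSeconds" value="10" />
  <add key="ConfigurationFilePath" value="C:\Program Files\AlertUpdateConnector\Connector_Configuration.xml" />
</appSettings>
```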

     

    Next – we need to open the Alert Update Connector UI – and create an Alert Configuration file.  Run the ConnectorConfiguration.exe program located at C:\Program Files\AlertUpdateConnector.  For the Root Management Server Name – give it the name of any Management Server that runs the SDK, and hit connect.  This will take some time to fully populate all the rules and monitors in your management group.

    Choose File > Save Configuration > browse to the correct location and supply the name we used for the alert config file above:

    image

     

    Open this Connector_Configuration.xml file for editing in notepad.  Add in the ExcludeResolutionState="255" like you see in my example below.  This will keep our connector from re-opening alerts that auto-close quickly.

     

    <ConnectorConfig GlobalResolutionState="251" ExcludeResolutionState="255">
      <AlertSources />
    </ConnectorConfig>

     

    Save the file.

    You can now start the Alert Update Connector service.  You will see events logged in the OpsMgr event log for any issues, or see a normal startup and connection logged.

    Once this is working – we only have a few more steps.

    We need to create a couple of new resolution states: one for all alerts that we process but do not want to send across the connector, and another for all alerts that will be sent across the connector.  The AUC defaults to 251 for “processed” alerts, and we can use any resolution state ID for alerts that get sent to the connector.  I like using 251 and 252.

    In the console – browse to Administration > Settings > Alerts.  See my graphic below for my examples – you can name these whatever you like:

    image

     

    Next – we need a subscription to subscribe to the alerts that we want the Alert Update Connector to inspect.  My preference here is to create a subscription to inspect ALL NEW alerts, regardless of their criteria.  This ensures that all alerts will be set to either a “Processed” or a “SendToConnector” resolution state.  This is important, as it will impact how you subscribe to email notifications from within SCOM.  No longer will we include the “New” resolution state for emails – because alerts will be changed to one of these custom resolution states very quickly after being created.

    In the console, Administration > Product Connectors > Internal Connectors.

    Open the properties of the Alert Update Connector.  Click Add – to add a new subscription.  Give it a name, like “Process All NEW Alerts”.  Scope it to all groups, all targets, and check ALL severities, ALL priorities, and UNCHECK “Closed” alerts.  Only NEW resolution state should be inspected for these.

    image

    Save the subscription.

     

    From this point forward – you will see all NEW alerts getting assigned to the Alert Update Connector, with a forward pending status:

     

    image

     

    Then the AUC will take action – and either modify the alert, or will place it in the “Processed” resolution state if it takes no action:

    image

     

     

    Configuring the AUC:

     

     

    Speaking of taking action – that's what this connector business is all about – let's configure that now!

    Open your ConnectorConfiguration.exe UI.

    Connect to the management server first, THEN – open your previously saved Connector_Configuration.xml file.  File > Load Configuration > Connector_Configuration.xml

    Explore to a class where you have a targeted workflow that you want to send to a connector.  (Hint – you can uncheck View > Group By Management Pack to make this easier sometimes.)

    I am going to modify my “Test Alert” rule.  Right click it – and choose “Specify Properties to Modify”

     

    image

     

    A new window pops up.  Right click and choose “Add Property”.

    In Custom Field 1 – I am going to add the text “Test Team”.  Optionally I can scope it to only modify this property if the alert is raised about an instance contained in a specific group.

     

    image

     

    I can keep making changes, to the following properties:

     

    image

    Once I am done – save the configuration.  Overwrite your existing file.  The AUC will automatically save a backup copy of your previous version in the C:\Program Files\ directory. 

    The Alert Update Connector will also notice that the file has changed – and reload the in memory config from the file.

    Now – when this alert is generated in the future – it will follow this path:

    1. Generated as NEW
    2. Assigned to the AUC
    3. AUC inspects the alert's rule or monitor ID against what is loaded in the config file.
    4. If it matches, it will make the modifications specified in the config file.
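    The per-alert decision can be sketched as follows (illustrative Python only – the real connector works through the SDK, not dictionaries; the 251/255 values mirror the GlobalResolutionState and ExcludeResolutionState we configured earlier):

```python
GLOBAL_RESOLUTION_STATE = 251   # "Processed" - inspected, but no match in the config
EXCLUDE_RESOLUTION_STATE = 255  # Closed - never touch alerts that already auto-closed


def process_alert(alert, config):
    """Apply configured modifications to one alert, or mark it Processed.

    alert:  dict with at least 'RuleId' and 'ResolutionState'
    config: dict mapping rule/monitor IDs to property modifications, e.g.
            {'rule-guid': {'CustomField1': 'Test Team', 'ResolutionState': 252}}
    """
    if alert['ResolutionState'] == EXCLUDE_RESOLUTION_STATE:
        return alert  # skip alerts that closed before we got to them
    mods = config.get(alert['RuleId'])
    if mods:
        alert.update(mods)  # apply the configured property changes
    else:
        alert['ResolutionState'] = GLOBAL_RESOLUTION_STATE
    return alert
```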

    I am going to make the following changes to this Alert:

    Custom Field 1:  Core Infra Team  (the upstream system can use this to identify which team to notify)

    Custom Field 2:  Page  (the upstream system can take additional special actions, like send to a paging system)

    Custom Field 3:  PRODUCTION  (only if in special server group)

    Custom Field 3:  NONPROD  (only if in special infra group)

    Owner:  Infrastructure Queue  (this could tell the upstream system which ticketing queue the ticket belongs in or who to assign it to)

    Resolution State:  252  (this places the alert in a special resolution state to which a product connector can subscribe)

    image
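    Conceptually, the saved Connector_Configuration.xml now carries an entry per modified workflow.  The element and attribute names in the sketch below are my own illustration only – they are NOT the tool's documented schema, and the GUID is made up; let ConnectorConfiguration.exe generate the real entries for you:

```xml
<!-- Illustrative sketch only: element/attribute names and the GUID are
     assumptions, not the AUC's actual schema. -->
<ConnectorConfig GlobalResolutionState="251" ExcludeResolutionState="255">
  <AlertSources>
    <AlertSource Id="00000000-0000-0000-0000-000000000000" Name="Test Alert">
      <Modification Property="CustomField1" Value="Core Infra Team" />
      <Modification Property="CustomField2" Value="Page" />
      <Modification Property="Owner" Value="Infrastructure Queue" />
      <Modification Property="ResolutionState" Value="252" />
    </AlertSource>
  </AlertSources>
</ConnectorConfig>
```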

     

    Now – when my alert comes in as new, it gets inspected by the AUC, and then modified as we see here:

     

     image

     

    image

     

    image

     

    ***Note – it is best not to use Custom Fields 5–10.  The Exchange 2010 MP uses custom fields as part of its process.  While it is bad practice for Microsoft MPs to make use of custom fields, because of the disruption it can cause to customer alert lifecycle processes, the Exchange 2010 MP is hard-coded to these for now, so it is best to avoid using or overwriting them.

     

     

    The main benefit of using the AUC over scripts to make these types of modifications is that the AUC is a commonly used tool, has an easy-to-use UI to select alerts, and is far less resource intensive than executing PowerShell scripts on a timed basis, since it leverages the alert subscription module and an in-memory configured service.

    The main benefit of using the AUC over Orchestrator is the simple UI to help select which alerts need to be modified.  A superb solution would be something like using the UI in the AUC to create the config file, then using Orchestrator runbooks to make the modifications based on the config file.  That might be something I try and work on down the road.

     

    Summary:

     

    Again – this isn't a catch-all tool that every organization should deploy.  However, if you find that you’d benefit from using custom fields in an upstream incident management system, or you’d like a more granular filter for “what gets ticketed” than tons of overrides on all workflows, this tool can be very handy.  This tool is similar to a Microsoft Resource Kit tool – it is not directly supported by Microsoft.  However, it only makes simple SDK calls, and those SDK functions are supported.

    Comments
    • Kevin, the AUC 2012 documentation says that you can point the configuration at an NLB cluster name for a pool of management servers. You mentioned in response to an earlier question that you can't run this in a highly available failover mode. I just want to verify if you can create an NLB pool and have the service running on more than one MS?

    • @Brian -
      The connector service just needs to connect to an SDK service. It doesn't matter if that SDK service is direct, or part of an NLB cluster, so yes, NLB is supported. However, we don't support multiple services running at the same time. This will cause issues. You can install it on two servers, have one disabled and the other enabled, and use this as part of a DR plan, but the only highly available solution for the connector service itself would be a Windows Failover cluster.

    • Thank you Kevin, do you know what problems it may cause? We tested it running on two management servers and then we saw this blog entry. We had not come across any issues yet but we obviously would not want to implement something that is problematic and/or not supported. Thank you again.

    • @Brian -

      I don't know of any specific issues, but this one is theoretical: when the subscription assigns alerts to a connector, it does this based on a configured polling cycle (default is 60 seconds). Then the connector modifies alerts on its own polling cycle, which is every 10 seconds. It is possible that two connector services might try to modify the same alert at the same time or near the same time. This might produce occasional unexpected results. For instance, the AUC was updated a while back for a specific condition, where an alert auto-closed after it was assigned to a subscription, but just before the connector modified the resolution state. This re-opened a closed alert. Therefore, the connector code was changed to ignore alerts in the closed resolution state just before modifying them. Having two connector services might cause something similar, where both try to act on an alert at the same time. In theory, I cannot think of any settings that would conflict, because they would both share the same configuration and attempt the same action on the same alert at the same time. So even if one connector completes first, the second one would simply modify the same properties again. The only negative offhand would be some evidence of this in the alert history. The worst-case scenario would be some sort of exception that crashed the connector service if they ran at the same time.
