Case study: Fixing a discovery…


Discoveries are a critical part of management packs. Ideally, a discovery should find the objects and their properties accurately, as quickly as possible, and with the least possible performance impact.


In this post I shall discuss a scenario where we had the following problems:

1) The right information was not discovered (an accuracy problem).

2) There was a big lag in discovering the information (a freshness problem).

3) It had a performance hit on the machine (a performance problem).

Let us discuss the problem in general terms, without naming the MP or the discoveries that were actually affected.

As background, let us assume an MP has classes A and B, with class A hosting class B.

The key property of class A is called Key, and both class A and class B have a property P, each discovered by its own discovery.
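
For illustration, the class model for such an MP could be defined as below. The IDs, base classes, and hosting relationship here are hypothetical stand-ins, not the actual MP's definitions.

<ClassType ID="MyMP.ClassA" Accessibility="Internal" Abstract="false" Base="Windows!Microsoft.Windows.LocalApplication" Hosted="true" Singleton="false">
  <Property ID="Key" Type="string" Key="true" />
  <Property ID="P" Type="string" Key="false" />
</ClassType>
<ClassType ID="MyMP.ClassB" Accessibility="Internal" Abstract="false" Base="System!System.LogicalEntity" Hosted="true" Singleton="false">
  <Property ID="P" Type="string" Key="false" />
</ClassType>
<RelationshipType ID="MyMP.ClassAHostsClassB" Accessibility="Internal" Abstract="false" Base="System!System.Hosting">
  <Source>MyMP.ClassA</Source>
  <Target>MyMP.ClassB</Target>
</RelationshipType>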

Both classes were supposed to discover the same value for P, but the main problem surfaced when the two classes showed different values for the property.

Upon investigation we found that each of the two discoveries used its own WMI query to discover property P. During an earlier fix, the discovery of P in class A was corrected but the discovery of P in class B was not, which led to the discrepancy in the values.
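
For context, such a WMI discovery is typically built on the Microsoft.Windows.WmiProviderWithClassSnapshotDataMapper module from the Windows library. The sketch below is illustrative; the namespace, WMI query, and class and property names are hypothetical.

<DataSource ID="WmiDS" TypeID="Windows!Microsoft.Windows.WmiProviderWithClassSnapshotDataMapper">
  <NameSpace>root\cimv2</NameSpace>
  <Query>SELECT KeyValue, PValue FROM MyWmiClass</Query>
  <Frequency>86400</Frequency>
  <ClassId>$MPElement[Name="MyMP.ClassA"]$</ClassId>
  <InstanceSettings>
    <Settings>
      <Setting>
        <Name>$MPElement[Name="MyMP.ClassA"]/Key$</Name>
        <Value>$Data/Property[@Name='KeyValue']$</Value>
      </Setting>
      <Setting>
        <Name>$MPElement[Name="MyMP.ClassA"]/P$</Name>
        <Value>$Data/Property[@Name='PValue']$</Value>
      </Setting>
    </Settings>
  </InstanceSettings>
</DataSource>

Fixing the query in one copy of a data source like this, but not in the other, is exactly how the two discoveries ended up reporting different values for P.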

At this stage we had two problems: the information was not accurate, and running two WMI queries to discover the same property carried a performance hit as well.

So the solution at this stage was to reuse, in class B, the information discovered by the discovery targeting class A. This avoids the second WMI query and keeps the information accurate, since there is only one script to maintain.

We did not completely eliminate the WMI query, as it was still needed for the discovery of class A; and since it ran only once in 24 hours, the performance hit was not that bad.

To achieve this, we changed the discovery of class B to use a data mapper module that maps property P from class A to class B.

The code to do that looks as shown below.

<ConditionDetection ID="CD" TypeID="System!System.Discovery.ClassSnapshotDataMapper">
  <!-- Class and property names here are illustrative. The discovery targets
       class A (the host), so class A's properties are available via $Target$. -->
  <ClassId>$MPElement[Name="MyMP.ClassB"]$</ClassId>
  <InstanceSettings>
    <Settings>
      <Setting>
        <Name>$MPElement[Name="MyMP.ClassA"]/Key$</Name>
        <Value>$Target/Property[Type="MyMP.ClassA"]/Key$</Value>
      </Setting>
      <Setting>
        <Name>$MPElement[Name="MyMP.ClassB"]/P$</Name>
        <Value>$Target/Property[Type="MyMP.ClassA"]/P$</Value>
      </Setting>
    </Settings>
  </InstanceSettings>
</ConditionDetection>

During testing we learnt that the above solution still did not solve the problem completely: looking into the log file, we saw that the discovery data packet was being dropped at times. The reason was that we do not control the execution order of the workflows on the agent; if the discovery for B runs before the discovery for A has run for the first time, then the value of property P is null, which makes the management server drop the discovery packet.

Even though this is not a major problem, it is something to take care of. The fix was to introduce the condition detection module shown below, placed before the data mapper module in the workflow. It checks that property P is not null, and only then allows the data mapping to happen.

<ConditionDetection ID="Filter" TypeID="System!System.ExpressionFilter">
  <!-- The property path is illustrative; the filter passes data through
       only when property P on the targeted class A instance is not empty -->
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <Value>$Target/Property[Type="MyMP.ClassA"]/P$</Value>
      </ValueExpression>
      <Operator>NotEqual</Operator>
      <ValueExpression>
        <Value />
      </ValueExpression>
    </SimpleExpression>
  </Expression>
</ConditionDetection>

With the above condition detection in place, we have an accurate, well-performing solution. This is good enough for most cases, but there is still a freshness issue: if the discovery for B runs just before the discovery for A, then in the worst case B picks up the data almost 24 hours after A has discovered it.

There is a solution for this problem too. It is not an easy one, but it does exist. To understand how to achieve it, we need to understand how Operations Manager works: every time the configuration of a module changes, the module is reloaded and reactivated. So if we can make the discovery of B reload whenever property P of A changes, B can immediately pick up the newly discovered value of P.

The way we do this is by creating a new scheduler module that takes a property value as a configuration parameter (ReloadOnValueChange); when the property changes on discovery, the configuration changes, and the module reloads. The code for the module looks as shown below.

<DataSourceModuleType ID="MyReloadable.Discovery.Scheduler" Accessibility="Internal" Batching="false">
  <Configuration>
    <IncludeSchemaTypes>
      <SchemaType>System!System.SchedulerSchema</SchemaType>
    </IncludeSchemaTypes>
    <xsd:element name="Scheduler" type="PublicSchedulerType"/>
    <xsd:element name="ManagedEntityId" type="xsd:string"/>
    <!-- ReloadOnValueChange is never read by the member modules; a change in
         its value changes this module's configuration, which forces a reload -->
    <xsd:element name="ReloadOnValueChange" type="xsd:string"/>
  </Configuration>
  <ModuleImplementation Isolation="Any">
    <Composite>
      <MemberModules>
        <DataSource ID="Scheduler" TypeID="System!System.Scheduler">
          <Scheduler>$Config/Scheduler$</Scheduler>
        </DataSource>
      </MemberModules>
      <Composition>
        <Node ID="Scheduler"/>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <OutputType>System!System.TriggerData</OutputType>
</DataSourceModuleType>

So to finally fix the discovery of B, we replace the scheduler module with the following.

<DataSource ID="DS" TypeID="MyReloadable.Discovery.Scheduler">
  <Scheduler>
    <SimpleReccuringSchedule>
      <Interval Unit="Seconds">86400</Interval>
      <SyncTime />
    </SimpleReccuringSchedule>
    <ExcludeDates />
  </Scheduler>
  <ManagedEntityId>$Target/Id$</ManagedEntityId>
  <!-- The property path is illustrative: when P changes on the class A
       instance, the configuration changes and the discovery reloads and reruns -->
  <ReloadOnValueChange>$Target/Property[Type="MyMP.ClassA"]/P$</ReloadOnValueChange>
</DataSource>

With the above fix we were able to instantaneously discover P for B when P changes for A.

So a fix meant simply to ensure the right value is discovered turned into a journey to improve the accuracy, performance, and freshness of the discovery. Glad to say, the mission was accomplished :-)





This type of discovery pattern is very useful when you want to chain discoveries on the discovery of another object or property: even with very large discovery intervals, you can still keep the data fresh when the property you depend on changes.



This scheduler module should be used only for discoveries, as it completely breaks cookdown. It also should not be used when the dependent property changes often, since the module would reload every time the value changes, which can itself cause a performance hit.


This posting is provided "AS IS" with no warranties, and confers no rights. Use of attachments is subject to the terms specified at


