Kevin Holman's System Center Blog

Posts in this blog are provided "AS IS" with no warranties, and confer no rights. Use of included script samples is subject to the terms specified in the Terms of Use.

Using a Generic Text Log rule to monitor an ASCII text file – even when the file is a UNC path


There are several examples in blogs of how to create a generic text log rule to monitor a local text file (Unicode, ASCII, or UTF8).

This will be a step-by-step example of doing the same, however, using this to monitor the log file on a remote UNC path instead of a local drive.  This is useful when we want to monitor a file/files on a NAS, or on a share that is hosted by a computer without an agent.

This is a bit unique… instead of applying this rule to ALL systems that might have a specific logfile present in a specific directory – we are going to target this rule to only ONE agent.  This agent will monitor the remote file share, similar to the concept of a “Watcher Node” for a synthetic transaction.  Therefore, we will create this rule disabled, and enable it only for our “Watcher”.

 

In the Ops console – select the Authoring pane > Rules. 

Right click Rules, and select Create a new rule.  We will choose the Generic Text Log for this example:

 


 

Choose the appropriate MP to save this new custom rule to, and click Next.

For this rule name – I will be using “Company Name – Monitor remote logfile rule”

Set the Rule Category to “Alert”

For the target – I like to use “Windows Server Operating System” for generic rules and monitors.

UNCHECK the box for “Rule is enabled”

 


 

Click Next.

 

The directory will be the UNC path.  Mine is “\\VS2\Software\Temp”

The pattern will be the logfile(s) you want to monitor.  We can use a specific file, such as “logfile.log” or a wildcard, such as “*.log”.

You should not check the “UTF8” box unless you know the logfile to be UTF8 encoded.
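Conceptually, the Directory + Pattern pair is just a wildcard match over the filenames in that directory. Here is a rough Python sketch of that selection (a simplification for illustration only – not SCOM's actual implementation):

```python
import fnmatch
import os

def matching_logfiles(directory, pattern):
    """Return the filenames in `directory` selected by the wildcard
    `pattern` - e.g. "*.log" matches every .log file, while
    "logfile.log" matches only that one specific file."""
    return sorted(
        name for name in os.listdir(directory)
        # lower() both sides because Windows filename matching is case-insensitive
        if fnmatch.fnmatch(name.lower(), pattern.lower())
    )
```

So a pattern of “*.log” would pick up every .log file dropped into the share, while “logfile.log” restricts the rule to a single file.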

 


 

Click Next.

On the event expression, click Insert for a new line.  Essentially, log file monitors read each new line in a logfile as one object, and that line is represented by “Params/Param[1]”.  This “Parameter 1” is the entire line in the logfile, and is the only value that is valid for this type of rule – so just type/paste that into the box for Parameter Name.

Since we want to search the logfile line for a specific word, the Operator will be “Contains”.

For the value – this can be the word you are looking for in the line, that you want to alert on.  For my example, I will use the word “failed”.
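To picture what the rule does on each pass: it reads any complete new lines appended to the file since the last read, and applies the “Contains” test (a plain substring check) to each one as Parameter 1. A hedged Python sketch of that behavior (an illustration, not SCOM's actual reader code):

```python
def tail_new_lines(path, offset):
    """Read any complete lines appended to `path` since byte `offset`.
    Returns (lines, new_offset).  Each complete line is what the rule
    sees as Param 1."""
    with open(path, "rb") as f:
        f.seek(offset)
        chunk = f.read()
    complete, sep, _partial = chunk.rpartition(b"\n")
    if not sep:
        return [], offset  # no line terminator yet: nothing complete to evaluate
    lines = complete.decode("ascii", errors="replace").splitlines()
    return lines, offset + len(complete) + len(sep)

def matching_lines(lines, value="failed"):
    """The 'Param 1 Contains <value>' expression: a substring test per line."""
    return [line for line in lines if value in line]
```

Note that a partially written line (no terminator yet) is not evaluated – which is why, later in the walkthrough, a carriage return is needed after your test line.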

 


 

Click Next.

 

On the alert screen – we can customize the alert name if desired, set the severity and priority, and build a better Alert Description.  If you are using SP1 – the default alert description is blank.  If you are using R2 – the default alert description is “Event Description: $Data/EventDescription$”.  HOWEVER – this is an invalid event variable for this type of event (logfile), so we need to change that right away.  I keep a list of common alert description strings HERE

For this – I will recommend the following alert description.  Feel free to customize to make good sense out of your alert:

Logfile Directory : $Data/EventData/DataItem/LogFileDirectory$
Logfile name: $Data/EventData/DataItem/LogFileName$
String:  $Data/EventData/DataItem/Params/Param[1]$
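These $Data/…$ variables are resolved per matched line when the alert is generated. A small sketch of that substitution, using hypothetical event values (the directory and filename are just the ones from this example):

```python
def render_description(template, event):
    """Substitute $Data/...$ variables in an alert-description template
    with values from a (hypothetical) matched logfile event."""
    out = template
    for var, value in event.items():
        out = out.replace("$" + var + "$", value)
    return out

template = (
    "Logfile Directory : $Data/EventData/DataItem/LogFileDirectory$\n"
    "Logfile name: $Data/EventData/DataItem/LogFileName$\n"
    "String:  $Data/EventData/DataItem/Params/Param[1]$"
)
# Hypothetical values for one matched line:
event = {
    "Data/EventData/DataItem/LogFileDirectory": r"\\VS2\Software\Temp",
    "Data/EventData/DataItem/LogFileName": "logfile.log",
    "Data/EventData/DataItem/Params/Param[1]": "backup failed at 02:00",
}
```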

Click “Create” to create the rule.

Find the rule you just created in the console – right click it and choose “Properties”.  On the Configuration tab, under responses (to the right of “Alert”) click Edit.

Click the “Alert Suppression” button.  You should consider adding in alert suppression on specific fields of an alert – in order to suppress a single alert for each match in the logfile.  If you don't – should the monitored logfile ever get flooded with lines containing “failed” from the application writing the log – SCOM will generate one alert for each line written to the log.  This has the potential to flood the SCOM database/Console with alerts.  By setting alert suppression here – we will create one alert, and increment the repeat count for each subsequent line/alert.  I am going to suppress on LoggingComputer and Parameter 1 for this example:
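The effect of suppressing on those two fields can be sketched like this – an illustration of the behavior (one alert per unique key, repeat count incremented thereafter), not of SCOM internals:

```python
from collections import defaultdict

class AlertSuppressor:
    """One alert per unique suppression key; later matches bump a
    repeat count instead - mirroring suppression on
    LoggingComputer + Parameter 1."""
    def __init__(self):
        self.repeat_counts = defaultdict(int)

    def raise_alert(self, logging_computer, param1):
        key = (logging_computer, param1)
        self.repeat_counts[key] += 1
        # True means a new alert was opened; False means an existing
        # alert's repeat count was incremented instead.
        return self.repeat_counts[key] == 1
```

A flood of identical “failed” lines therefore produces one alert with a growing repeat count, while a line with different text opens a new alert.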

 


 

Click OK several times to accept and save these changes to the rule.

 

Now – we created this rule as disabled – so we need to enable it via an override.  I will find the rule in the console – and override the rule “For a specific object of class:  Windows Server Operating System”

 


 

Now – pick one of these machines to be the “watcher” for the logfile in the remote share. 

**Note – the default agent action account will make the connection to the share and read the file.  In my case – the default agent action account is “Local System” so this will be the domain computer account of the “Watcher” agent which connects to the remote share and reads the file.  This account will need access to the share, folder, and files monitored.  Keep that in mind.

Set the override to “Enabled = True” and click OK.

 

At this point, our Watcher machine will download the management pack again with the newly created override, and apply the new config.  Once that is complete – it will begin monitoring this file.  You can create a log file in the share path, and then write a new line with the word “failed” in it.  You need a carriage return after writing the line for SCOM to pick up on the change.
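If you would rather generate the test line from a script than by hand, something like this works (the path and text are assumptions – point it at your own monitored file). The trailing newline is the line terminator the logfile reader waits for:

```python
def write_test_line(path, text="simulated task failed"):
    """Append one complete test line to the monitored logfile.
    The trailing newline terminates the line - without it, the
    logfile reader will not treat the line as complete."""
    with open(path, "a") as f:
        f.write(text + "\n")
```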

You should see a new alert pop in the console, based on matching the criteria.  Subsequent log file matches will only increment the repeat count.  Customize the alert suppression as it makes sense for you.

Then – create additional rules just like this – for different UNC paths.

 


Comments
  • Excellent example, thanks!

    One comment on Alert Suppression though.

    Does not Parameter 1 equal Params/Param[1] (the entire line)?

    If so, for alert suppression to work, the line would have to be exactly the same. A date or timestamp will prevent suppression from happening.

    Regards

    Roger

  • Nice work. Quick question. Does the SCOM Agent on the Watcher node need to be running under a network account?

  • "One comment on Alert Suppression though.

    Does not Parameter 1 equal Params/Param[1] (the entire line)?

    If so, for alert suppression to work, the line would have to be exactly the same. A date or timestamp will prevent suppression from happening."

    -------------------

    Yes - Parameter 1 equals that - therefore - my example would suppress anytime the line that matched was identical.  Typically - this is correct.  If the line isn't identical - then it will be a different alert.  If that is not desirable - then remove Param 1.

  • No - in my example - I used an agent using Local System.... which is an "Authenticated User" and therefore had access to this share.  This specific share had share permissions of Everyone-FullControl, and NTFS permissions of Everyone-Read.

    If your share or NTFS permissions are more strict - then make sure you grant the computer account of the agent access to both share and NTFS, or run the agent under a domain user account, which has access to the share/NTFS.

  • Nice work. Just curious. Is it possible to set the Rule Target to specific machine/server rather than a class of machines?

  • Yes and No.

    A rule/monitor workflow MUST target a class.  Period.  End of story.

    However - we have two options here:

    1.  Target a generic class, like Windows Operating System - create the rule disabled, then override it as enabled for my one specific object.  This is the example I used above.

    2.  Create a new class, using WMI/Registry provider for example, and make only the one special computer I want to be a discovered instance of that class... then target that class (much more complicated)

  • Excellent post, although I am having one strange problem, the log reader works 100% and the alerts are correct, but the system seems to randomly alert off the same line in the log file over and over again, what could I have done wrong to get this happening?

  • i got error for the path "C:\SummitCfAdapter\PROD\SummitCfAdapter-LOH-PROD\log\SummitCfAdapterMaster.log" Error opening log file directory Event ID's 31705,31707

    Please help

  • Please post more details from the events you are getting - I don't know what those event IDs are.

  • i got error for the path "C:\SummitCfAdapter\PROD\SummitCfAdapter-LOH-PROD\log\SummitCfAdapterMaster.log" Error opening log file directory Event ID's 31705,31707

    Error description

    "Error opening log file directory

    Directory =

    C:\SummitCfAdapter\PROD\SummitCfAdapter-LOH-PROD\log

    Error: 0x80070003

    Details: The system cannot find the path specified."

    but when I change path to "C:\SummitCfAdapter\PROD" it works fine.

    Is it because of the "-" in the file path?

    I've tried enclosing the path in double and single quotes also, but then got the error "The filename, directory name, or volume label syntax is incorrect."

  • Thanks Kevin for your quick reply. Don't know how, but it is working now. I haven't changed anything - just restarted the health service, and it is working.

  • Thanks Kevin for the excellent example.

    I have a problem: I followed exactly the steps you mentioned, but it is not giving me any output.

    the path i tried 2 ways

    \\localhost\d$\product

    &

    d:\product\

    file name is company.log

    Is there a way to find out if the rule is working or not?

    -VRKumar

  • Thanks for this example.

    I have created the same rule, and an alert appears only when the log file is created - not when the log file changes. Is that normal?

    If yes, is there a way to display an alert when a log file changes?

    Thanks you for response.

  • Kevin,

    I am using this rule in several situations, but lately I have run into a situation where this rule is not effective for log files that are recreated everyday. The previous day's file is renamed, and the application creates a new and empty log file.

    So if SCOM were to detect a string on line 100 one day, and then several days later, with a recreated file of the same name and path, the string appears on line 90, SCOM will not alert. It appears that this monitoring solution only works for files that grow, and not for log files that are created anew everyday by the application on the server.

    Until I understand how to use a rule that avoids this limitation, I am going to be parsing the log file with a script and creating a counter file to track the number of error string detections that appear on a daily basis.

    I am interested to know if you have encountered this before and welcome any suggestion you have.

    Mike

  • I have been trying to find an example to collect events from a w3svc log file. In MOM 2005 you would select the IIS Application log provider, but I can't find the equivalent in SCOM.

    I am aware that SCOM has the ability via the system.ApplicationLog.InternetLogEntryData library data type to read w3svc files, but I'm not sure how to create a rule using this.

    Any suggestions appreciated.

    Phil
