Kevin Holman's System Center Blog

Posts in this blog are provided "AS IS" with no warranties, and confer no rights. Use of included script samples is subject to the terms specified in the Terms of Use.

Using a Generic Text Log rule to monitor an ASCII text file – even when the file is a UNC path


There are several blog examples of how to create a generic text log rule to monitor a local text file (Unicode, ASCII, or UTF8).

This will be a step-by-step example of doing the same, however using it to monitor a log file on a remote UNC path instead of a local drive.  This is useful when we want to monitor a file or files on a NAS, or on a share hosted by a computer without an agent.

This is a bit unique… instead of applying this rule to ALL systems that might have a specific logfile present in a specific directory – we are going to target this rule to only ONE agent.  This agent will monitor the remote file share, similar to the concept of a “Watcher Node” for a synthetic transaction.  Therefore we will create this rule disabled, and enable it only for our “Watcher”.

 

In the Ops console – select the Authoring pane > Rules. 

Right click Rules, and select Create a new rule.  We will choose the Generic Text Log for this example:

 


 

Choose the appropriate MP to save this new custom rule to, and click Next.

For this rule name – I will be using “Company Name – Monitor remote logfile rule”

Set the Rule Category to “Alert”

For the target – I like to use “Windows Server Operating System” for generic rules and monitors.

UNCHECK the box for “Rule is enabled”

 


 

Click Next.

 

The directory will be the UNC path.  Mine is “\\VS2\Software\Temp”

The pattern will be the logfile(s) you want to monitor.  We can use a specific file, such as “logfile.log” or a wildcard, such as “*.log”.

You should not check the “UTF8” box unless you know the logfile to be UTF8 encoded.
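For intuition, the directory and pattern behave like a shell-style wildcard match over the files in the share.  Here is a minimal Python sketch of that matching – the path and file names are only stand-ins, not SCOM's actual implementation:

```python
import fnmatch
import os

def matching_logfiles(directory, pattern):
    """Return the file names in `directory` that match the shell-style
    wildcard `pattern` (e.g. "logfile.log" or "*.log")."""
    return sorted(
        name for name in os.listdir(directory)
        if fnmatch.fnmatch(name, pattern)
    )

# e.g. matching_logfiles(r"\\VS2\Software\Temp", "*.log")
```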

 


 

Click Next.

On the event expression screen, click Insert for a new line.  Essentially, a log file rule reads each new line in a logfile as one object, and this is represented by “Params/Param[1]”.  This “Parameter 1” is the entire line in the logfile, and it is the only value that is valid for this type of rule – so just type/paste that in the box for Parameter Name.

Since we want to search the logfile line for a specific word, the Operator will be “Contains”.

For the value – this can be the word in the line that you want to alert on.  For my example, I will use the word “failed”.
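In other words, each new line is handed to the expression as Parameter 1 and tested with the chosen operator.  A toy Python sketch of that evaluation (the operator handling here is illustrative, not SCOM's internal API):

```python
def line_matches(line, value, operator="contains"):
    """Evaluate one log line ("Parameter 1") against the rule expression."""
    if operator == "contains":
        return value in line
    raise ValueError(f"unsupported operator: {operator}")
```

A line such as "backup job failed at 02:00" would match the value "failed", while a line without that word would not.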

 


 

Click Next.

 

On the alert screen – we can customize the alert name if desired, set the severity and priority, and build a better alert description.  If you are using SP1, the default alert description is blank.  If you are using R2, the default alert description is “Event Description: $Data/EventDescription$”.  HOWEVER – this is an invalid event variable for this type of event (logfile), so we need to change that right away.  I keep a list of common alert description strings HERE

For this – I will recommend the following alert description.  Feel free to customize to make good sense out of your alert:

Logfile Directory : $Data/EventData/DataItem/LogFileDirectory$
Logfile name: $Data/EventData/DataItem/LogFileName$
String:  $Data/EventData/DataItem/Params/Param[1]$
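These $…$ tokens are substituted from the event data when the alert is generated.  A rough Python sketch of that substitution – the event payload dict here is hypothetical, only meant to show the idea:

```python
import re

def render_description(template, event_data):
    """Replace each $path$ token with the matching value from event_data,
    leaving unknown tokens untouched."""
    return re.sub(
        r"\$([^$]+)\$",
        lambda m: str(event_data.get(m.group(1), m.group(0))),
        template,
    )
```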

Click “Create” to create the rule.

Find the rule you just created in the console – right click it and choose “Properties”.  On the Configuration tab, under responses (to the right of “Alert”) click Edit.

Click the “Alert Suppression” button.  You should consider adding in alert suppression on specific fields of an alert – in order to suppress a single alert for each match in the logfile.  If you don't – should the monitored logfile ever get flooded with lines containing “failed” from the application writing the log – SCOM will generate one alert for each line written to the log.  This has the potential to flood the SCOM database/Console with alerts.  By setting alert suppression here – we will create one alert, and increment the repeat count for each subsequent line/alert.  I am going to suppress on LoggingComputer and Parameter 1 for this example:
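The effect of suppressing on LoggingComputer and Parameter 1 can be modeled as: one open alert per unique key, with later matches only bumping the repeat count.  A toy Python model of that behavior (not SCOM's actual alert pipeline):

```python
class AlertStore:
    """Toy model of alert suppression keyed on (LoggingComputer, Parameter 1)."""

    def __init__(self):
        self.repeat_counts = {}  # suppression key -> repeat count

    def raise_alert(self, logging_computer, param1):
        key = (logging_computer, param1)
        if key in self.repeat_counts:
            self.repeat_counts[key] += 1  # suppressed: increment repeat count
        else:
            self.repeat_counts[key] = 0   # first match: one new alert
        return self.repeat_counts[key]
```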

 


 

Click OK several times to accept and save these changes to the rule.

 

Now – we created this rule as disabled – so we need to enable it via an override.  I will find the rule in the console – and override the rule “For a specific object of class:  Windows Server Operating System”

 


 

Now – pick one of these machines to be the “watcher” for the logfile in the remote share. 

**Note – the default agent action account will make the connection to the share and read the file.  In my case – the default agent action account is “Local System” so this will be the domain computer account of the “Watcher” agent which connects to the remote share and reads the file.  This account will need access to the share, folder, and files monitored.  Keep that in mind.

Set the override to “Enabled = True” and click OK.

 

At this point, our Watcher machine will download the management pack again with the newly created override, and apply the new config.  Once that is complete – it will begin monitoring this file.  You can create a log file in the share path, and then write a new line with the word “failed” in it.  You need a carriage return after writing the line for SCOM to pick up on the change.

You should see a new alert pop in the console, based on matching the criteria.  Subsequent log file matches will only increment the repeat count.  Customize the alert suppression as it makes sense for you.
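The carriage-return requirement amounts to this: only terminated lines count as new entries, and a trailing fragment is ignored until the writer finishes the line.  A minimal Python sketch of that buffering (illustrative only):

```python
def complete_lines(new_text, buffer=""):
    """Split newly appended text into finished lines; the unterminated
    tail stays in `buffer` until a later write completes it."""
    buffer += new_text
    *lines, buffer = buffer.split("\n")
    return [line.rstrip("\r") for line in lines], buffer
```

Writing "something failed" without a newline yields no finished lines; the subsequent newline releases the entry for matching.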

Then – create additional rules just like this – for different UNC paths.

 


Comments
  • Hello Kevin,

    I got below error in event viewer

    Error opening log file directory

    Directory =

    “D:\Program Files (x86)\Quest Software\QCVDS\R6.0.3\confs\MRO-389\logs"

    Error: 0x8007007b

    Details: The filename, directory name, or volume label syntax is incorrect.

    One or more workflows were affected by this.  

    Workflow name: UIGeneratedMonitor642fcc2492734d1bbcf373a7b64785f1

    Instance name: Microsoft Windows Server 2008 R2 Enterprise  

    Instance ID: {18469874-BBD2-A085-0744-9EC5DC7B2D5A}

  • Another information, my log file name would be operation_dumper.log-yyyymmdd.log....so in pattern i gave as operation_dumper.log.*..i am getting the same 31705 error in event viewer.

  • My requirement for log file monitoring:

    Log file location: D:\Program files (x86)\company name\logs

    Log file name: verigy_name.log.yyyymmdd.log

    I've created log file monitor with directory in double quotes due to space in program files "D:\Program files (x86)\company name\logs"

    Pattern: verigy_name.log.*.log

    created overrides for specific servers, but i didn't receive any alert and got the error in event viewer with event id: 31705..error opening log file directory, the file name, volume label name syntax is incorrect.

    Please help me to fix this.

  • I have followed your instructions and created a rule but if I select ConfigMgr Pri Site Server as Target the rule doesn't work can you please help me to make it work.

  • Hi,


    anyone give information about params/param[1] ?

  • Hello Kevin,

    I just would like to know if you have even try to monitor text log files on different server locations?

    Ex. The requirement is to monitor the path D:\Sample\Testlog.txt.
    This path is present on 100+ servers and should be monitored by an MP. And I know i'ts very illogical to create 100+ rules just to point to 100+ different server locations.

    Hope you could help me. Thanks in advance!

  • The timestamping in some log files is nuisance. It means the Rule continues to alert on old matches, as individual alerts, until the log is cleared, which isn't always practical.

    Also, all the time your matched entry exists, the repeat count will just continue rising until, again, the line is removed from the log file.

    Anyway round this without using a monitor (which I believe remembers where it was in the log, the last time it triggered)? The monitor version is ok, but requires you to manually reset the monitor to continue, and if, in that time, more matches have occurred, it will change state for those immediately after until you clear past them by multiple Health resets.
