Kevin Holman's System Center Blog

Posts in this blog are provided "AS IS" with no warranties, and confer no rights. Use of included script samples is subject to the terms specified in the Terms of Use.

Does your OpsDB keep growing? Is your localizedtext table using all the space?


This post is about an issue in OpsMgr SP1 AND R2 – where the localizedtext table in the database may fill and consume large amounts of space.

OpsMgr 2007 no longer has a hard database limit of 30GB like MOM 2005 did.  For this reason, most OpsMgr administrators don't watch this very closely anymore, or freak out when it gets big.

However, it must be noted: console and operational performance are still impacted when this DB gets big.  You really should keep an eye on it and try to keep it as small as possible.  In general, I recommend keeping only 2 days of operational data (Database Grooming global setting), down from the default of 7 days, until everything is tuned.

One thing I have noticed at several locations is that a couple of tables often grow quite large, depending on the agent count and which management packs are installed.  These are LocalizedText and PublisherMessages.  This is caused by management packs that create a large number of events from script.  I have seen this mostly in environments with funky converted MOM 2005 MP's that run a lot of backwards-compatibility scripts, or in large Exchange 2007 and SCCM deployments.  Like I said - this won't affect all customers... just those with specific management packs that expose this.  What happens is that each event writes additional data to these tables, and they are never groomed or pruned... so they keep growing.  Over time, the impact is that your DB might keep filling and run out of disk space, or your performance might suffer when you use a view that queries LocalizedText.

 

* Am I impacted by this issue? *

 

To know if you are impacted - I would run the following query against your OpsDB:

Simple query to display large tables, to determine what is taking up space in the database:

SELECT so.name,
8 * Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END) AS data_kb,
Coalesce(8 * Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN si.reserved END), 0) AS index_kb,
Coalesce(8 * Sum(CASE WHEN si.indid IN (255) THEN si.reserved END), 0) AS blob_kb
FROM dbo.sysobjects AS so
JOIN dbo.sysindexes AS si ON (si.id = so.id)
WHERE 'U' = so.type
GROUP BY so.name
ORDER BY data_kb DESC
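The query above uses the old sysobjects/sysindexes views, which are deprecated on SQL Server 2005 and later. As a sketch, here is an equivalent check using the newer catalog views (sys.tables and sys.dm_db_partition_stats); the column aliases are just for illustration:

```sql
-- Reserved space and row count per user table, newest catalog views.
-- Pages are 8 KB, hence the * 8 conversion to KB.
SELECT t.name AS table_name,
       SUM(ps.reserved_page_count) * 8 AS reserved_kb,
       -- Only count rows from the heap/clustered index (index_id 0 or 1),
       -- otherwise each nonclustered index inflates the row count.
       SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.tables AS t
JOIN sys.dm_db_partition_stats AS ps ON ps.object_id = t.object_id
GROUP BY t.name
ORDER BY reserved_kb DESC
```

Either version will show you at a glance whether LocalizedText has floated to the top of the list.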

Normally, in most typical environments with typical MP's, we'd expect the perf tables to be the largest, followed by event, state, and alert.  If LocalizedText is your largest table, this issue is impacting you.  You can check the row count with the following query:

select count(*) from localizedtext

Generally, if this table is the largest in your database and has over a million rows, you are impacted.  The impact is low, however... mostly just hogging space in the DB, and possibly impacting console performance.

 

* OK – I am impacted.  What do I do? *

 

You need to run the attached SQL statements to clean this up.  You might need to run these on a regular basis (once a week to once a month) if the table grows back quickly.  To run these, open SQL Server Management Studio, connect to the SQL instance that hosts the OperationsManager DB, and open a "New Query" window.  Then paste the text from one of the attached scripts into the query window and run it.

When you upgrade to R2, most of this is resolved... we no longer fill this table.  However, you WILL need to run the cleanup at least once to get rid of all the old junk left over from SP1 days.

I am attaching TWO scripts below, which clean up these tables.  That being said - these scripts are NOT supported by Microsoft, as they have not been thoroughly tested.  They are provided "AS IS" with no warranties, and confer no rights.  Use of included script samples is subject to the terms specified in the Terms of Use.

***UPDATED for R2:

There are now TWO scripts.

If you are on SP1 – you run both on a regular basis.

If you are on R2 – you only need to run the LocalizedTextCleanupforSP1.txt script ONCE, and then run the LocalizedTextCleanupSP1andR2.txt script on a regular basis.

This core issue was fixed in R2, however – since R2 released we found another type of data that gets left in the LocalizedText table, so this second script was developed.

 

*** Critical Note:

These scripts will require a LARGE amount of TempDB (mostly TempDB log) space - make sure your TempDB is on a volume with lots of room to grow... if not, add an additional TempDB file on another volume just in case.  Make sure you take a good SQL backup of your OpsDB FIRST.  In general, the script takes about 20 minutes per million rows in the LocalizedText table, depending on the hardware capabilities of the SQL server.  I have seen it take 10 minutes per million rows on a fast server.

Now – when I say LOTS of space for your tempDB – I mean it.  LOTS.  I believe it is the tempDBlog that needs most of the space.  Just make sure you have at least as much tempDB space as the size of your LocalizedText table.  That means if your LT table is 40 million rows (~40GB) then I would plan to have at LEAST 40GB of free space for your TempDB/TempDBLog to grow.  Changing the default autogrow on these to a larger value, and growing them out in advance will help speed up the process as well.
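Before kicking off the cleanup, it's worth verifying how much room tempdb actually has. A minimal sketch of that check, assuming SQL Server 2005 or later (file sizes are reported in 8 KB pages):

```sql
-- Show current size and used space for each tempdb data and log file.
USE tempdb
GO
SELECT name,
       size * 8 / 1024 AS size_mb,                              -- allocated size in MB
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb    -- pages actually in use
FROM sys.database_files
```

Compare the free headroom (size minus used, plus whatever the volume can autogrow into) against the size of your LocalizedText table before you start.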

 

When the script is done, you won't see the freed space immediately.  You need to run DBCC DBREINDEX ('localizedtext') to reindex the table and reclaim the newly freed space.  It would likely be a good idea to reindex the entire database at this point, which you can do by running the following:

Reindex the database:

USE OperationsManager
go
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
EXEC SP_MSForEachTable "Print 'Reindexing '+'?' DBCC DBREINDEX ('?')"
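DBCC DBREINDEX is deprecated on SQL Server 2005 and later; if you prefer the supported syntax, a sketch of the equivalent rebuild for just the LocalizedText table looks like this (the dbo schema is assumed):

```sql
-- Rebuild all indexes on the LocalizedText table using the supported
-- replacement for DBCC DBREINDEX (SQL Server 2005 and later).
USE OperationsManager
GO
ALTER INDEX ALL ON dbo.LocalizedText REBUILD
```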

If you first want to troubleshoot, and try to determine what is consuming your tables... or which MP's are generating the most noise in this table... you can run the following queries (they might take a LONG time to complete, depending on how big your tables are):

Most common events:

select messageid, ltvalue, count(*) as Count from publishermessages with(nolock)
inner join localizedtext with(nolock)
on messagestringId = localizedtext.ltstringid
group by messageid, ltvalue
order by Count DESC

LT insertions per day/month:

SELECT
DATEPART(mm,timeadded) AS 'MONTH',
DATEPART(dd,timeadded) AS 'DAY',
count(*)
from localizedtext with(nolock)
group by
DATEPART(mm,timeadded),
DATEPART(dd,timeadded)
order by
DATEPART(mm,timeadded),
DATEPART(dd,timeadded)

Attachment: LocalizedTextCleanup.zip
Comments
  • One of our databases had almost 20 million rows in LocalizedText. Thanks for the post.

  • Oh! Our OperationsManager database had 18 million rows in LocalizedText and our performance is very poor. Thank you for the post too.

  • We have 6 million rows in LocalizedText; performance, alerts and events are poor...

  • Thank you very much for the information, and for this wonderful blog.

  • Kevin, great post. We really had a big table. Unfortunately the first time we ran into disk space issues. That's how I figured out the error-processing part does not delete the first temporary table (#PublisherMessageReverseIndex).

  • I upgraded my test lab from SP1 to R2-RTM this weekend. My current test lab consists of the following

  • I had trouble with the OpsMgr console performance at several clients. One of the most important things

  • If you've been reading this blog for any length of time then you're already familiar with the postings

  • Lifesaver! Thank you, John Bradshaw

  • Thank you Kevin. Our environment is large, with 30 Exchange 2007 servers and 180,000 mailboxes. Finding this post has saved me many hours of frustration; our DB was 70GB with 50 million entries in the LocalizedText table. After running your scripts our DB is now 4GB and operating with far less noise. It would be good if this was resolved, as I don't really want to rebuild our SCOM environment with R2. Would also be nice if Microsoft released the native Exchange 2007 MP; two years after Exchange 07 and we are still monitoring with a converted 03 Exchange MP (poor form msoft). Once again, thanks Kevin!!

  • 1. This issue is resolved in R2, for the most part. 2. By moving to the native Exchange MP for R2, you also remove this issue. Microsoft HAS shipped the native Exchange MP. However, because it needs some native PowerShell modules that were added in R2, it is an R2-only MP. 3. R2 is not a "rebuild", it is an upgrade, and installs just as most hotfixes do.

  • Thanks for the script! We monitor about 130 agents with more than 25 MP's. We had over 20,000,000 rows of LocalizedText. The script took 27.5 hours!!! On an HP ProLiant DL380 G5 with P500 raid controller and 512MB BBC. Our database was 26GB and has dropped to 2.4GB. It's now nice and fast :-)

  • So, typically, I expect 20 minutes per million rows. I would have estimated your cleanup to be around 7 hours, 14 MAX. This makes me wonder if you really have good disk I/O for the SCOM subsystem. You are using a DL380 - with only 130 agents, how are the disks set up on this server? Are you using RAID5? Do you have the RMS and DB on the same server?

  • Hi, maybe you can help me. The result of the script was that the WorkflowTimestamps table has 9 million rows, so I don't know how to reduce the space. FYI, the grooming process finishes OK. So what can I do?

  • I have no idea what the "WorkFlowTimestamps" table is, or how it relates to this issue. Can you give more information on how you deduced that table name? Was that in an error output or something?
