System Center PFE Russia Blog

This blog is about System Center, from the Microsoft PFE Team working in the System Center area.

Thoughts about ACS database size

Today, let’s talk about the Audit Collection Services (ACS) component. If you have SCOM up and running for monitoring, you already:

· have an infrastructure for security audit collection;

· have monitoring for this audit collection.

But the ACS component is designed for short-term event collection, so if you configure it to store events for more than 30 days, you have to provide enough storage, which can be a serious problem. For example, say I want to collect and store security logs from 10 domain controllers for a year (an average task; some security teams require storing information for 3 or more years). If I assume that one domain controller generates 40 events per second, and that each stored event takes roughly 400 bytes, I will have daily database growth of about 13 GB. And the total database size will be more than 4 TB! Such an amount of storage is not cheap to acquire and support.
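The arithmetic above can be sketched as a quick T-SQL estimate. The ~400 bytes per stored event is an assumption inferred from the numbers in the example; measure your own average event size before planning capacity:

```sql
-- Back-of-the-envelope ACS sizing (assumptions: 10 DCs at 40 events/sec each,
-- roughly 400 bytes stored per event, 365-day retention)
SELECT
    10 * 40 * 86400                                            AS EventsPerDay, -- 34,560,000
    10 * 40 * 86400 * 400.0 / 1024 / 1024 / 1024               AS GBPerDay,     -- ~12.9
    10 * 40 * 86400 * 400.0 * 365
        / 1024 / 1024 / 1024 / 1024                            AS TBPerYear     -- ~4.6
```

Plugging in your own event rate and retention period gives a quick first approximation of the storage you will need.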
What can a SCOM administrator do to avoid generating so much data while still fulfilling the security team’s requirements?

 

Audit Policy

Enterprise organizations usually manage audit policy across all their servers; the easiest way to do this is Group Policy. The default audit policy on Windows Server defines which events should and should not be logged. If events are being logged that have no use for you, it is best to adjust the policy so that no log entries are generated for them. To read about Audit Policy, go here.

 

Audit Server Filter

The Audit Collection Server receives all events sent by Audit Collection Forwarders, but you can control which events are stored in the database. You can administer your Audit Collection Server with the AdtAdmin.exe utility, located under %SystemRoot%\System32\Security\AdtServer.
To get current query:
AdtAdmin.exe -getquery
To set your custom query:
AdtAdmin.exe -setquery -query:<YOUR_QUERY>
The query is written in WQL and should return all events you want to store in the database; other events will be dropped. For example, to drop events about network share access to the IPC$, NETLOGON and SYSVOL shares, use:
SELECT * FROM AdtsEvent WHERE NOT (EventId=5140 AND (String04='' OR String04='\\\\*\\IPC$' OR String04='\\\\*\\NETLOGON' OR String04='\\\\*\\SYSVOL'))
To store “Audit log was cleared” events only:
SELECT * FROM AdtsEvent WHERE EventId=517 OR EventId=1102
By default, the database queue query is:
SELECT * FROM AdtsEvent
which stores all events sent by Audit Collection Forwarders. Adjusting this query is the easiest way to significantly reduce ACS database size.

 

Data Compression

If you use SQL Server Enterprise Edition to host the ACS database, you can use the Data Compression feature to reduce its size. I should warn you that this configuration is not tested by Microsoft and may cause an issue for support. Nonetheless, because of its simplicity and effectiveness, it is worth considering. If you are running the Enterprise or Developer edition of SQL Server, you can use the sp_estimate_data_compression_savings procedure to understand how much storage space can be saved.
Hint: you can back up and then restore the ACS database on an Enterprise or Developer edition SQL Server instance to be able to use this procedure.
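As a sketch of how to run the estimate (assuming the ACS event tables live in the dbo schema, which you should verify in your database), you can pick one of the daily partition tables and ask SQL Server what page compression would save:

```sql
-- Estimate page-compression savings for the newest dtEvent_* partition table.
-- Run this against an Enterprise/Developer edition instance (or a restored copy).
DECLARE @Table sysname
SELECT TOP (1) @Table = name
FROM sys.tables
WHERE name LIKE 'dtEvent!_%' ESCAPE '!'
ORDER BY create_date DESC

EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = @Table,
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE'
```

The procedure returns current and estimated sizes per index, so you can judge whether the savings justify the CPU overhead before touching production.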
Compression is applied to tables; no built-in method exists to compress a whole database. There is a way to create such a method, but it is overkill for our ACS database. In fact, the ACS database contains small, rarely used configuration tables and tables that store event data. Among them, the biggest tables are of the “dtEvent” and “dtString” types: they hold the majority of the data. By default, the ACS partition duration is 86,400 seconds. This means that every day, new data tables are created to store that day’s events, and old tables that exceed the event retention period are dropped. How can we manage compression in such circumstances? The trick is to use a database trigger. Long story short, we can define a trigger that runs after every table creation and compresses the new table. We can use:

·       Row compression for moderate compression and a small CPU overhead;

·       Page compression for better compression but a bigger CPU overhead;

·       Both types of compression for the best storage savings. Consider this option only if you plan to store information and rarely access it, or if you have excess CPU resources.

An example trigger for page compression:
CREATE TRIGGER CompressEventTablesAfterCreation  
ON DATABASE 
AFTER CREATE_TABLE
AS
BEGIN 
    DECLARE @TableSchema sysname, @TableName sysname      
    SELECT  @TableSchema = EVENTDATA().value('(/EVENT_INSTANCE/SchemaName)[1]','sysname'),              
            @TableName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'sysname')  
    IF (@TableName LIKE 'dtEvent!_%' ESCAPE '!' OR @TableName LIKE 'dtString%')
      BEGIN
            DECLARE @SQL NVARCHAR(2000)  
            SET     @SQL =  'ALTER TABLE ' + QUOTENAME(@TableSchema) + '.' + QUOTENAME(@TableName) + ' REBUILD WITH (DATA_COMPRESSION=PAGE)' 
            EXEC    (@SQL)
      END
END
GO
So, you compress the existing “dtEvent” and “dtString” tables once and let the trigger define compression for future tables. In my lab environment this saves about 70% of storage space overall. Again, consider all the consequences this method may bring.
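The trigger only covers tables created after it is installed, so the existing partition tables need a one-off rebuild. A sketch, assuming the same dtEvent/dtString naming convention used by the trigger above:

```sql
-- One-off: rebuild the partition tables that already exist with page compression.
-- Run during a maintenance window; REBUILD is IO- and CPU-intensive.
DECLARE @SQL nvarchar(max) = N''
SELECT @SQL += N'ALTER TABLE ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N' REBUILD WITH (DATA_COMPRESSION = PAGE);' + CHAR(13)
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.name LIKE 'dtEvent!_%' ESCAPE '!' OR t.name LIKE 'dtString%'
EXEC (@SQL)
```

Running this once, together with the trigger, keeps every past and future partition table compressed.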

 

Other methods

The methods discussed above help reduce ACS database size and do not directly address the performance of accessing this data. Nonetheless, they can improve performance too, because, generally, less I/O means faster operation. There may be other ways to work around the issue of a big ACS database. For example:

·       Keep a retention period of 1 month and store an ACS database backup made each month. This way, information can be kept indefinitely, but accessing specific data becomes complicated.

·       Aggregate information from the ACS database into another database that is optimized for large volumes. This database can also be optimized for data access and becomes a kind of “OperationsManagerACDW”. This approach also helps if you have multiple Audit Collection Servers with separate ACS databases.
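For the backup-based option above, the monthly step can be as simple as a full backup (the path and file name here are assumptions; OperationsManagerAC is the default ACS database name):

```sql
-- Monthly full backup of the ACS database; archive the .bak file to cheap
-- storage and restore it to a scratch instance when old events must be examined.
BACKUP DATABASE OperationsManagerAC
TO DISK = N'E:\Backup\OperationsManagerAC_2017-01.bak'
WITH COMPRESSION, CHECKSUM, INIT
```

WITH COMPRESSION shrinks the archive further, which matters when you keep a year or more of monthly files.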

 
Please tell us how you work around ACS database size. Do you use one of the mentioned methods, or do you have your own? All feedback is appreciated.
Comments
  • I have more than 10 DCs, file servers with file system audit enabled, and I also collect logs from 200+ servers and 4000 workstations.
    We hold data for 1 year and the ACS DB size without the log file is less than 3 TB. Reports and SQL queries are slow; I plan to implement compression to reduce I/O times. Our SQL server’s CPU has more than 50% of its resources free.

  • Sergei, thanks for the comment!
    Data Compression often gives greater performance because of fewer I/O operations on disk. Did you adjust the audit policy and the audit collection filter? Maybe you are collecting information you will never use? Anyway, your installation seems huge. Even optimized reports will take some time to execute ;-)

  • thank you

  • Hi,
    unfortunately not, no collection filter has been created yet, but the audit policy is adjusted for our needs.
    I will try to make a filter next year; this year I plan to split monitoring and log collection into two different management groups. Currently I have an all-in-one environment for both monitoring and log collection.
    The installation is huge but working well; direct queries against the ACS DB are also slow.
    SSMS suggests adding indexes to speed up queries, but are we allowed to make these changes to the ACS DB?

  • Sergei, unfortunately you are right, modifying SCOM databases is not supported.
