There are times when it is necessary to completely reseed an LCR, CCR, or SCR database copy from the source database.  To reseed the copy we use the update-storagegroupcopy cmdlet.

When update-storagegroupcopy is run, the database is pulled from the source machine to the target machine using the ESE online streaming backup API.  If the database is copied without error, the replication instance is automatically resumed.  No log files are pulled or copied as part of the update-storagegroupcopy process.  It is not until after update-storagegroupcopy completes and replication is resumed that the header of the database is reviewed and the replication service determines which logs need to be copied.
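
The suspend/seed/resume sequence used throughout this post looks roughly like the following in the Exchange Management Shell (the storage group identity is illustrative; adjust it to your environment):

```powershell
# Suspend replication for the storage group copy before reseeding
Suspend-StorageGroupCopy -Identity "2003-MBX3\2003-MBX3-SG1"

# Stream the database from the source to the target; -ManualResume
# leaves the copy suspended so you can inspect it before resuming
Update-StorageGroupCopy -Identity "2003-MBX3\2003-MBX3-SG1" -ManualResume

# Resume replication once you are satisfied with the seeded copy
Resume-StorageGroupCopy -Identity "2003-MBX3\2003-MBX3-SG1"
```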

In this blog post I want to highlight how the replication service decides which log files need to be copied after a reseed of the database.  I will use examples from cluster.replay tracing (which should only be enabled in consultation with Product Support Services).

*Databases copied offline between servers (clean shutdown).

When databases are copied offline between nodes, this is a manual seeding operation.  A database that is cleanly dismounted and then copied between nodes is in a clean shutdown state.

Here is a sample header dump.

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server

Version 08.01

Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: 2003-MBX3-SG1-DB1.edb

        File Type: Database
   Format ulMagic: 0x89abcdef
   Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
     DB Signature: Create time:08/09/2009 14:10:12 Rand:7948610 Computer:
         cbDbPage: 8192
           dbtime: 20053 (0x4e55)
            State: Clean Shutdown
     Log Required: 0-0 (0x0-0x0)
    Log Committed: 0-0 (0x0-0x0)
   Streaming File: No
         Shadowed: Yes
       Last Objid: 133
     Scrub Dbtime: 0 (0x0)
       Scrub Date: 00/00/1900 00:00:00
     Repair Count: 0
      Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
  Last Consistent: (0x9,A,1C1)  08/09/2009 14:12:18
      Last Attach: (0x8,9,86)  08/09/2009 14:12:15
      Last Detach: (0x9,A,1C1)  08/09/2009 14:12:18
             Dbid: 1
    Log Signature: Create time:08/09/2009 14:10:08 Rand:7930576 Computer:
       OS Version: (5.2.3790 SP 2)

Previous Full Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Incremental Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Copy Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Differential Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Full Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Shadow copy backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

     cpgUpgrade55Format: 0
    cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0

       ECC Fix Success Count: none
   Old ECC Fix Success Count: none
         ECC Fix Error Count: none
     Old ECC Fix Error Count: none
    Bad Checksum Error Count: none
Old bad Checksum Error Count: none

Operation completed successfully in 0.63 seconds.

 

When replication is resumed, the header of the database is consulted.  Here is an example trace tag from non-customer viewable tracing.

2826 74006100440074 2256 Cluster.Replay FileChecker RunChecks is successful. FileState is: LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 0
HighestGenerationRequired: 0
LastGenerationBackedUp: 0
CheckpointGeneration: 0
LogfileSignature:
LatestFullBackupTime:
LatestIncrementalBackupTime:
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup:
SnapshotLatestFullBackup:
SnapshotLatestIncrementalBackup:
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: True
2827 74006100440074 2256 Cluster.Replay ReplicaInstance SetReplayState(): LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 0
HighestGenerationRequired: 0
LastGenerationBackedUp: 0
CheckpointGeneration: 0
LogfileSignature:
LatestFullBackupTime:
LatestIncrementalBackupTime:
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup:
SnapshotLatestFullBackup:
SnapshotLatestIncrementalBackup:
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: True

You can see here that the replication service, after reading the header of the database, detected that the database was in a clean shutdown state (ConsistentDatabase: True).

Since no backup has been performed on the database, and the log directory on the target was empty prior to resuming the storage group copy, the replication service determines that no minimum log file is necessary.  Log copy will start from the first log available on the source server and continue to the highest generation on the source server.  As long as the generations are contiguous, replication will proceed and remain healthy after the manual database seed.  (As a best practice with an offline reseed, clear the log directory on the target prior to resuming the database copy.)

2871 030F3F44 2256 Cluster.Replay ReplicaInstance No logfiles present, no backup information, no required generation
2872 030F3F44 2256 Cluster.Replay ReplicaInstance Log copying will start from generation 0

The replication service then starts the replication instance and queries the source log directory.  In this case log file 5 was the first available, so the replication service begins by copying that log file.

2903 0385717B 2256 Cluster.Replay NetPath ClusterPathManager.GetPath() returns \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$
2904 007D2DBB 2256 Cluster.Replay LogCopy First generation for \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$ is 00000005
2905 0385717B 2256 Cluster.Replay NetPath ClusterPathManager.GetPath() returns \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$
2906 007D2DBB 2256 Cluster.Replay PFD PFD CRS 18907 First generation for \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$ is 00000005
2907 0385717B 2256 Cluster.Replay NetPath ClusterPathManager.GetPath() returns \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$
2908 007D2DBB 2256 Cluster.Replay ShipLog LogCopy: Trying to find file \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$\E0000000005.log
2909 007D2DBB 2256 Cluster.Replay ShipLog LogCopy: Found file E0000000005.log
2910 007D2DBB 2256 Cluster.Replay PFD PFD CRS 18395 LogCopy: Found file E0000000005.log

This is also confirmed by the event in the application log indicating that log copy began by successfully copying log generation 5 (0x5).

Event Type:    Information
Event Source:    MSExchangeRepl
Event Category:    Service
Event ID:    2114
Date:        8/9/2009
Time:        2:15:22 PM
User:        N/A
Computer:    2003-NODE2
Description:
The replication instance for storage group 2003-MBX3\2003-MBX3-SG1 has started copying transaction log files. The first log file successfully copied was generation 5.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

At a minimum we must have all logs from the Last Consistent log generation forward in order to maintain replication.  This makes sense: if I did not have all logs from the last consistent log (where the database was shut down) forward, how could I bring the passive copy up to the current point in time?

As long as the generations are contiguous, and logs are present on the source from the last consistent log to the current log, replication will proceed and remain healthy after the manual database seed.
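
The contiguity requirement just described can be modelled with a short sketch (illustrative only; this is not the replication service's actual code, and the generation numbers are made up for the example):

```python
def can_stay_healthy(last_consistent_gen, source_gens):
    """Model of the health rule for an offline-copied, clean-shutdown
    database: every generation from the last consistent log through the
    highest generation on the source must be present, with no gaps."""
    highest = max(source_gens)
    required = set(range(last_consistent_gen, highest + 1))
    return required <= source_gens  # subset test: no gap in the range

# Logs 5 through 20 present; database last consistent at generation 9 (0x9).
print(can_stay_healthy(0x9, set(range(5, 21))))          # True
# Remove generation 12 from the source and the copy cannot be kept healthy.
print(can_stay_healthy(0x9, set(range(5, 21)) - {12}))   # False
```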

 

*Database seeded using Update-StorageGroupCopy where no full or incremental backup was performed.

In this example we have a database on a source server that has had neither a full nor an incremental backup performed on it.  The storage group replication between nodes was suspended using suspend-storagegroupcopy.  Then the update-storagegroupcopy command was used to stream the database to the target server (the -ManualResume switch was also used so I could generate the header dumps).  Below is a sample header dump of a database post an update-storagegroupcopy.

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server

Version 08.01

Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: 2003-MBX3-SG1-DB1.edb

        File Type: Database
   Format ulMagic: 0x89abcdef
   Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
     DB Signature: Create time:08/09/2009 14:10:12 Rand:7948610 Computer:
         cbDbPage: 8192
           dbtime: 20053 (0x4e55)
            State: Dirty Shutdown
     Log Required: 11-11 (0xb-0xb)
    Log Committed: 0-14 (0x0-0xe)
   Streaming File: No
         Shadowed: Yes
       Last Objid: 133
     Scrub Dbtime: 0 (0x0)
       Scrub Date: 00/00/1900 00:00:00
     Repair Count: 0
      Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
  Last Consistent: (0x9,A,1C1)  08/09/2009 14:12:18
      Last Attach: (0xB,9,86)  08/09/2009 14:28:20
      Last Detach: (0x0,0,0)  00/00/1900 00:00:00
             Dbid: 1
    Log Signature: Create time:08/09/2009 14:10:08 Rand:7930576 Computer:
       OS Version: (5.2.3790 SP 2)

Previous Full Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Incremental Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Copy Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Differential Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Full Backup:
        Log Gen: 11-14 (0xb-0xe)
           Mark: (0xE,188,167)
           Mark: 08/09/2009 14:29:54

Current Shadow copy backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

     cpgUpgrade55Format: 0
    cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0

       ECC Fix Success Count: none
   Old ECC Fix Success Count: none
         ECC Fix Error Count: none
     Old ECC Fix Error Count: none
    Bad Checksum Error Count: none
Old bad Checksum Error Count: none

Operation completed successfully in 0.31 seconds.

In this header dump you will notice that the database is in dirty shutdown.  This is expected of a database that has come from an online seeding operation.  You will also note that the Current Full Backup header section of the database is populated.  The low log value here is 11 (0xb) and the high log value is 14 (0xe).

After the header dump was generated I resumed storage group copy (normally after a successful update-storagegroupcopy this is done for you automatically).  When replication is resumed the header of the database is consulted.  Here is a sample output from non-customer viewable tracing.

5107 61007400610044 2256 Cluster.Replay FileChecker RunChecks is successful. FileState is: LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 11
HighestGenerationRequired: 11
LastGenerationBackedUp: 0
CheckpointGeneration: 0
LogfileSignature: MJET_SIGNATURE(Random = 7930576,CreationTime = 8/9/2009 2:10:08 PM)
LatestFullBackupTime:
LatestIncrementalBackupTime:
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup:
SnapshotLatestFullBackup:
SnapshotLatestIncrementalBackup:
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: False
5108 61007400610044 2256 Cluster.Replay ReplicaInstance SetReplayState(): LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 11
HighestGenerationRequired: 11
LastGenerationBackedUp: 0
CheckpointGeneration: 0
LogfileSignature: MJET_SIGNATURE(Random = 7930576,CreationTime = 8/9/2009 2:10:08 PM)
LatestFullBackupTime:
LatestIncrementalBackupTime:
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup:
SnapshotLatestFullBackup:
SnapshotLatestIncrementalBackup:
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: False

You will note from this output that the HighestGenerationRequired and LowestGenerationRequired are 11 (0xb).  This is based on the Current Full Backup information in the header of the database.  The lowest log recorded in Current Full Backup represents the lowest log necessary to bring the copied database to a consistent state at the time the update-storagegroupcopy was run.

You will note that the events in the application log indicate log copy started with log 11 (0xb).

Event Type:    Information
Event Source:    MSExchangeRepl
Event Category:    Service
Event ID:    2114
Date:        8/9/2009
Time:        2:36:00 PM
User:        N/A
Computer:    2003-NODE2
Description:
The replication instance for storage group 2003-MBX3\2003-MBX3-SG1 has started copying transaction log files. The first log file successfully copied was generation 11.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

Post an update-storagegroupcopy, replication will remain healthy provided that all logs generated on the source server from the time the update-storagegroupcopy was initiated until it completed successfully are present and contiguous.
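
The decision in this case can be sketched as follows (a hypothetical model based on the trace output above, not actual Exchange code; the dictionary key is made up for the example):

```python
def start_generation_no_prior_backup(header):
    """With no prior full or incremental backup, the only constraint is
    the Current Full Backup section stamped by the seeding operation:
    log copy starts at its low log generation."""
    low, _high = header["current_full_backup_log_gen"]
    return low

# Values from the header dump above: Log Gen: 11-14 (0xb-0xe).
print(start_generation_no_prior_backup(
    {"current_full_backup_log_gen": (0x0B, 0x0E)}))  # 11, matching event 2114
```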

 

*Database seeded using Update-StorageGroupCopy where a full backup was performed on the source database.

In this example we have a database on a source server that has had a full backup performed on it (in this case an ESE online streaming backup).  The storage group replication between nodes was suspended using suspend-storagegroupcopy.  Then the update-storagegroupcopy command was used to stream the database to the target server (the -ManualResume switch was also used so I could generate the header dumps).  Below is a sample header dump of a database post an update-storagegroupcopy.

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server

Version 08.01

Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: 2003-MBX3-SG1-DB1.edb

        File Type: Database
   Format ulMagic: 0x89abcdef
   Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
     DB Signature: Create time:08/09/2009 14:10:12 Rand:7948610 Computer:
         cbDbPage: 8192
           dbtime: 22631 (0x5867)
            State: Dirty Shutdown
     Log Required: 31-31 (0x1f-0x1f)
    Log Committed: 0-32 (0x0-0x20)
   Streaming File: No
         Shadowed: Yes
       Last Objid: 134
     Scrub Dbtime: 0 (0x0)
       Scrub Date: 00/00/1900 00:00:00
     Repair Count: 0
      Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
  Last Consistent: (0x1D,A,1C1)  08/09/2009 14:48:13
      Last Attach: (0x1F,9,86)  08/09/2009 14:48:15
      Last Detach: (0x0,0,0)  00/00/1900 00:00:00
             Dbid: 1
    Log Signature: Create time:08/09/2009 14:10:08 Rand:7930576 Computer:
       OS Version: (5.2.3790 SP 2)

Previous Full Backup:
        Log Gen: 20-21 (0x14-0x15)
           Mark: (0x15,D,195)
           Mark: 08/09/2009 14:45:43

Previous Incremental Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Copy Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Differential Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Full Backup:
        Log Gen: 31-32 (0x1f-0x20)
           Mark: (0x20,E,185)
           Mark: 08/09/2009 14:49:06

Current Shadow copy backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

     cpgUpgrade55Format: 0
    cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0

       ECC Fix Success Count: none
   Old ECC Fix Success Count: none
         ECC Fix Error Count: none
     Old ECC Fix Error Count: none
    Bad Checksum Error Count: none
Old bad Checksum Error Count: none

Operation completed successfully in 0.46 seconds.

In this header dump you will note that Previous Full Backup is populated.  The low log generation is 20 (0x14) and the high log generation is 21 (0x15).

After the header dump was generated I resumed storage group copy (normally after a successful update-storagegroupcopy this is done for you automatically).  When replication is resumed the header of the database is consulted.  Here is a sample output from non-customer viewable tracing.

6593 61007400610044 2472 Cluster.Replay FileChecker RunChecks is successful. FileState is: LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 31
HighestGenerationRequired: 31
LastGenerationBackedUp: 21
CheckpointGeneration: 0
LogfileSignature: MJET_SIGNATURE(Random = 7930576,CreationTime = 8/9/2009 2:10:08 PM)
LatestFullBackupTime: 8/9/2009 2:45:43 PM
LatestIncrementalBackupTime:
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup: False
SnapshotLatestFullBackup: False
SnapshotLatestIncrementalBackup:
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: False
6594 61007400610044 2472 Cluster.Replay ReplicaInstance SetReplayState(): LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 31
HighestGenerationRequired: 31
LastGenerationBackedUp: 21
CheckpointGeneration: 0
LogfileSignature: MJET_SIGNATURE(Random = 7930576,CreationTime = 8/9/2009 2:10:08 PM)
LatestFullBackupTime: 8/9/2009 2:45:43 PM
LatestIncrementalBackupTime:
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup: False
SnapshotLatestFullBackup: False
SnapshotLatestIncrementalBackup:
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: False

In this output you will note that LastGenerationBackedUp is 21 (0x15).  This corresponds to the high log generation stamped in Previous Full Backup.  You’ll also note that the LowestGenerationRequired and HighestGenerationRequired are 31 (0x1f), which corresponds to the low log value stamped in Current Full Backup.

In this case log file copy will start at generation 21 (0x15).  Events in the application log correspond with this:

Event Type:    Information
Event Source:    MSExchangeRepl
Event Category:    Service
Event ID:    2114
Date:        8/9/2009
Time:        2:51:20 PM
User:        N/A
Computer:    2003-NODE2
Description:
The replication instance for storage group 2003-MBX3\2003-MBX3-SG1 has started copying transaction log files. The first log file successfully copied was generation 21.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

The difference between this example and the previous examples is that a full backup was performed.  The decision to start copying at log 21 (0x15), which is based on the Previous Full Backup information, makes sense if you think about the replication service.  Remember that a database can be backed up from either the active or the passive node.  If log file copy were not based on the previous full backup, the passive copy would not have all the logs generated since the last full backup.  This would essentially prevent a later incremental backup from being taken.  (Remember, an incremental backup requires that all log files since the previous full backup be present.)

When a database has had a full backup performed on it, replication will remain healthy as long as all logs on the source are contiguous from the high log generation stamped in Previous Full Backup to the current log.

 

*Database seeded using Update-StorageGroupCopy where a full and incremental backup was performed on the source database.

In this example we have a database on a source server that has had both a full and an incremental backup performed on it (in this case an ESE online streaming backup).  The storage group replication between nodes was suspended using suspend-storagegroupcopy.  Then the update-storagegroupcopy command was used to stream the database to the target server (the -ManualResume switch was also used so I could generate the header dumps).  Below is a sample header dump of a database post an update-storagegroupcopy.

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server

Version 08.01

Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: 2003-MBX3-SG1-DB1.edb

        File Type: Database
   Format ulMagic: 0x89abcdef
   Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
     DB Signature: Create time:08/09/2009 14:10:12 Rand:7948610 Computer:
         cbDbPage: 8192
           dbtime: 22745 (0x58d9)
            State: Dirty Shutdown
     Log Required: 50-50 (0x32-0x32)
    Log Committed: 0-51 (0x0-0x33)
   Streaming File: No
         Shadowed: Yes
       Last Objid: 134
     Scrub Dbtime: 0 (0x0)
       Scrub Date: 00/00/1900 00:00:00
     Repair Count: 0
      Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
  Last Consistent: (0x30,A,1C1)  08/09/2009 14:59:22
      Last Attach: (0x32,9,86)  08/09/2009 14:59:24
      Last Detach: (0x0,0,0)  00/00/1900 00:00:00
             Dbid: 1
    Log Signature: Create time:08/09/2009 14:10:08 Rand:7930576 Computer:
       OS Version: (5.2.3790 SP 2)

Previous Full Backup:
        Log Gen: 20-21 (0x14-0x15)
           Mark: (0x15,D,195)
           Mark: 08/09/2009 14:45:43

Previous Incremental Backup:
        Log Gen: 5-34 (0x5-0x22)
           Mark: (0x23,8,16)
           Mark: 08/09/2009 14:59:00

Previous Copy Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Differential Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Full Backup:
        Log Gen: 50-51 (0x32-0x33)
           Mark: (0x33,F,29)
           Mark: 08/09/2009 15:00:05

Current Shadow copy backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

     cpgUpgrade55Format: 0
    cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0

       ECC Fix Success Count: none
   Old ECC Fix Success Count: none
         ECC Fix Error Count: none
     Old ECC Fix Error Count: none
    Bad Checksum Error Count: none
Old bad Checksum Error Count: none

Operation completed successfully in 0.78 seconds.

In this header dump you will note that Previous Incremental Backup is populated.  The low log generation is 5 (0x5) and the high log generation is 34 (0x22).

After the header dump was generated I resumed storage group copy (normally after a successful update-storagegroupcopy this is done for you automatically).  When replication is resumed the header of the database is consulted.  Here is a sample output from non-customer viewable tracing.

8933 61007400610044 2472 Cluster.Replay ReplicaInstance SetReplayState(): LowestGenerationPresent: 0
HighestGenerationPresent: 0
LowestGenerationRequired: 50
HighestGenerationRequired: 50
LastGenerationBackedUp: 34
CheckpointGeneration: 0
LogfileSignature: MJET_SIGNATURE(Random = 7930576,CreationTime = 8/9/2009 2:10:08 PM)
LatestFullBackupTime: 8/9/2009 2:45:43 PM
LatestIncrementalBackupTime: 8/9/2009 2:59:00 PM
LatestDifferentialBackupTime:
LatestCopyBackupTime:
SnapshotBackup: False
SnapshotLatestFullBackup: False
SnapshotLatestIncrementalBackup: False
SnapshotLatestDifferentialBackup:
SnapshotLatestCopyBackup:
ConsistentDatabase: False
8934 020C9AB5 2472 Cluster.Replay State CopyGenerationNumber is changing to 0 on replica 3d0099f3-ff35-46ea-8a2f-39eb50923209

In this output you will note that LastGenerationBackedUp is 34 (0x22).  This corresponds to the high log generation stamped in Previous Incremental Backup.  You’ll also note that the LowestGenerationRequired and HighestGenerationRequired are 50 (0x32), which corresponds to the low log value stamped in Current Full Backup.

In this case log file copy will start at generation 34 (0x22).  Events in the application log correspond with this:

Event Type:    Information
Event Source:    MSExchangeRepl
Event Category:    Service
Event ID:    2114
Date:        8/9/2009
Time:        2:51:20 PM
User:        N/A
Computer:    2003-NODE2
Description:
The replication instance for storage group 2003-MBX3\2003-MBX3-SG1 has started copying transaction log files. The first log file successfully copied was generation 34.

The difference between this example and the previous examples is that both a full and an incremental backup were performed.  The decision to start copying at log 34 (0x22), which is based on the Previous Incremental Backup information, makes sense if you think about the replication service.  Remember that a database can be backed up from either the active or the passive node.  If log file copy were not based on the previous incremental backup, the passive copy would not have all the logs generated since the last incremental backup.  This would essentially prevent a later incremental backup of the passive copy from being taken.  (Remember, an incremental backup requires that all log files since the previous incremental backup be present.)

When a database has had an incremental backup performed on it, replication will remain healthy as long as all logs on the source are contiguous from the high log generation stamped in Previous Incremental Backup to the current log.
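
Putting the three online-seed cases together, the replication service's choice of first log to copy can be modelled roughly as below (an illustrative sketch inferred from the traces in this post; the parameter names mirror the FileState trace fields and are not a real API):

```python
def first_log_to_copy(lowest_generation_required, last_generation_backed_up):
    """Sketch of the start-generation decision inferred from the traces:
    - no backups, nothing required -> 0, meaning "copy from the first
      log available on the source";
    - seeded but never backed up   -> the lowest required generation
      (the low log of Current Full Backup);
    - previously backed up         -> the high generation of the most
      recent full/incremental backup, so the passive copy keeps every
      log needed for a later incremental backup."""
    if last_generation_backed_up:
        return min(last_generation_backed_up, lowest_generation_required)
    return lowest_generation_required  # 0 means "first log on the source"

print(first_log_to_copy(0, 0))        # 0  -> copy begins with first source log
print(first_log_to_copy(0x0B, 0))     # 11 -> seeded, no prior backup
print(first_log_to_copy(0x1F, 0x15))  # 21 -> full backup performed
print(first_log_to_copy(0x32, 0x22))  # 34 -> full + incremental performed
```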

 

*So what does an example look like where the necessary logs are not present?

In this example I have a database that has had both a full and an incremental backup performed on it.  I suspended the storage group copy between nodes and forced log generation to occur.  I then went into the source log directory and removed two logs from the end of the log stream.

This is not an uncommon scenario.  While storage group copy is failed or suspended, logs continue to be generated on the source server.  Full and incremental backups of the source will continue to succeed, but logs will not be purged.  Depending on the size of your log file drive, and the amount of time that copy is suspended or failed, your log drive may begin to fill up.  This may lead administrators to manually purge the log file series.

Here is an eseutil /ml output of the source log directory showing the gap.

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server

Version 08.01

Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...

Verifying log files...
     Base name: e00

      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000005.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000006.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000007.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000008.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000009.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000000A.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000000B.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000000C.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000000D.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000000E.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000000F.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000010.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000011.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000012.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000013.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000014.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000015.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000016.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000017.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000018.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000019.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000001A.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000001B.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000001C.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000001D.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000001E.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000001F.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000020.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000021.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000022.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000023.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000024.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000025.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000026.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000027.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000028.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000029.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000002A.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000002B.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000002C.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000002D.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000002E.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000002F.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000030.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000031.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000032.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000033.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000034.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000035.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000036.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000037.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000038.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000039.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000003A.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000003B.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000003C.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000003D.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000003E.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000003F.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000040.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000041.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000042.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000043.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000044.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000045.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000046.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000047.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000048.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000049.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000004A.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000004B.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000004C.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000004D.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000004E.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000004F.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000050.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000051.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000052.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000053.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000054.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000055.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000056.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000057.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E0000000058.log - OK
      Missing log files: e00{00000059 - 0000005A}.log
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000005B.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000005C.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E000000005D.log - OK
      Log file: D:\2003-MBX3\2003-MBX3-SG1-Logs\E00.log - OK

Operation terminated with error -528 (JET_errMissingLogFile, Current log file missing) after 5.203 seconds.
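The missing generations that eseutil flags can also be spotted by parsing the log file names themselves, since each name encodes its generation number in hex after the log prefix. Here is a minimal sketch in Python (the file list is hypothetical and the standard E00 prefix is assumed):

```python
# Detect gaps in a storage group's transaction log sequence.
# Log files are named <prefix><generation>.log with the generation in hex
# (e.g. E0000000058.log is generation 0x58).  E00.log is the current open
# log and carries no generation suffix, so it is skipped.

import re

def find_log_gaps(filenames, prefix="E00"):
    """Return the missing generation numbers between the lowest and
    highest generations present in *filenames*."""
    pattern = re.compile(re.escape(prefix) + r"([0-9A-Fa-f]{8})\.log$")
    gens = sorted(
        int(m.group(1), 16)
        for name in filenames
        if (m := pattern.search(name))
    )
    if not gens:
        return []
    present = set(gens)
    return [g for g in range(gens[0], gens[-1] + 1) if g not in present]

# Example: logs 0x58 through 0x5C with 0x59 and 0x5A deleted.
logs = ["E0000000058.log", "E000000005B.log", "E000000005C.log", "E00.log"]
print([hex(g) for g in find_log_gaps(logs)])  # ['0x59', '0x5a']
```

This mirrors what eseutil reported above: generations 0x59 and 0x5A are absent from an otherwise contiguous sequence.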

On the passive node I then issued update-storagegroupcopy.  Prior to resuming the storage group copy I dumped the header of the database; here is the output.

Extensible Storage Engine Utilities for Microsoft(R) Exchange Server

Version 08.01

Copyright (C) Microsoft Corporation. All Rights Reserved.

Initiating FILE DUMP mode...
         Database: 2003-MBX3-SG1-DB1.edb

        File Type: Database
   Format ulMagic: 0x89abcdef
   Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
     DB Signature: Create time:08/09/2009 14:10:12 Rand:7948610 Computer:
         cbDbPage: 8192
           dbtime: 23064 (0x5a18)
            State: Dirty Shutdown
     Log Required: 94-94 (0x5e-0x5e)
    Log Committed: 0-95 (0x0-0x5f)
   Streaming File: No
         Shadowed: Yes
       Last Objid: 134
     Scrub Dbtime: 0 (0x0)
       Scrub Date: 00/00/1900 00:00:00
     Repair Count: 0
      Repair Date: 00/00/1900 00:00:00
Old Repair Count: 0
  Last Consistent: (0x5C,13,AC)  08/09/2009 15:20:34
      Last Attach: (0x5E,9,86)  08/09/2009 15:21:58
      Last Detach: (0x0,0,0)  00/00/1900 00:00:00
             Dbid: 1
    Log Signature: Create time:08/09/2009 14:10:08 Rand:7930576 Computer:
       OS Version: (5.2.3790 SP 2)

Previous Full Backup:
        Log Gen: 70-71 (0x46-0x47)
           Mark: (0x47,E,4F)
           Mark: 08/09/2009 15:06:34

Previous Incremental Backup:
        Log Gen: 5-75 (0x5-0x4b)
           Mark: (0x4C,8,16)
           Mark: 08/09/2009 15:17:03

Previous Copy Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Previous Differential Backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

Current Full Backup:
        Log Gen: 94-95 (0x5e-0x5f)
           Mark: (0x5F,C,198)
           Mark: 08/09/2009 15:22:10

Current Shadow copy backup:
        Log Gen: 0-0 (0x0-0x0)
           Mark: (0x0,0,0)
           Mark: 00/00/1900 00:00:00

     cpgUpgrade55Format: 0
    cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0

       ECC Fix Success Count: none
   Old ECC Fix Success Count: none
         ECC Fix Error Count: none
     Old ECC Fix Error Count: none
    Bad Checksum Error Count: none
Old bad Checksum Error Count: none

Operation completed successfully in 0.62 seconds.

Based on the header of the database I can see that the previous incremental backup section is populated.  Knowing our rules of replication, a database with an incremental backup will require that all logs from the highest log generation stamped there, in this case 75 (0x4B), through the current point in time be present on the source, contiguous, and able to be copied to the target.
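Given that starting generation and the top of the Log Committed range from the header (0x5F), the set of log files the replication service expects to copy can be enumerated directly. A quick illustration (generation bounds taken from the header dump above; the name format assumes the standard E00 prefix):

```python
# Enumerate the log file names required after a reseed, given the starting
# generation (the previous incremental backup's high generation) and the
# current generation (the top of the header's Log Committed range).

def required_log_names(start_gen, current_gen, prefix="E00"):
    # Generation numbers are zero-padded 8-digit uppercase hex in the name.
    return [f"{prefix}{gen:08X}.log" for gen in range(start_gen, current_gen + 1)]

# 0x4B (75) through 0x5F (95), per the header dump above.
names = required_log_names(0x4B, 0x5F)
print(names[0], names[-1])  # E000000004B.log E000000005F.log
```

Every one of these generations must exist on the source for the copy to proceed, which is why the deleted 0x59 and 0x5A logs matter.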

From our eseutil /ml output you can see that I removed logs 0x59 and 0x5A (decimal 89 and 90). 

I resumed storage group copy using resume-storagegroupcopy. 

The following event was logged indicating that copy started at log 75 (0x4b).

Event Type:    Information
Event Source:    MSExchangeRepl
Event Category:    Service
Event ID:    2114
Date:        8/9/2009
Time:        3:26:06 PM
User:        N/A
Computer:    2003-NODE2
Description:
The replication instance for storage group 2003-MBX3\2003-MBX3-SG1 has started copying transaction log files. The first log file successfully copied was generation 75.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

As logs copied between the nodes, get-storagegroupcopystatus showed the storage group in question in a Failed state.

Name              SummaryCopyStatus  CopyQueueLength  ReplayQueueLength  LastInspectedLogTime
----              -----------------  ---------------  -----------------  --------------------
2003-MBX3-SG1     Failed             0                0
2003-MBX3-SG2     Suspended          0                0                  8/9/2009 ...

By reviewing the application log I noticed that our failure was due to the inability to copy the 0x59 (89) log file from the source.  This makes sense since I knowingly deleted it, and it is expected behavior: all logs from the high log generation stamped in the previous incremental backup to the current time must be present on the source, contiguous, and able to be copied to the target.

Here is the sample error text:

Event Type:    Error
Event Source:    MSExchangeRepl
Event Category:    Service
Event ID:    2059
Date:        8/9/2009
Time:        3:27:07 PM
User:        N/A
Computer:    2003-NODE2
Description:
The log file \\2003-mbx3-replc\3d0099f3-ff35-46ea-8a2f-39eb50923209$\E0000000059.log for 2003-MBX3\2003-MBX3-SG1 is missing on the production copy. Continuous replication for this storage group is blocked. If you removed the log file, please replace it. If the log is lost, the passive copy will need to be reseeded using the Update-StorageGroupCopy cmdlet in the Exchange Management Shell.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

What is confusing about this event is that the administrator is advised to run update-storagegroupcopy, yet that is exactly the command we just ran that generated the error.  Based solely on the event, and without knowing which logs the replication service requires for a given database state, one could end up in an endless loop of update-storagegroupcopy runs and log file copy failures.

Now…how can this condition be corrected?  By running a new full backup of the database prior to running update-storagegroupcopy.  The full backup will reset the previous incremental backup information and stamp values in the previous full backup section based on the logs that are still present.

 

*So what are the log file copy rules:

If a database is offline (clean shutdown), then all logs from the last consistent value to the current point must be present on the source, must be contiguous, and must be able to be copied to the target.

If a database is online but has never had a full or incremental backup, then all logs from the anchor log at the time the update-storagegroupcopy was initiated to the current log must exist on the source, must be contiguous, and must be able to be copied to the target.

If a database is online and has had a full backup performed on it, then all logs from the high log generation, as stamped in the previous full backup, must be present on the source, must be contiguous, and must be able to be copied to the target.

If a database is online and has had a full and incremental backup performed on it, then all logs from the high log generation, as stamped in the previous incremental backup, must be present on the source, must be contiguous, and must be able to be copied to the target.
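These rules boil down to one decision: the database state and backup history determine the first generation that must still exist on the source. A minimal sketch of that decision logic in Python (the parameter names are hypothetical; in practice the values come from the database header, as dumped with eseutil /mh):

```python
# Determine the first log generation that must be present, contiguous,
# and copyable from the source, based on database state and backup history.
# All generation parameters are hypothetical stand-ins for header fields.

def first_required_generation(state, last_consistent_gen=None,
                              anchor_gen=None,
                              prev_full_high_gen=None,
                              prev_incremental_high_gen=None):
    if state == "clean shutdown":
        # Offline copy: everything from the last consistent log forward.
        return last_consistent_gen
    if prev_incremental_high_gen is not None:
        # Full + incremental backup: start at the incremental's high generation.
        return prev_incremental_high_gen
    if prev_full_high_gen is not None:
        # Full backup only: start at the full backup's high generation.
        return prev_full_high_gen
    # Online, never backed up: start at the anchor log at seed time.
    return anchor_gen

# The scenario in this post: online database, previous incremental
# backup high generation 0x4B (75).
print(hex(first_required_generation("dirty shutdown",
                                    prev_incremental_high_gen=0x4B)))  # 0x4b
```

In our walkthrough this resolves to 0x4B, which matches event 2114's report that copying began at generation 75.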