• Cluster Network Thresholds – A Good Read

    Recently I was re-reading a 2012 blog post on Tuning Failover Cluster Network Thresholds by Elden Christensen, a Principal PM on the Windows Failover Cluster team. I think Elden’s post is a must-read for anyone planning a highly-available Exchange deployment.  One of the reasons I find it an excellent read is that it addresses many things administrators need to understand when they decide to tune cluster heartbeat subnet delays and thresholds in Exchange environments.

     

    The post first makes an excellent point: changing these thresholds alters the amount of time it takes to detect that a node is down.  Elden uses a great metaphor here:

     

    “Think of it like your cell phone, when the other end goes silent how long are you willing to sit there going “Hello?... Hello?... Hello?” before you hang-up the phone and call the person back.  When the other end goes silent, you don’t know when or even if they will come back.”

    As the subnet thresholds are adjusted upward, the time it takes to detect a failure increases, and therefore so does the time it takes to act on that failure. There is a balance between reacting quickly to a failure and providing resiliency to transient networking issues.
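
    For reference, here is a minimal sketch of how these settings can be inspected and adjusted from PowerShell on a cluster node. The property names are those exposed by the FailoverClusters module; the values below are purely illustrative, not recommendations:

    Import-Module FailoverClusters

    # Inspect the current heartbeat delay and threshold values
    Get-Cluster | Format-List *SubnetDelay,*SubnetThreshold

    # Illustrative change: tolerate ten missed cross-subnet heartbeats
    # sent two seconds apart before declaring a node down
    $cluster = Get-Cluster
    $cluster.CrossSubnetDelay = 2000
    $cluster.CrossSubnetThreshold = 10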

     

    The other point worth understanding is how often these values are adjusted without any analysis or correction of the underlying networking issues.  Elden sums this up, too, and I could not agree with him more:

     

    “It critical to recognize that cranking up the thresholds to high values does not fix nor resolve the transient network issue, it simply masks the problem by making health monitoring less sensitive. The #1 mistake made broadly by customers is the perception of not triggering cluster health detection means the issue is resolved (which is not true!). I like to think of it, that just because you choose not to go to the doctor it does not mean you are healthy. In other words, the lack of someone telling you that you have a problem does not mean the problem went away.”

    I often find myself in conversations with customers who have changed these values and have the perception that something is “fixed.” There are legitimate cases where these values need to be changed – but I always encourage a networking analysis that enables you to understand what issues you are facing and how adjusting these values would help. Unfortunately, adjusting these thresholds without this understanding is far more common than it should be.

     

    I strongly encourage all Exchange administrators to read Elden’s post.

  • Office 365: Manage Office 365 features when Outlook Web Access is disabled on a mailbox.

    When accounts are provisioned in Office 365, there are certain self-service features that end users have access to without administrator interaction. For example, when logging in and selecting Office 365 settings, provisioned accounts have access to software downloads and the change password feature.

     


    When an account has an Exchange license added to it, the provisioning process creates a mailbox for it in the service, which automatically enables Outlook Web App (OWA) for the user. Depending on the license type and the age of the mailbox, the user logon experience may differ.  For example, on new accounts, when the user logs in they may be taken to the Introduction portal page.

     


    For accounts that are aged or that have an Exchange-only license type, the Introduction page may be automatically skipped and the user immediately redirected to OWA.

     


    When a user is redirected to OWA, they can access their self-service options by clicking the gear in the upper right-hand corner and selecting Office 365 Settings.

     


    One of the administrative features we provide is the ability to disable OWA. This allows the administrator to restrict mailbox access via OWA while allowing access from other clients – for example, Outlook. When OWA is enabled, the feature shows as enabled with the Disable action available.
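
    The same change can also be made from remote PowerShell with Set-CASMailbox; a minimal sketch (the mailbox identity is a placeholder):

    # Disable Outlook Web App for a single mailbox
    Set-CASMailbox -Identity <NAME> -OWAEnabled $false

    # Re-enable it later
    Set-CASMailbox -Identity <NAME> -OWAEnabled $true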

     


    When the Disable action is selected, OWA becomes unavailable. The mailbox then shows a setting of disabled with the Enable action available.

     


    When a user attempts to access OWA from the Introduction page (by selecting Outlook in the action bar), they are prompted with the following error:


    X-OWA-Error: Microsoft.Exchange.Data.Storage.AccountDisabledException

     


    When a user has landed on this page and requires access to other services, they must manually return to the Introduction page. The issue arises when the mailbox is aged or has been provisioned with an Exchange-only license, because the Introduction page is bypassed and the user is immediately directed to OWA. The user cannot use the browser's back button to return to the Introduction page, and no option to select Office 365 settings is present on the failure page.

     

    When this situation is present, users can be directed to the following URL.  This URL bypasses the automatic OWA redirection and allows the Introduction page to be presented to the user.

     

    https://portal.microsoftonline.com/IWGetStarted15.aspx?DisableIWLanding=true

     


    This URL should allow administrators to continue to provide access to self-service options while OWA is in a disabled state.

  • Database Availability Groups – Storage Swing Migrations

    In some circumstances it becomes necessary to migrate users quickly between servers within Database Availability Groups, and in some instances a mailbox move is not an option.  When the existing storage can be retained, we can use the swing method to complete the migration.

     

    Some instances where these instructions have been implemented include:

     

    • Replacement of off-lease hardware (for example, nodes) where additional storage does not exist for a mailbox move or to replicate database copies.
    • Upgrading nodes to new operating systems by migrating storage / databases from a previous version of Windows to a new version of Windows while keeping the same Exchange version.

     

    The swing method involves identifying a database copy to be moved between nodes, migrating the storage between nodes, and then migrating additional copies of the database between nodes. 

     

    There are several considerations when deciding to implement the storage swing migration.  Some of these considerations include:

     

    • The complexity of the steps and the ability to test prior to implementing against production users.
    • The loss of all lagged database copies.
    • The need for end user downtime to complete the transition.
    • Maintaining enough storage to hold all log files during the transition process, because log file truncation, normally performed by circular logging or backups, is blocked during the transition.
    • Content indexes must be completely rebuilt once databases and storage are migrated.

     

    In this article we have implemented the following architecture for testing and documentation:

     


    Step 0:  Ensure all support teams are aware of the actions to be performed.

     

    It is important that all support teams are prepared for the actions to be taken in these steps.  Ensuring that storage can be migrated quickly is paramount to reducing downtime.  Also, depending on your hardware, additional steps may need to be performed to ensure that storage can be imported.  For example, in DAS environments, when moving storage chassis between nodes, RAID configurations must be imported into the new controllers (as opposed to SAN environments, where LUNs can simply be mapped to different servers).

     

    Step 1:  Ensure storage has appropriate labels.

    When creating partitions, you can assign labels. Maintaining meaningful labels on the storage helps you keep track of the volumes you are working with. You cannot rely on disk numbers, because different disk numbers may be assigned as storage is migrated between servers; disk labels, however, are maintained when disks are moved between servers.

     

    In this example, I have generic volume names on my server – for example, Data0 / Data1 / Data2, and so on.

     


    These volume names are generic and do not help identify the data contained within these volumes.  Using the Disk Management tool, I can change the volume names to something more meaningful, such as an indication of the data being stored.

     


    Naming volumes in this manner can reduce potential confusion when migrating storage between servers.
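
    If you prefer the shell to the Disk Management tool, the Storage module in Windows Server 2012 and later can set labels as well; a sketch, assuming the volume in question currently holds drive letter E: (the drive letter and label are placeholders):

    # Relabel the volume so its label identifies the database it carries
    Set-Volume -DriveLetter E -NewFileSystemLabel "DB0"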

     

    Step 2:  Create new database objects on the new database availability groups for the databases that will be migrated.

     

    In this step, you create new database objects on the target DAG. I recommend that these new databases be created on the same DAG member.  This will serve as the node where the initial storage migration occurs and where you should be able to quickly restore services.  In larger DAGs, where no single node will host all database copies, you can spread the new databases out across the DAG members.  Be sure to keep track of these database locations so storage is migrated to the appropriate servers.

     

    In the example below, I create the new mailbox databases, but I don’t specify a path for log files or databases.  Thus, these databases are created at the default location (%ProgramFiles%\Microsoft\Exchange Server\v14\Mailbox). The databases are not mounted at this point, and therefore no storage is being used. I am creating the databases and then allowing sufficient time for Active Directory replication and for the Microsoft Exchange Replication and Information Store services to detect the databases.

     

    [PS] C:\>New-MailboxDatabase -Name NEW-DB0 -Server MBX-2A

    Name                           Server          Recovery        ReplicationType
    ----                           ------          --------        ---------------
    NEW-DB0                        MBX-2A          False           None

    [PS] C:\>New-MailboxDatabase -Name NEW-DB1 -Server MBX-2A

    Name                           Server          Recovery        ReplicationType
    ----                           ------          --------        ---------------
    NEW-DB1                        MBX-2A          False           None

    I can validate the configuration with the following command:

     

    [PS] C:\>Get-MailboxDatabase -Server MBX-2A -Status | fl name,*mounted*,*path*

    Name                    : NEW-DB0
    MountedOnServer         : MBX-2A.exchange.msft
    Mounted                 : False
    EdbFilePath             : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB0\NEW-DB0.edb
    LogFolderPath           : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB0
    TemporaryDataFolderPath :

    Name                    : NEW-DB1
    MountedOnServer         : MBX-2A.exchange.msft
    Mounted                 : False
    EdbFilePath             : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB1\NEW-DB1.edb
    LogFolderPath           : C:\Program Files\Microsoft\Exchange Server\V14\Mailbox\NEW-DB1
    TemporaryDataFolderPath :

    Step 3:  Remove all truncation and replay lags from database copies.

    In this step, you disable truncation and replay lag settings from all database copies that have them configured.  It is necessary to have all databases up to date before migrating them to new storage.  By disabling replay and truncation lag settings, the administrator can decrease the amount of downtime required to move storage between nodes. Truncation and replay lag settings are dynamic, and when disabled, log file replay and log file truncation will start on lagged copies at the next Replication service update cycle.  Sufficient time should be allowed between this step and the day of migration in order to allow any lagged copies to replay outstanding log files.
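
    Before making changes, it can be useful to confirm which copies actually have lag configured; a small sketch using the databases from this example:

    # ReplayLagTimes / TruncationLagTimes list the configured lag per server
    Get-MailboxDatabase DB0 | Format-List Name,ReplayLagTimes,TruncationLagTimes
    Get-MailboxDatabase DB1 | Format-List Name,ReplayLagTimes,TruncationLagTimes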

     

    In the following example, databases assigned to MBX-1C are databases with lagged copies:

     

    [PS] C:\>Get-MailboxDatabaseCopyStatus *

    Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                                  Length    Length                             State
    ----                                          ------          --------- ----------- --------------------   ------------
    DB0\MBX1-A                                    Mounted         0         0                                  Healthy
    DB1\MBX1-A                                    Healthy         0         0           1/28/2014 1:59:43 PM   Healthy
    DB1\MBX-1B                                    Mounted         0         0                                  Healthy
    DB0\MBX-1B                                    Healthy         0         0           1/28/2014 1:36:57 PM   Healthy
    DB0\MBX-1C                                    Healthy         0         267         1/28/2014 1:36:57 PM   Healthy
    DB1\MBX-1C                                    Healthy         0         253         1/28/2014 1:59:43 PM   Healthy
    NEW-DB0\MBX-2A                                Dismounted      0         0                                  Unknown
    NEW-DB1\MBX-2A                                Dismounted      0         0                                  Unknown

     

    To disable the lag settings on a copy, use the Set-MailboxDatabaseCopy command:

     

    Set-MailboxDatabaseCopy DB0\MBX-1C -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0

    Set-MailboxDatabaseCopy DB1\MBX-1C -ReplayLagTime 0.0:0:0 -TruncationLagTime 0.0:0:0

    Eventually, the replay queues should decrease to zero.

     

    [PS] C:\>Get-MailboxDatabaseCopyStatus *

    Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                                  Length    Length                             State
    ----                                          ------          --------- ----------- --------------------   ------------
    DB0\MBX1-A                                    Mounted         0         0                                  Healthy
    DB1\MBX1-A                                    Healthy         0         0           1/28/2014 1:59:43 PM   Healthy
    DB1\MBX-1B                                    Mounted         0         0                                  Healthy
    DB0\MBX-1B                                    Healthy         0         0           1/28/2014 1:36:57 PM   Healthy
    DB0\MBX-1C                                    Healthy         0         0           1/28/2014 1:36:57 PM   Healthy
    DB1\MBX-1C                                    Healthy         0         0           1/28/2014 1:59:43 PM   Healthy
    NEW-DB0\MBX-2A                                Dismounted      0         0                                  Unknown
    NEW-DB1\MBX-2A                                Dismounted      0         0                                  Unknown

    Depending on the duration of the original lag settings, this procedure could take several hours or days to complete. Once the replay queues are at zero, the lagged copy is considered successfully disabled.

     

    Step 4:  Validate database copy health

    This step needs to be performed immediately before migrating storage.  It is imperative that all database copies be healthy prior to proceeding with further steps. You can use Get-MailboxDatabaseCopyStatus to validate that all database copies are healthy.

     

    [PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX1-A

    Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                                  Length    Length                             State
    ----                                          ------          --------- ----------- --------------------   ------------
    DB0\MBX1-A                                    Mounted         0         0                                  Healthy
    DB1\MBX1-A                                    Healthy         0         0           1/29/2014 5:32:30 AM   Healthy

    [PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1B

    Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                                  Length    Length                             State
    ----                                          ------          --------- ----------- --------------------   ------------
    DB1\MBX-1B                                    Mounted         0         0                                  Healthy
    DB0\MBX-1B                                    Healthy         0         0           1/29/2014 5:32:22 AM   Healthy

    [PS] C:\>Get-MailboxDatabaseCopyStatus *\MBX-1C

    Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                                  Length    Length                             State
    ----                                          ------          --------- ----------- --------------------   ------------
    DB0\MBX-1C                                    Healthy         0         0           1/29/2014 5:32:22 AM   Healthy
    DB1\MBX-1C                                    Healthy         0         0           1/29/2014 5:32:30 AM   Healthy

     

    If any database copy is unhealthy, it needs to be fixed.  In some instances, that may require reseeding, which can take several hours. Appropriate time should be allotted to ensure that any remediation steps can be followed.
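
    If a copy does need to be reseeded, the general pattern looks like the sketch below (this deletes the local copy's files and reseeds from the active copy; the copy name is from this example environment):

    # Suspend the unhealthy copy, then reseed it from the active copy.
    # The copy resumes automatically after a successful seed unless
    # -ManualResume is specified.
    Suspend-MailboxDatabaseCopy DB0\MBX-1C -Confirm:$false
    Update-MailboxDatabaseCopy DB0\MBX-1C -DeleteExistingFiles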

     

    Step 5:  Dismount databases and validate database dismount

    Next, the databases that will be migrated are dismounted. They will need to remain dismounted until the storage has been successfully migrated and services have been restored in the new database availability group.

     

    Use Dismount-Database to dismount the databases:

     

    [PS] C:\>Dismount-Database DB0 -Confirm:$False
    [PS] C:\>Dismount-Database DB1 -Confirm:$False

     

    Use Get-MailboxDatabase -Status to verify the databases are dismounted.

     

    [PS] C:\>Get-MailboxDatabase DB0 -Status | fl *mounted*

    MountedOnServer : MBX1-A.exchange.msft
    Mounted         : False

    [PS] C:\>Get-MailboxDatabase DB1 -Status | fl *mounted*

    MountedOnServer : MBX-1B.exchange.msft
    Mounted         : False

     

    Step 6:  Ensure log file copy and replay have completed.

     

    Conditions can arise between validating copy status and dismounting the databases that leave some log files uncopied to the passive node. So in this step, you manually copy all log files to the passive node.

     

    Using an administrative command prompt on the server hosting the passive copy, navigate to the log file directory for a database you are migrating.

     

    Execute a command similar to the following (this example assumes the command prompt is already in the target log file directory, and excludes the checkpoint file):

    robocopy \\<ActiveNode>\<Drive$>\<log folder path> . /E /XF *.chk

     

    F:\DB1>robocopy \\mbx-1b\f$\DB1 . /e /xf *.chk

    ------------------------------------------------------------------------------
       ROBOCOPY     ::     Robust File Copy for Windows

    ------------------------------------------------------------------------------

      Started : Wednesday, January 29, 2014 9:50:13 AM
       Source : \\mbx-1b\f$\DB1\
         Dest : F:\DB1\

        Files : *.*

    Exc Files : *.chk

      Options : *.* /S /E /DCOPY:DA /COPY:DAT /R:1000000 /W:30

    ------------------------------------------------------------------------------

                             364    \\mbx-1b\f$\DB1\
            *EXTRA Dir        -1    F:\DB1\incseedInspect\
    100%        New File               1.0 m        E01.log
                               0    \\mbx-1b\f$\DB1\IgnoredLogs\
                               0    \\mbx-1b\f$\DB1\inspector\

    ------------------------------------------------------------------------------

                   Total    Copied   Skipped  Mismatch    FAILED    Extras
        Dirs :         3         0         0         0         0         1
       Files :       364         1       363         0         0         0
       Bytes :  362.00 m    1.00 m  361.00 m         0         0         0
       Times :   0:00:00   0:00:00                       0:00:00   0:00:00

       Speed :            22310127 Bytes/sec.
       Speed :            1276.595 MegaBytes/min.
       Ended : Wednesday, January 29, 2014 9:50:13 AM

     

    This will ensure that any missing log files, as well as the updated ENN.log, are available on all copies for replay.

     

    Step 7:  Replay all log files into databases.

    Next, use eseutil to replay all log files into all database copies.  This will ensure that all copies are up to date prior to migrating storage to the remote nodes.  This step will be performed on all servers hosting a passive or active database copy.

     

    Launch an administrative command prompt and navigate to the log file directory.

     

    Run eseutil /r ENN, where ENN is the three-character log file prefix for that log sequence.

     

    F:\DB1>eseutil /r e01

    Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
    Version 14.03
    Copyright (C) Microsoft Corporation. All Rights Reserved.

    Initiating RECOVERY mode...
        Logfile base name: e01
                Log files: <current directory>
             System files: <current directory>

    Performing soft recovery...
                          Restore Status (% complete)

              0    10   20   30   40   50   60   70   80   90  100
              |----|----|----|----|----|----|----|----|----|----|
              ...................................................

    Operation completed successfully in 21.938 seconds.

     

    When this has been completed on all database copies, we can proceed to the verification step.

     

    Step 8:  Validate database headers.

     

    Once all log files have been copied and replayed, you must ensure that all database copies reflect this work. Compare the database headers of each database copy to ensure that they are equal. Specifically, compare the Last Consistent and Last Detach fields, which you can view using eseutil.

     

    To dump the header of the database, use eseutil /mh.

     

    E:\DB0>eseutil /mh DB0.edb

    Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
    Version 14.03
    Copyright (C) Microsoft Corporation. All Rights Reserved.

    Initiating FILE DUMP mode...
             Database: DB0.edb

    DATABASE HEADER:
    Checksum Information:
    Expected Checksum: 0x1c1c8028
      Actual Checksum: 0x1c1c8028

    Fields:
            File Type: Database
             Checksum: 0x1c1c8028
       Format ulMagic: 0x89abcdef
       Engine ulMagic: 0x89abcdef
    Format ulVersion: 0x620,17
    Engine ulVersion: 0x620,17
    Created ulVersion: 0x620,17
         DB Signature: Create time:01/27/2014 12:18:49 Rand:3626382 Computer:
             cbDbPage: 32768
               dbtime: 962549 (0xeaff5)
                State: Clean Shutdown
         Log Required: 0-0 (0x0-0x0)
        Log Committed: 0-0 (0x0-0x0)
       Log Recovering: 0 (0x0)
      GenMax Creation: 00/00/1900 00:00:00
             Shadowed: Yes
           Last Objid: 7494
         Scrub Dbtime: 0 (0x0)
           Scrub Date: 00/00/1900 00:00:00
         Repair Count: 0
          Repair Date: 00/00/1900 00:00:00
    Old Repair Count: 0
      Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52
          Last Attach: (0x16A,1,270)  01/29/2014 10:12:49
          Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:52
                 Dbid: 1
        Log Signature: Create time:01/27/2014 12:18:48 Rand:3652040 Computer:
           OS Version: (6.2.9200 SP 0 NLS ffffffff.ffffffff)

    Previous Full Backup:
            Log Gen: 0-0 (0x0-0x0)
               Mark: (0x0,0,0)
               Mark: 00/00/1900 00:00:00

    Previous Incremental Backup:
            Log Gen: 0-0 (0x0-0x0)
               Mark: (0x0,0,0)
               Mark: 00/00/1900 00:00:00

    Previous Copy Backup:
            Log Gen: 0-0 (0x0-0x0)
               Mark: (0x0,0,0)
               Mark: 00/00/1900 00:00:00

    Previous Differential Backup:
            Log Gen: 0-0 (0x0-0x0)
               Mark: (0x0,0,0)
               Mark: 00/00/1900 00:00:00

    Current Full Backup:
            Log Gen: 0-0 (0x0-0x0)
               Mark: (0x0,0,0)
               Mark: 00/00/1900 00:00:00

    Current Shadow copy backup:
            Log Gen: 0-0 (0x0-0x0)
               Mark: (0x0,0,0)
               Mark: 00/00/1900 00:00:00

         cpgUpgrade55Format: 0
        cpgUpgradeFreePages: 0
    cpgUpgradeSpaceMapPages: 0

           ECC Fix Success Count: none
       Old ECC Fix Success Count: none
             ECC Fix Error Count: none
         Old ECC Fix Error Count: none
        Bad Checksum Error Count: none
    Old bad Checksum Error Count: none

      Last checksum finish Date: 00/00/1900 00:00:00
    Current checksum start Date: 00/00/1900 00:00:00
          Current checksum page: 0

    Operation completed successfully in 0.125 seconds.

     

    After dumping the header of each copy of the same database, compare the Last Consistent and Last Detach times.  If these times are equal across all copies of the same database, then log file copy and log file replay were successful.
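
    As a convenience, those two fields can be filtered straight out of the header dump; a sketch run from the directory containing the database file:

    # Show only the Last Consistent and Last Detach lines
    eseutil /mh .\DB0.edb | Select-String "Last Consistent","Last Detach"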

     

    DB0\MBX-1A

    Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52
    Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:52

     

    DB0\MBX-1B

    Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:53
    Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:53

     

    DB0\MBX-1C

    Last Consistent: (0x1E6,1,2CA)  01/29/2014 10:57:52
    Last Detach: (0x1E6,1,2CA)  01/29/2014 10:57:52

    If the values on any copy do not match for any reason, mount the database on the source server and start again at Step 4 of this document. If all database headers are equal, proceed with the storage migration.

     

    Step 9:  Migrate storage to the new node.

     

    When moving storage to the new server, start by migrating from a server hosting a passive database copy. In this example, I will focus on DB0, which was passive on server MBX-1B. The steps to move storage between servers depend on your storage implementation, and as such they are not covered in this article.  These steps should have been tested and validated prior to this point.

     

    I recommend migrating the storage from a single node first. This allows the original active database and storage to remain intact in case there are any issues with the storage migration or the database on the target server. After services have been established on the target server, the additional databases and storage can be migrated.

     

    In this example, the new databases were created on server MBX-2A, and I am moving the storage from MBX-1B to MBX-2A.  After bringing the disks online on MBX-2A using the Disk Management tool, appropriate drive letters or mount points can be assigned.  IMPORTANT:  note the drive letters and paths used in this procedure.  You will need to repeat this step on the other servers using the exact same paths; failure to implement the same drive letters or paths will result in failure of subsequent steps.
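
    On Windows Server 2012, the migrated disks can also be brought online from the shell rather than Disk Management; a sketch (the filter selects every offline disk, so narrow it if unrelated offline disks are present):

    # Bring the newly presented disks online and clear the read-only flag
    Get-Disk | Where-Object { $_.IsOffline } | Set-Disk -IsOffline $false
    Get-Disk | Where-Object { $_.IsReadOnly } | Set-Disk -IsReadOnly $false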

     

    Step 10:  Mount the migrated database on the new node.

    In Step 2 above, you created your database objects. In this step, match one of those databases to the files that were moved from the original DAG. First, use Set-MailboxDatabase to set the AllowFileRestore flag on each database.

     

    [PS] C:\>Set-MailboxDatabase -Identity NEW-DB1 -AllowFileRestore:$TRUE

     

    Once AllowFileRestore has been set, change the database and log file paths for the new database object to match the migrated storage.  It is very important that, when setting the EDB file path, you use the correct file name.  The paths do not have to be – and, depending on the configuration used in Step 9, may not be – the same as they were on the original server.

     

    Use Move-DatabasePath to set the database and log file paths, as shown below.

     

    [PS] C:\>Move-DatabasePath NEW-DB1 -LogFolderPath f:\DB1 -EdbFilePath g:\DB1\DB1.edb -ConfigurationOnly:$TRUE -Confirm:$FALSE

    Confirm
    This operation will skip the safety check and make the change to Active Directory directly. Do you want to continue?

     

    Be sure to allow ample time for Active Directory replication to occur. Then, mount the database using Mount-Database.

     

    [PS] C:\>Mount-Database NEW-DB1

     

    If the command completes successfully, the database mount status can be verified with Get-MailboxDatabase -Status.

     

    [PS] C:\>Get-MailboxDatabase -Identity NEW-DB1 -Status | fl *mounted

    MountedOnServer : MBX-2A.exchange.msft
    Mounted         : True

     

    Although the database is mounted, mailboxes still reference the original dismounted database in the original database availability group.

     

    Step 11:  Move mailboxes to reference the migrated database.

    Begin the process of restoring mailbox access by moving the mailboxes from the original database to the new database.  This is accomplished using Get-Mailbox and Set-Mailbox.

     

    [PS] C:\>Get-Mailbox -Database DB1 | Set-Mailbox -Database NEW-DB1

    Confirm
    Rehoming mailbox "exchange.msft/LoadGen Objects/Users/MBX-1B/DB1/MBX-1B 0B63EF06-LGU000001" to database "NEW-DB1". This
    operation will only modify the mailbox's Active Directory configuration. Be aware that the current mailbox content
    will become inaccessible to the user.
    [Y] Yes  [A] Yes to All  [N] No  [L] No to All  [?] Help (default is "Y"): a

     

    After allowing sufficient time for Active Directory replication, users should be able to access their mailboxes. Transport services may need to be restarted to force re-categorization of queued messages so they are delivered to the new servers.
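
    One way to do that is to restart the transport service on the Hub Transport servers (MSExchangeTransport is the Exchange 2010 transport service name):

    # Restart transport so queued messages are re-categorized and
    # routed to the new mailbox servers
    Restart-Service MSExchangeTransport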

     

    Step 12:  (Optional)  Migrate storage associated with other database copies.

     

    This step is not necessary if all storage for all database copies was already migrated in Step 9. In this step, complete the migration of storage from the original servers to the new servers that will house the database copies. It is important that all paths on the new servers match, so pay careful attention to how the disks are presented and how drive letters / mount points are assigned.

     

    Step 13:  Add database copies of new databases to additional DAG nodes using migrated storage.

    After storage has been completely migrated, the original databases should now be available on servers in the new DAG. Using Add-MailboxDatabaseCopy, you can reinstate passive copies of the database using the databases that were migrated from the original DAG. The Replication service will match these databases to the new log file stream and begin log file replay. If truncation and / or replay lag was previously configured, the copies may be added with the lag at this time.

     

    [PS] C:\>Add-MailboxDatabaseCopy NEW-DB1 -MailboxServer MBX-2B
    [PS] C:\>Add-MailboxDatabaseCopy NEW-DB1 -MailboxServer MBX-2C -ReplayLagTime 7.0:0:0

     

    The success of these operations can be validated with Get-MailboxDatabaseCopyStatus.

     

    [PS] C:\>Get-MailboxDatabaseCopyStatus NEW-DB1\*

    Name                                          Status          CopyQueue ReplayQueue LastInspectedLogTime   ContentIndex
                                                                  Length    Length                             State
    ----                                          ------          --------- ----------- --------------------   ------------
    NEW-DB1\MBX-2A                                Mounted         0         0                                  Healthy
    NEW-DB1\MBX-2B                                Healthy         0         0           1/29/2014 8:59:19 PM   Crawling
    NEW-DB1\MBX-2C                                Healthy         0         162         1/29/2014 8:59:19 PM   Crawling

     

    Once this procedure has been successfully completed on all database copies, the original servers can be decommissioned, if necessary.

  • Exchange 2013: Health Manager service may not reliably start after server boot.

    In Exchange 2013 we have introduced managed availability.  The managed availability process runs within the Microsoft Exchange Health Manager service (MSExchangeHMHost.exe).

     

    After a server boots, it has been observed that the Microsoft Exchange Health Manager service may fail to start automatically.  Attempts to start the service manually are successful.  When reviewing the system log, the following events are present:

     

    Log Name:      System
    Source:        Service Control Manager
    Date:          11/13/2013 9:23:47 AM
    Event ID:      7009
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      MBX-1.domain.com
    Description:
    A timeout was reached (30000 milliseconds) while waiting for the Microsoft Exchange Health Manager service to connect.

    Log Name:      System
    Source:        Service Control Manager
    Date:          11/13/2013 9:23:47 AM
    Event ID:      7000
    Task Category: None
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      MBX-1.domain.com
    Description:
    The Microsoft Exchange Health Manager service failed to start due to the following error:
    The service did not respond to the start or control request in a timely fashion.

     

    To correct the issue, the startup type for the service can be changed from AUTOMATIC to AUTOMATIC (DELAYED).
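
    The change can be made in the Services console or from an elevated prompt; a sketch, assuming the service short name MSExchangeHM (note that sc.exe requires the space after start=):

    sc.exe config MSExchangeHM start= delayed-auto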

     


    After setting the startup type of the service to AUTOMATIC (DELAYED), the following event is noted in the system log on boot:

     

    Log Name:      System
    Source:        Service Control Manager
    Date:          11/13/2013 10:13:26 AM
    Event ID:      7036
    Task Category: None
    Level:         Information
    Keywords:      Classic
    User:          N/A
    Computer:      MBX-1.domain.com
    Description:
    The Microsoft Exchange Health Manager service entered the running state.

     

    This event indicates the Microsoft Exchange Health Manager service started successfully.

  • Office 365: POP and IMAP clients receive OWA links for calendar invitations

    As you may know, Office 365 supports a number of different client protocols, including POP and IMAP, and a wide variety of POP and IMAP clients. By default, POP and IMAP clients are configured to use Outlook Web App (OWA) for handling calendar invitations. When these clients receive a meeting request, the body of the invite contains a link; clicking the link allows the user to open their mailbox via OWA so they can accept or decline the request.

     

    If you are using a POP or IMAP client that is capable of handling iCal messages, you may want to change your configuration settings to get a better experience.

     

    You can configure – on a per-mailbox basis – how POP and IMAP clients receive calendar appointments.  Get-CASMailbox can be used to view your current settings.

     

    Get-CASMailbox -Identity <NAME> | fl name,*pop*,*imap*

    Name                                    : administrator
    ExternalPopSettings                     :
    InternalPopSettings                     :
    PopEnabled                              : True
    PopUseProtocolDefaults                  : True
    PopMessagesRetrievalMimeFormat          : BestBodyFormat
    PopEnableExactRFC822Size                : False
    PopSuppressReadReceipt                  : False
    PopForceICalForCalendarRetrievalOption  : False
    ExternalImapSettings                    :
    InternalImapSettings                    :
    ImapEnabled                             : True
    ImapUseProtocolDefaults                 : True
    ImapMessagesRetrievalMimeFormat         : BestBodyFormat
    ImapEnableExactRFC822Size               : False
    ImapSuppressReadReceipt                 : False
    ImapForceICalForCalendarRetrievalOption : False

    In the above output, you can see the two attributes that you need to set to True to enable iCal support. The *ForceICalForCalendarRetrievalOption attributes specify that the client should be provided calendar appointments in iCal format.

     

    Also note the *UseProtocolDefaults attributes.  These must be set to False in order for any changes to *ForceICalForCalendarRetrievalOption to take effect.  You can use Set-CASMailbox to change these settings:

     

    Set-CASMailbox -Identity <NAME> -PopUseProtocolDefaults:$FALSE -ImapUseProtocolDefaults:$FALSE -PopForceICalForCalendarRetrievalOption:$TRUE -ImapForceICalForCalendarRetrievalOption:$TRUE

    Running Get-CASMailbox again confirms the new values:

     

    Name                                    : administrator
    ExternalPopSettings                     :
    InternalPopSettings                     :
    PopEnabled                              : True
    PopUseProtocolDefaults                  : False
    PopMessagesRetrievalMimeFormat          : BestBodyFormat
    PopEnableExactRFC822Size                : False
    PopSuppressReadReceipt                  : False
    PopForceICalForCalendarRetrievalOption  : True
    ExternalImapSettings                    :
    InternalImapSettings                    :
    ImapEnabled                             : True
    ImapUseProtocolDefaults                 : False
    ImapMessagesRetrievalMimeFormat         : BestBodyFormat
    ImapEnableExactRFC822Size               : False
    ImapSuppressReadReceipt                 : False
    ImapForceICalForCalendarRetrievalOption : True

    The settings described here are per-mailbox settings.  There is no global setting to change the default for the entire tenant, and there is no method to adjust the settings per client.
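
    Because there is no tenant-wide setting, the change has to be applied mailbox by mailbox. One possible approach, sketched below, is to pipe every mailbox through Set-CASMailbox; on a large tenant this can take some time, and you may want to filter the set first:

    # Apply the iCal settings to every mailbox in the tenant
    Get-CASMailbox -ResultSize Unlimited | Set-CASMailbox `
        -PopUseProtocolDefaults:$false -ImapUseProtocolDefaults:$false `
        -PopForceICalForCalendarRetrievalOption:$true `
        -ImapForceICalForCalendarRetrievalOption:$true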