This is the official blog of the Exchange Server Product Group. All content here is considered authoritative and supported by Microsoft, unless otherwise specified.
Would you like to suggest a topic for the Exchange team to blog about? Send suggestions to us.
Editor's Note: Updates added below for important information related to Exchange Server 2010 SP3 Update Rollup 8.
The Exchange team is announcing a number of releases today, including updates for Exchange Server 2013, 2010, and 2007. The following packages are now available on the Microsoft Download Center.
These releases represent the latest set of fixes available for each of their respective products. The releases include fixes for customer reported issues and minor feature improvements. The cumulative updates and rollup updates for each product version contain important updates for recently introduced Russian time zones, as well as fixes for the security issues identified in MS14-075. Also available for release today are MS14-075 Security Updates for Exchange Server 2013 Service Pack 1 and Exchange Server 2013 Cumulative Update 6.
Exchange Server 2013 Cumulative Update 7 includes updates which make migrating to Exchange Server 2013 easier. These include:
Customers with Public Folders deployed in an environment where multiple Exchange versions co-exist will want to read Brian Day’s post for additional information.
Cumulative Update 7 includes minor improvements in the area of backup. We encourage all customers who back up their Exchange databases to upgrade to Cumulative Update 7 as soon as possible and to complete a full backup once the upgrade is finished. These improvements remove potential challenges when restoring a previously backed-up database.
For the latest information and product announcements about Exchange 2013, please read What's New in Exchange 2013, Release Notes and Exchange 2013 documentation on TechNet.
Cumulative Update 7 includes Exchange-related updates to Active Directory schema and configuration. For information on extending schema and configuring Active Directory, please review Prepare Active Directory and Domains in Exchange 2013 documentation.
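If your environment still needs the schema and Active Directory updates applied before installing the CU, the documented Exchange 2013 setup switches can be run from the unpacked CU7 media. A sketch (assumes you are running from the CU media root with Schema Admins and Enterprise Admins membership):

```
Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms
```

Run /PrepareSchema in the domain containing the schema master; /PrepareAD then updates the Exchange organization configuration.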
Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU7) or the prior (e.g., CU6) Cumulative Update release.
Update 12/12/2014:
Exchange Server 2010 SP3 Update Rollup 8 has been re-released to the Microsoft Download Center, resolving a regression discovered in the initial release. The updated RU8 package corrects the issue which impacted users connecting to Exchange from Outlook. The issue was isolated to the MAPI RPC layer, which allowed us to quickly deliver the updated RU8 package. If you need to confirm you have the updated package, the new RU8 is version 14.03.0224.002. The updates for Exchange Server 2013 and 2007 were not impacted by this regression and have not been updated.
Update 12/10/2014:
An issue has been identified in the Exchange Server 2010 SP3 Update Rollup 8. The update has been recalled and is no longer available on the Download Center pending a new RU8 release. Customers should not proceed with deployments of this update until the new RU8 version is made available. Customers who have already started deployment of RU8 should roll back this update.
The issue impacts the ability of Outlook to connect to Exchange, so we are recalling RU8 to resolve this problem. We will deliver a revised RU8 package as soon as the issue can be isolated, corrected, and validated, and we will publish further updates to this blog post regarding RU8.
This issue impacts only the Exchange Server 2010 SP3 RU8 update; the other updates remain valid, and customers can continue deploying those packages.
The Exchange Team
On a recent project I had to consider how to implement meeting rooms in Exchange 2007 SP1. I read all of the available TechNet articles and posts, and realized that it is not necessarily easy to set up meeting rooms with the correct policies on the first try. So I put together a synthesis of how to quickly create the meeting room of your dreams, in the hope that it helps you.
Resource Mailbox Overview
Resource mailboxes are specific types of mailboxes that can represent meeting rooms or shared equipment and can be included as resources in meeting requests. The Active Directory user that is associated with a resource mailbox is a disabled account. The different types of resource mailboxes in Microsoft Exchange Server 2007 are:
Room mailbox: a resource mailbox that is assigned to a meeting location, such as a conference room, auditorium, or training room. Room mailboxes can be included as resources in meeting requests.
Equipment mailbox: a resource mailbox that is assigned to a non-location-specific resource, such as a portable computer projector, microphone, or company car. Equipment mailboxes can be included as resources in meeting requests.
Shared mailbox: a mailbox that is not primarily associated with a single user and is generally configured to allow logon access for multiple users. After a shared mailbox is created (by using the Exchange Management Shell), you must grant permissions to all users that require access to it. Although this is not a resource mailbox, I mention it here because companies commonly use this kind of mailbox for collaboration or business needs.
Example 1: How to create a resource mailbox
Create a Room mailbox:
New-Mailbox -database "Storage Group 1\Mailbox Database 1" -Name ConfRoom1 -OrganizationalUnit "Conference Rooms" -DisplayName "ConfRoom1" -UserPrincipalName ConfRoom1@contoso.com -Room
Create an Equipment mailbox:
New-Mailbox -database "First Storage Group\Mailbox Database" -Name VCR1 -OrganizationalUnit Equipment -DisplayName "VCR1" -UserPrincipalName VCR1@contoso.com -Equipment
Create a Shared mailbox:
New-Mailbox -database "Storage Group 1\Mailbox Database 1" -Name SharedMailbox01 -OrganizationalUnit "Resource Mailboxes" -DisplayName "SharedMailbox01" -UserPrincipalName SharedMailbox01@contoso.com -Shared
(from http://technet.microsoft.com/en-us/library/bb201680.aspx)
Resource Mailbox Properties
You can configure a number of properties on resource mailboxes. For example, you can use the ResourceCapacity, Office, and ResourceCustom parameters with the Set-Mailbox cmdlet to configure some of these settings.
Custom resource properties can help users select the most appropriate room or equipment by providing additional information about the resource. For example, you can create a custom property for room mailboxes called AV. You can add this property to all rooms that have audio-visual equipment, which allows users to identify which conference rooms have audio-visual equipment available. A custom resource property cannot contain a value; it is only a flag that can be added to a resource mailbox. Flags are defined globally for the Exchange organization.
Before you can assign custom resource properties to a room or equipment mailbox, you must first create these properties by modifying the resource configuration of your Exchange organization. Custom resource properties can be added with the Set-ResourceConfig cmdlet.
In Microsoft Exchange Server 2003 and earlier versions, LDAP filtering syntax is used to create custom address lists, global address lists (GALs), e-mail address policies, and distribution groups. In Exchange Server 2007, the new OPATH filtering syntax replaces the LDAP filtering syntax. For example, a new address list can only be based on properties filterable by the -RecipientFilter parameter (complete list: http://technet.microsoft.com/en-us/library/bb738157.aspx). Other properties, including any custom schema extensions, cannot be used in the -RecipientFilter parameter. So LDAP attributes defined to search for rooms or create Address Book views must be included in OPATH properties to allow for wide use within Exchange 2007.
Note: All entries provided to the Set-ResourceConfig cmdlet must start with either Room/ or Equipment/. Also, setting a new entry with the Set-ResourceConfig cmdlet overwrites all existing entries rather than adding a new entry to the list; use the Get-ResourceConfig cmdlet to query the existing entries, and then append to the list. For every custom resource property you create in your organization, you must specify which resource mailbox type it applies to (room or equipment). When you are managing a resource mailbox, you can assign only those custom resource properties that apply to that specific resource mailbox type.
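Because Set-ResourceConfig replaces the entire list, a safe pattern is to read the current schema and append to it. A minimal sketch (the "Room/Whiteboard" property name is hypothetical):

```powershell
# Read the existing custom property schema, append a new entry, then write
# it back, so existing entries are preserved rather than overwritten.
$schema = (Get-ResourceConfig).ResourcePropertySchema
$schema += "Room/Whiteboard"
Set-ResourceConfig -ResourcePropertySchema $schema
```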
Example 2: Create Custom Properties for Resource Mailbox
Set-ResourceConfig -ResourcePropertySchema ("Room/TV", "Room/VCR", "Equipment/Auto")
Example 3 : Configure Resource Mailbox Properties
Set-Mailbox -Identity "ResourceMailbox01" -ResourceCustom ("TV","VCR") -ResourceCapacity 50
(from http://technet.microsoft.com/en-us/library/aa996915.aspx)
Room Mailbox Settings
Before explaining how to create the different types of room mailbox, we must focus on the settings that can be configured with Set-MailboxCalendarSettings. With this cmdlet you can configure many parameters on the resource mailbox (maximum meeting duration allowed, default reminder time, and so on). A complete list with descriptions is available at http://technet.microsoft.com/en-us/library/aa996340.aspx.
The main parameter that interests us is AutomateProcessing, which enables or disables calendar management on the resource mailbox. The three possible values are:
None: Both resource booking and the Calendar Attendant are disabled on the mailbox. (Meeting requests are not processed and stack up in the inbox of the room mailbox.)
AutoUpdate: This is the default value. The Calendar Attendant processes meeting requests, which sit in the calendar of the room in a tentative state awaiting delegate approval. (The meeting organizer receives only the decision of the delegate.)
AutoAccept: Resource booking is enabled on the room mailbox. This means the room takes the booking policies into account for incoming requests (who can schedule, and so on). (With automatic booking, the organizer receives the decision of the room; otherwise the organizer first receives an acknowledgement message pending delegate approval.)
Note: The Calendar Attendant automatically places new meetings on the calendar as tentative appointments, updates existing meetings with new information, and deletes out-of-date meeting requests without any client interaction. The Calendar Attendant also processes meeting forward notifications by sending a notification when a meeting request is forwarded and adding meeting attendees to the calendar when a meeting notification is received. The Resource Booking Attendant automates acceptance and declination of resource booking requests. Policies can be set up for each resource based upon by whom, when, and for how long a resource can be booked.
The AutoAccept value enables the resource booking policies to manage who can book the room and under what conditions. For each room mailbox, each user can be a member of different policies:
BookInPolicy: List of users who are allowed to submit in-policy meeting requests. In-policy requests from these users are automatically approved.
RequestInPolicy: List of users who are allowed to submit in-policy meeting requests. In-policy requests from these users are subject to approval by a delegate.
RequestOutOfPolicy: List of users who are allowed to submit out-of-policy meeting requests. Out-of-policy requests from these users are subject to approval by a delegate.
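As a sketch (the room and user names here are made up), restricting automatic booking to a specific set of users could look like this:

```powershell
# Disable automatic booking for everyone, then allow only the listed
# users to have their in-policy requests approved automatically.
Set-MailboxCalendarSettings -Identity "ConfRoom1" `
    -AutomateProcessing AutoAccept `
    -AllBookInPolicy $false `
    -BookInPolicy "Isabelle Dupont","Marc Martin"
```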
In the context of resource mailboxes, InPolicy and OutOfPolicy simply mean whether or not the meeting invitation matches any restrictions enabled on the resource mailbox. There are also policies to specify permissions for all users (AllBookInPolicy, AllRequestInPolicy, AllRequestOutOfPolicy).
For example, if the MaximumDurationInMinutes value for the resource mailbox is 30 minutes, any meeting invitation longer than 30 minutes is OutOfPolicy. Using the RequestOutOfPolicy field, you can manually add users who are allowed to request meetings that do not fall within the policy.
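For instance, a 30-minute limit combined with an out-of-policy exception for one user might be configured like this (a sketch; the identity and user name are hypothetical):

```powershell
# Meetings longer than 30 minutes become OutOfPolicy; the listed user may
# still submit such requests, subject to delegate approval.
Set-MailboxCalendarSettings -Identity "ConfRoom1" `
    -MaximumDurationInMinutes 30 `
    -RequestOutOfPolicy "Isabelle Dupont"
```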
Figure 1 : Booking Policy - Who can schedule a resource for an Auto-Accept resource mailbox
Room Mailbox Main Scenarios
Now that we know how to create a meeting room, and with our ideas on strategies a little clearer, let's look at the main room mailbox scenarios we can implement:
Room with automatic booking;
Room with meeting requests forwarded to a delegate;
Room requiring the logon of a delegate to manage the meeting requests.
Room with automatic booking
To enable automatic booking, set AutomateProcessing to AutoAccept, which enables the resource booking policies. With the default room policy configuration, all users are then allowed to send in-policy meeting requests, and these requests are processed automatically by the room.
Example 4: How to enable automatic booking on a Resource Mailbox
Set-MailboxCalendarSettings -Identity "Conference Room" -AutomateProcessing AutoAccept
(from http://technet.microsoft.com/en-us/library/bb123495.aspx)
Room with meeting requests forwarded to a delegate
To have the room forward incoming meeting requests to a delegate for approval, you must enable and configure policies and define a delegate:
Enable policies: set AutomateProcessing to AutoAccept;
Require delegate approval for all incoming meeting requests: set AllRequestInPolicy to True and AllBookInPolicy to False;
Define a delegate with the ResourceDelegates parameter.
A resource delegate has the following permissions: Editor on the Calendar folder of the resource mailbox; Editor on the "FreeBusy Data" system folder of the resource mailbox; and the ability to Send on behalf of the resource mailbox.
Example 5: How to set a Room to forward request to a delegate
Set-MailboxCalendarSettings -Identity "Training Room" -AutomateProcessing AutoAccept -ResourceDelegates "Isabelle Dupont" -AllBookInPolicy:$false -AllRequestInPolicy:$true
The delegate can now manage meeting requests forwarded by the room mailbox from his own mailbox by accepting or rejecting them. He can also access the calendar folder of the room mailbox (through the "Open other user's folder" feature of the Outlook client). Note that the responses received by the organizers come from the delegate on behalf of the room mailbox.
Note: When the Set-MailboxCalendarSettings cmdlet is re-run to modify any settings, the original delegate's permissions are removed. The delegate is still displayed by the Get-MailboxCalendarSettings cmdlet; however, if you look at the permissions on the resource calendar, the delegate's permissions have been removed. To re-grant permissions on the resource calendar, you must first run a "Set-MailboxCalendarSettings resource_alias -ResourceDelegates:$null" command, and afterwards re-add the intended user. Until this problem is fixed, we recommend running this command before making any changes to resource delegates.
Room whose management is done directly by the delegate
This is the default behavior of a newly created room, with the AutomateProcessing parameter set to AutoUpdate.
The Calendar Attendant processes meeting requests, which sit in the calendar of the room in a tentative state awaiting delegate approval. The delegate needs permissions to connect to the resource mailbox and manage the meeting requests: "Full Mailbox Access" to open the resource mailbox and, for example, "Send As" to respond to requests in a transparent manner.
Example 6: The delegate manages requests from the resource mailbox
Set-MailboxCalendarSettings -Identity "Conference Room" -AutomateProcessing AutoUpdate
Add-MailboxPermission -AccessRights FullAccess -Identity "Conference Room" -User "Isabelle Dupont"
Add-ADPermission -Identity "Conference Room" -User "Isabelle Dupont" -ExtendedRights Send-As
Note: "Send As" versus "Send on Behalf". The Send As permission allows a user to send as another user. The Send on Behalf permission allows a user to send on behalf of another user, which means the recipient knows who really sent the message because it is clearly stated in the message.
Synthesis
Based on the main scenarios detailed previously, the minimum parameters to set are the following:
Resource calendar settings (Set-MailboxCalendarSettings):
Scenario | AutomateProcessing | AllBookInPolicy | AllRequestInPolicy | ResourceDelegates
Room mailbox, automatic booking | AutoAccept | True (default) | False (default) | None (default)
Room mailbox, manual approval (requests forwarded to delegates) | AutoAccept | False | True | List of delegates
Room mailbox, manual approval (delegates approve from the room mailbox) | AutoUpdate (default) | True (default) | False (default) | None (default)
Whatever the scenario, a delegate can modify the resource booking parameters (except the delegate settings) by accessing the resource mailbox with Outlook Web Access (https://mail.contoso.com/room@contoso.com). To do this, the delegate needs the "Full Mailbox Access" permission on the resource mailbox.
Figure 2 : Resource Mailbox Settings with Outlook Web Access
For further reading and the most up-to-date information:
-- Murat Gunyar
Today on the Office blog we announced that service pack 1 for the 2013 set of products, including Office, SharePoint and Exchange, will be released early next year. We know our Exchange customers have been looking for confirmation of the release, but also have a desire for an early look at what's coming with Exchange Server 2013 Service Pack 1 (SP1). So let's have a first look at a few things you can expect to see in SP1. But wait… we haven't released CU3 – well, news about CU3 is imminent - stay tuned for more information about CU3 coming very soon.
In this post we are highlighting a few of the notable improvements to be included in SP1. This isn't an all-inclusive list, so stay tuned for additional details as we approach release.
SP1 will require customers to update their Active Directory schema - customers should assume this requirement for all Exchange Server 2013 updates. Plan for this required update so you can quickly take advantage of SP1. Active Directory schema updates for Exchange are additive and always backward compatible with previous releases and versions.
On behalf of the Exchange Product Group, thanks again for your continued support. As always, let us know what you think!
Brian Shiers, Exchange Technical Product Manager
The Exchange team is announcing today the availability of Update Rollup 6 for Exchange Server 2010 Service Pack 3. Update Rollup 6 is the latest rollup of customer fixes available for Exchange Server 2010 Service Pack 3. The release contains fixes for customer-reported issues and previously released security bulletins. Update Rollup 6 is not considered a security release, as it contains no new, previously unreleased security fixes. A complete list of issues resolved in Exchange Server 2010 Service Pack 3 Update Rollup 6 may be found in KB2936871. Customers running any Service Pack 3 Update Rollup for Exchange Server 2010 can move to Update Rollup 6 directly.
The release is now available on the Microsoft Download Center. Update Rollup 6 will be available on Microsoft Update in early July.
Note: KB articles may not be fully available at the time of publishing of this post.
EDIT 8/22/2008: Corrected a typo in a "DisabledComponents" registry key name.
It's been a while since I've been thinking of writing a blog post about various aspects of Outlook Anywhere that people have been asking questions about. Somehow, I keep getting myself caught up in one thing or another, and have consequently delayed writing this blog post by almost 4 months. Ugh. Better late than never I figure.
Given how long this blog post is overdue, I plan to cover a lot of topics, from frequently asked questions to common misconceptions to problems with Outlook Anywhere to suggested solutions for different problems.
How does Outlook Anywhere work?
I won't cover details on the cmdlets that enable and change settings for Outlook Anywhere. There is already a bunch of documentation on it. Instead, let's do a slightly deeper dive than the cmdlet documentation provides.
The values that you provide to Outlook Anywhere settings can be classified into two types of properties: client-facing and server-facing. Examples of client-facing properties are ClientAuthenticationMethod and ExternalHostname. Examples of server-facing properties are IISAuthenticationMethods and SSLOffloading. Client-facing properties are picked up by Autodiscover and supplied to Outlook to configure client access to the Outlook Anywhere service. Server-facing properties are picked up by a servicelet called RpcHttpConfigurator, which runs as part of the Microsoft Exchange Service Host service. This servicelet runs every 15 minutes by default, but the interval can be adjusted by changing the value of the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeServiceHost\RpcHttpConfigurator\PeriodicPollingMinutes regkey. Note that setting this value to 0 turns off the RpcHttpConfigurator.
When the RpcHttpConfigurator runs, it picks up the IISAuthenticationMethods and SSLOffloading values from the AD and stamps it on the \rpc vdir settings in the IIS metabase - overwriting any previously set value. This means that if you manually change the settings on this vdir, you should expect to be run over pretty shortly by the RpcHttpConfigurator (unless you have set the reg key to 0).
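As a sketch, you can inspect or change the polling interval from PowerShell on the CAS (registry path as given above; 0 disables the servicelet entirely):

```powershell
$path = "HKLM:\System\CurrentControlSet\Services\MSExchangeServiceHost\RpcHttpConfigurator"
# Poll every 60 minutes instead of the default 15.
Set-ItemProperty -Path $path -Name PeriodicPollingMinutes -Value 60 -Type DWord
(Get-ItemProperty -Path $path).PeriodicPollingMinutes
```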
Ok, so that's just part of what the servicelet does.
Outlook Anywhere depends on the RPC/HTTP Windows component to do the marshalling and unmarshalling of the RPC packets from the client to the CAS server. A client-side RPC component is responsible for marshalling every RPC packet into an HTTP tunnel and sending it over to the \rpc vdir on the CAS server. RPCProxy is an ISAPI extension that unmarshals the RPC packet, retrieves the RPC endpoint that the client wants to talk to, and forwards the packet to the endpoint. But imagine if you were able to connect to any server in the organization just by authenticating against an IIS box running RPCProxy. By the weakest-link theory, all you'd need to do is hack into a single IIS server and you'd have free access to all servers in the org. Ouch! To alleviate this problem, RPCProxy only allows connections to be made to servers and ports that are in a trusted list. This list is maintained through the HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\RpcProxy\ValidPorts regkey and contains all the servers/ports that RPCProxy is allowed to talk to. So, the other part of what the RpcHttpConfigurator servicelet does is that it queries the AD for all mailbox servers and stamps them in the ValidPorts regkey, allowing access to ports 6001, 6002, and 6004 for both FQDN and NetBIOS access. You will typically see something like mbx1:6001-6002;mbx1:6004;mbx1.contoso.com:6001-6002;mbx1.contoso.com:6004 as the value for the key. As new mailbox servers are added to the org, they will be picked up when the servicelet runs and added to the key. Again, if you manually change this regkey, you should expect to be bulldozed by the servicelet.
Note that the ValidPorts key is only used by RPCProxy as a filter to disallow communication with unlisted server ports. It is not used to determine which server to send requests to. For the same reason, the order in which servers are listed in this key does not matter. I just thought I'd clarify this since I was recently told that there was confusion on what this key accomplished.
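To see what the servicelet last stamped, you can read the value directly, using the registry path mentioned above (a quick sketch):

```powershell
# The ValidPorts list that RPCProxy uses as its server/port filter.
(Get-ItemProperty "HKLM:\Software\Microsoft\Rpc\RpcProxy").ValidPorts
```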
Ok, simple enough, now that all the configuration is done, how does Outlook Anywhere actually establish its connections. The following diagram may help:
As you see above, the client specifies the VIP of the Load balancer (or direct CAS FQDN if the CAS is exposed to the Internet) as the HTTP endpoint and the mailbox server as the RPC endpoint. The query string is somewhat like this:
http://nlb.contoso.com/rpc/rpcproxy.dll?mbx1.contoso.com:6001
This tells the RPCProxy on CAS1 that the client is trying to connect to server mbx1.contoso.com on port 6001. RPCProxy looks up the ValidPorts key and if mbx1.contoso.com:6001 is listed there, it allows the connection to go through.
The blue and red arrows above represent the two different connections spawned by the RPC/HTTP client component to represent a single RPC connection. This is done because HTTP connections are half duplex (i.e., they allow you either to send information or to receive information, not both at the same time). In the case of RPC, connections need to be long-lived and full duplex, so the RPC_IN_DATA connection acts as the sending half-duplex connection, while the RPC_OUT_DATA connection acts as the receiving half-duplex connection. Since HTTP requires that each connection be given a maximum length, each of these connections is 1 GB "long" and is terminated when this limit is reached. Each of these connections is tagged with a client session id. When the RPC server component receives the RPC_IN_DATA and RPC_OUT_DATA with the same client session id, it knows that for any request received on the RPC_IN_DATA connection, it must reply back on the RPC_OUT_DATA connection. That's magic.
Ok, so you already know this, but I'll reiterate - the mailbox server has 3 ports that are used for RPC/HTTP: port 6001 is used for Mail connections, port 6002 is used for directory referral, port 6004 is used for proxying directory connections to AD. The Referral Service running on port 6002 and DSProxy running on port 6004 are part of the same mad.exe process, and the Referral Service just refers clients back to DSProxy to establish their Directory connections. If you Ctrl+Right Click the Outlook icon and click on Connection Status, it will tell you what connections exist (Mail vs. Directory), what server they are going to and what protocol they are using (HTTPS vs. TCP(direct Exchange RPC connection)).
I have conveniently omitted any discussion around certificates, since that can take up another few blog posts. As some would say, that is beyond the scope of this article and is left as an exercise to the reader.
How do I know Outlook Anywhere is working?
Simple... when no one is complaining! Seriously though, it is preferable to run diagnostics on Outlook Anywhere before subjecting it to thousands of users. The one tool that works pretty well in most cases is rpcping. Yes, it has a lot of parameters and is confusing, but it does provide pretty good diagnostic information, and as long as you have the KB open, you can figure out where problems lie. Start by pinging just the RPCProxy by using the -E option. Once that works, move on to testing the mailbox server endpoints by removing the -E and adding -e 6001 instead. Do the same for 6002 and 6004.
A typical command line would be something like the following. Refer to http://support.microsoft.com/kb/831051 for usage details:
rpcping -t ncacn_http -o RpcProxy=cas1.contoso.com -P "user,domain,password" -H 1 -F 3 -a connect -u 9 -v 3 -s mailbox.contoso.com -I "user,domain,password" -e 6004
How does Outlook Anywhere not work?
Unfortunately, there are some cases where Outlook Anywhere does not work without requiring manual tweaks. This is the part I wish I had blogged about earlier. I'm sure there are poor folks out there that have hit these issues and wasted their time figuring out what I had already learned...
DSProxy and IPv6
As of E12 SP1, Outlook Anywhere on Windows 2008 requires that IPv6 be manually turned off on the CAS server. This is because the DSProxy component that listens on port 6004 (mad.exe) for directory connections does not listen on the IPv6 stack. If you do a netstat -ano | findstr 6004, you will see only 1 LISTENING entry - the one that corresponds to the IPv4 stack. Contrast this with ports 6001 and 6002 that have 2 entries.
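If you want to script that check, here is a rough Python sketch that parses netstat-style output. The sample lines are illustrative, and the exact netstat column layout can vary slightly between Windows versions:

```python
# Sketch: determine which protocol stacks a port is listening on by parsing
# `netstat -ano` output. Sample output is embedded for illustration.
sample = """\
  TCP    0.0.0.0:6001    0.0.0.0:0    LISTENING    1234
  TCP    [::]:6001       [::]:0       LISTENING    1234
  TCP    0.0.0.0:6004    0.0.0.0:0    LISTENING    1234
"""

def stacks_listening(netstat_output: str, port: int):
    stacks = set()
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3] == "LISTENING":
            local = fields[1]
            if local.endswith(f":{port}"):
                stacks.add("IPv6" if local.startswith("[") else "IPv4")
    return sorted(stacks)

print(stacks_listening(sample, 6001))  # ['IPv4', 'IPv6']
print(stacks_listening(sample, 6004))  # ['IPv4']  <- DSProxy's symptom
```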
(As most of you already know, if you are running your Mailbox role on the same machine as a DC, lsass.exe, not mad.exe, listens on port 6004, so this problem will not surface since lsass.exe listens on both protocol stacks.)
How do you turn off IPv6 ? It depends on whether you are running CAS and Mailbox on the same server or different ones.
If you're in a multi-server scenario where the RPCProxy is not on the same server as the Mailbox, then you need to do the following:
If you're in a single-server scenario where the RPCProxy and Mailbox are on the same machine, then the above does not work since the loopback interface still uses IPv6. In this case, you need to make the following changes in the system32\drivers\etc\hosts file:
Thanks to Kevin Reeuwijk and others for finding and reporting the issue and solution. A fix (make DSProxy listen on the IPv6 stack) is on the way and is expected to be available in Exchange 2007 SP1 RU4 in Q3 2008.
DSProxy and Split RPC_IN_DATA, RPC_OUT_DATA connections
In the diagram above, you will notice that I have used a Source IP load balancing layer. This ensures that the RPC_IN_DATA and RPC_OUT_DATA connections coming from a single Outlook instance are always affinitized to the same CAS server. However, there are some legitimate scenarios where Source IP affinity is not viable for customers. A typical example is when a large number of end users are behind NAT devices, causing all connections to arrive from the same IP and hence land on the same CAS server... yay load balancing! Outlook Anywhere does not support cookies, so cookie-based load balancing cannot be used either. The only way of spreading load across the server farm is to use no affinity or SSL-ID based affinity. However, this has the problem that the RPC_IN_DATA and RPC_OUT_DATA connections could (and most likely would) end up on different CAS servers as shown in the diagram below:
If you've been reading closely, you'll remember my earlier mention that the RPC server component is well aware of client session IDs and can reply on RPC_OUT_DATA for any requests on RPC_IN_DATA. And if that's the case, we should still be fine since Outlook always specifies the mailbox server as its RPC endpoint. Well, almost. We are fine for ports 6001 and 6002, which are real RPC endpoints. The issue is with port 6004, where DSProxy pretends to be an RPC endpoint but is just a proxy, as the name implies. DSProxy only proxies client connections through to the DC. In the example above, RPC_IN_DATA is proxied to DC1 while RPC_OUT_DATA is proxied to DC2. The DCs are the real RPC endpoints. However, now that the 2 connections have been split, neither of the DCs is aware of the other connection and requests sent on RPC_IN_DATA are lost in oblivion. We call this split connectivity and it is a problem surfaced by SSL-ID or no affinity load balancing. While I would recommend not using these configurations if avoidable, it is clear as described earlier that these may be the only alternatives. Think hard if this is the case, since the workaround that I am describing below will be tedious to maintain.
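Here is a toy Python simulation of why this happens. It is purely illustrative - real load balancers are obviously not two-line Python functions:

```python
# Toy simulation of split connectivity: with no affinity, a round-robin load
# balancer can land RPC_IN_DATA and RPC_OUT_DATA from the same Outlook session
# on different CAS servers.
from itertools import cycle

cas_pool = cycle(["CAS1", "CAS2"])

def route_no_affinity(connection):
    return next(cas_pool)  # each new connection goes to the next server

in_server = route_no_affinity(("session-42", "RPC_IN_DATA"))
out_server = route_no_affinity(("session-42", "RPC_OUT_DATA"))
print(in_server, out_server)  # the two halves of one session are split

# With Source IP affinity, both halves map to the same server:
def route_by_source_ip(client_ip, pool=("CAS1", "CAS2")):
    return pool[hash(client_ip) % len(pool)]

assert route_by_source_ip("10.0.0.5") == route_by_source_ip("10.0.0.5")
```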
The goal of these steps is to eliminate the possibility of split connectivity by (1) having clients bypass DSProxy wherever possible and (2) constraining DSProxy to a single DC for any remaining requests.
First off, you need to avoid using DSProxy wherever possible. Normally, the Referral Service running on port 6002 refers clients to DSProxy on port 6004. By setting the following regkey, you instruct Referral Service to not send clients to DSProxy, but instead give them a referral to a DC for directory connections. So, instead of client connections going from Client to RPCProxy to DSProxy to DC, the path would be from Client to RPCProxy to DC. Note that the client is not directly connecting to the DC, so it is not required to publish the DCs to the internet or open any new firewall ports. See KB http://support.microsoft.com/kb/872897 for details:
On the Mailbox servers: a DWORD entry needs to be created on each Mailbox server named "Do Not Refer HTTP to DSProxy" at HKLM\System\CCS\Services\MSExchangeSA\Parameters\ and the value set to 1
Next, as indicated earlier, the RPCProxy will block access to the DC servers unless these servers are included in the ValidPorts regkey. So, set the following on the Client Access Servers:
Finally, you need to make sure that the DCs are listening on port 6004:
On the Global Catalog servers: a REG_MULTI_SZ entry needs to be created on each GC named NSPI interface protocol sequences at HKLM\System\CCS\Services\NTDS\Parameters\ and the value set to ncacn_http:6004
These fixes will make sure that all directory connections bypass DSProxy and terminate at the DCs, thereby allowing the DC RPC server side component to receive both the RPC_IN_DATA and RPC_OUT_DATA connections.
There is 1 last thing to deal with in this SSL-ID load balanced configuration. Outlook profile creation hard codes a call to DSProxy on 6004, which means that we can get split connectivity during profile creation. To deal with this minimal volume of traffic, there is 1 final regkey that should be set on the mailbox servers:
On the Mailbox Servers - set the HKLM\System\CCS\Services\MSExchangeSA\Parameters key "NSPI Target Server" to the FQDN of the DC that profile creation should use.
By using only 1 DC for profile creation, all DSProxy calls will be proxied into that single DC, once again avoiding split connectivity.
That's it folks!
Of course, subsequent releases will provide cleaner solutions for such topologies, but for now, rest assured that having gone through the above steps multiple times, I feel your pain.
That's pretty much it. I hope that adds some clarity to how Outlook Anywhere works and hasn't succeeded in confusing everyone even more.
Until the next post - Hasta Luego!
- Sid
Anyone who regularly uses Log Parser 2.2 knows just how useful and powerful it can be for obtaining valuable information from IIS (Internet Information Services) and other logs. In addition, adding the power of SQL allows explicit searching of gigabytes of logs, returning only the data that is needed while filtering out the noise. The only thing missing is a great graphical user interface (GUI) to function as a front-end to Log Parser and a ‘Query Library’ in order to manage all those great queries and scripts that one builds up over time.
Log Parser Studio was created to fulfill this need, allowing those who use Log Parser 2.2 (and even those who don’t, due to lack of an interface) to work faster and more efficiently and get to the data they need with less “fiddling” with scripts and folders full of queries.
With Log Parser Studio (LPS for short) we can house all of our queries in a central location. We can edit and create new queries in the ‘Query Editor’ and save them for later. We can search for queries using free text search as well as export and import both libraries and queries in different formats allowing for easy collaboration as well as storing multiple types of separate libraries for different protocols.
We all know this very well: processing logs for different Exchange protocols is a time-consuming task. In the absence of special-purpose tools, it becomes tedious for an Exchange administrator to sift through those logs and process them using Log Parser (or some other tool), especially if output format is important. You also need expertise in writing those SQL queries. You can also use special-purpose scripts that one can find on the web and then analyze the output to make some sense out of those lengthy logs. Log Parser Studio is mainly designed for quick and easy processing of different logs for Exchange protocols. Once you launch it, you’ll notice tabs for different Exchange protocols, i.e. Microsoft Exchange ActiveSync (MAS), Exchange Web Services (EWS), Outlook Web App (OWA/HTTP) and others. Under those tabs there are tens of SQL queries written for specific purposes (description and other particulars of a query are also available in the main UI), which can be run with just one click!
Let’s get into the specifics of some of the cool features of Log Parser Studio …
Upon launching LPS, the first thing you will see is the Query Library preloaded with queries. This is where we manage all of our queries. The library is always available by clicking on the Library tab. You can load a query for review or execution using several methods. The easiest method is to simply select the query in the list and double-click it. Upon doing so the query will auto-open in its own Query tab. The Query Library is home base for queries. All queries maintained by LPS are stored in this library. There are easy controls to quickly locate desired queries & mark them as favorites for quick access later.
The initial library that ships with LPS is embedded in the application and created upon install. If you ever delete, corrupt or lose the library you can easily reset back to the original by using the recover library feature (Options | Recover Library). When recovering the library all existing queries will be deleted. If you have custom/modified queries that you do not want to lose, you should export those first, then after recovering the default set of queries, you can merge them back into LPS.
Depending on your need, the entire library or subsets of the library can be imported and exported either as the default LPS XML format or as SQL queries. For example, if you have a folder full of Log Parser SQL queries, you can import some or all of them into LPS’s library. Usually, the only thing you will need to do after the import is make a few adjustments. All LPS needs is the base SQL query and to swap out the filename references with ‘[LOGFILEPATH]’ and/or ‘[OUTFILEPATH]’ as discussed in detail in the PDF manual included with the tool (you can access it via LPS | Help | Documentation).
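As an illustration, here is a rough Python sketch of that swap. The ‘[LOGFILEPATH]’ token is the real one LPS expects, but the regex is a simplifying assumption about how your existing queries reference files - adjust it to your own query style:

```python
# Sketch: bulk-preparing existing Log Parser SQL queries for import into LPS
# by swapping hard-coded log file references for the '[LOGFILEPATH]' token.
import re

def tokenize_query(sql: str) -> str:
    # Assumption: queries reference a single quoted file path in the FROM clause.
    return re.sub(r"FROM\s+'[^']+'", "FROM '[LOGFILEPATH]'", sql, flags=re.IGNORECASE)

q = "SELECT TOP 10 cs-uri-stem, COUNT(*) FROM 'C:\\inetpub\\logs\\u_ex1401.log' GROUP BY cs-uri-stem"
print(tokenize_query(q))
# SELECT TOP 10 cs-uri-stem, COUNT(*) FROM '[LOGFILEPATH]' GROUP BY cs-uri-stem
```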
Remember that a well-written structured query makes all the difference between a successful query that returns the concise information you need vs. a subpar query which taxes your system, returns much more information than you actually need and in some cases crashes the application.
The art of creating great SQL/Log Parser queries is outside the scope of this post, however all of the queries included with LPS have been written to achieve the most concise results while returning the fewest records. Knowing what you want and how to get it with the least number of rows returned is the key!
You’ll find that LPS in combination with Log Parser 2.2 is a very powerful tool. However, if all you could do was run a single query at a time and wait for the results, you probably wouldn’t be making nearly as much progress as you could be. To address this, LPS supports both batch jobs and multithreaded queries.
A batch job is simply a collection of predefined queries that can all be executed with the press of a single button. From within the Batch Manager you can remove any single query or all queries, as well as execute them. You can also execute them by clicking the Run Multiple Queries button or the Execute button in the Batch Manager. Upon execution, LPS will prepare and execute each query in the batch. By default LPS will send ALL queries to Log Parser 2.2 as soon as each is prepared. This is where multithreading works in our favor. For example, if we have 50 queries set up as a batch job and execute the job, we’ll have 50 threads in the background all working with Log Parser simultaneously, leaving the user free to work with other queries. As each job finishes, the results are passed back to the grid or the CSV output based on the query type. Even in this scenario you can continue to work with other queries, search, modify and execute. As each query completes, its thread is retired and its resources freed. These threads are managed very efficiently in the background so there should be no issue running multiple queries at once.
Now what if we wanted the queries in the batch to run in sequence rather than concurrently, for performance or other reasons? This functionality is already built into LPS’s options. Just make the change in LPS | Options | Preferences by checking the ‘Process Batch Queries in Sequence’ checkbox. When checked, the first query in the batch is executed and the next query will not begin until the first one is complete. This process will continue until the last query in the batch has been executed.
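The two modes can be sketched in Python like this. run_query is a stand-in for handing a query to Log Parser 2.2; this is a conceptual model, not LPS code:

```python
# Sketch of the two batch-execution modes: the default fire-them-all
# multithreaded behavior vs. the one-at-a-time sequential option.
from concurrent.futures import ThreadPoolExecutor

def run_query(name):
    return f"{name}: done"  # stand-in for invoking Log Parser 2.2

batch = ["Top URLs", "HTTP errors", "Heavy users"]

def execute_batch(queries, in_sequence=False):
    if in_sequence:
        # Each query waits for the previous one to finish.
        return [run_query(q) for q in queries]
    # Default: every query gets its own background thread.
    with ThreadPoolExecutor(max_workers=len(queries)) as pool:
        return list(pool.map(run_query, queries))

print(execute_batch(batch))                    # all queries run concurrently
print(execute_batch(batch, in_sequence=True))  # one after another
```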
In conjunction with batch jobs, automation allows unattended, scheduled execution of batch jobs. For example, we can create a scheduled task that will automatically run a chosen batch job which also operates on a separate set of custom folders. This process requires two components: a folder list file (.FLD) and a batch list file (.XML). We create these ahead of time from within LPS. For more details on how to do that, please refer to the manual.
Many queries that return data to the Result Grid can be charted using the built-in charting feature. The basic requirements for charts are the same as Log Parser 2.2, i.e.
Keep the above requirements in mind when creating your own queries so that you will consciously write the query to include a number for column two. To generate a chart, click the chart button after a query has completed. For #2 above, even if you forgot to do so, you can drag any numbered column and drop it in the second column after the fact. This way, if you have multiple numbered columns, you can simply drag the one that you’re interested in into the second column and generate different charts from the same data. Again, for more details on the charting feature, please refer to the manual.
There are multiple keyboard shortcuts built-in to LPS. You can view the list anytime while using LPS by clicking LPS | Help | Keyboard Shortcuts. The currently included shortcuts are as follows:
Log Parser 2.2 has the ability to query multiple types of logs. Since LPS is a work in progress, only the most used types are currently available. Additional input and output types will be added when possible in upcoming versions or updates.
LPS provides full support for W3SVC/IIS, CSV and HTTP Error logs, and basic support for all built-in Log Parser 2.2 input formats. In addition, it includes some custom-written LPS formats, such as Microsoft Exchange-specific formats that are not available with the default Log Parser 2.2 install.
CSV and TXT are the currently supported output file types.
Want to skip all the details & just run some queries right now? Start here …
The very first thing Log Parser Studio needs to know is where the log files are, and the default location that you would like any queries that export their results as CSV files to be saved.
1. Setup your default CSV output path:
a. Go to LPS | Options | Preferences | Default Output Path.
b. Browse to and select the folder you would like to use for exported results.
c. Click Apply.
d. Any queries that export CSV files will now be saved in this folder. NOTE: If you forget to set this path before you start, the CSV files will be saved in %AppData%\Microsoft\Log Parser Studio by default, but it is recommended that you move this to another location.
2. Tell LPS where the log files are by opening the Log File Manager. If you try to run a query before completing this step, LPS will prompt and ask you to set the log path. Upon clicking OK on that prompt, you are presented with the Log File Manager. Click Add Folder to add a folder or Add File to add one or more files. When adding a folder you still must select at least one file so LPS will know which type of log we are working with. When doing so, LPS will automatically turn this into a wildcard (*.xxx), indicating that all matching logs in the folder will be searched.
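The wildcard behavior can be sketched like this (illustrative Python, not LPS code):

```python
# Sketch: selecting one log file in a folder makes LPS search all files of
# that type in the folder, by turning the selection into a wildcard.
from pathlib import PureWindowsPath

def folder_to_wildcard(selected_file: str) -> str:
    p = PureWindowsPath(selected_file)
    return str(p.parent / f"*{p.suffix}")

print(folder_to_wildcard(r"C:\inetpub\logs\u_ex140101.log"))
# C:\inetpub\logs\*.log
```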
You can easily tell which folder or files are currently being searched by examining the status bar at the bottom-right of Log Parser Studio. To see the full path, roll your mouse over the status bar.
NOTE: LPS and Log Parser handle multiple types of logs and objects that can be queried. It is important to remember that the type of log you are querying must match the query you are performing. In other words, when running a query that expects IIS logs, only IIS logs should be selected in the File Manager. Failure to do this (it’s easy to forget) will result in errors or unexpected behavior when running the query.
3. Choose a query from the library and run it:
a. Click the Library tab if it isn’t already selected.
b. Choose a query in the list and double-click it. This will open the query in its own tab.
c. Click the Run Single Query button to execute the query.
The query execution will begin in the background. Once the query has completed, there are two possible output targets: the result grid in the top half of the query tab, or a CSV file. Some queries return results to the grid while other, more memory-intensive queries are saved to CSV.
As a general rule, queries that may return very large result sets are probably best sent to a CSV file for further processing in Excel. Once you have the results, there are many features for working with them. For more details, please refer to the manual.
Have fun with Log Parser Studio, and always remember – There’s a query for that!
Kary Wall
Escalation Engineer, Microsoft Exchange Support
In Exchange 2010 we added a feature called the Allow/Block/Quarantine list (or ABQ for short). This feature was designed to help IT organizations control which of the growing number of Exchange ActiveSync-enabled devices are allowed to connect to their Exchange Servers. With this feature, organizations can choose which devices (or families of devices) can connect using Exchange ActiveSync (and conversely, which are blocked or quarantined).
Some of you may remember my previous post on this topic dealing with organizations that do not have Exchange 2010 and thus I wanted to show you the far better way you can do this in Exchange 2010 (which is also what you will see in Office 365 and Exchange Online if you are looking at our cloud-based offerings).
It is important to understand that the ABQ list is not meant to displace policy controls implemented using Exchange ActiveSync policies. Policy controls allow you to control and manage device features (such as remote wipe, PIN passwords, encryption, camera blocking, etc.) whereas the ABQ list is about controlling which devices are allowed to connect (for example, there may be a lot of devices that support EAS PIN policies, but some IT departments only want to allow certain devices to connect to limit support or testing costs). The easy takeaway is that Exchange ActiveSync policies allow you to limit device access by capabilities while the Allow/Block/Quarantine list allows you to control device access by device type. If you're curious as to which device operating systems support which policies, the Wikipedia article we blogged about is a good place to look.
When we designed the ABQ list, we talked to a lot of organizations to find out how all of you use (or wanted to use) this kind of technology. What we realized is that there is a continuum of organizations; from permissive organizations that let employees connect whatever device they have to their Exchange Server, all the way to restrictive organizations that only support specific devices. Since we always want to make our software as flexible for IT as possible (as we know there are a lot of you folks that are using our software in a lot of different ways) we created this feature so that no matter which type of organization you are (or even if you are one that is in between these two extremes) we could help meet your needs. Below are some descriptions and "how-to"s for using the ABQ list in these different ways.
Restrictive organizations follow a more traditional design where only a set of supported devices is allowed to connect to the Exchange server. In this case, the IT department chooses to allow only the particular devices it supports, and all other devices are blocked.
It's important to note that a restrictive organization is created by specifying a set of allowed devices and blocking the unknown.
Below is a flow chart of the logic.
Permissive organizations allow all (or most) devices to connect to their Exchange Server. In these cases, the ABQ list can help organizations block a particular device or set of devices from connecting. This is useful if there's a security vulnerability or if the device is putting a particularly heavy load on the Exchange server. In these cases, the IT department can identify the misbehaving device and block that device until a fix or update for that device brings it into compliance. All other devices, including the unknown devices, are given access. Below is a flowchart of that logic.
Of course, if you are limiting the devices that connect to your organization, there's almost always a need for an exception. Whether it's testing a new device before rolling it out to the organization as a supported device, or an exception made for an executive, we wanted to give you the ability to make an exception without allowing all users with that device to access your organization's email and PIM data. Below is a flowchart of that logic.
Quarantining devices is useful when an IT department wants to monitor new devices connecting to their organization. Both permissive and restrictive organizations may choose to employ this mechanism. In a permissive organization, quarantine can be used so that IT administrators know what devices, and which users, are making new connections. In restrictive organizations, this can be used to see who is trying to work around policy and also gauge demand from "Bring Your Own Device" (BYOD) users. Below is a flowchart of that logic. Note that you could also choose to quarantine at the device/device family level if you wanted (not shown in the diagram for simplicity's sake).
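Putting the theory together, the decision order can be sketched in Python. The rule storage and function below are invented purely for illustration - Exchange keeps these settings in Active Directory and exposes them through the ECP and EMS - but the precedence is the one described above: a personal exemption wins over a device-type rule, which wins over a device-family rule, which wins over the organization default for unknown devices:

```python
# Illustrative sketch of the ABQ decision order (not Exchange code).
def abq_decision(user, device_type, device_family,
                 exemptions, device_rules, family_rules, default="Allow"):
    if (user, device_type) in exemptions:       # personal exemption first
        return exemptions[(user, device_type)]
    if device_type in device_rules:             # then the specific device rule
        return device_rules[device_type]
    if device_family in family_rules:           # then the family rule
        return family_rules[device_family]
    return default                              # the "unknown device" action

device_rules = {"HTC-ST7377": "Allow"}
family_rules = {"Pocket PC": "Block"}
exemptions = {("ceo@contoso.com", "NewPhone"): "Allow"}

print(abq_decision("alice", "HTC-ST7377", "Pocket PC",
                   exemptions, device_rules, family_rules, "Quarantine"))  # Allow
print(abq_decision("bob", "OtherPPC", "Pocket PC",
                   exemptions, device_rules, family_rules, "Quarantine"))  # Block
print(abq_decision("carol", "Mystery", "Unknown",
                   exemptions, device_rules, family_rules, "Quarantine"))  # Quarantine
```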
Now that we've gone through the theory, let's talk about how we would do this in practice.
Note for all you Exchange Management Shell (EMS) gurus, you can also configure device access using PowerShell cmdlets if you prefer.
To create a new rule, select New from the Device Access Rules section of the ABQ page (#5 in the screenshot above).
When setting up a rule for a device, it is important to understand the difference between the "family" of the device and the specific device. This information is communicated as part of the EAS protocol and is reported by the device itself. In general, you can think of the device rule as applying only to the particular device type (like an HTC-ST7377 as shown in the image below) whereas a device family might be something broader, like "Pocket PC". This distinction between the specific (device type) and the general (device family) is important since many device manufacturers actually release the same device under different names on different carriers. For instance, the HTC Touch Pro was available on all four major US carriers as well as some of the regional ones, and that's just the USA, not to mention the other versions around the world. As you can see, making a rule for each of those different devices (which are all in the same family and effectively the same device) could mean a lot of extra work for IT, so we added the family grouping so that you don't have to make a separate rule for each device and can make good decisions about devices in bulk. It's important to note that when making a new rule you select either the device family or the model, but not both. Once you've selected the device or a device family, you can then choose what Exchange will do with that device (in this example, I'm just going to use a specific device).
This brings you to the New Device Access Rule page. The easiest way to set the rule is to select Browse, which will show you a list of all the devices or device families that have recently connected to your Exchange Server. Once you've selected the device or family, you can choose the action to take. This is where you can block the device if you are a permissive organization looking to limit a specific device for a specific reason, or where you can set access rules if you are a restrictive organization. In the latter case, you would create an allow rule for each supported device and then set the state for all unknown devices to block (we'll talk about how to set the action for unknown devices in the next section). Once you select the action (Allow access, Block access, or Quarantine), click Save and you're done! You can repeat this process for each rule you want to create. You can also have both block and allow rules simultaneously.
To access the rule for unknown devices, select Edit (#4 in Figure 5 above). On the Exchange ActiveSync Settings page, you can configure the action to take when Exchange sees a user trying to connect with a device that it does not recognize. By default, Exchange allows connections from all devices for users that are enabled for EAS. This example configures the Exchange organization to quarantine all unknown devices. This means that if there's no rule for the device (or device family) or if there's no exception for the particular user, then an unknown device will follow this behavior.
Quarantine notifications

We have the ability to specify who gets an email alert when a device is placed in quarantine. You can add one or more administrators (or users) or even a distribution group to this list of notified individuals. Anyone on this list will receive an email like the one shown in the screenshot below. The notification provides you information about who tried to connect the device, the device details and when the attempt was made.
Custom quarantine message

You can also set a custom message that will be delivered to the user in their Inbox and on their device. Although the device is in quarantine, we send this one message to the device so the user doesn't automatically call help desk because their device isn't syncing. The custom message is added to the notification email to the user that their device is in quarantine (see example image below).
The user and device will also now appear on the Quarantined Devices list on the ABQ configuration page (circled in red in the image below).
The device will stay in quarantine until an administrator decides to allow or block the device in quarantine. This can be done by selecting the device and then clicking on the Allow or Block buttons in Quarantined Devices (#1 in the screenshot below). This creates a personal exemption (the "one off case" mentioned earlier). If you wish to create an access rule that is to apply to all devices of the same family or model, you can select Create a rule for similar devices (#2 in the image below) to open a new, prepopulated rule.
Of course we realize that many organizations are dynamic and have changing requirements and policies. Any of the rules that have been set up can be changed dynamically by accessing the ABQ page in the ECP and editing, deleting, or adding the desired rule.
Adam Glick (@MobileGlick) Sr. Technical Product Manager
P.S. To read about Microsoft's licensing of Exchange ActiveSync, check out this article on Microsoft NewsCenter. Julia White also put up a more business focused blog in the UC Blog about the importance of EAS to Exchange 2010 customers.
This article is part 2 in a series that discusses namespace planning, load balancing principles, client connectivity, and certificate planning.
Unlike previous versions of Exchange, Exchange 2013 no longer requires session affinity at the load balancing layer.
To understand this statement better, and see how this impacts your designs, we need to look at how CAS2013 functions. From a protocol perspective, the following will happen:
Step 5 is the fundamental change that enables the removal of session affinity at the load balancer. For a given protocol session, CAS now maintains a 1:1 relationship with the Mailbox server hosting the user’s data. In the event that the active database copy is moved to a different Mailbox server, CAS closes the sessions to the previous server and establishes sessions to the new server. This means that all sessions, regardless of their origination point (i.e., CAS members in the load balanced array), end up at the same place: the Mailbox server hosting the active database copy. This is vastly different from previous releases – in Exchange 2010, if all requests from a specific client did not go to the same endpoint, the user experience was negatively affected.
The protocol used in step 6 depends on the protocol used to connect to CAS. If the client leverages the HTTP protocol, then the protocol used between the Client Access server and Mailbox server is HTTP (secured via SSL using a self-signed certificate). If the protocol leveraged by the client is IMAP or POP, then the protocol used between the Client Access server and Mailbox server is IMAP or POP.
Telephony requests are unique, however. Instead of proxying the request at step 6, CAS will redirect the request to the Mailbox server hosting the active copy of the user’s database, as the telephony devices support redirection and need to establish their SIP and RTP sessions directly with the Unified Messaging components on the Mailbox server.
Figure 1: Exchange 2013 Client Access Protocol Architecture
However, there is a concern with this architectural change. Since session affinity is not used by the load balancer, this means that the load balancer has no knowledge of the target URL or request content. All the load balancer uses is layer 4 information, the IP address and the protocol/port (TCP 443):
Figure 2: Layer 4 Load Balancing
The load balancer can use a variety of means to select the target server from the load balanced pool, such as round-robin (each inbound connection goes to the next target server in the circular list) or least-connection (the load balancer sends each new connection to the server that has the fewest established connections at that time).
Unfortunately, this lack of knowledge around target URL (or the content of the request), introduces complexities around health probes.
Exchange 2013 includes a built-in monitoring solution, known as Managed Availability. Managed Availability includes an offline responder. When the offline responder is invoked, the affected protocol (or server) is removed from service. To ensure that load balancers do not route traffic to a Client Access server that Managed Availability has marked as offline, load balancer health probes must be configured to check <virtualdirectory>/healthcheck.htm (e.g., https://mail.contoso.com/owa/healthcheck.htm). Note that healthcheck.htm does not actually exist within the virtual directories; it is generated in-memory based on the component state of the protocol in question.
If the load balancer health probe receives a 200 status response, then the protocol is up; if the load balancer receives a different status code, then Managed Availability has marked that protocol instance down on the Client Access server. As a result, the load balancer should also consider that end point down and remove the Client Access server from the applicable load balancing pool.
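You can verify what the load balancer's probe will see by issuing the same request yourself. A minimal sketch from PowerShell (mail.contoso.com is a placeholder namespace; substitute your own):

```powershell
# Issue the same request a load balancer health probe would send.
# In Windows PowerShell, a non-200 status surfaces as an error, hence the try/catch.
try {
    $response = Invoke-WebRequest -Uri "https://mail.contoso.com/owa/healthcheck.htm" -UseBasicParsing
    Write-Host "OWA healthy: status $($response.StatusCode)"
} catch {
    Write-Host "OWA marked down by Managed Availability (or endpoint unreachable)"
}
```

A 200 response here is exactly what the load balancer keys off when deciding whether to keep the Client Access server in the pool.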
Administrators can also manually take a protocol offline for maintenance, thereby removing it from the applicable load balancing pool. For example, to take the OWA proxy protocol on a Client Access server out of rotation, you would execute the following command:
Set-ServerComponentState <Client Access Server> -Component OwaProxy -Requestor Maintenance -State Inactive
For more information on server component states, see the article Server Component States in Exchange 2013.
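To verify the result, and to bring the protocol back into rotation once maintenance is complete, the companion commands look like this:

```powershell
# Check the current state of all components on the server
Get-ServerComponentState <Client Access Server> | Format-Table Component,State -AutoSize

# When maintenance is complete, reactivate the OWA proxy; the Requestor
# should match the one used when the component was taken offline
Set-ServerComponentState <Client Access Server> -Component OwaProxy -Requestor Maintenance -State Active
```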
If the load balancer did not utilize the healthcheck.htm in its health probe, then the load balancer would have no knowledge of Managed Availability’s removal of (or adding back) a server from the applicable load balancing pool. The end result is that the load balancer would have one view of the world, while Managed Availability would have another view of the world. In this situation, the load balancer could direct requests to a Client Access server that Managed Availability has marked down, which would result in a negative (or broken) user experience. This is why the recommendation exists to utilize healthcheck.htm in the load balancing health probes.
Now that we understand how health checks are performed, let’s look at four scenarios:
In this scenario, a single namespace is deployed for all the HTTP protocol clients (mail.contoso.com). The load balancer is operating at layer 4 and is not maintaining session affinity. The load balancer is also configured to check the health of the target Client Access servers in the load balancing pool; however, because this is a layer 4 solution, the load balancer is configured to check the health of only a single virtual directory (as it cannot distinguish OWA requests from RPC requests). Administrators will have to choose which virtual directory they want to target for the health probe; you will want to choose a virtual directory that is heavily used. For example, if the majority of your users utilize OWA, then targeting the OWA virtual directory in the health probe is appropriate.
Figure 3: Single Namespace with No Session Affinity
As long as the OWA health probe response is healthy, the load balancer will keep the target CAS in the load balancing pool. However, if the OWA health probe fails for any reason, then the load balancer will remove the target CAS from the load balancing pool for all requests associated with that particular namespace. In other words, in this example, health, from the perspective of the load balancer, is per-server, not per-protocol, for the given namespace. This means that if the health probe fails, all client requests will have to be directed to another server, regardless of protocol.
Figure 4: Single Namespace with No Session Affinity - Health Probe Failure
In this scenario, a single namespace is deployed for all the HTTP protocol clients (mail.contoso.com). The load balancer is configured to utilize layer 7, meaning SSL termination occurs and the load balancer knows the target URL. The load balancer is also configured to check the health of the target Client Access servers in the load balancing pool; in this case, a health probe is configured on each virtual directory.
As long as the OWA health probe response is healthy, the load balancer will keep the target CAS in the OWA load balancing pool. However, if the OWA health probe fails for any reason, then the load balancer will remove the target CAS from the load balancing pool for OWA requests. In other words, in this example, health is per-protocol; this means that if the health probe fails, only the affected client protocol will have to be directed to another server.
Figure 5: Single Namespace with Layer 7 (No Session Affinity) - Health Probe Failure
In this scenario, a single namespace is deployed for all the HTTP protocol clients (mail.contoso.com). The load balancer is configured to maintain session affinity (layer 7), meaning SSL termination occurs and the load balancer knows the target URL. The load balancer is also configured to check the health of the target Client Access servers in the load balancing pool; in this case, the health probe is configured on each virtual directory.
Figure 6: Single Namespace with Session Affinity - Health Probe Failure
This scenario combines the best of both worlds – provides per-protocol health checking, while not requiring complex load balancing logic.
In this scenario, a unique namespace is deployed for each HTTP protocol client; for example:
Figure 7: Multiple Namespaces with No Session Affinity
The load balancer is configured not to maintain session affinity (layer 4). The load balancer is also configured to check the health of the target Client Access servers in the load balancing pool; in this case, the health probes are effectively configured to target the health of each virtual directory, as each virtual directory is defined with a unique namespace. While the load balancer still has no knowledge of which URL is being accessed, the result is as if it did.
Figure 8: Multiple Namespaces with No Session Affinity - Health Probe Failure
The downside to this approach is that it introduces additional namespaces, additional VIPs (one per namespace), and increases the number of names added as subject alternative names on the certificate, which can be costly depending on your certificate provider. But, this does not introduce extra complexity to the end user – the only URL the user needs to know is the OWA URL. ActiveSync, Outlook, and Exchange Web Services clients will utilize Autodiscover to determine the correct URL.
The following table identifies the benefits and concerns with each approach:
Exchange 2013 introduces significant flexibility in your namespace and load balancing architecture. With load balancing, the decision ultimately comes down to balancing functionality vs. simplicity. The simplest solution lacks session affinity management and per-protocol health checking, but provides the capability to deploy a single namespace. At the other end of the spectrum, you can utilize session affinity management and per-protocol health checking with a single namespace, but at the cost of increased complexity. Or you could balance the functionality and simplicity spectrums, and deploy a load balancing solution that doesn’t leverage session affinity, but provides per-protocol health checking at the expense of requiring a unique namespace per protocol.
Ross Smith IV Principal Program Manager Office 365 Customer Experience
Update 4/16/2009: We have corrected an incorrect permissions entry in this blog post.
Heard of the term calendar concierge? Look up http://msexchangeteam.com/archive/2006/07/24/428390.aspx. In this post I'm going to talk about what happened to the Auto Accept Agent in Exchange Server 2007 and how you can define, schedule and manage resources easily and reliably.
Booking resources (e.g. a conference room) in conjunction with a meeting frequently leads to multiple meeting updates, general confusion and lost productivity for both organizers and attendees. The system should enable organizers to reliably find and book an available resource in one attempt and later confirm the reservation while minimizing attendee confusion. This is accomplished in Exchange Server 2007.
In Exchange 2003 there are two ways for customers to automate resource booking using Outlook and Exchange:
The Auto Accept Agent (AAA) is a server-side store event sink available in the Exchange 2003 SP1 timeframe. It provides automatic server-side processing of meeting requests sent to resource mailboxes that have been registered with the agent. The agent handles both requests and cancellations and sends responses to the meeting organizer. AAA uses EXOLEDB and CDOEX for notification of incoming messages and calendar item processing, respectively.
Direct booking is an Outlook-specific feature that uses the organizer's Outlook client (Outlook 2000 or later) to book an appointment directly into a resource mailbox schedule. The Outlook client of the person organizing the meeting performs all the necessary tasks, such as conflict checking and placing the reservation on the resource calendar. The resource mailbox must be manually configured with Outlook to support direct booking. It can be set up to automatically accept non-conflicting meeting requests and to allow or deny recurring bookings.
Exchange 2007 provides a reliable resource management solution that maps to information worker goals and increases organizational productivity. Exchange Server 2007 introduces changes to the resource booking architecture that address many of the concerns.
Resource management improvements have been made in the following areas.
Exchange 2007 identifies meeting resources as either a room or equipment and includes special attributes for each of these types of resources. For example, a room resource includes a capacity attribute. Custom attributes, such as audio-visual capabilities can also be defined. The Resource Booking attendant provides the following features:
The following table shows a comparison of the features available for direct booking in Outlook, using the Auto Accept Agent, and resource scheduling in Exchange Server 2007.
Define list of users who can book directly
Control how far requests are booked in the future
Define list of users who can book with approval, book outside policy
Set available hours, max duration
Custom meeting response text*
*It's per-server in AAA and per-mailbox in Exchange Server 2007
Before going further, I want to explain a few points one should be aware of while working with a resource mailbox in Exchange Server 2007.
A resource mailbox has the same structure as a user mailbox - it is composed of an Active Directory mailbox-enabled user object and an Exchange mailbox.
The major difference between a user and resource mailbox is that the resource mailbox:
In a later post, I will talk more about mixed Exchange Server 2007/Exchange Server 2003 environments and how legacy resource mailboxes can be converted to Exchange Server 2007 resources without interrupting the ability of legacy clients to send meeting requests to them. In this post I'm going to concentrate on a pure Exchange Server 2007 environment.
Resource mailbox scheduling and administration in Exchange Server 2007 is primarily handled by the Resource Booking Attendant. The calendar and Resource Assistant interact with each other. The Resource Assistant provides a call that determines if a mailbox is a resource or not.
This can be done either from the Shell (PowerShell/Exchange Management Shell) or by using the EMC.
Create a new mailbox using the EMC:
Expand Recipient Configuration > select Mailbox, and then click New Mailbox in the Mailbox section of the Actions task pane.
Figure 1: Click New Mailbox to start the New Mailbox Wizard
Note: In Exchange 2007, only disabled accounts can be used as resource mailboxes. When you create a new resource mailbox, the user account is disabled by default. If you click Existing User and then Browse, only disabled accounts are presented. Enabling the user account for a resource mailbox is NOT a supported configuration.
Use the Shell to create a new mailbox:
New-Mailbox -Name:"Resource1" -Alias:Resource1 -OrganizationalUnit:Users -Database:"Database Name" -UserPrincipalName:"Resource1@domain.com" -DisplayName:"Resource Mailbox" -Room
If you use the EMC, the end result is the same. In the final page of the wizard, the EMC actually shows you the Shell command that it uses.
Figure 2: The final page of the New Mailbox Wizard shows the Shell command used
This will create a new resource mailbox. At the command prompt, type:
Get-Mailbox Resource1 | fl *resource*
The output:
IsResource : True ResourceType : Room
Notice the ResourceType is Room.
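Equipment mailboxes are created the same way; only the type switch changes. A sketch mirroring the command above (the names are placeholders):

```powershell
# Create an equipment resource (e.g., a projector) by using -Equipment instead of -Room
New-Mailbox -Name:"Projector1" -Alias:Projector1 -OrganizationalUnit:Users -Database:"Database Name" -UserPrincipalName:"Projector1@domain.com" -DisplayName:"Projector" -Equipment

# ResourceType will now report Equipment rather than Room
Get-Mailbox Projector1 | fl *resource*
```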
At this point, the resource mailbox is not completely configured. If you attempt to book the resource, it will not automatically accept the meeting. After creating a resource mailbox, you'll need to configure it to auto-accept meetings to which it has been invited. Otherwise, meeting requests sent to the resource mailbox will sit in its calendar in a "tentative" state.
Note: To learn more about the Format-List cmdlet (the short form or alias fl is used here), please see this. Additionally, piping the output to fl *resource* will display all the attributes (of that mailbox) where the attribute name contains the string resource.
Let's check calendar settings for the mailbox:
Get-MailboxCalendarSettings Resource1 | fl
The output shows that AutomateProcessing is set to AutoUpdate by default:
AutomateProcessing : AutoUpdate
AutomateProcessing is the property that controls automatic processing of meeting requests. While it is set to AutoUpdate, the meeting organizer receives no response from the resource. In order to accept a meeting, one would have to log into the resource mailbox (using an account that has permissions to access it) and accept the request manually.
Using OWA or the Shell, you can configure a mailbox to automatically process meeting requests and cancellations.
Set-MailboxCalendarSettings Resource1 -AutomateProcessing:AutoAccept
Get-MailboxCalendarSettings Resource1 | fl
AutomateProcessing: AutoAccept
You can log on to the resource mailbox using OWA and configure the resource account to automatically process meeting requests and cancellations from the Options page. What account do you use to log into OWA? Because the account for a resource mailbox is disabled, you can use either of the following methods to log into the resource mailbox using OWA:
Explicit OWA logon to the resource mailbox with credentials for an account that has FullAccess permissions to the mailbox.
Use this command to grant FullAccess permissions to User1 for the Conference Room1 resource mailbox.
Add-MailboxPermission -Identity:Resource1 -AccessRights:fullaccess -User:user1
After User1 has been given FullAccess rights, in your browser, enter the explicit URL for the resource mailbox: http://servername/owa/Resource1@domain.com. When prompted for credentials, enter the username and password for an account that has FullAccess permission to the resource mailbox - in this case User1.
Log into OWA using an account that has FullAccess permissions to the resource mailbox and select Open Other Mailbox.
Enter the normal URL for OWA: http://servername/owa
When prompted for credentials, enter the username and password for an account that has FullAccess permissions to the resource mailbox.
In the upper right corner of the OWA page, click the dropdown next to the logged on username, select Open Other Mailbox and then enter the name of the resource mailbox.
Figure 3: Select Open Other Mailbox to open another mailbox in OWA
When you log into a resource mailbox, click Options and notice Resource Settings in the left pane, an option available only for resource mailboxes. You can set the following options on this page. Each of the options on this page can also be set via the Shell.
Exchange 2007 creates an All Rooms address list as seen in the following screenshot:
Figure 4: Exchange 2007 creates an All Rooms address list by default
In Outlook 2007, the All Rooms search feature appears as shown below. Instead of clicking the "To" button, which searches everything in the GAL, you can now click Rooms, and Outlook brings up the All Rooms address list. If you select a room, it is automatically added to the Resources well. This also works if the resource is added to the "Required" field.
Figure 5: Selecting Rooms when scheduling a meeting, Outlook 2007 displays all rooms from the Rooms address list
If you want to lock down the resource booking options, you have the option to do so. Let's take a look at a few of the options available to lock down resource mailboxes.
Open up the shell prompt and type:
Take a look at the different parameters that can be set for a resource mailbox. Most of the parameters are self-explanatory. Let's focus instead on some of the policy-based settings, such as RequestInPolicy, BookInPolicy, RequestOutOfPolicy, AllBookInPolicy, AllRequestOutOfPolicy, etc
The default options allow all users to book resources if they are within the set policies (i.e., up to 180 days in the future, up to 1440 minutes in duration, etc.), and will reject all other meetings. In the context of resource mailboxes, InPolicy and OutOfPolicy simply mean whether or not the meeting invitation matches any restrictions enabled on the resource mailbox. For example, if the MaximumDurationInMinutes value for the resource mailbox is 30 minutes, any meeting invitation longer than 30 minutes would be considered OutOfPolicy. Using the RequestOutOfPolicy parameter, you can manually add users that are allowed to request meetings that are not within the policy. If you really want to lock things down, you can set the AllBookInPolicy value to False and then manually add users to the BookInPolicy field or, more restrictively, to the RequestInPolicy field.
By default, the BookInPolicy parameter is configured for Everyone. If you leave BookInPolicy with the default setting and you configure the RequestInPolicy parameter with one or more SMTP addresses, the BookInPolicy setting overrides RequestInPolicy. The meeting is automatically accepted if it is within policy.
Compared to the options that were available with Auto Accept Agent, these settings allow you a lot of control to lock down and customize resource booking permissions.
You can't use the EMC to set resource booking policies. To run the Set-MailboxCalendarSettings cmdlet, the account you use must be delegated the Exchange Organization Administrator role.
To control who can schedule a resource, use the following parameters in conjunction with the Set-MailboxCalendarSettings command:
AllBookInPolicy, AllRequestInPolicy, AllRequestOutOfPolicy, BookInPolicy, RequestInPolicy, RequestOutOfPolicy, ForwardRequestsToDelegates, TentativePendingApproval, ResourceDelegates
To control when a resource can be scheduled, use the following parameters in conjunction with the Set-MailboxCalendarSettings command:
AllowConflicts, BookingWindowInDays, EnforceSchedulingHorizon, MaximumDurationInMinutes, AllowRecurringMeetings, ScheduleOnlyDuringWorkingHours, ConflictPercentageAllowed, MaximumConflictInstances
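For example, a sketch tightening when a resource can be scheduled (the values shown are illustrative, not recommendations):

```powershell
# Allow bookings no more than 90 days out, cap meetings at 60 minutes,
# restrict bookings to working hours, and disallow recurring meetings
Set-MailboxCalendarSettings Resource1 -BookingWindowInDays 90 -MaximumDurationInMinutes 60 -ScheduleOnlyDuringWorkingHours $true -AllowRecurringMeetings $false
```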
To control what meeting information will be visible on the resource's calendar, use the following parameters in conjunction with the Set-MailboxCalendarSettings command:
DeleteAttachments, DeleteComments, RemovePrivateProperty, DeleteSubject, DisableReminders, AddOrganizerToSubject, DeleteNonCalendarItems, OrganizerInfo
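As an illustration, the following sketch keeps the resource calendar easy to scan while stripping organizer content (the parameter values are illustrative):

```powershell
# Remove attachments and comments from booked items, strip the private flag,
# and stamp the organizer's name into the subject for easy scanning
Set-MailboxCalendarSettings Resource1 -DeleteAttachments $true -DeleteComments $true -RemovePrivateProperty $true -AddOrganizerToSubject $true
```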
To customize the response message that meeting organizers will receive, you can use the following parameters in the Set-MailboxCalendarSettings command:
AddAdditionalResponse, AdditionalResponse
Here's what restricting who can book looks like:
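For example, a sketch restricting booking to specific users (User1 and User2 are placeholder accounts):

```powershell
# Turn off automatic in-policy booking for everyone, then allow only User1
# to book in-policy meetings automatically; User2's in-policy requests will
# go to the resource delegates for approval
Set-MailboxCalendarSettings Resource1 -AllBookInPolicy $false -BookInPolicy "User1" -RequestInPolicy "User2"
```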
More information regarding booking policies can be found in Set-MailboxCalendarSettings cmdlet help.
To use the Exchange Management Shell to customize the response message for resource scheduling, run the following command
Set-MailboxCalendarSettings -Id ResourceMailbox01 -AddAdditionalResponse:$true -AdditionalResponse:<text>
As an example:
Set-MailboxCalendarSettings -Identity "ResourceMailbox01" -AddAdditionalResponse $true -AdditionalResponse "Add your response text here"
You can also set a custom response through OWA as mentioned previously.
Figure 6: Creating a custom response message using Outlook Web Access
In the context of resource mailboxes, the term delegate is used very loosely. You do not use the Delegates tab in the Outlook Tools > Options dialog box to configure the delegate even though the user(s) managing the resource mailbox might appear on the Delegates tab. These users appear on the Delegates tab because they have Send-on-behalf permissions to the resource mailbox. In scenarios where you are using the AllRequestOutOfPolicy, RequestOutOfPolicy, AllRequestInPolicy, or RequestInPolicy parameters, you need to use a delegate to respond to meetings that are not automatically accepted or declined by the resource mailbox.
Note: If you don't want to forward requests to a delegate (ForwardRequestsToDelegates = false), you can get away with granting FullAccess permissions for the resource mailbox to a regular user. This user can respond to meeting invites from the Inbox of the resource mailbox.
Because resource mailboxes use disabled accounts in Exchange 2007, the steps to create a delegate for a resource mailbox are a little different than in earlier versions of Exchange. To configure a delegate for an Exchange 2007 resource mailbox, use the following steps.
Run a command similar to the following to specify the delegate for the resource mailbox.
Set-MailboxCalendarSettings Resource1 -ResourceDelegates Delegate1
Note: Because of a bug in ResourceDelegates, the complete permissions for the delegate are not added to the resource mailbox. This is fixed in Exchange 2007 SP1. Therefore, you must also perform one of the following methods to provide the necessary permissions to the delegate.
Use either of the following methods to provide adequate permissions on the resource mailbox to the delegate:
Method 1: Provide Full Access permissions to the delegate
Assign FullAccess mailbox permission for the resource mailbox to a delegate:
Add-MailboxPermission Resource1 -AccessRights FullAccess -User Delegate1
Method 2: Modify the Free/Busy Permissions for the resource mailbox
To modify just the free/busy permissions on the Resource mailbox, use the following steps:
Assign FullAccess mailbox permission for the resource mailbox to an administrator:
Add-MailboxPermission Resource1 -AccessRights FullAccess -User Admin1
Note: Granting an administrator user Full Access permission instead of the delegate is a technique you can use for central administration of resource mailboxes. This allows the administrator to gain access to the resource mailbox while also allowing the delegate to process meetings for the resource.
Now check the delegates on the resource mailbox using this command:
Get-MailboxCalendarSettings Resource1 | fl
It should now show:
ResourceDelegates : {Delegate1}
The default setting for the ForwardRequestsToDelegates parameter is true. Therefore, meetings are forwarded to the delegates (listed under ResourceDelegates). If this is set to false, the delegate will not receive the forwarded invite.
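For example, to stop forwarding and have such requests remain pending in the resource mailbox:

```powershell
# Requests needing approval will no longer be forwarded to the
# accounts listed in ResourceDelegates
Set-MailboxCalendarSettings Resource1 -ForwardRequestsToDelegates $false
```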
Happy resource booking in Exchange Server 2007.
Nagesh Mahadev
Note: We have learned that customers using Exchange Server 2013 and Exchange Server 2007 co-existence can experience an issue causing Exchange Server 2013 CU6 databases to failover. Please refer to KB2997209 for the specific scenario impacted.
The Exchange team is announcing today the availability of our most recent quarterly servicing update to Exchange Server 2013. Exchange Server 2013 Cumulative Update 6 and updated UM Language Packs are now available on the Microsoft Download Center. CU6 represents the continuation of our Exchange 2013 servicing and builds upon Exchange 2013 CU5. The release includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved in Exchange 2013 CU6 can be found in KB 2961810. Customers running any previous release of Exchange 2013 can move directly to CU6 today. Customers deploying Exchange 2013 for the first time may skip previous releases and start their deployment with CU6 as well.
We would like to call your attention to a couple of items in particular about the CU6 release:
CU6 includes Exchange-related updates to Active Directory schema and configuration. For information on extending schema and configuring Active Directory, please review Prepare Active Directory and domains in Exchange 2013 documentation.
Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU6) or the prior (e.g., CU5) Cumulative Update release.
Note: Documentation may not be fully available at the time this post was published.
The Exchange team is announcing today the availability of Update Rollup 3 for Exchange Server 2010 Service Pack 3. Update Rollup 3 is the latest rollup of customer fixes available for Exchange Server 2010. The release contains fixes for customer reported issues and previously released security bulletins. Update Rollup 3 is not considered a security release as it contains no new previously unreleased security bulletins. A complete list of issues resolved in Exchange Server 2010 Service Pack 3 Update Rollup 3 may be found in KB2891587.
Note: The KB article may not be fully available at the time of publishing this post.
The release is now available on the Microsoft Download Center.
A few years back, a very detailed blog post was released on Troubleshooting Exchange 2007 Store Log/Database growth issues.
We wanted to revisit this topic with Exchange 2010 in mind. While the troubleshooting steps needed are virtually the same, we thought it would be useful to condense the steps a bit, make a few updates and provide links to a few newer KB articles.
The below list of steps is a walkthrough of an approach that would likely be used when calling Microsoft Support for assistance with this issue. It also provides some insight as to what we are looking for and why. It is not a complete list of every possible troubleshooting step, as some causes are simply not seen quite as much as others.
Another thing to note is that these steps are commonly used when we are seeing "rapid" or unexpected growth in the database file on disk, or in the amount of transaction logs being generated. An example of this is when an Administrator notices a transaction log file drive is close to running out of space, but had several GB free the day before. When looking through historical records kept, the Administrator notes that approximately 2 to 3 GB of logs have been backed up daily for several months, but the server is currently generating 2 to 3 GB of logs per hour. This is obviously a red flag for the log creation rate. The same principle applies to the database in scenarios where the rapid log growth is associated with new content creation.
In other cases, the database size or transaction log file quantity may increase, but signal other indicators of things going on with the server. For example, if backups have been failing for a few days and the log files are not getting purged, the log file disk will start to fill up and appear to have more logs than usual. In this example, the cause wouldn’t necessarily be rapid log growth, but an indicator that the backups which are responsible for purging the logs are failing and must be resolved. Another example is with the database, where retention settings have been modified or online maintenance has not been completing, therefore, the database will begin to grow on disk and eat up free space. These scenarios and a few others are also discussed in the “Proactive monitoring and mitigation efforts” section of the previously published blog.
It should be noted that in some cases, you may run into a scenario where the database size is expanding rapidly, but you do not experience log growth at a rapid rate. (As with new content creation in rapid log growth, we would expect the database to grow at a rapid rate with the transaction logs.) This is often referred to as database “bloat” or database “space leak”. The steps to troubleshoot this specific issue can be a little more invasive as you can see in some analysis steps listed here (taking databases offline, various kinds of dumps, etc.), and it may be better to utilize support for assistance if a reason for the growth cannot be found.
Once you have established that the rate of growth for the database and transaction log files is abnormal, we would begin troubleshooting the issue by doing the following steps. Note that in some cases the steps can be done out of order, but the below provides general suggested guidance based on our experiences in support.
Use Exchange User Monitor (Exmon) server side to determine if a specific user is causing the log growth problems.
If it appears that the user in Exmon is a ?, then this is representative of a Hub Transport-related problem generating the logs. Query the message tracking logs using the Message Tracking Log tool in the Exchange Management Console's Toolbox to check for any large messages that might be running through the system. See #15 for a PowerShell script to accomplish the same task.
With Exchange 2007 Service Pack 2 Rollup Update 2 and higher, you can use KB972705 to troubleshoot abnormal database or log growth by adding the described registry values. The registry values will monitor RPC activity and log an event if the thresholds are exceeded, with details about the event and the user that caused it. (These registry values are not currently available in Exchange Server 2010)
Check for any excessive ExCDO warning events related to appointments in the application log on the server (examples are 8230 or 8264 events). If recurring meeting events are found, then try to regenerate the calendar data server-side via a process called POOF. See http://blogs.msdn.com/stephen_griffin/archive/2007/02/21/poof-your-calender-really.aspx for more information on what this is.
Event Type: Warning
Event Source: EXCDO
Event Category: General
Event ID: 8230
Description: An inconsistency was detected in username@domain.com: /Calendar/<calendar item>.EML. The calendar is being repaired. If other errors occur with this calendar, please view the calendar using Microsoft Outlook Web Access. If a problem persists, please recreate the calendar or the containing mailbox.
Event Type: Warning Event ID : 8264 Category : General Source : EXCDO Type : Warning Message : The recurring appointment expansion in mailbox <someone's address> has taken too long. The free/busy information for this calendar may be inaccurate. This may be the result of many very old recurring appointments. To correct this, please remove them or change their start date to a more recent date.
Important: If 8230 events are consistently seen on an Exchange server, have the user delete and recreate that appointment to remove the corruption.
Collect and parse the IIS log files from the CAS servers used by the affected Mailbox server. You can use Log Parser Studio to easily parse IIS log files. Here you can look for repeated user sync attempts and suspicious activity; for example, a user with an abnormally high number of sync attempts and errors would be a red flag. If a user is found and suspected to be a cause of the growth, you can follow the suggestions given in steps 5 and 6.
Once Log Parser Studio is launched, you will see convenient tabs to search per protocol:
Some example queries for this issue would be:
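As one illustration (not taken from the original post), a Log Parser query along the following lines surfaces the heaviest ActiveSync users; the field names assume the default W3C logging fields are enabled:

```sql
-- Top 20 users hitting the ActiveSync virtual directory, by request count
SELECT TOP 20 cs-username AS UserName, cs(User-Agent) AS DeviceAgent, COUNT(*) AS Hits
FROM '[LOGFILEPATH]'
WHERE cs-uri-stem LIKE '%Microsoft-Server-ActiveSync%'
GROUP BY cs-username, cs(User-Agent)
ORDER BY Hits DESC
```

A user whose hit count dwarfs everyone else's, especially paired with non-200 status codes, is a good candidate for the checks in steps 5 and 6.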
If a suspected user is found via Exmon, the event logs, KB972705, or parsing the IIS log files, then do one of the following:
Set-CASMailbox -Identity <Username> -MAPIEnabled $False
If closing the client/devices, or killing their sessions seems to stop the log growth issue, then we need to do the following to see if this is OST or Outlook profile related:
Have the user launch Outlook while holding down the Ctrl key, which will prompt whether you would like to run Outlook in safe mode. If launching Outlook in safe mode resolves the log growth issue, then concentrate on what add-ins could be contributing to this problem.
For a mobile device, consider a full resync or a new sync profile. Also check for any messages in the drafts folder or outbox on the device. A corrupted meeting or calendar entry is commonly found to be causing the issue with the device as well.
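If a particular device is suspected, one quick test (a sketch; "testuser" is a placeholder) is to temporarily disable ActiveSync for the mailbox and watch whether log generation slows:

```powershell
# Temporarily block ActiveSync for the suspect mailbox ("testuser" is a placeholder)
Set-CASMailbox -Identity "testuser" -ActiveSyncEnabled $false

# Re-enable after the device has been resynced or its sync profile recreated
Set-CASMailbox -Identity "testuser" -ActiveSyncEnabled $true
```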
If you can gain access to the users machine, then do one of the following:
1. Launch Outlook to confirm the log file growth issue on the server.
2. If log growth is confirmed, do one of the following:
3. Follow the Running Process Explorer instructions in the article below to dump out the DLLs that are running within the Outlook process. Name the file username.txt. This helps check for any third-party Outlook add-ins that may be causing the excessive log growth. 970920 Using Process Explorer to List dlls Running Under the Outlook.exe Process http://support.microsoft.com/kb/970920
4. Check the Sync Issues folder for any errors that might be occurring
Let’s attempt to narrow this down further to see if the problem is truly in the OST or something possibly Outlook Profile related:
If renaming the OST causes the problem to recur, then recreate the user's profile to see if the issue might be profile related.
Ask Questions:
Check to ensure File Level Antivirus exclusions are set correctly for both files and processes per http://technet.microsoft.com/en-us/library/bb332342(v=exchg.141).aspx
If Exmon and the above methods do not provide the data necessary to determine root cause, then collect a portion of the Store transaction log files (100 would be a good start) during the problem period and parse them following the directions in http://blogs.msdn.com/scottos/archive/2007/11/07/remix-using-powershell-to-parse-ese-transaction-logs.aspx, looking for patterns such as high counts of IPM.Appointment. This gives you a high-level view of whether something is looping or messages are being sent at a high rate. Note: This tool may or may not provide any benefit depending on the data that is stored in the log files, but it will sometimes show MIME-encoded data that helps with your investigation.
If nothing is found by parsing the transaction log files, we can check for a rogue, corrupted, and large message in transit:
1. Check current queues against all HUB Transport Servers for stuck or queued messages:
get-exchangeserver | where {$_.IsHubTransportServer -eq "true"} | Get-Queue | where {$_.DeliveryType -eq "MapiDelivery"} | Select-Object Identity, NextHopDomain, Status, MessageCount | export-csv HubQueues.csv
Review queues for any that are in retry or have a lot of messages queued:
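One way to filter for those conditions directly in the shell (the 100-message threshold is an arbitrary starting point, not an official value):

```powershell
# Flag queues that are in Retry or unusually deep across all Hub Transport servers
Get-ExchangeServer | Where-Object { $_.IsHubTransportServer -eq $true } |
    Get-Queue |
    Where-Object { $_.Status -eq "Retry" -or $_.MessageCount -gt 100 } |
    Select-Object Identity, NextHopDomain, Status, MessageCount, LastError
```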
Export out message sizes in MB in all Hub Transport queues to see if any large messages are being sent through the queues:
get-exchangeserver | where {$_.ishubtransportserver -eq "true"} | get-message -ResultSize unlimited | sort-object Size -Descending | Select-Object Identity,Subject,Status,LastError,RetryCount,Queue,@{Name="Message Size MB";expression={$_.Size.ToMB()}} | export-csv HubMessages.csv
Export out message sizes in Bytes in all Hub Transport queues:
get-exchangeserver | where {$_.ishubtransportserver -eq "true"} | get-message -resultsize unlimited | Select-Object Identity,Subject,status,LastError,RetryCount,queue,size | sort-object -property size -descending | export-csv HubMessages.csv
2. Check Users Outbox for any large, looping, or stranded messages that might be affecting overall Log Growth.
get-mailbox -ResultSize Unlimited| Get-MailboxFolderStatistics -folderscope Outbox | Sort-Object Foldersize -Descending | select-object identity,name,foldertype,itemsinfolder,@{Name="FolderSize MB";expression={$_.folderSize.toMB()}} | export-csv OutboxItems.csv
Note: This does not get information for users that are running in cached mode.
Utilize the MSExchangeIS Client\Jet Log Record Bytes/sec and MSExchangeIS Client\RPC Operations/sec Perfmon counters to see if there is a particular client protocol that may be generating excessive logs. If a particular protocol is found to be higher than the others for a sustained period of time, consider shutting down the service hosting that protocol. For example, if Outlook Web Access is the protocol generating the potential log growth, stop the World Wide Web Publishing Service (W3SVC) to confirm that log growth stops. If it does, collecting IIS logs from the CAS/MBX Exchange servers involved will help provide insight into what action the user was performing that was causing this to occur.
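On Windows Server 2008/2008 R2 these counters can also be sampled from PowerShell with Get-Counter (a sketch; instance names vary per environment, so the wildcard captures all MSExchangeIS Client instances):

```powershell
# Sample both counters every 5 seconds for one minute and compare protocol instances
Get-Counter -Counter "\MSExchangeIS Client(*)\Jet Log Record Bytes/sec",
    "\MSExchangeIS Client(*)\RPC Operations/sec" -SampleInterval 5 -MaxSamples 12
```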
Run the following command from the Management shell to export out current user operation rates:
To export to CSV File:
get-logonstatistics | select-object username,windows2000account,identity,messagingoperationcount,otheroperationcount,progressoperationcount,streamoperationcount,tableoperationcount,totaloperationcount | where {$_.totaloperationcount -gt 1000} | sort-object totaloperationcount -descending | export-csv LogonStats.csv
To view realtime data:
get-logonstatistics | select-object username,windows2000account,identity,messagingoperationcount,otheroperationcount,progressoperationcount,streamoperationcount,tableoperationcount,totaloperationcount | where {$_.totaloperationcount -gt 1000} | sort-object totaloperationcount -descending | ft
Key things to look for:
In the example below, the Administrator account was storming the testuser account with email. Notice that there are two users active here: the Administrator, who is submitting all of the messages, and an entry whose Windows2000Account references a HUB server and whose Identity references testuser. The HUB server entry also has *no* UserName, which is a giveaway right there. This can give you a better understanding of which parties are involved in these high rates of operations.
UserName                : Administrator
Windows2000Account      : DOMAIN\Administrator
Identity                : /o=First Organization/ou=First Administrative Group/cn=Recipients/cn=Administrator
MessagingOperationCount : 1724
OtherOperationCount     : 384
ProgressOperationCount  : 0
StreamOperationCount    : 0
TableOperationCount     : 576
TotalOperationCount     : 2684

UserName                :
Windows2000Account      : DOMAIN\E12-HUB$
Identity                : /o=First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testuser
MessagingOperationCount : 630
OtherOperationCount     : 361
ProgressOperationCount  : 0
StreamOperationCount    : 0
TableOperationCount     : 0
TotalOperationCount     : 1091
Enable Perfmon/Perfwiz logging on the server. Collect data through the problem times and then review for any irregular activities. You can reference Perfwiz for Exchange 2007/2010 data collection here http://blogs.technet.com/b/mikelag/archive/2010/07/09/exchange-2007-2010-performance-data-collection-script.aspx
Run ExTRA (Exchange Troubleshooting Assistant) via the Toolbox in the Exchange Management Console to look for any functions (via FCL logging) that may be consuming excessive time within the store process. This needs to be launched during the problem period. http://blogs.technet.com/mikelag/archive/2008/08/21/using-extra-to-find-long-running-transactions-inside-store.aspx shows how to use FCL logging only, but it would be best to include Perfmon, Exmon, and FCL logging via this tool to capture the most data. The steps shown are valid for Exchange 2007 and Exchange 2010.
Export out Message tracking log data from affected MBX server.
Download the ExLogGrowthCollector script and place it on the MBX server that experienced the issue. Run ExLogGrowthCollector.ps1 from the Exchange Management Shell. Enter the MBX server name that you would like to trace and the start and end times, and click the Collect Logs button.
Note: What this script does is to export out all mail traffic to/from the specified mailbox server across all HUB servers between the times specified. This helps provide insight in to any large or looping messages that might have been sent that could have caused the log growth issue.
Copy/Paste the following data in to notepad, save as msgtrackexport.ps1 and then run this on the affected Mailbox Server. Open in Excel for review. This is similar to the GUI version, but requires manual editing to get it to work.
#Export Tracking Log data from affected server specifying Start/End Times
Write-Host "Script to export out Mailbox Tracking Log Information"
Write-Host "#####################################################"
Write-Host
$server = Read-Host "Enter Mailbox server name"
$start = Read-Host "Enter start date and time in the format of MM/DD/YYYY hh:mmAM"
$end = Read-Host "Enter end date and time in the format of MM/DD/YYYY hh:mmPM"
$fqdn = $(Get-ExchangeServer $server).Fqdn
Write-Host "Writing data out to csv file..... "
Get-ExchangeServer | where {$_.IsHubTransportServer -eq "True" -or $_.Name -eq "$server"} | Get-MessageTrackingLog -ResultSize Unlimited -Start $start -End $end | where {$_.ServerHostname -eq $server -or $_.ClientHostname -eq $server -or $_.ClientHostname -eq $fqdn} | Sort-Object TotalBytes -Descending | Export-Csv MsgTrack.csv -NoType
Write-Host "Completed!! You can now open the MsgTrack.csv file in Excel for review"
You can also use the Process Tracking Log Tool at http://blogs.technet.com/b/exchange/archive/2011/10/21/updated-process-tracking-log-ptl-tool-for-use-with-exchange-2007-and-exchange-2010.aspx to provide some very useful reports.
Save off a copy of the application and system logs from the affected server and review them for any events that could contribute to this problem.
Enable IIS extended logging for CAS and MB server roles to add the sc-bytes and cs-bytes fields to track large messages being sent via IIS protocols and to also track usage patterns (Additional Details).
Get a process dump of the store process during the time of the log growth. (Use this as a last measure once all prior activities have been exhausted, and prior to calling Microsoft for assistance. These issues are sometimes intermittent, and the quicker you can obtain any data from the server, the better, as this will help provide Microsoft with information on what the underlying cause might be.)
procdump -mp -s 120 -n 2 store.exe d:\DebugData
Open a case with Microsoft Product Support Services to get this data looked at.
2814847 - Rapid growth in transaction logs, CPU use, and memory consumption in Exchange Server 2010 when a user syncs a mailbox by using an iOS 6.1 or 6.1.1-based device
2621266 - An Exchange Server 2010 database store grows unexpectedly large
996191 - Troubleshooting Fast Growing Transaction Logs on Microsoft Exchange 2000 Server and Exchange Server 2003
Kevin Carker (based on a blog post written by Mike Lagase)
EDIT 8/12/2010: Added a note about the necessity to manually enable MSProxy in remote forest.
We are seeing some trends where quite a few customers are migrating mailboxes to a new Exchange organization in a different Active Directory (AD) forest. This blog post is aimed at explaining the fundamentals of what is required to move mailboxes across forests so that you can be prepared with the correct data, make better plans, and perform a migration without encountering painful problems. The blog post doesn't cover how to set up and configure a shared address space or Free/Busy.
After reading this blog post, you should have better understanding of:
The trends we are seeing currently show that companies are having more trouble understanding the different scenarios than performing the migration. There are several scenarios here, and Microsoft has tools, documentation, and scripts to assist in each one of them.
There are many reasons companies choose to have multiple forests or maybe find themselves with multiple forests, requiring cross-forest moves of users and mailboxes. For instance:
The common Active Directory topologies that are supported in Exchange 2010 are as follows:
Exchange deployment topologies vary due to organizational size and business complexity. Variations may include Single Forest, Resource Forest, Hybrid Forest, and Cross Forest topology. For purposes of discussion the following forest definitions will be used going forward:
Forest Name     | Active Directory user object status                                                   | Mailbox status
Exchange Forest | Enabled user object                                                                   | Mailbox enabled
Account Forest  | Enabled user object                                                                   | No mailbox-enabled objects
Resource Forest | Disabled user object (linked to a separate enabled user object in an Account Forest)  | Mailbox enabled
Hybrid Forest   | Both enabled and disabled user objects                                                | Both mailbox-enabled and non-mailbox-enabled objects
Most of the Cross-Org Move Mailbox scenarios are closely related to the Active Directory Forests involved in the migration. There are 3 major scenarios to be considered:
1. Move from Exchange Forest A to Exchange Forest B. This means that the user is a security principal in forest A and after he is moved to forest B, he is a security principal in forest B as well.
2. Move from Account Forest to Exchange Resource Forest.
3. Move from Exchange Resource Forest to Account Forest. This is the reverse of #2.
Cross-forest is when all users from the same organization are only contacts or mail enabled user objects in the other forest.
Below are some AD forest configuration examples. The forest scenarios don't necessarily imply there is a "move" or migration going on, some are long-term configurations.
A Resource Forest scenario is a deployment that has at least one Exchange Resource Forest hosting user mailboxes (but no enabled user accounts) and at least one other forest hosting the AD user accounts. In other words, Exchange is installed into an AD forest which is separate from the "user account" AD forest.
The user objects in the Exchange forest are never logged onto by an end user and are disabled.
Typically this scenario is maintained initially for co-existence while migrating and decommissioning a forest. It is different from a typical cross-forest scenario because there may be both enabled and disabled users in both forests for the same organization. In some cases, an organization may actually need to maintain the Hybrid Forest scenario over the long-term. While this is a supported scenario, it comes with additional complexity that must be addressed:
For more information refer to Understanding Federation
Cross-forest
Both forests contain mailboxes and user accounts and contacts. This type of configuration has user accounts always enabled and mailbox enabled, with a corresponding contact in the other forest. The following diagram depicts how different objects are represented in the corresponding forest:
For more information on forests related to Cross Org migrations, refer to http://msexchangeteam.com/archive/2006/11/02/430289.aspx
Depending on the current topology you have employed, you may find yourself planning to move users into the new forest and then following with moving their mailboxes as well. There are essentially three ways of planning to move your resources:
Note: Customers using the "out of the box" GALSync MA may not know how to customize ILM.
Important Note: Our recommendation on working with ADMT is to rely on the PrepareMoveRequest script to create the local user object for mailbox move, and then use ADMT to migrate SIDHistory and password and merge this into the MEU created by PrepareMoveRequest.ps1 script.
The point of doing ILM or the script first is to ensure the MEUs are all created with the correct msExch* attributes. This also ensures the following benefits:
At this point it doesn't matter if ADMT is used to migrate/merge the user objects all at once or in "batches" of user objects. ADMT can be controlled better to ensure only merging of SIDhistory and certain other mandatory attributes if it's not already populated.
Running ADMT first, without ensuring exclusions on msExch* attributes, can cause corrupted objects which the script cannot correctly convert with the -UseLocalObject switch.
Important Note: When SP1 ships, we will support running ADMT first and then the PrepareMoveRequest script later.
There are basically five steps involved in moving a mailbox across forests in Exchange 2010: Preparing Active Directory, Network Prerequisites, Administrator Permissions, Moving Mailboxes, and Clean-up. Each of these steps is a series of smaller steps that need to be taken in order to move a mailbox from one Exchange forest to an Exchange 2010 forest.
The first step in Cross Forest mailbox moves is preparing Active Directory. In the target forest a mail enabled user account must be created with certain attributes. The method used for creating the target account and setting the mandatory attributes is up to the organization administrator. ADMT and ILM can be used to synchronize/pull over the attributes from the source forest.
Exchange Provisioning using ILM 2007
If you deployed ILM for cross-forest global address list (GAL) synchronization, the recommended approach to creating the mail-enabled user is to use ILM 2007 Service Pack 1 (SP1) Feature Pack 1 (FP1) or Forefront Identity Manager 2010 (FIM) GALSync MA. We've created sample code that you can use to learn how to customize ILM to synchronize the source mailbox user and target mail user.
For more information, including how to download the sample code, refer to this link.
To deploy Exchange 2010 in a cross-forest topology, you must first install Exchange 2010 in the new forest. Then, provision the mail-enabled users representing the source mailboxes so that Exchange 2010 can move the mailbox and migrated users can see all addresses.
Deployment steps:
Note: The main purpose of the sample code is to encourage customers to customize, or add more functions to the sample code. The sample code is very basic and it only copies very basic attributes. Customers who rely on this sample code may find many attributes missing.
Note: The Availability service is supported only for Outlook 2007 clients and newer. If Outlook 2003 clients still exist in one of the forests, the only solution will be to deploy Exchange 2007 first in the Exchange 2010 organization (because adding it later is not possible if Exchange 2010 is deployed first) and implement the IOREPL tool to replicate Free/Busy system public folders to the Exchange 2007 server. The Free/Busy system public folder replicas can then be replicated using PF replication to your Exchange 2010 server. IOREPL will not replicate a public/system folder directly to an Exchange 2010 server.
For more information review:
Exchange Provisioning using ILM 2007 and FIM 2010: http://technet.microsoft.com/en-us/magazine/ff472471.aspx
Prepare-MoveRequest.ps1
It may be difficult for some customers to synchronize the prerequisite attributes for performing mailbox moves without using ILM. You may have some other solution in place that does not synchronize the required attributes, and does not allow customization. Small companies may not have a solution at all and simply wish to transition users from an existing forest (that is set to be obsolete) to a new, clean Exchange 2010 forest.
To solve this problem, the PrepareMoveRequest script has been written to prepare the AD target object and synchronize the required attributes for cross-forest moves to work. The script creates the target MEU if necessary, or synchronizes an existing MEU when possible.
The PrepareMoveRequest script prepares Exchange 2003, Exchange 2007, and Exchange 2010 mailbox users for migration to an Exchange 2010 forest.
For more information about using the sample script, refer to the following link.
The PrepareMoveRequest script supports 2 scenarios:
Note: The scenario that the script doesn't support is that some external process created a local user object and relies on the script to copy all the attributes and links from the remote MBX to the local user. This is the ADMT scenario described after this scenario.
In order to run the New-MoveRequest cmdlet to move a mailbox from an Exchange 2003/2007/2010 source forest to an Exchange 2010 target forest, the target forest must contain a valid MEU account with the set of AD attributes described in this section. These attributes are synchronized by the PrepareMoveRequest script.
There are certain mandatory attributes that should be present on the target mail user for New-MoveRequest to run properly. These attributes are always set by the PrepareMoveRequest script, either as they are taken from the source MBX, or as determined by the script. The attributes are listed here http://technet.microsoft.com/en-us/library/ee861103.aspx.
Process Overview: Run PrepareMoveRequest script first and then ADMT
To create the target mail enabled user account in an Exchange 2010 forest from the source mailbox enabled account in the source Exchange forest, the PrepareMoveRequest script must be executed in the target Exchange 2010 forest. The script pulls the mailbox enabled account attributes from the source forest.
The script can be used to provision one target MEU account at a time, but can also take data that is passed by pipeline as input to provision MEUs in bulk.
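For example, bulk provisioning via the pipeline might look like the following sketch; users.csv (with a DistinguishedName column) and the domain controller names are placeholders, and $Remote/$Local are the credential variables gathered in the steps below:

```powershell
# Bulk-provision MEUs by piping a CSV of source users into the script.
# users.csv and the DC names are hypothetical; adjust for your forests.
Import-Csv .\users.csv | ForEach-Object {
    .\Prepare-MoveRequest.ps1 -Identity $_.DistinguishedName `
        -RemoteForestDomainController "dc01.source.contoso.com" -RemoteForestCredential $Remote `
        -LocalForestDomainController "dc01.target.contoso.com" -LocalForestCredential $Local
}
```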
Since the PrepareMoveRequest script relies on the Update-Recipient task, which exists only in the Exchange Management Shell, all of the commands below need to be run in the Exchange Management Shell. Running them in plain Windows PowerShell will only result in errors.
$Local = Get-Credential
Input the target forest's Administrator Credentials in "Domain\User" and Password format.
Note: The account used should have permissions to call Update-Recipient which is available only to Exchange Enterprise Admin.
$Remote = Get-Credential
Input the Source forest's Administrator Credentials in "Domain\User" and Password format.
Note: Since the PrepareMoveRequest script will also update the source object's proxyAddresses to include the target object's legacyDN as X500 address, the account used to run this command should have Read and Write access for the source forest.
[PS] C:\>.\Prepare-MoveRequest.Ps1 -Identity "DN of a user from SourceForest" -RemoteForestDomainController "FQDN of Source DC" -RemoteForestCredential $Remote -LocalForestDomainController "FQDN of Target Forest DC" -LocalForestCredential $Local -TargetMailUserOU "Distinguished name of OU in TargetForest" -UseLocalObject
Note 1: You can use the -Verbose flag to check which attributes have been set if you want to get a detailed list of the attributes that were touched.
Note 2: You can use the -UseLocalObject parameter here.
Note: If the local matching object is found and UseLocalObject is not defined, the script will throw an error.
If you are sure that you didn't prepare a local object beforehand, you can omit this parameter to guard against accidentally overwriting an existing object.
legacyExchangeDN, mail, mailnickname, msExchmailboxGuid, proxyAddresses, X500, targetAddress, userAccountControl, userprincipalName
Note: Currently the Active Directory Migration Tool (ADMT) v3.1 is not supported on Windows 2008 R2 Servers. If you plan to use ADMT v3.1, it must be installed on Windows 2008 server.
Currently, several customers are running ADMT first and then running the PrepareMoveRequest script. When a user is created via ADMT, the PrepareMoveRequest script doesn't work since there are no proxyAddresses for the script to match the source forest user with the target forest user.
The recommended approach is to copy at least one proxy address using ADMT. However, if you use the -UseLocalObject parameter, the script will only copy the three mandatory attributes (msExchMailboxGUID, msExchArchiveGUID, msExchArchiveName). This is not very useful; customers can simply copy these three themselves.
Important Note: In SP1, we are adding the OverwriteLocalObject parameter. This is designed for the ADMT case. ADMT can copy the SIDhistory, password, and proxyAddresses, and the PrepareMoveRequest script can sync the other email attributes. In this case, it will copy attributes from source to target, so it's the opposite of UseLocalObject.
ADMT and Exchange Attributes
ADMT transfers Exchange attributes (e.g. homeMDB, homeMTA, showInAddressBook, msExch*), which make the target user look like a legacy mailbox in the target domain. This leaves the target account in an invalid state (e.g. homeMDB still points to the old forest), which is unexpected for the PrepareMoveRequest.ps1 script. To prevent this, Exchange attributes must be excluded from the ADMT migration.
The PrepareMoveRequest.ps1 script can identify and match existing accounts in the target forest based on their SMTP address (proxyAddresses attribute).
Note: It can also do this based on the MasterAccountSid, but this is only populated for accounts in a resource forest scenario.
More precisely, the script will use the existing target accounts if the following are true:
If all these are true, the script will copy further attributes needed (especially msExchMailboxGUID) to the target account so that the move request can process the accounts.
By default, ADMT 3.1 does NOT migrate the "mail", "msExchMailboxGuid" and "proxyAddresses" attributes, for security reasons. This is documented in the article below under "System attribute exclusion list".
Managing Users, Groups, and User Profiles: http://technet.microsoft.com/en-us/library/cc974331(WS.10).aspx
Important Note: When ADMT is run after ILM, because both forests have the same schema (attributes), unexpected Exchange attributes are brought over. This can cause issues: homeMDB, for example, is brought over and causes the MEU to look like a legacy mailbox, making it unusable.
To resolve the problem of ADMT being run first, and leaving the user in an invalid state for the PrepareMoveRequest.ps1 script, you can create the following VB script/ADMT COM object model to exclude all Exchange attributes from being migrated by ADMT.
Set o = CreateObject("ADMT.Migration")
o.SystemPropertiesToExclude = "homeMDB,homeMTA,showInAddressBook,msExchHomeServerName,mail,proxyAddresses,msExch*"
This allows update-recipient to find the target object and match it with the source account and merge the two together. For more information, refer to the below article:
You will find that several custom attributes are missing when you use ADMT to migrate users between two forests
http://support.microsoft.com/kb/937537
When mailboxes are moved from one Exchange 2010 forest to another Exchange 2010 forest, the process is handled through Exchange 2010 Client Access Servers using the MRSProxy service. The only port required to be open between the forests for MRSProxy to use HTTPS traffic is port 443. This works even if the source mailboxes are on 2003 or 2007 MBX servers as long as an Exchange 2010 CAS server exists in both organizations.
Note: The whole forest doesn't need to be Exchange 2010 in order to use MRSProxy. If there is at least one Exchange 2010 CAS in the forest (with access to the Mailbox servers and AD), it can be used as the MRSProxy for moves from a mostly Exchange 2003 or Exchange 2007 forest. This can be called the "Remote" scenario (or the "MRSProxy" scenario). By default, MRSProxy is disabled. To start MRSProxy on the Client Access server in the remote forest, you must modify the Client Access server's Web.config file. For more information, refer to http://technet.microsoft.com/en-us/library/ee732395.aspx. If CAS servers are behind an NLB, you should do this on all servers that can take the load.
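As a rough illustration of that Web.config change, the element looks approximately like the following; the attribute names and values shown here are an approximation, so verify the exact syntax against the TechNet article above before editing:

```xml
<!-- In the MRSProxy section of the CAS server's EWS Web.config (illustrative values) -->
<MRSProxyConfiguration IsEnabled="true" MaxMRSConnections="100" DataImportTimeout="00:01:00" />
```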
If the mailbox is being moved from a legacy Exchange forest, the Mailbox Replication Service needs the same TCP ports open that are needed for a normal local mailbox move. The TCP ports listed below are needed for a local mailbox move and must be open in both directions for mailboxes to be moved.
Note: This is more of the "Remote Legacy" scenario, but it can be used between two Exchange 2010 forests as well as between one Exchange 2010 forest and one Exchange 2003/2007 forest.
Port                  | Protocol / Use
808 (TCP)             | Mailbox Replication Service communication
53 (TCP)              | DNS
135 (TCP)             | RPC endpoint mapper
389 (TCP)             | LDAP
3268 (TCP)            | Global Catalog (LDAP)
1024 and higher (TCP) | Dynamic RPC ports; required if the mailbox store is not statically configured
88 (TCP)              | Kerberos
445 (TCP)             | Microsoft-DS service
443 (TCP)             | Mailbox Replication Proxy service; used to communicate with other Exchange 2010 Client Access servers via HTTPS
It is also necessary for servers in both forests to successfully resolve each other's names using DNS.
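A quick way to sanity-check both name resolution and port reachability from one forest toward the other (the hostnames below are placeholders):

```powershell
# Verify DNS resolution of a server in the opposite forest
[System.Net.Dns]::GetHostAddresses("cas01.source.contoso.com")

# Verify a required port is reachable (Connect throws if it is not)
$tcp = New-Object System.Net.Sockets.TcpClient
$tcp.Connect("cas01.source.contoso.com", 443)
$tcp.Close()
```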
For cross forest mailbox moves via the MRSProxy service, the source and target servers use certificates to encrypt the HTTPS traffic. The CAS Servers in the source and target forests must have installed a valid certificate that has been issued by a trusted certificate authority recognized by the server in the opposite forest.
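To review which certificates are installed on each CAS and when they expire, something like the following can be run on both sides (a sketch; whether the issuing CA is trusted by the opposite forest still has to be checked separately):

```powershell
# List installed Exchange certificates with their bound services and expiry dates
Get-ExchangeCertificate | Format-List Subject, CertificateDomains, Services, NotAfter, Issuer
```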
In order to move mailboxes across different Exchange forests the account used to initiate the move request in the target forest and the account used to access the mailbox and directory in the source forest must have the proper permissions. The permissions that are needed for the account in the source forest depend on the type of move.
The account must have the privileges made available by membership in the Recipient Administrators group.
The migration account must have the following permissions.
In the target Exchange 2010 organization the account used to create and manage the move request must be a member of the Organization Management or Recipient Management role groups, or have the following RBAC roles assigned either directly or through group membership:
Only the Move Mailbox role is required to have access to the New-MoveRequest command. However, the Mail Recipients and Mail Recipient Creation roles may also be required for creating and managing target accounts in preparation for mailbox moves.
There are two methods to move a mailbox across forests using Exchange 2010. The method used depends on the type of cross forest move. Both Remote and Remote Legacy cross forest moves can be performed from the Exchange Management Shell, but only Remote moves can be performed from the Exchange Management Console.
To create a new move request for a cross forest move using Exchange Management Console (EMC), the console must have a session open to both the target and source forests at the same time using the feature Add Exchange Forest. This makes it possible to maintain a connection to an Exchange 2010 server in the source forest, and an Exchange 2010 server in the target forest. With a connection to servers in both source and target organizations via the EMC, you will be able to identify a mailbox that is to be moved from the source forest, while initiating the move request on an Exchange 2010 server in the target forest.
To initiate a cross forest move with the Exchange Management Console, navigate to the Mailboxes folder in the Recipient Configuration node of the source forest, select the mailbox(es) to be moved, and then select New Remote Move Request. This starts the New Remote Move Request wizard.
To initiate a cross forest mailbox move in the Exchange Management Shell a New-MoveRequest command must be issued with Remote* parameters. Move requests issued without Remote* parameters are local moves within the same Exchange forest.
The New-MoveRequest cmdlet requires certain attributes to be synchronized between the source MBX account and the target MEU account in order for the mailbox move to succeed. This is described in the previous steps.
In the target domain, perform the move request by running the following cmdlet:
New-MoveRequest -Identity "Distinguished name of User in Target Forest" -RemoteLegacy -TargetDatabase "E2K10 Mailbox Database Name" -RemoteGlobalCatalog "FQDN of Source DC" -RemoteCredential $Remote -TargetDeliveryDomain "Target domain name"
After the move completes, the proxyAddresses and targetAddress attributes should have changed in the target forest. If the accounts are disabled in the target forest, enable them, set a password, and then log into OWA to test.
After an Online Mailbox Move (OMM), the source object is changed from MBX to MEU and the target object is changed from MEU to MBX.
For more information on performing cross forest moves in Exchange 2010, refer to Managing Move Requests.
When the MRS completes moving mailbox data from the source forest to the destination forest, it mailbox-enables the target user account. If the user account is disabled, it leaves the account disabled. The MRS then mailbox-disables the source account and converts it into a MEU account with a targetAddress that refers to an SMTP address of the target mailbox account. The New-MoveRequest cmdlet takes the TargetDeliveryDomain parameter, and this is what determines which targetAddress to stamp: MRS checks the list of proxyAddresses for one (not necessarily the primary SMTP address) that matches the FQDN specified in TargetDeliveryDomain, and stamps that address as the targetAddress on the MEU. We moved away from using the primary SMTP address because there is a need to maintain the primary SMTP address when moving mailboxes cross-forest, since it is part of a user's identity. When the primary SMTP address is the same in both forests, mail flow becomes more difficult.
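To make the targetAddress selection concrete, here is a small Python sketch of the matching logic described above. The addresses are hypothetical and this is only a simplified model; the real selection is done internally by MRS, not by any script you run.

```python
def pick_target_address(proxy_addresses, target_delivery_domain):
    """Return the first SMTP proxy address whose domain matches the
    TargetDeliveryDomain; this mirrors (in simplified form) how MRS
    chooses the targetAddress to stamp on the source-side MEU."""
    for address in proxy_addresses:
        # Proxy addresses look like "SMTP:user@contoso.com" (primary)
        # or "smtp:user@fabrikam.com" (secondary).
        prefix, _, email = address.partition(":")
        if prefix.lower() == "smtp" and \
                email.lower().endswith("@" + target_delivery_domain.lower()):
            return email
    return None
```

Note that a secondary (lowercase smtp) address is a valid match, which is why the primary SMTP address can stay unchanged across the move.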
If the source account is to be retired and removed from the source forest, the administrator must plan for this manual operation outside of the mailbox move operation.
As mentioned earlier, SP1 will include the PrepareMoveRequest script as part of the install. Additionally, we are fixing a couple of issues with that script:
The most common issues related to the PrepareMoveRequest script are listed below. These are not relevant if you have deployed the customized ILM, or if you have already run PrepareMoveRequest.
Finally, a few words of thanks:
I had the privilege of working with several SMEs from the Product Group, Consulting, UE and Support who helped me visualize, plan, develop and complete this blog. I would like to call out Ian Liu (Program Manager), who was instrumental in sharing his vision and being accessible at all times while this blog was being written. I also want to thank Daniel Talbot for his expertise on this subject and his many contributions to the blog. Other contributors to whom I'd like to express gratitude are Andrew Ehrensing and Huangjian Guo. Thanks to Ying Zhang, Ramon Infante, Jeff Kizner, Kweku Ako-Adjei, Ben Winzenz, Kristi Simmons, Bill Haenlin, Laura La Fleur, Nino Bilic, Jonathan Runyon and Ayla Kol for their review and feedback. Last but not least, I'd like to thank William Rall for his innumerable thorough reviews and feedback that helped shape this blog.
- Nagesh Mahadev
We know a lot of you have been waiting for this, and so it is with great excitement that we announce that Exchange Server 2013 RTM Cumulative Update 1 (CU1) has been released to the web and is available for immediate download! This is the first release using the new servicing model for Exchange Server 2013. In addition to this article, the Exchange 2013 RTM CU1 release notes are also available.
Note: Article links that may not have been available at the time of this post's publishing are now available. Updated Exchange 2013 documentation, including Release Notes, is now available on TechNet.
CU1 is the minimum version of Exchange 2013 required for on-premises coexistence with supported legacy Exchange Server versions. The final build number for CU1 is 15.0.620.29. For more information on coexistence, check out the Planning and Deployment documentation, and this Ignite webcast covering deployment of and coexistence with Exchange Server 2013.
Unlike previous versions, cumulative updates do not use the rollup infrastructure; cumulative updates are actually full builds of the product, meaning that when you want to deploy a new server, you simply use the latest cumulative update build available and do not necessarily need to apply additional Exchange Server updates.
Prior to upgrading or deploying the new build onto a server, you will need to update Active Directory. For those of you with a diverse Active Directory permissions model, you will want to perform the following steps:
As mentioned in the Exchange Server 2013 CU1 release notes, when you deploy the first Exchange 2013 Mailbox server in an existing Exchange organization, a new default Offline Address Book is created.
Figure 1: The new OAB as shown in an Exchange Server 2010 SP3 & 2013 CU1 environment
All existing clients that rely on an OAB will see this new default OAB the next time they look for an OAB update. This will cause these clients to perform a full OAB download. To prevent this from happening, you can configure your existing mailbox databases to explicitly point to the current default OAB prior to introducing the first Exchange 2013 server. You can do this one of two ways:
Figure 2: Modifying the default Offline Address Book at the database level in the EMC
Get-MailboxDatabase | Where {$_.OfflineAddressBook -eq $Null} | FT Name,OfflineAddressBook -AutoSize
If no values are returned then you are already prepared. However, if you need to configure some databases, then this next command will find all mailbox databases in an Exchange 2007 or Exchange 2010 environment with no default OAB defined at the database level, and it will set it to the current default OAB in the org.
Get-MailboxDatabase | Where {$_.OfflineAddressBook -eq $Null} | Set-MailboxDatabase -OfflineAddressBook (Get-OfflineAddressBook | Where {$_.IsDefault -eq $True})
To confirm all Exchange 2007/2010 mailbox databases now have a defined default OAB, re-run the first command. This time it should return no entries.
Once the preparatory steps are completed, you can then deploy CU1 and start your coexistence journey. If this is your first Exchange 2013 server deployment, you will need to deploy both an Exchange 2013 Client Access Server and an Exchange 2013 Mailbox Server into the organization. As explained in Exchange 2013 Client Access Server Role, CAS 2013 is simply an authentication and proxy/redirection server; all data processing (including the execution of remote PowerShell cmdlets) occurs on the Mailbox server. You can either deploy a multi-role server or each role separately (just remember if you deploy them separately, you cannot manage the Exchange 2013 environment until you install both roles).
If you already deployed Exchange 2013 RTM code and want to upgrade to CU1, you will run setup.exe /m:upgrade /IAcceptExchangeServerLicenseTerms from a command line after completing the Active Directory preparatory steps or run through the GUI installer. Deploying future cumulative updates will operate in the same manner.
As you start migrating your mailboxes to Exchange 2013, one thing you may notice is that your mailboxes appear to be larger post move.
As you can imagine, with millions of mailboxes hosted in Office 365, accurate storage reporting is essential, just like in your on-premises deployments. One of the lessons we learned there and brought back into the on-premises product is ensuring that mailbox usage statistics are more closely aligned with the actual capacity usage within the mailbox database. Because space is now reported more accurately, mailbox quota limits may need to be adjusted prior to the mailbox move so that users are not locked out of their mailboxes during the migration process.
Our improved space calculations may result in a mailbox’s reported size increasing by an average of 30% when the mailbox is moved from a legacy version of Exchange to Exchange 2013. For example, if a mailbox is reported as 10GB in size on Exchange Server 2010, it may be reported as 13GB once it has been moved to Exchange 2013. This does not mean that migrating to Exchange 2013 will increase your capacity footprint by 30% per mailbox; it only means that the statistics include more data about the space the mailbox consumes. 30% is an average value, based on what we have experienced in Exchange Online. Customers with pilot mailboxes should determine their own average increase, as some environments may see higher or lower values depending on the most prevalent type of email within their mailboxes. Again, this does not mean there will be an increase in the size of the database file on disk; only the attribution of space to each mailbox will increase.
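As a back-of-the-envelope helper for pre-move quota planning, the adjustment described above might be estimated like this. The 30% figure is only the Exchange Online average; substitute the factor you measure with your own pilot mailboxes.

```python
AVERAGE_REPORTED_INCREASE = 0.30  # Exchange Online average; measure your own

def estimated_reported_size_gb(legacy_reported_gb,
                               increase=AVERAGE_REPORTED_INCREASE):
    """Estimate the size a mailbox may *report* after a move to Exchange
    2013. The on-disk database footprint does not grow by this amount;
    only the per-mailbox space attribution does."""
    return legacy_reported_gb * (1 + increase)
```

A 10GB Exchange 2010 mailbox would thus be planned for at roughly 13GB of reported size, and its quota raised accordingly before the move.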
Exchange 2013 RTM CU1 includes a number of bug fixes and enhancements over the RTM release of Exchange 2013. Some of the more notable enhancements are identified below.
As discussed recently, an Address Book Policy Routing Agent has been included in Exchange 2013 RTM CU1. For all the juicy details, see Address Book Policies, Jamba Jokes and Secret Agents.
In Exchange 2010 you could not use a group as an owner for another group for membership management. Instead you had to deploy explicit permissions on groups or use a script as a workaround.
Since Exchange 2010’s release both Microsoft Support and the Exchange Product Group received resounding feedback on the need for this capability. The good news is that with Exchange 2013 RTM CU1 groups can once again be owners of groups for membership management.
In Exchange Server 2013 RTM there was no way to access Public Folder content through Outlook Web App. In CU1 you will now have access to Public Folders you have added as favorites via your favorites menu either in Outlook or Outlook Web App. However, this access is limited to Public Folders stored on Exchange Server 2013.
Figure 3: Adding a Public Folder as a favorite in Outlook Web App in Exchange Server 2013 RTM CU1
Remember, you cannot start creating Public Folders on Exchange Server 2013 until all users have been migrated to Exchange Server 2013. For how to migrate from legacy Public Folders to Exchange Server 2013 Public Folders, see Migrate Public Folders to Exchange 2013 From Previous Versions.
The Exchange Admin Center (EAC) has been enhanced and now includes Unified Messaging management, improvements in the migration UI that expose more migration options (reducing the gap between PowerShell and the UI), and overall improvements in the user experience for consistency and simplification based on customer feedback.
There have been several enhancements in the high availability and Managed Availability space. In particular:
On behalf of the Exchange Product Group, thanks again for your continued support and patience, and please keep the feedback coming.
Exchange Team
I’ve recently returned from TechEd North America 2011 in Atlanta, Georgia, where I had a wonderful time seeing old friends and new, and talking with customers and partners about my favorite subject: high availability in Exchange Server 2010. In case you missed TechEd, or were there and missed some sessions, you can download slide decks and watch presentations on Channel 9.
While at TechEd, I noticed several misconceptions that were being repeated around certain aspects of Exchange HA. I thought a blog post might help clear up these misconceptions.
But first let’s start with some terminology to make sure everyone understands the components, settings and concepts I’ll be discussing.
Active Manager – the component within the Microsoft Exchange Replication service that decides which copy of a mailbox database is active and that issues mount requests to the Information Store.
OK, that’s enough terms for now.
Now let’s discuss (and dispel) these misconceptions, in no particular order.
The actual name – Alternate Witness Server – originates from the fact that its intended purpose is to provide a replacement witness server for a DAG to use after a datacenter switchover. When you are performing a datacenter switchover, you’re restoring service and data to an alternate or standby datacenter after you’ve deemed your primary datacenter un-usable from a messaging service perspective.
Although you can configure an Alternate Witness Server (and corresponding Alternate Witness Directory) for a DAG at any time, the Alternate Witness Server will not be used by the DAG until part-way through a datacenter switchover; specifically, when the Restore-DatabaseAvailabilityGroup cmdlet is used.
The Alternate Witness Server itself does not provide any redundancy for the Witness Server, and DAGs do not dynamically switch witness servers, nor do they automatically start using the Alternate Witness Server in the event of a problem with the Witness Server.
The reality is that the Witness Server does not need to be made redundant. In the event the server acting as the Witness Server is lost, it is a quick and easy operation to configure a replacement Witness Server from either the Exchange Management Console or the Exchange Management Shell.
In this scenario, you have one DAG member in the primary datacenter (Portland) and one DAG member in a secondary datacenter (Redmond). Because this is a two-member DAG, it will use a Witness Server. Our recommendation is (and has always been) to locate the Witness Server in the primary datacenter, as shown below.
Figure 1: When extending a two-member DAG across two datacenters, locate the Witness Server in the primary datacenter
In this example, Portland is the primary datacenter because it contains the majority of the user population. As illustrated below, in the event of a WAN outage (which will always result in the loss of communication between some DAG members when a DAG is extended across a WAN), the DAG member in the Portland datacenter will maintain quorum and continue servicing the local user population, and the DAG member in the Redmond datacenter will lose quorum and will require manual intervention to restore to service after WAN connectivity is restored.
Figure 2: In the event of a WAN outage, the DAG member in the primary datacenter will maintain quorum and continue servicing local users
The reason for this behavior has to do with the core rules around quorum and DAGs, specifically:
Going back to our example, consider the placement of the Witness Server in a third datacenter, which would look like the following:
Figure 3: Locating the Witness Server in a third datacenter does not provide you with any different behavior
The above configuration does not provide you with any different behavior. In the event WAN connectivity is lost between Portland and Redmond, one DAG member will retain quorum and one DAG member will lose quorum, as illustrated below:
Figure 4: In the event of a WAN outage between the two datacenters, one DAG member will retain quorum
Here we have two DAG members; thus two voters. Using the formula V/2 + 1, we need at least 2 votes to maintain quorum. When the WAN connection between Portland and Redmond is lost, it causes the DAG’s underlying cluster to verify that it still has quorum.
In this example, the DAG member in Portland is able to place an SMB lock on the witness.log file on the Witness Server in Olympia. Because the DAG member in Portland is the locking node, it gets the weighted vote, and now therefore holds the two votes necessary to retain quorum and keep its cluster and DAG functions operating.
Although the DAG member in Redmond can communicate with the Witness Server in Olympia, it cannot place an SMB lock on the witness.log file because one already exists. And because it cannot communicate with the locking node, the Redmond DAG member is in the minority, it loses quorum, and it terminates its cluster and DAG functions. Remember, it doesn’t matter if the other DAG members can communicate with the Witness Server; they need to be able to communicate with the locking node in order to participate in quorum and remain functional.
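The quorum arithmetic in this example can be sketched in a few lines of Python. This is only an illustration of the V/2 + 1 rule and the weighted witness vote, not how the cluster service is actually implemented.

```python
def votes_needed(voter_count):
    """Quorum requires a majority: V/2 + 1 (integer division)."""
    return voter_count // 2 + 1

def partition_has_quorum(members_in_partition, holds_witness_lock,
                         total_members):
    """In an even-member DAG, the witness server is not a voter itself;
    the member holding the SMB lock on witness.log effectively gains an
    extra, weighted vote for its side of the partition."""
    votes = members_in_partition + (1 if holds_witness_lock else 0)
    return votes >= votes_needed(total_members)
```

In the two-member scenario above, the Portland side (one member plus the witness lock) has the two votes it needs, while the Redmond side (one member, no lock) does not.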
As documented in Managing Database Availability Groups on TechNet, if you have a DAG extended across two sites, we recommend that you place the Witness Server in the datacenter that you consider to be your primary datacenter based on the location of your user population. If you have multiple datacenters with active user populations, we recommend using two DAGs (also as documented in Database Availability Group Design Examples on TechNet).
In addition to Misconception Number 2, there is a related misconception that extending an even member DAG to two datacenters and using a witness server in a third enables greater resilience because it allows you to configure the system to perform a “datacenter failover.” You may have noticed that the term “datacenter failover” is not defined above in the Terminology section. From an Exchange perspective, there’s no such thing. As a result, no configuration can enable a true datacenter failover for Exchange.
Remember, failover is corrective action performed automatically by the system. There is no mechanism to achieve this for datacenter-level failures in Exchange 2010. While the above configuration may enable server failovers and database failovers, it cannot enable datacenter failovers. Instead, the process for recovering from a datacenter-level failure or disaster is a manual process called a datacenter switchover, and that process always begins with humans making the decision to activate a second or standby datacenter.
Activating a second datacenter is not a trivial task, and it involves much more than the inner workings of a DAG. It also involves moving messaging namespaces from the primary datacenter to the second datacenter. Moreover, it assumes that the primary datacenter is no longer able to provide a sufficient level of service to meet the needs of the organization. This is a condition that the system simply cannot detect on its own. It has no awareness of the nature or duration of the outage. Thus, a datacenter switchover is always a manual process that begins with the decision-making process itself.
Once the decision to perform a datacenter switchover has been made, performing one is a straightforward process that is well-documented in Datacenter Switchovers.
Datacenter Activation Coordination (DAC) mode has nothing whatsoever to do with failover. DAC mode is a property of the DAG that, when enabled, forces starting DAG members to acquire permission from other DAG members in order to mount mailbox databases. DAC mode was created to handle the following basic scenario:
In this scenario, the starting DAG members in the primary datacenter have no idea that a datacenter switchover has occurred. They still believe they are responsible for hosting active copies of databases, and without DAC mode, if they have a sufficient number of votes to establish quorum, they would try to mount their active databases. This would result in a bad condition, called split brain, occurring at the database level. In this condition, multiple DAG members that cannot communicate with each other each host an active copy of the same mailbox database. This is a very unfortunate condition that increases the chances of data loss and makes data recovery challenging and lengthy (recovery is possible, but it is definitely not a situation we would want any customer to be in).
The way databases are mounted in Exchange 2010 has changed. Yes, the Information Store still performs the mount, but it will only do so if Active Manager asks it to. Even when an administrator right-clicks a mailbox database in the EMC and selects Mount Database, it is Active Manager that provides the administrative interface for that task, and performs the RPC request into the Information Store to perform the mount operation (even on Mailbox servers that are not members of a DAG).
Thus, when every DAG member starts, it is Active Manager that decides whether or not to send a mount request for a mailbox database to the Information Store. When a DAG is enabled for DAC mode, this startup and decision-making process by Active Manager is altered. Specifically, in DAC mode, a starting DAG member must ask for permission from other DAG members before it can mount any databases.
DAC mode works by using a bit stored in memory by Active Manager called the Datacenter Activation Coordination Protocol (DACP). That’s a very fancy name for something that is simply a bit in memory set to either a 1 or a 0. A value of 1 means Active Manager can issue mount requests, and a value of 0 means it cannot.
The starting bit is always 0, and because the bit is held in memory, any time the Microsoft Exchange Replication service (MSExchangeRepl.exe) is stopped and restarted, the bit reverts to 0. In order to change its DACP bit to 1 and be able to mount databases, a starting DAG member needs to either:
Be able to communicate with another DAG member that has a DACP bit set to 1; or
Be able to communicate with all of the other members of the DAG.
If either condition is true, Active Manager on a starting DAG member will issue mount requests for the active databases copies it hosts. If neither condition is true, Active Manager will not issue any mount requests.
Reverting back to the intended DAC mode scenario, when power is restored to the primary datacenter without WAN connectivity, the DAG members starting up in that datacenter can communicate only with each other. And because they are starting up from a power loss, their DACP bits will be set to 0. As a result, none of the starting DAG members in the primary datacenter is able to meet either of the conditions above, and they are therefore unable to change their DACP bits to 1 and issue mount requests.
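The DACP decision just described can be modeled in a few lines of Python. This is an illustrative sketch with hypothetical member names and data structures; the real bit lives in Active Manager's memory.

```python
def may_issue_mounts(reachable_members, all_members, dacp_bits):
    """Decide whether a starting DAC-mode DAG member may flip its DACP
    bit to 1 and issue mount requests.

    reachable_members: set of members this node can communicate with
    all_members: set of every member in the DAG
    dacp_bits: dict mapping member name -> 0 or 1
    """
    # Condition 1: reach any member whose DACP bit is already 1.
    if any(dacp_bits.get(m) == 1 for m in reachable_members):
        return True
    # Condition 2: reach every member of the DAG.
    if set(all_members) <= set(reachable_members):
        return True
    return False
```

In the power-restoration scenario, the primary-datacenter members can reach only each other, all with DACP bits of 0, so both conditions fail and no databases are mounted.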
So that’s how DAC mode prevents split brain at the database level. It has nothing whatsoever to do with failovers, and therefore leaving DAC mode disabled will not enable automatic datacenter failovers.
By the way, as documented in Understanding Datacenter Activation Coordination Mode on TechNet, a nice side benefit of DAC mode is that it also provides you with the ability to use the built-in Exchange site resilience tasks.
This is a case where two separate functions are being combined to form this misperception: the AutoDatabaseMountDial setting and a feature known as Incremental Resync (aka Incremental Reseed v2). These features are actually not related, but they appear to be because they deal with roughly the same number of log files on different copies of the same database.
When a failure occurs in a DAG that affects the active copy of a replicated mailbox database, a passive copy of that database is activated one of two ways: either automatically by the system, or manually by an administrator. The automatic recovery action is based on the value of the AutoDatabaseMountDial setting.
As documented in Understanding Datacenter Activation Coordination Mode, this dial setting is the administrator’s way of telling a DAG member the maximum number of log files that can be missing while still allowing its database copies to be mounted. The default setting is GoodAvailability, which translates to 6 or fewer logs missing. This means if 6 or fewer log files never made it from the active copy to this passive copy, it is still OK for the server to mount this database copy as the new active copy. This scenario is referred to as a lossy failover, and it is Exchange doing what it was designed to do. Other settings include BestAvailability (12 or fewer logs missing) and Lossless (0 logs missing).
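The dial settings map to simple thresholds, which can be expressed as follows. This is an illustrative Python sketch of the mount decision, not Active Manager's actual code.

```python
# Maximum number of missing log files each dial setting tolerates.
AUTO_DATABASE_MOUNT_DIAL = {
    "Lossless": 0,
    "GoodAvailability": 6,   # the default
    "BestAvailability": 12,
}

def copy_may_auto_mount(dial, missing_log_count):
    """Would the system allow this passive copy to mount automatically,
    given how many logs never made it from the failed active copy?"""
    return missing_log_count <= AUTO_DATABASE_MOUNT_DIAL[dial]
```

With the default GoodAvailability setting, a copy missing six logs mounts (a lossy failover); a copy missing seven does not, and administrator intervention is required.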
After a passive copy has been activated in a lossy failover, it will create log files continuing the log generation sequence based on the last log file it received from the active copy (either through normal replication, or as a result of successful copying during the ACLL process). To illustrate this, let’s look at the scenario in detail, starting before a failure occurs.
We have two copies of DB1; the active copy is hosted on EX1 and the passive copy is hosted on EX2. The current settings and mailbox database copy status at the time of failure are as follows:
At this point, someone accidentally powers off EX1, and we have a lossy failover in which DB1\EX2 is mounted as the new active copy of the database. Because E0000000006 is the last log file DB1\EX2 has, it continues the generation stream, creating log files E0000000007, E0000000008, E0000000009, E0000000010, and so forth.
An administrator notices that EX1 is turned off and they restart EX1. EX1 boots up and among other things, the Microsoft Exchange Replication service starts. The Active Manager component, which runs inside this service, detects that:
Any time a lossy failover occurs where the original active copy may still be viable for use, there is divergence in the log stream that the system must deal with. This state causes DB1\EX1 to automatically invoke a process called Incremental Resync, which is designed to deal with divergence in the log stream after a lossy failover has occurred. Its purpose is to resynchronize database copies so that, when certain failure conditions occur, you don’t have to perform a full reseed of a database copy.
In this example, divergence occurred with log generation E0000000007, as illustrated below:
Figure 5: Divergence in the log stream occurred with log E0000000007
DB1\EX2 received generations 1 through 6 from DB1\EX1 when DB1\EX1 was the active copy. But a failover occurred, and logs 7 through 10 were never copied from EX1 to EX2. Thus, when DB1\EX2 became the active copy, it continued the log generation sequence from the last log that it had, log 6. As a result, DB1\EX2 generated its own logs 7-10 that now contain data that is different from the data contained in logs 7-10 that were generated by DB1\EX1.
To detect (and resolve) this divergence, the Incremental Resync feature starts with the latest log generation on each database copy (in this example, log file 10), and it compares the two different log files, working back in the sequence until it finds a matching pair. In this example, log generation 6 is the last log file that is the same on both systems. Because DB1\EX1 is now a passive copy, and because its logs 7 through 10 are diverged from logs 7 through 10 on DB1\EX2, which is now the active copy, these log files will be thrown away by the system. Of course, this does not represent lost messages because the messages themselves are recoverable through the Transport Dumpster mechanism.
Then, logs 7 through 10 on DB1\EX2 will be replicated to DB1\EX1, and DB1\EX1 will be a healthy up-to-date copy of DB1\EX2, as illustrated below:
Figure 6: Incremental Resync corrects divergence in the log stream
I should point out that I am oversimplifying the complete Incremental Resync process, and that it is more complicated than what I have described here; however, for purposes of this discussion only a basic understanding is needed.
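With that caveat in mind, the oversimplified walk-back can still be modeled in a few lines of Python. Log contents here are stand-in strings keyed by generation number; the real Incremental Resync works on the database and transaction log files themselves.

```python
def last_common_generation(active_logs, passive_logs):
    """Walk back from the newest generation until the two copies' log
    files match; that generation is the divergence point."""
    gen = min(max(active_logs), max(passive_logs))
    while gen > 0 and active_logs.get(gen) != passive_logs.get(gen):
        gen -= 1
    return gen

def incremental_resync(active_logs, passive_logs):
    """Discard the passive copy's diverged logs and replace them with
    the active copy's logs past the divergence point."""
    common = last_common_generation(active_logs, passive_logs)
    resynced = {g: c for g, c in passive_logs.items() if g <= common}
    resynced.update({g: c for g, c in active_logs.items() if g > common})
    return resynced
```

In the example, both copies share generations 1 through 6, each has its own diverged 7 through 10, so the divergence point is found at generation 6 and the passive copy ends up identical to the active one.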
As we saw in this example, even though DB1\EX2 was missing four log files, it was still able to mount as the new active database copy because the number of missing log files was within EX2’s configured value for AutoDatabaseMountDial. And we also saw that, in order to correct divergence in the log stream after the lossy failover, the Incremental Resync function threw away four log files.
But the fact that both operations dealt with four log files does not make them related, nor does it mean that the system is throwing away log files based on the AutoDatabaseMountDial setting.
To help understand why these are really not related functions, and why AutoDatabaseMountDial does not throw away log files, consider the failure scenario itself. AutoDatabaseMountDial simply determines whether a database copy will mount during activation based on the number of missing log files. The key here is the word missing. We’re talking about log files that have not been replicated to this activated copy. If they have not been replicated, they don’t exist on this copy, and therefore, they cannot be thrown away. You can’t throw away something you don’t have.
It is also important to understand that the Incremental Resync process can only work if the previous active copy is still viable. In our example, someone accidentally shut down the server, and typically, that act should not adversely affect the mailbox database or its log stream. Thus, it left the original active copy intact and viable, making it a great candidate for Incremental Resync.
But let’s say instead that the failure was actually a storage failure, and that we’ve lost DB1\EX1 altogether. Without a viable database, Incremental Resync can’t help here, and all you can do to recover is to perform a reseed operation.
So, as you can see:
This has been followed by statements like:
a Hub Transport server with 16 GB of memory runs twice as slow as a Hub Transport server with 8 GB of memory, and the Exchange 2010 server roles were optimized to run with only 4 to 8 GB of memory.
This misconception isn’t directly related to high availability, per se, but because scalability and cost all factor into any Exchange high availability solution, it’s important to discuss this, as well, so that you can be confident that your servers are sized appropriately and that you have the proper server role ratio.
It is also important to address this misconception because it’s blatantly wrong. You can read our recommendations for memory and processors for all server roles and multi-role servers in TechNet. At no time have we ever said to limit memory to 8 GB or less on a Hub Transport or Client Access server. In fact, examining our published guidance will show you that the exact opposite is true.
Consider the recommended maximum number of processor cores we state that you should have for a Client Access or Hub Transport server. It’s 12. Now consider that our memory guidance for Client Access servers is 2 GB per core and for Hub Transport it is 1 GB per core. Thus, if you have a 12-core Client Access server, you’d install 24 GB of memory, and if you had a 12-core Hub Transport server, you would install 12 GB of memory.
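That per-core guidance is simple multiplication, sketched here with the figures cited above. Always check the current published recommendations for your Exchange version rather than hard-coding these numbers.

```python
# Published per-core memory guidance (GB per processor core), per the
# recommendations discussed above.
MEMORY_PER_CORE_GB = {"ClientAccess": 2, "HubTransport": 1}
RECOMMENDED_MAX_CORES = 12

def recommended_memory_gb(role, cores):
    """Recommended RAM for a single-role server with the given core count."""
    return cores * MEMORY_PER_CORE_GB[role]
```

A 12-core Client Access server thus lands at 24 GB and a 12-core Hub Transport server at 12 GB, well above the mythical 8 GB ceiling.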
Exchange 2010 is a high-performance, highly-scalable, resource-efficient, enterprise-class application. In this 64-bit world of ever-increasing socket and core count and memory slots, of course Exchange 2010 is designed to handle much more than 4-8 GB of memory.
Microsoft’s internal IT department, MSIT, knows first-hand how well Exchange 2010 scales beyond 8 GB. As detailed in the white paper, Exchange Server 2010 Design and Architecture at Microsoft: How Microsoft IT Deployed Exchange Server 2010, MSIT deployed single-role Hub Transport and Client Access servers with 16 GB of memory.
It has been suggested that a possible basis for this misconception is a statement we have in Understanding Memory Configurations and Exchange Performance on TechNet that reads as follows:
Be aware that some servers experience a performance improvement when more memory slots are filled, while others experience a reduction in performance. Check with your hardware vendor to understand this effect on your server architecture.
The reality is that the statement is there because if you fail to follow your hardware vendor’s recommendations for memory layout, you can adversely affect the performance of the server. This statement, while it appears in Exchange documentation, has nothing whatsoever to do with Exchange or any other specific application. It’s there because server vendors have specific configurations for memory based on a variety of elements, such as chipset, type of memory, socket configuration, processor configuration, and more. By no means does it mean that if you add more than 8 GB, Exchange performance will suffer. It just means you should make sure your hardware is configured correctly.
As stated in the article, and as mentioned above:
This misconception is really related more to Misconception Number 5 than to high availability, because again it’s addressing the scalability of the solution itself. Like Misconception Number 5, this one is also blatantly wrong.
The fact is, a properly sized two-member DAG can host thousands of mailboxes, scaling far beyond 250 users. For example, consider the HP E5000 Messaging System for Exchange 2010, which is a pre-configured solution that uses a two-member DAG to provide high availability solutions for customers with a mailbox count ranging from 250 up to 15,000.
Ultimately, the true size and design of your DAG will depend on a variety of factors, such as your high availability requirements, your service level agreements, and other business requirements. When sizing your servers, be sure to use the guidance and information documented in Understanding Exchange Performance, as it will help ensure your servers are sized appropriately to handle your organization’s messaging workload.
Have you heard any Exchange high availability misconceptions? Feel free to share the details with me in email. Who knows, it might just spawn another blog post!
For more information on the high availability and site resilience features of Exchange Server 2010, check out these resources:
Scott Schnoll
The Exchange team blog article OAB in Exchange Server 2013 introduced the new Offline Address Book (OAB) generation and distribution architecture in Exchange Server 2013. Take a few moments to visit the article if you haven’t seen it yet or re-visit it for a quick refresher.
OAB management and administration are different in Exchange 2013 because of the architecture changes. Additionally, the new Exchange Admin Center does not currently have options for managing OABs. This means that, at this time, you will need to use the Exchange Management Shell for OAB-related tasks.
This article takes you through commonly performed tasks in OAB administration and includes a couple of real-life scenarios to help you understand the tasks better.
Note: If you are in a multi-forest Active Directory environment, make sure ViewEntireForest is enabled in the Shell session; otherwise, some of the commands in this article won’t return any output.
Command to enable ViewEntireForest:
Set-ADServerSettings -ViewEntireForest $true
Creating a new OAB in Exchange 2013 no longer uses the -Server parameter. To create a new OAB, you only need to specify the address lists it should include.
The following example creates an OAB for the address list named “Global Address List FAB”:
New-OfflineAddressBook -Name OAB-FAB -AddressLists "Global Address List FAB"
The arbitration mailboxes in Exchange Server 2013 are assigned certain persisted capabilities that define the purpose/function of the arbitration mailbox.
An arbitration mailbox with the persisted capability “OrganizationCapabilityOABGen” is responsible for OAB generation. We will refer to this mailbox as the “Organization Mailbox” throughout the article.
The Exchange Server 2013 Mailbox server hosting the Organization Mailbox generates all OABs defined in the environment.
For a non-DAG environment, use the following command to identify the OAB generation servers:
Get-Mailbox -Arbitration | where {$_.PersistedCapabilities -like "*oab*"} | ft name,servername
For a DAG environment, identifying OAB generation server(s) is a two-step process.
Step1: Identify the mailbox database hosting organization mailbox with OAB Gen capability.
Use the following command to list the arbitration mailboxes with persisted capability of OABGen and database on which this mailbox is hosted:
Get-Mailbox -Arbitration | where {$_.PersistedCapabilities -like "*oab*"} | ft name,database
Step2: Identify the mailbox server where the database hosting the organization mailbox is mounted.
Use the following command to identify the active copy of the mailbox database:
Get-MailboxDatabaseCopyStatus db1
The server where database status is “mounted” is the current OAB generation server.
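The two steps can also be combined into a single pipeline. The following is a rough sketch using only the cmdlets shown above; treat it as an example and adjust it to your environment:

Get-Mailbox -Arbitration | where {$_.PersistedCapabilities -like "*oab*"} | foreach { Get-MailboxDatabaseCopyStatus $_.Database | where {$_.Status -eq "Mounted"} }

The output lists the active (mounted) copy of each database hosting an organization mailbox, and therefore the current OAB generation server(s).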
There are two methods of changing the OAB generation server.
Move the organization mailbox to a mailbox database on a server intended to be designated as OAB Generation server.
Example:
DB1 is a single-copy database on the server Exch1 and hosts the organization mailbox. DB2 is a mailbox database on Exch2.
The following command can be used to move the organization mailbox to DB2 and make Exch2 the OAB generation server.
Get-Mailbox -Arbitration -Database db1 | where {$_.PersistedCapabilities -like "*oab*"} | New-MoveRequest -TargetDatabase db2
This method is more suited for environments that have single copy of mailbox database hosting the Organization Mailbox.
This method is suited for environments that have multiple copies of the mailbox database hosting Organization Mailbox.
DB1 hosts the Organization Mailbox and has copies on servers Exch1 and Exch2. DB1 is currently active on Exch1.
The following command can be used to activate DB1 on Exch2 and therefore make it the OAB generation server:
Move-ActiveMailboxDatabase DB1 -ActivateOnServer Exch2
Note: Review guidelines mentioned in “Placement of Organization Mailbox” below before changing the OAB Generation server.
Administrators can create additional Organization Mailboxes for fault tolerance or for serving users in a geographically dispersed Exchange deployment.
Creating a new Organization Mailbox is a two step process:
Step1: Create a new arbitration mailbox
New-Mailbox -Arbitration -Name "OAB Seattle" -Database DB2Seattle -UserPrincipalName oabs@contoso.com -DisplayName "OAB Mailbox for Seattle"
Step2: Enable OABGen capability
Set-Mailbox -Arbitration oabs -OABGen $true
Note: Review guidelines mentioned in “Placement of Organization Mailbox” below before creating additional organization mailboxes.
Through Exchange Server 2010, OAB generation was based on a schedule set on the OAB properties. You might still see a schedule defined when viewing the properties of an Exchange 2013 OAB, but Exchange Server 2013 OAB generation does not take place according to that schedule.
Instead, Exchange Server 2013 OAB generation takes place according to the OABGeneratorWorkCycle and OABGeneratorWorkCycleCheckpoint properties configured on the Mailbox server.
With the default values of these properties, the OAB is generated once every day.
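You can verify the current values on a given Mailbox server from the Shell. For example (Exch1 is a placeholder server name):

Get-MailboxServer Exch1 | Format-List Name,OABGeneratorWorkCycle,OABGeneratorWorkCycleCheckpoint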
The Exchange Server 2013 CAS role proxies the OAB download request to an appropriate Mailbox server. The CAS role maintains a log of each request it handles; the log files are in the folder %ExchangeInstallPath%\Logging\HttpProxy\OAB\
These log files are an excellent tool for identifying which mailbox server the CAS chose to serve the request.
Some of the important fields in the log file:
The log file can be imported in Excel for better readability.
The Exchange Server 2013 OAB generation can be forced to start immediately by two methods.
The following command forces OAB generation for an OAB named "Default Offline Address Book" across all organization mailboxes:
Update-OfflineAddressBook "default offline address book"
Note: This command initiates an RPC request to each mailbox server hosting an active organization mailbox.
The Microsoft Exchange Mailbox Assistants service on the Mailbox server is responsible for generating the OAB. Restarting this service on a specific mailbox server generates all OABs defined in the environment, if that server is hosting an active organization mailbox.
The Exchange Server 2013 CAS role proxies the OAB download request to the “nearest” mailbox server hosting an active Organization Mailbox. It can proxy requests in round-robin fashion if it finds more than one active organization mailbox in the same AD site. Prior to CU5, this results in frequent full OAB downloads and is therefore not recommended.
Hence, the current guidance is to plan organization mailbox placement so that you have only one organization mailbox active per AD site. This applies to creating a new organization mailbox as well as to creating copies of a mailbox database that hosts an organization mailbox.
Prior to CU5, customers should only deploy a single OAB generation mailbox per Exchange organization to prevent users from accessing different OAB generation mailboxes and requiring a full OAB download. With CU5 and later, customers can assign OABs to specific OAB generation mailboxes and not have to worry about accidentally triggering full OAB downloads due to accessing different OAB generation mailboxes. For more information, please see the article, OAB Improvements in Exchange 2013 Cumulative Update 5.
The following scenarios discuss real-life situations to further explain the new OAB management methods.
Contoso has Exchange Server 2013 Mailbox & CAS role servers deployed at the Dallas and Seattle sites. John, the Exchange admin for Contoso, analyzes the HTTP proxy log files on the CAS servers and finds that the OAB download requests for Seattle users are going to the Dallas servers. On further investigation, John finds he has just one Organization Mailbox, located in Dallas; hence, the OAB download requests of all users go to the Dallas server.
He decides to create a new Organization Mailbox at the Seattle site with the following commands:
Step1: Create a new Arbitration Mailbox
Step2: Enable the Arbitration Mailbox with OABGen capability
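The commands are the same as in the two-step process described earlier. As a sketch, with the database name and UPN as placeholders John would replace with his own values:

New-Mailbox -Arbitration -Name "OAB Seattle" -Database DB2Seattle -UserPrincipalName oabs@contoso.com -DisplayName "OAB Mailbox for Seattle"

Set-Mailbox -Arbitration "OAB Seattle" -OABGen $true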
Ben is an administrator of an Exchange 2013 deployment at Tailspin Toys. The default OAB generation schedule does not suit them, and they want to generate the OAB approximately every fourth hour of the day.
Ben will use the following command to change the properties of the mailbox servers that will host the Organization Mailbox.
Set-MailboxServer Exch1 -OABGeneratorWorkCycle 01.00:00:00 -OABGeneratorWorkCycleCheckpoint 04:00:00
After a couple of days, Ben analyzes Event ID 17002 in the application log and makes sure the OAB is generated every four hours.
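As a quick way to spot-check those events from the Shell, something like the following could be used (Get-WinEvent is a standard Windows PowerShell cmdlet; run it on the server hosting the organization mailbox):

Get-WinEvent -FilterHashtable @{LogName="Application"; Id=17002} -MaxEvents 10 | Format-List TimeCreated,Message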
Hopefully, you find this post useful! Let us know your feedback below!
Bhalchandra Atre
Update: 4/23/2012: Microsoft has completed deployment of the interim solution that should eliminate the need for manual server reconfiguration of the affected devices when your Office 365 server location changes. We continue to work with device manufacturers to help them resolve their Exchange ActiveSync protocol implementation issues.
Update 3/5/2012: In order to mitigate issues with some mobile device implementations of redirection, Microsoft is currently deploying an interim solution that should eliminate the need for manual server reconfiguration of the affected devices when your Office 365 server location changes. We estimate that the fix will be fully deployed worldwide by April 30th, 2012. Look for the announcement on the blog when the fix is fully deployed with instructions for reconfiguring affected devices. In the meantime, we continue to work with device manufacturers to help them resolve their Exchange ActiveSync protocol implementation issues.
This article explains how mobile devices connect to Exchange Online (Office 365) service and how the connectivity may be impacted if the device does not support certain Exchange ActiveSync (EAS) protocol requirements.
Most mobile devices that connect to Exchange do so using the Exchange ActiveSync protocol. Each successive version of the protocol offers new capabilities. (The Exchange ActiveSync article maintained by the Exchange community on Wikipedia has more details. -Editor)
Before any device accesses an Exchange mailbox, it negotiates with the Exchange server to determine the highest protocol version that they both support, and then uses this protocol version to communicate. Through the protocol version negotiation, the device and the server agree to behave in a particular manner in accordance with the version selected.
In Office 365, we store multiple copies of user mailboxes, geographically distributed across different sites and datacenters. This redundancy ensures that if one copy of the mailbox fails for some reason (for example due to a hardware failure on a particular server), we can access the same mailbox elsewhere. At any given time, one copy of a particular mailbox is considered active and the remaining ones are deemed passive. When a user connects to their mailbox, they take actions on the active copy, and changes are then propagated to its passive copies.
The switch from one active copy of a mailbox to another one stored on a different mailbox server may happen for the following different reasons:
In most cases, failover and load balancing are not scheduled in advance. The process is executed automatically when the need arises, without manual intervention.
In Office 365, EAS devices connect to a publicly-facing Exchange Client Access Server (CAS). CAS authenticates the user based upon the provided credentials and retrieves the user’s mailbox version and the mailbox’s location. The mailbox’s location is the Active Directory forest and site where the active copy of the user mailbox is stored.
The CAS will handle the connection in one of the following ways, depending on the mailbox location relative to the location of the CAS:
For an overview of proxying and redirection, see Understanding Proxying and Redirection in Exchange 2010 documentation.
Phones and tablets connect to Office 365 in a number of ways, depending on the device capabilities, configuration and which protocol version has been negotiated. Specifically:
Office 365 contains a number of Active Directory forests, each of which contains several sites. Each forest has a default front-end site. When a device connects to a forest, it transparently connects to the front-end site for that forest.
Depending on whether the device connects to the Active Directory site where the user’s mailbox is located, the connection logic either retrieves the content directly, or proxies or redirects the device to the correct site.
More recent versions of EAS protocol support the redirection command. When a device using a more recent version of the protocol reaches a CAS in a site that doesn't contain the requested mailbox, the server responds to the request by redirecting the device to a CAS in the site hosting the active copy of the user’s mailbox. We assume that devices which advertise to the server support for EAS protocol version 12.1 and later comply with the EAS requirement to support the HTTP redirection error code.
Note: If you want to determine the Exchange ActiveSync protocol version that your device is currently using, refer to your device manufacturer’s documentation.
A problem can occur when a device claims to support redirection, but does not reliably do so. These devices cannot access the mailbox, and the user may receive a number of errors depending on the device (for example, unable to connect to server). A very small number of devices connecting to Office 365 are impacted by this failure to implement Exchange ActiveSync completely (about 1%).
Modifying the Office 365 deployment to compensate for these devices that don’t correctly support redirection would result in a degraded experience for all mobile device users. Performance for the devices is better if they connect to the correct Active Directory site directly after being redirected.
Phones and tablets that are part of the Exchange ActiveSync Logo Program support redirection and thus, do not experience this issue. We are working with a number of other manufacturers to help them support the redirection logic and fix their connectivity issues.
If your users are having trouble connecting to their Office 365 mailboxes on devices that don’t fully support redirection, use one of the following methods to fix the issue:
Note: Although the setting is listed as the server name for POP, it's also an endpoint for Exchange ActiveSync.
Note: When you use the Host name as your Exchange server setting, you may need to update the setting in the future. As I described before, the mailboxes may be moved from one site to another, and devices that do not support the redirect command correctly will lose connectivity. If your user mailbox moves due to failover or upgrades, your site name (Host name) may change and you may need to reconfigure your device to point to the new site.
Katarzyna Puchala
The title of this post was changed shortly after publishing. The permalink URL may differ from the post title.
Last week, we moved the Exchange team blog to TechNet (check out EHLO Again! for a brief recap and what's new, in case you missed it). Nino Bilic and some Exchange MVPs pointed out that the blog had actually moved back to TechNet. Well, we’re glad to be back!
More good news: this week, our friends in TechNet informed us that You Had Me At EHLO is by far the number one blog on TechNet – and has been in that spot since day one! Thanks again to the Exchange community.
We’ve faced some issues with migrating downloads and we've been busy fixing URL redirects and links to popular downloads throughout the week. Sorry for the inconvenience folks – please do report any dead links in blog comments or through the contact form. On the positive side, the number of requests we’ve received throughout the week for the Exchange 2010 Mailbox Server Role Requirements Calculator (and its Exchange 2007 sibling), ExFolders and other great tools and scripts we’ve published over the years on EHLO tells us a lot about how popular they are!
We love Office 2010, and it’s no secret that we love Outlook 2010 the most! We love the Office user experience and the ribbon UI. It brings a lot of great Office functionality and frequently-used options to the fore, without having to dive into multiple dialog boxes and layers of menus. It’s a great fit for today’s ultra-high-resolution screens on laptops and desktops, and as this Office Casual video shows, Office 2010 does very well with multi-touch and stylus/pen-based input as well.
The Office team also made sure most keyboard shortcuts power users are familiar with continue to work (for Outlook 2010, see Keyboard shortcuts for Microsoft Outlook 2010).
As part of this UI overhaul, we lost one of our favorite conveniences in Outlook – the ability to quickly access message headers. Although not something most users would do frequently, Exchange folks do frequently need to see message headers. Sometimes it’s for troubleshooting purposes, but frequently it’s also for the warm fuzzy feeling we get from knowing the message passed through the right SMTP hops, has the expected x-headers, or simply to check if it was processed by the wonderful antispam agents and what the agents really thought of the message.
In previous versions of Outlook, this was as simple as right-clicking a message and selecting Message Options. Quick. No fuss. You’re done, and you’re out. Like a Windows Phone ad. And back to your life.
That shortcut’s gone. No, not the Message Options window itself, but the convenience of the right-click. To access message headers, you must double-click to open the message, click File to access the Backstage view and click the Properties button. Yes, that’s a few more clicks than what most IT folks would like!
The good news is – Outlook (and Office) is nothing if not customizable.
Here’s how you can add the message headers goodness back to Outlook by customizing the Quick Access Toolbar.
The Quick Access Toolbar is a customizable toolbar that contains a set of commands that are independent of the tab on the ribbon that is currently displayed. Check out more ways to Customize the Quick Access Toolbar (also includes a video).
As an added bonus: you can reduce it to a single click! That's a whopping 75% fewer mouse clicks (compared to the default method in Outlook 2010), in case you're counting, and 50% fewer mouse clicks compared to previous versions.
Figure 1: The Message Options button now shows up in the Quick Access Toolbar
To view message headers (or other message options), select a message and then click the Message Options button.
Bharat Suneja
When faced with eDiscovery requests, organizations need to be able to preserve email records, search relevant records and produce them for review.
In Exchange Server 2010 and Office 365, Litigation Hold makes it possible to preserve mailbox items. When a user or a process attempts to delete an item permanently, it is removed from the user’s view to an inaccessible location in the mailbox. Additionally, when a user or a process modifies an item, a Copy-on-write (COW) is performed and a copy of the original item is saved right before the changed version is committed, preserving original content. The process is repeated for every change, preserving a copy of all subsequent versions.
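Litigation Hold is enabled per mailbox. As a quick illustration (the mailbox name here is hypothetical), a mailbox can be placed on hold from the Shell:

Set-Mailbox -Identity "Robin Counts" -LitigationHoldEnabled $true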
Using Multi-Mailbox Search, also new in Exchange 2010, delegated legal, human resources or IT personnel (referred to as discovery managers because they need to be assigned Discovery Management permissions) can search mailbox content across their entire Exchange 2010 organization. Messages returned from a search can be copied to a Discovery mailbox, which is a special type of mailbox with higher mailbox quotas and no capability to send or receive messages.
Since the release of Exchange 2010 and Office 365, we have received a lot of feedback from organizations of all sizes about the messaging policy & compliance features, including archiving, eDiscovery & hold. When planning the evolution of compliance features, we’ve kept your feedback front and center. Let’s take a look at what has changed.
Integration with the new SharePoint
Exchange offers an integrated eDiscovery & Hold experience with the new SharePoint. Using the eDiscovery Center, you can search and hold in-place all content related to a case from a single location: SharePoint web sites, documents, file shares indexed by SharePoint, mailbox content in Exchange, and archived Lync content. You can export content associated with a case, including files, lists, web pages and Exchange mailbox content. Mailbox content is exported as a .PST file. An XML manifest that complies with the Electronic Discovery Reference Model (EDRM) specification provides an overview of the exported information.
To search Exchange content, SharePoint uses Exchange’s Federated Search API. Regardless of whether you search Exchange content from the EAC or using SharePoint, the same search results are returned. The new SharePoint and Exchange both use the same underlying indexing and querying engine – Microsoft Search Foundation, which allows you to use the same search query for both SharePoint and Exchange content.
Let’s take a look at how one discovery manager performs an In-Place eDiscovery search.
Robin works on the legal team at the marketing firm Contoso. Contoso receives a request from a company called Tailspin Toys to assist with a marketing campaign for a new toy they are producing. Contoso is known for doing great toy marketing campaigns, since they do a lot of work in the toy industry. This is great for business, but they also have to be careful, because many of the toy companies they work with are competitors. Contoso just finished a highly successful marketing campaign with another toy company called Wingtip Toys, and Robin wants to ensure that no confidential information accidentally gets passed from one customer to another through her team. To that end, Robin wants to search through her company's email and documents with the help of her legal team to make sure there are no potential issues.
To use In-Place eDiscovery, a user must be a member of the Discovery Management role group. You can delegate the role to authorized legal, compliance management or human resources personnel. Robin is one of those legal team members. The ability to have scoped roles in the new Exchange 2013 allows IT pros to delegate compliance responsibilities to folks like Robin without giving them full access to all Exchange server functionality.
Robin starts by navigating to the Exchange Administration Center (EAC). The EAC’s Compliance Management tab is where you can manage compliance features in the new Exchange. Because Robin doesn’t have any other Exchange administrator roles, she only sees the interface relevant to the Discovery Management role group. On the compliance management tab, she can only see In-Place eDiscovery & Hold.
Figure 1: In-Place eDiscovery and Hold tab is accessible to users with delegated Discovery Management permissions
She clicks the Add button to start the new In-Place eDiscovery & Hold wizard and enters a name and an optional description for the search.
Figure 2: Create an In-Place eDiscovery search using the new In-Place eDiscovery & Hold wizard in EAC
Robin can search all mailboxes in the Exchange organization or select the mailboxes she wants to search.
Figure 3: Specify the mailboxes to search, or choose to search all mailboxes
On the Search query page, Robin can select the option to return all mailbox content or just specific content. Robin wants to find specific content related to work done between her team members and Wingtip Toys. She has the option to perform a simple search by entering a few keywords, or a more complex search using Boolean operators such as AND, OR, and parentheses, so she can be very specific about what she is looking for. This can be a big time and cost saving for her, since multi-gigabyte mailboxes are very common and she wants to reduce the set of content down to the minimum she needs to look at to find what she wants.
Figure 4: Specify a search query, including keywords, start and end dates, sender and recipients
In addition to using Boolean logic, she’s also using the proximity operator (NEAR), which allows her to find words that are close to each other. You can also see her using a wildcard character; in this case she is looking for the word wingtip within three words of toy, toys, toymaker or anything similar.
In this particular case, Robin wants to look for these keywords anywhere in a given email, but if she wants to be more specific, for example search for a phrase only in the message subject, she could type in Subject: and then her phrase right after it. Depending on how specific she wants to be, she can create complex queries. You can use several hundred keywords in a query.
She can also choose specific types of messages. An Exchange mailbox has email but also calendar items, tasks, notes and other items related to personal information management. The new Exchange allows her to search all of those items, or she can narrow the query down to specific types of items. She selects email and also meetings so she can track which of her employees met with Wingtip and read the meeting invites to find out what was discussed.
Figure 5: Select all message types or specify the message types to search
Once Robin has created her query to define what content is important to her, she has a few options for what to do with the results. If she feels it's important to protect this content, she has the option to place it on hold. When content is placed on hold, Exchange automatically captures any attempts to edit or delete data and stores those items in a hidden folder in the mailbox. It's completely invisible to the end users, so it doesn't interrupt their daily workflow, but it does keep that important data for recovery later.
Figure 6: Placing search results on an In-Place Hold
We will talk more about In-Place Hold in Part II of this post.
Robin clicks Finish. The search runs against the Exchange 2013 mailboxes and places items on hold.
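The same search can also be created from the Shell using the New-MailboxSearch cmdlet that backs the EAC wizard. A minimal sketch, with the search name, source mailbox, and query as hypothetical values:

New-MailboxSearch -Name "Wingtip Case" -SourceMailboxes "Legal Team" -SearchQuery 'wingtip AND toy*' -InPlaceHoldEnabled $true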
When the search is complete, Robin takes a look at the total size and item count to see if it’s manageable. If there are a million items, her query is likely too broad; if there are no items, it may be too narrow. If she wants to dig into the details, she can view the search statistics to see exactly how each keyword contributed to the overall result set. That lets her be really targeted about the way she's tweaking her queries so she can quickly get a result set down to a manageable size.
Figure 7: Use search estimate and keyword statistics to fine-tune search queries
Once she is done tweaking her query, she can stop the search and discuss with her team or legal counsel whether the query is correct. She can also create additional eDiscovery searches and use different query parameters.
She can also choose to preview messages returned in the search.
Figure 8: eDiscovery Search Preview to preview messages and determine query effectiveness
The eDiscovery Search Preview displays message count and total size for each mailbox searched. The preview functionality is built on Outlook Web App, which shows the message in its native format without any changes.
Figure 9: eDiscovery Search Preview displays live message preview without copying messages to a Discovery mailbox
Robin can quickly scroll through all of her results to view additional items that came back with her search. Since she is using the full-fidelity Outlook Web App preview, she can also view attachments.
Once Robin has previewed her results and she's happy with them, she can make a copy of them for later review, or export them to hand off to her outside legal counsel. To do that, she simply clicks the Copy search results link.
Figure 10: Copying messages returned by the search to a Discovery mailbox
When copying messages to a discovery mailbox, she has the following options:
The last thing Robin will pick is the Discovery mailbox into which she wants to put her search results.
After copying is completed, Robin can see that the copy operation is complete, and she has a link to the mailbox where the results are stored. Robin can now navigate to the copy of her search results to view them. In this view, she can perform a review of her items: she can tag items that are important, or, if she decides some are not important, move them to the Deleted Items folder so that they are no longer in her view.
Once that's done, if Robin needs to share the consolidated results with an outside counsel, she can use her Outlook client to export the consolidated results list to a PST file.
We’ve provided you with an overview of the In-Place eDiscovery & In-Place Hold functionality in the new Exchange. In Part II of this post, which is scheduled to be published shortly, we will dig deeper into In-Place Hold.
Bharat Suneja and Julian Zbogar-Smith
Go to In-Place eDiscovery and In-Place Hold in the New Exchange – Part II
This one can be a bit counterintuitive for folks coming from any legacy version of Exchange...
You got your shiny new Exchange 2010 download and excitedly installed it on your 2010 hardware. You have set up all your CAS server settings, and your DAG is up and running. Your pilot users have been moved over and everything went well.
Now you move over the executives and not two days later they are in your office complaining that they can't manage the distribution groups that they own. They were able to do it previously but now it isn't working.
A little bit of testing later and you see that they are right. You are hitting an error message in Outlook 2007 when trying to manage groups:
So what does the "Changes to the distribution list membership cannot be saved. You do not have sufficient permission to perform this operation on this object." error mean, when you are connecting to an Exchange 2010 server?
It means Exchange 2010 is doing its job - as designed...
Exchange 2010 has quite a few features built in to allow your users to manage their own accounts and information. One of these features is the ability to manage distribution groups in a much richer format than Outlook 2007 provides.
This allows your users to join existing groups, manage some of the properties of groups they own, manage membership in groups they own, and even create and remove groups. It is that last piece that got our beta customers a bit concerned. The ability to manage your own groups is good... the ability to create and remove groups - not so good.
This feature was also turned on by default. So, out of the box, you would install Exchange 2010 Beta and any users you put on it could create and remove distribution groups. With that in mind, the product group decided to turn this feature off by default going forward, including in RTM.
Turning it off is very easy... so is turning it on. All you have to do is assign the MyDistributionGroups RBAC role to the Default Role Assignment Policy. We even have the ability to do that in the ECP.
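If you do want users to have the full MyDistributionGroups functionality, the assignment can also be made from the Exchange Management Shell. A minimal sketch (assuming the default policy name; adjust if you have created custom assignment policies):

```powershell
# Assign the built-in MyDistributionGroups role to the default
# assignment policy. All users covered by that policy can then
# manage, create, and remove their own distribution groups.
New-ManagementRoleAssignment -Role "MyDistributionGroups" `
    -Policy "Default Role Assignment Policy"
```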
Because all of the built-in RBAC roles have to function as parents to any roles that you create, the product group had to leave the ability to create and remove distribution groups on this role, ensuring that any customers who wanted their users to have that functionality would be able to assign it.
"Fix" isn't really the right word. We need to modify the default solution to meet this specific need. For a number of customers the needs are going to be the following:
To help customers fill these needs we have created a short little script. This script will allow you to make any combination of the above work in your environment.
The Manage-GroupManagementRole.ps1 script is now available from TechNet Script Center.
To run the script, copy its contents to a text file on the machine you are going to run it on, then save the file as a .ps1... I recommend Manage-GroupManagementRole.ps1.
To fill all of the above requirements with minimal effort run the following from an Exchange Powershell Prompt:
.\Manage-GroupManagementRole.ps1 -CreateGroup -RemoveGroup
This will create everything you need with the correct settings using the default names in the script. If you would like help on the script you can either look in the contents of the file or run it with no switches.
What the script does is actually rather simple:
When complete your users will be able to manage distribution groups but not create or remove them.
Each step the script takes is documented in the script and you are welcome to extract just what you need from it. It is designed to handle more than just the basic scenario to give it a bit more flexibility.
When finished you end up with a new role and a new role assignment. If we look at these in PowerShell we see:
Here is the Role that is created by the script. By default it is named MyDistributionGroupsManagement and is a child of the MyDistributionGroups role.
Here we can see all of the cmdlets that the role authorizes users to run. You can see that Remove-DistributionGroup and New-DistributionGroup are not listed.
This is the piece that glues everything together. The role assignment is created using the name of the role and the name of the policy we are assigning the role to. The Default Role Assignment Policy applies to all users in the org by default, so everyone on Exchange 2010 will now have the ability to manage their own distribution groups.
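The script's core steps can be approximated by hand in the Exchange Management Shell. This is a sketch of the equivalent commands, assuming the default names the script uses (both the role name and the policy name are configurable):

```powershell
# 1. Create a new role as a child of MyDistributionGroups.
#    The child starts with all of the parent's role entries.
New-ManagementRole -Name "MyDistributionGroupsManagement" `
    -Parent "MyDistributionGroups"

# 2. Strip the entries that allow creating and removing groups.
Remove-ManagementRoleEntry `
    "MyDistributionGroupsManagement\New-DistributionGroup" -Confirm:$false
Remove-ManagementRoleEntry `
    "MyDistributionGroupsManagement\Remove-DistributionGroup" -Confirm:$false

# 3. Assign the trimmed role to the default assignment policy.
New-ManagementRoleAssignment -Role "MyDistributionGroupsManagement" `
    -Policy "Default Role Assignment Policy"
```

The actual script handles more scenarios than this; these three steps cover only the basic "manage but not create or remove" case described above.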
Hopefully this post and the attached script will help you in getting your 2010 environment up and running where you want it. If you have any questions about the script or this process please feel free to post them here and I will do my best to address them.
- Matt Byrd
Introduction
Users often delete data from their mailboxes that they later want recovered. Recovery of these deleted items is the most common reason for IT admins to recover data from traditional point-in-time backups today.
With previous versions of Exchange Server, administrators had two solutions for single item recovery: the dumpster and restores from backup. Both had their issues, unfortunately.
Exchange 2010 aims to reduce the complexity and administrative costs associated with single item recovery.
The following definitions may be useful for understanding the content within this article:
The end user single item recovery functionality was enabled through the store via the store dumpster. Administrators configured the dumpster setting on a per database or per mailbox basis. The default deleted item retention window in Exchange 2003 is 7 days, while with Exchange 2007 the default is 14 days.
The end user recovery process worked typically like this (see figure 1):
Figure 1. Dumpster in Previous Versions
If the item's deletion timestamp is beyond the deleted item retention period, then the item is not available for end user recovery. Instead, the user must call Help Desk and request recovery of the item. This involves:
If the backup is invalid, or if there is no backup for the time period in question, then the deleted data is unrecoverable.
The other issue that needs highlighting is the fact that in previous versions of Exchange there is no way to prevent the end user from purging the data from the Recover Deleted Items view. This poses a significant legal risk for organizations that must ensure compliance requirements are met.
In previous releases of Exchange Server, dumpster is essentially a view stored per folder. Items in the dumpster (henceforth known as Dumpster 1.0) stay in the folder where they were soft-deleted (shift-delete or delete from Deleted Items) and are stamped with the ptagDeletedOnFlag flag. These items are special-cased in the store to be excluded from normal Outlook views and quotas. In addition, data with this flag cannot be searched or indexed.
Note: Users can perform a shift-delete and cause a message to bypass the Deleted Items folder and go straight to dumpster. Prior to Outlook 2007, the Recover Deleted Items tool, by default, only exposed the dumpster view for the Deleted Items folder. By setting the DumpsterAlwaysOn registry key (http://support.microsoft.com/kb/886205) you can enable the Recover Deleted Items tool for all folders in the mailbox and thus expose dumpster data for any folder in the Exchange 2003 or Exchange 2007 mailbox.
One of the key architectural changes in Exchange 2010 is to truly enable a litigation hold experience for customers that have legal compliance requirements. As a result, Exchange 2010 must meet these requirements:
In order to facilitate these requirements, Dumpster was re-architected. Unlike Dumpster 1.0, Dumpster 2.0 is no longer simply a view. Dumpster in Exchange 2010 is implemented as a folder called Recoverable Items, located within the Non-IPM subtree of the user's mailbox (note that this is a hidden section of the mailbox and is not exposed to the end user through any client interface). The folder has three sub-folders:
The Deletions folder replaces the ptagDeletedOnFlag view that was displayed when a user accessed the Recover Deleted Items tool. When a user soft deletes or performs an Outlook hard delete against an item, the item is moved to the Recoverable Items\Deletions folder. When the user accesses Outlook/OWA Recover Deleted Items, the RPC Client Access service translates the request and returns the Recoverable Items\Deletions folder view.
The Versions and Purges folders will be covered in the Single Item Recovery in Exchange 2010 section.
By architecting Dumpster to be a folder, three of the requirements are immediately met:
To ensure that Denial of Service attacks by placing large quantities of data into dumpster are prevented, Dumpster 2.0 has configurable quota settings. These settings can be configured per database and per mailbox:
Note: Exchange 2010 also includes the capability for each mailbox to maintain an archive mailbox. There is a dumpster for both the primary mailbox and the archive mailbox. Data deleted in the primary mailbox is placed in the primary mailbox dumpster, while data deleted in the archive mailbox is placed in the archive mailbox dumpster.
But how does Exchange 2010 meet the other two requirements, ensuring data is not either accidentally or maliciously purged and that versions are tracked? Exchange 2010 now includes two mechanisms to meet those requirements:
Exchange 2010 includes the ability to ensure that data within the mailbox is preserved for a period of time. This feature can be enabled on a per-mailbox basis by running the following cmdlet:
Set-Mailbox <mailbox identity> -SingleItemRecoveryEnabled $true
Note: When enabling this feature, you will be notified that it could take up to 60 minutes for single item recovery to take effect. This is an approximation due to Active Directory replication. Be sure to evaluate your Active Directory replication topology with respect to administering recipients to understand how Active Directory replication may impact changes in your environment.
The time period by which the deleted data is maintained is based on the deleted item retention window. The default time period is 14 days in Exchange 2010 and is configurable per database or per mailbox. The following cmdlets let you alter this behavior:
For the mailbox database: Set-MailboxDatabase -DeletedItemRetention
For the mailbox: Set-Mailbox -RetainDeletedItemsFor
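As an example (the database and mailbox names are placeholders), extending the retention window to 30 days at both scopes:

```powershell
# Per database: keep deleted items for 30 days (dd.hh:mm:ss format).
Set-MailboxDatabase "MDB01" -DeletedItemRetention 30.00:00:00

# Per mailbox: override the database setting for one user.
Set-Mailbox "robin" -RetainDeletedItemsFor 30.00:00:00
```

The per-mailbox value wins when both are set, which is useful for putting a longer window on a handful of sensitive mailboxes.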
Note: Regardless of whether you have Single Item Recovery enabled, calendar items are maintained in the Recoverable Items folder structure for 120 days. Long-term data preservation via litigation hold will disable the expiration of these items.
At this point you may be thinking: how is this any different from previous versions of Exchange? With short-term preservation, deleted items are still moved into the Recoverable Items folder structure. However, the data cannot be purged until the item's deletion timestamp is past the deleted item retention window. Even if the end user attempts to purge the data, the data is retained. Consider this example involving a malicious user:
However, the message was not purged from the mailbox store. Instead the message was moved from the Recoverable Items\Deletions folder to the Recoverable Items\Purges folder. All store hard-deleted items end up in this folder when single item recovery is enabled. The Recoverable Items\Purges folder is not visible to the end user, meaning that they do not see data retained in this folder in the Recover Deleted Items tool.
When the message deletion timestamp has exceeded the deleted item retention window, Records Management will purge the item. See Figure 2 for a visual representation of this behavior.
Figure 2. Dumpster 2.0
Not only does short term preservation prevent purging of data before the deleted item retention window has expired, but it also enables versioning functionality. Essentially when an item is changed, a copy-on-write is performed to preserve the original version of the item. The original item is placed in the Recoverable Items\Versions folder. This folder is not exposed to the end user. What triggers a copy-on-write?
The data stored in the Recoverable Items\Versions folder is indexed and discoverable by compliance officers.
Customers sometimes require mechanisms by which data is maintained for longer periods of time, say indefinitely. This is typically due to litigation hold that occurs when the organization is undergoing a lawsuit. With Exchange 2010, litigation hold can be enabled via the Exchange Control Panel or by setting the property LitigationHoldEnabled via the Set-Mailbox cmdlet.
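From the shell, placing a mailbox on litigation hold is a one-liner (the mailbox name is a placeholder):

```powershell
# Put the mailbox on litigation hold; records management stops
# purging dumpster data for this mailbox while this is $true.
Set-Mailbox "robin" -LitigationHoldEnabled $true
```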
When litigation hold is enabled, records management purging of dumpster data ceases. Consider the following four cases:
Also, when litigation hold is enabled, the FIFO deletion at the warning limit is ignored. When a user's Recoverable Items folder exceeds the warning quota for recoverable items (as specified by the RecoverableItemsWarningQuota parameter), an event is logged in the Application event log of the Mailbox server. When the folder exceeds the quota for recoverable items (as specified by the RecoverableItemsQuota parameter), users won't be able to empty the Deleted Items folder or permanently delete mailbox items, and copy-on-write won't be able to create copies of modified items. Therefore, it's critical that you monitor the Recoverable Items quotas for mailbox users placed on litigation hold.
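The quotas can be raised per mailbox, and the size of the Recoverable Items folder can be checked with Get-MailboxFolderStatistics. A sketch with placeholder names and example values (the RecoverableItems folder scope requires a service pack level that supports it):

```powershell
# Raise the Recoverable Items quotas for a mailbox on hold.
Set-Mailbox "robin" -RecoverableItemsWarningQuota 20GB `
    -RecoverableItemsQuota 30GB

# Monitor how much of the quota is in use.
Get-MailboxFolderStatistics "robin" -FolderScope RecoverableItems |
    Select-Object Name, FolderAndSubfolderSize
```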
Data that is stored in the Recoverable Items\Purges folder is not accessible or discoverable by the end user. However, the data is indexed and discoverable by those who have the proper role in the Exchange organization. Role Based Access Control (RBAC) provides the Discovery Management role to allow secure search access for non-technical personnel, without providing elevated privileges to make any operational changes to Exchange Server configuration. Compliance officers or other Exchange administrators with the Discovery Management role can leverage the Exchange Control Panel (ECP) to perform discovery searches using an easy-to-use search interface.
When a single item recovery request is received by help desk, the following actions can be taken:
Naturally after understanding the features included in Exchange 2010, a logical follow up question is "Do I still need backups for single item recovery?" The answer depends on your backup requirements and your capacity planning.
Today many customers minimize the deleted item retention window, yet they maintain long backup retention time periods (from 14 days to several months to years).
Let's consider a customer that currently maintains backups for 90 days and only retains deleted items within Exchange for 5 days. This customer is performing backup restores on a weekly basis to recover deleted items for end users. If the customer moved to Exchange 2010, they could move that process into Exchange by simply increasing each mailbox's dumpster capacity:
By increasing each mailbox's capacity by a minimum of 350MB, backups are no longer needed for single item recovery. Single item recovery can be maintained and performed within Exchange.
But let's not stop there. What if the requirement is that items must be recoverable for 1 year? Using the same assumptions as the previous example, except that deleted item retention is now configured for 365 days, each mailbox needs a minimum of an additional 1.4 GB of space.
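The arithmetic behind these estimates is linear: the 90-day example implies roughly 4 MB of deleted and modified data per mailbox per day, which can then be projected to any retention window. A quick sketch (the daily churn figure is derived from the example above, not a fixed rule - your own message profile will differ):

```powershell
# Derive daily dumpster churn from the 90-day / 350 MB example,
# then project it out to a 365-day retention window.
$ninetyDayMB  = 350
$dailyChurnMB = $ninetyDayMB / 90        # ~3.9 MB per day
$oneYearMB    = $dailyChurnMB * 365      # ~1419 MB, i.e. ~1.4 GB
"{0:N0} MB per mailbox for 365 days" -f $oneYearMB
```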
Ultimately, if the storage subsystem is planned and designed appropriately and mailbox resiliency features are leveraged, traditional point-in-time backups can be relegated to a disaster recovery mechanism, if they are even needed at all.
Exchange 2010 introduces the concept of single item recovery. Single Item Recovery prevents purging of data and provides versioning capability (the ability to retain the unaltered form of the item). By default this data is retained until the age of the deleted item has exceeded the deleted item retention window. In addition, Exchange 2010 enables long-term preservation of data for litigation hold scenarios by preventing the purging of data all together. The following table summarizes the behavior in Exchange 2010:
Feature | Soft-deleted items kept in dumpster | Modified (versions) and store hard-deleted items kept in dumpster | User can purge items from dumpster | MRM automatically purges items from dumpster
Single item recovery disabled | Yes | No | Yes | Yes (14 days by default; 120 days for calendar items)
Single item recovery enabled | Yes | Yes | No | Yes (14 days by default; 120 days for calendar items)
Litigation hold enabled | Yes | Yes | No | No
- Ross Smith IV
Note: Exchange Server 2013 Cumulative Update 5 and later supports certificate-based authentication with ActiveSync.
In previous posts, we have discussed certificate based authentication (CBA) for Outlook Web App, and Greg Taylor has covered publishing Outlook Web App and Exchange ActiveSync (EAS) with certificate based authentication using Forefront TMG in this whitepaper. Certificate based authentication for OWA only can also be accomplished using Forefront Unified Access Gateway.
In this post, we will discuss how to configure CBA for EAS for Exchange 2010 in deployments without TMG or UAG.
To recap some of the common questions administrators and IT decision-makers have regarding CBA:
What is certificate based authentication? CBA uses a user certificate to authenticate the user/client (in this case, to access EAS). The certificate is used in place of the user entering credentials into their device.
What certificate based authentication is not: By itself, CBA is not two-factor authentication. Two-factor authentication is authentication based on something you have plus something you know. CBA is only based on something you have.
However, when combined with an Exchange ActiveSync policy that requires a device PIN, it could be considered two-factor authentication.
Why would I want certificate based authentication? By deploying certificate based authentication, administrators gain more control over who can use EAS. If users are required to obtain a certificate for EAS access, and the administrator controls certificate issuance, access control is assured.
Another advantage: because we're not using the password for authentication, password changes don't impact device access. There will be no interruption in service for EAS users when they change their password.
Things to remember: There will be added administration overhead. You will either need to stand up your own internal Public Key Infrastructure (PKI) using Active Directory Certificate Services (AD CS, formerly Windows Server Certificate Services) or a 3rd-party PKI solution, or you will have to purchase certificates for your EAS users from a public certification authority (CA). This will not be a one-time added overhead. Certificates expire, and when a user’s certificate expires, they will need a new one, requiring either time invested in getting the user a new certificate, or budget invested in purchasing one.
You need access to a CA for client certificates. This can be a public CA solution, individual certificates from a vendor, or an Active Directory Certificate Services solution. Regardless, the following requirements must be met:
To configure the Client Access server to enforce CBA:
Important: IISreset does not pick up the changes properly. You must restart this service.
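The server-side piece of this generally comes down to requiring client certificates on the Exchange ActiveSync virtual directory in IIS. As a sketch for IIS 7.x using appcmd (the site name and virtual directory path assume a default installation - verify both against your deployment before running this):

```powershell
# From an elevated prompt on the Client Access server: require SSL
# plus a client certificate on the EAS virtual directory.
& "$env:windir\system32\inetsrv\appcmd.exe" set config `
    "Default Web Site/Microsoft-Server-ActiveSync" `
    /section:system.webServer/security/access `
    /sslFlags:"Ssl,SslRequireCert" /commit:apphost
```

Active Directory Client Certificate Authentication must also be enabled at the server level in IIS for the mapping of certificates to accounts to work.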
Once this is configured, all that's left to do is client configuration.
You'll need to get the user’s certificate to the device. This will be accomplished in different ways for different devices. Some possibilities include:
Caution: Use appropriate security measures to ensure that only the user who owns the certificate is able to access it from the device.
Once the certificate is on the device, the user can configure the Exchange ActiveSync client (usually a mail app) on the device. When configuring EAS for the first time, users will be required to enter their credentials. When the device communicates with the Client Access Server for the first time, users will be prompted to select their certificate. After this is configured, if users check the account properties, they'll see a message similar to the following:
Microsoft Exchange uses certificates to authenticate users when they log on. (A user name and password is not required.)
This is an added step that requires some simple changes, and must be performed whether TMG is used to access Exchange 2010 or not. Use the following steps to enable this for access to Exchange 2003 mailboxes.
Apply the hotfix from the following article (or one that has a later version of EXADMIN.DLL) to the Exchange 2003 servers where the mailboxes are homed.
937031 Event ID 1036 is logged on an Exchange 2007 server that is running the CAS role when mobile devices connect to the Exchange 2007 server to access mailboxes on an Exchange 2003 back-end server
The article instructs you to run a command to configure IIS to support both Kerberos and NTLM. You must run the command at the command prompt using CSCRIPT, as shown below:
cscript adsutil.vbs set w3svc/WebSite/root/NTAuthenticationProviders "Negotiate,NTLM"
On the Exchange 2003 mailbox server, launch Exchange System Manager and follow these steps:
Use the following steps to configure the Exchange 2010 to Exchange 2003 communication for Kerberos Constrained Delegation (KCD).
Note: You may need to add the SPN as per Setspn Overview
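If the SPN is missing, it can be registered with setspn. The host name and computer account below are hypothetical placeholders for your Exchange 2003 back-end server:

```powershell
# Register an HTTP SPN on the Exchange 2003 back-end server's
# computer account (-S checks for duplicates before adding and
# requires a newer setspn; older versions use -A).
setspn -S http/exch2003be.contoso.com 'CONTOSO\EXCH2003BE$'
```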
Thanks to: DJ Ball, for his previous work in documenting certificate based authentication for Outlook Web App (see How to Configure Certificate Based Authentication for OWA - Part I and How to Configure Certificate Based Authentication for OWA - Part II); Mattias Lundgren, for starting the documentation process on certificate based authentication for EAS; DJ Ball and Will Duff, for reviewing this document; Henning Peterson and Craig Robicheaux, for reviewing this document; and Greg Taylor, for technical review.
Jeff Miller
Robert's Rules of Exchange is a series of blog posts in which we take a fictitious company, describe their existing Exchange implementation, and then walk you through the design, installation, and configuration of their Exchange 2010 environment. See Robert's Rules of Exchange: Table of Blogs for all posts in this series.
A great big hello to all my faithful readers out there! I have to apologize for not posting in a while. Since the beginning of the holidays, I have been exceedingly busy, but I really want to get back to the Robert’s Rules posts, and I hope to be more involved in this going forward. Also, please keep the great comments and questions coming! I read and try to answer every single one of them. If you are going to TechEd North America 2011 in Atlanta, I’ll be there presenting, doing some stuff with the MCM Exchange team and generally making a nuisance of myself, so come look me up and introduce yourself!
In this blog post, I want to talk a little about multi-role servers. This is something that the Exchange team and Microsoft Services present as the first notional solution ("notional solution" meaning the first idea of a solution, the first "rough draft" of what we propose deploying) to almost every customer we work with, and something that causes a lot of confusion, since it is certainly a different set of guidance than we have given in the past. So, I want to talk about what we mean by "multi-role servers", why we think they are a great solution, how to size multi-role deployments, when they might not fit your deployment, and what the real impact is of moving away from multi-role in your solution.
When we talk about multi-role servers in Exchange 2010, we are talking about a single server with all three of the core Exchange 2010 roles installed – Client Access, Hub Transport and Mailbox roles. While having any given two of these roles installed is technically a multi-role deployment, and even in Exchange Server 2007 we saw a lot of customers collocating the Client Access and Hub Transport roles, when we talk about multi-role, we are really talking about collocation of all three core roles on the same server.
In Exchange 2007, we did not support the Client Access or Hub Transport roles on clustered servers, so there was no way to have a high availability deployment (meaning Cluster Continuous Replication clusters) with multi-role servers. In Exchange 2010, we introduced Database Availability Groups (DAGs), which don't have that limitation, and subsequently we have changed the first line recommendation to utilize multi-role servers where possible. The rest of this post discusses why we believe that in almost every single scenario, this is the best solution for our customers.
One of the things I really tried to hammer home in the Storage Planning post was the idea of simplicity. Every time I sit down with a customer to discuss how they should deploy Exchange 2010, I start with the simplest solution that I can possibly think of. Remember that we have already argued that simplicity typically brings many "Good Things™" into a solution. These "Good Things™" include (but are not limited to) lower capital expenditure (CapEx), lower operational expenditure (OpEx), and a higher chance of success both in deployment and in meeting operational requirements such as high availability and site resilience. Complexity, on the other hand, introduces risk. Complexity when it is not needed is a "Bad Thing™". Of course, when complexity is brought on by a requirement, it is not a "Bad Thing™". It is only when we don't really need to introduce that complexity that I have a problem with it.
Based on the last blog post (the Storage post), we know that Robert's Rules is going with the simple storage infrastructure of direct attached storage with no RAID - what Microsoft calls JBOD. If we combine that with multi-role servers, where every server we deploy in the environment is exactly the same, we significantly reduce the complexity of the system. Every server has the same amount of RAM, the same number of disks, the same network card, the same video card, the same drivers, the same firmware/BIOS - everything is the same. You have fewer servers in your test lab to track, fewer drivers and firmware versions to test, and an easier time deciding what version of what software or firmware to deploy to which servers. On top of that, every server has exactly the same settings, including the same OS settings and the same Exchange settings, as well as any other agents or virus scanners or whatever on those servers. Everything is exactly the same at the server level. This significantly reduces your OpEx because you have a single platform to both deploy and support - simplicity of management means that your people need to do less work to support those servers, and that means it costs you less money!
Now, let’s think about the number of servers we need in an environment. I’m going to play around with the Exchange 2010 Mailbox Role Requirements Calculator a bit here, so I’ve downloaded the latest version (14.4 as of this writing). I will also start with a solution with separate roles across the board – Mailbox, Client Access and Hub Transport all separated. This is totally supported, and what many of my customers believe is the Microsoft recommended approach. After we size that and figure out how many servers we’re talking about, we will look at the multi-role equivalent.
Looking on a popular server vendor web site at their small and medium business server page, I found a server that happens to have an Intel X5677 processor, so I’ll use that as my base system – an 8-core server. Using the Exchange Processor Query Tool, I find that servers using this processor at 8-cores per server have an average SPECint 2006 Rate Value of 297, so I’ll use that in the calculator as my processor numbers. Note that by default, the servers in the Role Calculator are not marked as multi-role.
Opening the Role Requirements calculator fresh from download with no changes, I’ll put those values in as my server configuration – 8 cores and SPECint2006 rate value of 297. Making only that change, we can then look at the Role Requirements page. We have 6 servers in a DAG, 4 copies of the data, and we have 24,000 users, and the server processors will be 36% utilized, and the servers will require 48 GB of RAM. Not bad, all in all. EXCEPT… That is really quite underutilized as far as processor is concerned. Open the calculator yourself, and “hover over” the “Mailbox Role CPU Utilization” field under the “Server Configuration” section of the “Role Requirements” tab. There is a comment to help you understand the utilization numbers. That comment says that for Mailbox role machines in a DAG with only the mailbox role configured, that we should not go over 80% utilization. But we’re at 36% utilization. That’s a lot of wasted processor! We just spent the money on an 8-core system, and we aren’t using that. So, I’m going to pull 4 cores out of each server.
According to the Processor Query Tool, a 4-core system with that processor will have a SPECint2006 rate value of 153. Let’s see what that does by putting that into the Mailbox Role Calculator. That moves us to 69% processor utilization, which is much better. I would feel much better recommending that to one of my customers. This change didn’t affect our memory requirements at all.
The next thing we’ll look at is the number of cores of the other roles we need. At the top of the “Role Requirements” tab, we can see that this solution will require a minimum of 2 cores of Hub Transport, and 8 cores of Client Access. So, as good engineers we propose our customers have one 4-core server for Hub Transport and two 4-core servers for Client Access, right? Absolutely not! We designed this solution for 2 server failures (6 servers in the DAG with 4 copies can sustain 3 server failures, which is a bit excessive, but sustaining 2 server failures is quite common, so for our example we’ll stick with that). So, for CAS and HT both, we need 2 additional servers for server failure scenarios. If I lose 2 CAS servers, I still need to have 8 cores online on my remaining CAS servers to support a fully utilized environment – that means I need a minimum of 4 CAS servers with 4-cores each. If I lose 2 HT servers, I need 1 remaining server to handle my message traffic load (really one half of a server – 2 cores – but you can’t do that), so I need a minimum of 3 HT servers.
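The spare-capacity arithmetic above generalizes to a simple formula: servers needed = ceiling(required cores / cores per server) + number of server failures to tolerate. A quick sketch (the inputs come from the example above; this is an illustration, not the Role Requirements Calculator's full logic):

```powershell
# Servers needed so that $coresNeeded cores remain online after
# $failures simultaneous server failures.
function Get-ServerCount($coresNeeded, $coresPerServer, $failures) {
    [math]::Ceiling($coresNeeded / $coresPerServer) + $failures
}

Get-ServerCount 8 4 2   # CAS: 2 base + 2 spare = 4 servers
Get-ServerCount 2 4 2   # HT:  1 base + 2 spare = 3 servers
```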
How many servers is this all together? We have 6 Mailbox servers, 4 Client Access servers, and 3 Hub Transport servers - 13 servers in total. Not too bad for 24,000 users, right? What are the memory requirements of these three server types? CAS guidance is 2 GB per core, and HT guidance is 1 GB per core. So we have 6 servers with 48 GB (Mailbox), 4 servers with 8 GB (CAS), and 3 servers with 4 GB (HT). Our relatively simple environment here has 3 different server types, 3 different memory configurations, 3 different OS configurations, 3 different Exchange configurations, and 3 different patching/maintenance plans. Simple? I think not.
Now, using the same server, the same processor, the same everything as above, we’ll simply change the calculator to multi-role. On the “Input” tab, I change the multi-role switch (“Server Multi-Role Configuration (MBX+CAS+HT)”) to “Yes”. Now, over to the “Role Requirements” tab, and … WHOA!!! Big red blotch on my spreadsheet! What does that mean? Can’t be good.
Once again, if we look at the comment that the sadistic individual who wrote the calculator has left for us, we can see that a multi-role server in a mailbox resiliency solution should not have 40% or higher utilization for the Mailbox role. This is because we have the Client Access and Hub Transport roles on the server as well, and they have processor requirements. What we basically do here is allocate half of our processor utilization to the CAS and HT roles in this situation. So, let’s go back to the 8 cores per server using SPECint2006 rate value of 297.
That change gets us back into the right utilization range. Looking at the “Role Requirements” tab again, we now have a “Mailbox Role CPU Utilization” of 36%. Since the maximum we want is 39%, that is a decent utilization of our hardware. The other change is that we were bumped from 48 GB of RAM per server to 64 GB, which adds to the price of the servers, but that bump is not nearly as expensive as it was a few years ago, and I see a lot of customers putting 64 GB or more of RAM in almost all of their servers (I actually see a lot of 128 GB machines out there).
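The 39%/40% ceiling falls out of two numbers already discussed: an 80% overall utilization target for the box, with roughly half of that reserved for the collocated CAS and HT roles. A hypothetical check (illustrative only; the calculator's real model is considerably more detailed):

```python
TOTAL_CPU_TARGET = 0.80   # recommended peak utilization for the server
MAILBOX_SHARE = 0.5       # roughly half the target is left for the Mailbox role

def mailbox_utilization_ok(mailbox_cpu):
    """True if the Mailbox role leaves enough headroom for CAS + HT."""
    return mailbox_cpu < TOTAL_CPU_TARGET * MAILBOX_SHARE  # i.e. below 40%
```

At the 36% figure from this design the check passes; at 40% or higher the calculator paints the red blotch.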
Now, the great thing about this is the fact that we are down to 6 servers total. Let’s think about the things that go into the total cost of ownership of servers:
I think that the bottom line here is that there are a lot of reasons to start with the multi-role server as your “first cut” architecture, not the least of which is the simplicity of the design compared to the complexity introduced by having each role separated.
There are very few cases where the multi-role servers are not the appropriate choice, and quite often in those cases, manipulating the number of servers or the number of DAGs slightly will change things so that the multi-role is possible.
Let’s think about the case with virtualization. Our guidance around virtualization is that a good “rule of thumb” overhead factor is 10% for the user load on a server if you are virtualizing. In other words, a given guest machine will be able to do 10% less work than it would as a physical machine with the exact same physical processor. Or, another way to look at this is that each user causes about 10% more work in the guest than they would in a physical implementation. So, I can use a 1.1 “Megacycle Multiplication Factor” in my user tier definition on the “Input” tab, and that puts me at 39% processor utilization for this hardware. Of course, we haven’t taken into account the fact that we haven’t allocated processors for the host, and the fact that we have to pay extra licensing fees to VMware if we want to run 8-core guest OSes.
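The 1.1 factor is simply a multiplier on the per-user processor cost, so its effect on utilization is easy to see. A sketch (illustrative Python; the numbers are the ones from this example):

```python
VIRTUALIZATION_OVERHEAD = 1.1  # ~10% hypervisor overhead rule of thumb

def virtualized_cpu(physical_cpu_utilization):
    """Estimated utilization once the same load runs inside a guest VM."""
    return physical_cpu_utilization * VIRTUALIZATION_OVERHEAD

# The 36% physical figure from earlier lands at roughly 39.6% virtualized,
# right at the edge of the ~40% ceiling for a multi-role server.
```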
If we go back to our 4-core example, set our “Megacycle Multiplication Factor” to 1.1, and say that we have 2 DAGs rather than 1, our processor utilization for these 4-core multi-role servers goes to 38%, making this a reasonable virtualized multi-role solution. Other customers might decide to split the roles out, possibly with say 6 Mailbox role servers virtualized, and CAS and HT collocated on 6 more virtualized servers.
Either of these solutions is certainly supported, but we now would have twice as many servers to monitor and manage as we would with our physical multi-role servers. And as we saw above, more servers will typically mean more cost in your patching OpEx. What we’re really trying to do here is force a solution (virtualization) when the requirements don’t drive us that direction in many cases. Don’t get me wrong – I fully support and recommend a virtualized Exchange environment when the customer’s requirements drive us to leverage virtualization as a tool to give them the best Exchange solution for the money they spend, but when customers want to virtualize Exchange just because they want every workload virtualized, that is trying to shove a square peg into a round hole. Note that the next installment of Robert’s Rules will be around virtualized Exchange, when it is right, and when it is wrong.
I see this as exactly the same question as the virtualization question above. Certain sizing situations might take the proposed solution out of the realm of where blade servers can provide a hardware capability that is required. For instance, I have seen cases where the hardware requirements of the multi-role servers are quite a bit more than a blade server can provide (think 16, 24 or 32-core machines with 128 GB of RAM or similar). Generally, this means that you have a very high user load on those servers, and you could reduce the core count or memory requirements by reducing the number of users per server. As we showed above, you can do this by adding more servers or more DAGs.
We all know that the recommendation for Exchange 2010 is to use hardware load balancers, but the fact is that using Windows Network Load Balancing (WNLB - a software load balancer that comes with Windows Server 2008 and Windows Server 2008 R2 as well as older versions) is supported and some customers will use that. Hardware load balancers cost money. Sometimes there is no money for purchase of hardware load balancers for smaller implementations – although there are some very cost effective hardware load balancers from some Microsoft partners out there today that could get you into a highly available hardware load balanced solution for approximately US$3,000.00. So I would argue that there are very few customers that couldn’t afford that.
The single real limitation of the multi-role is the fact that WNLB is not supported on the same server running Windows Failover Clustering. Although Exchange 2010 administrators never have to deal with cluster.exe or any other cluster admin tools, the fact is that DAGs utilize Windows Failover Clustering services to provide a framework for the DAG, utilizing features such as the quorum model and a few other things. This means that if you have a DAG in a multi-role architecture, and you need load balancing (as all highly available solutions will need), you will be forced to purchase a hardware load balancer.
In some organizations the network team has standardized on a given load-balancing appliance vendor, and branch offices that need highly available Exchange deployed locally may not be allowed to purchase another vendor’s low-cost load balancer hardware. In cases like this where WNLB is required, a multi-role implementation will not be possible.
As we have discussed already, sizing of multi-role Exchange Server 2010 deployments is not much different than the sizing you do today. The Mailbox Role Calculator is already designed to support that configuration. Just choose “Yes” for the second setting in the top left of the “Input” tab (at least that is where it is in version 14.4 of the calculator), and make sure that your processor utilization isn’t over 39%.
Note that the “39%” is really based on the fact that Microsoft recommendations are built around a maximum processor utilization of 80%. In reality, that is an arbitrary number that Microsoft chose because it is quite safe – if we make recommendations around that 80% number and for some reason you have 10% more utilization than you expected, you are still safe because in the worst case you will then have 90% utilization on those servers. Some of our customers choose 90% as their threshold, and this is a perfectly acceptable number when you know what you are doing and understand the risks around your estimate of how your users will utilize the Exchange servers and their processors. The spreadsheet will have red flags, but those are just to make sure you see that there is something outside of our 80% total number.
There are a few things that are different in the technical details of how Exchange works and how you will manage Exchange when you consider having multi-role or separated role implementations. For instance, when the Mailbox role hosting a mailbox database and Hub Transport role coexist on a server within a DAG, the two roles are aware of this. In this scenario when a user sends a message, if possible, the Mailbox Submission service on the Mailbox role on this server will select a Hub Transport role on another server within the Active Directory site to submit the message for transport (it will fall back to the Hub Transport role collocated on that server if there is not another Hub Transport role in the AD site). This allows Exchange as a system to remove the single point of failure where a message goes from the mailbox database on a server to transport on the same server and then the server fails, thus not allowing shadow redundancy to provide transport resilience. The reverse is also true – if a message is delivered from outside the site to the Hub Transport that is collocated with the Mailbox role holding the destination mailbox, the message will be redirected through another Hub Transport role if possible to provide for transport redundancy. For more information on this, please see Hub Transport and Mailbox Server Roles Coexistence When Using DAGs in the documentation.
Another thing to note is that when you design your DAG implementation, you should always define where the file share will exist for your File Share Witness (FSW) cluster resource. When you create the DAG with the New-DatabaseAvailabilityGroup cmdlet, you can choose to not specify the WitnessDirectory and WitnessServer parameters. In this case, Exchange will choose a server and directory for your FSW, typically on a Hub Transport server. Well, in the case where every Hub Transport server in the Active Directory site is also a member of that DAG, this introduces a problem! Where do you store the File Share Witness? My solution to that is to have another machine (could be virtual, but if so, it must be on separate physical hardware than your DAG servers) that can host a file share. This could be a machine designated as a file share host, or a printer host, or similar. I wouldn’t recommend a Global Catalog or Domain Controller server because as an administrator, I don’t want file shares on my Domain Controllers for security reasons and I don’t want to grant my Exchange Trusted Subsystem security group domain administrator privileges!
There are also some management scenarios that you might need to account for. For instance, when you patch a multi-role server, you might have to change your patching plans compared to what you are doing with your Exchange 2003 or 2007 implementation today. For more information on this, please see my Patching the Multi-Role Server DAG post.
You will configure Exchange the same way whether you have the roles separated or you have a multi-role deployment. The majority of the difference comes down to what we have already discussed: sizing of the servers, simplicity in the design of your implementation, simplicity of managing the servers, and in most cases cost savings because of fewer servers and simpler management. Choosing not to deploy multi-role Exchange 2010 architectures introduces complexity to your system that is most likely not required, and that introduces costs and raises risk to your Exchange implementation (remember that every complexity raises risks, no matter how small the complexity or how small the risk).
The conclusion here is the same thing you will hear me saying over and over again. You should always start your Exchange Server 2010 design efforts around the most simple solution possible – multi-role servers with JBOD storage (no-RAID on direct attached storage). Only move away from the simplest solution if there is a real reason to do so. As I said, I always start design discussions with multi-role, and that is the recommended solution from the Exchange team.
When I first designed this blog series, the idea was that I wanted to make sure I showed how to do load balancing. At the time, we didn’t have easily available virtualized versions of hardware load balancers, at least not on the Hyper-V platform. Since the series started, Kemp has made a Hyper-V version of their load balancer available, and I am going to show that in the blog. Ross Smith IV has been telling me over and over that there is little value in showing Windows Network Load Balancing since we strongly recommend a hardware load balancer in large enterprise deployments.
SO…
I’m going to redesign my Exchange 2010 implementation at Robert’s Rules to utilize the multi-role environment. Before writing the next post (on Virtualization of Exchange, as I said above), I will revise the scenario article with a new image and utilization of a 4-node DAG – 2 nodes in the HSV datacenter, 2 nodes in the LFH datacenter.
And once again, thanks to all of you for reading these posts! Keep up the great questions and comments!!
Robert Gillies
You may have noticed a change in the behavior of the Safe Senders list within Outlook starting in Exchange 2010. Users can no longer add accepted domains to Outlook’s Safe Senders list.
This was done as an anti-spam deterrent as we have all seen cases where Joe The Spammer spoofs the mail from your own domain. Adding your own domain to the Safe Senders list would bypass all Outlook client-side anti-spam checks, dumping that message from the Nigerian prince (spoofed using your own domain) into your users’ Inboxes. Not so good unless you were really waiting for that business opportunity.
A valid SPF record and our anti-spam agents (specifically the SenderID agent) would go a long way to block these types of spam. However, many customers out there have not exactly jumped on the SPF bandwagon.
You can learn more about SenderID filtering in Sender ID and Understanding Sender ID. Use the Sender ID Framework SPF Record Wizard to create an SPF record for your domain.
With Exchange 2010, you CAN still add individual email addresses from your own accepted domains to the Safe Senders list - you just can’t add the entire domain, as shown in the screenshot above.
If you try to do this for a user via the Shell, you will get the very helpful error below:
“<@yourdomain.com>” is your e-mail address or domain and can’t be added to your Safe Senders and Recipients list.
We tell you exactly why we are throwing an error.
How about when a user does this via Outlook? First, Outlook lets the user add a domain.
However after a few minutes the entry will magically disappear.
In Exchange 2010 SP1, a bug was introduced where if the user added the accepted domain to his/her Safe Senders list via Outlook - not only would the accepted domain entry disappear but it would take the user’s entire safe senders list with it. This was fixed in E2010 SP1 RU3v3 where we are back to the expected behavior.
Many customers have applications that submit mail anonymously to Exchange, where the messages come from email addresses in their accepted domains.
Some of you have apps submitting from so many accepted domain addresses that it wouldn’t be feasible (let alone fun) attempting to add all of these addresses individually to the safe senders list in Outlook to ensure these messages do not end up in junk mail.
Now that we can’t add the whole domain, what are our options?
When the sending SMTP host’s IP address is on the IP Allow List in Exchange, it bypasses all anti-spam checks (except sender/recipient filtering) and the Content Filter agent stamps an SCL of -1 on the message which Outlook will honor.
X-MS-Exchange-Organization-AuthAs: Anonymous
X-MS-Exchange-Organization-Antispam-Report: IPOnAllowList
X-MS-Exchange-Organization-SCL: -1
So, go ahead and run Install-AntispamAgents.ps1 from the Scripts folder on your Hub Transport server, and then add the IP addresses or subnets of your application servers to the IP Allow List.
If using the Shell, use this command to add an IP address to the IP Allow List:
Add-IPAllowListEntry –IPAddress 192.168.10.120
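If you want to verify the end result on a delivered message, the X-headers shown above are the thing to check. A rough sketch (illustrative Python; assumes you already have the raw message headers as text):

```python
def content_filter_bypassed(raw_headers: str) -> bool:
    """True if the message was stamped SCL -1, which Outlook honors by
    skipping its own junk-mail evaluation."""
    for line in raw_headers.splitlines():
        if line.strip().lower() == "x-ms-exchange-organization-scl: -1":
            return True
    return False
```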
Now I know what you’re thinking: Why don’t we just add Externally Secured Authentication as an authentication type on a Receive Connector, scope the connector’s remote IP range to the sending application servers and call it a day?
Well, not so fast... you see, while Externally Secured gives the sending IP the ms-Exch-Bypass-Anti-Spam extended right, this only circumvents the Exchange Anti-Spam checks, not Outlook’s. And it is Outlook that’s moving the message into junk mail in this case.
Also note that Externally Secured does not stamp any SCL X-headers on the message as an SCL of -1 would’ve bypassed Outlook’s checks. The only header this authentication type creates is the one below:
X-MS-Exchange-Organization-AuthAs: Internal
If you're still confused about Exchange and Outlook anti-spam checks, take a look at Exchange anti-spam myths revealed.
Big thanks to Tak Chow for tech reviewing this post.
Tom Kern
The Exchange support team fairly frequently receives cases where mobile devices using the Exchange ActiveSync (EAS) protocol send so many requests to the Exchange server that it runs out of resources, effectively causing a denial of service (DoS). The worst outcome of such a situation is that the server also becomes unavailable to other users, who may not even be using the EAS protocol to connect. We have documented this issue, with possible mitigations, in the following KnowledgeBase article:
2469722 Unable to connect using Exchange ActiveSync due to Exchange resource consumption
A recent example of this issue was Apple iOS 4.0 devices retrying a full sync every 30 seconds (see TS3398). Another example is devices that do not understand how to handle a ‘mailbox full’ response from the Exchange server, resulting in repeated attempts to reconnect. This can cause such devices to attempt to connect & sync with the mailbox more than 60 times in a minute, killing battery life on the device and causing performance issues on the server.
Managing mobile devices & balancing available server resources among different types of clients can be a daunting challenge for IT administrators. Trying to track down which devices are causing resource depletion issues on Exchange 2010/2007 Client Access server (CAS) or Exchange 2003 Front-end (FE) server is not an easy task. As referenced in the article above, you can use Log Parser to extract useful statistics from IIS logs (see note below), but most administrators do not have the time & expertise to draft queries to extract such information from lengthy logs.
The purpose of this post is to introduce everyone in the Exchange community to a new PowerShell script that can be used to identify devices causing resource depletion issues, help spot performance trends, and automatically generate reports for continuous monitoring. Using this script you can easily & quickly drill into your users' EAS activity, which can be a major task when faced with IIS logs that can grow to several gigabytes in size. The script also makes it easier to identify users with multiple EAS devices. You can use it as a tool to establish a baseline during periods of normal EAS activity and then use that for comparison and reporting when things sway in other directions. It also provides an auto-monitoring feature which you can use to receive e-mail notifications.
Note: The script works with IIS logs on Exchange 2010, Exchange 2007 and Exchange 2003 servers. All communication between mobile devices using EAS protocol and Microsoft Exchange is logged in IIS Logs on CAS/FE servers in W3C format. The default W3C fields enabled for logging do vary between IIS 6.0 and 7.0/7.5 (IIS 7.0 has the same fields as 7.5). This script works against both versions.
Because EAS uses HTTP, all EAS requests are logged in the IIS logs; logging is enabled by default. Sometimes administrators disable IIS logging to save space on servers. You can check whether logging is enabled and find the location of the log files by following these steps:
IIS 7
IIS 6
Before we delve into the specifics of the script, let's review some important requirements for mobile devices that use EAS to communicate with Microsoft Exchange.
The script utilizes Microsoft Log Parser 2.2 to parse IIS logs and generate results. It creates different SQL queries for Log Parser based on the switches (see table below) you use. A previous blog post, Exchange 2003 - Active Sync reporting, talks about Log Parser and touches on similar points. The information in that post still applies to Exchange 2010 & 2007. Since that blog post, additional commands were added to the EAS protocol, which are also utilized by this new script while processing the logs.
Here's a list of the EAS commands that the script will report in results:
Sync, SendMail, SmartForward, SmartReply, GetAttachment, GetHierarchy, CreateCollection, DeleteCollection, MoveCollection, FolderSync, FolderCreate, FolderDelete, FolderUpdate, MoveItems, GetItemEstimate, MeetingResponse, Search, Settings, Ping, ItemOperations, Provision, ResolveRecipients, ValidateCert
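If you cannot use Log Parser, the per-device tally the script produces can be approximated directly from the W3C-format logs, since each EAS request carries its command and DeviceId in the query string. A simplified sketch (illustrative Python, not the actual script; assumes the logged fields include cs-uri-stem and cs-uri-query):

```python
from collections import Counter, defaultdict
from urllib.parse import parse_qs

def tally_eas(log_lines, minimum_hits=0):
    """Count EAS hits and commands per DeviceId from W3C IIS log lines.

    Returns {device_id: (total_hits, Counter_of_commands)} for devices
    with at least `minimum_hits` requests."""
    fields = []
    hits = Counter()
    commands = defaultdict(Counter)
    for line in log_lines:
        if line.startswith("#Fields:"):
            fields = line.split()[1:]   # column names for this log section
            continue
        if line.startswith("#") or not line.strip():
            continue                    # other directives / blank lines
        row = dict(zip(fields, line.split()))
        if "Microsoft-Server-ActiveSync" not in row.get("cs-uri-stem", ""):
            continue                    # not an EAS request
        query = parse_qs(row.get("cs-uri-query", ""))
        device = query.get("DeviceId", ["unknown"])[0]
        hits[device] += 1
        commands[device][query.get("Cmd", ["?"])[0]] += 1
    return {d: (n, commands[d]) for d, n in hits.items() if n >= minimum_hits}
```

This mirrors the -MinimumHits idea from the script's parameter table: pass a threshold and only the busy devices come back.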
For more details about each EAS command, see ActiveSync HTTP Protocol Specification on MSDN.
In addition to these commands, the following parameters are also logged by the script.
Note: The existence of the 503 status code does not imply that a server must use it when becoming overloaded. Some servers may wish to simply refuse the connection. (Ref: RFC 2616)
InvalidContent, ServerError, ServerErrorRetryLater, MailboxQuotaExceeded, DeviceIsBlockedForThisUser, AccessDenied, SyncStateNotFound, DeviceNotFullyProvisionable, DeviceNotProvisioned, ItemNotFound, UserDisabledForSync
You can process logs using this script to retrieve the following details:
Please make sure you have the following installed on your machine before using this script:
IIS CSV headers to export in the -HTMLReport.
Defaults: "DeviceID,Hits,Ping,Sync,FolderSync,DeviceType,User-Agent"
IIS log directory. Example: -IISLogs D:\Server,'D:\Server 2'
Top hits to return. Example: -TopHits 50. This cannot be used with -Hourly or -ReportBySeconds.
Below are some examples (with commands) on how you can use the script and why you might use them.
The following command will parse all the IIS Logs in the folder W3SVC1 and only report the hits by users & devices that are greater than 1000.
.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec “C:\Program Files (x86)\Log Parser 2.2\LogParser.exe” -ActiveSyncOutputFolder c:\EASReports -MinimumHits 1000
[In the above command, the script ‘ActiveSyncReport.ps1’ is located at the root of the C: drive; the -IISLog switch specifies the default location of the IIS logs; the -LogparserExec switch points to the Log Parser executable; the -ActiveSyncOutputFolder switch provides the location where the output file will be saved; and -MinimumHits with a value of ‘1000’ is the script parameter explained in the table above.]
Output:
Usually, if a device is sending over 1000 requests per day, we consider this ‘high usage’. If the hits (requests) exceed 1500, there could be an issue on the device or in the environment. In that case, the device & its user’s activity should be investigated further.
As a real world example, in one case we noticed there were several users who were hitting their Exchange server via EAS a lot (~25K hits, 1K hits per hour) resulting in depletion of resources on the server. Upon further investigation we saw that all of those users’ requests were resulting in a 507 error on mailbox servers on the back-end. Talking to those EAS users we discovered that during that time period they were hitting their mailbox size limits (25 MB) & were trying to delete mail from different folders to get under the size limit. In such situations, you may also see HTTP 503 (‘TooManyJobsQueued’) responses in IIS logs for EAS requests as described in KB: 2469722
The following command will parse all the IIS logs in the folder W3SVC1, look for the Device ID xxxxxx, and display its hourly statistics.
.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec “C:\Program Files (x86)\Log Parser 2.2\LogParser.exe” -ActiveSyncOutputFolder c:\EASReports –DeviceID xxxxxx -Hourly
With the above information you can pick a user/device and see the hourly trends. This can help identify if it’s a user action or a programmatic one.
As a real world example, in one case we had to find out which devices were modifying calendar items. So we looked at the user/device activity and sorted it by the different commands being sent to the server. After that we concentrated on which users/devices were sending the ‘MeetingResponse’ command, and on its frequency, time period & further related details. That helped us narrow the issue down to the related users and their calendar-specific activity so we could better address the underlying calendaring issue.
Another device-related command & error to look for is the ‘Options’ command; if it does not succeed for a device, an HTTP 409 error code is returned in the IIS log.
The following command will parse only the files that match the date 12-24-2011 in the folder W3SVC1 and will only report the hits greater than 1000.
.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec “C:\Program Files (x86)\Log Parser 2.2\LogParser.exe” -ActiveSyncOutputFolder c:\EASReports -MinimumHits 1000 –Date 12-24-2011
With the above information you can identify users sending a high number of requests. Also, within the columns, you can see what kind of commands those users are sending. This helps in coming up with more directed & efficient troubleshooting.
When analyzing IIS logs with the help of the script, look for one specific command being sent over and over again. The frequency with which particular commands are sent is important, and any command that fails frequently deserves a closer look. We should also compare the wait times between executions of certain commands. Generally, commands taking a long time to execute, or resulting in delayed responses from the server, are suspicious & should be investigated further. Keep in mind, though, that the Ping command is an exception: it takes longer to execute by design, and you will see it frequently in the log as well, which is expected.
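Comparing wait times between requests is straightforward once you have per-device timestamps out of the log. A small sketch (illustrative Python; timestamps are the 'date time' strings as logged in W3C format):

```python
from datetime import datetime

def request_intervals(timestamps):
    """Gaps, in seconds, between consecutive requests from one device."""
    parsed = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M:%S")
                    for t in timestamps)
    return [(b - a).total_seconds() for a, b in zip(parsed, parsed[1:])]

# A device retrying a full sync every ~30 seconds (the iOS 4.0 case
# mentioned earlier) shows up as a long run of ~30-second gaps.
```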
If you notice continuous failures to connect for a device with an error code of 403, that could mean the device is not enabled for EAS-based access. Sometimes mobile device users complain of connectivity issues without realizing that they’re not entering their credentials correctly (understandably, it’s easy to make such mistakes on mobile devices). When looking through the logs, you can focus on that user and may find that the user’s device is failing after issuing the ‘Provision’ command.
You may want to create a report or generate an e-mail with such reports and details of user activity.
The following command will parse all the IIS Logs in the folder W3SVC1 and will only report the hits greater than 1000. Additionally it will create an HTML report of the results.
.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec “C:\Program Files (x86)\Log Parser 2.2\LogParser.exe” -ActiveSyncOutputFolder c:\EASReports -MinimumHits 1000 -HTMLReport
The following command will parse all the files in the folders C:\Server1_Logs and D:\Server2_Logs and will also email the generated report to ‘user@contoso.com’.
.\ActiveSyncReport.ps1 -IISLog "C:\Server1_Logs",”D:\Server2_Logs” -LogparserExec “C:\Program Files (x86)\Log Parser 2.2\LogParser.exe” -ActiveSyncOutputFolder c:\EASReports -SendEmailReport -SMTPRecipient user@contoso.com –SMTPSender user2@contoso.com -SMTPServer mail.contoso.com
We sincerely hope our readers find this script useful. Please do let us know how it has made your lives easier and what else we can do to further enhance it.
Konstantin Papadakis and Brian Drepaul
Special Thanks to: M. Amir Haque, Will Duff, Steve Swift, Angelique Conde, Kary Wall, Chris Lineback & Mike Lagase