Exchange Team Blog

  • Released: Exchange Server 2013 Service Pack 1

    Exchange Server 2013 Service Pack 1 (SP1) is now available for download! Please make sure to read the release notes before installing SP1. The final build number for Exchange Server 2013 SP1 is 15.00.0847.032.

    SP1 has already been deployed to thousands of production mailboxes in customer environments via the Exchange Server Technology Adoption Program (TAP). In addition to including fixes, SP1 provides enhancements to improve the Exchange 2013 experience. These include enhancements in security and compliance, architecture and administration, and user experiences. These key enhancements are introduced below.

    Note: Some of the documentation referenced may not be fully available at the time of publishing of this post.

    Security and Compliance

    SP1 provides enhancements improving security and compliance capabilities in Exchange Server 2013. This includes improvements in the Data Loss Prevention (DLP) feature and the return of S/MIME encryption for Outlook Web App users.

    • DLP Policy Tips in Outlook Web App – DLP Policy Tips are now enabled for Outlook Web App (OWA) and OWA for Devices. These are the same Policy Tips available in Outlook 2013. DLP Policy Tips appear when a user attempts to send a message containing sensitive data that matches a DLP policy. Learn more about DLP Policy Tips.
    • DLP Document Fingerprinting – DLP policies already allow you to detect sensitive information such as financial or personal data. DLP Document Fingerprinting expands this capability to detect forms used in your organization. For example, you can create a document fingerprint based on your organization’s patent request form to identify when users are sending that form, and then use DLP actions to properly control dissemination of the content. Learn more about DLP Document Fingerprinting.
    • DLP sensitive information types for new regions – SP1 provides an expanded set of standard DLP sensitive information types covering an increased set of regions. SP1 adds region support for Poland, Finland and Taiwan. Learn more about the DLP sensitive information types available.
    • S/MIME support for OWA – SP1 also reintroduces the S/MIME feature in OWA, enabling OWA users to send and receive signed and encrypted email. Signed messages allow the recipient to verify that the message came from the specified sender and contains only the content from the sender. This capability is supported when using OWA with Internet Explorer 9 or later. Learn more about S/MIME in Exchange 2013.

    Architecture & Administration

    These improvements help Exchange meet our customer requirements and stay in step with the latest platforms.

    • Windows Server 2012 R2 support – Exchange 2013 SP1 adds Windows Server 2012 R2 as a supported operating system and Active Directory environment for both domain and forest functional levels. For the complete configuration support information, refer to the Exchange Server Supportability Matrix. The matrix also includes Windows Server 2012 R2 support information for earlier versions of Exchange.
    • Exchange Admin Center Cmdlet Logging – The Exchange 2010 Management Console includes PowerShell cmdlet logging functionality. Listening to your feedback, we’re happy to announce that this functionality is now included in the Exchange Admin Center (EAC). The logging feature enables you to capture and review the recent (up to 500) commands executed in the EAC user interface while the logging window is open. Logging is invoked from the EAC help menu and continues logging while the logging window remains open.


    • ADFS for OWA – Also new for Outlook Web App in SP1 is claims-based authentication for organizations using Active Directory Federation Services. Learn more about the scenario.
    • Edge Transport server role – SP1 also reintroduces the Edge Transport server role. If you have deployed Exchange 2013 with a supported legacy Exchange Edge Transport role, you don’t need to upgrade. That configuration is still supported. But we do recommend that future deployments use the Exchange 2013 Edge Transport role. Learn more about Edge Transport in Exchange 2013.
    • New communication method for Exchange and Outlook – SP1 introduces a new communication method for Exchange Server and Microsoft Outlook called MAPI over HTTP (MAPI/HTTP). This communication method simplifies connectivity troubleshooting and improves the user connection experience when resuming from hibernation or switching networks. MAPI/HTTP is disabled by default, allowing you to decide when to enable it for your organization. MAPI/HTTP can be used in place of RPC/HTTP (Outlook Anywhere) for your Outlook 2013 SP1 clients while Outlook 2013 RTM and older clients continue to use RPC/HTTP. Learn more about deploying MAPI/HTTP.
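    If you decide to turn it on, MAPI/HTTP is enabled at the organization level. A minimal sketch (assuming the SP1 Set-OrganizationConfig parameter; verify against your environment before use):

    ```powershell
    # Enable MAPI over HTTP for the organization (disabled by default in SP1).
    # Outlook 2013 SP1 clients can then use it; older clients keep using RPC/HTTP.
    Set-OrganizationConfig -MapiHttpEnabled $true

    # Verify the setting
    Get-OrganizationConfig | Format-List MapiHttpEnabled
    ```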
    • DAGs without Cluster Administrative Access Points - Windows Server 2012 R2 introduces failover clusters that can operate without an administrative access point: no IP addresses or IP address resource, no network name resource, and no cluster name object. SP1 enables you to create a DAG without an administrative access point on Windows Server 2012 R2 from EAC or PowerShell. This is an optional DAG configuration for SP1 and requires Windows Server 2012 R2. DAGs with administrative access points continue to be supported. Learn more about creating a DAG without an administrative access point here and here.
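    Creating such a DAG from the Shell is a one-liner. A hedged sketch (the DAG and witness server names are placeholders; requires all DAG members on Windows Server 2012 R2):

    ```powershell
    # Create a DAG without a cluster administrative access point:
    # no IP address resource, no network name resource, no cluster name object.
    # "DAG1" and "FS01" are example names for this sketch.
    New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer FS01 `
        -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)
    ```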
    • SSL offloading – SP1 now supports SSL offloading, allowing you to terminate incoming SSL connections in front of your CAS servers and move the SSL workload (encryption & decryption tasks) to a load balancer device. Learn how to configure SSL offloading in Exchange 2013.

    User Experience

    We know the user experience is crucial to running a great messaging platform. SP1 provides continued enhancements to help your users work smarter.

    • Enhanced text editor for OWA - OWA now uses the same rich text editor as SharePoint, improving the user experience and enabling several new formatting and composition capabilities that you expect from a modern Web application - more pasting options, rich previews of linked content, and the ability to create and modify tables.


    • Apps for Office in Compose – Mail apps are now available for use during the creation of new mail messages. This allows developers to build, and users to leverage, apps that can help them while they compose messages. The compose apps leverage the Apps for Office platform and can be added via the existing Office store or corporate catalogs. Learn more about Apps for Office.


    Upgrading to SP1/Deploying SP1

    As with all cumulative updates (CUs), SP1 is a full build of Exchange, and the deployment of SP1 is just like the deployment of a cumulative update.

    Active Directory Preparation

    Prior to or concurrent with upgrading or deploying SP1 onto a server, you must update Active Directory. Perform the following actions before installing SP1 on a server.

    1. Exchange 2013 SP1 includes schema changes. Therefore, you will need to execute the following command to apply the schema changes.

    setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

    2. Exchange 2013 SP1 includes enterprise Active Directory changes (e.g., RBAC roles have been updated to support new cmdlets and/or properties). Therefore, you will need to execute the following command.

    setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms

    Server Deployment

    Once the above preparatory steps are completed, you can install SP1 on your servers. Of course, as always, if you don’t separately perform the above steps, they will be performed by Setup when you install your first Exchange 2013 SP1 server. If this is your first Exchange 2013 server deployment, you will need to deploy both Client Access Server and Mailbox Server roles in your organization.

    If you already deployed Exchange 2013 RTM code and want to upgrade to SP1, you will run the following command from a command line.

    setup.exe /m:upgrade /IAcceptExchangeServerLicenseTerms

    Alternatively you can start the installation through the GUI installer.

    Hybrid deployments and EOA

    Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., SP1) or the prior (e.g., CU3) Cumulative Update release.

    Note: We have learned that some customers using 3rd-party or custom transport agents may experience issues after installing SP1. If you do, consult KB 2938053 to resolve the issue.

    Looking Ahead

    Our next update for Exchange 2013 will be released as Exchange 2013 Cumulative Update 5. This CU release will continue the Exchange Server 2013 release process.

    If you want to learn more about Exchange Server 2013 SP1 and have the opportunity to ask the Exchange team questions in person, come join us at the Microsoft Exchange Conference.

    Brian Shiers
    Technical Product Manager, Exchange

  • GAL Photos in Exchange 2010 and Outlook 2010

    Over the years, displaying recipient photographs in the Global Address List (GAL) has been a frequently-requested feature, high on the wish lists of many Exchange folks. Particularly in large organizations or geographically dispersed teams, it's great to be able to put a face to a name for people you've never met or don't frequently have face time with. Employees are commonly photographed when issuing badges/IDs, and many organizations publish the photos on intranets.

    There have been questions about workarounds or third-party add-ins for Outlook, and you can also find some sample code on MSDN and elsewhere. A few years ago, an unnamed IT person wrote ASP code to make employee photos show up on the intranet based on the Employee ID attribute in Active Directory - which was imported from the company's LDAP directory. A fun project to satisfy the coder alter-ego of the said IT person.

    Luckily, you won't need to turn to your alter-ego to do this. Exchange 2010 and Outlook 2010 make this task a snap, with help from Active Directory. Active Directory includes the Picture attribute (we'll refer to it using its ldapDisplayName: thumbnailPhoto) to store thumbnail photos, and you can easily import photos— not the high-res ones from your 20 megapixel digital camera, but small, less-than-10K-ish ones, using Exchange 2010's Import-RecipientDataProperty cmdlet.

    Photos in Active Directory? Really?

    The first question most IT folks would want to ask is— What's importing all those photos going to do to the size of my Active Directory database? And how much Active Directory replication traffic will this generate? The thumbnailPhoto attribute can accommodate photos of up to 100K in size, but the Import-RecipientDataProperty cmdlet won't allow you to import a photo that's larger than 10K. (Note, the attribute limit was stated as 10K earlier. This has been updated to state the correct value. -Bharat)

    The original picture used in this example was 9K, and you can compress it further to a much smaller size - let's say approximately 2K-2.5K - without any noticeable degradation when displayed at the smaller sizes. If you store user certificates in Active Directory, thumbnail pictures of 10K or smaller are comparable in size. Storing thumbnails for 10,000 users would take close to 100 MB, and it's data that doesn't change frequently.
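    The arithmetic behind that estimate is simple enough to check in the Shell (sizes here are approximations):

    ```powershell
    $userCount  = 10000
    $avgPhotoKB = 10     # the cmdlet-enforced maximum; compressed photos are often 2-3KB
    $totalMB    = ($userCount * $avgPhotoKB) / 1024
    $totalMB     # ~98 MB, i.e. close to 100 MB at the worst case
    ```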

    Note: The recommended thumbnail photo size in pixels is 96x96 pixels.

    With that out of the way, let's go through the process of adding pictures.

    A minor schema change

    First stop, the Active Directory Schema. A minor schema modification is required to flip the thumbnailPhoto attribute to make it replicate to the Global Catalog.

    Note: If you're on Exchange 2010 SP1, skip this step. The attribute is modified by setup / SchemaPrep.

    1. If you haven't registered the Schema MMC snap-in on the server you want to make this change on, go ahead and do so using the following command:

      Regsvr32 schmmgmt.dll

    2. Fire up a MMC console (Start -> Run -> MMC) and add the Schema snap-in
    3. In the Active Directory Schema snap-in, expand the Attributes node, and then locate the thumbnailPhoto attribute. (The Schema snap-in lists attributes by their ldapDisplayName.)
    4. Right-click the thumbnailPhoto attribute and select Properties. In the Properties page, select Replicate this attribute to the Global Catalog, and click OK.

      Figure 1: Modifying the thumbnailPhoto attribute to replicate it to Global Catalog

    Loading pictures into Active Directory

    Now you can start uploading pictures to Active Directory using the Import-RecipientDataProperty cmdlet, as shown in this example:

    Import-RecipientDataProperty -Identity "Bharat Suneja" -Picture -FileData ([Byte[]]$(Get-Content -Path "C:\pictures\BharatSuneja.jpg" -Encoding Byte -ReadCount 0))

    To perform a bulk operation you can use the Get-Mailbox cmdlet with your choice of filter (or use the Get-DistributionGroupMember cmdlet if you want to do this for members of a distribution group), and pipe the mailboxes to a foreach loop. You can also retrieve the user name and path to the thumbnail picture from a CSV/TXT file.
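    As a sketch of the CSV approach, assuming a hypothetical photos.csv with Identity and PicturePath columns (file name and columns are illustrative, not prescribed):

    ```powershell
    # photos.csv (hypothetical) contains rows like:
    #   Identity,PicturePath
    #   "Bharat Suneja",C:\pictures\BharatSuneja.jpg
    Import-Csv C:\pictures\photos.csv | ForEach-Object {
        Import-RecipientDataProperty -Identity $_.Identity -Picture `
            -FileData ([Byte[]]$(Get-Content -Path $_.PicturePath -Encoding Byte -ReadCount 0))
    }
    ```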

    Thumbnails in Outlook 2010

    Now, let's fire up Outlook 2010 and see what it looks like.

    In the Address Book/GAL properties for the recipient


    Figure 2: Thumbnail displayed in a recipient's property pages in the GAL

    When you receive a message from a user who has the thumbnail populated, it shows up in the message preview.


    Figure 3: Thumbnail displayed in a message

    While composing a message, the thumbnail also shows up when you hover the mouse on the recipient's name.


    Figure 4: Recipient's thumbnail displayed on mouse over when composing a message

    There are other locations in Outlook where photos are displayed. For example, in the Account Settings section in the Backstage Help view.

    Update from the Outlook team

    Our friends on the Outlook team have asked us to point out that the new Outlook Social Connector also displays GAL photos, as well as photos from Contacts folders and from social networks, as shown in this screenshot.


    Figure 5: Thumbnail photos displayed in the People Pane in the Outlook Social Connector

    More details and video in Announcing the Outlook Social Connector on the Outlook team blog.

    GAL Photos and the Offline Address Book

    After you've loaded photos in Active Directory, you'll need to update the Offline Address Book (OAB) for Outlook cached mode clients. This example updates the Default Offline Address Book:

    Update-OfflineAddressBook "Default Offline Address Book"

    In Exchange 2010, the recipient attributes included in an OAB are specified in the ConfiguredAttributes property of the OAB. ConfiguredAttributes is populated with a default set of attributes. You can modify it using the Set-OfflineAddressBook cmdlet to add/remove attributes as required.

    By default, thumbnailPhoto is included in the OAB as an Indicator attribute. This means the value of the attribute isn't copied to the OAB— instead, it simply indicates the client should get the value from Active Directory. If an Outlook client (including Outlook Anywhere clients connected to Exchange using HTTPS) can access Active Directory, the thumbnail is downloaded and displayed. When offline, no thumbnail is downloaded. Another example of an Indicator attribute is UmSpokenName.

    You can list all attributes included in the default OAB using the following command:

    (Get-OfflineAddressBook "Default Offline Address Book").ConfiguredAttributes

    For true offline use, you could modify the ConfiguredAttributes of an OAB to make thumbnailPhoto a Value attribute. After this is done and the OAB updated, the photos are added to the OAB (yes, all 20,000 photos you just uploaded...). Depending on the number of users and sizes of thumbnail photos uploaded, this would add significant bulk to the OAB. Test this scenario thoroughly in a lab environment— chances are you may not want to provide the GAL photo bliss to offline clients in this manner.
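    If you do decide to test this in a lab, switching the attribute from Indicator to Value follows the same ConfiguredAttributes pattern used elsewhere in this post (the attribute string format is "name,type"); a hedged sketch:

    ```powershell
    # Change thumbnailPhoto from an Indicator to a Value attribute,
    # embedding the photos in the OAB itself. Test in a lab first -
    # this can add significant bulk to the OAB.
    $attributes = (Get-OfflineAddressBook "Default Offline Address Book").ConfiguredAttributes
    $attributes.Remove("thumbnailphoto,Indicator")
    $attributes.Add("thumbnailphoto,Value")
    Set-OfflineAddressBook "Default Offline Address Book" -ConfiguredAttributes $attributes
    Update-OfflineAddressBook "Default Offline Address Book"
    ```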

    To prevent Outlook cached mode clients from displaying thumbnail photos (remember, the photo is not in the OAB – just a pointer to go fetch it from Active Directory), you can remove the thumbnailPhoto attribute from the ConfiguredAttributes property of an OAB using the following command:

    $attributes = (Get-OfflineAddressBook "Default Offline Address Book").ConfiguredAttributes
    $attributes.Remove("thumbnailphoto,Indicator")
    Set-OfflineAddressBook "Default Offline Address Book" -ConfiguredAttributes $attributes

    Bharat Suneja

    Updates:

    • 11/3/2010: Corrected size limit of thumbnailPhoto attribute to 100K.
    • 8/25/2011: Added note to reflect Exchange 2010 SP1 setup / SchemaPrep modifies thumbnailPhoto attribute to replicate to Global Catalog.
    • GAL Photos now has an FAQ. Check out GAL Photos: Frequently Asked Questions.

    To visit this post again, use the short URL aka.ms/galphotos. To go to the 'GAL Photos: Frequently Asked Questions' post, use aka.ms/galphotosfaq.

  • Ask the Perf Guy: Sizing Exchange 2013 Deployments

    Since the release to manufacturing (RTM) of Exchange 2013, you have been waiting for our sizing and capacity planning guidance. This is the first official release of our guidance in this area, and updates to our TechNet content will follow in a future milestone.

    As we continue to learn more from our own internal deployments of Exchange 2013, as well as from customer feedback, you will see further updates to our sizing and capacity planning guidance in two forms: changes to the numbers mentioned in this document, as well as further guidance on specific areas not covered here. Let us know what you think we are missing and we will do our best to respond with better information over time.

    First, some context

    Historically, the Exchange Server product group has used various sources of data to produce sizing guidance. Typically, this data would come from scale tests run early in the product development cycle, and we would then fine-tune that guidance with observations from production deployments closer to final release. Production deployments have included Exchange Dogfood (our internal pre-release deployment that hosts the Exchange team and various other groups at Microsoft), Microsoft IT’s corporate Exchange deployment, and various early adopter programs.

    For Exchange 2013, our guidance is primarily based on observations from the Exchange Dogfood deployment. Dogfood hosts some of the most demanding Exchange users at Microsoft, with extreme messaging profiles and many client sessions per user across multiple client types. Many users in the Dogfood deployment send and receive more than 500 messages per day, and typically have multiple Outlook clients and multiple mobile devices simultaneously connected and active. This allows our guidance to be somewhat conservative, taking into account additional overhead from client types that we don’t regularly see in our internal deployments as well as client mixes that might be different from what's considered “normal” at Microsoft.

    Does this mean that you should take this conservative guidance and adjust the recommendations such that you deploy less hardware? Absolutely not. One of the many things we have learned from operating our own very high-scale service is that availability and reliability are very dependent on having capacity available to deal with those unexpected peaks.

    Sizing is both a science and an art form. Attempting to apply too much science to the process (trying to get too accurate) usually results in not having enough extra capacity available to deal with peaks, and in the end, results in a poor user experience and decreased system availability. On the other hand, there does need to be some science involved in the process, otherwise it’s very challenging to have a predictable and repeatable methodology for sizing deployments. We strive to achieve the right balance here.

    Impact of the new architecture

    From a sizing and performance perspective, there are a number of advantages with the new Exchange 2013 architecture. As many of you are aware, a couple of years ago we began recommending multi-role deployment for Exchange 2010 (combining the Mailbox, Hub Transport, and Client Access Server (CAS) roles on a single server) as a great way to take advantage of hardware resources on modern servers, as well as a way to simplify capacity planning and deployment. These same advantages apply to the Exchange 2013 Mailbox role as well. We like to think of the services running on the Mailbox role as providing a balanced utilization of resources rather than having a set of services on a role that are very disk intensive, and a set of services on another role that are very CPU intensive.

    Another example to consider for the Mailbox role is cache effectiveness. Software developers use in-memory caching to prevent having to use higher-latency methods to retrieve data (like LDAP queries, RPCs, or disk reads). In the Exchange 2007/2010 architecture, processing for operations related to a particular user could occur on many servers throughout the topology. One CAS might be handling Outlook Web App for that user, while another (or more than one) CAS might be handling Exchange ActiveSync connections, and even more CAS might be processing Outlook Anywhere RPC proxy load for that same user. It’s even possible that the set of servers handling that load could be changing on a regular basis. Any data associated with that user stored in a cache would become useless (effectively a waste of memory) as soon as those connections moved to other servers. In the Exchange 2013 architecture, all workload processing for a given user occurs on the Mailbox server hosting the active copy of that user’s mailbox. Therefore, cache utilization is much more effective.

    The new CAS role has some nice benefits as well. Given that the role is totally stateless from a user perspective, it becomes very easy to scale up and down as demands change by simply adding or removing servers from the topology. Compared to the CAS role in prior releases, hardware utilization is dramatically reduced meaning that fewer CAS role machines will be required. Additionally, it may make sense for many customers to consider a multi-role deployment in which CAS and Mailbox are co-located – this allows further simplification of capacity planning and deployment, and also increases the number of available CAS which has a positive effect on service availability. Look for a follow up post on the benefits of a multi-role deployment soon.

    Start to finish, what’s the process?

    Sizing an Exchange deployment has six major phases, and I will go through each of them in this post in some detail.

    1. You begin the process by making sure you fully understand the available guidance on this topic. If you are reading this post, that’s a great start. There may have been updates posted either here on the Exchange team blog, or over on TechNet. Make sure you take a look before proceeding.
    2. The second step is to gather any available data on the existing messaging deployment (if there is one) or estimate user profile requirements if this is a totally new solution.
    3. The third step is perhaps the most difficult. At this point, you need to figure out all of the requirements for the Exchange solution that might impact the sizing process. This can include decisions like the desired mailbox size (mailbox quota), service level objectives, number of sites, number of mailbox database copies, storage architecture, growth plans, deployment of 3rd party products or line-of-business applications, etc. Essentially, you need to understand any aspect of the design that could impact the number of servers, user count, and utilization of servers.
    4. Once you have collected all of the requirements, constraints, and user profile data, it’s time to calculate Exchange requirements. The easiest way to do this is with the calculator tool, but it can also be done manually as I will describe in this post. Clearly the calculator makes the process much easier, so if the calculator is available, use it!
    5. Once the Exchange requirements have been calculated, it’s time to consider various options that are available. For example, there may be a choice between scaling up (deploying fewer larger servers) and scaling out (deploying a larger number of smaller servers), and the options could have various implications on high availability, as well as the total number of hardware or software failures that the solution can sustain while remaining available to users. Another typical decision is around storage architecture, and this often comes down to cost. There are a range of costs and benefits to different storage choices, and the Exchange requirements can often be met by more than one of these options.
    6. The last step is to finalize the design. At this point, it’s time to document all of the decisions that were made, order some hardware, use Jetstress to validate that the storage requirements can be met, and perform any other necessary pre-production lab testing to ensure that the production rollout and implementation will go smoothly.

    Gather requirements and user data

    The primary input to all of the calculations that you will perform later is the average user profile of the deployment, where the user profile is defined as the sum of total messages sent and total messages received per-user, per-workday (on average). Many organizations have quite a bit of variability in user profiles. For example, a segment of users might be considered “Information Workers” and spend a good part of their day in their mailbox sending and reading mail, while another segment of users might be more focused on other tasks and use email infrequently. Sizing for these segments of users can be accomplished by either looking at the entire system using weighted averages, or by breaking up the sizing process to align with the various segments of users. In general it’s certainly easier to size the whole system as a unit, but there may be specific requirements (like the use of certain 3rd party tools or devices) which will significantly impact the sizing calculation for one or more of the user segments, and it can be very difficult to apply sizing factors to a user segment while attempting to size the entire solution as a unit.
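    As an illustration of the weighted-average approach, with two hypothetical user segments (the counts and profiles are invented for this sketch):

    ```powershell
    # Hypothetical segments: 60,000 information workers at 150 messages/day
    # and 40,000 light users at 50 messages/day.
    $segments = @(
        @{ Users = 60000; MessagesPerDay = 150 },
        @{ Users = 40000; MessagesPerDay = 50 }
    )
    $totalUsers = 0; $totalMessages = 0
    foreach ($s in $segments) {
        $totalUsers    += $s.Users
        $totalMessages += $s.Users * $s.MessagesPerDay
    }
    $totalMessages / $totalUsers   # 110 messages/day; round up when in doubt
    ```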

    The obvious question in your mind is how to go get this user profile information. If you are starting with an existing Exchange deployment, there are a number of options that can be used, assuming that you aren’t the elusive Exchange admin who actually tracks statistics like this on an ongoing basis. If you are using Exchange 2007 or earlier, you can utilize the Exchange Profile Analyzer (EPA) tool, which will provide overall user profile statistics for your Exchange organization as well as detailed per-user statistics if required. If you are on Exchange 2010, the EPA tool is not an option for you. One potential option is to evaluate message traffic using performance counters to come up with user profile averages on a per-server basis. This can be done by monitoring the MSExchangeIS\Messages Submitted/sec and MSExchangeIS\Messages Delivered/sec counters during peak average periods and extrapolating the recorded data to represent daily per-user averages. I will cover this methodology in a future blog post, as it will take a fair amount of explanation. Another option is to use message tracking logs to generate these statistics. This could be done via some crafty custom PowerShell scripting, or you could look for scripts that attempt to do this work for you already. One of our own consultants points to an example on his blog.

    Typical user profiles range from 50-500 messages per-user/per-day, and we provide guidance for those profiles. When in doubt, round up.


    The other important piece of profile information for sizing is the average message size seen in the deployment. This can be obtained from EPA, or from the other mentioned methods (via transport performance counters, or via message tracking logs). Within Microsoft, we typically see average message sizes of around 75KB, but we certainly have worked with customers that have much higher average message sizes. This can vary greatly by industry, and by region.

    Start with the Mailbox servers

    Just as we recommended for Exchange 2010, the right way to start with sizing calculations for Exchange 2013 is with the Mailbox role. In fact, those of you who have sized deployments for Exchange 2010 will find many similarities with the methodology discussed here.

    Example scenario

    Throughout this article, we will be referring to an example deployment. The deployment is for a relatively large organization with the following attributes:

    • 100,000 mailboxes
    • 200 message/day profile, with 75KB average message size
    • 10GB mailbox quota
    • Single site
    • 4 mailbox database copies, no lagged copies
    • 2U commodity server hardware platform with internal drive bays and an external storage chassis will be used (total of 24 available large form-factor drive bays)
    • 7200 RPM 4TB midline SAS disks are used
    • Mailbox databases are stored on JBOD direct attached storage, utilizing no RAID
    • Solution must survive double failure events

    High availability model

    The first thing you need to determine is your high availability model, e.g., how you will meet the availability requirements that you determined earlier. This likely includes multiple database copies in one or more Database Availability Groups, which will have an impact on storage capacity and IOPS requirements. The TechNet documentation on this topic provides some background on the capabilities of Exchange 2013 and should be reviewed as part of the sizing process.

    At a minimum, you need to be able to answer the following questions:

    • Will you deploy multiple database copies?
    • How many database copies will you deploy?
    • Will you have an architecture that provides site resilience?
    • What kind of resiliency model will you deploy?
    • How will you distribute database copies?
    • What storage architecture will you use?

    Capacity requirements

    Once you have an understanding of how you will meet your high availability requirements, you should know the number of database copies and sites that will be deployed. Given this, you can begin to evaluate capacity requirements. At a basic level, you can think of capacity requirements as consisting of storage for mailbox data (primarily based on mailbox storage quotas), storage for database log files, storage for content indexing files, and overhead for growth. Every copy of a mailbox database is a multiplier on top of these basic storage requirements. As a simplistic example, if I was planning for 500 mailboxes of 1GB each, the storage for mailbox data would be 500GB, and then I would need to apply various factors to that value to determine the per-copy storage requirement. From there, if I needed 3 copies of the data for high availability, I would then need to multiply by 3 to obtain the overall capacity requirement for the solution (all servers). In reality, the storage requirements for Exchange are far more complex, as you will see below.
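    The simplistic example above, as arithmetic (overhead factors such as whitespace and recoverable items are deliberately omitted here; they are covered in the sections that follow):

    ```powershell
    $mailboxes = 500
    $quotaGB   = 1
    $copies    = 3
    $perCopyGB = $mailboxes * $quotaGB   # 500 GB of mailbox data per database copy
    $totalGB   = $perCopyGB * $copies    # 1500 GB across all copies, before overhead factors
    ```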

    Mailbox size

    To determine the actual size of a mailbox on disk, we must consider 3 factors: the mailbox storage quota, database white space, and recoverable items.

    The mailbox storage quota is what most people think of as the “size of the mailbox” – it’s the user-perceived size of the mailbox and represents the maximum amount of data that the user can store in their mailbox on the server. While this certainly represents the majority of space utilization for Exchange databases, it’s not the only element we have to size for.

    Database whitespace is the amount of space in the mailbox database file that has been allocated on disk but doesn’t contain any in-use database pages. Think of it as available space to grow into. As content is deleted out of mailbox databases and eventually removed from the mailbox recoverable items, the database pages that contained that content become whitespace. We recommend planning for whitespace size equal to 1 day’s worth of messaging content.

    Estimated Database Whitespace per Mailbox = per-user daily message profile x average message size

    This means that a user with the 200 message/day profile and an average message size of 75KB would be expected to consume the following whitespace:

    200 messages/day x 75KB = 15,000KB = 14.65MB

    When items are deleted from a mailbox, they are really “soft-deleted” and moved temporarily to the recoverable items folder for the duration of the deleted item retention period. Like Exchange 2010, Exchange 2013 has a feature known as single item recovery which will prevent purging data from the recoverable items folder prior to reaching the deleted item retention window. When this is enabled, we expect to see a 1.2 percent increase in mailbox size for a 14 day deleted item retention window. Additionally, we expect to see a 3 percent increase in the size of the mailbox for calendar item version logging which is enabled by default. Given that a mailbox will eventually reach a steady state where the amount of new content will be approximately equal to the amount of deleted content in order to remain under quota, we would expect the size of the items in the recoverable items folder to eventually equal the size of new content sent & received during the retention window. This means that the overall size of the recoverable items folder can be calculated as follows:

    Recoverable Items Folder Size = (per-user daily message profile x average message size x deleted item retention window) + (mailbox quota size x 0.012) + (mailbox quota size x 0.03)

    If we carry our example forward with the 200 message/day profile, a 75KB average message size, a deleted item retention window of 14 days, and a mailbox quota of 10GB, the expected recoverable items folder size would be:

    (200 messages/day x 75KB x 14 days) + (10GB x 0.012) + (10GB x 0.03)
    = 210,000KB + 125,829.12KB + 314,572.8KB = 635.16MB

    Given the results from these calculations, we can sum up the mailbox capacity factors to get our estimated mailbox size on disk:

    Mailbox Size on disk = 10GB mailbox quota + 14.65MB database whitespace + 635.16MB Recoverable Items Folder = 10.63GB
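    The mailbox-size-on-disk arithmetic above can be sketched in a few lines of Python. The helper names and KB-based units are ours; the input figures (200 messages/day, 75KB messages, 14-day retention, 10GB quota) are the article’s example values.

```python
# Hypothetical sketch of the "mailbox size on disk" calculation.
# All arithmetic is done in kilobytes.

MB = 1024            # KB per MB
GB = 1024 * 1024     # KB per GB

def whitespace_kb(msgs_per_day, avg_msg_kb):
    """Estimated database whitespace: one day's worth of messaging content."""
    return msgs_per_day * avg_msg_kb

def recoverable_items_kb(msgs_per_day, avg_msg_kb, retention_days, quota_kb):
    """Deleted item retention window content, plus 1.2% of quota for
    single item recovery and 3% for calendar version logging."""
    return (msgs_per_day * avg_msg_kb * retention_days
            + quota_kb * 0.012
            + quota_kb * 0.03)

quota = 10 * GB
ws = whitespace_kb(200, 75)
ri = recoverable_items_kb(200, 75, 14, quota)
total_gb = (quota + ws + ri) / GB

print(f"whitespace: {ws / MB:.2f}MB")             # 14.65MB
print(f"recoverable items: {ri / MB:.2f}MB")      # 635.16MB
print(f"mailbox size on disk: {total_gb:.2f}GB")  # 10.63GB
```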

    Content indexing

    The space required for files related to the content indexing process can be estimated as 20% of the database size.

    Per-Database Content Indexing Space = database size x 0.20

    In addition, you must size for one extra content index (i.e., an additional 20% of one of the mailbox databases on the volume) in order to allow content indexing maintenance tasks (specifically the master merge process) to complete. The best way to express the master merge space requirement is to take the average database file size across all databases on a volume and add one database’s worth of disk consumption to the calculation when determining the per-volume content indexing space requirement:

    Per-Volume Content Indexing Space = (average database size x (databases on the volume + 1) x 0.20)

    As a simple example, if we had 2 mailbox databases on a single volume and each database consumed 100GB of space, we would compute the per-volume content indexing space requirement like this:

    100GB database size x (2 databases + 1) x 0.20 = 60GB
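    The per-volume content indexing formula can be expressed as a tiny Python helper (the function name is ours):

```python
# Per-volume content indexing space: 20% of each database, plus one
# extra database's worth of index for master merge headroom.

def ci_space_gb(avg_db_size_gb, dbs_per_volume):
    return avg_db_size_gb * (dbs_per_volume + 1) * 0.20

# Two 100GB databases on a volume -> 60GB of content indexing space
print(f"{ci_space_gb(100, 2):.0f}GB")  # 60GB
```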

    Log space

    The amount of space required for ESE transaction log files can be computed using the same method as Exchange 2010. You can find details on the process in the Exchange 2010 TechNet guidance. To summarize the process, you must first determine the base guideline for number of transaction logs generated per-user, per-day, using the following table. As in Exchange 2010, log files are 1MB in size, making the math for log capacity quite straightforward.

    Message profile (75 KB average message size) Number of transaction logs generated per day
    50 10
    100 20
    150 30
    200 40
    250 50
    300 60
    350 70
    400 80
    450 90
    500 100

    Once you have the appropriate value from the table which represents guidance for a 75KB average message size, you may need to adjust the value based on differences in the target average message size. Every time you double the average message size, you must increase the logs generated per day by an additional factor of 1.9. For example:

    Transaction logs at 200 messages/day with 150KB average message size = 40 logs/day (at 75KB average message size) x 1.9 = 76

    Transaction logs at 200 messages/day with 300KB average message size = 40 logs/day (at 75KB average message size) x (1.9 x 2) = 152
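    The scaling rule can be sketched in Python. Note that the article’s examples apply the 1.9 factor linearly per doubling (1.9 x number of doublings, as in the 300KB example above) rather than compounding it; the function below follows the article. The table and function names are ours.

```python
import math

# Logs/day scaling with average message size, as applied in the
# article's examples: the 75KB baseline figure is multiplied by
# 1.9 x (number of doublings of the average message size).

BASE_LOGS_PER_DAY = {50: 10, 100: 20, 150: 30, 200: 40, 250: 50,
                     300: 60, 350: 70, 400: 80, 450: 90, 500: 100}

def logs_per_day(profile, avg_msg_kb=75):
    base = BASE_LOGS_PER_DAY[profile]
    doublings = math.log2(avg_msg_kb / 75)
    return base if doublings <= 0 else base * 1.9 * doublings

print(logs_per_day(200))       # 40 (75KB baseline)
print(logs_per_day(200, 150))  # 76.0
print(logs_per_day(200, 300))  # 152.0
```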

    While daily log volume is interesting, it doesn’t represent the entire requirement for log capacity. If traditional backups are being used, logs will remain on disk for the interval between full backups. When mailboxes are moved, the volume of change applied to the target database results in a significant increase in the number of logs generated that day. In a solution where Exchange native data protection is in use (i.e., you aren’t using traditional backups), logs will not be truncated if a mailbox database copy fails or if an entire server is unreachable, unless an administrator intervenes. There are many factors to consider when sizing for required log capacity, and it is certainly worth spending some time with the Exchange 2010 TechNet guidance mentioned earlier to fully understand these factors before proceeding. Thinking about our example scenario, we can consider the log space required per database if we estimate the number of users per database at 65. We will also assume that 1% of our users are moved per week (all on a single day), and that we will allocate enough space to support 3 days of logs in the case of failed copies or servers.

    Log Capacity to Support 3 Days of Truncation Failure = (65 mailboxes/database x 40 logs/day x 1MB log size) x 3 days = 7.62GB

    Log Capacity to Support 1% mailbox moves per week = 65 mailboxes/database x 0.01 x 10.63GB mailbox size = 6.91GB

    Total Local Capacity Required per Database = 7.62GB + 6.91GB = 14.53GB
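    The per-database log capacity calculation above, sketched in Python (function name and default parameters are ours; the inputs are the worked example’s):

```python
# Per-database log capacity: 3 days of log retention for truncation
# failures plus space for 1% of mailboxes being moved in a day.

def log_capacity_gb(users_per_db, logs_per_day, mailbox_gb,
                    move_fraction=0.01, failure_days=3, log_mb=1):
    truncation_gb = users_per_db * logs_per_day * log_mb * failure_days / 1024
    move_gb = users_per_db * move_fraction * mailbox_gb
    return truncation_gb + move_gb

print(f"{log_capacity_gb(65, 40, 10.63):.2f}GB")  # 14.53GB
```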

    Putting all of the capacity requirements together

    The easiest way to think about sizing for storage capacity without having a calculator tool available is to make some assumptions up front about the servers and storage that will be used. Within the product group, we are big fans of 2U commodity server platforms with ~12 large form-factor drive bays in the chassis. This allows for a 2 drive RAID array for the operating system, Exchange install path, transport queue database, and other ancillary files, and ~10 remaining drives to use as mailbox database storage in a JBOD direct attached storage configuration with no RAID. Fill this server up with 4TB SATA or midline SAS drives, and you have a fantastic Exchange 2013 server. If you need even more storage, it’s quite easy to add an additional shelf of drives to the solution.

    Using the large deployment example and thinking about how we might size this on the commodity server platform, we can consider a server scaling unit that has a total of 24 large form-factor drive bays containing 4TB midline SAS drives. We will use 2 of those drives for the OS & Exchange, and the remaining drive bays will be used for Exchange mailbox database capacity. Let’s use 12 of those drive bays for databases – that leaves 10 remaining drive bays that could contain spares or remain empty. For this sizing exercise, let’s also plan for 4 databases per drive. Each of those drives has a formatted capacity of ~3725GB. The first step in figuring out the number of mailboxes per database is to look at overall capacity requirements for the mailboxes, content indexes, and required free space (which we will set to 5%).

    To calculate the maximum amount of space available for mailboxes, let’s apply a formula (note that this doesn’t consider space for logs – we will make sure that the volume will have enough space for logs later in the process). First, we can remove our required free space from the available storage on the drive:

    Available Space (excluding required free space) = Formatted capacity of the drive x (1 – free space)

    Then we can remove the space required for content indexing. As discussed above, the space required for content indexing will be 20% of the database size, with an additional 20% of one database for content indexing maintenance tasks. Given the additional 20% requirement, we can’t model the overall space requirement as a simple 20% of the remaining space on the volume. Instead we need to compute a new percentage that takes the number of databases per-volume into consideration.

    Per-Volume Content Indexing Space Factor = 0.20 x (databases per volume + 1) / databases per volume

    Now we can remove the space for content indexing from our available space on the volume:

    Available Space for Databases = Available Space (excluding required free space) / (1 + per-volume content indexing space factor)

    And we can then divide by the number of databases per-volume to get our maximum database size:

    Maximum Database Size = Available Space for Databases / databases per volume

    In our example scenario, we would obtain the following result:

    Maximum Database Size = (3725GB x (1 – 0.05)) / (1 + 0.25) / 4 databases per volume = 707.75GB

    Given this value, we can then calculate our maximum users per database (from a capacity perspective, as this may change when we evaluate the IO requirements):

    Maximum Users per Database = 707.75GB / 10.63GB mailbox size on disk = 66.58, round down to 66 users

    Let’s see if that number is actually reasonable given our 4 copy configuration. We are going to use 16-node DAGs for this deployment to take full advantage of the scalability and high-availability benefits of large DAGs. While we have many drives available on our selected hardware platform, we will be limited by the maximum of 50 database copies per-server in Exchange 2013. Considering this maximum and our desire to have 4 databases per volume, we can calculate the maximum number of drives for mailbox database usage as:

    Maximum Database Volumes = 50 maximum database copies per server / 4 databases per volume = 12.5, round down to 12 volumes

    With 12 database volumes and 4 database copies per-volume, we will have 48 total database copies per server.

    Databases per DAG = (16 servers x 48 database copies per server) / 4 copies per database = 192 databases

    With 66 users per database and 100,000 total users, we end up with the following required DAG count for the user population:

    Required DAGs = 100,000 users / (66 users per database x 192 databases per DAG) = 7.89 DAGs

    In this very large deployment, we are using a DAG as a unit of scale or “building block” (e.g. we perform capacity planning based on the number of DAGs required to meet demand, and we deploy an entire DAG when we need additional capacity), so we don’t intend to deploy a partial DAG. If we round up to 8 DAGs we can compute our final users per database count:

    Users per Database = 100,000 users / (8 DAGs x 192 databases per DAG) = 65.1, round down to 65 users

    With 65 users per-database, that means we will expect to consume the following space for mailbox databases:

    Estimated Database Size = 65 users x 10.63GB = 690.95GB
    Database Consumption / Volume = 690.95GB x 4 databases = 2763.8GB

    Using the formula mentioned earlier, we can compute our estimated content index consumption as well:

    690.95GB database size x (4 databases + 1) x 0.20 = 690.95GB

    You’ll recall that we computed transaction log space requirements earlier, and it turns out that we magically computed those values with the assumption that we would have 65 users per-database. What a pleasant coincidence! So we will need 14.53GB of space for transaction logs per-database, or to get a more useful result:

    Log Space Required / Volume = 14.53GB x 4 databases = 58.12GB

    To sum it up, we can estimate our total per-volume space utilization and make sure that we have plenty of room on our target 4TB drives:

    2763.8GB database consumption + 690.95GB content indexing + 58.12GB transaction logs = 3512.87GB consumed per volume, within the 3538.75GB available (3725GB x 0.95)

    Looks like our database volumes are sized perfectly!
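    Pulling the volume-level arithmetic together, here is a sketch that reproduces the numbers above (the constants are the example’s: 4TB drives at ~3725GB formatted, 5% free space, 4 databases per volume):

```python
# End-to-end capacity check for the example volume, including the
# 20%-plus-one-index content indexing rule.

FORMATTED_GB = 3725
FREE_FRACTION = 0.05
DBS_PER_VOLUME = 4

usable = FORMATTED_GB * (1 - FREE_FRACTION)                 # 3538.75GB
ci_factor = 0.20 * (DBS_PER_VOLUME + 1) / DBS_PER_VOLUME    # 0.25
max_db_gb = usable / (1 + ci_factor) / DBS_PER_VOLUME       # 707.75GB

# Final design: 65 users per database at 10.63GB on disk each
db_gb = 65 * 10.63                                          # 690.95GB
db_total = db_gb * DBS_PER_VOLUME                           # 2763.8GB
ci_total = db_gb * (DBS_PER_VOLUME + 1) * 0.20              # 690.95GB
log_total = 14.53 * DBS_PER_VOLUME                          # 58.12GB
used = db_total + ci_total + log_total

print(f"max database size: {max_db_gb:.2f}GB")
print(f"used {used:.2f}GB of {usable:.2f}GB usable")
```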

    IOPS requirements

    To determine the IOPS requirements for a database, we look at the number of users hosted on the database and consider the guidance provided in the following table to compute total required IOPS when the database is active or passive.

    Messages sent or received per mailbox per day Estimated IOPS per mailbox (Active or Passive)
    50 0.034
    100 0.067
    150 0.101
    200 0.134
    250 0.168
    300 0.201
    350 0.235
    400 0.268
    450 0.302
    500 0.335

    For example, with 50 users in a database, with an average message profile of 200, we would expect that database to require 50 x 0.134 = 6.7 transactional IOPS when the database is active, and 50 x 0.134 = 6.7 transactional IOPS when the database is passive. Don’t forget to consider database placement which will impact the number of databases with IOPS requirements on a given storage volume (which could be a single JBOD drive or might be a more complex storage configuration).

    Going back to our example scenario, we can evaluate the IOPS requirement of the solution, recalling that the average user profile in that deployment is the 200 message/day profile. We have 65 users per database and 4 databases per JBOD drive, so we can estimate our IOPS requirement in worst-case (all databases active) as:

    65 mailboxes x 4 databases per-drive x 0.134 IOPS/mailbox at 200 messages/day profile = ~34.84 IOPS per drive

    Midline SAS drives typically provide ~57.5 random IOPS (based on our own internal observations and benchmark tests), so we are well within design constraints when thinking about IOPS requirements.
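    The IOPS check above can be sketched directly from the table (the dictionary and function names are ours):

```python
# Worst-case transactional IOPS per JBOD drive (all copies active),
# using the per-mailbox IOPS table above.

IOPS_PER_MAILBOX = {50: 0.034, 100: 0.067, 150: 0.101, 200: 0.134,
                    250: 0.168, 300: 0.201, 350: 0.235, 400: 0.268,
                    450: 0.302, 500: 0.335}

def iops_per_drive(users_per_db, dbs_per_drive, profile):
    return users_per_db * dbs_per_drive * IOPS_PER_MAILBOX[profile]

demand = iops_per_drive(65, 4, 200)
print(f"{demand:.2f} IOPS per drive")  # ~34.84, vs ~57.5 for midline SAS
```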

    Storage bandwidth requirements

    While IOPS requirements are usually the primary storage throughput concern when designing an Exchange solution, it is possible to run up against bandwidth limitations with various types of storage subsystems. The IOPS sizing guidance above is looking specifically at transactional (somewhat random) IOPS and is ignoring the sequential IO portion of the workload. One place that sequential IO becomes a concern is with storage solutions that are running a large amount of sequential IO through a common channel. A common example of this type of load is the ongoing background database maintenance (BDM) which runs continuously on Exchange mailbox databases. While this BDM workload might not be significant for a few databases stored on a JBOD drive, it may become a concern if all of the mailbox database volumes are presented through a common iSCSI or Fibre Channel interface. In that case, the bandwidth of that common channel must be considered to ensure that the solution doesn’t bottleneck due to these IO patterns.

    In Exchange 2013, we expect to consume approximately 1MB/sec/database copy for BDM which is a significant reduction from Exchange 2010. This helps to enable the ability to store multiple mailbox databases on the same JBOD drive spindle, and will also help to avoid bottlenecks on networked storage deployments such as iSCSI. This bandwidth utilization is in addition to bandwidth consumed by the transactional IO activity associated with user and system workload processes, as well as storage bandwidth consumed by the log replication and replay process in a DAG.

    Transport storage requirements

    Since transport components (with the exception of the front-end transport component on the CAS role) are now part of the Mailbox role, we have included CPU and memory requirements for transport with the general Mailbox role requirements described later. Transport also has storage requirements associated with the queue database. These requirements, much like I described earlier for mailbox storage, consist of capacity factors and IO throughput factors.

    Transport storage capacity is driven by two needs: queuing (including shadow queuing) and Safety Net (which is the replacement for transport dumpster in this release). You can think of the transport storage capacity requirement as the sum of message content on disk in a worst-case scenario, consisting of three elements:

    • The current day’s message traffic, along with messages which exist on disk longer than normal expiration settings (like poison queue messages)
    • Queued messages waiting for delivery
    • Messages persisted in Safety Net in case they are required for redelivery

    Of course, all three of these factors are also impacted by shadow queuing in which a redundant copy of all messages is stored on another server. At this point, it would be a good idea to review the TechNet documentation on Transport High Availability if you aren’t familiar with the mechanics of shadow queuing and Safety Net.

    In order to figure out the messages per day that you expect to run through the system, you can look at the user count and messaging profile. Simply multiplying these together will give you a total daily mail volume, but it will be a bit higher than necessary since it is double counting messages that are sent within the organization (i.e. a message sent to a coworker will count towards the profile of the sending user as well as the profile of the receiving user, but it’s really just one message traversing the system). The simplest way to deal with that would be to ignore this fact and oversize transport, which will provide additional capacity for unexpected peaks in message traffic. An alternative way to determine daily message flow would be to evaluate performance counters within your existing messaging system.

    To determine the maximum size of the transport database, we can look at the entire system as a unit and then come up with a per-server value.

    Overall Daily Messages Traffic = number of users x message profile

    Overall Transport DB Size = average message size x overall daily message traffic x (1 + (percentage of messages queued x maximum queue days) + Safety Net hold days) x 2 copies for high availability

    Let’s use the 100,000 user sizing example again and size the transport database using the simple method.

    Overall Transport DB Size = 75KB x (100,000 users x 200 messages/day) x (1 + (50% x 2 maximum queue days) + 2 Safety Net hold days) x 2 copies = 11,444GB

    In our example scenario, we have 8 DAGs, each containing 16 nodes, and we are designing to handle double node failures in each DAG. This means that in a worst-case failure event we would have 112 servers online, with 2 failed servers in each DAG. We can use this value to determine a per-server transport DB size:

    Transport DB Size per Server = 11,444GB / 112 servers = 102.2GB
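    The transport database sizing above, as a short Python sketch (variable names are ours; the inputs are the worked example’s: 50% of messages queued for up to 2 days, 2 Safety Net hold days, doubled for shadow copies):

```python
# Transport queue database sizing for 100,000 users at 200
# messages/day with a 75KB average message size.

users, msgs_per_day, avg_kb = 100_000, 200, 75
queued_fraction, max_queue_days, safety_net_days = 0.5, 2, 2

daily_messages = users * msgs_per_day
overall_gb = (avg_kb * daily_messages
              * (1 + queued_fraction * max_queue_days + safety_net_days)
              * 2) / 1024 / 1024          # x2 for shadow copies

survivors = 8 * (16 - 2)                  # 8 DAGs, double failures = 112
print(f"overall: {overall_gb:.0f}GB, per server: {overall_gb / survivors:.1f}GB")
```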

    Sizing for transport IO throughput requirements is actually quite simple. Transport has taken advantage of many of the IO reduction changes to the ESE database that have been made in recent Exchange releases. As a result, the number of IOPS required to support transport is significantly lower. In the internal deployment we used to produce this sizing guidance, we see approximately 1 DB write IO per message and virtually no DB read IO, with an average message size of ~75KB. We expect that as average message size increases, the amount of transport IO required to support delivery and queuing would increase. We do not currently have specific guidance on what that curve looks like, but it is an area of active investigation. In the meantime, our best practices guidance for the transport database is to leave it in the Exchange install path (likely on the OS drive) and ensure that the drive supporting that directory path is using a protected write cache disk controller, set to 100% write cache if the controller allows optimization of read/write cache settings. The write cache allows transport database log IO to become effectively “free” and allows transport to handle a much higher level of throughput.

    Processor requirements

    Once we have our storage requirements figured out, we can move on to thinking about CPU. CPU sizing for the Mailbox role is done in terms of megacycles. A megacycle is a unit of processing work equal to one million CPU cycles. In very simplistic terms, you could think of a 1 MHz CPU performing a megacycle of work every second. Given the guidance provided below for megacycles required for active and passive users at peak, you can estimate the required processor configuration to meet the demands of an Exchange workload. Following are our recommendations on the estimated required megacycles for the various user profiles.

    Messages sent or received per mailbox per day | Mcycles per user, Active DB copy or Standalone (MBX only) | Mcycles per user, Active DB copy or Standalone (Multi-Role) | Mcycles per user, Passive DB copy
    50 | 2.13 | 2.93 | 0.69
    100 | 4.25 | 5.84 | 1.37
    150 | 6.38 | 8.77 | 2.06
    200 | 8.50 | 11.69 | 2.74
    250 | 10.63 | 14.62 | 3.43
    300 | 12.75 | 17.53 | 4.11
    350 | 14.88 | 20.46 | 4.80
    400 | 17.00 | 23.38 | 5.48
    450 | 19.13 | 26.30 | 6.17
    500 | 21.25 | 29.22 | 6.85

    The second column represents the estimated megacycles required on the Mailbox role server hosting the active copy of a user’s mailbox database. In a DAG configuration, the required megacycles for the user on each server hosting passive copies of that database can be found in the fourth column. If the solution is going to include multi-role (Mailbox+CAS) servers, use the value in the third column rather than the second, as it includes the additional CPU requirements for the CAS role.

    It is important to note that while many years ago you could make an assumption that a 500 MHz processor could perform roughly double the work per unit of time as a 250 MHz processor, clock speeds are no longer a reliable indicator of performance. The internal architecture of modern processors is different enough between manufacturers as well as within product lines of a single manufacturer that it requires an additional normalization step to determine the available processing power for a particular CPU. We recommend using the SPECint_rate2006 benchmark from the Standard Performance Evaluation Corporation.

    The baseline system used to generate this guidance was a Hewlett-Packard DL380p Gen8 server containing Intel Xeon E5-2650 2 GHz processors. The baseline system SPECint_rate2006 score is 540, or 33.75 per-core, given that the benchmarked server was configured with a total of 16 physical processor cores. Please note that this is a different baseline system than what was used to generate our Exchange 2010 guidance, so any tools or calculators that make assumptions based on the 2010 baseline system would not provide accurate results for sizing an Exchange 2013 solution.

    Using the same general methodology we have recommended in prior releases, you can determine the estimated available Exchange workload megacycles available on a different processor through the following process:

    1. Find the SPECint_rate2006 score for the processor that you intend to use for your Exchange solution. You can do this the hard way (described below) or use Scott Alexander’s fantastic Processor Query Tool to get the per-server score and processor core count for your hardware platform.
      1. On the website of the Standard Performance Evaluation Corporation, select Results, highlight CPU2006, and select Search all SPECint_rate2006 results.
      2. Under Simple Request, enter the search criteria for your target processor, for example Processor Matches E5-2630.
      3. Find the server and processor configuration you are interested in using (or if the exact combination is not available, find something as close as possible) and note the value in the Result column and the value in the # Cores column.
    2. Obtain the per-core SPECint_rate2006 score by dividing the value in the Result column by the value in the # Cores column. For example, in the case of the Hewlett-Packard DL380p Gen8 server with Intel Xeon E5-2630 processors (2.30GHz), the Result is 430 and the # Cores is 12, so the per-core value would be 430 / 12 = 35.83.
    3. To determine the estimated available Exchange workload megacycles on the target platform, use the following formula:

      Available Megacycles per-Core = (per-core SPECint_rate2006 score / 33.75 baseline per-core score) x 2,000 baseline megacycles per-core

      Using the example HP platform with E5-2630 processors mentioned previously, we would calculate the following result:

      (35.83 per-core score / 33.75) x 2,000 = 2,123 available megacycles per-core
      2,123 megacycles per-core x 12 cores = 25,479 available megacycles per-server
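    This normalization is easy to script. In the sketch below, the baseline constants come from the article (540 / 16 cores = 33.75 per core, treated as 2,000 megacycles per core); the function name is ours, and the per-core score is rounded to two decimals to match the worked example.

```python
# SPECint_rate2006 normalization against the Exchange 2013 baseline
# (HP DL380p Gen8, E5-2650 2GHz).

BASELINE_PER_CORE_SCORE = 33.75
BASELINE_MCYCLES_PER_CORE = 2000

def available_mcycles_per_server(spec_result, cores):
    per_core_score = round(spec_result / cores, 2)
    per_core_mcycles = (per_core_score / BASELINE_PER_CORE_SCORE
                        * BASELINE_MCYCLES_PER_CORE)
    return per_core_mcycles * cores

# E5-2630 example: result 430 on 12 cores -> ~25,479 mcycles/server
print(round(available_mcycles_per_server(430, 12)))  # 25479
```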

    Keep in mind that a good Exchange design should never plan to run servers at 100% of CPU capacity. In general, 80% CPU utilization in a failure scenario is a reasonable target for most customers. Given that this high CPU utilization occurs only during a failure scenario, servers in a highly available Exchange solution will often run at relatively low CPU utilization during normal operation. Additionally, there may be very good reasons to target a lower maximum CPU utilization, particularly in cases where unanticipated spikes in load may result in acute capacity issues.

    Going back to the example I used previously of 100,000 users with the 200 message/day profile, we can estimate the total required megacycles for the deployment. We know that there will be 4 database copies in the deployment, and that will help to calculate the passive megacycles required. We also know that this deployment will be using multi-role (Mailbox+CAS) servers. Given this information, we can calculate megacycle requirements as follows:

    100,000 users x ((11.69 mcycles per active mailbox) + (3 passive copies x 2.74 mcycles per passive mailbox)) = 1,991,000 total mcycles required

    You could then take that number and attempt to come up with a required server count. I would argue that it’s actually a much better practice to come up with a server count based on high availability requirements (taking into account how many component failures your design can handle in order to meet business requirements) and then ensure that those servers can meet CPU requirements in a worst-case failure scenario. You will either meet CPU requirements without any additional changes (if your server count is bound on another aspect of the sizing process), or you will adjust the server count (scale out), or you will adjust the server specification (scale up).

    Continuing with our hypothetical example, if we knew that the high availability requirements for the design of the 100,000 user example resulted in a maximum of 16 databases being active at any time out of 48 total database copies per server, and we know that there are 65 users per database, we can determine the per-server CPU requirements for the deployment.

    (16 databases x 65 mailboxes x 11.69 mcycles per active mailbox) + (32 databases x 65 mailboxes x 2.74 mcycles per passive mailbox) = 12,157.6 + 5,699.2 = 17,856.8 mcycles per server

    Using the processor configuration mentioned in the megacycle normalization section (E5-2630 2.3 GHz processors on an HP DL380p Gen8), we know that we have 25,479 available mcycles on the server, so we would estimate a peak average CPU in worst-case failure of:

    17,857 / 25,479 = 70.1%

    That is below our guidance of 80% maximum CPU utilization (in a worst-case failure scenario), so we would not consider the servers to be CPU bound in the design. In fact, we could consider adjusting the CPU selection to a cheaper option with reduced performance, getting us closer to a peak average CPU of 80% in the worst-case failure and reducing the cost of the overall solution.
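    The worst-case CPU check above, as a Python sketch (constant names are ours; the megacycle values are the article’s multi-role figures for the 200 message/day profile):

```python
# Worst-case per-server CPU check: 16 active and 32 passive database
# copies per server, 65 users each.

ACTIVE_MCYCLES_PER_USER = 11.69   # multi-role, active copy
PASSIVE_MCYCLES_PER_USER = 2.74

required = (16 * 65 * ACTIVE_MCYCLES_PER_USER
            + 32 * 65 * PASSIVE_MCYCLES_PER_USER)
available = 25479                 # normalized E5-2630 server

print(f"{required:,.1f} mcycles -> {required / available:.1%} peak CPU")
```

This prints roughly 17,856.8 mcycles and a 70.1% peak, matching the worked example.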

    Memory requirements

    To calculate memory per server, you will need to know the per-server user count (both active and passive users) as well as determine whether you will run the Mailbox role in isolation or deploy multi-role servers (Mailbox+CAS). Keep in mind that regardless of whether you deploy roles in isolation or deploy multi-role servers, the minimum amount of RAM on any Exchange 2013 server is 8GB.

    Memory on the Mailbox role is used for many purposes. As in prior releases, a significant amount of memory is used for ESE database cache and plays a large part in the reduction of disk IO in Exchange 2013. The new content indexing technology in Exchange 2013 also uses a large amount of memory. The remaining large consumers of memory are the various Exchange services that provide either transactional services to end-users or handle background processing of data. While each of these individual services may not use a significant amount of memory, the combined footprint of all Exchange services can be quite large.

    Following is our recommended amount of memory for the Mailbox role on a per mailbox basis that we expect to be used at peak.

    Messages sent or received per mailbox per day Mailbox role memory per active mailbox (MB)
    50 12
    100 24
    150 36
    200 48
    250 60
    300 72
    350 84
    400 96
    450 108
    500 120

    To determine the amount of memory that should be provisioned on a server, take the number of active mailboxes per-server in a worst-case failure and multiply by the value associated with the expected user profile. From there, round up to a value that makes sense from a purchasing perspective (i.e. it may be cheaper to configure 128GB of RAM compared to a smaller amount of RAM depending on slot options and memory module costs).

    Mailbox Memory per-server = (worst-case active database copies per-server x users per-database x memory per-active mailbox)

    For example, on a server with 48 database copies (16 active in worst-case failure), 65 users per-database, expecting the 200 profile, we would recommend:

    16 x 65 x 48MB = 48.75GB, round up to 64GB

    It’s important to note that the content indexing technology included with Exchange 2013 uses a relatively large amount of memory to allow both indexing and query processing to occur very quickly. This memory usage scales with the number of items indexed, meaning that as the number of total items stored on a Mailbox role server increases (for both active and passive copies), memory requirements for the content indexing processes will increase as well. In general, the guidance on memory sizing presented here assumes approximately 15% of the memory on the system will be available for the content indexing processes which means that with a 75KB average message size, we can accommodate mailbox sizes of 3GB at 50 message profile up to 32GB at the 500 message profile without adjusting the memory sizing. If your deployment will have an extremely small average message size or an extremely large average mailbox size, you may need to add additional memory to accommodate the content indexing processes.

    Multi-role server deployments will have an additional memory requirement beyond the amounts specified above. CAS memory is computed as a base memory requirement for the CAS components (2GB) plus additional memory that scales based on the expected workload. This overall CAS memory requirement on a multi-role server can be computed using the following formula:

    Per-Server CAS Memory = 2GB + (2GB x ((worst-case active mailboxes x (multi-role mcycles per mailbox – Mailbox-only mcycles per mailbox)) / available megacycles per-core))

    Essentially this is 2GB of memory for the base requirement, plus 2GB of memory for each processor core (or fractional processor core) serving active load at peak in a worst-case failure scenario. Reusing the example scenario, if I have 16 active databases per-server in a worst-case failure and my processor is providing 2123 mcycles per-core, I would need:

    2GB + (2GB x ((16 databases x 65 mailboxes x (11.69 – 8.50) mcycles) / 2,123 mcycles per-core)) = 2GB + (2GB x 1.56 cores) = 5.12GB

    If we add that to the memory requirement for the Mailbox role calculated above, our total memory requirement for the multi-role server would be:

    48.75GB for Mailbox + 5.12GB for CAS = 53.87GB, round up to 64GB
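    The combined multi-role memory calculation can be sketched as follows. The variable names are ours; the inputs come from the example (48MB per active mailbox at the 200 profile, 2,123 normalized megacycles per core), and the CAS figure is the 2GB base plus 2GB per possibly fractional core of active CAS load.

```python
# Multi-role memory sketch for the example server.

active_dbs, users_per_db = 16, 65
mbx_mb_per_user = 48
mcycles_per_core = 2123                 # normalized E5-2630 core
cas_mcycles_per_user = 11.69 - 8.50     # multi-role minus MBX-only

mbx_gb = active_dbs * users_per_db * mbx_mb_per_user / 1024
cas_cores = active_dbs * users_per_db * cas_mcycles_per_user / mcycles_per_core
cas_gb = 2 + 2 * cas_cores

total = mbx_gb + cas_gb
print(f"Mailbox {mbx_gb:.2f}GB + CAS {cas_gb:.2f}GB = {total:.2f}GB; "
      "round up to 64GB")
```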

    Regardless of whether you are considering a multi-role or a split-role deployment, it is important to ensure that each server has a minimum amount of memory for efficient use of the database cache. There are some scenarios that will produce a relatively small memory requirement from the memory calculations described above. We recommend comparing the per-server memory requirement you have calculated with the following table to ensure you meet the minimum database cache requirements. The guidance is based on total database copies per-server (both active and passive). If the value shown in this table is higher than your calculated per-server memory requirement, adjust your per-server memory requirement to meet the minimum listed in the table.

    Per-Server DB Copies    Minimum Physical Memory (GB)
    1-10                    8
    11-20                   10
    21-30                   12
    31-40                   14
    41-50                   16

    In our example scenario, we are deploying 48 database copies per-server, so the minimum physical memory to provide necessary database cache would be 16GB. Since our computed memory requirement based on per-user guidance including memory for the CAS role (53.87GB) was higher than the minimum of 16GB, we don’t need to make any further adjustments to accommodate database cache needs.
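The table lookup above lends itself to a simple sketch (the thresholds are exactly those in the table; the example values are from the scenario in this post):

```python
def min_memory_floor_gb(db_copies):
    """Minimum physical memory (GB) per the DB-copy table above (1-50 copies)."""
    if db_copies <= 10: return 8
    if db_copies <= 20: return 10
    if db_copies <= 30: return 12
    if db_copies <= 40: return 14
    return 16  # 41-50 copies

def adjusted_requirement_gb(calculated_gb, db_copies):
    # Take whichever is larger: the calculated requirement or the table minimum.
    return max(calculated_gb, min_memory_floor_gb(db_copies))

# Example scenario: 48 copies per server, 53.87GB calculated requirement.
# The calculated value already exceeds the 16GB floor, so no adjustment is needed.
print(adjusted_requirement_gb(53.87, 48))
```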

    Unified messaging

    With the new architecture of Exchange, Unified Messaging is now installed and ready to be used on every Mailbox and CAS server. The CPU and memory guidance provided here assumes some moderate UM utilization. In a deployment with significant UM utilization and very high call concurrency, additional sizing may need to be performed to provide the best possible user experience. As in Exchange 2010, we recommend a limit of 100 concurrent calls per server as the maximum possible UM concurrency, and scaling out the deployment if your sizing becomes bound by this limit. Additionally, voicemail transcription is a very CPU-intensive operation, and by design messages are only transcribed when there is enough available CPU on the machine. Each voicemail message requires 1 CPU core for the duration of the transcription operation; if that amount of CPU cannot be obtained, transcription is skipped. In deployments that anticipate a high amount of voicemail transcription concurrency, server configurations may need to be adjusted to increase CPU resources, or the number of users per server may need to be scaled back to leave more CPU available for voicemail transcription.

    Sizing and scaling the Client Access Server role

    In the case where you are going to place the Mailbox and CAS roles on separate servers, the process of sizing CAS is relatively straightforward. CAS sizing is primarily focused on CPU and memory requirements. There is some disk IO for logging purposes, but it is not significant enough to warrant specific sizing guidance.

    CAS CPU is sized as a ratio of Mailbox role CPU. Specifically, CAS requires 37.5% of the megacycles used to support active users on the Mailbox role. You can think of this as a 3:8 ratio (CAS CPU to active Mailbox CPU), compared to the 3:4 ratio we recommended in Exchange 2010. One way to compute this is to take the total active user megacycles required for the solution, take 37.5% of that, and then determine the required CAS server count based on high availability requirements and multi-site design constraints. For example, consider the 100,000 user example using the 200 message/day profile:

    Total CAS Required Mcycles = 100,000 users x 8.5 mcycles x 0.375 = 318,750 mcycles

    Assuming that we want to target a maximum CPU utilization of 80% and the servers we plan to deploy have 25,479 available megacycles, we can compute the required number of servers quite easily:

    CAS Servers Required = 318,750 mcycles ÷ (25,479 mcycles × 0.80) = 15.64, round up to 16 servers

    Obviously we would need to then consider whether the 16 required servers meet our high availability requirements considering the maximum CAS server failures that we must design for given business requirements, as well as the site configuration where some of the CAS servers may be in different sites handling different portions of the workload. Since we specified in our example scenario that we want to survive a double failure in the single site, we would increase our 16 CAS to 18 such that we could sustain 2 CAS failures and still handle the workload.
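The CAS CPU arithmetic above can be sketched as follows (all figures are from the worked example; the + 2 reflects the double-failure requirement):

```python
import math

users = 100_000
mcycles_per_user = 8.5   # 200 message/day profile, active Mailbox load
cas_ratio = 0.375        # 3:8 CAS-to-active-Mailbox CPU ratio
target_util = 0.80       # maximum target CPU utilization
server_mcycles = 25_479  # available megacycles per CAS server

cas_mcycles = users * mcycles_per_user * cas_ratio                     # 318,750
cas_servers = math.ceil(cas_mcycles / (server_mcycles * target_util))  # 16
cas_servers_ha = cas_servers + 2  # survive a double failure -> 18
print(cas_mcycles, cas_servers, cas_servers_ha)
```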

    To size memory, we will use the same formula that was used for Exchange 2010:

    Per-Server CAS Memory = 2GB + 2GB per physical processor core


    Using the example scenario we have been using, we can calculate the per-server CAS memory requirement as:

    Per-Server CAS Memory = 2GB + (2GB × (318,750 ÷ 16 ÷ 2,123) cores) = 2GB + (2GB × 9.39 cores) = 20.77GB

    In this example, 20.77GB would be the guidance for required CAS memory, but you would obviously round up to the next highest (or highest performing) memory configuration for the server platform: perhaps 24GB.
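Sketching the memory example (the 2,123 megacycles per core and the 16-server count come from the example scenario; the fractional core count is simply the per-server CAS megacycles divided by per-core megacycles):

```python
mcycles_per_core = 2_123
total_cas_mcycles = 318_750
cas_servers = 16

# Fractional CAS cores utilized per server at peak
cores_per_server = total_cas_mcycles / cas_servers / mcycles_per_core  # ~9.39
per_server_cas_gb = 2 + 2 * cores_per_server                           # ~20.77GB
print(round(per_server_cas_gb, 2))
```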

    Active Directory capacity for Exchange 2013

    Active Directory sizing remains the same as it was for Exchange 2010. As we gain more experience with production deployments we may adjust this in the future. For Exchange 2013, we recommend deploying a ratio of 1 Active Directory global catalog processor core for every 8 Mailbox role processor cores handling active load, assuming 64-bit global catalog servers:

    GC Cores Required = Active Mailbox Role Cores ÷ 8

    If we revisit our example scenario, we can easily calculate the required number of GC cores.

    GC Cores Required = (100,000 users × 8.5 mcycles) ÷ (2,123 mcycles per core × 8) = 50.05 cores

    Assuming that my Active Directory GCs are also deployed on the same server hardware configuration as my CAS & Mailbox role servers in the example scenario with 12 processor cores, then my GC server count would be:

    GC Servers Required = 50.05 ÷ 12 = 4.17, round up to 5 servers

    In order to sustain double failures, we would need to add 2 more GCs to this calculation, which would take us to 7 GC servers for the deployment.
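The GC arithmetic, sketched with the example’s figures (8.5 megacycles per user, 2,123 megacycles per core, 12 cores per GC server):

```python
import math

active_mailbox_mcycles = 100_000 * 8.5  # 850,000 active-load megacycles
mcycles_per_core = 2_123

gc_cores = active_mailbox_mcycles / mcycles_per_core / 8  # ~50 GC cores (1:8 ratio)
gc_servers = math.ceil(gc_cores / 12)                     # 5 servers at 12 cores each
gc_servers_ha = gc_servers + 2                            # 7 to survive a double failure
print(round(gc_cores, 2), gc_servers, gc_servers_ha)
```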

    As a best practice, we recommend sizing memory on the global catalog servers such that the entire NTDS.DIT database file can be contained in RAM. This will provide optimal query performance and a much better end-user experience for Exchange workloads.

    Hyperthreading: Wow, free processors!

    Turn it off. While modern implementations of simultaneous multithreading (SMT), also known as hyperthreading, can absolutely improve CPU throughput for most applications, the benefits to Exchange 2013 do not outweigh the negative impacts. There can be a significant impact to memory utilization on Exchange servers when hyperthreading is enabled due to the way the .NET server garbage collector allocates heaps. The server garbage collector looks at the total number of logical processors when an application starts up and allocates a heap per logical processor. This means that the memory usage at startup for one of our services using the server garbage collector will be close to double with hyperthreading turned on vs. turned off. This significant increase in memory, along with an analysis of the actual CPU throughput increase for Exchange 2013 workloads in internal lab tests, has led us to a best practice recommendation that hyperthreading be disabled for all Exchange 2013 servers.

    There’s an important caveat to this recommendation for customers who are virtualizing Exchange. Since the number of logical processors visible to a virtual machine is determined by the number of virtual CPUs allocated in the virtual machine configuration, hyperthreading will not have the same impact on memory utilization described above. It’s certainly acceptable to enable hyperthreading on physical hardware that is hosting Exchange virtual machines, but make sure that any capacity planning calculations for that hardware are based purely on physical CPUs. Follow the best practice recommendations of your hypervisor vendor on whether or not to enable hyperthreading. Note that the extra logical CPUs that are added when hyperthreading is enabled must not be considered when allocating virtual machine resources during the sizing and deployment process. For example, on a physical host running Hyper-V with 40 physical processor cores and hyperthreading enabled, 80 logical processor cores will be visible to the root operating system. If your Exchange design required 16-core servers, you could place 2 Exchange VMs on the physical host; those 2 VMs would consume 32 physical processor cores, leaving too few physical cores to host another 16-core VM (32 + 16 = 48, which is greater than 40).
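The placement arithmetic from the example, as a minimal sketch (the rule being illustrated: count only physical cores when hyperthreading is enabled on the host):

```python
physical_cores = 40  # hyperthreading exposes 80 logical cores; ignore them for sizing
vm_cores = 16        # cores required per Exchange VM in this design

vms_per_host = physical_cores // vm_cores  # 2 VMs; a third would need 48 > 40 cores
print(vms_per_host)
```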

    You are going to give me a calculator, right?

    Now that you have digested all of this guidance, you are probably thinking about how much more of a pain it will be to size a deployment compared to using the Mailbox Role Requirements Calculator for Exchange 2010. UPDATE: You can now read about and download the calculator from here.

    Hopefully that leaves you with enough information to begin to properly size your Exchange 2013 deployments. If you have further questions, you can obviously post comments here, but I’d also encourage you to consider attending one of the upcoming TechEd events. I’ll be at TechEd North America as well as TechEd Europe with a session specifically on this topic, and would be happy to answer your questions in person, either in the session or at the “Ask the Experts” event. Recordings of those sessions will also be posted to MSDN Channel9 after the events have concluded.

    Jeff Mealiffe
    Principal Program Manager Lead
    Exchange Customer Experience

    Updates

    • 4/3/2014: Updated CAS CPU and Memory sizing with SP1 guidance
    • 1/6/2015: Updated the hyperthreading section
  • Supporting Windows 8 Mail in your organization

    Windows 8 and Windows RT include a built-in email app named Mail (also referred to as Windows 8 Mail or the Windows 8 Mail app). The Windows 8 Mail app includes support for IMAP and Exchange ActiveSync (EAS) accounts.

    This article includes some key technical details of the Windows 8 Mail app. Use the information to help you support the use of Windows 8 Mail app in your organization. Read this article start to finish, or jump to the topic that interests you. Use the reference links throughout the article for more information.

    NOTE Mail, Calendar, People, and Messaging are apps that are built in to Windows 8 and Windows RT. Although this article discusses the Windows 8 Mail app, much of the information also applies to the Calendar, People, and Messaging apps. This is because, when connected to a server that supports Exchange ActiveSync, the Calendar and People apps may also display data that was downloaded over the Exchange ActiveSync connection.

    Protocol Support

    The Windows 8 Mail app lets users connect to any service provider that supports either of the following two protocols:

    • Exchange ActiveSync
    • IMAP/SMTP

    POP is not currently supported.

    Exchange ActiveSync

    Exchange ActiveSync can be used to sync data for email, contacts, and calendar. The Windows 8 Mail app supports EAS versions 2.5, 12.0, 12.1, and 14.0. For detailed protocol documentation, see Exchange Server Protocol Documents on MSDN.

    NOTE All Windows Communications apps (Mail, Calendar, and People) can use the data that is synchronized with Exchange ActiveSync. After a user connects to their account in the Windows 8 Mail app, their contacts and calendar data are available in the other Windows Communications Apps and vice versa.

    The Mail app does not support certificate-based authentication of clients for Exchange ActiveSync.

    IMAP/SMTP

    The Windows 8 Mail app supports standards-based IMAP and SMTP.

    IMAP/SMTP can be used to send and receive email only. Contacts and calendar data are not synchronized when IMAP/SMTP is used. Microsoft Exchange does not support Public Folders via IMAP. For more details about IMAP support in Exchange, see POP3 and IMAP4 (for Exchange 2010, see Understanding POP3 and IMAP4).

    Sync Configuration

    The Windows 8 Mail app can be configured to synchronize data at different times as follows:

    • Push email (default)
    • Polling at fixed intervals
    • Manually

    If a push email connection can’t be established, Mail will automatically switch to polling at fixed intervals.

    Push Email

    Push email requires either an Exchange ActiveSync account (all supported EAS versions support push) or an IMAP account on a server that supports the IDLE extension. Not all IMAP servers support IDLE, and IDLE is supported only for the Inbox folder.

    When a push connection can’t be established, Mail falls back to polling at 30-minute intervals. Push email over Exchange ActiveSync requires HTTP connections to be maintained for up to 60 minutes, and IMAP IDLE requires TCP connections to be maintained for up to 30 minutes.

    Account Setup Features

    Windows 8 and Windows RT users can add email accounts to the Windows 8 Mail app using the Settings charm. The Settings charm is always available on the right side of the Windows 8 and Windows RT screen. (For more visual details about Charms & the Windows 8 user interface, see Search, share & more.)

    NOTE This section provides an overview of Windows 8 Mail app account setup. For step-by-step procedures for setting up an account in the Windows 8 Mail app, see What else do I need to know? at the end of this guide.

    To make it as easy as possible to add accounts, account setup only prompts the user to enter the email address and password for the account they want to set up. From that data, Mail attempts to automatically configure the account as follows:

    • The domain portion of the email address is matched against a database of well-known service providers. If it’s a match, its settings are automatically configured.
    • The domain portion of the email address is used to execute Exchange ActiveSync Autodiscover processes. For detailed information, see Autodiscover HTTP Service Protocol Specification on MSDN.
    • If still not configured, the user is prompted to provide detailed settings for their server.

    Exchange ActiveSync

    Screenshot: Exchange ActiveSync configuration in Windows Mail
    Figure 1: Exchange ActiveSync (EAS) configuration in Windows Mail

    The full details needed to connect to an Exchange server are required only if Autodiscover fails.

    The information required to connect to a server via Exchange ActiveSync is:

    • Email address
    • Server address
    • Domain
    • Username
    • Password

    IMAP/SMTP

    Screenshot: IMAP/SMTP configuration in Windows Mail
    Figure 2: IMAP/SMTP configuration in Windows Mail

    The information required to connect to a server via IMAP/SMTP is:

    • Email address
    • Username
    • Password
    • IMAP email server
    • IMAP SSL (if your IMAP server requires SSL encryption)
    • IMAP port
    • SMTP email server
    • SMTP SSL (if your SMTP server requires SSL encryption)
    • SMTP port
    • Whether SMTP server requires authentication
    • Whether SMTP uses the same credentials as IMAP (If not, user must also provide SMTP credentials)

    Security Features

    Mail provides administrators with some level of security through Exchange ActiveSync policies. It doesn’t support any means of managing or securing PCs that are connected via IMAP.

    Policy Support

    Exchange ActiveSync devices can be managed using Exchange ActiveSync policies. Windows 8 Mail supports the following EAS policies:

    • Password required
    • Allow simple password
    • Minimum password length (to a maximum of 8 characters)
    • Number of complex characters in password (to a maximum of 2 characters)
    • Password history
    • Password expiration
    • Device encryption required (on Windows RT and editions of Windows that support BitLocker. See What's New in BitLocker for details about BitLocker improvements in Windows 8.)
    • Maximum number of failed attempts to unlock device
    • Maximum time of inactivity before locking

    Note that if AllowNonProvisionableDevices is set to false in an EAS policy and the policy contains settings that are not part of this list, the device won’t be able to connect to the Exchange server.

    Getting into Compliance

    Most of the policies listed above can be automatically enabled by Mail, but there are certain cases where the user has to take action first. These are:

    • Server requires device encryption:
      • User has a device that supports BitLocker but BitLocker isn’t enabled. User must manually enable BitLocker.
      • User has a Windows RT device that supports device encryption but it is suspended. User must reboot.
      • User has a Windows RT device that supports device encryption, but it isn’t enabled. User must sign into Windows with a Microsoft account.
    • An admin on this PC doesn’t have a strong password: All admin accounts must have a strong password before continuing.
    • The user’s account doesn’t have a strong password: User must set a strong password before continuing.

    ActiveSync Policy vs. Group Policy on domain-joined Windows 8 devices

    If a Windows 8 PC is joined to an Active Directory domain and controlled by Group Policy, there may be conflicting policy settings between Group Policy and an Exchange ActiveSync policy. In the event of any conflict, the strictest rule in either policy takes precedence. The only exception is password complexity rules for domain accounts. Group policy rules for password complexity (length, expiry, history, number of complex characters) take precedence over Exchange ActiveSync policies – even if group policy rules for password complexity are less strict than Exchange ActiveSync rules, the domain account will be deemed in compliance with Exchange ActiveSync policy.

    Remote Wipe

    Mail supports the Exchange ActiveSync remote wipe directive, but unlike Windows Phones, the data deleted by this directive is scoped to the specified Exchange ActiveSync account. The user's personal data is not deleted. For example, if a user has an Outlook.com account for personal use and a Contoso.com account for work use, a remote wipe directive from the Contoso.com server would impact Windows 8 and Windows Phone 7 as follows:

    Data                                    Windows Phone 7  Windows 8 Mail
    Contoso.com email                       Deleted          Deleted
    Contoso.com contacts                    Deleted          Deleted
    Contoso.com calendars                   Deleted          Deleted
    Outlook.com email                       Deleted          Not deleted
    Outlook.com contacts                    Deleted          Not deleted
    Outlook.com calendars                   Deleted          Not deleted
    Other documents, files, pictures, etc.  Deleted          Not deleted

    Account Roaming

    To make it as easy as possible for users to have all of their accounts set up on all of their devices, Windows 8 uploads vital account information to the user’s Microsoft account. This information includes email address, server, server settings, and password. When a user signs into a new PC with their Microsoft account, their email accounts are automatically set up for them.

    Passwords are not uploaded from a PC for any accounts which are controlled by any Exchange ActiveSync policies. Users will have to enter their password to begin syncing a policy-controlled account on a new PC.

    Microsoft Accounts

    Users are required to have a Microsoft account (formerly known as Windows Live ID) to use the Windows Communications apps. This will usually be the Microsoft account the user signed in to Windows with; if the user did not sign in with a Microsoft account, they will be prompted to provide one before proceeding.

    Microsoft accounts will automatically sync to Microsoft services using Exchange ActiveSync 14.0 when Mail starts. This will synchronize:

        • Email, if the user’s Microsoft account is also their Hotmail or Outlook.com account
        • Contacts from Windows Live
        • Calendar events

    If the user’s Microsoft account is not an Outlook.com or Hotmail account (for example, dave@contoso.com), Mail will prompt the user to provide the password for their email account, which will be added automatically.

    Data Consumption

    By default, Mail only downloads the last two weeks of email. This is user configurable and can potentially download the user’s entire mailbox. For Exchange ActiveSync accounts, all contacts are downloaded, and calendar events are downloaded only from three months behind the current date to 18 months ahead.

    Additionally, messages are only partially downloaded to reduce bandwidth use as follows:

          • Message bodies are truncated to the first 100KB (20KB on metered networks). For more details see Engineering Windows 8 for mobile networks.
          • Attachments are not downloaded automatically.

    Embedded images in email messages are downloaded on-demand as the user reads them, and attachments are downloaded on-demand as the user attempts to open them.

    By default, Mail only downloads the user’s Inbox and Sent folders. Other folders are downloaded once the user accesses them for the first time.

    Mail does not enforce any limits on how many attachments users can send or how large they can be.

    Limitations

    The following features are currently not supported by Mail:

    • Mailbox connections using POP: only IMAP and EAS are supported.

      (Note, this does not mean that Windows 8 does not support POP3. This post is about the Windows 8 Mail app. )

    • Servers that require self-signed certificates: Users can work around the self-signed certificate limitation by manually installing the certificate on their Windows 8 or Windows RT device. For additional information about the self-signed certificates, see Self-Signed Certificates section below.

    • Opaque-Signed and Encrypted S/MIME messages: When S/MIME messages are received, Windows 8 Mail displays an email item with a message body that begins with “This encrypted message can’t be displayed.”

      To view email items in the S/MIME format, users must open the message using Outlook Web App, Microsoft Outlook, or another email program that supports S/MIME messages. For more information, see Opaque-Signed and Encrypted S/MIME Message on MSDN.

    Self-Signed Certificates

    Users may experience connectivity errors when trying to connect to Exchange servers that require self-signed certificates. The user may receive the following error messages:

    Unable to connect. Ensure the information entered is correct.

    <Email address> is unavailable

    NOTE This issue may occur because the Mail app cannot connect to Exchange by using self-signed certificates.

    Consider the following options to resolve this issue.

      1. Option 1: Install a certificate that is signed by a Microsoft-trusted root certification authority (CA) on the server

        This enables Exchange to work for all clients without prompting. For more information about trusted root CAs, see TechNet.

      2. Option 2: Install a server’s self-signed certificate on a device

        This enables Exchange to work for Windows 8 devices that have the certificate installed.

    Note To install a self-signed certificate for a domain’s certification authority, the administrator must provide a certificate file (.cer). The certificate can be installed to the trusted root certificate authority store for either of the following options:

    • For the current user This option does not require admin rights but must be completed for each user on the device.
    • For the local device This option requires administrator rights and needs to be done only one time for a device.

    The user or the system administrator can use the .cer file to install the certificate. To do this, use one of the following methods:

    • Command-line tool

      At an elevated command prompt, run the following command:

      certutil.exe -f -addstore root <name_of_certificatefile>.cer

      NOTE The command installs the certificate for all users on the device.

    • User interface

      1. Double-click the certificate file. A certificate dialog opens.
      2. Click Install Certificate. A Certificate Import Wizard window opens.
      3. Select the option to install the certificate for only the current user or for the local device.
      4. Select Place all certificates in the following store
      5. Click Browse to open the store selection dialog. Select Trusted Root Certification Authorities.
      6. Select the store, and then click OK. You are returned to the Certificate Import Wizard dialog, and the certificate store and the certificate to be installed into that store are displayed.

    Troubleshooting Windows 8 Mail Client Connectivity

    If Windows 8 Mail users can't successfully connect to their accounts, consider the following:

    • Verify that the user is using the latest version of the Windows 8 Mail app. A user can check for updates to the Windows 8 Mail app by doing the following: from the Start screen, go to Store > Settings > App updates > Check for updates.
    • The user should wait a few minutes and try again.
    • If the account is a cloud-based email account that requires registration (for example, a Microsoft Office 365 account), the user must register their account before they can set it up in Windows 8 Mail. Office 365 users register their account when they sign in to Office 365 for the first time. Other users register their account when they sign in using their Microsoft account or Outlook Web App.

    TIP If the user hasn’t registered their account, Windows 8 Mail displays the following message:
    “We couldn’t find the settings for <email address>. Provide us with more info and we’ll try connecting again.”

    For information about signing into Outlook Web App or the Office 365 Portal, see Sign In to Outlook Web App.

    After the user signs in to their account using Outlook Web App, they should sign out, and then try to connect using Windows 8 Mail.

    What else do I need to know?

    Updates

    • 11/26/2012: Updated info about AllowNonProvisionableDevices setting in EAS policies.
    • 11/27/2012: Added links to EAS policy documentation.
    • 11/27/2012: Added info about Public Folder support in IMAP and link to IMAP documentation.
    • 12/3/2012: Added link to Building the Mail app on the Building Windows 8 blog.
    • 12/21/2012: Added links to KB 2784275, 2792112 and 2464593.
    • 2/20/2013: Added note about certificate-based authentication of clients for Exchange ActiveSync not being supported.
  • Resolving WinRM errors and Exchange 2010 Management tools startup failures

    EDIT 12/10/2010: Added a permissions section.

    As was discussed in the previous (related) blog post "Troubleshooting Exchange 2010 Management Tools startup issues", in Exchange 2010 the Management tools are dependent on IIS. We have seen situations where the management tool connection to the target Exchange server can fail, and the error that is returned can be difficult to troubleshoot. This generally (but not always) happens when Exchange 2010 is installed on an IIS server that is already in service, or when changes are made to the IIS server settings after Exchange is installed. These changes are usually made when the IIS administrator is attempting to "tighten up" IIS security by editing the Default Web Site or PowerShell vdir settings.

    The situation is further complicated by the fact that some of the errors presented have similar wording; most seem to originate with WinRM (Windows Remote Management), and in some cases different root problems can produce the exact same error message. In other words, depending on how knowledgeable you are with these errors, troubleshooting them is all around... not much fun.

    I was approached by a good friend of mine and he asked what I thought we could do to make these errors a little easier to troubleshoot. I was studying PowerShell and PowerShell programming at the time (I just happened to be reading Windows PowerShell for Exchange Server 2007 SP1), and I thought that this would be a perfect opportunity to try and apply what I was learning.

    This is the result.

    Introducing the Exchange Management Troubleshooter (or EMTshooter for short).

    What it does:

    The EMTshooter runs on the local (target) Exchange server and attempts to identify potential problems with the management tools' connection to it.

    The troubleshooter runs in 2 stages. First, it will look at the IIS Default Web Site, the PowerShell vdir, and other critical areas to identify known causes of connection problems. If it identifies a problem with one of the pre-checks, it will make a recommendation for resolving it. If the pre-checks pass, the troubleshooter will go ahead and try to connect to the server in the exact same way that the management tools would. If that connection attempt still results in a WinRM-style error, the troubleshooter will attempt to compare that error to a list of stored strings taken from the related support cases we have seen. If a match is found, the troubleshooter will display the known causes of that error in the CMD window. Here is an example of how this might look:

    When I was designing the troubleshooter, I could have just written a little error lookup tool that handed over the appropriate content for the error you were getting, but I felt that was not as robust a solution as I was aiming for (and not much of a learning experience for me). So the tool runs active pre-checks before moving on to the error look-up. The number of pre-checks it can run depends on the flavor of OS you are running and the options you have installed, such as WMI Compatibility.

    Basically, I have taken all of the documentation that has been created on these errors to date, and created a tool that will make the information available to you based on the error or problem it detects. Hopefully this will cut down on the amount of time it takes to resolve those problems.

    Event reporting:

    When you run the EMTshooter it will log events in the event log. All results that are displayed in the CMD window are also logged in the event log for record keeping.

    Events are logged to the Microsoft-Exchange-Troubleshooters/Operational event log and are pretty self-explanatory.

    Things to remember:

    Depending on your current settings, you may need to adjust the execution policy on your computer to run the troubleshooter, using:

    Set-ExecutionPolicy RemoteSigned

    Or

    Set-ExecutionPolicy Unrestricted

    Remember to set it back to your normal settings after running the troubleshooter.

    This version of the troubleshooter needs to run on the Exchange Server that the management tools are failing to connect to. While our final goal is that the troubleshooter will be able to run anywhere the Exchange Management tools are installed, the tool isn't quite there yet.

    We have seen instances where corruption in the PowerShell vdir or in IIS itself has resulted in errors that seemed to be caused by something else. For instance, we worked on a server that had an error that indicated a problem with the PowerShell vdir network path. But the path was correct. Then we noticed that the PowerShell vdir was missing all its modules, and quite a few other things. Somehow the PowerShell vdir on that Exchange Server had gotten severely... um... modified beyond repair. WinRM was returning the best error it could, and the troubleshooter took that error and listed the causes. None of which solved the problem. So be aware that there are scenarios that even this troubleshooter cannot help at this time.

    The troubleshooter is still a bit rough around the edges, and we plan to improve and expand its capabilities in the future. We also hope to be able to dig a little deeper into the PowerShell vdir settings as time goes on. Also note that the troubleshooter will NOT make any modification to your IIS configuration without explicitly asking you first.

    Permissions required:

    In order to run the troubleshooter, the user must have the right to log on locally to the Exchange server (due to the local nature of the troubleshooter at this time) and permissions to run Windows PowerShell.

    Installing the troubleshooter:

    First, you will need to download the troubleshooter ZIP file, which you can find here.

    Installing the EMTshooter is pretty easy. Just drop the 4 files from the ZIP file into 1 folder, rename them to .ps1 and run EMTshooter.ps1 from a normal (and local) PowerShell window. I personally just created a shortcut for it on my desktop with the following properties:

    C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command ". 'C:\EMTshooter\EMTshooter.ps1'"

    However, as most users probably won't run this more than a few times you might not need or want an icon. Just remember that EMTshooter.ps1 is the main script to run.

    Providing feedback:

    As I mentioned before, the troubleshooter is still a work in progress. If you wish to provide feedback on it, please post a comment to this blog post. I will be monitoring it and replying as time allows, and also making updates to the troubleshooter if needed. If you run into errors that are not covered by the troubleshooter, please run the troubleshooter, reproduce the error through it and send me the transcript.txt file (you will find it in the folder where the 4 scripts have been placed), along with what you did to resolve the error (if the problem has been resolved). My email is sbryant AT Microsoft DOT com.

    Errors currently covered:

    • Connecting to remote server failed with the following error message : The WinRM client cannot process the request. It cannot determine the content type of the HTTP response from the destination computer. The content type is absent or invalid.
    • Connecting to remote server failed with the following error message: The connection to the specified remote host was refused. Verify that the WS-Management service is running on the remote host and configured to listen for requests on the correct port and HTTP URL.
    • Connecting to remote server failed with the following error message: The WinRM client received an HTTP server error status (500), but the remote service did not include any other information about the cause of the failure. For more information, see the about_Remote_Troubleshooting Help topic. It was running the command 'Discover-ExchangeServer -UseWIA $true -SuppressError $true'.
    • Connecting to remote server failed with the following error message : The WinRM client received an HTTP status code of 403 from the remote WS-Management service.
    • Connecting to the remote server failed with the following error message: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol.
    • Connecting to remote server failed with the following error message : The client cannot connect to the destination specified in the request. Verify that the service on the destination is running and is accepting requests. Consult the logs and documentation for the WS-Management service running on the destination, most commonly IIS or WinRM. If the destination is the WinRM service, run the following command on the destination to analyze and configure the WinRM service:
    • Connecting to remote server failed with the following error message : The WS-Management service does not support the request.
    • Connecting to remote server failed with the following error message : The WinRM client cannot process the request. The WinRM client tried to use Kerberos authentication mechanism, but the destination computer

    - Steve Bryant

  • Exchange 2010 Server Role Requirements Calculator

    EDIT: This post has been updated on 5/24/2013 for the new version of calculator. For the list of latest major changes, please see THIS.

    Overview

    The Exchange Mailbox server role is arguably one of the most important roles within an Exchange deployment, for it stores the data that users ultimately access on a daily basis. Therefore, ensuring that you design the Mailbox server role correctly is critical to the success of your deployment.

    With Exchange 2010 you can deploy a solution that leverages mailbox resiliency and has multiple database copies deployed across datacenters, implements single item recovery for data recovery, and has the flexibility in storage design to allow you to deploy on storage area networks utilizing fibre-channel or SATA class disks or on direct attached storage utilizing SAS or SATA class disks with or without RAID protection. But, in order to design your solution, you need to understand the following criteria:

    • User profile - the message profile, the mailbox size, and the number of users
    • High availability architecture - the number of database copies you plan to deploy, whether the solution will be site resilient, and the desired number of mailbox servers
    • Server's CPU platform
    • Storage architecture - the disk capacity / type and storage solution
    • Backup architecture - whether to use hardware or software VSS and the frequency of the backups, or leverage the Exchange native data protection features
    • Network architecture - the utilization, throughput, and latency aspects

    Previous versions of Exchange were somewhat rigid in terms of the choices you had in designing your mailbox server role. The architectural flexibility in Exchange 2010 allows you the freedom to design the solution to meet your needs. Prior to making any decisions, please review the relevant planning topics in the Exchange 2010 Online Help.

    After you have determined the design you would like to implement, you can follow the steps in the Exchange 2010 Mailbox Server Role Design Example article within the Exchange 2010 Online Help to calculate your solution's CPU, memory, and storage requirements, or you can leverage the Exchange 2010 Mailbox Server Role Requirements Calculator.

    The calculator is broken out into the following sections (worksheets):

    • Input
    • Role Requirements
    • Activation Scenarios
    • Distribution
    • LUN Requirements
    • Backup Requirements
    • Log Replication Requirements
    • Storage Design

    Important: The data points provided in the calculator are an example configuration. As such, any data points entered into the Input worksheet are specific to that particular configuration and do not apply to other configurations. Please ensure you are using the correct data points for your design.

    Input

    When you launch the Exchange 2010 Mailbox Server Role Requirements Calculator, you are presented with the Input worksheet. This worksheet is broken down into 5 key areas. This section is where you enter all the relevant information regarding your intended design, so that the calculator can generate what you need in order to achieve it.

    Note: There are many input factors that need to be accounted for before you can design your solution. Each input factor is briefly listed below; there are additional notes within the calculator that explain them in more detail.

    Environment Configuration

    Within Step 1 you will enter in the appropriate information concerning your messaging environment's configuration - the high availability architecture and database copy configuration, the data and I/O configuration, and CPU inputs.

    Note: For optimal sizing, choose a multiple of the total number of database copies you have selected for the number of mailbox servers.

    Exchange Environment Configuration

    1. What server architecture are you deploying for your global catalogs? You can deploy either the 32-bit or 64-bit architecture for your Active Directory servers. The architecture you deploy will affect your core ratio planning. For more information, please see http://technet.microsoft.com/en-us/library/dd346701.aspx.
    2. Do these servers only have the mailbox server role installed? Having the Hub Transport and Client Access server roles installed along with the mailbox server role affects your design in the areas of load balancing client requests, memory utilization, and CPU utilization.
    3. Will these servers be deployed as guest machines in a virtualized environment? There is CPU overhead that must be accounted for in the design when deploying guest machines. For Hyper-V deployments the overhead is about 10%. Check with your hypervisor vendor to determine their overhead and adjust the “Hypervisor CPU Adjustment Factor” accordingly.
    4. Are you deploying a database availability group (DAG)? Deploying the solution as a DAG provides you additional flexibility and resiliency choices, like having multiple mailbox database copies, leveraging flexible mailbox protection features in lieu of traditional backups, and flexibility in your storage architecture (e.g. RAID or JBOD).
    5. How many mailbox servers are you going to deploy within the primary datacenter? If you enter more than a single server (remember a DAG requires at least two and can support a maximum of 16), the calculator will evenly distribute the user mailboxes across the total number of mailbox servers and make performance and capacity recommendations for each server, as well as for the entire environment. As for the secondary datacenter, the calculator will determine the number of mailbox servers you need to deploy there based on the requirements (number of databases, number of copies, etc.).
    6. How many DAGs are you planning to deploy in the environment? If you enter more than a single DAG, the calculator will distribute the user mailboxes across the total number of DAGs and make performance and capacity recommendations for each server and each DAG, as well as for the entire environment.
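The even-distribution behavior described in the two questions above can be sketched in a few lines of Python. This is illustrative only; the function and its name are not part of the calculator, which is a spreadsheet.

```python
# Illustrative sketch (not the calculator's code): mailboxes are split
# evenly first across DAGs, then across the Mailbox servers in each DAG.

def distribute(total_mailboxes: int, dags: int, servers_per_dag: int):
    """Return (mailboxes per DAG, mailboxes per server)."""
    per_dag = total_mailboxes / dags
    per_server = per_dag / servers_per_dag
    return per_dag, per_server

# 24,000 mailboxes across 2 DAGs of 6 servers each:
per_dag, per_server = distribute(24000, dags=2, servers_per_dag=6)
print(per_dag, per_server)  # 12000.0 2000.0
```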

    Site Resilience Configuration

    1. Are you deploying the DAG in a site resilient configuration? A DAG can be stretched across 2 or more datacenters (though the calculator only allows for 2 datacenters) without requiring the AD site or network subnet to be stretched.
    2. What user distribution model will you be leveraging in your site resilient architecture? When planning a site resilience model with Exchange 2010, keep in mind there are two variables that need to be considered: namespace model and user distribution model. For the namespace or datacenter model, Exchange 2010 requires both datacenters to be in an Active/Active configuration. This means that both datacenters participating in the DAG solution must have active, reachable namespaces and have the ability to support active load at any time. For the user distribution model, the design can support both Active/Passive and Active/Active user distribution. The calculator supports three scenarios for these user distribution models:
      1. Active/Passive User Distribution Model - An Active/Passive user distribution architecture simply has database copies deployed in the secondary datacenter, but no active mailboxes are hosted there and no database copies will be activated there during normal runtime operations. However, the datacenter supports both single cross-datacenter database *overs, and full datacenter activation.
      2. Active/Active User Distribution Model - An Active/Active user distribution architecture has the user population dispersed across both datacenters (usually evenly), with each datacenter being the primary datacenter for its specific user population.  In the event of a failure, the user population can be activated in the secondary datacenter (either via cross-datacenter single database *over or via full datacenter activation).  There are two types of Active/Active user distribution models:
        1. Active/Active (Single DAG) - This model stretches a DAG across the two datacenters and has active mailboxes located in each datacenter.  A corresponding passive copy is located in the alternate datacenter.  This scenario potentially has a single point of failure: the WAN connection.  Loss of the WAN connection will result in the mailbox servers in one of the datacenters going into a failed state from a failover cluster perspective (due to loss of quorum):
          ---DC1---           ---DC2---
          DAG1                DAG1
          Active Copies       Passive Copies
          Passive Copies      Active Copies
        2. Active/Active (Multiple DAGs) - This model leverages multiple DAGs to remove single points of failure (e.g., the WAN).  In this model, there are at least two DAGs, with each DAG having its active copies in the alternate datacenter:
          ---DC1---                        ---DC2---
          DAG-A (Active/Passive Copies)    DAG-A (Passive Copies)
          DAG-B (Passive Copies)           DAG-B (Active/Passive Copies)
    3. In your site resilient architecture, how far behind can you get in terms of log shipping between datacenters? The effect of the RPO is to evaluate the non-contiguous peak hours (defined in Step 5), say 8am and 4pm, and determine the resulting throughput requirement, assuming that you can take the time between 8 and 4 to catch up (within the specified RPO, of course). Allowing replication to get behind has two outcomes: (1) Active Manager is less likely to choose a database copy that has a high copy queue length (unless no more viable alternative is available); (2) if the copy queue length is greater than the target server's AutoDatabaseMountDial setting, the database will not automatically mount once activated, and manually mounting that database will result in the loss of data that had not been copied.
    4. Will you activation block the mailbox servers in the secondary datacenter? In certain situations (e.g. highly utilized network connection), you may want to control whether cross-site failovers occur automatically. This can be controlled by placing an activation block on the remote mailbox servers, thereby preventing Active Manager from selecting those copies during a failover.
    5. When deploying an Active/Active (Single DAG) architecture, do you want to deploy dedicated disaster recovery Mailbox servers in the alternate datacenter?  If deploying a single DAG Active/Active solution, you can choose to have dedicated DR Mailbox servers deployed in the secondary datacenter to be used in the event of disaster or utilize the existing mailbox servers that are hosting active mailboxes. 

    Mailbox Database Copy Configuration

    1. How many highly available (HA) mailbox database copy instances per database do you plan to deploy within a DAG? Enter the number of highly available database copies you plan to have within the environment. This value excludes lagged database copies, but includes both the active copy and all passive HA copies you plan to deploy. For optimal sizing, choose a multiple of the total number of mailbox servers you have selected.
    2. How many lagged database copy instances per database do you plan to deploy within a DAG? Lagged database copies are an optional feature that can provide protection against certain disaster scenarios (like logical corruption). Lagged database copies should not be considered an HA database copy as the replay will delay the availability of the database for use once activated. While technically there is no limit to how many lagged copies you can deploy within a DAG, the calculator limits you to a maximum of 2 copies.
    3. How many highly available mailbox database copy instances per database do you plan to deploy within the secondary datacenter within a DAG? If you are deploying a site resilient solution, you can choose to have a portion of the total HA database copies deployed in the secondary datacenter.
    4. How many lagged database copy instances per database do you plan to deploy within the secondary datacenter within a DAG? If you are deploying a site resilient solution, you can choose to have a portion or all of your lagged database copies deployed in the secondary datacenter.

    Lagged Database Copy Configuration

    1. Will you deploy the lagged database copies on a dedicated server? Using a dedicated server for lagged database copies certainly makes them easier to manage. For DAGs where the lagged database copies are evenly distributed across all the DAG mailbox servers, you will need to use Suspend-MailboxDatabaseCopy with the -ActivationOnly flag to prevent them from being mounted, but there are scenarios that can clear this setting. With a dedicated server you can activation block the entire server, and that setting is persistent. The choice can also affect your storage design in terms of choosing RAID or JBOD. Unless you have multiple lagged copies, lagged copies should be placed on storage that is utilizing RAID to provide additional protection. The calculator will determine the appropriate number of lagged copy servers you need to deploy based on the requirements (number of databases, number of copies, etc.).
    2. How long will you delay transaction log replay on your lagged copy? This parameter is used to specify the amount of time that the Microsoft Exchange Information Store service should wait before replaying log files that have been copied to the lagged database. The maximum amount of replay delay you can set is 14 days. The value you specify here will influence the log capacity requirements for all copies and the amount of time required to mount a lagged copy.
    3. How long will you delay transaction log truncation on your lagged copy? This parameter is used to specify the amount of time that the Microsoft Exchange Replication service should wait before truncating log files that have been copied to the lagged database. The time period begins after the log has been successfully replayed into the lagged copy. The maximum allowable setting for this value is 14 days. The minimum allowable setting is 0, although setting this value to 0 effectively eliminates any delay in log truncation activity. The value you specify here will influence the log capacity requirements for all copies.

    Exchange Data Configuration

    1. What will be the Data Overhead Factor? Microsoft recommends using 20% to account for any extraneous growth that may occur.
    2. How many mailboxes do you move per week? In terms of transactions, you have to take into account how many mailboxes you will either be moving to this server or within this server, as transactions totaling the size of the mailbox will always get generated at the target database.
    3. Are you going to deploy a Dedicated Restore LUN? A dedicated restore LUN is used as a staging point for the restoration of data or could be used during maintenance activities; if one is selected then additional capacity will not be factored into each database LUN.
    4. What percentage of disk space do you want to ensure remains free on the LUN? Most operations management programs have capacity thresholds that alert when a LUN is more than 80% utilized. This value allows you to ensure that each LUN has a certain percentage of disk space available so that the LUN is not designed and implemented at maximum capacity.
    5. Do you have log shipping compression enabled within the DAG? By default, each DAG is configured to compress and encrypt the socket connection used to ship logs across different IP subnets (you can disable these features altogether or enable them for all communications regardless of subnet).
    6. What is your compression rate?  The compression obtained for the socket connection used to ship logs will vary with each customer, based on the data contained in the transaction log files.  By default, Microsoft recommends using a value of 30%; however, you can determine this value by analyzing your environment (e.g., once Exchange 2010 is deployed you could evaluate the throughput rate with compression disabled and then compare it with compression enabled).
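To see what the compression rate means for replication bandwidth, here is a small sketch assuming the default 30% rate. The function name and the simple linear model are illustrative, not the calculator's internals.

```python
# Sketch: bandwidth that must cross the wire after log shipping
# compression. Assumes compression reduces the shipped bytes by the
# compression rate (30% by default, per the text above).

def replication_throughput(log_mb_per_hour: float, compression_rate: float = 0.30) -> float:
    """MB/hour actually shipped for a given rate of log generation."""
    return log_mb_per_hour * (1 - compression_rate)

print(replication_throughput(1000))  # 700.0 MB/hour shipped for 1000 MB/hour generated
```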

    Database Configuration

    1. Do you want to follow Microsoft's recommendations regarding maximum database size? For standalone mailbox server role solutions, Microsoft recommends that the database size should not be more than 200GB in size. For solutions leveraging mailbox resiliency, Microsoft recommends that the database size should not exceed 2TB. Neither of these is by any means a hard limit, but a recommendation based on the impact database size has to recovery times. If you want to follow Microsoft's recommendation, then select Yes. Otherwise, select No.
    2. Do you want to specify a custom Maximum Database Size? If you selected No for the previous field, then you need to enter in a custom maximum database size.
    3. Would you like the calculator to determine the optimum number of databases for the design? By default the calculator will determine the optimum number of databases for the architecture. In the event that you may want to have a defined number of databases, select No to "Automatically Calculate Number of Databases" and enter in a custom number of databases.
    4. Do you want to specify the number of databases that should be deployed? If you selected "No" to the previous field, then you need to enter in the number of databases you would like to have deployed within your mailbox server or DAG architecture.
    5. Do you want to design your database infrastructure such that you deploy the correct number of databases to ensure symmetrical distribution during server failure events?  If possible, the calculator will deploy the correct number of databases such that you can achieve a symmetrical distribution of the active copies across the remaining server infrastructure as the DAG experiences server failures.  Note that to use this option, you must allow the calculator to automatically calculate the required number of databases.

    IOPS Configuration

    1. What will be the I/O Overhead Factor? Microsoft recommends using 20% to ensure adequate headroom in terms of I/O to allow for abnormal spikes in I/O that may occur from time to time.
    2. What additional I/O requirements do you need to factor into the solution for each mailbox server's storage design? For example, let's say the solution requires 500 IOPS for the mailboxes and you have decided you want to ensure there is extra I/O capacity to support additional products (e.g. antivirus) to generate load during the peak user usage window. So you enter 300 IOPS in this input factor. The result is that from a host perspective, the solution needs to achieve 800 IOPS. This may require additional testing by comparing a baseline system against a system that has the I/O generating application installed and running.
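The arithmetic in the example above can be sketched as follows. Exactly where the calculator applies the overhead factor is internal to the tool, so treat the helper below as illustrative only.

```python
# Sketch of the example above: 500 mailbox IOPS plus 300 additional IOPS
# yields an 800 IOPS host requirement. The helper shows the separate
# 20% I/O Overhead Factor; where the calculator applies it is internal
# to the tool, so this is illustrative only.

def with_overhead(iops: float, overhead_factor: float = 0.20) -> float:
    """Pad an IOPS figure with headroom for abnormal spikes."""
    return iops * (1 + overhead_factor)

mailbox_iops = 500
additional_iops = 300
host_iops = mailbox_iops + additional_iops
print(host_iops)                    # 800, as in the example above
print(with_overhead(mailbox_iops))  # 600.0 mailbox IOPS with 20% headroom
```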

    Mailbox Configuration

    Within Step 2 you will define your user profile for up to four different tiers of user populations.

    1. How many mailboxes will you deploy in the environment? If deploying a single server environment, this is how many mailboxes you will deploy on this server. If you are deploying multiple servers, then this is how many mailboxes you will deploy in the environment. If you are deploying multiple DAGs, then this is how many mailboxes you will deploy across all of the DAGs. For example, if you choose to deploy 5 servers, and want 3000 mailboxes per server, then enter 15000 here. Or if you plan to deploy 2 DAGs, each with 6 servers, and you entered 24000 total mailboxes, then 12000 mailboxes will be deployed per DAG.
    2. What is the solution's projected growth in terms of number of mailboxes over its lifecycle? Enter in the total percentage by which you believe the number of mailboxes will grow during the solution's lifecycle. For example, if you believe the solution will increase by 30% over the lifecycle of the design and you are starting out with 1000 mailboxes, then at the end of the lifecycle, the solution will have 1300 mailboxes. The calculator will utilize the projected growth plus the number of mailboxes to ensure that the capacity and performance requirements can be sustained throughout the solution's lifecycle.
    3. How much mail do the users send and receive per day on average? The usage profiles found here are based on the work done around the memory and processor scalability requirements.
    4. What is the average message size? For most customers the average message size is around 75KB.
    5. What will be the prohibit send & receive mailbox size limit? If you want to adequately control your capacity requirements, you need to set a hard mailbox size limit (prohibit send and receive) for the majority of your users.
    6. If deploying a personal archive mailbox, what will be the personal archive quota limit? If you want to adequately control your capacity requirements, you need to set a hard archive quota limit for the majority of your users.
    7. What is the deleted item retention period? Enter in the deleted item retention period you plan to utilize within the environment. The default retention period is 14 days, however, you should adjust this to match your policy concerning deleted item recovery when enabling Single Item Recovery to eliminate going to backup media to recover deleted items.
    8. Are you deploying Single Item Recovery? Single Item Recovery ensures that all deleted and modified items are preserved for the duration of the deleted item retention window. By default in Exchange 2010 RTM, this is not enabled. When enabled, this feature increases the capacity requirements for the mailbox.
    9. Will you have calendar version logging enabled? By default, all changes to a calendar item are recorded in the mailbox of a user to keep older versions of meeting items for 120 days and can be used to repair the calendar in the event of an issue. This data is stored in the mailbox's dumpster folder. When enabled, this feature increases the capacity requirements for the mailbox.
    10. Do you want to include an IOPS Multiplication Factor in the prediction or custom I/O profile? The IOPS Multiplication Factor can be used to increase the IOPS/mailbox footprint for mailboxes that require additional I/O (for example, these mailboxes may use third-party mobile devices). The way this value is used is as follows: (IOPS value * Multiplication Factor) = new IOPS value.
    11. Do you want to include a Megacycle Multiplication Factor when sizing the CPU requirements for the mailbox tier?  The Megacycle Multiplication Factor can be used to increase the CPU cost/mailbox footprint for mailboxes that require more CPU work than a typical mailbox (for example, these mailboxes may use third-party mobile devices). The way this value is used is as follows: (Megacycles value * Multiplication Factor) = new Megacycles value.
    12. Do your Outlook Online Mode clients have versions of Windows Desktop Search older than 4.0 or third-party desktop search engines deployed? The addition of these indexing tools to online mode clients incurs additional read I/O penalties on the mailbox server storage subsystem. Care should be taken when enabling these desktop search engines. Windows Desktop Search 4.0 and later utilizes synchronization protocols similar to how Outlook operates in cached mode to index the mailbox contents, and thus has a very minor impact in terms of disk read I/O.
    13. Are you planning to use the I/O prediction formula or define your own IOPS profile to design toward? This question asks whether you want to override the calculator in determining the IOPS / mailbox value. By default the calculator will predict the IOPS / mailbox value based on the number of messages per mailbox, and the user memory profile. For some customers that want to design toward a specific I/O profile, this option will not be viable. Therefore, if you want to design toward a specific I/O profile, select No.
    14. What is your custom IOPS profile / mailbox? Only enter a value in this field if you selected "No" to the "Predict IOPS Value" question.
    15. What will be the database read:write ratio for your custom IOPS profile? Only adjust this value if you selected "No" to the "Predict IOPS Value" parameter. When IOPS prediction is enabled, the calculator will calculate the read:write ratio based on the user profile.
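Two of the inputs above, projected growth (question 2) and the multiplication factors (questions 10 and 11), are plain arithmetic. The sketch below restates them; rounding to a whole mailbox count is an assumption for illustration, not the calculator's documented behavior.

```python
# Sketch of the growth (question 2) and multiplication factor
# (questions 10 and 11) arithmetic described in the text.

def mailboxes_at_end_of_lifecycle(current: int, growth: float) -> int:
    """e.g. 1000 mailboxes with 30% projected growth -> 1300."""
    return round(current * (1 + growth))

def adjusted_iops(predicted_iops: float, multiplication_factor: float) -> float:
    """(IOPS value * Multiplication Factor) = new IOPS value."""
    return predicted_iops * multiplication_factor

print(mailboxes_at_end_of_lifecycle(1000, 0.30))  # 1300
print(adjusted_iops(0.10, 2))                     # 0.2
```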

    Backup Configuration

    Within Step 3 you will define your backup model and your tolerance settings, as well as choose whether to isolate the transaction logs from the database.

    1. What backup methodology will be used to backup the solution? You have several options for a backup methodology, including leveraging a VSS solution (hardware or software based) or leveraging the native data protection features that Exchange provides. The solution you choose will depend on many factors. For example, if you are deploying the mailbox resiliency and single item recovery features, you may be able to forgo a traditional backup architecture in favor of leveraging Exchange as its own backup. Or if you still require a backup (e.g. legal/compliance reasons), then you need to deploy a VSS solution. The type of VSS solution you deploy will depend on your storage architecture. Hardware VSS solutions are available with storage area networks. Software VSS solutions can be leveraged against either storage area networks or direct attached storage architectures. Also, the backup methodology will affect the LUN design; for example, hardware VSS solutions require a LUN architecture that is 2 LUNs / Database.
    2. What will be the backup frequency? You can choose Daily Full, Weekly Full with Daily Differential, Weekly Full with Daily Incremental, or Bi-Monthly Full with Daily Incremental. The backup frequency will affect the LUN design and the disk space requirements (e.g. if performing daily differentials, then you need to account for 7 days of log generation in your capacity design).
    3. Will you isolate the logs from the database?  Database/Log isolation refers to placing the DB file and logs from the same Mailbox Database on different volumes backed by different physical disks.  For standalone architectures, the Exchange best practice is to separate the database file (.edb) and logs from the same database onto different volumes backed by different physical disks for recoverability purposes.  For solutions leveraging mailbox resiliency, isolation is not required.
    4. How many times can you operate without log truncation? Select how many times you can survive without a full backup or an incremental backup (the minimum value is 1). For example, if you are performing weekly full backups and daily differential backups, the only time log truncation occurs is during the full backup. If the full backup fails, then you have to wait an entire week to perform another full backup or perform an emergency full backup. This parameter allows you to ensure that you have enough capacity to not have to perform an immediate full backup. If you are leveraging the native data protection features within Exchange as your backup mechanism, then you should enter 3 here to ensure you have enough capacity to allow for 3 days' worth of log generation to occur as a result of potential log replication issues.
    5. How long can you survive a network outage? When a network outage occurs, log replication cannot occur. As a result, the copy queue length will increase on the source; in addition, log truncation cannot occur on the source. For geographically dispersed DAG deployments, network outages can seriously affect the solution's usefulness. If the outage is too long, log capacity on the source may become compromised and as result, capacity must be increased or a manual log truncation event must occur. Once that happens, the remote copies must be reseeded. The Network Failure Tolerance parameter ensures there is enough capacity on the log LUNs so that you can survive an excessive network outage.
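The way questions 4 and 5 feed log capacity can be sketched as below. The formula and names are illustrative assumptions; the calculator's internal math is more involved.

```python
# Sketch: log LUN capacity needed to survive missed truncation events
# plus a network outage, assuming a steady daily log generation rate.
# Illustrative only; not the calculator's actual formula.

def log_capacity_gb(daily_log_gb: float,
                    truncation_failure_days: float,
                    network_outage_days: float) -> float:
    """GB of logs accumulated while truncation cannot occur."""
    return daily_log_gb * (truncation_failure_days + network_outage_days)

# 50 GB of logs per day, tolerate 3 missed truncations and a 2-day outage:
print(log_capacity_gb(50, 3, 2))  # 250
```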

    Storage Configuration

    Within Step 4 you will define your storage configuration.


    Storage Options

    1. Do you want to consider storage designs that leverage JBOD? JBOD storage refers to placing a database and its transaction logs on a single disk without leveraging RAID. In order to deploy this type of storage solution for your mailbox server environment, you must have 3 or more HA database copies and have a LUN architecture that is equal to 1 LUN / Database. If you select yes for this input, the calculator will attempt to design the solution so that it can be deployed on JBOD storage. Note, however, that other factors may alter the viability of JBOD (e.g., deploying a single lagged database copy on the same mailbox servers hosting your HA database copies).

    Primary Datacenter Disk Configuration

    1. What are the disk capacities and types you plan to deploy? For each type of LUN (database, log, and restore LUN) you plan to deploy, select the appropriate capacity and disk type model.

    Secondary Datacenter Disk Configuration

    1. What are the disk capacities and types you plan to deploy? For each type of LUN (database, log, and restore LUN) you plan to deploy, select the appropriate capacity and disk type model.

    Processor Configuration

    Within Step 5, you will define the number of processor cores you have deployed for each mailbox server within your primary and secondary datacenters, as well as enter the SPECint2006 rate value for the system you have selected.

    When you enable virtualization, you must be sure to configure the processor architecture correctly. In particular, you must enter the correct number of processor cores that the guest machine will support, as well as the correct SPECint2006 rating for these virtual processor cores. To calculate the SPECint2006 rate value, you can use the following formula:

    X/(N*Y) = per virtual processor SPECInt2006 Rate value

    Where X is the SPECInt2006 rate value for the hypervisor host server
    Where N = the number of physical cores in the hypervisor host
    Where Y = 1 if you will be deploying 1:1 virtual processor-to-physical processor on the hypervisor host
    Where Y = 2 if you will be deploying up to 2:1 virtual processor-to-physical processor on the hypervisor host

    For example, let's say I am deploying an HP ProLiant DL580 G7 (2.27 GHz, Intel Xeon X7560) system with four sockets, each containing an 8-core processor. The SPECint2006 rate value for this system is 757.

    If I am deploying my Mailbox server role as a guest machine using Hyper-V, and following the best practice of not oversubscribing virtual CPUs to physical processors, then:

    757/(32*1) = 23.66

    Since each Mailbox server will have a maximum of 4 virtual processors, the SPECint2006 rate value I would enter into the calculator is 23.66 * 4 ≈ 95.

    In addition, if you are deploying your Exchange servers as guest machines, you can specify the Hypervisor CPU Adjustment Factor to take into account the overhead of deploying guest machines.
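    The worked example above can be expressed as a small helper. This is a sketch of the formula exactly as stated in the text (the function name and structure are illustrative):

```python
def guest_specint_rate(host_specint_rate, host_physical_cores, oversub_ratio, guest_vcpus):
    """SPECint2006 rate value to enter for a virtualized Mailbox server:
    (host rate / (physical cores * oversubscription ratio)) * guest vCPUs."""
    per_vcpu = host_specint_rate / (host_physical_cores * oversub_ratio)
    return per_vcpu * guest_vcpus

# DL580 G7 example from the text: host rate 757, 32 physical cores,
# 1:1 virtual-to-physical processor ratio, 4 vCPUs per guest
print(round(guest_specint_rate(757, 32, 1, 4)))  # 95
```

    Note that a Hypervisor CPU Adjustment Factor, if used, would further reduce the effective value entered.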

    Server Configuration

    1. How many processor cores, and what is their megacycle capability, are you planning to deploy in each server? For each server type (primary datacenter, secondary datacenter, and lagged copy server) you plan to deploy, select the number of processor cores and the server's SPECint2006 rate value. To determine your SPECint2006 rate value:
      1. Open a web browser and go to www.spec.org.
      2. Click on Results, highlight CPU2006 and then select Search CPU2006 Results.
      3. Under Available Configurations, select SPECint2006 Rates and click Go.  Under Simple Request, enter the search criteria (e.g., Processor matches x5550).
      4. Find the server and processor you are planning to deploy and take note of the result value. For example, let's say you are deploying a Dell PowerEdge M710 8-core server with Intel x5550 2.67 GHz processors; the SPECint_rate2006 results value is 240.

    Log Replication Configuration

    Within Step 6, you will define your hourly log generation rate, the network link, and the network link latency you expect to have within your site resilient architecture.

    1. How many transaction logs are generated for each hour in the day? Enter the percentage of transaction logs generated for each hour of the day by measuring an existing Exchange 2003 or Exchange 2007 server in your environment. If the existing messaging environment is not using Exchange, evaluate the messaging environment and enter the rate of change per hour here.

    Now you may be wondering how you can collect this data. We've written a simple VBS script that will list all files in a folder and output them to a log file. You can use Task Scheduler to execute this script at regular intervals during the day (e.g., every 15 minutes). Once you have generated the log file for a 24-hour period, you can import it into Excel, massage the data (i.e., remove duplicate entries), and determine how many logs are generated each hour. If you do this for each storage group, you will be able to determine your log generation rate for each hour of the day. The script is named collectlogs.vbsrename (just rename it to collectlogs.vbs) and you can find it here: Collectlogs VBS script
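    If you'd rather not use VBS, the same periodic snapshot can be approximated with a short Python script (a hypothetical equivalent, not the script linked above); point it at a storage group's log folder and schedule it with Task Scheduler:

```python
import csv
import datetime
import os

def snapshot_logs(log_dir, out_csv):
    """Append the current timestamp and each .log file name in log_dir
    to a CSV file. Run this on a schedule (e.g., every 15 minutes), then
    pivot the results by hour to derive the hourly log generation rate."""
    now = datetime.datetime.now().isoformat(timespec="seconds")
    with open(out_csv, "a", newline="") as f:
        writer = csv.writer(f)
        for name in sorted(os.listdir(log_dir)):
            if name.lower().endswith(".log"):
                writer.writerow([now, name])
```

    After 24 hours, de-duplicate the file names in the output and count the new entries per hour, just as described for the Excel approach above.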

    Network Configuration

    1. What type of network link will you be using between the servers? Select the appropriate network link you will be using between the two datacenters.
    2. What is the latency on the network link? Enter the latency (in milliseconds) that exists on the network link.

    Role Requirements

    This section provides the solution's I/O, capacity, memory, and CPU requirements.


    Based on the above input factors the calculator will recommend the following architecture, broken down into four sections:

    • Environment Configuration
    • Active Database Copy Configuration
    • Server Configuration
    • Log, Disk Space, and IO Requirements

    Processor Core Ratio Requirements

    This table identifies the number of processor cores required to support the activated databases. It is only populated if you enter the processor core megacycle information on the Input tab.

    • The Recommended Minimum Number of Global Catalog Cores identifies the minimum number of processor cores required to sustain the load for global catalog related activities and is based on the number of processor cores required to support the activated databases.

    Client Access Server Requirements

    This table identifies the memory and CPU requirements for dedicated Client Access servers if you choose to not co-locate the server roles. This table is only populated if you populate the processor core megacycle information on the Input tab.

    • The Recommended Minimum Number of Client Access Processor Cores identifies the minimum number of processor cores required per server to sustain the load for client related activities.
    • The Recommended Minimum Number of Client Access Server RAM Configuration identifies the minimum amount of memory required per server to sustain the load for client related activities. This number is scaled to a multiple that can be typically installed in a server.

    Hub Transport Server Requirements

    This table identifies the memory and CPU requirements for dedicated Hub Transport servers if you choose to not co-locate the server roles. This table is only populated if you populate the processor core megacycle information on the Input tab.

    • The Recommended Minimum Number of Hub Transport Processor Cores identifies the minimum number of processor cores required per server to sustain the load for transport-related activities.
    • The Recommended Minimum Number of Hub Transport RAM Configuration identifies the minimum amount of memory required per server to sustain the load for transport-related activities. This number is scaled to a multiple that can typically be installed in a server.

    Environment Configuration

    The Environment Configuration table identifies the number of mailboxes being deployed in each datacenter, as well as how many mailbox servers and lagged copy servers you will deploy in each datacenter. This table will also identify the minimum number of dedicated Hub Transport and Client Access servers you should deploy in each datacenter (taking into account a worst-case failure mode of two simultaneous server failures).

    User Mailbox Configuration

    The Mailbox Configuration table provides you with:

    • The Number of Mailboxes that you entered in the Input section (this value will include the projected growth).
    • The Number of Mailboxes / Database provides a breakdown of how many mailboxes from each mailbox tier will be stored within a database.
    • The User Mailbox Size within Database is the actual mailbox size on disk that factors in the prohibit send/receive limit, the number of messages the user sends/receives per day, the deleted item retention window (with or without calendar version logging and single item recovery enabled), and the average database daily churn per mailbox. It is important to note that the Mailbox size on disk is actually higher than your mailbox size limit; this is to be expected.
    • The Transaction Logs Generated / Mailbox value is based on the message profile selected and the average message size and indicates how many transaction logs will be generated per mailbox per day. The log generation numbers per message profile account for:
      • Message size impact. In our analysis of databases internally, we have found that 90% of the database is attachments and message tables (message bodies and attachments). So if the average message size doubles (from 75 KB to 150 KB), the worst case is for the log traffic to increase by 1.9 times. Thereafter, as message size doubles, the impact doubles.
      • Amount of data Sent/received.
      • Database health maintenance operations.
      • Records Management operations
      • Data stored in mailbox that is not a message (tasks, local calendar appts, contacts, etc).
      • Forced log rollover (a mechanism that periodically closes the current transaction log file and creates the next generation).
    • The IOPS / Mailbox value is the calculated IOPS / Mailbox based on the number of messages per mailbox, the user memory profile, and desktop search engine choices. If you chose to enter a specific IOPS / mailbox value rather than allowing the calculator to determine the value based on the above requirements, this value will be that custom value.
    • The Read:Write ratio / Mailbox value defines the ratio of the mailbox's IOPS that are read I/Os. This information is required to accurately design the storage subsystem I/O requirements.
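    To see how the IOPS / Mailbox and Read:Write ratio values combine when sizing the storage subsystem, here is a hypothetical sketch of the arithmetic (the figures are illustrative, not calculator output):

```python
def database_iops(mailboxes, iops_per_mailbox, read_fraction):
    """Split total peak database IOPS into read and write components,
    given the fraction of I/Os that are reads."""
    total = mailboxes * iops_per_mailbox
    return total * read_fraction, total * (1.0 - read_fraction)

# e.g., 5,000 mailboxes at 0.12 IOPS each, with 60% of I/Os being reads
reads, writes = database_iops(5000, 0.12, 0.6)
```

    The read component is what a RAID penalty does not apply to, which is why the ratio matters for disk-count calculations.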

    Database Copy Instance Configuration

    This table highlights how many HA mailbox database copy instances and lagged database copy instances your solution will have within each datacenter for a given DAG.

    Database Configuration

    The Database Configuration table provides you with:

    • The Number of Databases is the calculated number of databases required to support the mailbox population within a standalone server or DAG.
    • The Recommended Number of Mailboxes / Database is the calculated number of mailboxes per database ensuring that the database size does not go above the recommended database size limit.
    • The Available Database Cache / Mailbox value is the amount of database cache memory that is available per mailbox. A large database cache ensures that read I/Os can be reduced.

    Database Copy Configuration

    The Database Copy Configuration table provides you with the number of database copies being deployed within each server and the total number of database copies within the DAG.

    Server Configuration

    The Server Configuration table provides you with the following:

    • The Recommended RAM Configuration for the primary datacenter mailbox servers, secondary datacenter mailbox servers, and lagged copy servers. This is the amount of RAM needed to support the maximum number of activated database copies on a given server, in addition to the number of mailboxes based on their memory profile.
    • The number of processor cores utilized during the worst failure mode scenario.
    • The CPU Utilization value is the expected CPU utilization for a fully utilized server based on the megacycles associated with the user profile and the number of database copies. Depending on the environment, this will either be for a standalone server hosting 100% active databases, or a server participating in a DAG that is dealing with a single or double server failure event (or secondary datacenter activation). It is recommended that servers not exceed 80% utilization during peak periods. The CPU utilization value is determined by taking the CPU Megacycle Requirements and dividing it by the total number of megacycles available on the server (which is based on the CPU and number of cores). If the calculator highlights the CPU utilization with a red background, the design may not be able to sustain the load: either change the design (number of mailboxes, number of copies, etc.) or change the server CPU platform.
    • The CPU Megacycle Requirements value defines the amount of megacycles the primary datacenter servers must be able to sustain when either all mailbox databases are active or the number of mailbox database copies that are activated based on a single server or double server failure event. For secondary servers hosting HA copies, this value defines the amount of megacycles required to support the activation of all databases after datacenter activation. For lagged copy servers, this value defines the amount of megacycles required to support all of the passive lagged copies.
    • The Server Total Available Adjusted Megacycles value defines the total available megacycles the server platform is capable of delivering at 100% CPU utilization. This value has been normalized against the baseline server platform (Intel Xeon x5470 2x4 3.33 GHz processors).
    • The Possible Storage Architecture outlines whether the solution could utilize RAID or JBOD for the primary datacenter servers, secondary datacenter servers, and lagged copy servers. JBOD is only considered under the following conditions (this assumes you configured the calculator to consider JBOD):
      • In order to deploy on JBOD in the primary datacenter servers: You need a total of 3 or more HA copies within the DAG. If you are mixing lagged copies on the same server that is hosting your HA copies (i.e. not using dedicated lagged copy servers), then you need at least 2 lagged copies.
      • For the secondary datacenter servers to use JBOD: You should have at least 2 HA copies in the secondary datacenter. That way, loss of a copy in the secondary datacenter doesn't result in requiring a reseed across the WAN or loss of data (in the datacenter activation case). If you are mixing lagged copies on the same server that is hosting your HA copies (i.e. not using dedicated lagged copy servers), then you need at least 2 lagged copies.
      • For dedicated lagged copy servers: You should have at least 2 lagged copies within a datacenter in order to use JBOD. Otherwise loss of disk results in loss of your lagged copy (and whatever protection mechanism that was providing).
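    The CPU Utilization arithmetic described above reduces to required megacycles divided by available adjusted megacycles. A minimal sketch (the numbers are illustrative), flagging designs above the 80% peak guidance:

```python
def cpu_utilization(required_megacycles, cores, adjusted_megacycles_per_core):
    """Expected peak CPU utilization for a server, as a fraction:
    CPU Megacycle Requirements / Server Total Available Adjusted Megacycles."""
    available = cores * adjusted_megacycles_per_core
    return required_megacycles / available

# e.g., 48,000 required megacycles on a 12-core server at 5,000
# adjusted megacycles per core
util = cpu_utilization(48_000, 12, 5_000)
if util > 0.8:
    print("Design exceeds the recommended 80% peak utilization")
```

    This is the same check behind the red highlighting in the calculator's Server Configuration table.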

    Transaction Log Requirements

    The Transaction Log Requirements table provides you with:

    • The User Transaction Logs Generated / Day indicates how many transaction logs will be generated during the day for each active database, each server, within the DAG, and within the environment.
    • The Average Mailbox Move Transaction Logs Generated / Day indicates how many transaction logs will be generated during the day for each active database, each server, within a DAG, and within the environment. This value assumes that an equal percentage of mailboxes will be moved each day, as opposed to moving all mailboxes on the same day.
    • The Average Transaction Logs Generated / Day is the total number of transaction logs generated per day for each active database, each server, within a DAG, and within the environment (includes user-generated logs and mailbox move generated logs).

    Disk Space Requirements

    The Disk Space Requirements table provides you with:

    • The Database Space Required is the amount of space required to support each database and its corresponding copies. This value is derived from the mailbox size on disk, the data overhead factor, and whether a dedicated restore LUN is available. This row also shows you the space requirements for each server (based on the total number of database copies), each DAG, and within the environment.
    • The Log Space Required is the amount of space required to support each database log stream and the corresponding copies. This value takes into account the number of mailboxes moved per week (assumes worst case and that all mailboxes are moved on the same day), the type of backup frequency in use, the number of days that can be tolerated without log truncation and the number of transaction logs generated per day. This row also shows you the space requirements for each server (based on the total number of database copies), each DAG, and within the environment.
    • The Database LUN Space Required is the LUN size required to support the database (and potentially its log stream). This calculation takes the total disk space required for the database and adds 110% of the database size (if a dedicated restore LUN does not exist) for offline maintenance operations, an additional 10% of the database size for content indexing (if enabled), and an amount of free space to ensure the LUN is not 100% utilized (based on the LUN Free Space Percentage). This row also shows you the space requirements for each server (based on the total number of database copies), each DAG, and within the environment.
    • The Log LUN Space Required is the LUN size required to support the databases log stream. This field lists the amount of space required to support the transaction logs for a given set of databases and includes an amount of free space to ensure the LUN is not 100% utilized (based on LUN Free Space Percentage). This row also shows you the space requirements for each server (based on the total number of database copies), each DAG, and within the environment.
    • The Restore LUN Space Required is the amount of space needed to support a restore LUN if the option was selected in the Input Factor section; this will include space for up to 7 databases and 7 transaction log sets. Each server will be provisioned with a restore LUN. This row also shows you the space requirements for each server (based on the total number of database copies), each DAG, and within the environment.
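    The Database LUN sizing rules described above can be sketched as follows. This is a hypothetical simplification of the stated rules, not the calculator's exact formula:

```python
def database_lun_size_gb(db_space_required_gb, db_size_gb, has_restore_lun,
                         content_indexing_enabled, lun_free_space_pct):
    """LUN size for a database: total database space, plus 110% of the
    database size for offline maintenance when no restore LUN exists,
    plus 10% of the database size for content indexing, plus headroom
    so the LUN is never 100% utilized."""
    size = db_space_required_gb
    if not has_restore_lun:
        size += db_size_gb * 1.10  # room for offline maintenance operations
    if content_indexing_enabled:
        size += db_size_gb * 0.10  # content indexing overhead
    return size / (1.0 - lun_free_space_pct / 100.0)

# e.g., 1,000 GB database, no restore LUN, indexing on, 20% free space target
lun = database_lun_size_gb(1000, 1000, False, True, 20)
```

    Dividing by (1 - free space percentage), rather than adding a flat percentage, is what guarantees the stated free space remains after all the data is placed.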

    Host IO and Throughput Performance Requirements

    The Host IO and Throughput Performance Requirements table provides you with:

    • The Total Required Database IOPS is the amount of read and write host I/O the database disk set must sustain during peak load (this does not factor in any RAID penalties). This row also shows you the IOPS requirements for each server (based on the total number of database copies), each DAG, and within the environment.
    • The Total Required Log IOPS is the amount of read and write host I/O that will occur against the transaction log disk set. This row also shows you the IOPS requirements for each server (based on the total number of database copies), each DAG, and within the environment.
    • The Database Read I/O Percentage defines the percentage of database required IOPS that are read I/Os. This information is required to accurately design the storage subsystem I/O requirements.
    • The amount of throughput required for Background Database Maintenance operations.

    Special Notes

    The Special Notes table will provide you with additional information about your design:

    • When to use GPT disks (when a LUN size is greater than 2TB).

    Activation Scenarios

    If you are deploying a highly available and/or site resilient architecture, then this section will break down the failure scenarios.  The section is broken up into two scenarios:

    1. Scenario 1 - Deploying a DAG architecture within a single datacenter or deploying it in a site resilient Active/Passive user distribution model.
    2. Scenario 2 - Deploying a site resilient DAG architecture with an Active/Active user distribution model.

    Important:  For the purposes of this calculator, the term "primary datacenter" refers to the datacenter that is preferred for hosting the active copies for a given set of databases, while the term "secondary datacenter" refers to the disaster recovery datacenter that is used for datacenter activation and cross-site database failover events.

    Single Datacenter and Active/Passive Environments

    The DAG Member Layout table identifies the number of Active Mailbox servers (those that are hosting active mailboxes within the primary datacenter), the Disaster Recovery Mailbox Servers (those that host passive database copies in the second datacenter), and any Lagged Copy Mailbox servers you may be deploying.

    There are two tables that provide data around the Active Database configuration, one for the primary datacenter, which outlines the single or double server events, and one for the secondary datacenter, which outlines the activation of that datacenter when the primary datacenter is lost. Both tables provide you with:

    • The Number of Active Databases (Normal Run Time) value defines the number of active databases hosted on each server when there are no server outages. Unlike Exchange 2007, Exchange 2010 is no longer bound by an active/passive high availability model. Instead, each server within a DAG can host active mailbox database copies. The calculator distributes the number of unique databases across the primary datacenter servers within the DAG, ensuring that an equal number of mailbox database copies is activated on each server. This row exposes the total number of mailboxes that are accessible on each server as a result of the activated database copies. In addition, this row highlights the total number of databases deployed within each datacenter, in the event that cross-site database *overs are allowed to occur.
    • The Number of Active Databases (After First Server Failure) value defines the number of active databases hosted on each server when there is a single server outage. As a result of the single server outage, the database copies that were activated on the failed server are equally redistributed across all remaining server nodes. This row exposes the total number of mailboxes that are accessible on each server as a result of the activated database copies. In addition, this row highlights the total number of databases deployed within each datacenter, in the event that cross-site database *overs are allowed to occur.
    • The Number of Active Databases (After Double Server Failure) value is populated when you have at least 3 HA mailbox copies and at least 4 mailbox servers within your design. It defines the number of active databases hosted on each server when there are two server outages. As a result of the double server outage, the database copies that were activated on the failed servers are equally redistributed across all remaining server nodes. This row exposes the total number of mailboxes that are accessible on each server as a result of the activated database copies. In addition, this row highlights the total number of databases deployed within each datacenter, in the event that cross-site database *overs are allowed to occur.
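    The redistribution behavior the three rows describe can be approximated with simple arithmetic. A hypothetical sketch, assuming activated copies redistribute evenly across the surviving servers:

```python
import math

def active_databases_per_server(total_databases, servers, failed_servers):
    """Approximate active databases per surviving server after the given
    number of server failures, assuming an even redistribution."""
    survivors = servers - failed_servers
    if survivors <= 0:
        raise ValueError("no surviving servers")
    return math.ceil(total_databases / survivors)

# 48 databases across 4 servers: 12 each normally, more after failures
after_one = active_databases_per_server(48, 4, 1)
after_two = active_databases_per_server(48, 4, 2)
```

    The real calculator also honors activation preferences and copy placement, so the actual distribution may be less even than this sketch suggests.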

    Active/Active Environments

    This section breaks out the architecture into two perspectives: the layout of Datacenter 1 and the layout of Datacenter 2 with respect to the DAG architecture. Recall the two Active/Active models:

    1. Active/Active (Single DAG) - This model stretches a DAG across the two datacenters and has active mailboxes located in each datacenter. A corresponding passive copy is located in the alternate datacenter. This scenario does have a potential single point of failure: the WAN connection. Loss of the WAN connection will result in the mailbox servers in one of the datacenters going into a failed state from a failover cluster perspective (due to loss of quorum):

      ---DC1---            ---DC2---
      DAG1                 DAG1
      Active Copies        Passive Copies
      Passive Copies       Active Copies

    2. Active/Active (Multiple DAGs) - This model leverages multiple DAGs to remove single points of failure (e.g., the WAN). In this model, there are at least two DAGs, with each DAG having its active copies in the alternate datacenter:

      ---DC1---                          ---DC2---
      DAG-A (Active/Passive Copies)      DAG-A (Passive Copies)
      DAG-B (Passive Copies)             DAG-B (Active/Passive Copies)

    The DAG Member Layout table identifies the number of Active Mailbox servers (those that are hosting active mailboxes within the primary datacenter), the Disaster Recovery Mailbox Servers (those that host passive database copies in the second datacenter), and any Lagged Copy Mailbox servers you may be deploying.

    There are two tables that provide data around the Active Database configuration, one for the primary datacenter, which outlines the single or double server events, and one for the secondary datacenter, which outlines the activation of that datacenter when the primary datacenter is lost. Both tables provide you with:

    • The Number of Active Databases (Normal Run Time) value defines the number of active databases hosted on each server when there are no server outages. Unlike Exchange 2007, Exchange 2010 is no longer bound by an active/passive high availability model. Instead, each server within a DAG can host active mailbox database copies. The calculator distributes the number of unique databases across the primary datacenter servers within the DAG, ensuring that an equal number of mailbox database copies is activated on each server. This row exposes the total number of mailboxes that are accessible on each server as a result of the activated database copies. In addition, this row highlights the total number of databases deployed within each datacenter, in the event that cross-site database *overs are allowed to occur.
    • The Number of Active Databases (After First Server Failure) value defines the number of active databases hosted on each server when there is a single server outage. As a result of the single server outage, the database copies that were activated on the failed server are equally redistributed across all remaining server nodes. This row exposes the total number of mailboxes that are accessible on each server as a result of the activated database copies. In addition, this row highlights the total number of databases deployed within each datacenter, in the event that cross-site database *overs are allowed to occur.
    • The Number of Active Databases (After Double Server Failure) value is populated when you have at least 3 HA mailbox copies and at least 4 mailbox servers within your design. It defines the number of active databases hosted on each server when there are two server outages. As a result of the double server outage, the database copies that were activated on the failed servers are equally redistributed across all remaining server nodes. This row exposes the total number of mailboxes that are accessible on each server as a result of the activated database copies. In addition, this row highlights the total number of databases deployed within each datacenter, in the event that cross-site database *overs are allowed to occur.

    Distribution

    The calculator includes a new worksheet, Distribution. Within the Distribution worksheet, you will find the layout we recommend based on the database copy layout principles.

    The Distribution worksheet includes several new options to help you with designing and deploying your database copies:

    • You can determine what the active copy distribution will be like as server failures occur within your environment.
    • You can export a set of CSV and PowerShell scripts that perform the following actions:
      • Diskpart.ps1 (uses Servers.csv) - Formats the physical disks, and mounts them as mount points under an anchor directory on each server.
      • CreateMBDatabases.ps1 (uses MailboxDatabases.csv) - Creates the mailbox databases with an activation preference value of 1.
      • CreateMBDatabaseCopies.ps1 (uses MailboxDatabaseCopies.csv) - Creates the mailbox database copies with the appropriate activation preference values across the server infrastructure.

    Important: The database copy layout the tool provides assumes that each server and its associated database copies are isolated from every other server and its copies. It is important to take failure domain aspects into account when planning your database copy layout architecture so that you can avoid multiple copy failures for the same database.

    LUN Design

    The LUN Requirements section is really a continuation of the Storage Requirements section. It outlines what we believe is the appropriate LUN design based on the input factors and the analysis performed in the previous sections.

    Note: The term LUN as used in the calculator refers only to the representation of the disk that is exposed to the host operating system. It does not define the disk configuration.

    The LUN Design highlights the LUN architecture chosen for this server solution. The architecture is derived from the backup type, backup frequency, and high availability architecture that were chosen in the Storage Requirements section.

    There are three types of LUN architecture that can be leveraged within Exchange 2010:

    • 1 LUN / Database
    • 2 LUNs / Database
    • 2 LUNs / Backup Set

    1 LUN / Database

    A single LUN per Database architecture means that both the database and its corresponding log files are placed on the same LUN. In order to deploy a LUN architecture that only utilizes a single LUN per database, you must have a Database Availability Group that has 2 or more copies and not be utilizing a hardware based VSS solution.

    Some of the benefits of this strategy include:

    • Simplified storage administration. Fewer LUNs to manage.
    • Potentially reduce the number of backup jobs.
    • Flexibility to isolate the performance between Databases when not sharing spindles between LUNs.

    Some of the concerns with this strategy include:

    2 LUNs / Database

    With Exchange 2010, in the maximum case of 100 databases, the number of LUNs you provision will depend upon your backup strategy. If your recovery time objective (RTO) is very small, or if you use VSS clones for fast recovery, it may be best to place each database on its own transaction log LUN and database LUN. Because doing this will exceed the number of available drive letters, volume mount points must be used.

    Some of the benefits of this strategy include:

    • Enables hardware-based VSS at a database level, providing single database backup and restore.
    • Flexibility to isolate the performance between databases when not sharing spindles between LUNs.
    • Increased reliability. A capacity or corruption problem on a single LUN will only impact one database. This is an important consideration when you are not leveraging the built-in mailbox resiliency features.

    Some of the concerns with this strategy include:

    • 100 databases require 200 LUNs, which could exceed some storage array maximums.
    • A separate LUN for each database means more LUNs per server, increasing administrative cost and complexity.

    2 LUNs / Backup Set

    A backup set is the number of databases that are fully backed up in a night. A solution that performs a full backup on 1/7th of the databases nightly (i.e. using a weekly or bi-monthly full backup with daily incrementals or differentials) can reduce complexity by placing all of the databases to be backed up on the same log and database LUN. This can reduce the number of LUNs on the server.

    Some of the benefits of this strategy include:

    • Simplified storage administration. Fewer LUNs to manage.
    • Potentially reduce the number of backup jobs.

    Some of the concerns with this strategy include:

    Results Pane

    Based on the above input factors the calculator will recommend the following architecture:

    LUN Design

    The LUN Design table highlights the recommended LUN architecture. 

    LUN Configuration

    The LUN Configuration table highlights the number of databases that should be placed on a single LUN. This is derived from the LUN Architecture model.

    This section also documents how many LUNs will be required for the entire solution, broken out by Database and Log sets, and the number of restore LUNs per server.

    Database Configuration

    The Database Configuration table outlines the number of databases (or copies) per server, the number of mailboxes per database, the size of each database, and the transaction log size required for each database.

    Database and Log LUN Design

    The Database and Log LUN Design table outlines the physical LUN layout and follows the recommended number of databases per LUN based on the LUN Architecture model. It also documents the LUN size required to support the layout (this is where we factor in the additional capacity for content indexing, the LUN Free Space Percentage, and whether you are using a Restore LUN), as well as the transaction log LUN.

    Important: The DB and Log LUN Design table identifies databases by a unique number. However, database copies are distributed across the servers, and thus these numbers hold no significance; they are used solely as an example to show a server's LUN layout.

    Backup Requirements

    The Backup Requirements section is really a continuation of the Role Requirements section. It outlines what we believe is the appropriate backup design based on the input factors and the analysis performed in the previous sections.

    Backup Configuration

    The Backup Configuration table outlines the number of databases that will be placed within a single LUN and the type of backup methodology and frequency in which the backups will occur.

    Backup Frequency Configuration

    The Backup Frequency Configuration section outlines how you should perform the backups for each server, using either a daily full backup or a weekly or bi-monthly full backup frequency.

    Log Replication Requirements

    The Log Replication Requirements section is another continuation of the Role Requirements section. It outlines what we believe is the throughput required to replicate the transaction logs to each target database copy in the secondary datacenter.

    Peak Log and Content Index Replication Throughput Requirements

    The Peak Log and Content Index Replication Throughput Requirements table provides you with:

    • The Peak Log & Content Index Throughput Required / Database is the total throughput required for a single log stream and content index. This value is based on the peak log generation hour.
    • The Peak Log & Content Index Throughput Required Between Datacenters / DAG is the total throughput required to replicate the transaction logs and content index to all database copies (lagged and non-lagged) that exist within the alternate datacenter for the database availability group.
    • The Peak Log & Content Index Throughput Required Between Datacenters / Environment is the total throughput required to replicate the transaction logs and content index to all database copies (lagged and non-lagged) that exist within the alternate datacenter for all database availability groups.

    RPO Log and Content Index Replication Throughput Requirements

    In terms of log replication, RPO defines how far behind you can fall in log shipping. The lower the RPO (a value of 0 or 1 essentially means you are willing to lose only the open log file), the more bandwidth you need, because you cannot fall behind in log replication. The higher the RPO (approaching 24), the less bandwidth you need, because you expect to be behind (by up to x hours) in log replication and to catch up at some point in the day.
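
The trade-off can be sketched with a deliberately crude model: the daily log volume can be spread over 24 hours plus however many hours of lag the RPO permits. All names and numbers below are illustrative, not the calculator's actual formula:

```python
# Hypothetical sketch of the bandwidth/RPO trade-off: a higher RPO widens
# the window over which the same daily log volume can be replicated.
def required_mbps(log_gb_per_day, rpo_hours):
    megabits = log_gb_per_day * 1024 * 8
    hours_available = 24 + rpo_hours  # higher RPO -> wider catch-up window
    return megabits / (hours_available * 3600)

tight = required_mbps(50, rpo_hours=1)   # low RPO: must keep pace, more bandwidth
loose = required_mbps(50, rpo_hours=12)  # high RPO: can fall behind and catch up
```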

    The RPO Log and Content Index Replication Throughput Requirements table provides you with:

    • The RPO Log & Content Index Throughput Required / Database is the required throughput necessary to replicate the transaction logs and content index based on the RPO to the mailbox servers that are located within the secondary datacenter per database.
    • The RPO Log & Content Index Throughput Required Between Datacenters / DAG is the RPO total throughput required to replicate the transaction logs and content index to all database copies (lagged and non-lagged) that exist within the alternate datacenter for the database availability group.
    • The RPO Log & Content Index Throughput Required Between Datacenters / Environment is the RPO total throughput required to replicate the transaction logs and content index to all database copies (lagged and non-lagged) that exist within the alternate datacenter for all database availability groups.

    Chosen Network Link Suitability

    The Chosen Network Link Suitability table indicates whether the chosen network link has sufficient capacity to sustain the peak replication throughput requirements and/or the RPO replication throughput requirements. If the network link cannot sustain the log replication traffic, you will need to either upgrade the network link to the recommended throughput or adjust the design appropriately.

    Recommended Network Link

    The Recommended Network Link table recommends an appropriate network link if the chosen network link does not have sufficient capacity to sustain log replication for the solution for both the peak and RPO throughput requirements.

    Note: The Network Link recommendations do not take into account database seeding or any other data that may also utilize the link.

    Storage Design

    The Storage Design worksheet is designed to take the data collected from the Input worksheet and Storage Requirements worksheet and help you determine the number of physical disks needed to support the databases, transaction logs, and Restore LUN configurations.

    Storage Design Input Factors

    In order to determine the physical disk requirements, you must enter some basic information about your storage solution.

    RAID Parity Configuration

    For the Database/Log RAID Parity Configuration table you need to select the type of RAID building block your storage solution utilizes. For example, some storage vendors build the underlying storage in sets of data+parity (d+p) groups. A RAID-5 3+1 configuration means that 3 disks will be used for capacity and 1 disk will be used for parity, even though parity is distributed across all the disks. So if you had a capacity requirement that would utilize 15 disks, then you would need to deploy 5 3+1 groups to build that RAID-5 array.

    • RAID-1/0 supports 1d+1p, 2d+2p, and 4d+4p groupings
    • RAID-5 supports 3d+1p through 20d+1p groupings (though storage solutions could support more than that).
    • RAID-6 supports 6d+2p groupings.
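
The d+p building-block math from the RAID-5 3+1 example above can be sketched as follows (illustrative code, not the calculator's internals):

```python
# A 15-disk capacity requirement built from 3+1 groups needs
# ceil(15 / 3) = 5 groups, i.e. 20 physical disks in total.
import math

def raid_building_blocks(capacity_disks, data_disks, parity_disks):
    groups = math.ceil(capacity_disks / data_disks)
    total_disks = groups * (data_disks + parity_disks)
    return groups, total_disks

groups, total_disks = raid_building_blocks(15, data_disks=3, parity_disks=1)
# groups == 5, total_disks == 20
```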

    Database/Log RAID Rebuild Overhead

    When a disk is lost, it must be replaced and rebuilt. During this time, the performance of the RAID group is degraded, which in turn can affect user actions. Therefore, to ensure that RAID rebuilds do not affect the overall performance of the mailbox server, Microsoft recommends provisioning sufficient overhead into the performance calculations when designing for RAID parity. Most RAID-1/0 implementations will suffer a 25% performance penalty during a rebuild. Most RAID-5 and RAID-6 implementations will suffer a 50% performance penalty during a rebuild.

    The calculator defaults to the following Microsoft recommendations, but they are adjustable:

    • For RAID-1/0 implementations, ensure that you factor in an additional 35% performance overhead.
    • For RAID-5/RAID-6 implementations, ensure that you factor in an additional 100% performance overhead.

    In addition, you should consult with your storage vendor to determine the appropriate RAID rebuild penalty.
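
As a quick illustration of how the default overheads above feed into an IOPS target (a hypothetical sketch; the percentages mirror the calculator defaults, everything else is illustrative):

```python
# Percentage of extra IOPS to provision so a RAID rebuild does not
# degrade the user experience (calculator defaults).
REBUILD_OVERHEAD_PCT = {"RAID-1/0": 35, "RAID-5": 100, "RAID-6": 100}

def provisioned_iops(required_iops, raid_type):
    pct = REBUILD_OVERHEAD_PCT[raid_type]
    return required_iops * (100 + pct) // 100

raid10 = provisioned_iops(1000, "RAID-1/0")  # 1350
raid5 = provisioned_iops(1000, "RAID-5")     # 2000
```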

    Database RAID Configuration

    By default, for RAID storage solutions, the calculator will recommend either RAID-1/0 or RAID-5 by evaluating capacity and I/O factors and determining which configuration uses the fewest disks while satisfying the requirements. If you would like to override this and force the calculator to use a particular RAID configuration for your databases (e.g., RAID-0 or RAID-6), select "Yes" for this option and then select the appropriate RAID configuration in the cell labeled "Desired RAID Configuration." Note that while you can override the database RAID configuration, you cannot do so for the log RAID configuration; that will always be RAID-1/0.

    Note: The calculator prevents the use of RAID-5 or RAID-6 with 5.2K, 5.4K, 5.9K and 7.2K disk types, due to performance implications.

    Restore LUN RAID Configuration

    You can select the type of parity you will be utilizing and the RAID configuration you will be deploying for your Restore LUN.

    Results Pane

    The Storage Design Results section outputs the recommended configuration for the solution, with recommendations for implementing it on RAID and/or JBOD storage.

    RAID Storage Architecture

    The RAID Storage Architecture Table outlines which servers (primary datacenter servers, secondary datacenter servers, or lagged copy servers) should be deployed on RAID storage.

    The RAID Storage Architecture / Server table recommends the optimum RAID configuration and number of disks for each LUN (database, log and restore LUN) for each mailbox server ensuring that performance and capacity requirements are met within the design.

    JBOD Storage Architecture

    The JBOD Storage Architecture Table outlines which servers (primary datacenter servers, secondary datacenter servers, or lagged copy servers) could be deployed on JBOD storage.

    The JBOD Storage Architecture / Server table recommends the optimum JBOD configuration and number of disks for each LUN (database, log and restore LUN) for each mailbox server ensuring that performance and capacity requirements are met within the design.

    Total Disks Required

    By default, the calculator will determine the storage architecture that reduces the total number of disks required to support the design, while still minimizing single points of failure by utilizing RAID and/or JBOD based on the decisions found in the "RAID Storage Architecture" and "JBOD Storage Architecture" tables. However, you can change the storage architecture to be built entirely on RAID or entirely on JBOD by selecting the appropriate value in the "Storage Architecture will be Deployed:" drop-down. This requires that the design supports JBOD as a possible solution; also keep in mind that certain scenarios (e.g., a single database copy in a datacenter) may result in a single point of failure.

    The Storage Configuration table will output the total number of disks required for each mailbox server that requires RAID or JBOD storage, as well as identify the total number of disks requiring RAID or JBOD storage in each datacenter.

    Conclusion

    Hopefully you will find this calculator invaluable in determining your Exchange 2010 mailbox server role requirements. If you have any questions or suggestions, please email strgcalc AT microsoft DOT com.

    For the calculator itself, please see the following link:

    Exchange 2010 Server Role Requirements Calculator

    Ross Smith IV

  • Released: Exchange Server 2013 Cumulative Update 3

    The Exchange team is announcing today the availability of our most recent quarterly servicing update to Exchange Server 2013. Cumulative Update 3 for Exchange Server 2013 and updated UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 3 includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved in Exchange Server 2013 Cumulative Update 3 can be found in Knowledge Base Article KB 2892464.

    Note: Some article links may not be available at the time of this post's publication. Updated Exchange 2013 documentation, including Release Notes, will be available on TechNet soon.

    We would like to call attention to an important fix in Exchange Server 2013 Cumulative Update 3 which impacts customers who rely upon Backup and Recovery mechanisms to protect Exchange data. Cumulative Update 3 includes a fix for an issue which may randomly prevent a backup dataset taken from Exchange Server 2013 from restoring correctly. Customers who rely on Backup and Recovery in their day-to-day operations are encouraged to deploy Cumulative Update 3 and initiate backups of their data to ensure that data contained in backups may be restored correctly. More information on this fix is available in KB 2888315.

    In addition to the customer-reported fixes in Cumulative Update 3, the following new enhancements and improvements to existing functionality have also been added for Exchange Server 2013 customers:

    More information on these topics can be found in What’s New in Exchange Server 2013, Release Notes and Exchange 2013 documentation on TechNet.

    Before you deploy Exchange 2013 CU3...

    Here are some things to consider before you deploy Exchange 2013 CU3.

    • Active Directory schema and configuration update: Exchange 2013 CU3 includes Exchange related updates to the Active Directory schema and configuration. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.
    • PowerShell Execution Policy: To prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to Unrestricted on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB 981474 to adjust the settings.
    • Hybrid deployments and EOA: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to maintain currency on Cumulative Update releases.

    Our next update for Exchange 2013, Cumulative Update 4, will be released as Exchange 2013 Service Pack 1. Customers who are accustomed to deploying Cumulative Updates should consider Exchange 2013 SP1 to be equivalent to CU4 and deploy as normal.

    The Exchange Team

  • Part 1: Reverse Proxy for Exchange Server 2013 using IIS ARR

    For a long time, Forefront TMG (and ISA before it) has been the go-to Microsoft reverse proxy solution for many applications, including Exchange Server. However, with no further development roadmap for TMG 2010, many customers are looking for an alternative solution that works well with Exchange Server 2013.

    The Windows team has added a component called Application Request Routing (ARR, or as Greg the pirate says, ARR!) 2.5 to the Internet Information Services (IIS) role, which enables IIS to handle reverse proxy requests. By using the URL Rewrite Module and Application Request Routing, you can implement complex and flexible load balancing and reverse proxy configurations.

    There are two options when implementing this solution, and each has its pros and cons, which I'll cover in three posts. In this first post, we'll take a look at:

    1. Installation steps.
    2. Option 1 of implementing ARR as a reverse proxy solution for Exchange 2013 (this option is the simpler of the two configurations).

    In the next two posts in the series, we'll cover the second option and some troubleshooting steps. The troubleshooting steps will also help you verify that you have implemented the reverse proxy solution correctly.

    Here's a diagram of the environment we'll use when discussing how to implement ARR.

    Arr1

    Prerequisites

    1. The IIS ARR server need not be domain-joined; whether to join it to the domain is your choice.
    2. The IIS ARR server should have two NICs, one for the internal network and the other for the external network.

      TIP To make sure you're configuring and using the right network interface, rename the NICs to Internal and External.

    3. If you're not using an internal DNS server, you should update the HOSTS file on the IIS ARR server so that it can perform name resolution for the internal CAS and the published Exchange namespaces.
    4. Make sure you have already set the Internal and External URLs for Outlook Anywhere, OWA, EWS and EAS, have your certificates installed correctly, and that this is all working as expected. If not, get it working first before you start adding ARR into the mix.

    Installing ARR

    Requirements: IIS ARR is supported on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012. It is also supported on Windows Vista, Windows 7, and Windows 8 with the Web services features installed. Note that IIS ARR does not require IIS 6.0 compatibility mode.

    Note: As with all such changes, we recommend that you test this in a non-production environment before deploying in a production environment.

    To install IIS with the ARR module on the server identified as the reverse proxy:

    1. Install IIS, including .NET 3.5.1 and Tracing. You can run this command in PowerShell to add all of the required features.

      Import-Module ServerManager
      Add-WindowsFeature Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-Net-Ext,Web-Http-Logging,Web-Request-Monitor,Web-Http-Tracing,Web-Filtering,Web-Stat-Compression,Web-Mgmt-Console,NET-Framework-Core,NET-Win-CFAC,NET-Non-HTTP-Activ,NET-HTTP-Activation,RSAT-Web-Server

    2. Export the Exchange certificate (from a CAS) and import the certificate to the local machine certificate store on the IIS Reverse Proxy, together with any required root or intermediate certificates. See the following topics on how to export & import certificates:
      1. Export an Exchange Certificate
      2. Import a Server Certificate (IIS 7)
    3. On the Default Web Site, add an HTTPS binding and associate the (imported) Exchange certificate.

      ARR2

    4. Download and Install the latest version: IIS ARR 2.5.

      If you don’t have internet access on the IIS ARR server, you can use the steps highlighted in How to install Application Request Routing (ARR) 2.5 without Web Platform Installer (WebPI).

    OPTION 1

    This is the simplest way of implementing IIS ARR as a reverse proxy solution for Exchange Server 2013. This implementation requires a minimum number of SAN entries in your certificate and a minimum number of DNS entries.

    This setup assumes that all protocols (OWA, ECP, EWS, etc.) have been published with the mail.tailspintoys.com namespace.

    • Certificate: mail.tailspintoys.com, autodiscover.tailspintoys.com
    • DNS: Public IP address for each of the above namespaces

    Step 1: Create a Server Farm

    1. Open IIS Manager and click on Server Farms.
    2. Create a new farm and give it a name as shown below.

      ARR3

    3. On the Add Server page, add each of the Client Access servers and click Finish.

      ARR4

    4. Select Yes at the prompt shown below.

      ARR5

    Step 2: Server Farm Configuration Changes

    On the Server Farm settings node, make the configuration changes detailed below:

    1. Select Caching and choose Disable Disk Cache.
    2. Select Health Test. This is used to make sure that a particular application is up and running, similar to a load balancer's service availability test.

      In Exchange 2013 there is a new component called Managed Availability, which uses various checks to make sure that each of the protocols (OA, OWA, EWS, etc.) is up and running. If any protocol fails a check, an appropriate action is automatically taken. (This is, of course, a very simplified explanation of what Managed Availability is; for a more detailed understanding, watch Ross Smith IV's TechEd 2013 session.) We are going to leverage one of these checks to make sure that the service/protocol is available.

      https://<fqdn>/<protocol>/HealthCheck.htm is a default web page present in Exchange 2013. These URLs are specific to each protocol and do not have to be created by the administrator.

      Examples:

      https://autodiscover.tailspintoys.com/Autodiscover/HealthCheck.htm

      https://mail.tailspintoys.com/EWS/HealthCheck.htm

      https://mail.tailspintoys.com/OAB/HealthCheck.htm

      Configure the Health Test with the following settings:

      URL: https://mail.tailspintoys.com/OWA/HealthCheck.htm

      Interval: 5 seconds

      Time-Out: 30 seconds

      Acceptable Status Code: 200

      ARR6

    3. Select Load Balance and choose Least Current Request. There are other options, but for this scenario, we find this to be simple and effective.

      ARR7

    4. Select Monitoring and Management. This shows the current state of the CAS that are part of this Server Farm. The Health Status is based on the output of the Health Test mentioned above.

      ARR8

    5. Select Proxy. Change the two values below. The actual values for these settings may need to be tweaked for your deployment, but these usually work well as a starting point.

      Time-Out: 200 seconds

      Response Buffer threshold: 0

    6. Select Routing Rules and uncheck Enable SSL Offloading as it is not supported in Exchange 2013.
    7. Select Server Affinity. Due to major architectural changes in the way CAS works in Exchange 2013, we do not need to maintain session affinity; as long as you can get to a CAS server, you will be able to access your mailbox. Leave this setting as is; no changes are required.
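
The Health Test settings above boil down to a simple decision that ARR applies to each farm member. A hypothetical sketch (names and thresholds mirror the example settings; everything else is illustrative):

```python
# A Client Access server stays in the farm rotation only while the probe
# of .../HealthCheck.htm returns the acceptable status code in time.
ACCEPTABLE_STATUS = 200
TIMEOUT_SECONDS = 30

def healthcheck_url(namespace, protocol):
    return "https://{}/{}/HealthCheck.htm".format(namespace, protocol)

def is_healthy(status_code, response_seconds):
    return status_code == ACCEPTABLE_STATUS and response_seconds <= TIMEOUT_SECONDS

url = healthcheck_url("mail.tailspintoys.com", "OWA")
healthy = is_healthy(200, 0.2)  # stays in rotation
down = is_healthy(503, 0.1)     # Managed Availability failed the protocol probe
```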

    Step 3: Create URL Rewrite Rules

    1. At the IIS root (this is the root, not the properties of the Default Web Site), click on URL Rewrite.

      ARR9

    2. You should see two URL Rewrite rules already created (these were created when you selected “Yes” at the end of Server Farm creation).
    3. Delete the one for HTTP.

      ARR10

    4. Open the properties of the HTTPS rule and make the changes below:
      1. Under Conditions add a condition for {HTTP_HOST} and make sure it looks like this:

        ARR11

      2. Under Action, make sure that the options below are set, i.e., choose the appropriate Server Farm from the drop-down menu.

        ARR12

        Note: Make sure the option “Stop processing of subsequent rules” is selected. This ensures that rule evaluation stops once the requested URL finds a match.

      3. Repeat the same steps of creating a Server Farm and URL Rewrite rule for your Autodiscover URL (i.e., autodiscover.tailspintoys.com). The final result is shown below.

        ARR13

    That’s it! You are now all set and have a reverse-proxy-with-load-balancing solution for your Exchange 2013 environment!

    Give it a try and see how it works. Make sure DNS for mail.tailspintoys.com resolves to your reverse proxy and try connecting a client. And if it doesn’t work, go back through the steps and see where you went wrong. And if it still doesn’t work, post a comment here, or wait for Part 3, Troubleshooting (so please don’t do all this for the first time in a production environment! Really, we mean it!).

    Finally, here are a couple of additional changes we recommend you review and optionally consider making to your IIS ARR configuration.

    1. Implement the changes (Step3 and Step4) from Install Application Request Routing Version 2.
    2. To optimize RPC-HTTP traffic, make the following change: click on the root of IIS and open the properties for Request Filtering. Then click on “Edit Feature Settings” and change the “Maximum allowed content length” setting to the value shown below.

      ARR14

    We've spent time testing this configuration and found it to work as we hoped and expected. Note that support for IIS ARR is provided by the Windows/IIS team, not Exchange. That's no different than support for TMG or UAG (if you use either of these products to publish Exchange).

    We would really appreciate any feedback on your implementation and/or any configuration where this doesn’t seem to work.

    Keep your eyes peeled for the next set of articles where we’ll talk about slightly complex and interesting implementations of IIS ARR for Exchange 2013.

    I would like to thank Greg Taylor (Principal PM Lead) for his help in reviewing this article.

    Part 2 | Part 3


    B. Roop Sankar
    Premier Field Engineer, UK

  • Exchange 2013 Client Access Server Role

    In a previous article, I discussed the new server role architecture in Exchange 2013. This article continues the series by discussing the Client Access server role.

    While this Exchange server role shares the same name as a server role that existed in the last two Exchange Server releases, it is markedly different. In Exchange 2007, the Client Access server role provided authentication, proxy/redirection logic, and performed data rendering for the Internet protocol clients (Outlook Web App, EAS, EWS, IMAP and POP). In Exchange 2010, data rendering for MAPI was also moved to the Client Access server role.

    In Exchange 2013, the Client Access server (CAS) role no longer performs any data rendering. The Client Access server role now provides only authentication and proxy/redirection logic, supporting the client Internet protocols, transport, and Unified Messaging. As a result of this architectural shift, the CAS role is stateless from a protocol session perspective (log data that can be used for troubleshooting or trend analysis is still generated, naturally).

    Session Affinity

    As I alluded to in the server role architecture blog post, Exchange 2013 no longer requires session affinity at the load balancer. To understand this better, we need to look at how CAS2013 functions. From a protocol perspective, the following happens:

    1. A client resolves the namespace to a load balanced virtual IP address.
    2. The load balancer assigns the session to a CAS member in the load balanced pool.
    3. CAS authenticates the request and performs a service discovery by accessing Active Directory to retrieve the following information:
      1. Mailbox version (for this discussion, we will assume an Exchange 2013 mailbox)
      2. Mailbox location information (e.g., database information, ExternalURL values, etc.)
    4. CAS makes a decision on whether to proxy the request or redirect the request to another CAS infrastructure (within the same forest).
    5. CAS queries an Active Manager instance that is responsible for the database to determine which Mailbox server is hosting the active copy.
    6. CAS proxies the request to the Mailbox server hosting the active copy.

    The protocol used in step 6 depends on the protocol used to connect to CAS. If the client leverages the HTTP protocol, then the protocol used between the Client Access server and Mailbox server is HTTP (secured via SSL using a self-signed certificate). If the protocol leveraged by the client is IMAP or POP, then the protocol used between the Client Access server and Mailbox server is IMAP or POP.

    Telephony requests are unique, however. Instead of proxying the request at step 6, CAS will redirect the request to the Mailbox server hosting the active copy of the user’s database, as the telephony devices support redirection and need to establish their SIP and RTP sessions directly with the Unified Messaging components on the Mailbox server.

    CAS Protocol Arch
    Figure 1: Exchange 2013 Client Access Protocol Architecture

    Beyond the removal of data rendering, step 5 is the fundamental change that enables the removal of session affinity at the load balancer. For a given protocol session, CAS now maintains a 1:1 relationship with the Mailbox server hosting the user’s data. In the event that the active database copy is moved to a different Mailbox server, CAS closes the sessions to the previous server and establishes sessions to the new server. This means that all sessions, regardless of their origination point (i.e., CAS members in the load balanced array), end up at the same place: the Mailbox server hosting the active database copy.

    Now many of you may be thinking, wait, how does authentication work? Well, for HTTP, POP, or IMAP requests that use basic, NTLM, or Kerberos authentication, the authentication request is passed as part of the payload, so each CAS will authenticate the request naturally. Forms-based authentication (FBA) is different. FBA was one of the reasons why session affinity was required for OWA in previous releases of Exchange: the cookie used a per-server key for encryption, so if another CAS received a request, it could not decrypt the session. In Exchange 2013, we no longer leverage a per-server session key; instead we leverage the private key from the certificate that is installed on the CAS. As long as all members of the CAS array share the exact same certificate (remember, we actually recommend deploying the same certificate across all CAS in both datacenters in site resilience scenarios as well), they can decrypt the cookie.
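
The shared-key idea can be sketched abstractly. This is purely illustrative: it uses HMAC signing rather than the certificate-based encryption Exchange actually performs, and all names are made up, but it shows why a token minted on one CAS validates on any other CAS that holds the same key:

```python
# With a per-server key (Exchange 2010 FBA), only the issuing server can
# validate the cookie; with a shared key (same certificate on every CAS),
# any farm member can.
import hashlib
import hmac

def mint(cookie, key):
    return hmac.new(key, cookie, hashlib.sha256).digest()

def validate(cookie, tag, key):
    return hmac.compare_digest(mint(cookie, key), tag)

shared_key = b"same-cert-private-key-on-every-cas"  # illustrative value
tag = mint(b"session", shared_key)                  # issued by CAS-1

accepted_elsewhere = validate(b"session", tag, shared_key)       # any CAS: OK
per_server = validate(b"session", tag, b"different-server-key")  # 2010-style: fails
```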

    Proxy vs. Redirection

    In the previous section, I spoke about CAS proxying the data to the Mailbox server hosting the active database copy. Prior to that, CAS has to decide whether it will proxy the request or redirect it. CAS will only perform a redirection under the following circumstances:

    1. The origination request is telephony related.
    2. For Outlook Web App requests, if the mailbox’s location is determined to be in another Active Directory site and there are CAS2013 members in that site that have the ExternalURL populated, then the originating CAS will redirect the request, unless the ExternalURL in the target site is the same as in the originating site, in which case CAS will proxy (this is the multiple site single namespace scenario).

      Proxy-Referral
      Figure 2: Exchange 2013 Client Access Proxy and Redirection Behavior Examples

    3. For OWA requests, if the mailbox version is Exchange 2007, then CAS2013 will redirect the request to CAS2007.
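
The decision rules above can be condensed into a small sketch. This is a hypothetical illustration (the real logic is internal to CAS2013, and every name here is made up):

```python
# Proxy-vs-redirect decision: telephony and the two OWA cases redirect;
# everything else is proxied to the Mailbox server.
from typing import Optional

def cas_action(telephony, owa, mailbox_version,
               target_site_external_url: Optional[str],
               local_external_url: Optional[str]):
    if telephony:
        return "redirect"  # SIP/RTP must go direct to the Mailbox server
    if owa and mailbox_version == 2007:
        return "redirect"  # hand the request off to CAS2007
    if (owa and target_site_external_url
            and target_site_external_url != local_external_url):
        return "redirect"  # OWA mailbox in another AD site with its own URL
    return "proxy"         # including the multiple-site single-namespace case

same_namespace = cas_action(False, True, 2013,
                            "https://mail.tailspintoys.com/owa",
                            "https://mail.tailspintoys.com/owa")  # "proxy"
```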

    Outlook Connectivity

    For those of you paying attention, you may have noticed I only spoke about HTTP, POP, and IMAP. I didn’t mention RPC/TCP as a connectivity solution that CAS supports. And that is for a very specific reason: CAS2013 does not support RPC/TCP as a connectivity solution; it only supports RPC/HTTP (aka Outlook Anywhere). This architecture change is primarily to drive a stable and reliable connectivity model.

    To understand why, you need to keep the following tenets in the back of your mind:

    1. Remember CAS2013 is an authentication and proxy/redirection server. It does no processing of the data (no rendering or transformation). It simply proxies the request to MBX2013 using the client protocol (in this case, HTTP).
    2. CAS2013 and MBX2013 are not tied together from a user affinity or geographical perspective. You can have CAS2013 in one datacenter authenticate the request and proxy it to an MBX2013 server in another datacenter. To enable this, we had to change the communication protocols used between server roles, moving away from RPC to client protocols that are more tolerant of the throughput and latency characteristics of WAN/Internet connections.
    3. For a given mailbox, the protocol that services the request is always the protocol instance on the Mailbox server that hosts the active copy of the database for the user’s mailbox. This was done to ultimately uncouple the versioning and functionality issues we’ve seen in the past two generations (i.e., you had to deploy CAS2010, HT2010, and MBX2010 together to get certain functionality, and upgrading one didn’t necessarily give you new capabilities and could break connectivity).

    The last item is tied to this discussion of why we have moved away from RPC/TCP as a connectivity solution. In all prior releases the RPC endpoint was an FQDN. In fact, the shift to the middle tier for RPC processing in CAS2010 introduced a new shared namespace, the RPC Client Access namespace. Moving RPC Client Access back to the MBX2013 role would have forced us to use either the MBX2013 FQDN for the RPC endpoint (thus forcing an Outlook client restart for every database *over event) or a shared namespace for the DAG.

    Neither option is appropriate; both add complexity and increase the support burden of the infrastructure. So instead, we changed the model. We no longer use an FQDN for the RPC endpoint. Instead we now use a GUID: the mailbox GUID, to be precise (along with a domain suffix to support multi-tenant scenarios). The mailbox GUID is unique within the (tenant) organization, so regardless of where the database is activated and mounted, CAS can discover the location and proxy the request to the correct MBX2013 server.
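
    You can view the GUID that will be used for a given user by looking at the mailbox’s ExchangeGuid property (the mailbox alias below is hypothetical):

    Get-Mailbox kim | Format-List ExchangeGuid

    The resulting RPC endpoint takes the form <mailbox GUID>@<domain suffix>, e.g., the GUID value above followed by @contoso.com.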

    Figure 3: RPC Endpoint Changes

    This architectural change means that we have a very reliable connection model: for a given session that is routed to CAS2013, CAS2013 will always have a 1:1 relationship with the MBX2013 server hosting the user’s mailbox. This means that the Mailbox server hosting the active copy of the user’s database is the server responsible for de-encapsulating the RPC stream from the HTTP packets. In the event a *over occurs, CAS2013 will proxy the connection to the MBX2013 server that assumes responsibility for hosting the active database copy. Oh, and this means in a native Exchange 2013 environment, Outlook won’t require a restart for things like mailbox moves, *over events, etc.

    The other architectural change we made in this area is support for separate internal and external namespaces for Outlook Anywhere. Thanks to this change in MAPI connectivity, you may not need to deploy split-brain DNS or force all Outlook clients through your external firewall or load balancer.
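
    As an example, the following command configures separate internal and external Outlook Anywhere hostnames on a CAS2013 server (the server and host names are placeholders for illustration):

    Get-OutlookAnywhere -Server CAS2013 | Set-OutlookAnywhere -InternalHostname mail.corp.contoso.com -InternalClientsRequireSsl $false -ExternalHostname mail.contoso.com -ExternalClientsRequireSsl $true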

    Third-Party MAPI Products

    I am sure that a few of you are wondering what this change means for third-party MAPI products. The answer is relatively simple: these third-party solutions will need to leverage RPC/HTTP to connect to CAS2013. This will be accomplished via a new MAPI/CDO download that has been updated to include support for RPC/HTTP connectivity. It will be released in the first quarter of calendar year 2013. To leverage this updated functionality, the third-party vendor will either have to programmatically edit the dynamic MAPI profile or set registry key values to enable RPC/HTTP support.

    I do also want to stress one key item with respect to third-party MAPI support. Exchange 2013 is the last release that will support a MAPI/CDO custom solution. In the future, third-party products (and custom in-house developed solutions) will need to move to Exchange Web Services (EWS) to access Exchange data.

    Namespace Simplification

    Another benefit with the Exchange 2013 architecture is that the namespace model can be simplified (especially for those of you upgrading from Exchange 2010). In Exchange 2010, a customer that wanted to deploy a site-resilient solution for two datacenters required the following namespaces:

    1. Primary datacenter Internet protocol namespace
    2. Secondary datacenter Internet protocol namespace
    3. Primary datacenter Outlook Web App failback namespace
    4. Secondary datacenter Outlook Web App failback namespace
    5. Primary datacenter RPC Client Access namespace
    6. Secondary datacenter RPC Client Access namespace
    7. Autodiscover namespace
    8. Legacy namespace
    9. Transport namespace (if doing ad-hoc encryption or partner-to-partner encryption)

    As I previously mentioned, we have removed two of these namespaces in Exchange 2013 – the RPC Client Access namespaces.

    Recall that CAS2013 proxies requests to the Mailbox server hosting the active database copy. This proxy logic is not limited to the Active Directory site boundary. A CAS2013 in one Active Directory site can proxy a session to a Mailbox server that is located in another Active Directory site. If network utilization, latency, and throughput are not a concern, this means that we do not need the additional namespaces for site resilience scenarios, thereby eliminating three other namespaces (secondary Internet protocol and both Outlook Web App failback namespaces).

    For example, let’s say I have a two-datacenter deployment in North America with a network configuration such that latency, throughput, and utilization between the datacenters are not a concern. I also want to simplify my namespace architecture with the Exchange 2013 deployment so that my users only have to use a single namespace for Internet access, regardless of where their mailbox is located. If I deploy an architecture like the one below, then the CAS infrastructure in both datacenters can be used to route and proxy traffic to the Mailbox servers hosting the active copies. Since I am not concerned about network traffic, I configure DNS to round-robin between the VIPs of the load balancers in each datacenter. The end result is a site resilient namespace architecture, with the acceptance that half of my proxy traffic will be out-of-site.

    Figure 4: Exchange 2013 Single Namespace Example

    Transport

    Early on I mentioned that the Client Access Server role can proxy SMTP sessions. This is handled by a new component on the CAS2013 role, the Front-End Transport service. The Front-End Transport service handles all inbound and outbound external SMTP traffic for the Exchange organization, and can also be a client endpoint for SMTP traffic. The Front-End Transport service functions as a layer 7 proxy and has full access to the protocol conversation. Like the client Internet protocols, the Front-End Transport service does not have a message queue and is completely stateless. In addition, the Front-End Transport service does not perform message bifurcation.

    The Front-End Transport service listens on TCP ports 25, 587, and 717, as seen in the following diagram:

    Figure 5: Front-End Transport Service Architecture

    The Front-End Transport service provides network protection – a centralized, load balanced egress/ingress point for the Exchange organization, whether it be POP/IMAP clients, SharePoint, other third-party or custom in-house mail applications, or external SMTP systems.

    For outgoing messages, the Front-End Transport service is used as a proxy when the Send Connectors (that are located on the Mailbox server) have the FrontEndProxyEnabled property set. In this situation, the message will appear to have originated from CAS2013.
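
    For example, to route outbound messages from a hypothetical Send Connector named "Internet" through the Front-End Transport service:

    Set-SendConnector "Internet" -FrontEndProxyEnabled $true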

    For incoming messages, the Front-End Transport service must quickly find a single, healthy Transport service on a Mailbox server to receive the message transmission, regardless of the number or type of recipients:

    • For messages with a single mailbox recipient, select a Mailbox server in the target delivery group, and give preference to the Mailbox server based on the proximity of the Active Directory site.
    • For messages with multiple mailbox recipients, use the first 20 recipients to select a Mailbox server in the closest delivery group, based on the proximity of the Active Directory site.
    • If the message has no mailbox recipients, select a random Mailbox server in the local Active Directory site.

    Conclusion

    The Exchange 2013 Client Access Server role simplifies the network layer. Session affinity at the load balancer is no longer required as CAS2013 handles the affinity aspects. CAS2013 introduces more deployment flexibility by allowing you to simplify your namespace architecture, potentially consolidating to a single worldwide or regional namespace for your Internet protocols. The new architecture also simplifies the upgrade and interoperability story as CAS2013 can proxy or redirect to multiple versions of Exchange, whether they are a higher or lower version, allowing you to upgrade your Mailbox servers at your own pace.

    Ross Smith IV
    Principal Program Manager
    Exchange Customer Experience

  • Released: Exchange Server 2013 Cumulative Update 5

    The Exchange team is announcing today the availability of our most recent quarterly servicing update to Exchange Server 2013. Cumulative Update 5 for Exchange Server 2013 and updated UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 5 represents the continuation of our Exchange Server 2013 servicing and builds upon Exchange Server 2013 Service Pack 1. The release includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved in Exchange Server 2013 Cumulative Update 5 can be found in Knowledge Base Article KB2936880. Customers running any previous release of Exchange Server 2013 can move directly to Cumulative Update 5 today. Customers deploying Exchange Server 2013 for the first time may skip previous releases and start their deployment with Cumulative Update 5 as well.

    Note: Some article links may not be available at the time of this post's publication. Updated Exchange 2013 documentation, including Release Notes, will be available on TechNet soon.

    We would like to call your attention to a couple of items in particular about the Cumulative Update 5 release:

    • Based upon customer feedback, we have introduced improvements to OAB management for distributed environments. You can read more about this in a post by Ross Smith IV on the Exchange Team blog. Customers who have deployed Multiple OAB Generation Mailboxes are advised to read this post to help avoid unnecessary OAB downloads.
    • Cumulative Update 5 includes a Managed Availability probe configuration that may frequently restart the Microsoft Exchange Shared Cache Service in some environments. The service is included to provide future performance improvements and is not used in Cumulative Update 5. More information is available in KB2971467.

    For the latest information and product announcements please read What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

    Cumulative Update 5 includes Exchange related updates to Active Directory schema and configuration. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation. Also, to prevent installation issues, ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policy is NOT set to Unrestricted, use the resolution steps in KB981474 to adjust the settings.
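
    For example, to verify and, if necessary, adjust the policy before running Setup:

    Get-ExecutionPolicy
    Set-ExecutionPolicy Unrestricted

    Note that if the policy is enforced through Group Policy, Set-ExecutionPolicy alone will not change it; in that case, use the resolution steps in KB981474.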

    Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU5) or the prior (e.g., CU4) Cumulative Update release.

    The Exchange Team

  • Removing specific messages from your Exchange Server

    Every so often, an Exchange administrator faces a situation where messages that fit specific criteria need to be removed from a large number of mailboxes or from Exchange transport queues. The need may arise due to some sort of mass mailing, a message sent accidentally to a large distribution group or individual recipients, or as one of the steps in cleanup efforts after a mass-mailing virus outbreak (although the latter have become increasingly rare and are generally taken care of by Exchange-aware antivirus scanners).

    The steps for accomplishing this are documented in various places in Exchange documentation, but it can be difficult to refer to multiple sources if you have a mixed environment containing several versions of Exchange Server. We wanted to provide a single place with somewhat generic instructions on how to accomplish these tasks across all currently supported versions of Exchange Server - Exchange 2010, Exchange 2007, and Exchange 2003.

    Removing messages from mailboxes

    Removing messages using the Shell in Exchange 2010 RTM and Exchange 2007

    In Exchange 2010 RTM and Exchange 2007, you can use the Export-Mailbox cmdlet to export or delete messages. In Exchange 2010 SP1, the functionality to export a mailbox is provided by the New-MailboxExportRequest cmdlet and is covered in a separate article. The functionality to search and delete messages is provided by the Search-Mailbox cmdlet.
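
    For reference, a rough Exchange 2010 SP1 equivalent of the Export-Mailbox examples in this post would use Search-Mailbox (the mailbox and folder names here are placeholders):

    Get-Mailbox -ResultSize Unlimited | Search-Mailbox -SearchQuery 'Subject:"Friday Party"' -TargetMailbox MyBackupMailbox -TargetFolder DeleteMsgs -DeleteContent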

    Permissions

    In Exchange 2010, the Mailbox Import Export RBAC role must be assigned to the account used to perform this operation (using Export-Mailbox in Exchange 2010 RTM or Search-Mailbox in Exchange 2010 SP1). If the role isn't assigned, you'll be unable to run or even "see" the cmdlet.
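
    This example assigns the role to a hypothetical MyAdmin account (in practice you may prefer to assign it to a role group):

    New-ManagementRoleAssignment -Role "Mailbox Import Export" -User MyAdmin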

    The versatile Export-Mailbox cmdlet can export mailbox content based on specific folder names, date and time range, attachment file names, and many other filters. A narrow search will go a long way in preventing accidental deletion of legitimate mail. For more details, syntax, and parameter descriptions, see the following topics:

    The account used to export the data must be an Exchange Server Administrator, a member of the local Administrators group of the target server, and have Full Access mailbox permission assigned on the source and target mailboxes. The target mailbox you specify must already be created; the target folder you specify is created in the target mailbox when the command runs.

    Adding and removing the necessary permissions

    This example retrieves all mailboxes from an Exchange organization and assigns the Full Access mailbox permission to the MyAdmin account. You must run this before exporting or deleting messages from user mailboxes. Note, if you need to export or delete messages only from a few mailboxes, you can use the Get-Mailbox cmdlet with appropriate filters, or specify each source mailbox.

    Get-Mailbox -ResultSize unlimited | Add-MailboxPermission -User MyAdmin -AccessRights FullAccess -InheritanceType all

    After exporting or deleting messages from mailboxes, you can remove the Full Access mailbox permission, as shown in this example:

    Get-Mailbox -ResultSize unlimited | Remove-MailboxPermission -User MyAdmin -AccessRights FullAccess -InheritanceType all

    Removing messages

    Here are a few examples that remove messages.

    This example removes all messages with the subject keyword "Friday Party" and received between Sept 7 and Sept 9 from the Inbox folder of mailboxes on Server1. The messages will be deleted from the mailboxes and copied to the folder DeleteMsgs of the MyBackupMailbox mailbox. The Administrator can now review these items or delete them from the MyBackupMailbox mailbox. The StartDate and EndDate parameters must match the date format setting on the server, whether it is mm-dd-yyyy or dd-mm-yyyy.

    Get-Mailbox -Server Server1 -ResultSize Unlimited | Export-Mailbox -SubjectKeywords "Friday Party" -IncludeFolders "\Inbox" -StartDate "09/07/2010" -EndDate "09/09/2010" -DeleteContent -TargetMailbox MyBackupMailbox -TargetFolder DeleteMsgs -Confirm:$false

    This example removes all messages that contain the words "Friday Party" in the body or subject from all mailboxes.

    Depending on the size of your environment, it may be better to perform the extraction/deletion in batches by using the Get-Mailbox cmdlet with the Server or Database parameters (Get-Mailbox -Server servername -ResultSize Unlimited or Get-Mailbox -Database DB_Name -ResultSize Unlimited), or by specifying a filter using the Filter parameter. You can also use the Get-DistributionGroupMember cmdlet to perform this operation on members of a distribution group.

    Get-Mailbox -ResultSize Unlimited | Export-Mailbox -ContentKeywords "Friday Party" -TargetMailbox MyBackupMailbox -TargetFolder 'Friday Party' -DeleteContent

    It is recommended to always use a target mailbox (by specifying the TargetMailbox and TargetFolder parameters) so you have a copy of the data. You can review messages before purging them so any legitimate mail returned by the filter can be imported back to its owner mailbox. However, it is possible to outright delete all messages without temporarily copying them to a holding mailbox.

    This example deletes all messages that contain the string "Friday Party" in the message body or subject, without copying them to a target mailbox.

    Get-Mailbox | Export-Mailbox -ContentKeywords "Friday Party" -DeleteContent

    Removing messages on Exchange 2003 and Exchange 2000 using ExMerge

    The ExMerge utility can be used to extract mail items from mailboxes located on legacy Exchange Server versions. Follow the steps in KB 328202 HOW TO: Remove a Virus-Infected Message from Mailboxes by Using the ExMerge.exe Tool to remove unwanted messages from user mailboxes.

    Removing messages from Public Folders

    You can use the Outlook Object Model to remove messages from Public Folders. This works on any version of Exchange. The downside is that it's slower and may stumble when it hits huge folders with tens of thousands of items. In Exchange 2010/2007, you can use Exchange Web Services (EWS) to remove messages from Public Folders; EWS has no problem running against large folders.
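
    As a sketch of the EWS approach, the following uses the EWS Managed API from PowerShell to hard-delete matching items from a public folder. The DLL path, folder name, mailbox, and subject string are all assumptions for illustration, and the target folder is assumed to sit directly under the public folder root:

    Add-Type -Path "C:\Program Files\Microsoft\Exchange\Web Services\1.2\Microsoft.Exchange.WebServices.dll"
    $service = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService
    $service.AutodiscoverUrl("admin@contoso.com")
    # Locate the public folder by display name under the public folder root
    $root = [Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::PublicFoldersRoot
    $folderFilter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+IsEqualTo([Microsoft.Exchange.WebServices.Data.FolderSchema]::DisplayName, "Company Events")
    $folder = $service.FindFolders($root, $folderFilter, (New-Object Microsoft.Exchange.WebServices.Data.FolderView(1))).Folders[0]
    # Find items whose subject contains the string, then hard-delete them
    $itemFilter = New-Object Microsoft.Exchange.WebServices.Data.SearchFilter+ContainsSubstring([Microsoft.Exchange.WebServices.Data.ItemSchema]::Subject, "Friday Party")
    $itemView = New-Object Microsoft.Exchange.WebServices.Data.ItemView(1000)
    $service.FindItems($folder.Id, $itemFilter, $itemView) | ForEach-Object { $_.Delete([Microsoft.Exchange.WebServices.Data.DeleteMode]::HardDelete) }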

    The following posts have more details:

    Removing messages from mail queues

    There may be times where you need to purge messages from Exchange Server's mail queues to prevent delivery of unwanted mail. For more details about mail queues, see Understanding Transport Queues.

    Removing messages from mail queues on Exchange 2010 RTM and Exchange 2007

    Removing a message from the queue is a two-step process. First, the message itself must be suspended. Once the messages have been suspended, you can proceed with removing them from the queue. The commands below suspend and remove messages based on the Subject of the message.

    Exchange 2007 SP1 and SP2

    This command suspends messages with the string "Friday Party" from transport queues on all Hub Transport servers in your Exchange organization:

    Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | where{$_.Subject -eq "Friday Party" -and $_.Queue -notlike "*\Submission*"} | Suspend-Message

    On Exchange 2007 RTM through SP2, you will not be able to suspend or remove messages that are held in the Submission queue, so the command excludes messages in the Submission queue.

    This command removes all suspended messages from queues other than the Submission queue.

    Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | where{$_.status -eq "suspended" -and $_.Queue -notlike "*\Submission*"} | Remove-Message -WithNDR $False

    Exchange 2010 and Exchange 2007 SP3

    This command suspends messages that have the string "Friday Party" in the message subject in all queues on Hub Transport servers:

    Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | where {$_.Subject -eq "Friday Party"} | Suspend-Message

    This command removes messages that have the string "Friday Party" in the message subject in all queues on Hub Transport servers:

    Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | Where {$_.Subject -eq "Friday Party"} | Remove-Message -WithNDR $False

    Note, you can run the command against an individual Hub Transport server by specifying the server name after Get-TransportServer.

    Suspend and remove messages from a specified transport queue

    You can also suspend and remove messages from a specified queue. To retrieve a list of queues on a transport server, use the Get-Queue cmdlet.

    This example suspends messages with the string "Friday Party" in the message subject in a specified queue.

    Get-Message -Queue "server\queue" -ResultSize unlimited | where{$_.Subject -eq "Friday Party"} | Suspend-Message

    This example removes messages with the string "Friday Party" in the message subject in the specified queue.

    Get-Message -Queue "server\queue" -ResultSize unlimited | where{$_.Subject -eq "Friday Party" } | Remove-Message -WithNDR $False

    Clear queues in Exchange Server 2000 and Exchange Server 2003 with MFCMAPI

    In Exchange 2003/2000, you can use MFCMapi to clear the queues. For details, see KB 906557 How to use the Mfcmapi.exe utility to view and work with messages in the SMTP TempTables in Exchange 2000 Server and in Exchange Server 2003.

    If there are a large number of messages in the queue, you may want to limit how many are displayed at a time. From the tool bar select Other > Options and under Throttle Level change the value to a more manageable number (for example, 1000).

    Preventing message delivery using Transport Rules

    In Exchange 2010 and Exchange 2007, you can use Transport Rules to inspect messages in the transport pipeline and take the necessary actions, such as deleting a message, based on the specified criteria. See Understanding Transport Rules for more details.

    On Exchange 2010 and Exchange 2007, you can use the New Transport Rule wizard in the EMC to easily create transport rules. The following examples illustrate how to accomplish this using the Shell. Note the variation in syntax between the two versions. (The Exchange 2010 transport rule cmdlets have been simplified, allowing you to create or modify a transport rule using a one-line command.)

    Creating a Transport Rule to delete messages in Exchange 2010

    This example creates a transport rule to delete messages that contain the string "Friday Party" in the message subject.

    New-TransportRule -Name "purge Friday Party messages" -Priority '0' -Enabled $true -SubjectContainsWords 'Friday Party' -DeleteMessage $true

    Creating a Transport Rule to delete messages in Exchange 2007

    This example creates a transport rule to delete messages that contain the string "Friday Party" in the message subject.

    $condition = Get-TransportRulePredicate SubjectContains
    $condition.Words = @("Friday Party")
    $action = Get-TransportRuleAction DeleteMessage
    New-TransportRule -name "purge Friday Party messages" -Conditions @($condition) -Actions @($action) -Priority 0

    Note: If your Exchange organization has a mix of Exchange 2007 and Exchange 2010, you will have to create a rule for each Exchange version.

    Angelique Conde, Ed Bringas

  • Allowing application servers to relay off Exchange Server 2007

    From time to time, you need to allow an application server to relay off of your Exchange server. You might need to do this if you have a SharePoint server, a CRM application like Dynamics, or a web site that sends email to your employees or customers.

    You might also need to do this if you are getting the SMTP error message "550 5.7.1 Unable to relay".

    The top rule is that you want to keep relay restricted as tightly as possible, even on servers that are not connected to the Internet. Usually this is done with authentication and/or by restricting by IP address. Exchange 2003 provides the following relay restrictions on the SMTP virtual server:

    Here are the equivalent options for how to configure this in Exchange 2007.

    Allow all computers which successfully authenticate to relay, regardless of the list above

    Like its predecessor, Exchange 2007 is configured by default to accept and relay email from hosts that authenticate. Both the "Default" and "Client" receive connectors are configured this way out of the box. Authenticating is the simplest method to submit messages, and preferred in many cases.

    The Permissions Group that allows authenticated users to submit and relay is the "ExchangeUsers" group. The permissions that are granted with this permissions group are:

    NT AUTHORITY\Authenticated Users {ms-Exch-SMTP-Submit}
    NT AUTHORITY\Authenticated Users {ms-Exch-Accept-Headers-Routing}
    NT AUTHORITY\Authenticated Users {ms-Exch-Bypass-Anti-Spam}
    NT AUTHORITY\Authenticated Users {ms-Exch-SMTP-Accept-Any-Recipient}

    The specific ACL that controls relay is the ms-Exch-SMTP-Accept-Any-Recipient.

    Only the list below (specify IP address)

    This option is for those who cannot authenticate with Exchange. The most common example of this is an application server that needs to be able to relay messages through Exchange.

    First, start with a new custom receive connector. You can think of receive connectors as protocol listeners; the closest Exchange 2003 equivalent is the SMTP Virtual Server. You must create a new one because you will want to scope the remote IP address(es) that you will allow to relay.

    The next screen you must pay particular attention to is "Remote Network settings". This is where you specify the IP ranges of the servers that will be allowed to submit mail. You definitely want to restrict this range as much as you can. In this case, I want my two web servers, 192.168.2.55 and 192.168.2.56, to be allowed to relay.
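
    If you prefer the shell, an equivalent scoped connector can be created in one command (the server name EXHUB01 is hypothetical; the connector name and IP addresses match this example):

    New-ReceiveConnector -Name "CRM Application" -Server EXHUB01 -Usage Custom -Bindings 0.0.0.0:25 -RemoteIPRanges 192.168.2.55,192.168.2.56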

    The next step is to create the connector, and open the properties. Now you have two options, which I will present. The first option will probably be the most common.

    Option 1: Make your new scoped connector an Externally Secured connector

    This option is the most common, and preferred in most situations where the submitting application will be sending email to your internal users as well as relaying to the outside world.

    Before you can perform this step, you must enable the Exchange servers permission group. Once in the properties, go to the Permissions Groups tab and select Exchange servers.

    Next, continue to the Authentication mechanisms page and add the "Externally secured" mechanism. What this means is that your organization completely trusts traffic from the previously designated IP addresses.

    Caveat: If you do not perform these two steps in order, the GUI blocks you from continuing.
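
    From the shell, both settings can be applied with a single command, which avoids the ordering issue entirely:

    Set-ReceiveConnector "CRM Application" -PermissionGroups ExchangeServers -AuthMechanism ExternalAuthoritative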

    Do not use this setting lightly. You will be granting several rights including the ability to send on behalf of users in your organization, the ability to ResolveP2 (that is, make it so that the messages appear to be sent from within the organization rather than anonymously), bypass anti-spam, and bypass size limits. The default "Externally Secured" permissions are as follows:

    MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Authoritative-Domain}
    MS Exchange\Externally Secured Servers {ms-Exch-Bypass-Anti-Spam}
    MS Exchange\Externally Secured Servers {ms-Exch-Bypass-Message-Size-Limit}
    MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Exch50}
    MS Exchange\Externally Secured Servers {ms-Exch-Accept-Headers-Routing}
    MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Submit}
    MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Any-Recipient}
    MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Authentication-Flag}
    MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Any-Sender}

    Basically you are telling Exchange to ignore internal security checks because you trust these servers. The nice thing about this option is that it is simple and grants the common rights that most people probably want.

    Option 2: Grant the relay permission to Anonymous on your new scoped connector

    This option grants the minimum amount of required privileges to the submitting application.

    Taking the new scoped connector that you created, you have another option. You can simply grant the ms-Exch-SMTP-Accept-Any-Recipient permission to the anonymous account. Do this by first adding the Anonymous Permissions Group to the connector.

    This grants the most common permissions to the anonymous account, but it does not grant the relay permission. That step must be done through the Exchange Management Shell:

    Get-ReceiveConnector "CRM Application" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"

    In addition to being more difficult to complete, this option does not allow the anonymous account to bypass anti-spam or ResolveP2.

    Although it is completely different from the Exchange 2003 way of doing things, hopefully you find the new SMTP permissions model to be sensible.

    More information

    See the following for more information:

    - Scott Landry

  • Configure Automatic Replies for a user in Exchange 2010

    A user is out of office for some reason – on vacation, sick, on a sabbatical or extended leave of absence, or traveling to a remote location on business, and forgets to set an automatic reply, also known as an Out Of Office message or OOF in Exchange/Outlook lingo. As an Exchange administrator, you get an email from the user’s manager asking you to configure an OOF for the user.

    In previous versions of Exchange, you would need to access the user’s mailbox to be able to do this. Out of Office messages are stored in the Non-IPM tree of a user’s mailbox along with other metadata, and without access to the mailbox you can’t modify data in it. There are two ways for an admin to access a mailbox:

    1. Grant yourself Full Access mailbox permission to the user’s mailbox.
    2. Change the user’s password and log in as the user.

    It is safe to say that either of these options is potentially dangerous. The first option grants the administrator access to all of the data in the user’s mailbox. The second grants the administrator access to all of the data the user account can access within your company, and locks the user out of their own account (since the user no longer knows the account password).

    In Exchange 2010, you can configure auto-reply options for your users without using either of the above options. You must be a member of a role group that has either the Mail Recipients or User Options management roles.

    Configure auto-reply options using the Exchange Control Panel

    To configure an auto-reply using the ECP:

    1. From Mail > Options, select Another User (the default is My Organization).

      Figure 1: Select Another User

    2. Select the user you want to configure the auto-reply for.

    3. In the new window, ensure the user's name is displayed in the alert message, and then click Tell people you’re on vacation.

      Figure 2: When managing another user in the ECP, an alert near the top of the page displays the name of the user you're managing

    4. From the Automatic Replies tab, configure the auto-reply options for the user (see screenshot).

    In Exchange 2007, we introduced the ability to create different Out of Office messages for external and internal recipients. You can also disable or enable Out of Office messages on a per-user basis and on a per-remote domain basis in Remote Domain settings. For details, see previous post Exchange Server 2007 Out of Office (OOF).

    Configure auto-reply options using the Shell

    This command schedules internal and external auto-replies from 9/8/2011 to 9/15/2011:

    Set-MailboxAutoReplyConfiguration bsuneja@e14labs.com -AutoReplyState Scheduled -StartTime "9/8/2011" -EndTime "9/15/2011" -ExternalMessage "External OOF message here" -InternalMessage "Internal OOF message here"

    To configure auto-replies to be sent until they're disabled (i.e. without a schedule), set the AutoReplyState parameter to Enabled and do not specify the StartTime and EndTime parameters. For detailed syntax and parameter descriptions, see Set-MailboxAutoReplyConfiguration.
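
    For example, this command (using the same hypothetical mailbox) enables auto-replies with no end date:

    Set-MailboxAutoReplyConfiguration bsuneja@e14labs.com -AutoReplyState Enabled -InternalMessage "Internal OOF message here" -ExternalMessage "External OOF message here"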

    This command retrieves auto-reply settings for a mailbox.

    Get-MailboxAutoReplyConfiguration bsuneja@e14labs.com

    This command disables auto-reply configured for a mailbox:

    Set-MailboxAutoReplyConfiguration bsuneja@e14labs.com -AutoReplyState Disabled -ExternalMessage $null -InternalMessage $null

    Bharat Suneja

  • Client Connectivity in an Exchange 2013 Coexistence Environment

    This article is part 3 in a series that discusses namespace planning, load balancing principles, client connectivity, and certificate planning.

    Over the last several months, we have routinely fielded questions on how various clients connect to the infrastructure once Exchange 2013 is deployed. Our goal with this article is to articulate the various connectivity scenarios you may encounter in your designs. To that end, this article will begin with a walk through of a deployment that consists of Exchange 2007 and Exchange 2010 in a multi-site architecture and show how the connectivity changes with the introduction of Exchange 2013.

    Existing Environment

    Pre-E15 Environment
    Figure 1: Exchange 2007 & Exchange 2010 Multi-Site Architecture

    As you can see from the above diagram, this environment contains three Active Directory sites:

    • Internet Facing AD Site (Site1) - This is the main AD site in the environment and has exposure to the Internet. This site also has a mix of Exchange 2007 and Exchange 2010 servers. There are three namespaces associated with this location – mail.contoso.com and autodiscover.contoso.com resolve to the CAS2010 infrastructure and legacy.contoso.com resolves to the CAS2007 infrastructure.
    • Regional Internet Facing AD Site (Site2) - This is an AD site that has exposure to the Internet. This site has Exchange 2010 servers. The primary namespace is mail-region.contoso.com and resolves to the CAS2010 infrastructure located within this AD site.
    • Non-Internet Facing AD Site (Site3) - This is an AD site that does not have exposure to the Internet. This site contains Exchange 2010 infrastructure.
    Note: The term Internet Facing AD Site simply means any Active Directory site containing Client Access servers whose virtual directories have the ExternalURL property populated. Similarly, the term Non-Internet Facing AD Site means any Active Directory site containing Client Access servers whose virtual directories do not have the ExternalURL property populated.

    To understand the client connectivity before we instantiate Exchange 2013 into the environment, let’s look at the four users.

    Autodiscover

    The Autodiscover namespace, autodiscover.contoso.com, as well as the internal SCP records, resolve to the CAS2010 infrastructure located in Site1. Outlook clients and ActiveSync clients (on initial configuration) will submit Autodiscover requests to the CAS2010 infrastructure and retrieve configuration settings based on their mailbox’s location. The Autodiscover service on Exchange 2010 can process Autodiscover requests for both Exchange 2007 and Exchange 2010 mailboxes.

    Note: The recommended practice is to configure the 2007 and 2010 Client Access server’s AutoDiscoverServiceInternalUri value (which is the property value you use to set the SCP record) to point to autodiscover.contoso.com, assuming split-brain DNS is in place. If split-brain DNS is not configured, then set AutoDiscoverServiceInternalUri to a value that resolves to the load balanced VIP for the 2010 Client Access servers in your environment.

    For more information on how Autodiscover requests are performed, see the whitepaper, Understanding the Exchange 2010 Autodiscover Service.

    Internal Outlook Connectivity

    Internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2010 will connect to the Exchange 2010 RPC Client Access array endpoint (assuming one exists). Keep in mind the importance of configuring the RPC Client Access array endpoint correctly, as documented in Ambiguous URLs and their effect on Exchange 2010 to Exchange 2013 Migrations.

    Internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2007 will connect directly to the Exchange 2007 Mailbox server instance hosting the mailbox.

    Outlook Anywhere

    1. Red User will connect to mail.contoso.com as his RPC proxy endpoint. CAS2010 in Site1 will de-encapsulate the RPC data embedded within the HTTP packet and since the target mailbox database is located within the local site, will retrieve the necessary data from the local Exchange 2010 Mailbox server.
    2. Purple User will connect to mail.contoso.com as his RPC proxy endpoint. CAS2010 in Site1 will de-encapsulate the RPC data embedded within the HTTP packet, determine that the mailbox server is located on an Exchange 2007 server, and proxy the Outlook RPC data to the target Exchange 2007 Mailbox server.
    3. Blue User will connect to mail.contoso.com as his RPC proxy endpoint. CAS2010 in Site1 will de-encapsulate the RPC data embedded within the HTTP packet, determine that the mailbox server hosting the mailbox is located in another AD site (in this case Site3) and proxy the Outlook RPC data to the target Exchange 2010 Client Access server.
    4. Orange User will connect to mail-region.contoso.com as his RPC proxy endpoint. CAS2010 in Site2 will de-encapsulate the RPC data embedded within the HTTP packet and since the target mailbox database is located within the local site, will retrieve the necessary data from the local Exchange 2010 Mailbox server.
    Note: In addition to the mail and directory connections, Outlook Anywhere clients also utilize Exchange Web Services and an Offline Address Book, which are provided via the Autodiscover response.

    Outlook Web App

    For more information, see the article Upgrading Outlook Web App to Exchange 2010.

    1. Red User will connect to mail.contoso.com as his namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, determine that the mailbox is located within the local AD site and retrieve the necessary data from the Exchange 2010 Mailbox server.
    2. Purple User will connect to mail.contoso.com as his namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within the local AD site on an Exchange 2007 Mailbox server. CAS2010 will initiate a single sign-on silent redirect (assumes FBA is enabled on source and target) to legacy.contoso.com. CAS2007 will then process the request and retrieve the necessary data from the Exchange 2007 Mailbox server.
    3. Blue User will connect to mail.contoso.com as his namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site3, which does not contain any OWA ExternalURLs. CAS2010 in Site1 will proxy the request to CAS2010 in Site3. CAS2010 in Site3 will retrieve the necessary data from the Exchange 2010 Mailbox server.
    4. For the Orange User, there are three possible scenarios depending on what namespace the user enters and how the environment is configured:
      1. Orange User will connect to mail-region.contoso.com as his namespace endpoint. CAS2010 in Site2 will authenticate the user, do a service discovery, and determine that the mailbox is located within the local AD site and retrieve the necessary data from the Exchange 2010 Mailbox server.
      2. Orange User will connect to mail.contoso.com as his namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site2, which does contain an OWA ExternalURL. CAS2010 in Site1 is not configured to do a cross-site silent redirection, therefore, the user is prompted to use the correct URL to access his mailbox data.
      3. Orange User will connect to mail.contoso.com as his namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site2, which does contain an OWA ExternalURL. CAS2010 in Site1 is configured to do a cross-site silent redirection, therefore, CAS2010 will initiate a single sign-on silent redirect (assumes FBA is enabled on source and target) to mail-region.contoso.com. CAS2010 in Site2 will then facilitate the request and retrieve the necessary data from the Exchange 2010 Mailbox server.

    Exchange ActiveSync

    For more information, see the article Upgrading Exchange ActiveSync to Exchange 2010.

    1. Red User’s ActiveSync client will connect to mail.contoso.com as the namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, determine that the mailbox is located within the local AD site and retrieve the necessary data from the Exchange 2010 Mailbox server.
    2. Purple User’s ActiveSync client will connect to mail.contoso.com as the namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within the local AD site on an Exchange 2007 Mailbox server. CAS2010 proxies the request to CAS2007.
    3. Blue User’s ActiveSync client will connect to mail.contoso.com as the namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site3, which does not contain any EAS ExternalURLs. CAS2010 in Site1 will cross-site proxy the request to CAS2010 in Site3. CAS2010 in Site3 will retrieve the necessary data from the Exchange 2010 Mailbox server.
    4. For the Orange User, there are two possible scenarios:
      1. Orange User will connect to mail-region.contoso.com as his namespace endpoint. CAS2010 in Site2 will authenticate the user, do a service discovery, and determine that the mailbox is located within the local AD site and retrieve the necessary data from the Exchange 2010 Mailbox server.
      2. Orange User will connect to mail.contoso.com as his namespace endpoint. CAS2010 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site2, which does contain an EAS ExternalURL. Assuming the device supports Autodiscover and is on a defined list of devices that support the 451 redirect response, CAS2010 will issue a 451 response to the device, notifying the device that it needs to use a new namespace, mail-region.contoso.com. CAS2010 in Site2 will then facilitate the request and retrieve the necessary data from the Exchange 2010 Mailbox server. If the device is not on the supported list, CAS2010 in Site1 will proxy the request to CAS2010 in Site2.
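    The 451-versus-proxy decision described for the Orange User can be sketched as follows (an illustrative Python sketch of the decision logic only; the function and parameter names are hypothetical, not Exchange code):

```python
def handle_cross_site_eas(device_supports_451, regional_namespace):
    """Sketch: CAS2010 handling an EAS request for a mailbox in another
    Internet-facing site whose EAS ExternalURL is populated."""
    if device_supports_451:
        # Devices on the supported list get a 451 response telling them
        # to reconfigure against the regional namespace.
        return ("451-redirect", regional_namespace)
    # All other devices are silently proxied to CAS2010 in the mailbox's site.
    return ("proxy", regional_namespace)
```

    Either way the mailbox data is served from Site2; the difference is only whether the device itself is told to switch to mail-region.contoso.com.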

    Exchange Web Services

    1. For the Red User, Autodiscover will return the mail.contoso.com namespace for the Exchange Web Service URL. CAS2010 in Site1 will service the request.
    2. For the Blue User, Autodiscover will return the mail.contoso.com namespace for the Exchange Web Service URL. CAS2010 will then proxy the request to an Exchange 2010 Client Access server located in Site3.
    3. For the Purple User, Autodiscover will provide back the legacy.contoso.com namespace for the Exchange Web Service URL. CAS2007 in Site1 will service the request.
    4. For the Orange User, Autodiscover will return the mail-region.contoso.com namespace for the Exchange Web Service URL. CAS2010 in Site2 will service the request.

    Client Connectivity with Exchange 2013 in Site1

    Exchange 2013 has now been deployed in Site1 following the guidance documented within the Exchange Deployment Assistant. As a result, Outlook Anywhere has been enabled on all Client Access servers within the infrastructure and the mail.contoso.com and autodiscover.contoso.com namespaces have been moved to resolve to Exchange 2013 Client Access server infrastructure.

    E15 Site1
    Figure 2: Exchange 2013 Coexistence with Exchange 2007 & Exchange 2010 in a Multi-Site Architecture

    To understand the client connectivity now that Exchange 2013 exists in the environment, let’s look at the four users.

    Autodiscover

    The Autodiscover external namespace, autodiscover.contoso.com, as well as the internal SCP records, resolve to the CAS2013 infrastructure located in Site1. Outlook clients and ActiveSync clients (on initial configuration) will submit Autodiscover requests to the CAS2013 infrastructure and, depending on the mailbox version, different behaviors occur:

    1. For mailboxes that exist on Exchange 2010, CAS2013 will proxy the request to an Exchange 2010 Client Access server that exists within the mailbox’s local site. This means that for Red User, this will be a local proxy to a CAS2010 in Site1. For Blue User and Orange User, this will be a cross-site proxy to a CAS2010 located in the user’s respective site. CAS2010 will then generate the Autodiscover response.
    2. For mailboxes that exist on Exchange 2007, CAS2013 will proxy the request to an Exchange 2013 Mailbox server in the local site, which will generate the Autodiscover response. This means for Purple User, the MBX2013 server in Site1 will generate the response.
    3. For mailboxes that exist on Exchange 2013, CAS2013 will proxy the request to the Exchange 2013 Mailbox server that is hosting the active copy of the user’s mailbox which will generate the Autodiscover response.
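    The version-based routing above can be summarized as a small decision function (an illustrative Python sketch of the routing rules only; the function name and return values are hypothetical, not Exchange code):

```python
def pick_autodiscover_proxy_target(mailbox_version, mailbox_site, local_site):
    """Sketch: where CAS2013 proxies an Autodiscover request, by mailbox version."""
    if mailbox_version == 2010:
        # Proxy to a CAS2010 in the mailbox's own site: a local proxy when the
        # mailbox is in the CAS2013's site, a cross-site proxy otherwise.
        return ("CAS2010", mailbox_site)
    if mailbox_version == 2007:
        # Proxy to an Exchange 2013 Mailbox server in the local site, which
        # generates the Autodiscover response on behalf of the 2007 mailbox.
        return ("MBX2013", local_site)
    if mailbox_version == 2013:
        # Proxy to the Mailbox server hosting the active copy of the mailbox.
        return ("MBX2013 (active copy)", mailbox_site)
    raise ValueError("unknown mailbox version")
```

    For example, Blue User (a 2010 mailbox in Site3) yields a cross-site proxy to CAS2010 in Site3, while Purple User (a 2007 mailbox) is answered by MBX2013 in Site1.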

    Internal Outlook Connectivity

    Internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2010 will still connect to the Exchange 2010 RPC Client Access array endpoint.

    Internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2007 will still connect directly to the Exchange 2007 Mailbox server instance hosting the mailbox.

    When you have an Exchange 2013 mailbox, you are using Outlook Anywhere, both within the corporate network and outside of it; RPC/TCP connectivity no longer exists for Exchange 2013 mailboxes.

    In Exchange 2007/2010, Outlook Anywhere was implemented with a single configurable namespace. In Exchange 2013, you have both an internal host name and an external host name. Think of it as having two sets of Outlook Anywhere settings: one for when you are connected to the corporate domain, and another for when you are not. You will see this returned to the Outlook client in the Autodiscover response via what looks like a new provider, ExHTTP. However, ExHTTP isn’t an actual provider; it is a calculated set of values from the EXCH (internal Outlook Anywhere) and EXPR (external Outlook Anywhere) settings. To correctly use these settings, the Outlook client must be patched to the appropriate levels (see the Exchange 2013 System Requirements for more information). Outlook will process the ExHTTP settings in order: internal first and external second.

    Important: If you are utilizing a split-brain DNS infrastructure, you must utilize the same authentication value for both your internal and external Outlook Anywhere settings, or switch to different names for Outlook Anywhere inside and outside. Outlook gives priority to the internal settings over the external settings, and since the same namespace is used for both, regardless of whether the client is internal or external, it will utilize only the internal authentication settings.

    The default Exchange 2013 internal Outlook Anywhere settings don’t require HTTPS. By not requiring SSL, the client should be able to connect and not get a certificate pop-up for the mail and directory connections. However, you will still have to deploy a certificate that is trusted by the client machine for Exchange Web Services and OAB downloads.

    External Outlook Connectivity

    In order to support access for Outlook Anywhere clients whose mailboxes are on legacy versions of Exchange, you will need to make some changes to your environment which are documented in the steps within the Exchange Deployment Assistant. Specifically, you will need to enable Outlook Anywhere on your legacy Client Access servers and enable NTLM in addition to basic authentication for the IIS Authentication Method.

    The Exchange 2013 Client Access server’s RPC proxy component sees the incoming connections, authenticates and chooses which server to route the request to (regardless of version), proxying the HTTP session to the endpoint (legacy CAS or Exchange 2013 Mailbox server).

    1. Red User will connect to mail.contoso.com as his RPC proxy endpoint. CAS2013 in Site1 will determine the mailbox version is 2010 and that the mailbox database hosting the user’s mailbox is located in the local site. CAS2013 will proxy the request to CAS2010 in Site1. CAS2010 will de-encapsulate the RPC from the HTTP packet and obtain the data from the Exchange 2010 Mailbox server.
    2. Purple User will connect to mail.contoso.com as his RPC proxy endpoint. CAS2013 in Site1 will determine the mailbox version is 2007 and that the mailbox database hosting the user’s mailbox is located in the local site. CAS2013 will proxy the request to CAS2007 in Site1. CAS2007 will de-encapsulate the RPC from the HTTP packet and obtain the data from the Exchange 2007 Mailbox server.
    3. Blue User will connect to mail.contoso.com as his RPC proxy endpoint. CAS2013 in Site1 will determine the mailbox version is 2010 and that the mailbox database hosting the user’s mailbox is located in Site3. CAS2013 will proxy the request to CAS2010 in Site3. CAS2010 will de-encapsulate the RPC from the HTTP packet and obtain the data from the Exchange 2010 Mailbox server.
    4. Orange User will continue to access his mailbox using the Exchange 2010 regional namespace, mail-region.contoso.com.

    Outlook Web App

    For Outlook Web App, the user experience will depend on the mailbox version and where the mailbox is located.

    1. Red User will connect to mail.contoso.com as his namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox version is 2010 and is located within the local AD site. CAS2013 will proxy the request to an Exchange 2010 Client Access server which will retrieve the necessary data from the Exchange 2010 Mailbox server.
    2. Purple User will connect to mail.contoso.com as his namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within the local AD site on an Exchange 2007 Mailbox server. CAS2013 will initiate a single sign-on silent redirect (assumes FBA is enabled on source and target) to legacy.contoso.com. CAS2007 will then facilitate the request and retrieve the necessary data from the Exchange 2007 Mailbox server.
    3. Blue User will connect to mail.contoso.com as his namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site3, which does not contain any OWA ExternalURLs. CAS2013 in Site1 will proxy the request to CAS2010 in Site3. CAS2010 in Site3 will retrieve the necessary data from the Exchange 2010 Mailbox server.
    4. For the Orange User, there are two possible scenarios:
      1. Orange User will connect to mail-region.contoso.com as his namespace endpoint. In this case, CAS2013 is not involved in any fashion.
      2. Orange User will connect to mail.contoso.com as his namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site2, which does contain an OWA ExternalURL. CAS2013 in Site1 will initiate a single sign-on silent redirect (assumes FBA is enabled on source and target) to mail-region.contoso.com. CAS2010 in Site2 will then facilitate the request and retrieve the necessary data from the Exchange 2010 Mailbox server.
    Note: Let’s assume that Site3 contains Exchange 2007 servers as well. In this scenario, CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox version is 2007 and the mailbox is located in Site3. CAS2013 will issue a single sign-on silent redirect to legacy.contoso.com. CAS2007 in Site1 will authenticate the user and proxy the request to the Exchange 2007 Client Access server infrastructure in Site3.

    Exchange ActiveSync

    For Exchange ActiveSync clients, the user experience will depend on the mailbox version and where the mailbox is located. In addition, Exchange 2013 no longer supports the 451 redirect response – Exchange 2013 will always proxy ActiveSync requests.

    1. Red User’s ActiveSync client will connect to mail.contoso.com as the namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox version is 2010 and the mailbox is located within the local AD site. CAS2013 will proxy the request to an Exchange 2010 Client Access server which will retrieve the necessary data from the Exchange 2010 Mailbox server.
    2. Purple User’s ActiveSync client will connect to mail.contoso.com as the namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox version is 2007. CAS2013 will proxy the request to a local Exchange 2013 Mailbox server. The Exchange 2013 Mailbox server will send the request to an Exchange 2007 Client Access server that exists within the mailbox’s site (specifically, the InternalURL value of the Microsoft-Server-ActiveSync virtual directory on CAS2007), which will retrieve the data from the Exchange 2007 Mailbox server.
    3. Blue User’s ActiveSync client will connect to mail.contoso.com as the namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox is located within Site3. CAS2013 in Site1 will cross-site proxy the request to CAS2010 in Site3. CAS2010 in Site3 will retrieve the necessary data from the Exchange 2010 Mailbox server.
    4. For the Orange User, there are two possible scenarios depending on how the device is configured:
      1. Orange User’s ActiveSync client is configured to connect to mail-region.contoso.com as the namespace endpoint. In this case, CAS2013 is not involved in any fashion. Note that any new device that is configured will automatically be configured to use mail-region.contoso.com.
      2. Orange User’s ActiveSync client is configured to connect to mail.contoso.com as the namespace endpoint. CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox version is 2010 and the mailbox is located within another AD site. CAS2013 will issue a cross-site proxy request to an Exchange 2010 Client Access server that resides in the same site as the mailbox.
    Note: Let’s assume that Site3 contains Exchange 2007 servers as well. In this scenario, CAS2013 in Site1 will authenticate the user, do a service discovery, and determine that the mailbox version is 2007 and the mailbox is located in Site3. CAS2013 will proxy the request to a local Exchange 2013 Mailbox server. The Exchange 2013 Mailbox server will send the request to an Exchange 2007 Client Access server that exists within the mailbox’s site, which will retrieve the data from the Exchange 2007 Mailbox server.

    Exchange Web Services

    Coexistence with Exchange Web Services is rather simple.

    1. For the Red User, Autodiscover will return the mail.contoso.com namespace for the Exchange Web Service URL. CAS2013 will then proxy the request to an Exchange 2010 Client Access server within the local site.
    2. For the Purple User, Autodiscover will return the legacy.contoso.com namespace for the Exchange Web Service URL.
    3. For the Blue User, Autodiscover will return the mail.contoso.com namespace for the Exchange Web Service URL. CAS2013 will then proxy the request to an Exchange 2010 Client Access server located in Site3.
    4. For the Orange User, Autodiscover will return the mail-region.contoso.com namespace for the Exchange Web Service URL.
    Note: Let’s assume that Site3 contains Exchange 2007 servers as well. In this scenario, the Autodiscover response will provide back the legacy.contoso.com namespace for the Exchange Web Service URL. CAS2007 in Site1 will proxy the request to an Exchange 2007 Client Access server that exists within the mailbox’s site, which will retrieve the data from the Exchange 2007 Mailbox server. This is why you cannot remove Exchange 2007 from the Internet-facing Active Directory site as long as Exchange 2007 exists in non-Internet facing Active Directory sites.

    Offline Address Book

    Like with Exchange Web Services, Autodiscover will provide the Offline Address Book URL.

    1. For the Red User, Autodiscover will return the mail.contoso.com namespace for the Offline Address Book URL. CAS2013 will then proxy the request to any Client Access server within the local site that is a web distribution point for the Offline Address Book in question.
    2. For the Purple User, Autodiscover will return the legacy.contoso.com namespace for the Offline Address Book URL.
    3. For the Blue User, Autodiscover will return the mail.contoso.com namespace for the Offline Address Book URL. CAS2013 will then proxy the request to any Client Access server within the local site that is a web distribution point for the Offline Address Book in question.
    4. For the Orange User, Autodiscover will return the mail-region.contoso.com namespace for the Offline Address Book URL.
    Note: Let’s assume that Site3 contains Exchange 2007 servers as well. In this scenario, the Autodiscover response will provide back the legacy.contoso.com namespace for the Offline Address Book URL. This is why you cannot remove Exchange 2007 from the Internet-facing Active Directory site as long as Exchange 2007 exists in non-Internet facing Active Directory sites.

    How CAS2013 Picks a Target Legacy Exchange Server

    It’s important to understand that when CAS2013 proxies to a legacy Exchange Client Access server, it constructs a URL based on the server FQDN, not a load balanced namespace or the InternalURL value. But how does CAS2013 choose which legacy Client Access server to proxy the connection to?

    When a CAS2013 starts up, it connects to Active Directory and enumerates a topology map to understand all the Client Access servers that exist within the environment. Every 50 seconds, CAS2013 sends a lightweight request to each protocol endpoint on all the Client Access servers in the topology map; these requests have a user agent string of HttpProxy.ClientAccessServer2010Ping (yes, even Exchange 2007 servers are targeted with that user agent string). CAS2013 expects a response - a 200/300/400 series response indicates the target server is up for the protocol in question; a 502, 503, or 504 response indicates a failure. If a failure response occurs, CAS2013 immediately retries to determine whether the error was transient. If this second attempt fails, CAS2013 marks the target CAS as down and excludes it from being a proxy target. At the next interval (50 seconds), CAS2013 will re-check the health state of the down CAS to determine whether it is available again.
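    The probe logic above can be expressed in a short sketch (illustrative Python only; the real health check lives inside the CAS2013 HttpProxy component, and the names here are hypothetical):

```python
def classify_probe(status_code):
    """A 200/300/400 series response means the protocol endpoint is up;
    502, 503, or 504 means failure. Treating any other code as a failure
    is an assumption of this sketch, not documented behavior."""
    if status_code in (502, 503, 504):
        return "down"
    if 200 <= status_code < 500:
        return "up"
    return "down"

def probe_target(send_probe):
    """One health-check pass: a failed probe is immediately retried once to
    rule out a transient error before the target is marked down."""
    if classify_probe(send_probe()) == "up":
        return "up"
    return "up" if classify_probe(send_probe()) == "up" else "down"
```

    Note that by this rule the 401 and 302 responses in the IIS log excerpt that follows still count as healthy endpoints.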

    The IIS log on a legacy Client Access server will contain the ping events.  For example:

    2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 HEAD /ecp - 443 - 192.168.1.42 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 302 0 0 277 170 0
    2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 HEAD /PowerShell - 443 - 192.168.1.27 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 0 0 309 177 15
    2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 HEAD /EWS - 443 - 192.168.1.134 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 0 0 245 170 0
    2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 GET /owa - 443 - 192.168.1.220 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 301 0 0 213 169 171
    2014-03-11 14:00:01 W3SVC1 DF-C14-02 157.54.7.76 HEAD /Microsoft-Server-ActiveSync/default.eas - 443 - 192.168.1.29 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 2 5 293 194 31
    2014-03-11 14:00:04 W3SVC1 DF-C14-02 157.54.7.76 HEAD /OAB - 443 - 10.166.18.213 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 2 5 261 170 171

    If, for some reason, you would like to ensure that a particular CAS2010 is never considered a proxy endpoint (or want to remove it for maintenance activities), you can do so by executing the following cmdlet on Exchange 2010 (note that this feature does not exist on Exchange 2007):

    Set-ClientAccessServer <server> -IsOutOfService $True

    IMAP & POP Coexistence

    All this discussion about HTTP-based clients is great, but what about POP and IMAP clients? Like their HTTP-based counterparts, IMAP and POP clients are also proxied from the Exchange 2013 Client Access server to a target server (whether that be an Exchange 2013 Mailbox server or a legacy Client Access server). However, there is one key difference: there is no health checking of the target IMAP/POP services.

    When the Exchange 2013 Client Access server receives a POP or IMAP request, it will authenticate the user and perform a service discovery. 

    • If the target mailbox is E2010, CAS2013 will enumerate the POP or IMAP InternalConnectionSettings property value for each Exchange 2010 Client Access server within the mailbox’s site. Therefore, it is important to ensure that the InternalConnectionSettings maps to the server's FQDN, and not a load-balanced namespace.
    • If the target mailbox is E2007, CAS2013 will enumerate each Exchange 2007 Client Access server’s server FQDN within the mailbox’s site.

    CAS2013 chooses a server to proxy the request to based on the incoming connection’s configuration. If the incoming connection is over an encrypted channel, CAS2013 will try to locate an SSL proxy target first, then TLS, and finally plaintext. If the incoming connection is over a plaintext channel, CAS2013 will try to locate a plaintext proxy target first, then SSL, and finally TLS.
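    That preference order can be sketched as follows (an illustrative Python sketch; the function names and the shape of the candidate map are hypothetical, not Exchange code):

```python
def target_preference(incoming_encrypted):
    """Order in which CAS2013 looks for a POP/IMAP proxy target, based on
    whether the incoming client connection is encrypted."""
    if incoming_encrypted:
        return ["SSL", "TLS", "PlainText"]
    return ["PlainText", "SSL", "TLS"]

def choose_proxy_target(incoming_encrypted, candidates):
    """candidates maps a security setting to a list of candidate server
    FQDNs within the mailbox's site (a made-up shape for this sketch)."""
    for setting in target_preference(incoming_encrypted):
        if candidates.get(setting):
            return candidates[setting][0]
    return None  # no proxy target available for any security setting
```

    For example, an encrypted inbound connection with no SSL candidate falls through to a TLS target before ever considering plaintext.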

    Important: Exchange 2013 Client Access servers do not validate that the POP or IMAP services are actually running on the target Client Access servers. It's important, therefore, to ensure that the services are running on every legacy Client Access server if you have POP or IMAP clients in your environment.

    Conclusion

    Hopefully this information dispels some of the myths around proxying and redirection logic for Exchange 2013 when coexisting with either Exchange 2007 or Exchange 2010. Please let us know if you have any questions.

    Ross Smith IV
    Principal Program Manager
    Office 365 Customer Experience

    Updates

    • 3/14/2014: Added section on IMAP/POP coexistence
    • 3/25/2014: Added detail that MBX2013 proxies to the CAS2007 MSAS InternalURL for 2007 EAS mailbox access
  • Released: Exchange 2013 Server Role Requirements Calculator

    It’s been a long road, but the initial release of the Exchange 2013 Server Role Requirements Calculator is here. No, that isn’t a mistake, the calculator has been rebranded.  Yes, this is no longer a Mailbox server role calculator; this calculator includes recommendations on sizing Client Access servers too! Originally, marketing wanted to brand it as the Microsoft Exchange Server 2013 Client Access and Mailbox Server Roles Theoretical Capacity Planning Calculator, On-Premises Edition.  Wow, that’s a mouthful and reminds me of this branding parody.  Thankfully, I vetoed that name (you’re welcome!).

    The calculator supports the architectural changes made possible with Exchange 2013:

    Client Access Servers

    Like with Exchange 2010, the recommendation in Exchange 2013 is to deploy multi-role servers. There are very few reasons you would need to deploy dedicated Client Access servers (CAS): CPU constraints, use of Windows Network Load Balancing in small deployments (even with our architectural changes in client connectivity, we still do not recommend Windows NLB for any large deployments), and certificate management are a few examples that may justify dedicated CAS.

    When deploying multi-role servers, the calculator will take into account the impact that the CAS role has and make recommendations for sizing the entire server’s memory and CPU. So when you see the CPU utilization value, this will include the impact both roles have!

When deploying dedicated server roles, the calculator will recommend the minimum number of Client Access processor cores and memory per server, as well as the minimum number of CAS you should deploy in each datacenter.

    Transport

Now that the Mailbox server role includes additional components like transport, it only makes sense to include transport sizing in the calculator. This release does just that, factoring in message queue expiration and Safety Net hold time when calculating the database size. The calculator even makes a recommendation on where to deploy the mail.que database: either on the system disk, or on a dedicated disk!

    Multiple Databases / JBOD Volume Support

Exchange 2010 introduced the concept of 1 database per JBOD volume when deploying multiple database copies. However, this architecture did not ensure that the drive was utilized effectively across all three dimensions – throughput, IO, and capacity. Typically, the system was balanced from an IO and capacity perspective, but throughput was where we saw an imbalance, because during reseeds only a portion of the target disk’s total capable throughput was utilized. In addition, capacity on 7.2K disks continues to increase, with 4TB disks now available, impacting our ability to remain balanced along that dimension. Exchange 2013 also includes a 33% reduction in IO when compared to Exchange 2010. Naturally, the concept of 1 database / JBOD volume needed to evolve. As a result, Exchange 2013 made several architectural changes in the store process, ESE, and HA architecture to support multiple databases per JBOD volume. If you would like more information, please see Scott’s excellent TechEd session in a few weeks on Exchange 2013 High Availability and Site Resilience or the High Availability and Site Resilience topic on TechNet.

By default, the calculator will recommend multiple databases per JBOD volume. This architecture is supported for single datacenter deployments and multi-datacenter deployments when there is copy and/or server symmetry. The calculator supports highly available database copies and lagged database copies with this volume architecture type. The distribution algorithm will lay out the copies appropriately, as well as generate the deployment scripts correctly to support AutoReseed.

    High Availability Architecture Improvements

    The calculator has been improved in several ways for high availability architectures:

    • You can now specify the Witness Server location, either primary, secondary, or tertiary datacenter.
    • The calculator allows you to simulate WAN failures, so that you can see how the databases are distributed during the worst failure mode.
    • The calculator allows you to name servers and define a database prefix which are then used in the deployment scripts.
    • The distribution algorithm supports single datacenter HA deployments, Active/Passive deployments, and Active/Active deployments.
    • The calculator includes a PowerShell script to automate DAG creation.
    • In the event you are deploying your high availability architecture with direct attached storage, you can now specify the maximum number of database volumes each server will support. For example, if you are deploying a server architecture that can support 24 disks, you can specify a maximum of 20 database volumes (leaving 2 disks for the system, 1 disk for the Restore Volume, and 1 disk as a spare for AutoReseed).

    Additional Mailbox Tiers (sort of!)

Over the years, a few vocal members of the community have requested that I add more mailbox tiers to the calculator. As many of you know, I rarely recommend sizing multiple mailbox tiers, as that simply adds operational complexity, and I am all about removing complexity in your messaging environments. While I haven’t specifically added additional mailbox tiers, I have added the ability for you to define a percentage of the mailbox tier population that should have the IO and Megacycle Multiplication Factors applied. In a way, this allows you to define up to eight different mailbox tiers.

    Processors

I’ve received a number of questions regarding processor sizing in the calculator. People are comparing the Exchange 2010 Mailbox Server Role Requirements Calculator output with the Exchange 2013 Server Role Requirements Calculator. As mentioned in our Exchange 2013 Performance Sizing article, the megacycle guidance in Exchange 2013 leverages a new server baseline; therefore, you cannot directly compare the output from the Exchange 2010 calculator with the Exchange 2013 calculator.

    Conclusion

    There are many other minor improvements sprinkled throughout the calculator.  We hope you enjoy this initial release.  All of this work wouldn’t have occurred without the efforts of Jeff Mealiffe (for without our sizing guidance there would be no calculator!), David Mosier (VBA scripting guru and the master of crafting the distribution worksheet), and Jon Gollogy (deployment scripting master).

    As always we welcome feedback and please report any issues you may encounter while using the calculator by emailing strgcalc AT microsoft DOT com.

    Ross Smith IV
    Principal Program Manager
    Exchange Customer Experience

  • How to Export and Import mailboxes to PST files in Exchange 2007 SP1

There might be times when an Exchange administrator needs to export the contents of individual mailboxes to offline files in order to present specific users with a format that is easily portable and ready to consume using Outlook clients. To fulfill this need, Exchange 2007 SP1 will have a new set of features to export and import mailboxes to and from PST files. As I know you will ask - yes, those PST files can be bigger than 2 GB, which was a limitation of the ExMerge tool used for this purpose in previous versions of Exchange.

    Export/Import to PST Requirements

    In order to export or import mailboxes to PST files the following requirements must be met:

    • Export/Import to PST must be run from a 32-bit client machine with the Exchange Management Tools installed (Exchange 2007 SP1 or later). The 32-bit requirement comes from a dependency on the Outlook client.
    • Either Outlook 2003 or Outlook 2007 must be installed on the client machine.
    • The user running the task must be an Exchange Organization Admin or an Exchange Server Admin on the server where the mailbox to export/import lives.

    Exporting mailboxes to PST files

    The most basic cmdlet to export a mailbox to a PST file is as follows:

    Export-Mailbox –Identity <mailboxUser> -PSTFolderPath <pathToSavePST>

    PSTFolderPath must be a full path pointing either to a directory or to a (.pst) file. If a directory is specified a PST file named after the mailbox alias will be used as the target of the export. Note that if the PST file already exists the contents of the mailbox will be merged into it.

    Example:
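As an illustrative sketch (the mailbox alias and file path here are hypothetical), a single-mailbox export to an explicit .pst file might look like:

```powershell
# Export one mailbox to an explicit .pst file (alias and path are examples);
# if the target .pst already exists, the mailbox contents are merged into it.
Export-Mailbox -Identity john -PSTFolderPath 'D:\PSTs\john.pst'
```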

    After the cmdlet finishes execution, the .pst file will be ready in the specified location:

To export multiple mailboxes to their respective .pst files at once, you can pipe the identities of those mailboxes to the export task. Notice that when bulk exporting, the PSTFolderPath parameter must point to a directory, since one .pst file will be created for each mailbox.

    Example:

    Get-Mailbox -Database 'MDB' | Export-Mailbox -PSTFolderPath D:\PSTs

    Importing mailboxes from PST files

    The process for importing mailbox contents from a PST file is quite similar:

    Import-Mailbox -Identity <mailboxUser> -PSTFolderPath <PSTFileLocation>

Again, PSTFolderPath must be the full path to the directory where the .pst file lives, or to the (.pst) file itself. Where PSTFolderPath points to a directory, the cmdlet will try to match the mailbox alias with the name of an existing .pst file in that directory and import the content of that file.

    Example:
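As a sketch (the alias and directory are hypothetical), a single-mailbox import might look like the following; because PSTFolderPath points to a directory, the task looks for a file named after the mailbox alias (john.pst):

```powershell
# Import the contents of D:\PSTs\john.pst into the mailbox with alias "john"
Import-Mailbox -Identity john -PSTFolderPath 'D:\PSTs'
```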

Just as with the export to PST scenario, when bulk importing mailboxes the PSTFolderPath parameter must point to a directory, and the task logic will try to match mailbox aliases with the .pst file names under that location. If no match is found for a particular mailbox, that mailbox will be skipped.

    Example:

    Get-Mailbox -Database 'MDB' | Import-Mailbox -PSTFolderPath D:\PSTs

    Filtering content in Export/Import to PST

    When only specific content is desired in the PST file (or back into the mailbox) a common set of filters can be used to leave out the rest of the messages. Export/Import to PST support the following filters: Locale, StartDate, EndDate, ContentKeywords, SubjectKeywords, AttachmentFileNames, AllContentKeywords, SenderKeywords, and RecipientKeywords.

    Example: Import only those messages that were created between 1/1/06 and 12/1/06 and contain the word "review" in the subject and any of the words {"project","alpha"} in the body.

    Import-mailbox -Identity ricardr -PSTFolderPath D:\PSTs -StartDate 1/1/06 -EndDate 12/1/06 -SubjectKeywords:'review' -ContentKeywords:'project','alpha'

    Now, we realize that you would like to try this today, but please be patient!

    - Ricardo Rosales Guerrero

  • Outlook Connectivity with MAPI over HTTP

Among the many new features delivered in Exchange 2013 SP1 is a new method of connectivity to Outlook we refer to as MAPI over HTTP (or MAPI/HTTP for short). We’ve seen a lot of interest in this new connection method, and today we’ll give you a full explanation of what it is, what it provides, where it will take us in the future, and finally some tips on how and where to get started enabling this for your users.

    What is MAPI over HTTP?

    MAPI over HTTP is a new transport used to connect Outlook and Exchange. MAPI/HTTP was first delivered with Exchange 2013 SP1 and Outlook 2013 SP1 and begins gradually rolling out in Office 365 in May. It is the long term replacement for RPC over HTTP connectivity (commonly referred to as Outlook Anywhere). MAPI/HTTP removes the complexity of Outlook Anywhere’s dependency on the legacy RPC technology. Let’s compare the architectures.

[Architecture diagrams: RPC over HTTP (Outlook Anywhere) vs. MAPI over HTTP]

MAPI/HTTP moves connectivity to a true HTTP request/response pattern and no longer requires two long-lived TCP connections to be open for each session between Outlook and Exchange. Gone are the twin RPC_DATA_IN and RPC_DATA_OUT connections required in the past for each RPC/HTTP session. This change will reduce the number of concurrent TCP connections established between the client and server. MAPI/HTTP will generate a maximum of two concurrent connections: one long-lived connection and an additional on-demand, short-lived connection.

Outlook Anywhere also essentially double-wrapped all of the communications with Exchange, adding to the complexity. MAPI/HTTP removes the RPC encapsulation within HTTP packets sent across the network, making MAPI/HTTP a better-understood and more predictable HTTP payload.

An additional network-level change is that MAPI/HTTP decouples the client/server session from the underlying network connection. With Outlook Anywhere connectivity, if a network connection was lost between client and server, the session was invalidated and had to be reestablished all over again, which is a time-consuming and expensive operation. In MAPI/HTTP, when a network connection is lost the session itself is not reset for 15 minutes, and the client can simply reconnect and continue where it left off before the network-level interruption took place. This is extremely helpful for users who might be connecting from low-quality networks. Additionally, in the past an unexpected server-side network blip would result in all client sessions being invalidated and a surge of reconnections being made to a mailbox server. Depending on the number of Outlook clients reconnecting, re-establishing so many RPC/HTTP connections might strain the resources of the mailbox server, extending an outage caused by a single server-side network blip in both scope (to Outlook clients connected to multiple servers) and duration.

    Why MAPI over HTTP?

    You are probably asking yourself why the Exchange team would create a complete replacement for something so well-known and used. Let us explain.

    The original Outlook Anywhere architecture wasn’t designed for today’s reality of clients connecting from a wide variety of network types – many of these are not as fast or reliable as what was originally expected when Outlook Anywhere was designed. Consider connections from cellular networks, home networks, or in-flight wireless networks as a few examples. The team determined the best way to meet current connection needs and also put Exchange in the position to innovate more quickly was to start with a new simplified architecture.

The primary goal of MAPI/HTTP is to provide a better user experience across all types of connections by providing faster connection times to Exchange – yes, getting email to users faster. Additionally, MAPI/HTTP will improve connection resiliency when the network drops packets in transit. Let’s quantify a few of these improvements your users can expect. These results represent what we have seen in our own internal Microsoft user testing.

When starting Outlook, users often see the message “Connecting to Outlook” in the Outlook status bar. MAPI/HTTP can reduce the amount of time a user waits for this connection. When a user first launches Outlook, the time to start synchronization improved to 30 seconds, versus 90 seconds for Outlook Anywhere, for 70% of the monitored clients.

Improvements are also delivered when clients resume from hibernation or simply re-connect to a new network. Testing showed that 80% of the clients using MAPI/HTTP started syncing in less than 30 seconds, versus over 40 seconds for Outlook Anywhere clients, when resuming from hibernation. This improvement was made possible because MAPI/HTTP implements a pause/resume feature enabling clients to resume an existing connection rather than negotiating a new connection each time. Currently, sessions for MAPI/HTTP are valid for 15 minutes, but as we fine-tune and expand this duration, these improvements will become even more noticeable.

Improvements aren’t limited to end users. IT administrators will gain greater protocol visibility, allowing them to identify and remediate issues faster and with more confidence. Because MAPI/HTTP moves to a more traditional HTTP protocol payload, well-known tools common to HTTP debugging can now be used. IIS and HttpProxy logs will now contain information similar to other HTTP-based protocols like Outlook Web App and be able to pass information via headers. At times in the past, certain debug procedures for RPC/HTTP were only available via proprietary internal Microsoft tools. This move should put all customers on a level playing field as far as which tools are available for debugging.

Exchange administrators will also find that the response returned by Autodiscover for MAPI/HTTP to Outlook is greatly simplified. The settings returned are just the protocol version and the endpoint URLs Outlook uses to connect to the Exchange mailbox and directory from inside or outside the customer’s corporate network. Outlook treats the URLs returned as opaque and uses them as-is, minimizing the risk of connectivity breaking with future endpoint changes. Since MAPI/HTTP, like any other web protocol, simply sends an anonymous HTTP request to Exchange and gets back the authentication settings, there is no need for Autodiscover to advertise the authentication settings. This makes it easier to roll out changes in authentication settings for Outlook.

    The future

MAPI/HTTP puts the Exchange team in position to innovate more quickly. It simplifies the architecture, removing the dependency on RPC technologies, which are no longer evolving as quickly as customers demand. It provides the path for extensibility of the connection capabilities. A new capability on the roadmap for Outlook is multi-factor authentication for users in Office 365. This capability is made possible by the use of MAPI/HTTP and is targeted to be delivered later this year. For a deeper look at this upcoming feature, you can review the recent Multi-Factor Authentication for Office 365 blog post. This won’t stop with Office 365 MFA; it provides the extensibility foundation for third-party identity providers.

    How does MAPI/HTTP work?

    Let’s walk through the scenario of an Outlook 2013 SP1 client connecting to Exchange Server 2013 SP1 after MAPI/HTTP has been enabled.

    1. The Outlook client begins with an Autodiscover POST request. In this request Outlook includes a new attribute that advertises the client is MAPI/HTTP capable with the attribute X-MapiHTTPCapability = 1.
    2. The Exchange server sees the request is coming from a MAPI/HTTP capable client and responds with the MAPI/HTTP information including the settings on how to connect to the mailbox using MAPI/HTTP. This assumes the MAPI/HTTP has been configured and enabled on the server.
    3. The Outlook client detects the new connection path and prompts the user to restart Outlook to switch to use the new connection. While the restart is pending Outlook will continue using Outlook Anywhere. We recommend you deploy the latest Office client updates to provide the best user experience. The updates remove the prompt and clients are allowed to make the transition at the next unprompted restart of Outlook.
    4. After the restart, Outlook now uses MAPI/HTTP to communicate with Exchange.

    What’s required?

So now that we have a clear set of advantages you can offer users, let’s review the requirements to enable MAPI/HTTP.

Server Requirements: Use of MAPI/HTTP requires all Exchange 2013 Client Access Servers to be updated to Exchange Server 2013 SP1 (or later). The feature is disabled by default in SP1, so you can get the servers updated without anyone noticing any changes. If you are an Office 365 Exchange Online customer, you won’t have anything to worry about on the service side of deployment.

Client Requirements: Outlook clients must be updated to use MAPI/HTTP. Office 2013 SP1 or the Office 365 ProPlus February update (the SP1 equivalent for ProPlus) is required for MAPI/HTTP. It is recommended you deploy the May Office 2013 public update or the April update for Office 365 ProPlus to eliminate the restart prompt when MAPI/HTTP is enabled for users.

    Prior version clients will continue to work as-is using Outlook Anywhere. Outlook Anywhere is the supported connection method for those clients. We do plan to add MAPI/HTTP support to Outlook 2010 in a future update. We will announce timing when we are closer to its availability.

    How to get ready

Part one of getting ready is to get the required updates to your servers and clients as described in the prior section. Part two of getting ready is evaluating the potential impacts MAPI/HTTP might have on your on-premises servers. Again, if you are an Office 365 customer you can ignore this bit.

When you implement MAPI/HTTP in your organization, it will have an impact on your Exchange server resources. Before you go any further, you need to review the impacts to your server resources. The Exchange 2013 Server Role Requirements Calculator has been updated to factor in use of MAPI/HTTP. You need to use the most recent version of the calculator (v6.3 or later) before you proceed. MAPI/HTTP increases the CPU load on the Exchange Client Access Servers. This is a 50% increase over Exchange 2013 RTM; however, it is still lower than Exchange 2010 requirements. As you plan, be mindful that deploying in a multi-role configuration will minimize the impact to your sizing. Again, use the calculator to review the potential impacts this may have in your environment. This higher CPU use is due to the higher request rate of several short-lived connections, with each request handling authentication and proxying.

To provide the best MAPI/HTTP performance, you need to install .NET 4.5.1 on your Exchange 2013 servers. Installing .NET 4.5.1 avoids long wait times for users thanks to a fix that ensures the notification channel remains asynchronous, preventing queued requests.

The change in communication between Exchange and Outlook has a small impact on the bytes sent over the wire. The header content in MAPI/HTTP is responsible for an increase in bytes transferred. In typical message communications we have observed an average packet size increase of 1.2% versus Outlook Anywhere for a 50 KB average packet. In scenarios of data transfers over 10 MB, the increase in bytes over the wire is 5-10%. These increases assume an ideal network where connections are not dropped or resumed. Under real-world conditions you may actually find MAPI/HTTP data on the wire is lower than Outlook Anywhere, because Outlook Anywhere lacks the ability to resume connections and the cost of re-syncing items can quickly outweigh the increase from the MAPI/HTTP header information.

    Now deploy MAPI/HTTP

Now that you have prepared your servers with SP1, updated your clients, and reviewed potential sizing impacts, you are ready to get on with implementing MAPI/HTTP. It is disabled by default in SP1, and you must take explicit actions to configure and enable it. These steps are well covered in the MAPI over HTTP TechNet article.
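As a rough sketch of those steps (the server name and host name below are hypothetical; see the TechNet article for the authoritative procedure), configuration and enablement look like:

```powershell
# Set internal/external URLs on each CAS's MAPI virtual directory
# ("EX01" and "mail.contoso.com" are example names)
Set-MapiVirtualDirectory -Identity "EX01\mapi (Default Web Site)" `
    -InternalUrl "https://mail.contoso.com/mapi" `
    -ExternalUrl "https://mail.contoso.com/mapi"

# Then enable MAPI/HTTP for the entire organization (disabled by default in SP1)
Set-OrganizationConfig -MapiHttpEnabled $true
```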

A few important things to remember in your deployment:

    • Namespace: MAPI/HTTP is a new endpoint on CAS and can utilize both an internal namespace and an external namespace. For more information on how to properly plan your namespace design, see Namespace Planning in Exchange 2013 and Load Balancing in Exchange 2013.
    • Certificates: The certificate used in Exchange will need to include both the internal and external MAPI/HTTP virtual directories to avoid any user certificate prompts, thus consider if the names exist on your certificates. Refer to the certificate planning post for additional help planning.
    • MAPI/HTTP Configuration: Enabling MAPI/HTTP is an organization-wide configuration in Exchange; you won’t have the ability to configure this for a subset of servers.

NOTE: If you require more specific control, you can control the client behavior with a registry key on each client machine. This is not recommended or required, but is included in case your situation demands this level of control. This registry entry prevents Outlook from sending the MAPI/HTTP capable flag to Exchange in the Autodiscover request. A change to the registry key on a client will not take effect until the next time the client performs an Autodiscover query against Exchange.

To disallow MAPI/HTTP and force RPC/HTTP to be used:

[HKEY_CURRENT_USER\Software\Microsoft\Exchange]
"MapiHttpDisabled"=dword:00000001

To allow MAPI/HTTP, simply delete the MapiHttpDisabled DWORD, or set it to a value of 0 as below:

[HKEY_CURRENT_USER\Software\Microsoft\Exchange]
"MapiHttpDisabled"=dword:00000000

    • Connectivity: An important final consideration is to verify that load balancers, reverse proxies, and firewalls are configured to allow access to the MAPI/HTTP virtual directories. At this time, Forefront Unified Access Gateway (UAG) 2010 SP4 is not compatible with MAPI/HTTP. The UAG team has committed to deliver support for MAPI/HTTP in a future update.

    How do I know it is working?

    There are a few quick ways to verify your configuration is working as expected.

    1. Test with the Test-OutlookConnectivity cmdlet

    Use this command to test MAPI/HTTP connectivity:

    Test-OutlookConnectivity -RunFromServerId Contoso -ProbeIdentity OutlookMapiHttpSelfTestProbe

This test is detailed in the MAPI over HTTP TechNet article.

    2. Inspect MAPI/HTTP server logs

    Administrators can review the following MAPI/HTTP log files to validate how the configuration is operating:

    • CAS: %ExchangeInstallPath%Logging\HttpProxy\Mapi\HTTP
    • Mailbox: %ExchangeInstallPath%Logging\MAPI Client Access\
    • Mailbox: %ExchangeInstallPath%Logging\MAPI Address Book Service\
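As a quick sketch (assuming a default installation and that some MAPI/HTTP traffic has already occurred), the most recent CAS-side proxy log can be inspected from the Exchange Management Shell:

```powershell
# Show the tail of the most recently written MAPI/HTTP proxy log on a CAS
Get-ChildItem (Join-Path $env:ExchangeInstallPath 'Logging\HttpProxy\Mapi\HTTP') -Filter *.log |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1 |
    Get-Content -Tail 20
```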

    3. Check Outlook connection status on clients

You can also quickly verify that the client is connected using MAPI/HTTP. The Outlook Connection Status dialog can be launched by Ctrl+right-clicking the Outlook icon in the notification area and selecting Connection Status. Here are a few key fields that quickly confirm the connection is using MAPI/HTTP.

    • Protocol: HTTP (vs. RPC/HTTP for Outlook Anywhere)
    • Proxy Server: Empty
    • Server name: Actual server name (vs. a GUID for Outlook Anywhere connections)


    Summary

MAPI/HTTP provides a simplified transport, and a resulting simplified architecture, for Outlook to connect with Exchange. It enables improved user experiences, giving users faster access to mail, and improves the resilience of their Outlook connections. These investments are the foundation for future capabilities such as multi-factor authentication in Outlook. It also helps IT support and troubleshoot client connection issues using standard HTTP protocol tools.

As with all things new, you must properly plan your implementation. Use the deployment guidance available on TechNet and the updated sizing recommendations in the calculator before you start your deployment. With proper planning, you can ensure a smooth deployment of MAPI/HTTP.

    Special thanks to Brian Day and Abdel Bahgat for extensive contributions to this blog post.

    Brian Shiers | Technical Product Manager


    MAPI/HTTP FAQ

We collected a number of questions which frequently came up during the development, internal dogfooding, and customer TAP testing of MAPI/HTTP. We hope these answer most of the questions you may have about MAPI/HTTP.

    Can MAPI/HTTP be enabled/disabled on a per-server basis?

No, it is an organization-wide Exchange setting. The degraded user experience during database failovers, when one server is not yet MAPI/HTTP capable (described later in this FAQ), made turning MAPI/HTTP on and off per server not a viable solution.

    Can MAPI/HTTP be enabled/disabled on a per-mailbox basis?

    No, there is not currently a Set-CasMailbox parameter to enable/disable MAPI/HTTP for a single mailbox.

I updated the registry key to disable MAPI/HTTP on a client, but the connection didn’t change. Why?

The registry entry simply controls what Outlook tells Exchange about its MAPI/HTTP capability during an Autodiscover request. It does not immediately change the connection method Outlook is using, nor will it change it with a simple restart of Outlook. Remember, the Autodiscover response Outlook gets only has MAPI/HTTP or RPC/HTTP settings in it, so it has no way to immediately switch types. After setting this registry entry, you must allow Outlook to perform its next Autodiscover request and get a response from Exchange before the change will take place. If you want to speed along this process, there are two options.

    1. Set the registry entry as you wish, close Outlook, delete the Outlook profile, and then restart Outlook and go through the profile wizard. This should result in an immediate switch, but any settings stored in the Outlook profile are lost.
    2. Set the registry entry as you wish, close Outlook, delete the hidden Autodiscover response XML files in %LOCALAPPDATA%\Microsoft\Outlook, and restart Outlook. Restart Outlook once more to complete the switch.

    What happens if a user’s mailbox database is mounted on a MAPI/HTTP capable server and then a database failover happens to a non-MAPI/HTTP capable server?

For example, when not all mailbox servers in a DAG are MAPI/HTTP capable and MAPI/HTTP has already been enabled in the organization, a mailbox failover may take place between SP1 (or later) and pre-SP1 servers. This could also happen if you move a mailbox from a MAPI/HTTP capable mailbox server to a server that is not MAPI/HTTP capable.

In the above example, Outlook would fail to connect, and when Autodiscover next ran the user would get an Outlook restart notification warning, because MAPI/HTTP is no longer a viable connection method once the mailbox is mounted on a pre-SP1 server. After the client restart, the client profile would be back to utilizing RPC/HTTP.

Note: While a mix of MAPI/HTTP capable and non-capable Exchange mailbox servers in the same DAG is supported in an environment with MAPI/HTTP enabled, it is strongly discouraged due to the possible user experience outlined above. It is suggested that the entire organization be upgraded to SP1 or later before enabling MAPI/HTTP.

    What if a user accesses additional mailboxes where MAPI/HTTP is not yet available?

Outlook profiles can continue to access additional resources using non-MAPI/HTTP connectivity methods even if the user’s primary mailbox utilizes MAPI/HTTP. For example, a user can continue to access legacy Public Folders or shared mailboxes on other Exchange servers not utilizing MAPI/HTTP. During the Autodiscover process, Exchange will determine and hand back to Outlook the proper connectivity method for each resource being accessed.

If MAPI/HTTP becomes unreachable, will a client fall back to RPC/HTTP?

No, a user’s profile will never attempt to use RPC/HTTP if MAPI/HTTP becomes unavailable, because the original Autodiscover response only contained one connection method to use. There is no fallback from MAPI/HTTP to RPC/HTTP or vice versa. Normal high availability design considerations should ensure the MAPI/HTTP endpoint remains accessible in the event of server or service failures.

    Is MAPI/HTTP replacing Exchange Web Services (EWS)?

    No, MAPI/HTTP is not a replacement for EWS and there are no plans to move current EWS clients to MAPI/HTTP.

    Is RPC/HTTP being deprecated as an available connection method?

    Over time this may take place as non-MAPI/HTTP capable Outlook versions age out of their product support lifecycle, but there are no immediate plans to remove RPC/HTTP as a valid connection method.

    What authentication methods does MAPI/HTTP support?

A huge architectural improvement of moving to MAPI/HTTP is that the protocol is abstracted from authentication. In short, authentication is done at the HTTP layer, so whatever HTTP can do, MAPI/HTTP can use.

Does moving to MAPI/HTTP negatively affect the Lync client at all?

    No, the Lync client uses the same profile as configured by Outlook and will connect via whatever connectivity method is in use by Outlook.

    Is there any kind of SDK/API for third party application usage of MAPI/HTTP?

The MAPI/HTTP protocol is publicly documented (PDF download) and has the same level of documentation support as RPC/HTTP. There are no plans to update the MAPI CDO library for MAPI/HTTP, and third-party companies are still encouraged to utilize Exchange Web Services as the long-term protocol for interaction with Exchange, as discussed in the Exchange 2013 Client Access Server Role article.

    What are the client requirements for MAPI/HTTP?

Use of MAPI/HTTP requires Outlook 2013 clients to have Office 2013 Service Pack 1, or the February update for Office 365 ProPlus clients. MAPI/HTTP is planned to be ported to Outlook 2010 at a future date. At this time, no other version of Windows Outlook supports MAPI/HTTP.

    How does this affect publishing Exchange 2013 via ARR, WAP, TMG, UAG, B2M, ABC, BBD …

    Publishing Exchange 2013 with MAPI/HTTP in use does not change very much. You will need to ensure that devices in front of Exchange that handle user access to CAS allow access to the Default Web Site’s /mapi/ virtual directory.


    At this time UAG SP3 is not compatible with MAPI/HTTP even with all filtering options disabled. UAG plans to add support for MAPI/HTTP in a future update.

    Learn More

    Still want more information? Review the following sessions on this topic from the Microsoft Exchange Conference 2014:

    Outlook Connectivity: Current and Future

    What's New in Outlook 2013 and Beyond

    What's New in Authentication for Outlook 2013

  • Troubleshooting Exchange 2010 Management Tools startup issues

    EDIT 12/7/2010: For additional help resolving those issues, please see our newer blog post Resolving WinRM errors and Exchange 2010 Management tools startup failures.

    In this blog post, we will be highlighting some of the most common errors that may be seen when attempting to open the Exchange Management tools (Exchange Management Console and Exchange Management Shell).

    To start off, you first need to be aware that in Exchange 2010, all management is done via Remote PowerShell, even when opening the Management Tools on an Exchange server. Where this differs from Exchange 2007 is that there is now a much larger dependency on IIS, as Remote PowerShell requests are sent via the HTTP protocol and use IIS as the mechanism for connections. IIS works with the WinRM (Windows Remote Management) service, and the WSMan (Web Services for Management) protocol to initiate the connection.

    When you click on the Exchange Management Shell shortcut, a Remote PowerShell session is opened. Instead of simply loading the Exchange snap-in (as we did with Exchange 2007), PowerShell connects using IIS to the closest Exchange 2010 server via WinRM. WinRM then performs authentication checks, creates the remote session and presents to you the cmdlets that you have access to via RBAC (Role Based Access Control).

    Since all Remote PowerShell connections go through IIS, we have identified some of the most common errors that may be exhibited when attempting to open the Exchange Management tools along with the most common causes of those errors and how to address these issues. We have attempted to list these in order of frequency.

    Issue:

    Connecting to remote server failed with the following error message: The WinRM client cannot process the request. It cannot determine the content type of the HTTP response from the destination computer. The content type is absent or invalid. For more information, see the about_Remote_Troubleshooting Help topic.

    Possible causes:

    1. Remote PowerShell uses Kerberos to authenticate the user connecting. IIS implements this Kerberos authentication method via a native module. In IIS Manager, if you go to the PowerShell Virtual Directory and then look at the Modules, you should see Kerbauth listed as a Native Module, with the dll location pointing to C:\Program Files\Microsoft\Exchange Server\v14\Bin\kerbauth.dll. If the Kerbauth module shows up as a Managed module instead of Native, or if the Kerbauth module has been loaded on the Default Web Site level (instead of, or in addition to, the PowerShell virtual directory), you can experience this issue. To correct this, make sure that the Kerbauth module is not enabled on the Default Web Site, but is only enabled on the PowerShell virtual directory. The entry type of "Local" indicates that the Kerbauth module was enabled directly on this level, and not inherited from a parent.

    2. The WSMan module entry is missing from the <globalModules> section of the C:\Windows\System32\Inetsrv\config\ApplicationHost.config file. When properly registered, the entry looks like this:

    <globalModules>
               <add name="WSMan" image="C:\Windows\system32\wsmsvc.dll" />
    </globalModules>

    If this entry is missing, the WSMan module will display as a Managed module on the PowerShell virtual directory.

    To correct this, make sure that the WSMan module has been registered (but not enabled) at the Server level, and has been enabled on the PowerShell virtual directory.
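    If you need to audit this setting across many servers, the check can be scripted. The following is a minimal Python sketch (the sample XML mirrors the entry shown above; function and variable names are illustrative, and in practice you would read the real ApplicationHost.config file rather than a sample string):

```python
import xml.etree.ElementTree as ET

def wsman_globally_registered(config_xml: str) -> bool:
    """Return True if a <globalModules> entry named WSMan exists
    in the given ApplicationHost.config content."""
    root = ET.fromstring(config_xml)
    global_modules = root.find(".//globalModules")
    if global_modules is None:
        return False
    return any(add.get("name") == "WSMan"
               for add in global_modules.findall("add"))

# Sample content approximating the relevant portion of ApplicationHost.config
sample = """<configuration>
  <system.webServer>
    <globalModules>
      <add name="WSMan" image="C:\\Windows\\system32\\wsmsvc.dll" />
    </globalModules>
  </system.webServer>
</configuration>"""
print(wsman_globally_registered(sample))  # True
```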

    3. If the user that is attempting to connect is not Remote PowerShell enabled. To check if a user is enabled for Remote PowerShell, you need to open the Exchange Management Shell with an account that has been enabled, and run the following query.

    (Get-User <username>).RemotePowerShellEnabled

    This will return a True or False. If the output shows False, the user is not enabled for Remote PowerShell. To enable the user, run the following command.

    Set-User <username> -RemotePowerShellEnabled $True

    Issue:

    Connecting to the remote server failed with the following error message: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol. For more information, see the about_Remote_Troubleshooting Help topic.

    Possible Causes:

    1. The http binding has been removed from the Default Web Site. A common scenario for this is if you are running multiple web sites, and attempting to set up a redirect to https://mail.company.com/owa by requiring SSL on the Default Web Site, and creating another web site to do the redirect back to the SSL-enabled website.

    Remote PowerShell requires port 80 to be available on the Default Web Site. If you want to set up an automatic redirect to /owa and redirect http requests to https, follow the instructions at http://technet.microsoft.com/en-us/library/aa998359(EXCHG.80).aspx under the section "For a Configuration in Which SSL is Required on the Default Web Site or on the OWA Virtual Directory in IIS 7.0".

    2. The http binding on the Default Web Site has been modified, and the Hostname field configured. To correct this issue, you need to clear out the Hostname field under the port 80 bindings on the Default Web Site.

    Issue:

    Connecting to remote server failed with the following error message: The WinRM client received an HTTP server error status (500), but the remote service did not include any other information about the cause of the failure. For more information, see the about_Remote_Troubleshooting Help topic. It was running the command 'Discover-ExchangeServer -UseWIA $true -SuppressError $true'.

    In addition, you may see the following warning event in the System log:

    Source: Microsoft-Windows-WinRM
    EventID: 10113
    Level: Warning
    Description: Request processing failed because the WinRM service cannot load data or event source: DLL="%ExchangeInstallPath%Bin\Microsoft.Exchange.AuthorizationPlugin.dll"

    Possible Causes

    1. The ExchangeInstallPath variable may be missing. To check this, open System Properties > Environment Variables and look under the System variables. You should see an ExchangeInstallPath variable with a value pointing to C:\Program Files\Microsoft\Exchange Server\V14\.

    2. The path of the PowerShell virtual directory has been modified. The PowerShell virtual directory must point to the \Program Files\Microsoft\Exchange Server\v14\ClientAccess\PowerShell directory, or Remote PowerShell connections will fail.

    Issue:

    Connecting to remote server failed with the following error message: The connection to the specified remote host was refused. Verify that the WS-Management service is running on the remote host and configured to listen for requests on the correct port and HTTP URL. For more information, see the about_Remote_Troubleshooting Help topic.

    Possible Causes:

    1. Make sure the MSExchangePowerShellAppPool is running. If it is, try recycling the Application Pool and check for errors or warnings in the Event logs.

    2. Make sure that the user that is trying to connect is Remote PowerShell Enabled (see the first error for details on how to check this).

    3. Make sure WinRM is properly configured on the server.

    a. Run WinRM Quick Config on the server and ensure that both tests pass and no actions are required. If any actions are required, answer Yes to the prompt to allow the WinRM configuration changes to be made.

    b. Run WinRM enumerate winrm/config/listener and ensure that a listener is present for the HTTP protocol on port 5985 listening on all addresses.
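    When checking many servers, the listener output can be validated programmatically. Here is a rough Python sketch; the sample text approximates typical `winrm enumerate winrm/config/listener` output and is an assumption, not captured from a live server:

```python
def http_listener_present(winrm_output: str) -> bool:
    """Check winrm listener enumeration output for an HTTP listener
    on port 5985. Exact per-line matching avoids mistaking an
    HTTPS-only listener for an HTTP one."""
    lines = [ln.strip() for ln in winrm_output.splitlines()]
    return "Transport = HTTP" in lines and "Port = 5985" in lines

# Approximation of typical listener output (illustrative)
sample = """Listener
    Address = *
    Transport = HTTP
    Port = 5985
    Enabled = true
    ListeningOn = 10.0.0.5, 127.0.0.1, ::1"""
print(http_listener_present(sample))  # True
```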

    Issue:

    Connecting to remote server failed with the following error message: The WinRM client received an HTTP status code of 403 from the remote WS-Management service.

    Possible Causes:

    1. The "Require SSL" option has been enabled on the PowerShell Virtual Directory. To resolve this, remove the "Require SSL" option from this Virtual Directory. The Exchange Management Tools connect over port 80, not 443, so if Require SSL is set, when a connection is attempted on port 80, IIS will return a 403 error indicating SSL is required.

    - Ben Winzenz, Solange Trombini

  • Supporting Windows Mail 8.1 in your organization

    Windows 8.1 and Windows RT include a built-in email app named Windows Mail. Mail includes support for IMAP and Exchange ActiveSync (EAS) accounts.

    This article includes some key technical details of Windows Mail in Windows 8.1. (See Supporting Windows 8 Mail in your organization for Windows 8.0.) Use the information to help you support the use of Mail in your organization. Read this article start to finish, or jump to the topic that interests you. Use the reference links throughout the article for more information.

    NOTE The Mail, Calendar, and People apps run on Windows 8.1 and Windows RT. Although this article discusses the Mail app, much of the information also applies to the Calendar and People apps. When connected to a server that supports Exchange ActiveSync, the Calendar and People apps may also display data that was downloaded over the Exchange ActiveSync connection.

    Protocol Support

    Mail lets users connect to any service provider that supports either of the following two protocols:

    Protocol | Protocol versions & standards | Functionality

    Exchange ActiveSync (EAS)

    • EAS 2.5
    • EAS 12.0
    • EAS 12.1
    • EAS 14.0
    • EAS 14.1
    • Send and receive email
    • Sync email, contacts & calendar
    • ActiveSync Policies
    • Remote Wipe

    IMAP + SMTP

    • Send and receive email only
    • Contacts and calendar data not synchronized
    • Microsoft Exchange does not support Public Folders via IMAP. See IMAP support in Exchange 2013.

    Post Office Protocol (POP) is not supported.

    NOTE All Windows Communications apps (Mail, Calendar, and People) can use the data that is synchronized using Exchange ActiveSync. After a user connects to their account in the Mail app, their contacts and calendar data is available in the other Windows Communications Apps and vice versa.

    Sync Configuration

    Mail can be configured to synchronize data at different times as follows:

    • Push email (default)
    • Polling at fixed intervals
    • Manually

    If a push email connection can’t be established, Mail automatically switches to polling at fixed intervals.

    Push Email

    Push email requires either an Exchange ActiveSync account (all EAS versions support push) or an IMAP account whose server supports the IDLE extension. Not all IMAP servers support IDLE, and IDLE is supported only for the Inbox folder.

    When a push connection can’t be established, Mail changes to polling at 30-minute intervals. Push email over Exchange ActiveSync requires HTTP connections to be maintained for up to 60 minutes, and IMAP IDLE requires TCP connections to be maintained for up to 30 minutes.
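    The fallback behavior described above can be sketched as a simple decision. This is an illustration only, not the Mail app’s actual implementation; the function and parameter names are invented:

```python
POLL_FALLBACK_MINUTES = 30

def choose_sync_mode(protocol: str, idle_supported: bool = False):
    """Pick push when the protocol supports it, else fall back to polling.
    EAS always supports push; IMAP supports it only with the IDLE
    extension (and then only for the Inbox folder)."""
    if protocol == "EAS":
        return ("push", None)
    if protocol == "IMAP" and idle_supported:
        return ("push", None)
    return ("poll", POLL_FALLBACK_MINUTES)

print(choose_sync_mode("EAS"))          # ('push', None)
print(choose_sync_mode("IMAP", False))  # ('poll', 30)
```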

    Account Setup Features

    Windows 8.1 and Windows RT users can add email accounts to Mail using the Settings charm. The Settings charm is always available on the right side of the Windows 8.1 and Windows RT screen. (For more visual details about Charms & the Windows 8.1 user interface, see Search, share, print & more.)

    NOTE This section provides an overview of account setup in Mail. For step-by-step procedures for setting up an account, see What else do I need to know? at the end of this guide.

    To make it as easy as possible to add accounts, account setup only prompts the user to enter the email address and password for the account they want to set up. From that data, Mail attempts to automatically configure the account as follows:

    1. The domain portion of the email address is matched against a database of well-known service providers (such as Outlook.com). If it’s a match, its settings are automatically configured.
    2. The domain portion of the email address is used to discover the user's email settings using Autodiscover.
    3. If automatic configuration fails, the user is prompted for additional details such as an email server name and domain name.
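    The three-step resolution order above can be sketched as follows. The well-known-provider entries and server names here are purely illustrative, and the real lookup database and Autodiscover exchange are far richer than this:

```python
# Illustrative stand-in for the well-known provider database;
# the server value is invented, not a real endpoint.
WELL_KNOWN = {"outlook.com": {"protocol": "EAS", "server": "eas.example.invalid"}}

def configure_account(email: str, autodiscover=lambda domain: None):
    """Resolve account settings in the order described above:
    well-known provider list, then Autodiscover, then manual entry."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in WELL_KNOWN:
        return ("well-known", WELL_KNOWN[domain])
    settings = autodiscover(domain)
    if settings is not None:
        return ("autodiscover", settings)
    return ("manual", None)  # prompt the user for server details

print(configure_account("user@outlook.com")[0])   # well-known
print(configure_account("user@contoso.com")[0])   # manual
```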

    Add an Exchange ActiveSync account

    Screenshot: Exchange ActiveSync configuration in Windows Mail
    Figure 1: Exchange ActiveSync (EAS) configuration in Windows Mail

    If automatic configuration fails, the following additional information is required to connect to a server via Exchange ActiveSync:

    • Server address
    • Domain
    • Username

    Add an IMAP/SMTP account

    Screenshot: IMAP/SMTP configuration in Windows Mail
    Figure 2: IMAP/SMTP configuration in Windows Mail

    The information required to connect to a server via IMAP/SMTP is:

    • Email address
    • Username
    • Password
    • IMAP email server
    • IMAP SSL (if your IMAP server requires SSL encryption)
    • IMAP port
    • SMTP email server
    • SMTP SSL (if your SMTP server requires SSL encryption)
    • SMTP port
    • Whether SMTP server requires authentication
    • Whether SMTP uses the same credentials as IMAP (If not, user must also provide SMTP credentials)

    Security Features

    Mail provides administrators with some level of security through Exchange ActiveSync policies (Mobile Device Mailbox Policies in Exchange 2013). It doesn’t support any means of managing or securing PCs that are connected via IMAP. EAS includes support for certificate-based authentication and remote wipe.

    Exchange ActiveSync Policy Support

    Exchange ActiveSync devices can be managed using Exchange ActiveSync policies. Mail supports the following EAS policies:

    • Password required
    • Allow simple password
    • Minimum password length (to a maximum of 8 characters)
    • Number of complex characters in password (to a maximum of 2 characters)
    • Password history
    • Password expiration
    • Device encryption required (on Windows RT and editions of Windows that support BitLocker. See What's New in BitLocker for details about BitLocker improvements in Windows 8.1.)
    • Maximum number of failed attempts to unlock device
    • Maximum time of inactivity before locking

    Important If AllowNonProvisionableDevices is set to false in an EAS policy and the policy contains settings that are not part of this list, the device won’t be able to connect to the Exchange server.

    Getting into Compliance

    Most of the policies listed above can be automatically enabled by Mail, but there are certain cases where the user has to take action first. These are:

    • Server requires device encryption:
      • User has a device that supports BitLocker but BitLocker isn’t enabled. User must manually enable BitLocker.
      • User has a Windows RT device that supports device encryption but it is suspended. User must reboot.
      • User has a Windows RT device that supports device encryption, but it isn’t enabled. User must sign into Windows with a Microsoft account.
    • An admin on this PC doesn’t have a strong password: All admin accounts must have a strong password before continuing.
    • The user’s account doesn’t have a strong password: User must set a strong password before continuing.

    Windows 8 Picture Passwords and ActiveSync Policy

    If a Windows 8.x user uses a picture password and Exchange ActiveSync policy requires a password, the user will still need to create and enter a password in accordance with the policy.

    ActiveSync Policy vs. Group Policy on domain-joined Windows 8.1 devices

    If a Windows 8.1 PC is joined to an Active Directory domain and controlled by Group Policy, there may be conflicting policy settings between Group Policy and an Exchange ActiveSync policy. In the event of any conflict, the strictest rule in either policy takes precedence. The only exception is password complexity rules for domain accounts. Group policy rules for password complexity (length, expiry, history, number of complex characters) take precedence over Exchange ActiveSync policies – even if group policy rules for password complexity are less strict than Exchange ActiveSync rules, the domain account will be deemed in compliance with Exchange ActiveSync policy.
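    The precedence rules above can be sketched as a merge function. This is a simplified illustration: the key names are invented, and "strictest" is modeled as "larger number wins", which is a simplification (for some settings, such as inactivity timeout, the stricter value is actually the smaller one):

```python
def effective_policy(gp: dict, eas: dict) -> dict:
    """Merge Group Policy and EAS policy settings for a domain-joined PC.
    The strictest value wins, except for password-complexity rules,
    where Group Policy always takes precedence, even if less strict."""
    COMPLEXITY_KEYS = {"min_password_length", "password_expiry_days",
                       "password_history", "complex_characters"}
    merged = {}
    for key in gp.keys() | eas.keys():
        if key in COMPLEXITY_KEYS and key in gp:
            merged[key] = gp[key]  # GP wins even if less strict
        else:
            # Simplification: treat larger numbers as stricter
            merged[key] = max(gp.get(key, 0), eas.get(key, 0))
    return merged

print(effective_policy({"min_password_length": 6},
                       {"min_password_length": 8, "max_inactivity_minutes": 15}))
```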

    Certificate-Based Authentication

    Communications applications can connect to a corporate Exchange service configured to require certificate-based authentication. User authentication certificates can be provisioned to Windows 8.1 devices by administrators, or end users can browse to a certificate and install it to the user certificate store.

    Users can add and connect an email account using a certificate. (For account setup, password entry is required per standard account setup.) Users may be prompted to give the Mail application permission to access their user certificate, and should accept the prompt to enable certificate usage. In cases where multiple certificates are available, the user can go to account Settings to select the desired certificate.

    Non-PIN protected software certificates are supported.

    Remote Wipe

    Mail supports the Exchange ActiveSync remote wipe directive, but unlike Windows Phone (which deletes all data on the device), Mail scopes the data deleted to the specified Exchange ActiveSync account for which the remote wipe command is issued. The user's personal data is not deleted. Additionally, attachments saved from that account are made inaccessible.

    For example, if a user has an Outlook.com account for personal use and a Contoso.com account for work use, a remote wipe directive from the Contoso.com server would impact Windows 8.1 and Windows Phone 7 as follows:

    Data | Windows Phone 7 | Windows 8.1 Mail
    Contoso.com email | Deleted | Deleted
    Contoso.com contacts | Deleted | Deleted
    Contoso.com calendars | Deleted | Deleted
    Contoso.com attachments | Deleted | Not deleted, but not accessible
    Outlook.com email | Deleted | Not deleted
    Outlook.com contacts | Deleted | Not deleted
    Outlook.com calendars | Deleted | Not deleted
    Outlook.com attachments | Deleted | Not deleted
    Other documents, files, pictures, etc. | Deleted | Not deleted

    Account Roaming

    To make it as easy as possible for users to have all of their accounts set up on all of their devices, Windows 8.1 uploads vital account information to the user’s Microsoft account. This information includes email address, server, server settings, and password. When a user signs into a new PC with their Microsoft account, their email accounts are automatically set up for them.

    Passwords are not uploaded from a PC for any accounts which are controlled by any Exchange ActiveSync policies. Users will have to enter their password to begin syncing a policy-controlled account on a new PC.

    If using client certificate authentication, the client certificate and the certificate selection for an account will not be roamed. Users will have to select their desired client certificate to begin syncing a client certificate account on a new PC.

    Microsoft Accounts

    By default, users are required to have a Microsoft account (formerly known as Windows Live ID) to use the Windows Communications apps. This will usually be the Microsoft account the user signs into Windows with; if they have not signed in with one, they will be prompted to provide one before proceeding.

    If the Microsoft account is… | Mail will…
    An Outlook.com or Hotmail account | Automatically sync email, Calendar, and Contacts using Exchange ActiveSync
    Not an Outlook.com or Hotmail account (for example, dave@contoso.com) | Prompt the user to provide the password for their email account

    Can my organization remove the requirement for a Microsoft account?

    You can apply a Group Policy to a device to make a Microsoft Account optional for the Windows Communications apps.

    Note: The Group Policy setting is configured in the Computer Configuration node of the Group Policy and applies to all users of the computer or device to which it's applied. The setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. Windows RT devices can use Local Group Policy.

    To apply the Group Policy setting:

    1. Launch GPEdit by opening the “run” prompt (Windows key + r), and entering GPEdit.msc
    2. Go to Computer Configuration > Administrative Templates > Windows Components > App runtime
    3. Select Allow Microsoft accounts to be optional to configure the policy

    If the Group Policy is applied and a Microsoft account is not used, the Communications apps will:

    1. Prompt the user for a work account (i.e. an Exchange ActiveSync account) password
    2. If account credentials are provided, use Exchange ActiveSync to synchronize email, Contacts and Calendar from the work account

    A user can add additional accounts if desired. You can use corporate firewalls or other mechanisms to block access to any consumer email services as needed.

    The following functionality will be unavailable to a user without a Microsoft Account:

    • Windows Store Application Installs
    • Account Settings roaming to additional devices
    • Connectivity to additional 3rd party services (e.g. Social sites)
    • Email communication from Microsoft regarding any updates to Microsoft Services Agreement.

    Data Consumption

    By default, Mail only downloads one month of email (up from two weeks in Windows 8.0). This is user-configurable and can potentially download the user’s entire mailbox. For Exchange ActiveSync accounts, all contacts are downloaded, and calendar events are downloaded only from three months behind the current date to 18 months ahead.

    Additionally, messages can be only partially downloaded to reduce bandwidth use as follows:

    Content | On unmetered networks | On metered networks
    Message bodies | Truncated to the first 100KB or 20KB, depending on folder and device conditions | Truncated to the first 20KB. For more details, see Engineering Windows 8 for mobile networks.
    Attachments | Some attachments are downloaded automatically when device conditions allow; attachments for messages in the Junk folder are not downloaded automatically | Never downloaded automatically

    Embedded images in email messages are downloaded on-demand as the user reads them, and attachments which are not downloaded can be downloaded on-demand as the user attempts to open them.

    Mail downloads all folders for an account. Users can configure the period of email which is downloaded to adjust the size of data for an account. Mail does not enforce any limits on number and size of attachments users can send.
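    The body-truncation rules described above can be sketched as a small helper. This is illustrative only: the parameter `large_budget` stands in for the unspecified "folder and device conditions", which are not documented here:

```python
def truncate_body(body: bytes, metered: bool, large_budget: bool = True) -> bytes:
    """Truncate a message body per the download rules: 20 KB on metered
    networks; 100 KB or 20 KB on unmetered networks depending on folder
    and device conditions (modeled here as the large_budget flag)."""
    limit = 20 * 1024 if (metered or not large_budget) else 100 * 1024
    return body[:limit]

body = b"x" * (150 * 1024)
print(len(truncate_body(body, metered=True)))   # 20480
print(len(truncate_body(body, metered=False)))  # 102400
```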

    Automatic Replies

    Mail allows users to view and set their automatic reply messages (aka Out of Office or OOF messages). There is a visual indication when auto-reply is enabled. Users can view and set automatic reply plain text content. For corporate accounts, separate internal and external auto-reply messages are supported.

    There is no date/time support for specifying start or end time for automatic replies.

    Enterprise Connectivity

    Authenticated Proxies

    The communications applications can connect over LAN or WiFi connections via authenticated proxies which use standard authentication methods including: NTLM, Digest, Negotiate, and Basic authentication.

    Any user credentials entered can be cached for the session, or remembered persistently.

    Self-Signed Certificates

    The communications applications warn the user with a prompt providing an option to connect anyway when trying to connect to services with common service certificate issues. See Self-Signed Certificates in Limitations below for details and recommendations.

    Limitations

    The following features are currently not supported by Mail:

    • Direct mailbox connections using POP: Only EAS and IMAP protocols are supported.

      Note This does not mean that Windows 8.1 does not support POP. This post is about the Mail app. See Using email accounts over POP on Windows 8.1 and Windows RT 8.1 for workarounds.

    • Opaque-Signed and Encrypted S/MIME messages: When S/MIME messages are received in Mail, it displays an email item with a message body that begins with “This encrypted message can’t be displayed.”

      To view email items in the S/MIME format, users must open the message using Outlook Web App, Microsoft Outlook, or another email program that supports S/MIME messages. For more information, see Opaque-Signed and Encrypted S/MIME Message on MSDN.

    Self-Signed Certificates in Windows Mail 8.1

    Users may experience connectivity errors when trying to connect to an Exchange server that uses a self-signed certificate or a certificate with other common issues. The user may receive the following error message.

    There’s a problem with a server’s security certificate. It might not be safe to connect to the server because… <details>.

    You can use one of the following options to resolve this issue.

    To resolve the issue with self-signed certificates… | Use this option if…
    Install a certificate signed by a trusted certification authority (CA) on the server
    • You want Exchange to work for all clients without prompting
    • You do not want your users to ignore or bypass certificate-related errors
    • You want to avoid installing a self-signed certificate or a certificate signed by an untrusted CA on all devices
    Install the server’s self-signed certificate on the device
    • You want to save the cost of a certificate signed by a trusted CA
    • You want Exchange to work from Windows 8.1 devices that have the self-signed certificate installed.
    Instruct users to ignore common certificate issues
    • You want to avoid the cost of a CA-signed certificate or do not want to install the server’s self-signed certificate on all devices
    • Users are knowledgeable about certificate-related errors

    At the prompt, users can connect anyway to ignore common service certificate issues such as self-signed certificates, allowing the communications applications to use an encrypted connection to the email service despite the certificate issue. If users choose to connect anyway, their selection is remembered (it can be viewed and changed at any time via the account’s Settings).

    We recommend that users select Cancel when they receive a certificate-related error and contact the administrator to fix the issue (option 1).

    See Digital Certificates and SSL for more information.

    Install a server’s self-signed certificate on the device

    This enables Exchange to work for Windows 8.1 devices that have the certificate installed.

    Note The administrator must provide a certificate file (.cer). The certificate can be installed to the trusted root certificate authority store for either of the following options:

    • For the current user This option does not require admin rights but must be completed for each user on the device.
    • For the local device This option requires administrator rights and needs to be done only one time for a device.

    The user or the system administrator can use the .cer file to install the certificate. To do this, use one of the following methods:

    • Use the command-line

      At an elevated command prompt, run the following command:

      certutil.exe -f -addstore root <certificate file>.cer

      NOTE The command installs the certificate for all users on the device.

    • Use the Certificate Import Wizard

      1. Double-click the certificate file. A certificate dialog opens.
      2. Click Install Certificate. A Certificate Import Wizard window opens.
      3. Select the option to install the certificate for only the current user or for the local device.
      4. Select Place all certificates in the following store.
      5. Click Browse to open the store selection dialog, and select Trusted Root Certification Authorities.
      6. Click OK. You are returned to the Certificate Import Wizard dialog, which displays the certificate store and the certificate to be installed into it.

    Troubleshooting Mail Client Connectivity

    If a Mail user can't successfully connect to an account, consider the following:

    • Verify that the user is using the latest version of the Mail app. A user can check for updates to the Mail app by doing the following: from the Start screen, go to Store > Settings > App updates > Check for updates.
    • To rule out any transient issues, the user can wait a few minutes and try again.
    • Some cloud-based email services (for example, Microsoft Office 365) require that the user register their account before they can use email clients such as Mail. Office 365 users register their account when they sign in to the service for the first time. If the user is not an Office 365 user, the user registers their account when they sign in to their account using their Microsoft account or sign in to Outlook Web App. The user must sign out of Outlook Web App before they try to connect using Mail again.

    TIP The user will see the following message if they haven't registered their account: “We couldn’t find the settings for. Provide us with more info and we’ll try connecting again.”

    What else do I need to know?

    Updates

  • Exchange 2013 Server Role Architecture

    The Past: Exchange 2007 and Exchange 2010

    In 2005, we set out to develop Exchange 2007. During the preliminary planning of Exchange 2007, we realized that the server architecture model had to evolve to deal with hardware constraints, like CPU, that existed at the time and would exist for a good portion of the product’s lifecycle. The architectural change resulted in five server roles, three of which were core to the product:

    1. Edge Transport for routing and anti-malware from the edge of the organization
    2. Hub Transport for internal routing and policy enforcement
    3. Mailbox for storage of data
    4. Client Access for client connectivity and web services
    5. Unified Messaging for voice mail and voice access

    E2007 Design
    Figure 1: Server role architecture in Exchange 2007 & Exchange 2010

    Our goal was to make these server roles autonomous units. Unfortunately, that goal did not materialize in either Exchange 2007 or Exchange 2010. The server roles were tightly coupled along four key dimensions:

    1. Functionality was scattered across all the server roles, thereby making it a requirement that Hub Transport and the Client Access server roles be deployed in every Active Directory site where Mailbox servers were deployed.
    2. This also necessitated a tight versioning alignment – a down-level Hub Transport or Client Access server shouldn’t communicate with a higher-version Mailbox server; in some cases this was enforced (e.g., Exchange 2007 Hub Transport servers cannot deliver messages to Exchange 2010 Mailbox servers), but in other cases it was not (e.g., an Exchange 2010 SP1 Client Access server can render data from an Exchange 2010 SP2 Mailbox server). The versioning restriction also meant that you could not simply upgrade a single server role to take advantage of new functionality – you had to upgrade all of them.
    3. The server roles were also coupled from a geographical affinity perspective, because the middle-tier server roles used RPC as the communication mechanism to the Mailbox server role.
    4. Similarly, from a user perspective, the server roles were tightly coupled – a set of users served by a given Mailbox server was always served by a given set of Client Access and Hub Transport servers.

    The functionality and versioning aspects are the key issues. To understand them better, let’s look at the following diagram:

    2010Island
    Figure 2: Inter-server communication across different layers of functionality

    As you can see from the above diagram, there are three layers: 1) Protocols, 2) Business Logic, and 3) Storage.  If you are familiar with the OSI model, you might expect the protocol layer to behave like the application layer, with data flowing from the application layer through the other layers to reach the physical layer (or vice versa).  However, that is not the case in Exchange 2007 or Exchange 2010; a protocol can interact directly with the storage layer.  For example, the transport instance on Server1 (Exchange 2010 SP1) can deliver mail to the Store service on Server2 (Exchange 2010 SP2). The reverse is also true: the store can submit mail via the Store Submission service on Server2 to the transport service running on Server1. In either scenario, content conversion happens on the lower-version server (as content conversion in this example happens within the transport components).  While a newer version interacting with an older version may not be problematic, the same cannot be said when the reverse is true, because the older version simply does not know about changes in the newer version that may break functionality (hence the blockers we put in place in certain circumstances, and the guidance we provide on upgrade procedures).

    The end result is that in Exchange 2007 and Exchange 2010 deployments, the server roles are deployed as a single monolithic entity.

    The Present: Exchange Server 2013

    When we started planning Exchange Server 2013, we focused on a single architectural tenet – improve the architecture to serve the needs of deployments at all scales, from the small 100-user shop to hosting hundreds of millions of mailboxes in Office 365. This single tenet drove us to make major architectural changes and investments across the entire product. This shift happened in part because, unlike when we were designing Exchange 2007, we are no longer CPU bound; in fact, there is so much readily available CPU on modern server hardware that the notion of dedicated server roles no longer makes sense, as hardware is ultimately wasted (hence the recommendation in Exchange 2010 to deploy multi-role servers).

    In Exchange Server 2013, we have two basic building blocks – the Client Access array and the Database Availability Group (DAG). Each provides a unit of high availability and fault tolerance that are decoupled from one another. Client Access servers make up the CAS array, while Mailbox servers comprise the DAG.

    2013
    Figure 3: New server role architecture simplifies deployments

    So what is the Client Access server in Exchange 2013? The Client Access server role comprises three components: client protocols, SMTP, and a UM Call Router. The CAS role is a thin, stateless (at the protocol session level) server that is organized into a load-balanced configuration. Unlike previous versions, session affinity is not required at the load balancer (but you still want a load balancer to handle connection management policies and health checking). This is because logic now exists in CAS to authenticate the request, and then route the request to the Mailbox server that hosts the active copy of the mailbox database.
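    This stateless routing model can be illustrated with a small sketch. The names below (active_copy_map, mailbox_db, route_request) are hypothetical and purely illustrative; they are not actual Exchange code or APIs. The point is that any CAS can serve any request, because routing depends only on where the active database copy currently lives, not on which CAS the client happened to connect to.

    ```python
    # Illustrative sketch of stateless CAS routing (NOT actual Exchange code).
    # Because routing is a pure lookup, no session affinity is needed at the
    # load balancer: every CAS reaches the same answer for the same user.

    # Hypothetical directory state: mailbox database -> server with active copy
    active_copy_map = {
        "DB01": "MBX1.contoso.com",
        "DB02": "MBX2.contoso.com",
    }

    # Hypothetical mailbox -> database mapping (in reality, read from Active Directory)
    mailbox_db = {"kim@contoso.com": "DB01", "lee@contoso.com": "DB02"}

    def route_request(user: str) -> str:
        """Return the Mailbox server that should service this user's request."""
        database = mailbox_db[user]
        return active_copy_map[database]

    print(route_request("kim@contoso.com"))  # MBX1.contoso.com
    ```

    If the active copy moves (for example, after a failover), only active_copy_map changes; clients keep connecting to the same namespace and any CAS transparently proxies to the new location.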

    The Mailbox server role now hosts all the components and/or protocols that process, render, and store the data. No clients ever connect directly to the Mailbox server role; all client connections are handled by the Client Access server role. Mailbox servers can be added to a Database Availability Group, thereby forming a highly available unit that can be deployed in one or more datacenters.

    Unlike the past two generations, these two server roles do not suffer from the same constraints:

    1. Functionality is not dispersed across both server roles. All data rendering now occurs on the Mailbox server that hosts the active mailbox database copy. As we will discuss in more detail in a future article, the Client Access server role is merely a proxy.
    2. Versioning is also decoupled as a result of moving the data rendering stack to a single server role.
    3. Client Access servers and the Mailbox servers no longer leverage the chatty RPC protocol for client sessions, thereby removing the geographical affinity issues.
    4. From a user perspective, users do not need to be serviced by the Client Access servers that are located within the same Active Directory site as the Mailbox servers hosting the user’s mailboxes.

    If we return to the layering diagram, we can see how this changes:

    2013 islandv2
    Figure 4: Inter-server communication in Exchange 2013

    Instead of allowing communication between servers to occur at any layer in the stack, communication must occur between servers at the protocol layer. This ensures that for a given mailbox’s connectivity, the protocol being used is always served by the protocol instance that is local to the active database copy. In other words, if my mailbox is located on Server1 and I want to send a message to a mailbox on Server2, the message must be sent from Server1’s transport components to the transport components on Server2; content conversion of the message then occurs on Server2 as the message is injected into the store.  If I upgrade the Mailbox server with a service pack or cumulative update, then for a given mailbox hosted on that server, all data rendering and content conversions for that mailbox will be local, thus removing version constraints and functionality issues that arose in previous releases.

    Looking Ahead

    This article is the start of several articles focused on architecture and the investments we have made in Exchange Server 2013. Over the next several weeks:

    • We will delve into more specifics with respect to the server roles.
    • Discuss our investments with respect to storage.
    • Discuss sizing Exchange Server 2013.
    • Discuss load balancing and namespace planning.

    And for those of you that did not get a chance to attend MEC 2012, feel free to visit www.iammec.com/video and watch my technical architecture keynote.

    Conclusion

    Exchange Server 2013 introduces a new building block architecture that facilitates deployments at all scales. All core Exchange functionality for a given mailbox is always served by the Mailbox server where the mailbox’s database is currently activated. The changes introduced in the Client Access server role enable you to move away from complicated session affinity load balancing solutions, simplifying the network stack. From an upgrade perspective, all components on a given server are upgraded together, thereby virtually eliminating the need to juggle Client Access and Mailbox server versions. And finally, the new server role architecture positions Exchange to take advantage of hardware trends for the foreseeable future – core counts will continue to increase, disk capacity will continue to increase, and RAM prices will continue to drop.

    Ross Smith IV
    Principal Program Manager
    Exchange Customer Experience

  • Released: Update Rollup 5 for Exchange 2010 Service Pack 3 and Update Rollup 13 for Exchange 2007 Service Pack 3

    The Exchange team is announcing the availability of the following updates:

    Exchange Server 2010 Service Pack 3 Update Rollup 5 resolves customer reported issues and includes previously released security bulletins for Exchange Server 2010 Service Pack 3. A complete list of the issues resolved in this rollup is available in KB2917508.

    Exchange Server 2007 Service Pack 3 Update Rollup 13 provides recent DST changes and adds the ability to publish a 2007 Edge Server from Exchange Server 2013. Update Rollup 13 also contains all previously released security bulletins and fixes and updates for Exchange Server 2007 Service Pack 3. More information on this rollup is available in KB2917522.

    Neither release is classified as a security release but customers are encouraged to deploy these updates to their environment once proper validation has been completed.

    Note: KB articles may not be fully available at the time of publishing of this post.

    The Exchange Team

  • The Preferred Architecture

    During my session at the recent Microsoft Exchange Conference (MEC), I revealed Microsoft’s preferred architecture (PA) for Exchange Server 2013. The PA is the Exchange Engineering Team’s prescriptive approach to what we believe is the optimum deployment architecture for Exchange 2013, and one that is very similar to what we deploy in Office 365.

    While Exchange 2013 offers a wide variety of architectural choices for on-premises deployments, the architecture discussed below is our most scrutinized one ever. While other deployment architectures are supported, they are not recommended.

    The PA is designed with several business requirements in mind. For example, requirements that the architecture be able to:

    • Include both high availability within the datacenter, and site resilience between datacenters
    • Support multiple copies of each database, thereby allowing for quick activation
    • Reduce the cost of the messaging infrastructure
    • Increase availability by optimizing around failure domains and reducing complexity

    The specific, prescriptive nature of the PA means of course that not every customer will be able to deploy it (for example, customers without multiple datacenters). And some of our customers have different business requirements or other needs that necessitate an architecture different from that shown here. If you fall into one of those categories and you want to deploy Exchange on-premises, there are still advantages to adhering to the PA as closely as possible, and deviating only where your requirements differ significantly. Alternatively, you can consider Office 365, where you can take advantage of the PA without having to deploy or manage servers.

    Before I delve into the PA, I think it is important that you understand a concept that is the cornerstone for this architecture – simplicity.

    Simplicity

    Failure happens. There is no technology that can change this. Disks, servers, racks, network appliances, cables, power substations, generators, operating systems, applications (like Exchange), drivers, and other services – there is simply no part of an IT services offering that is not subject to failure.

    One way to mitigate failure is to build in redundancy. Where one entity is likely to fail, two or more entities are used. This pattern can be observed in Web server arrays, disk arrays, and the like. But redundancy by itself can be prohibitively expensive (simple multiplication of cost). For example, the cost and complexity of the SAN based storage system that was at the heart of Exchange until the 2007 release, drove the Exchange Team to step up its investment in the storage stack and to evolve the Exchange application to integrate the important elements of storage directly into its architecture. We recognized that every SAN system would ultimately fail, and that implementing a highly redundant system using SAN technology would be cost-prohibitive. In response, Exchange has evolved from requiring expensive, scaled-up, high-performance SAN storage and related peripherals, to now being able to run on cheap, scaled-out servers with commodity, low-performance SAS/SATA drives in a JBOD configuration with commodity disk controllers. This architecture enables Exchange to be resilient to any storage related failure, while enabling you to deploy large mailboxes at a reasonable cost.

    By building the replication architecture into Exchange and optimizing Exchange for commodity storage, the failure mode is predictable from a storage perspective. This approach does not stop at the storage layer; redundant NICs, power supplies, etc., can also be removed from the server hardware. Whether it is a disk, controller, or motherboard that fails, the end result should be the same, another database copy is activated and takes over.

    The more complex the hardware or software architecture, the more unpredictable failure events can be. Managing failure at any scale is all about making recovery predictable, which drives the necessity of having predictable failure modes. Examples of complex redundancy are active/passive network appliance pairs, aggregation points on the network with complex routing configurations, network teaming, RAID, multiple fiber pathways, etc. Removing complex redundancy seems unintuitive on its face – how can removing redundancy increase availability? Moving away from complex redundancy models to a software-based redundancy model creates a predictable failure mode.

    The PA removes complexity and redundancy where necessary to drive the architecture to a predictable recovery model: when a failure occurs, another copy of the affected database is activated.

    The PA is divided into four areas of focus:

    1. Namespace design
    2. Datacenter design
    3. Server design
    4. DAG design

    Namespace Design

    In the Namespace Planning and Load Balancing Principles articles, I outlined the various configuration choices that are available with Exchange 2013. From a namespace perspective, the choices are to either deploy a bound namespace (having a preference for the users to operate out of a specific datacenter) or an unbound namespace (having the users connect to any datacenter without preference).

    The recommended approach is to utilize the unbound model, deploying a single namespace per client protocol for the site resilient datacenter pair (where each datacenter is assumed to represent its own Active Directory site - see more details on that below). For example:

    • autodiscover.contoso.com
    • For HTTP clients: mail.contoso.com
    • For IMAP clients: imap.contoso.com
    • For SMTP clients: smtp.contoso.com

    namespacedesign
    Figure 1: Namespace Design

    Each namespace is load balanced across both datacenters in a configuration that does not leverage session affinity, resulting in fifty percent of traffic being proxied between datacenters. Traffic is equally distributed across the datacenters in the site resilient pair via DNS round-robin, geo-DNS, or a similar solution you may have at your disposal. From our perspective, DNS round-robin is the least complex solution and the easiest to manage, so it is our recommendation.
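    The effect of DNS round-robin on traffic distribution can be sketched as follows. The VIP addresses below are invented for illustration; the sketch simply shows why, with one load-balancer VIP per datacenter and rotating answers, roughly half of all client sessions land in each datacenter of the pair.

    ```python
    # Sketch of DNS round-robin across a site resilient datacenter pair.
    # Each "resolution" of mail.contoso.com returns the next VIP in rotation,
    # so client sessions split roughly evenly between the two datacenters.
    from itertools import cycle

    vips = ["10.0.1.10", "10.0.2.10"]  # hypothetical: one VIP per datacenter
    rotation = cycle(vips)

    # Ten client "resolutions" of the namespace
    hits = [next(rotation) for _ in range(10)]
    print(hits.count("10.0.1.10"), hits.count("10.0.2.10"))  # 5 5
    ```

    Real DNS round-robin rotates the order of records in the answer rather than returning a single address, but the resulting distribution is the same in aggregate.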

    In the event that you have multiple site resilient datacenter pairs in your environment, you will need to decide if you want to have a single worldwide namespace, or if you want to control the traffic to each specific datacenter pair by using regional namespaces. Ultimately your decision depends on your network topology and the associated cost with using an unbound model; for example, if you have datacenters located in North America and Europe, the network link between these regions might not only be costly, but it might also have high latency, which can introduce user pain and operational issues. In that case, it makes sense to deploy a bound model with a separate namespace for each region.

    Site Resilient Datacenter Pair Design

    To achieve a highly available and site resilient architecture, you must have two or more datacenters that are well-connected (ideally, you want a low round-trip network latency, otherwise replication and the client experience are adversely affected). In addition, the datacenters should be connected via redundant network paths supplied by different operating carriers.

    While we support stretching an Active Directory site across multiple datacenters, for the PA we recommend having each datacenter be its own Active Directory site. There are two reasons:

    1. Transport site resilience via Shadow Redundancy and Safety Net can only be achieved when the DAG has members located in more than one Active Directory site.
    2. Active Directory has published guidance that states that subnets should be placed in different Active Directory sites when the round trip latency is greater than 10ms between the subnets.

    Server Design

    In the PA, all servers are physical, multi-role servers. Physical hardware is deployed rather than virtualized hardware for two reasons:

    1. The servers are scaled to utilize eighty percent of resources during the worst-failure mode.
    2. Virtualization adds an additional layer of management and complexity, which introduces additional recovery modes that do not add value, as Exchange provides equivalent functionality out of the box.
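    The first point above implies a simple sizing relationship: the utilization of the surviving servers after a failure is the normal per-server utilization scaled up by the fraction of servers lost. A minimal sketch (the numbers below are illustrative, not prescribed values):

    ```python
    # Worst-failure sizing sketch: with n servers and f of them failed, the
    # survivors absorb the failed servers' load. The PA targets <= 80%
    # utilization in this worst-failure mode.
    def worst_failure_utilization(normal_util: float, n: int, f: int) -> float:
        """Per-server utilization after f of n servers fail (load redistributed)."""
        return normal_util * n / (n - f)

    # Illustrative example: 8 servers running at 60% normally; losing 2
    # servers pushes the remaining 6 to the 80% design ceiling.
    print(round(worst_failure_utilization(0.60, 8, 2), 2))  # 0.8
    ```

    Working backwards from the 80% ceiling tells you how much normal-runtime headroom a design of a given size must keep free.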

    By deploying multi-role servers, the architecture is simplified as all servers have the same hardware, installation process, and configuration options. Consistency across servers also simplifies administration. Multi-role servers provide more efficient use of server resources by distributing the Client Access and Mailbox resources across a larger pool of servers. Client Access and Database Availability Group (DAG) resiliency is also increased, as there are more servers available for the load-balanced pool and for the DAG.

    Commodity server platforms (e.g., 2U servers that hold 12 large form-factor drive bays within the server chassis) are used in the PA. Additional drive bays can be deployed per-server depending on the number of mailboxes, mailbox size, and the server’s scalability.

    Each server houses a single RAID1 disk pair for the operating system, Exchange binaries, protocol/client logs, and transport database. The rest of the storage is configured as JBOD, using large capacity 7.2K RPM serially attached SCSI (SAS) disks (while SATA disks are also available, the SAS equivalent provides better IO and a lower annualized failure rate). BitLocker is used to encrypt each disk, thereby providing data encryption at rest and mitigating concerns around data theft via disk replacement.

    To ensure that the capacity and IO of each disk is used as efficiently as possible, four database copies are deployed per-disk. The normal run-time copy layout (calculated in the Exchange 2013 Server Role Requirements Calculator) ensures that there is no more than a single copy activated per-disk.

    ServerDesign
    Figure 2: Server Design
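    The "four copies per disk, at most one active per disk" layout can be sketched by striping databases across the disk pool. This is purely illustrative; the real copy layout comes from the Exchange 2013 Server Role Requirements Calculator, and the database and disk counts below are invented.

    ```python
    # Sketch of per-disk database copy layout: each disk holds copies of four
    # different databases, and the layout is arranged so that during normal
    # runtime only one of those copies is the active one.
    disks = 4
    databases = [f"DB{i:02d}" for i in range(1, disks * 4 + 1)]  # 16 databases

    layout = {d: [] for d in range(disks)}
    for i, db in enumerate(databases):
        layout[i % disks].append(db)  # stripe copies round-robin across disks

    # Treat the first copy listed on each disk as the normally-active one;
    # every disk then carries exactly one active copy and three passive copies.
    for disk, copies in layout.items():
        print(f"disk {disk}: copies={copies} active={copies[0]}")
    ```

    Keeping activations spread out this way means a single disk failure takes out at most one active database, and the IO of each disk is shared between one active workload and several cheaper passive workloads.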

    At least one disk in the disk pool is reserved as a hot spare. AutoReseed is enabled and quickly restores database redundancy after a disk failure by activating the hot spare and initiating database copy reseeds.

    Database Availability Group Design

    Within each site resilient datacenter pair you will have one or more DAGs.

    DAG Configuration

    As with the namespace model, each DAG within the site resilient datacenter pair operates in an unbound model with active copies distributed equally across all servers in the DAG. This model provides two benefits:

    1. Ensures that each DAG member’s full stack of services is being validated (client connectivity, replication pipeline, transport, etc.).
    2. Distributes the load across as many servers as possible during a failure scenario, thereby only incrementally increasing resource utilization across the remaining members within the DAG.

    Each datacenter is symmetrical, with an equal number of member servers within a DAG residing in each datacenter. This means that each DAG contains an even number of servers and uses a witness server for quorum arbitration.
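    The role of the witness in this symmetric design can be sketched with the basic majority-vote arithmetic: with an even number of DAG members, the witness server's vote breaks the tie. The helper below is an illustrative simplification of quorum arbitration, not actual cluster code.

    ```python
    # Quorum sketch: the DAG retains quorum while a strict majority of total
    # voters (DAG members + witness) remain reachable.
    def has_quorum(members_up: int, total_members: int, witness_up: bool) -> bool:
        voters = total_members + 1                 # witness adds one vote
        votes = members_up + (1 if witness_up else 0)
        return votes > voters // 2                 # strict majority required

    # Eight-member DAG split evenly across two datacenters (4 + 4):
    print(has_quorum(4, 8, witness_up=True))   # True  - surviving half + witness
    print(has_quorum(4, 8, witness_up=False))  # False - 4 of 9 votes is no majority
    ```

    This is why witness placement (discussed below) determines which half of the pair survives a datacenter-level failure.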

    The DAG is the fundamental building block in Exchange 2013. With respect to DAG size, a larger DAG provides more redundancy and resources. Within the PA, the goal is to deploy larger DAGs (typically starting out with an eight member DAG and increasing the number of servers as required to meet your requirements) and only create new DAGs when scalability introduces concerns over the existing database copy layout.

    DAG Network Design

    Since the introduction of continuous replication in Exchange 2007, Exchange has recommended multiple replication networks for separating client traffic from replication traffic. Deploying two networks allows you to isolate certain traffic along different network pathways and ensure that during certain events (e.g., reseed events) the network interface is not saturated (which is an issue with 100Mb, and to a certain extent, 1Gb interfaces). However, for most customers, having two networks operating in this manner was only a logical separation, as the same copper fabric was used by both networks in the underlying network architecture.

    With 10Gb networks becoming the standard, the PA moves away from the previous guidance of separating client traffic from replication traffic. A single network interface is all that is needed because ultimately our goal is to achieve a standard recovery model despite the failure - whether a server failure occurs or a network failure occurs, the result is the same, a database copy is activated on another server within the DAG. This architectural change simplifies the network stack, and obviates the need to eliminate heartbeat cross-talk.

    Witness Server Placement

    Ultimately, the placement of the witness server determines whether the architecture can provide automatic datacenter failover capabilities or whether it will require a manual activation to enable service in the event of a site failure.

    If your organization has a third location with a network infrastructure that is isolated from network failures that affect the site resilient datacenter pair in which the DAG is deployed, then the recommendation is to deploy the DAG’s witness server in that third location. This configuration gives the DAG the ability to automatically failover databases to the other datacenter in response to a datacenter-level failure event, regardless of which datacenter has the outage.

    DAG Design
    Figure 3: DAG (Three Datacenter) Design

    If your organization does not have a third location, then place the witness server in one of the datacenters within the site resilient datacenter pair. If you have multiple DAGs within the site resilient datacenter pair, then place the witness server for all DAGs in the same datacenter (typically the datacenter where the majority of the users are physically located). Also, make sure the Primary Active Manager (PAM) for each DAG is also located in the same datacenter.

    Data Resiliency

    Data resiliency is achieved by deploying multiple database copies. In the PA, database copies are distributed across the site resilient datacenter pair, thereby ensuring that mailbox data is protected from software, hardware and even datacenter failures.

    Each database has four copies, with two copies in each datacenter, which means that at a minimum the PA requires four servers. Of these four copies, three are configured as highly available. The fourth copy (the copy with the highest Activation Preference) is configured as a lagged database copy. Due to the server design, each copy of a database is isolated from its other copies, thereby reducing failure domains and increasing the overall availability of the solution, as discussed in DAG: Beyond the “A”.

    The purpose of the lagged database copy is to provide a recovery mechanism for the rare event of system-wide, catastrophic logical corruption. It is not intended for individual mailbox recovery or mailbox item recovery.

    The lagged database copy is configured with a seven day ReplayLagTime. In addition, the Replay Lag Manager is also enabled to provide dynamic log file play down for lagged copies. This feature ensures that the lagged database copy can be automatically played down and made highly available in the following scenarios:

    • When a low disk space threshold is reached
    • When the lagged copy has physical corruption and needs to be page patched
    • When there are fewer than three available healthy copies (active or passive) for more than 24 hours
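    The play-down triggers above amount to a small decision rule, sketched below. The function name and parameters are illustrative only; the real logic lives inside the Replay Lag Manager.

    ```python
    # Sketch of the Replay Lag Manager play-down triggers described above.
    # "Playing down" a lagged copy means replaying its queued logs so it
    # becomes a normal highly available copy.
    def should_play_down(disk_space_low: bool,
                         needs_page_patch: bool,
                         healthy_copies: int,
                         hours_below_three: float) -> bool:
        if disk_space_low:
            return True                  # low disk space threshold reached
        if needs_page_patch:
            return True                  # physical corruption needs page patching
        # fewer than three healthy copies (active or passive) for over 24 hours
        return healthy_copies < 3 and hours_below_three > 24

    print(should_play_down(False, False, healthy_copies=2, hours_below_three=30))  # True
    print(should_play_down(False, False, healthy_copies=3, hours_below_three=48))  # False
    ```

    Note that once any trigger fires, the copy temporarily stops being a lagged copy, which is one reason the lagged copy cannot be treated as a guaranteed point-in-time backup.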

    When using the lagged database copy in this manner, it is important to understand that the lagged database copy is not a guaranteed point-in-time backup. The lagged database copy will have an availability threshold, typically around 90%, due to periods where the disk containing the lagged copy is lost to disk failure, periods where the lagged copy becomes an HA copy (due to automatic play down), and periods where the lagged database copy is rebuilding its replay queue.

    To protect against accidental (or malicious) item deletion, Single Item Recovery or In-Place Hold technologies are used, and the Deleted Item Retention window is set to a value that meets or exceeds any defined item-level recovery SLA.

    With all of these technologies in play, traditional backups are unnecessary; as a result, the PA leverages Exchange Native Data Protection.

    Summary

    The PA takes advantage of the changes made in Exchange 2013 to simplify your Exchange deployment, without decreasing the availability or the resiliency of the deployment. And in some scenarios, when compared to previous generations, the PA increases availability and resiliency of your deployment.

    Ross Smith IV
    Principal Program Manager
    Office 365 Customer Experience

  • Exchange 2010 SP1 FAQ and Known Issues

    Last week we released Exchange Server 2010 Service Pack 1. It has received some great feedback and reviews from customers, experts, analysts, and the Exchange community.

    The starting point for SP1 setup/upgrade should be the What's New in SP1, SP1 Release Notes, and Prerequisites docs. As with any new release, there are some frequently asked deployment questions, and known issues, or issues reported by some customers. You may not face these in your environment, but we're posting these here along with some workarounds so you're aware of them as you test and deploy SP1.

    1. Upgrade order

      The order of upgrade from Exchange 2010 RTM to SP1 hasn’t changed from what was done in Exchange 2007. Upgrade server roles in the following order:

      1. Client Access server
      2. Hub Transport server
      3. Unified Messaging server
      4. Mailbox server

      The Edge Transport server role can be upgraded at any time; however, we recommend upgrading Edge Transport either before or after all other server roles have been upgraded. For more details, see Upgrade from Exchange 2010 RTM to Exchange 2010 SP1 in the documentation.

    2. Exchange 2010 SP1 Prerequisites

      Exchange 2010 SP1 requires the installation of four to five hotfixes, depending on the operating system (Windows Server 2008 or Windows Server 2008 R2). To install the Exchange 2010 SP1 administration tools on Windows 7 or Windows Vista, you need two hotfixes.

      Note: Due to the shared code base for these updates, Windows Server 2008 and Windows Vista share the same updates. Similarly, Windows Server 2008 R2 and Windows 7 share the same updates. Make sure you select the x64 versions of each update to be installed on your Exchange 2010 servers.

      Update 2/11/2011: Windows 2008 R2 SP1 includes all the required hotfixes listed in this table — 979744, 983440, 979099, 982867 and 977020. If you're installing Exchange 2010 SP1 on a server running Windows 2008 R2 SP1, you don't need to install these hotfixes separately. For a complete list of all updates included in Windows 2008 R2 SP1, see Updates in Win7 and WS08R2 SP1.xls.

      Here’s a matrix of the updates required, including download locations and file names.

      • 979744 – A .NET Framework 2.0-based Multi-AppDomain application stops responding when you run the application
        Download: MSDN or Microsoft Connect
        Windows Server 2008: Windows6.0-KB979744-x64.msu (CBS: Vista/Win2K8)
        Windows Server 2008 R2: Windows6.1-KB979744-x64.msu (CBS: Win7/Win2K8 R2)
        Windows 7 & Windows Vista: N/A
      • 983440 – An ASP.NET 2.0 hotfix rollup package is available for Windows 7 and for Windows Server 2008 R2
        Download: Request from CSS
        Windows Server 2008: N/A
        Windows Server 2008 R2: Yes
        Windows 7 & Windows Vista: N/A
      • 977624 – AD RMS clients do not authenticate federated identity providers in Windows Server 2008 or in Windows Vista. Without this update, Active Directory Rights Management Services (AD RMS) features may stop working
        Download: Request from CSS (select the download for Windows Vista for the x64 platform)
        Windows Server 2008: Yes
        Windows Server 2008 R2: N/A
        Windows 7 & Windows Vista: N/A
      • 979917 – Two issues occur when you deploy an ASP.NET 2.0-based application on a server that is running IIS 7.0 or IIS 7.5 in Integrated mode
        Download: MSDN
        Windows Server 2008: Windows6.0-KB979917-x64.msu (Vista)
        Windows Server 2008 R2: N/A
        Windows 7 & Windows Vista: N/A
      • 973136 – FIX: ArgumentNullException exception error message when a .NET Framework 2.0 SP2-based application tries to process a response with zero-length content to an asynchronous ASP.NET Web service request: "Value cannot be null"
        Download: Microsoft Connect
        Windows Server 2008: Windows6.0-KB973136-x64.msu
        Windows Server 2008 R2: N/A
        Windows 7 & Windows Vista: N/A
      • 977592 – RPC over HTTP clients cannot connect to the Windows Server 2008 RPC over HTTP servers that have RPC load balancing enabled
        Download: Request from CSS (select the download for Windows Vista, x64)
        Windows Server 2008: Yes
        Windows Server 2008 R2: N/A
        Windows 7 & Windows Vista: N/A
      • 979099 – An update is available to remove the application manifest expiry feature from AD RMS clients
        Download: Download Center
        Windows Server 2008: N/A
        Windows Server 2008 R2: Windows6.1-KB979099-x64.msu
        Windows 7 & Windows Vista: N/A
      • 982867 – WCF services that are hosted by computers together with NLB fail in .NET Framework 3.5 SP1
        Download: MSDN
        Windows Server 2008: Windows6.0-KB982867-v2-x64.msu (Vista)
        Windows Server 2008 R2: Windows6.1-KB982867-v2-x64.msu (Win7)
        Windows 7 & Windows Vista: x86: Windows6.1-KB982867-v2-x86.msu; x64: Windows6.1-KB982867-v2-x64.msu
      • 977020 – FIX: An application that is based on the Microsoft .NET Framework 2.0 Service Pack 2 and that invokes a Web service call asynchronously throws an exception on a computer that is running Windows 7
        Download: Microsoft Connect
        Windows Server 2008: N/A
        Windows Server 2008 R2: x64: Windows6.1-KB977020-v2-x64.msu
        Windows 7 & Windows Vista: x64: Windows6.1-KB977020-v2-x64.msu; x86: Windows6.1-KB977020-v2-x86.msu
      Some of these hotfixes will eventually be rolled up into a Windows update or service pack. Because the Exchange team released SP1 earlier than originally planned and announced, the release did not align with some of the related work on the Windows platform. As a result, some hotfixes are available from MSDN/Connect, and some must be requested online using the links in the corresponding KB articles. The administrator experience when initially downloading these hotfixes may be a little odd. However, once you download the hotfixes, and receive two of the hotfixes from CSS, you can reuse them for subsequent installs on other servers. In due course, all of these updates may become available on the Download Center, and also through Windows Update.

      These hotfixes have been tested extensively as part of Exchange 2010 SP1 deployments within Microsoft and by our TAP customers. They are fully supported by Microsoft.

    3. Prerequisite download pages linked from SP1 Setup are unavailable

      When installing Exchange Server 2010 SP1, the prerequisite check may identify some required hotfixes to install. The message includes a link to click for help. Clicking this link currently redirects you to a page saying that the content does not exist.

      We're working to update the linked content.

      Meanwhile, please refer to the TechNet article Exchange 2010 Prerequisites to download and install the prerequisites required for your server version (the hotfixes are linked in the above table, but you'll still need to install the usual prerequisites such as .NET Framework 3.5 SP1, Windows Remote Management (WinRM) 2.0, and the required OS components).

    4. The Missing Exchange Management Shell Shortcut

      Some customers have reported that after upgrading an Exchange Server 2010 server to Exchange 2010 SP1, the Exchange Management Shell shortcut is missing from program options. Additionally, the .ps1 script files associated with the EMS may also be missing.

      We’re actively investigating this issue. Meanwhile, here’s a workaround:

      1. Verify that the following files are present in the %ExchangeInstallPath%\bin directory:
        • CommonConnectFunctions.ps1
        • CommonConnectFunctions.strings.psd1
        • Connect-ExchangeServer-help.xml
        • ConnectFunctions.ps1
        • ConnectFunctions.strings.psd1
        • RemoteExchange.ps1
        • RemoteExchange.strings.psd1

        NOTE: If these files are missing, you can copy the files from the Exchange Server 2010 Service Pack 1 installation media to the %ExchangeInstallPath%\bin directory. These files are present in the \setup\serverroles\common folder.

      2. Click Start -> Administrative Tools, right-click Windows PowerShell Modules, and select Send to -> Desktop (create shortcut)
      3. Open the Properties of the shortcut and replace the Target with: C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe -noexit -command ". 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto"

        Note: If the Exchange installation folder or drive letter differs from the default, change the path accordingly.
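      If you'd rather not check the file list by hand, a quick loop in plain Windows PowerShell will report any missing files. This is a hedged sketch; it assumes the ExchangeInstallPath environment variable is set, as Exchange 2010 Setup normally does:

      ```powershell
      # List any required shell files missing from the bin directory.
      # Assumes the ExchangeInstallPath environment variable is present (set by Setup).
      $bin = Join-Path $env:ExchangeInstallPath "bin"
      $required = @(
          "CommonConnectFunctions.ps1",
          "CommonConnectFunctions.strings.psd1",
          "Connect-ExchangeServer-help.xml",
          "ConnectFunctions.ps1",
          "ConnectFunctions.strings.psd1",
          "RemoteExchange.ps1",
          "RemoteExchange.strings.psd1"
      )
      $required | Where-Object { -not (Test-Path (Join-Path $bin $_)) } |
          ForEach-Object { Write-Warning "Missing: $_" }
      ```

      Any file it reports can be copied from the \setup\serverroles\common folder on the SP1 installation media, as noted above.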

    5. Upgrading Edge Transport on Forefront Threat Management Gateway (TMG) and Forefront Protection for Exchange 2010

      If you upgrade a server running the Edge Transport server role with Forefront Threat Management Gateway (TMG) and Forefront Protection for Exchange (FPE) enabled for SMTP protection, the Forefront TMG Managed Control Service may fail to start, and e-mail policy configuration settings cannot be applied.

      The TMG team is working on this issue. See Problems when installing Exchange 2010 Service Pack 1 on a TMG configured for Mail protection on the Forefront TMG (ISA) Team Blog. The Exchange 2010 SP1 Release Notes have been updated with the above information.

      Update: The Forefront TMG product team has released a software update to address this issue. See Software Update 1 for Microsoft Forefront Threat Management Gateway (TMG) 2010 Service Pack 1 now available for download.

    6. Static Address Book Service Port Configuration Changes

      The location for setting the port that the Address Book service uses has changed in SP1. In Exchange 2010 RTM you had to edit Microsoft.exchange.addressbook.service.exe.config to configure the service port. In SP1, you must use the following registry value instead:
      Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeAB\Parameters
      Value name: RpcTcpPort
      Type: REG_SZ (String)


      When you apply SP1 to a machine where you had previously configured a static port by editing the Microsoft.exchange.addressbook.service.exe.config file, the upgrade process will not carry forward your static port assignments. Following a restart, the Address Book Service will revert to using a dynamic port instead of a static port specified in the config file. This may cause interruptions in service.

      As with all upgrades of servers in load-balanced pools, we recommend a rolling upgrade: remove servers from the pool, update them, and then shift the pool's traffic to the newly upgraded machines. Alternatively, drain connections from each machine in the array before you upgrade it.

      There are times when these approaches may not be possible. You can preserve your static port configuration, and have it take effect the moment the Address Book service first starts after the service pack is applied, by creating the registry value BEFORE you apply SP1 to your server. The registry value has no effect pre-SP1, so configuring it in advance avoids having to set the port post-install and avoids any service interruption.
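      For example, the value can be staged ahead of the upgrade with a .reg file along these lines (the port number 59532 is only an illustration; substitute the static port you use in your environment):

      ```
      Windows Registry Editor Version 5.00

      [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeAB\Parameters]
      "RpcTcpPort"="59532"
      ```

      Because the value is a REG_SZ, the port is written as a quoted string rather than a DWORD.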

    7. iPhone, OWA Premium and POP3 & IMAP4 issues due to invalid accepted domain

      After applying E2010 SP1:

      1. iPhone users may not be able to view the content of incoming messages in their Inboxes, and when they try to open a message, they get an error saying:

        This message has not been downloaded from the server.

        Admins may see the following event logged in the Application Event Log on Exchange 2010 CAS Server:

        Watson report about to be sent for process id: 1234, with parameters: E12, c-RTL-AMD64, 14.01.0218.011, AirSync, MSExchange ActiveSync, Microsoft.Exchange.Data.Storage.InboundConversionOptions.CheckImceaDomain, UnexpectedCondition:ArgumentException, 4321, 14.01.0218.015.

      2. OWA Premium users may not be able to reply or forward a message. They may see the following error in OWA:

        An unexpected error occurred and your request couldn't be handled. Exception type: System.ArgumentException, Exception message: imceaDomain must be a valid domain name.

      3. POP3 & IMAP4 users may also not be able to retrieve incoming mail and Admins will see the following event logged in Event Log:

        ERR Server Unavailable. 21; RpcC=6; Excpt=imceaDomain must be a valid domain name.

      Resolution

      Run the following command in the Exchange Management Shell and verify that there is one domain marked as Default and that its DomainName and Name values are valid domain names. We were able to reproduce the issue by setting a domain name with a space in it, such as "aa bb":

      Get-AcceptedDomain | fl

      If you also have an invalid domain name there (for example, a domain name with a space in it), then removing the space and restarting the server will fix the EAS (iPhone), OWA, POP3, and IMAP4 issues mentioned above.

      The command to run in the EMS is:

      Set-AcceptedDomain –Identity <AcceptedDomainIdentity> –Name "ValidSMTPDomainName"

      These examples update the Name parameter of the "My Company" and "ABC Local" accepted domains (the space is removed from both):

      Set-AcceptedDomain –Identity “My Company” –Name “MyCompany.Com”
      Set-AcceptedDomain –Identity “ABC Local” –Name “ABC.Local”
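      Rather than eyeballing the full Get-AcceptedDomain output, you can filter for the offending entries directly. A hedged one-liner sketch (whitespace in either value makes the domain name invalid):

      ```powershell
      # Flag accepted domains whose Name or DomainName contains whitespace,
      # which makes them invalid SMTP domain names.
      Get-AcceptedDomain | Where-Object {
          $_.Name -match '\s' -or $_.DomainName.ToString() -match '\s'
      }
      ```

      Any domain this returns is a candidate for the Set-AcceptedDomain rename shown above.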

    8. Error when adding or removing a mailbox database copy

      If a server running Exchange 2010 RTM (or Exchange 2010 SP1 Beta) is upgraded to Exchange 2010 SP1, administrators may experience an error when using the Add-MailboxDatabaseCopy or Remove-MailboxDatabaseCopy cmdlets to add or remove mailbox database copies.

      When you try to add a database copy, you may see the following error:

      Add-MailboxDatabaseCopy DAG-DB0 -MailboxServer DAG-2

      The result:

      WARNING: An unexpected error has occurred and a Watson dump is being generated: Registry key has subkeys and recursive removes are not supported by this method.

      Registry key has subkeys and recursive removes are not supported by this method.
      + CategoryInfo : NotSpecified: (:) [Add-MailboxDatabaseCopy], InvalidOperationException
      + FullyQualifiedErrorId : System.InvalidOperationException,Microsoft.Exchange.Management.SystemConfigurationTasks.
      AddMailboxDatabaseCopy

      The command does not succeed in adding the copy or in updating Active Directory to show that the copy was added. This happens due to the presence of the DumpsterInfo registry key.

      Workaround: Delete the DumpsterInfo key, as shown below.

      1. Identify the GUID of the database that is being added using this command:

        Get-MailboxDatabase DAG-DB0 | fl name,GUID

        The result:

        Name : DAG-DB0
        Guid : 8d3a9778-851c-40a4-91af-65a2c487b4cc

      2. On the server specified in the add command, using the database GUID identified, remove the following registry key:
        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v14\Replay\State\<db-guid>\DumpsterInfo

        The GUID identified in this case is 8d3a9778-851c-40a4-91af-65a2c487b4cc. With this information you can now export and delete the DumpsterInfo key on the server where you are attempting to add the mailbox database copy. This can be easily done using the registry editor, but if you have more than a handful of DAG members, this is best automated using the Shell.

        This example removes the DumpsterInfo key from the 8d3a9778-851c-40a4-91af-65a2c487b4cc key:

        Remove-Item HKLM:\Software\Microsoft\ExchangeServer\V14\Replay\State\8d3a9778-851c-40a4-91af-65a2c487b4cc\DumpsterInfo

        To automate this across all servers in your organization, use the DeleteDumpsterRegKey.ps1 script.

        File: deletedumpsterregkey_ps1.txt
        Description: The DeleteDumpsterRegkey.ps1 script can be used to delete the offending DumpsterInfo registry keys that can cause this problem on all Exchange 2010 SP1 Mailbox servers in the organization. Rename the file to DeleteDumpsterRegkey.ps1 (remove the .txt extension).
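        In case the attached script isn't handy, the core of such a script amounts to a loop like the following. This is a hedged sketch, not the attached script itself; run it on each SP1 Mailbox server (or wrap it with remoting to cover the organization):

        ```powershell
        # For every mailbox database with a copy on this server, delete the
        # DumpsterInfo key under the Replay\State\<db-guid> path if present.
        Get-MailboxDatabase -Server $env:COMPUTERNAME | ForEach-Object {
            $key = "HKLM:\Software\Microsoft\ExchangeServer\V14\Replay\State\$($_.Guid)\DumpsterInfo"
            if (Test-Path $key) {
                Remove-Item $key
                Write-Host "Removed $key"
            }
        }
        ```

        As with any registry change, consider exporting the keys before deleting them.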

        For more info, see Tim McMichael’s blog post Exchange 2010 SP1: Error when adding or removing a mailbox database copy.

    Thanks to all the folks in CSS and Exchange teams who helped identify, validate and provide workarounds for some of the issues mentioned above, and to the Exchange community and MVPs for their feedback.

    Bharat Suneja, Nino Bilic
    M. Amir Haque, Greg Taylor,
    & Tim McMichael

    Updates:

    • 9/7/2010: Updated list of files for the missing Exchange Management Shell shortcut issue
    • 9/15/2010: Updated pre-reqs table:
      - 982867 required on Windows 2008 SP2
      - 983440 not required on Windows 2008 SP2
      - 977020 required on Windows 2008 R2
    • 9/21/2010: Added link to Software Update 1 for ForeFront Threat Management Gateway (TMG) 2010 Service Pack 1
      - Replaced "Request from CSS..." verbiage for KB 979917 with link to KB 979917 download on MSDN
    • 9/22/2010: Updated correct default Exchange install path (highlighted) in 'The Missing Exchange Management Shell Shortcut" section: C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe -noexit -command ". 'C:\Program Files\Microsoft\Exchange Server\V14\bin\RemoteExchange.ps1'; Connect-ExchangeServer -auto"
  • Exchange, Firewalls, and Support… Oh, my!

    Over the years Exchange Server architecture has gone through a number of changes. As a product matures over time you may see us change what is supported as we react to changes in the product architecture, the state of technology as a whole, or major support issues we see come in through our support infrastructure.

    Over the years a large volume of support calls have ended up being caused by communication issues between Exchange servers, or between Exchange servers and domain controllers. Often this results from a network device between the servers not allowing some port or protocol through to the other servers.

    I tried to get Harrison Ford to co-write this article with me given his specific talents, but alas he was busy and regretfully couldn’t partake. Please allow me to start with the short version up front, so there is no confusion about what we currently DO and DO NOT support, before I lose some of you to:

    Image Courtesy of: http://knowyourmeme.com/memes/tldr

    Starting with Exchange Server 2007 and current as of Exchange Server 2013, having network devices blocking ports/protocols between Exchange servers within a single organization or between Exchange servers and domain controllers in an organization is not supported. A network device may sit in the communication path between the servers, but a rule allowing “ANY/ANY” port and protocol communication must be in place allowing free communication between Exchange servers as well as between Exchange servers and domain controllers.

    For Exchange Server 2010 this is already articulated at http://technet.microsoft.com/en-us/library/bb331973(v=EXCHG.141).aspx, under Client Access Server Connectivity in the Client Access Server section, in the following paragraph:

    “In addition to having a Client Access server in every Active Directory site that contains a Mailbox server, it’s important to avoid restricting traffic between Exchange servers. Make sure that all defined ports that are used by Exchange are open in both directions between all source and destination servers. The installation of a firewall between Exchange servers or between an Exchange 2010 Mailbox or Client Access server and Active Directory isn’t supported. However, you can install a network device if traffic isn’t restricted and all available ports are open between the various Exchange servers and Active Directory.”

    Why has this seemingly simple support statement become so muddied and confusing over the years? Maybe we didn’t make it blunt enough to start with, but there could be some other compounding points adding to the confusion.

    Confusion point #1…

    Exchange Server 2003 was the last version of Exchange Server to allow deploying (at the time) a Front-End server in a perimeter network (aka DMZ) while locating the Back-End server in the intranet. While this could be made to work it required a specialized set of rules that essentially turned your perimeter network security model into the following:

    Image Courtesy of: The Internet

    During the time of Exchange Server 2003 adoption of reverse proxies within perimeter networks was on the rise. Reverse proxies allowed customers to more securely publish Exchange Server for remote access while only allowing a single port and protocol to traverse from the Internet to the perimeter network, and then a single port and protocol to traverse from the perimeter network to the intranet.

    You could go from something complicated like this with endless port and protocol requirements….

    Figure 1: Legacy Exchange 2003 deployment with Front-End server in a perimeter network. What a mess. Who’s hungry?

    To something simple like this…

    Figure 2: Reverse proxy in the perimeter network and all Exchange servers within the intranet. Simplicity at its best!

    The resulting increase in simplicity, as well as the drop in support cases, was strong enough for Microsoft to determine, during the lifecycle of the next major version of Exchange Server (2007), that we would no longer support deploying what is now the Client Access server role in a perimeter network. From that time on, all Exchange servers, except for the Edge Transport server role, were to be deployed on the intranet with unfettered access to each other. We have this documented in http://technet.microsoft.com/en-us/library/bb232184.aspx.

    Confusion point #2…

    TechNet includes a number of articles that document many, if not all, of the ports and protocols Exchange Server utilizes during the course of normal operation. These documents are often misunderstood as “configure your firewall this way” articles. The information is provided purely for educational purposes on the inner workings of Exchange Server, or to aid with the configuration of load balancing or service monitoring mechanisms, which often require specific port/protocol definitions to perform their functions correctly. In case you come across them in the future, here is a list of most of those articles.

    Exchange Network Port References

    Understanding Protocols, Ports, and Services in Unified Messaging

    I don’t trust my clients and I like restricting their access. Is that supported?

    This is a different story, and yes, there are things you can do here to remain supported. Exchange Server has, for a number of releases, supported configuring static client communication ports for Windows-based Outlook clients. After the client contacts the endpoint mapper service running on TCP port 135, it is handed back the static TCP port you have chosen to use in your environment. For Exchange Server 2010 you may be familiar with the following article describing how to configure static client communication ports for the Address Book service and the RPC Client Access service, leaving you with four ports required for clients to operate in MAPI/RPC mode:

    • TCP 135 for the Endpoint Mapper
    • TCP 443 for Autodiscover/EWS/ECP
    • TCP <your choice #1> for the Address Book service
    • TCP <your choice #2> for the RPC Client Access service
    • UDP ANY from CAS to the Outlook 2003 client if you’re in online mode and utilizing UDP notifications.

    http://social.technet.microsoft.com/wiki/contents/articles/864.configure-static-rpc-ports-on-an-exchange-2010-client-access-server.aspx

    TechNet also has resources for versions prior to Exchange Server 2010: http://support.microsoft.com/kb/270836.
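    As a concrete illustration of the procedure in the linked wiki article, the two static ports used in the diagrams below (59531 for the RPC Client Access service and 59532 for the Address Book service, the latter via the SP1 registry value) could be staged with a .reg file along these lines. This is a sketch based on that article; verify the key names against it for your exact build:

    ```
    Windows Registry Editor Version 5.00

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeRPC\ParametersSystem]
    "TCP/IP Port"=dword:0000e88b

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeAB\Parameters]
    "RpcTcpPort"="59532"
    ```

    Note the dword value 0xE88B is 59531 in decimal, while the Address Book port is stored as a string.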

    Starting in Exchange Server 2013, the only protocol supported for Windows Outlook clients is RPC over HTTPS. This architectural change reduces your required port count to one, TCP 443 for HTTPS, to be utilized by Autodiscover, Exchange Web Services, and RPC over HTTPS (aka Outlook Anywhere). This is going to make your life easy, but don’t tell your boss, as they’ll expect you to start doing other things as well. It’ll be our secret. Promise.

    I’ll go through some examples of supported deployments, but will keep it easy and only use Outlook clients. The same ideas apply to POP/IMAP/EAS clients as well; just don’t restrict Exchange servers from talking to each other. In all of the following examples I have chosen static TCP port 59531 for the RPC Client Access service on CAS and Mailbox, and static TCP port 59532 for the Address Book service on CAS. UDP notifications are also thrown in for fun for those of you running Outlook 2003 in Online Mode, which I hope is very few and far between these days. Domain controllers were left out of these diagrams to focus on communication directly between clients and Exchange, and load balancers were also kept out for simplicity.

    A setup like the following Outlook 2010 / Exchange 2010 diagram, where we have a firewall between the clients and the servers, would be entirely supported.

    Figure 3: Firewall between clients and all Exchange servers. Supported if firewall is configured correctly to allow all necessary client access. AD not shown.

    However, if you attempted to do something naughty like the following diagram and for reasons unknown to us put a firewall between CAS and Mailbox then there had better be ANY/ANY rules in place allowing conversations to originate from either side between Exchange servers.

    Figure 4: Firewall between CAS and other Exchange servers. Supported only if the firewall is configured for unfettered access between Exchange servers, and Exchange servers and AD. AD not shown.

    Well what if you have multiple datacenters with Exchange and want to firewall everything everywhere because you believe that as the number of firewalls goes up your security must exponentially increase? We’ve got you covered there too, deploy it like this where you’ll see both MAPI/RPC and RPC/HTTPS user examples. I didn’t bother putting load balancers or Domain Controllers into any of these diagrams by the way. I’m putting faith in all of you that you know where those go.

    Figure 5: Firewalls between users and Exchange servers as well as between datacenters. Supported if the firewalls are configured to allow unfettered access between Exchange servers, between Exchange servers and AD, and appropriate client rules. AD not shown.

    Boy this is going to be easy when all of you migrate to Exchange Server 2013 and are only dealing with RPC/HTTPS connections from clients and SMTP or HTTPS between servers. Except for maybe those pesky POP/IMAP/UM clients…

    Figure 6 below depicts what Exchange 2013 network conversations may look like at a high level. A load balancer and an additional CAS were introduced to show that we don’t care which CAS a client’s traffic goes through, as it all ends up at the same Mailbox server anyway, where the user’s database is mounted. You may have read previously that Exchange Server 2013 does not require session affinity for client traffic; hopefully this visual helps show why.

    The one tricky bit to consider when placing a firewall between clients and Exchange Server 2013 is UM traffic, as it is not all client-to-CAS in nature. In Exchange Server 2013, a telephony device first makes a SIP connection through CAS (orange arrows); after speaking with the UM service on the Mailbox server, the device is redirected so it can establish a direct SIP+RTP session (blue arrow) with the Mailbox server holding the user’s active database copy.

    Figure 6: Showing at a high level SMTP, Windows Outlook Client, and UM traffic with a firewall between users and Exchange Server 2013.

    So, Microsoft, if you’re saying this should be simple then what can I do and remain in a supported state?

    The key here is to not block traffic between Exchange servers, or between Exchange servers and Domain Controllers. As long as no traffic blocking is performed between these servers, you will be in a fully supported deployment, and you will not have to waste time with our support staff proving you really do have all necessary communications open before you can start to troubleshoot an issue. We know many customers will continue to test the boundaries of supportability regardless, but be aware this may drag out your troubleshooting experience and possibly extend an active outage. We prefer to help our customers resolve any and all issues as fast as possible. Staying within support guidelines does in fact help us help you as expeditiously as possible, and in the end will save you time, support costs, labor costs, and, last but not least, aggravation.

    Brian Day
    Program Manager
    Exchange Customer Experience

  • Database Maintenance in Exchange 2010

    Over the last several months there has been significant chatter around what is background database maintenance and why is it important for Exchange 2010 databases. Hopefully this article will answer these questions.

    What maintenance tasks need to be performed against the database?

    The following tasks need to be routinely performed against Exchange databases:

    Database Compaction

    The primary purpose of database compaction is to free up unused space within the database file (however, it should be noted that this does not return that unused space to the file system). The intention is to free up pages in the database by compacting records onto the fewest number of pages possible, thus reducing the amount of I/O necessary. The ESE database engine does this by taking the database metadata, which is the information within the database that describes tables in the database, and for each table, visiting each page in the table, and attempting to move records onto logically ordered pages.

    Maintaining a lean database file footprint is important for several reasons, including the following:

    1. Reducing the time associated with backing up the database file
    2. Maintaining a predictable database file size, which is important for server/storage sizing purposes.

    Prior to Exchange 2010, database compaction operations were performed during the online maintenance window. This process produced random IO as it walked the database and re-ordered records across pages. In previous versions the process was almost too effective: by freeing up database pages and re-ordering the records, it left the pages in an essentially random order. Coupled with the store schema architecture, this meant that any request to pull a set of data (like downloading the items within a folder) always resulted in random IO.

    In Exchange 2010, database compaction was redesigned such that contiguity is preferred over space compaction. In addition, database compaction was moved out of the online maintenance window and is now a background process that runs continuously.

    Database Defragmentation

    Database defragmentation is new to Exchange 2010 and is also referred to as OLD v2 and B+ tree defragmentation. Its function is to compact as well as defragment (make sequential) database tables that have been marked/hinted as sequential. Database defragmentation is important to maintain efficient utilization of disk resources over time (make the IO more sequential as opposed to random) as well as to maintain the compactness of tables marked as sequential.

      You can think of the database defragmentation process as a monitor that watches other database page operations to determine if there is work to do. It monitors all tables for free pages, and if a table reaches a threshold where a significantly high percentage of the total B+ tree page count is free, it gives the free pages back to the root. It also works to maintain contiguity within tables created with sequential space hints (tables created with a known sequential usage pattern). If database defragmentation sees a scan/pre-read on a sequential table and the records are not stored on sequential pages within the table, the process will defragment that section of the table by moving all of the impacted pages to a new extent in the B+ tree. You can use the performance counters (mentioned in the monitoring section) to see how little work database defragmentation performs once a steady state is reached.

    Database defragmentation is a background process that analyzes the database continuously as operations are performed, and then triggers asynchronous work when necessary. Database defragmentation is throttled under two scenarios:

      1. A maximum number of outstanding tasks. This keeps database defragmentation from doing too much work on the first pass if massive change has occurred in the database.
      2. A latency throttle of 100 ms. When the system is overloaded, database defragmentation starts punting defragmentation work. Punted work gets executed the next time the database goes through that same operational pattern; there is nothing that remembers which defragmentation work was punted and goes back to execute it once the system has more resources.

    Database Checksumming

    Database checksumming (also known as Online Database Scanning) is the process where the database is read in large chunks and each page is checksummed (checked for physical page corruption). Checksumming’s primary purpose is to detect physical corruption and lost flushes that may not be getting detected by transactional operations (stale pages).

    With Exchange 2007 RTM and all previous versions, checksumming operations happened during the backup process. This posed a problem for replicated databases, as the only copy to be checksummed was the copy being backed up. For the scenario where the passive copy was being backed up, this meant that the active copy was not being checksummed. So in Exchange 2007 SP1, we introduced a new optional online maintenance task, Online Maintenance Checksum (for more information, see Exchange 2007 SP1 ESE Changes – Part 2).

    In Exchange 2010, database scanning checksums the database and performs post Exchange 2010 Store crash operations. Space can be leaked due to crashes, and online database scanning finds and recovers lost space. Database checksum reads approximately 5 MB per second for each actively scanning database (both active and passive copies) using 256KB IOs. The I/O is 100 percent sequential. The system in Exchange 2010 is designed with the expectation that every database is fully scanned once every seven days.
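      The 5 MB/s figure makes it easy to sanity-check whether a given database can complete a full pass inside the seven-day window. A back-of-the-envelope calculation (assuming the scan actually sustains the quoted rate around the clock):

      ```python
      # Estimate how many days a full online checksum pass takes at ~5 MB/s,
      # the sequential scan rate quoted for Exchange 2010 database checksumming.
      SCAN_RATE_MB_PER_SEC = 5

      def full_scan_days(db_size_tb: float) -> float:
          """Days needed to checksum a database of the given size in terabytes."""
          size_mb = db_size_tb * 1024 * 1024  # TB -> MB
          return size_mb / SCAN_RATE_MB_PER_SEC / 86400  # 86400 seconds per day

      print(round(full_scan_days(2), 1))  # a 2 TB database: about 4.9 days
      print(round(full_scan_days(3), 1))  # over 7 days -- the scan would run long
      ```

      In other words, at this rate a database around 3 TB or larger cannot finish a pass within seven days, which is when the warning events described below appear.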

      If the scan takes longer than seven days, an event is recorded in the Application Log:

    Event ID: 733
    Event Type: Information
    Event Source: ESE
    Description: Information Store (15964) MDB01: Online Maintenance Database Checksumming background task is NOT finishing on time for database 'd:\mdb\mdb01.edb'. This pass started on 11/10/2011 and has been running for 604800 seconds (over 7 days) so far.

    If it takes longer than seven days to complete the scan on the active database copy, the following entry will be recorded in the Application Log once the scan has completed:

    Event ID: 735
    Event Type: Information
    Event Source: ESE
    Description: Information Store (15964) MDB01 Database Maintenance has completed a full pass on database 'd:\mdb\mdb01.edb'. This pass started on 11/10/2011 and ran for a total of 777600 seconds. This database maintenance task exceeded the 7 day maintenance completion threshold. One or more of the following actions should be taken: increase the IO performance/throughput of the volume hosting the database, reduce the database size, and/or reduce non-database maintenance IO.

    In addition, an in-flight warning will also be recorded in the Application Log when it takes longer than 7 days to complete.

    In Exchange 2010, there are now two modes to run database checksumming on active database copies:

      1. Run in the background 24×7. This is the default behavior. It should be used for all databases, especially databases larger than 1 TB. Exchange scans the database no more than once per day. This read I/O is 100 percent sequential (which makes it easy on the disk) and equates to a scanning rate of about 5 megabytes (MB)/sec on most systems. The scanning process is single-threaded and is throttled by IO latency: the higher the latency, the more database checksumming slows down, because it waits longer for the last batch to complete before issuing another batch scan of pages (8 pages are read at a time).
      2. Run in the scheduled mailbox database maintenance process. When you select this option, database checksumming is the last task. You can configure how long it runs by changing the mailbox database maintenance schedule. This option should only be used with databases smaller than 1 terabyte (TB) in size, which require less time to complete a full scan.

    Regardless of the database size, our recommendation is to leverage the default behavior and not configure database checksum operations against the active database as a scheduled process (i.e., don’t configure it as a process within the online maintenance window).

    For passive database copies, database checksums occur during runtime, continuously operating in the background.

    Page Patching

    Page patching is the process where corrupt pages are replaced by healthy copies. As mentioned previously, corrupt page detection is a function of database checksumming (in addition, corrupt pages are also detected at run time when the page is stored in the database cache). Page patching works against highly available (HA) database copies. How a corrupt page is repaired depends on whether the HA database copy is active or passive.

    Page patching process

    On active database copies:

    1. A corrupt page(s) is detected.
    2. A marker is written into the active log file. This marker indicates the corrupt page number and that the page requires replacement.
    3. An entry is added to the page patch request list.
    4. The active log file is closed.
    5. The Replication service ships the log file to the passive database copies.
    6. The Replication service on a target Mailbox server receives the shipped log file and inspects it.
    7. The Information Store on the target server replays the log file up to the marker, retrieves its healthy version of the page, invokes the Replay Service callback, and ships the page to the source Mailbox server.
    8. The source Mailbox server receives the healthy version of the page, confirms that there is an entry in the page patch request list, then writes the page to the log buffer; correspondingly, the page is inserted into the database cache.
    9. The corresponding entry in the page patch request list is removed.
    10. At this point the database is considered patched (at some later point the checkpoint will advance, the database cache will be flushed, and the corrupt page on disk will be overwritten).
    11. Any other copy of this page (received from another passive copy) is silently dropped, because there is no longer a corresponding entry in the page patch request list.

    On passive database copies:

    1. On the Mailbox server where the corrupt page(s) is detected, log replay is paused for the affected database copy.
    2. The Replication service coordinates with the Mailbox server that is hosting the active database copy and retrieves the corrupted page(s) and the required log range from the active copy’s database header.
    3. The Mailbox server updates the database header for the affected database copy, inserting the new required log range.
    4. The Mailbox server notifies the Mailbox server hosting the active database copy which log files it requires.
    5. The Mailbox server receives the required log files and inspects them.
    6. The Mailbox server injects the healthy versions of the database pages it retrieved from the active database copy. The pages are written to the log buffer; correspondingly, each page is inserted into the database cache.
    7. The Mailbox server resumes log replay.
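The deduplication behavior in steps 8 through 11 of the active-copy flow (accept the first healthy page that arrives, silently drop later copies from other passive databases) can be sketched as a small model. This is conceptual only; the class and method names are invented for illustration and are not Exchange code:

```python
class PagePatcher:
    """Conceptual model of the active copy's page patch request list."""

    def __init__(self):
        self.patch_requests = set()   # page numbers awaiting a healthy copy
        self.pages = {}               # page_no -> contents (stands in for the cache)

    def report_corrupt(self, page_no):
        # Steps 2-3: record that this page needs replacement.
        self.patch_requests.add(page_no)

    def receive_healthy_page(self, page_no, data):
        # Steps 8-9: accept the page only if a request is outstanding.
        if page_no not in self.patch_requests:
            return False              # step 11: duplicate, silently dropped
        self.pages[page_no] = data    # written to log buffer / database cache
        self.patch_requests.remove(page_no)
        return True


p = PagePatcher()
p.report_corrupt(42)
print(p.receive_healthy_page(42, b"healthy"))  # -> True  (first copy wins)
print(p.receive_healthy_page(42, b"healthy"))  # -> False (duplicate dropped)
```

The request-list check is what makes it safe for every passive copy to respond: only the first healthy page is applied, and the rest are ignored.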

    Page Zeroing

    Database Page Zeroing is the process where deleted pages in the database are written over with a pattern (zeroed) as a security measure, which makes discovering the data much more difficult.

    With Exchange 2007 RTM and all previous versions, page zeroing operations happened during the streaming backup process. In addition, because they occurred during the streaming backup process, they were not logged operations (that is, page zeroing did not result in the generation of log files). This posed a problem for replicated databases: the passive copies never had their pages zeroed, and the active copies only had their pages zeroed if you performed a streaming backup. So in Exchange 2007 SP1, we introduced a new optional online maintenance task, Zero Database Pages during Checksum (for more information, see Exchange 2007 SP1 ESE Changes – Part 2). When enabled, this task zeroes out pages during the Online Maintenance Window, logging the changes so that they are replicated to the passive copies.

    With the Exchange 2007 SP1 implementation, there is a significant lag between when a page is deleted and when it is zeroed, because the zeroing process occurs during a scheduled maintenance window. So in Exchange 2010 SP1, the page zeroing task is now a runtime event that operates continuously, typically zeroing out pages at transaction time when a hard delete occurs.

    In addition, database pages can also be scrubbed during the online checksum process. The pages targeted in this case are:

    • Deleted records which couldn’t be scrubbed during runtime due to dropped tasks (if the system is too overloaded) or because Store crashed before the tasks got to scrub the data;
    • Deleted tables and secondary indices. When these get deleted, we don’t actively scrub their contents, so online checksum detects that these pages don’t belong to any valid object anymore and scrubs them.

    For more information on page zeroing in Exchange 2010, see Understanding Exchange 2010 Page Zeroing.
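A minimal sketch of the runtime zeroing idea: when a record is hard deleted, the freed page is overwritten with a fill pattern immediately, rather than waiting for a maintenance window. The fill pattern and data structures here are illustrative assumptions; ESE's actual zeroing is internal to the database engine:

```python
PAGE_SIZE = 32 * 1024         # Exchange 2010 ESE page size
ZERO_PATTERN = b"\x00"        # illustrative; the engine writes its own fill pattern


def hard_delete(pages: dict, page_no: int) -> None:
    """Zero the page at delete time (the Exchange 2010 SP1 runtime
    behavior), so the old contents cannot be recovered from disk."""
    pages[page_no] = ZERO_PATTERN * PAGE_SIZE


# A page holding deleted data, before and after zeroing:
pages = {7: b"sensitive message body".ljust(PAGE_SIZE, b"\x00")}
hard_delete(pages, 7)
print(b"sensitive" in pages[7])  # -> False
```

Because the overwrite happens at transaction time and is logged, the zeroing also replays on the passive copies, closing the gap the Exchange 2007 design left open.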

    Why aren’t these tasks simply performed during a scheduled maintenance window?

    Requiring a scheduled maintenance window for page zeroing, database defragmentation, database compaction, and online checksum operations poses significant problems, including the following:

    1. Having scheduled maintenance operations makes it very difficult to manage 24x7 datacenters which host mailboxes from various time zones and have little or no time for a scheduled maintenance window. Database compaction in prior versions of Exchange had no throttling mechanism, and because its IO is predominantly random, it could lead to a poor user experience.
    2. Exchange 2010 Mailbox databases deployed on lower tier storage (e.g., 7.2K SATA/SAS) have a reduced effective IO bandwidth available to ESE to perform maintenance window tasks. This is an issue because it means that IO latencies will increase during the maintenance window, thus preventing the maintenance activities from completing within the desired period of time.
    3. The use of JBOD provides an additional challenge to the database in terms of data verification. With RAID storage, it's common for an array controller to background scan a given disk group, locating and re-assigning bad blocks. A bad block (aka sector) is a block on a disk that cannot be used due to permanent damage (e.g. physical damage inflicted on the disk particles). It's also common for an array controller to read the alternate mirrored disk if a bad block was detected on the initial read request. The array controller will subsequently mark the bad block as “bad” and write the data to a new block. All of this occurs without the application knowing, perhaps with just a slight increase in the disk read latency. Without RAID or an array controller, both of these bad block detection and remediation methods are no longer available. Without RAID, it's up to the application (ESE) to detect bad blocks and remediate (i.e., database checksumming).
    4. Larger databases on larger disks require longer maintenance periods to maintain database sequentiality/compactness.

    Due to the aforementioned issues, it was critical in Exchange 2010 that the database maintenance tasks be moved out of a scheduled process and be performed during runtime continuously in the background.

    Won’t these background tasks impact my end users?

    We’ve designed these background tasks such that they're automatically throttled based on activity occurring against the database. In addition, our sizing guidance around message profiles takes these maintenance tasks into account.

    How can I monitor the effectiveness of these background maintenance tasks?

    In previous versions of Exchange, events in the Application Log would be used to monitor things like online defragmentation. In Exchange 2010, there are no longer any events recorded for the defragmentation and compaction maintenance tasks. However, you can use performance counters to track the background maintenance tasks under the MSExchange Database ==> Instances object:

    Counter Description
    Database Maintenance Duration The number of seconds that have passed since the maintenance started for this database. If the value is 0, maintenance has been finished for the day.
    Database Maintenance Pages Bad Checksums The number of non-correctable page checksums encountered during a database maintenance pass
    Defragmentation Tasks The count of background database defragmentation tasks that are currently executing
    Defragmentation Tasks Completed/Sec The rate of background database defragmentation tasks that are being completed

    You'll find the following page zeroing counters under the MSExchange Database object:

    Counter Description
    Database Maintenance Pages Zeroed Indicates the number of pages zeroed by the database engine since the performance counter was invoked
    Database Maintenance Pages Zeroed/sec Indicates the rate at which pages are zeroed by the database engine

    How can I check whitespace in a database?

    You will need to dismount the database and use ESEUTIL /MS to check the available whitespace in a database. For an example, see http://technet.microsoft.com/en-us/library/aa996139(v=EXCHG.65).aspx (note that you have to multiply the number of pages by 32K).
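Since ESEUTIL /MS reports free space as a count of 32K pages, converting the result to bytes is a simple multiplication. A quick sketch (the function name is invented for illustration):

```python
PAGE_SIZE_KB = 32  # Exchange 2010 ESE page size


def whitespace_mb(free_pages: int) -> float:
    """Convert the free page count reported by ESEUTIL /MS to megabytes."""
    return free_pages * PAGE_SIZE_KB / 1024


# e.g., ESEUTIL /MS reports 12800 free pages:
print(whitespace_mb(12800))  # -> 400.0 (MB)
```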

    Note that there is a status property available on databases within Exchange 2010, but it should not be used to determine the amount of total whitespace available within the database:

    Get-MailboxDatabase MDB1 -Status | FL AvailableNewMailboxSpace

    AvailableNewMailboxSpace tells you how much space is available in the root tree of the database. It does not factor in the free pages within mailbox tables, index tables, etc. It is not representative of the white space within the database.

    How can I reclaim the whitespace?

    Naturally, after seeing the available whitespace in the database, the question that always ensues is – how can I reclaim the whitespace?

    Many assume the answer is to perform an offline defragmentation of the database using ESEUTIL. However, that's not our recommendation. When you perform an offline defragmentation you create an entirely new database, and the operations performed to create it are not logged in transaction logs. The new database also has a new database signature, which means that you invalidate the database copies associated with this database.

    In the event that you do encounter a database that has significant whitespace and you don't expect that normal operations will reclaim it, our recommendation is:

    1. Create a new database and associated database copies.
    2. Move all mailboxes to the new database.
    3. Delete the original database and its associated database copies.

    A terminology confusion

    Much of the confusion lies in the term background database maintenance. Collectively, all of the aforementioned tasks make up background database maintenance. However, the Shell, EMC, and JetStress all refer to database checksumming as background database maintenance, and that's what you're configuring when you enable or disable it using these tools.


    Figure 1: Enabling background database maintenance for a database using EMC

    Enabling background database maintenance using the Shell:

    Set-MailboxDatabase -Identity MDB1 -BackgroundDatabaseMaintenance $true


    Figure 2: Running background database maintenance as part of a JetStress test

    My storage vendor has recommended I disable Database Checksumming as a background maintenance task, what should I do?

    Database checksumming can become an IO tax burden if the storage is not designed correctly (even though its IO is sequential), as it performs 256KB read IOs and generates roughly 5MB/sec of read throughput per database.

    As part of our storage guidance, we recommend you configure your storage array stripe size (the size of stripes written to each disk in an array; also referred to as block size) to be 256KB or larger.
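To see why this matters when sizing storage, you can estimate the aggregate throughput checksumming adds. Using the approximate 5MB/sec-per-database figure and 256KB read size from above, a server hosting N database copies needs roughly N × 5MB/sec of additional sequential read bandwidth (the function names below are invented for illustration):

```python
CHECKSUM_MBPS_PER_DB = 5   # approximate per-database figure from the text
READ_IO_SIZE_KB = 256      # checksumming issues 256KB sequential reads


def checksum_bandwidth_mbps(database_copies: int) -> int:
    """Rough additional sequential read throughput (MB/sec) from
    background checksumming across all copies on a server."""
    return database_copies * CHECKSUM_MBPS_PER_DB


def checksum_reads_per_sec(database_copies: int) -> float:
    """The same load expressed as 256KB read IOs per second."""
    return checksum_bandwidth_mbps(database_copies) * 1024 / READ_IO_SIZE_KB


# A server hosting 10 database copies:
print(checksum_bandwidth_mbps(10))  # -> 50 (MB/sec)
print(checksum_reads_per_sec(10))   # -> 200.0 (reads/sec)
```

A stripe size of 256KB or larger lets each of those reads be satisfied from a single stripe, which is why the storage guidance above calls it out.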

    It's also important to test your storage with JetStress and ensure that the database checksum operation is included in the test pass.

    In the end, if a JetStress execution fails due to database checksumming, you have a few options:

    1. Don’t use striping. Use RAID-1 pairs or JBOD (which may require architectural changes) and get the most benefit from the sequential IO patterns available in Exchange 2010.
    2. Schedule it. Configure database checksumming to run as a scheduled process rather than a background process. When we implemented database checksum as a background process, we understood that some storage arrays would be so optimized for random IO (or have bandwidth limitations) that they wouldn't handle the sequential read IO well. That's why we built it so it could be turned off (which moves the checksum operation to the maintenance window).

      If you do this, we recommend smaller database sizes. Also keep in mind that the passive copies will still perform database checksumming as a background process, so you still need to account for this throughput in your storage architecture. For more information on this subject, see Jetstress 2010 and Background Database Maintenance.

    3. Use different storage or improve the capabilities of the storage. Choose storage that is capable of meeting Exchange best practices (a 256KB or larger stripe size).

    Conclusion

    The architectural changes to the database engine in Exchange Server 2010 dramatically improve its performance and robustness, but change the behavior of database maintenance tasks from previous versions. Hopefully this article helps you understand what background database maintenance is in Exchange 2010.

    Ross Smith IV
    Principal Program Manager
    Exchange Customer Experience