This is the official blog of the Exchange Server Product Group. All content here is considered authoritative and supported by Microsoft, unless otherwise specified.
Exchange Server 2013 Service Pack 1 (SP1) is now available for download! Please make sure to read the release notes before installing SP1. The final build number for Exchange Server 2013 SP1 is 15.00.0847.032.
SP1 has already been deployed to thousands of production mailboxes in customer environments via the Exchange Server Technology Adoption Program (TAP). In addition to including fixes, SP1 provides enhancements to improve the Exchange 2013 experience. These include enhancements in security and compliance, architecture and administration, and user experiences. These key enhancements are introduced below.
Note: Some of the documentation referenced may not be fully available at the time of publishing of this post.
SP1 provides enhancements improving security and compliance capabilities in Exchange Server 2013. This includes improvements in the Data Loss Prevention (DLP) feature and the return of S/MIME encryption for Outlook Web App users.
These improvements help Exchange meet our customer requirements and stay in step with the latest platforms.
We know the user experience is crucial to running a great messaging platform. SP1 provides continued enhancements to help your users work smarter.
As with all cumulative updates (CUs), SP1 is a full build of Exchange, and the deployment of SP1 is just like the deployment of a cumulative update.
Prior to or concurrent with upgrading or deploying SP1 on a server, you must update Active Directory. The following actions are required before installing SP1 on a server.
1. Exchange 2013 SP1 includes schema changes. Therefore, you will need to execute the following command to apply the schema changes.
setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
2. Exchange 2013 SP1 includes enterprise Active Directory changes (e.g., RBAC roles have been updated to support new cmdlets and/or properties). Therefore, you will need to execute the following command.
setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms
Once the above preparatory steps are completed, you can install SP1 on your servers. Of course, as always, if you don’t separately perform the above steps, they will be performed by Setup when you install your first Exchange 2013 SP1 server. If this is your first Exchange 2013 server deployment, you will need to deploy both Client Access Server and Mailbox Server roles in your organization.
If you already deployed Exchange 2013 RTM code and want to upgrade to SP1, you will run the following command from a command line.
setup.exe /m:upgrade /IAcceptExchangeServerLicenseTerms
Alternatively you can start the installation through the GUI installer.
Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., SP1) or the prior (e.g., CU3) Cumulative Update release.
Note: We have learned that some customers using third-party or custom transport agents may experience issues after installing SP1. If you experience installation issues, consult KB 2938053 to resolve issues with transport agents.
Our next update for Exchange 2013 will be released as Exchange 2013 Cumulative Update 5. This CU release will continue the Exchange Server 2013 release process.
If you want to learn more about Exchange Server 2013 SP1 and have the opportunity to ask questions to the Exchange team in person, come join us at the Microsoft Exchange Conference.
Brian Shiers
Technical Product Manager, Exchange
Over the years, displaying recipient photographs in the Global Address List (GAL) has been a frequently-requested feature, high on the wish lists of many Exchange folks. Particularly in large organizations or geographically dispersed teams, it's great to be able to put a face to a name for people you've never met or don't frequently have face time with. Employees are commonly photographed when issuing badges/IDs, and many organizations publish the photos on intranets.
There have been questions about workarounds or third-party add-ins for Outlook, and you can also find some sample code on MSDN and elsewhere. A few years ago, an unnamed IT person wrote ASP code to make employee photos show up on the intranet based on the Employee ID attribute in Active Directory - which was imported from the company's LDAP directory. A fun project to satisfy the coder alter-ego of said IT person.
Luckily, you won't need to turn to your alter-ego to do this. Exchange 2010 and Outlook 2010 make this task a snap, with help from Active Directory. Active Directory includes the Picture attribute (we'll refer to it using its ldapDisplayName: thumbnailPhoto) to store thumbnail photos, and you can easily import photos— not the high-res ones from your 20 megapixel digital camera, but small, less-than-10K-ish ones, using Exchange 2010's Import-RecipientDataProperty cmdlet.
The first questions most IT folks will ask are: what will importing all those photos do to the size of my Active Directory database, and how much Active Directory replication traffic will this generate? The thumbnailPhoto attribute can accommodate photos of up to 100K in size, but the Import-RecipientDataProperty cmdlet won't allow you to import a photo that's larger than 10K.
The original picture used in this example was 9K, and you can compress it further to a much smaller size - say approximately 2K-2.5K - without any noticeable degradation at thumbnail display sizes. If you store user certificates in Active Directory, thumbnail pictures of 10K or smaller are comparable in size. Storing thumbnails for 10,000 users would take close to 100 MB, and it's data that doesn't change frequently.
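As a rough back-of-the-envelope check (Python here purely for illustration), the worst case - every photo at the 10K ceiling enforced by the cmdlet - works out to:

```python
# Worst-case directory growth from thumbnail photos:
# every photo at the 10K limit enforced by Import-RecipientDataProperty.
users = 10_000
thumb_kb = 10
print(round(users * thumb_kb / 1024, 2), "MB")  # 97.66 MB, i.e. close to 100 MB
```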
Note: The recommended thumbnail photo size in pixels is 96x96 pixels.
With that out of the way, let's go through the process of adding pictures.
First stop, the Active Directory Schema. A minor schema modification is required to mark the thumbnailPhoto attribute so that it replicates to the Global Catalog.
Note: If you're on Exchange 2010 SP1, skip this step. The attribute is modified by setup / SchemaPrep.
Regsvr32 schmmgmt.dll

After registering the snap-in, add the Active Directory Schema snap-in to an MMC console, locate the thumbnailPhoto attribute under Attributes, and select the Replicate this attribute to the Global Catalog checkbox in its properties.
Now you can start uploading pictures to Active Directory using the Import-RecipientDataProperty cmdlet, as shown in this example:
Import-RecipientDataProperty -Identity "Bharat Suneja" -Picture -FileData ([Byte[]]$(Get-Content -Path "C:\pictures\BharatSuneja.jpg" -Encoding Byte -ReadCount 0))
To perform a bulk operation you can use the Get-Mailbox cmdlet with your choice of filter (or use the Get-DistributionGroupMember cmdlet if you want to do this for members of a distribution group), and pipe the mailboxes to a foreach loop. You can also retrieve the user name and path to the thumbnail picture from a CSV/TXT file.
Now, let's fire up Outlook 2010 and take a look at how this appears.
In the Address Book/GAL properties for the recipient
Figure 2: Thumbnail displayed in a recipient's property pages in the GAL
When you receive a message from a user who has the thumbnail populated, it shows up in the message preview.
Figure 3: Thumbnail displayed in a message
While composing a message, the thumbnail also shows up when you hover the mouse over the recipient's name.
Figure 4: Recipient's thumbnail displayed on mouse over when composing a message
There are other locations in Outlook where photos are displayed. For example, in the Account Settings section in the Backstage Help view.
Update from the Outlook team
Our friends on the Outlook team have asked us to point out that the new Outlook Social Connector also displays GAL photos, as well as photos from Contacts folders and from social networks, as shown in this screenshot.
More details and video in Announcing the Outlook Social Connector on the Outlook team blog.
After you've loaded photos in Active Directory, you'll need to update the Offline Address Book (OAB) for Outlook cached mode clients. This example updates the Default Offline Address Book:
Update-OfflineAddressBook "Default Offline Address Book"
In Exchange 2010, the recipient attributes included in an OAB are specified in the ConfiguredAttributes property of the OAB. ConfiguredAttributes is populated with a default set of attributes. You can modify it using the Set-OfflineAddressBook cmdlet to add/remove attributes as required.
By default, thumbnailPhoto is included in the OAB as an Indicator attribute. This means the value of the attribute isn't copied to the OAB; instead, it simply indicates that the client should get the value from Active Directory. If an Outlook client (including Outlook Anywhere clients connected to Exchange using HTTPS) can access Active Directory, the thumbnail will be downloaded and displayed. When offline, no thumbnail is downloaded. Another example of an Indicator attribute is UmSpokenName.
You can list all attributes included in the default OAB using the following command:
(Get-OfflineAddressBook "Default Offline Address Book").ConfiguredAttributes
For true offline use, you could modify the ConfiguredAttributes of an OAB to make thumbnailPhoto a Value attribute. After this is done and the OAB updated, the photos are added to the OAB (yes, all 20,000 photos you just uploaded...). Depending on the number of users and sizes of thumbnail photos uploaded, this would add significant bulk to the OAB. Test this scenario thoroughly in a lab environment— chances are you may not want to provide the GAL photo bliss to offline clients in this manner.
To prevent Outlook cached mode clients from displaying thumbnail photos (remember, the photo is not in the OAB – just a pointer to go fetch it from Active Directory), you can remove the thumbnailPhoto attribute from the ConfiguredAttributes property of an OAB using the following command:
$attributes = (Get-OfflineAddressBook "Default Offline Address Book").ConfiguredAttributes
$attributes.Remove("thumbnailphoto,Indicator")
Set-OfflineAddressBook "Default Offline Address Book" -ConfiguredAttributes $attributes
Bharat Suneja
To visit this post again, use the short URL aka.ms/galphotos. To go to the 'GAL Photos: Frequently Asked Questions' post, use aka.ms/galphotosfaq.
Since the release to manufacturing (RTM) of Exchange 2013, you have been waiting for our sizing and capacity planning guidance. This is the first official release of our guidance in this area, and updates to our TechNet content will follow in a future milestone.
As we continue to learn more from our own internal deployments of Exchange 2013, as well as from customer feedback, you will see further updates to our sizing and capacity planning guidance in two forms: changes to the numbers mentioned in this document, as well as further guidance on specific areas not covered here. Let us know what you think we are missing and we will do our best to respond with better information over time.
Historically, the Exchange Server product group has used various sources of data to produce sizing guidance. Typically, this data would come from scale tests run early in the product development cycle, and we would then fine-tune that guidance with observations from production deployments closer to final release. Production deployments have included Exchange Dogfood (our internal pre-release deployment that hosts the Exchange team and various other groups at Microsoft), Microsoft IT’s corporate Exchange deployment, and various early adopter programs.
For Exchange 2013, our guidance is primarily based on observations from the Exchange Dogfood deployment. Dogfood hosts some of the most demanding Exchange users at Microsoft, with extreme messaging profiles and many client sessions per user across multiple client types. Many users in the Dogfood deployment send and receive more than 500 messages per day, and typically have multiple Outlook clients and multiple mobile devices simultaneously connected and active. This allows our guidance to be somewhat conservative, taking into account additional overhead from client types that we don’t regularly see in our internal deployments as well as client mixes that might be different from what's considered “normal” at Microsoft.
Does this mean that you should take this conservative guidance and adjust the recommendations such that you deploy less hardware? Absolutely not. One of the many things we have learned from operating our own very high-scale service is that availability and reliability are very dependent on having capacity available to deal with those unexpected peaks.
Sizing is both a science and an art form. Attempting to apply too much science to the process (trying to get too accurate) usually results in not having enough extra capacity available to deal with peaks, and in the end, results in a poor user experience and decreased system availability. On the other hand, there does need to be some science involved in the process, otherwise it’s very challenging to have a predictable and repeatable methodology for sizing deployments. We strive to achieve the right balance here.
From a sizing and performance perspective, there are a number of advantages with the new Exchange 2013 architecture. As many of you are aware, a couple of years ago we began recommending multi-role deployment for Exchange 2010 (combining the Mailbox, Hub Transport, and Client Access Server (CAS) roles on a single server) as a great way to take advantage of hardware resources on modern servers, as well as a way to simplify capacity planning and deployment. These same advantages apply to the Exchange 2013 Mailbox role as well. We like to think of the services running on the Mailbox role as providing a balanced utilization of resources rather than having a set of services on a role that are very disk intensive, and a set of services on another role that are very CPU intensive.
Another example to consider for the Mailbox role is cache effectiveness. Software developers use in-memory caching to prevent having to use higher-latency methods to retrieve data (like LDAP queries, RPCs, or disk reads). In the Exchange 2007/2010 architecture, processing for operations related to a particular user could occur on many servers throughout the topology. One CAS might be handling Outlook Web App for that user, while another (or more than one) CAS might be handling Exchange ActiveSync connections, and even more CAS might be processing Outlook Anywhere RPC proxy load for that same user. It’s even possible that the set of servers handling that load could be changing on a regular basis. Any data associated with that user stored in a cache would become useless (effectively a waste of memory) as soon as those connections moved to other servers. In the Exchange 2013 architecture, all workload processing for a given user occurs on the Mailbox server hosting the active copy of that user’s mailbox. Therefore, cache utilization is much more effective.
The new CAS role has some nice benefits as well. Given that the role is totally stateless from a user perspective, it becomes very easy to scale up and down as demands change by simply adding or removing servers from the topology. Compared to the CAS role in prior releases, hardware utilization is dramatically reduced meaning that fewer CAS role machines will be required. Additionally, it may make sense for many customers to consider a multi-role deployment in which CAS and Mailbox are co-located – this allows further simplification of capacity planning and deployment, and also increases the number of available CAS which has a positive effect on service availability. Look for a follow up post on the benefits of a multi-role deployment soon.
Sizing an Exchange deployment has six major phases, and I will go through each of them in this post in some detail.
The primary input to all of the calculations that you will perform later is the average user profile of the deployment, where the user profile is defined as the sum of total messages sent and total messages received per-user, per-workday (on average). Many organizations have quite a bit of variability in user profiles. For example, a segment of users might be considered “Information Workers” and spend a good part of their day in their mailbox sending and reading mail, while another segment of users might be more focused on other tasks and use email infrequently. Sizing for these segments of users can be accomplished by either looking at the entire system using weighted averages, or by breaking up the sizing process to align with the various segments of users. In general it’s certainly easier to size the whole system as a unit, but there may be specific requirements (like the use of certain 3rd party tools or devices) which will significantly impact the sizing calculation for one or more of the user segments, and it can be very difficult to apply sizing factors to a user segment while attempting to size the entire solution as a unit.
The obvious question in your mind is how to go get this user profile information. If you are starting with an existing Exchange deployment, there are a number of options that can be used, assuming that you aren’t the elusive Exchange admin who actually tracks statistics like this on an ongoing basis. If you are using Exchange 2007 or earlier, you can utilize the Exchange Profile Analyzer (EPA) tool, which will provide overall user profile statistics for your Exchange organization as well as detailed per-user statistics if required. If you are on Exchange 2010, the EPA tool is not an option for you. One potential option is to evaluate message traffic using performance counters to come up with user profile averages on a per-server basis. This can be done by monitoring the MSExchangeIS\Messages Submitted/sec and MSExchangeIS\Messages Delivered/sec counters during peak average periods and extrapolating the recorded data to represent daily per-user averages. I will cover this methodology in a future blog post, as it will take a fair amount of explanation. Another option is to use message tracking logs to generate these statistics. This could be done via some crafty custom PowerShell scripting, or you could look for scripts that attempt to do this work for you already. One of our own consultants points to an example on his blog.
Typical user profiles range from 50-500 messages per-user/per-day, and we provide guidance for those profiles. When in doubt, round up.
The other important piece of profile information for sizing is the average message size seen in the deployment. This can be obtained from EPA, or from the other mentioned methods (via transport performance counters, or via message tracking logs). Within Microsoft, we typically see average message sizes of around 75KB, but we certainly have worked with customers that have much higher average message sizes. This can vary greatly by industry, and by region.
Just as we recommended for Exchange 2010, the right way to start with sizing calculations for Exchange 2013 is with the Mailbox role. In fact, those of you who have sized deployments for Exchange 2010 will find many similarities with the methodology discussed here.
Throughout this article, we will be referring to an example deployment. The deployment is for a relatively large organization with the following attributes:
The first thing you need to determine is your high availability model, e.g., how you will meet the availability requirements that you determined earlier. This likely includes multiple database copies in one or more Database Availability Groups, which will have an impact on storage capacity and IOPS requirements. The TechNet documentation on this topic provides some background on the capabilities of Exchange 2013 and should be reviewed as part of the sizing process.
At a minimum, you need to be able to answer the following questions:
Once you have an understanding of how you will meet your high availability requirements, you should know the number of database copies and sites that will be deployed. Given this, you can begin to evaluate capacity requirements. At a basic level, you can think of capacity requirements as consisting of storage for mailbox data (primarily based on mailbox storage quotas), storage for database log files, storage for content indexing files, and overhead for growth. Every copy of a mailbox database is a multiplier on top of these basic storage requirements. As a simplistic example, if I was planning for 500 mailboxes of 1GB each, the storage for mailbox data would be 500GB, and then I would need to apply various factors to that value to determine the per-copy storage requirement. From there, if I needed 3 copies of the data for high availability, I would then need to multiply by 3 to obtain the overall capacity requirement for the solution (all servers). In reality, the storage requirements for Exchange are far more complex, as you will see below.
To determine the actual size of a mailbox on disk, we must consider 3 factors: the mailbox storage quota, database white space, and recoverable items.
The mailbox storage quota is what most people think of as the “size of the mailbox”: it's the user-perceived size of their mailbox and represents the maximum amount of data that the user can store in their mailbox on the server. While this certainly represents the majority of space utilization for Exchange databases, it's not the only element we have to size for.
Database whitespace is the amount of space in the mailbox database file that has been allocated on disk but doesn’t contain any in-use database pages. Think of it as available space to grow into. As content is deleted out of mailbox databases and eventually removed from the mailbox recoverable items, the database pages that contained that content become whitespace. We recommend planning for whitespace size equal to 1 day worth of messaging content.
Estimated Database Whitespace per Mailbox = per-user daily message profile x average message size
This means that a user with the 200 message/day profile and an average message size of 75KB would be expected to consume the following whitespace:
200 messages/day x 75KB = 14.65MB
When items are deleted from a mailbox, they are really “soft-deleted” and moved temporarily to the recoverable items folder for the duration of the deleted item retention period. Like Exchange 2010, Exchange 2013 has a feature known as single item recovery which prevents data from being purged from the recoverable items folder before the deleted item retention window has passed. When this is enabled, we expect to see a 1.2 percent increase in mailbox size for a 14 day deleted item retention window. Additionally, we expect to see a 3 percent increase in the size of the mailbox for calendar item version logging, which is enabled by default. Given that a mailbox will eventually reach a steady state where the amount of new content is approximately equal to the amount of deleted content (in order to remain under quota), we would expect the size of the items in the recoverable items folder to eventually equal the size of new content sent and received during the retention window. This means that the overall size of the recoverable items folder can be calculated as follows:
Recoverable Items Folder Size = (per-user daily message profile x average message size x deleted item retention window) + (mailbox quota size x 0.012) + (mailbox quota size x 0.03)
If we carry our example forward with the 200 message/day profile, a 75KB average message size, a deleted item retention window of 14 days, and a mailbox quota of 10GB, the expected recoverable items folder size would be:
(200 messages/day x 75KB x 14 days) + (10GB x 0.012) + (10GB x 0.03) = 210,000KB + 125,829.12KB + 314,572.8KB = 635.16MB
Given the results from these calculations, we can sum up the mailbox capacity factors to get our estimated mailbox size on disk:
Mailbox Size on disk = 10GB mailbox quota + 14.65MB database whitespace + 635.16MB Recoverable Items Folder = 10.63GB
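The mailbox-size-on-disk math above can be sketched as a quick script (Python here purely for illustration; the profile numbers are the ones from the example):

```python
KB = 1.0
MB = 1024 * KB
GB = 1024 * MB  # all sizes tracked in KB

profile = 200        # messages sent + received per user, per day
msg_size = 75 * KB   # average message size
quota = 10 * GB      # mailbox storage quota
retention = 14       # deleted item retention window, days

whitespace = profile * msg_size
recoverable = (profile * msg_size * retention   # retention-window content
               + quota * 0.012                  # single item recovery overhead
               + quota * 0.03)                  # calendar version logging overhead
size_on_disk = quota + whitespace + recoverable

print(round(whitespace / MB, 2))    # 14.65
print(round(recoverable / MB, 2))   # 635.16
print(round(size_on_disk / GB, 2))  # 10.63
```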
The space required for files related to the content indexing process can be estimated as 20% of the database size.
Per-Database Content Indexing Space = database size x 0.20
In addition, you must size for one additional content index (i.e., an additional 20% of one of the mailbox databases on the volume) to allow content indexing maintenance tasks (specifically the master merge process) to complete. The best way to express the master merge space requirement is to take the average database file size across all databases on a volume and add one database's worth of disk consumption when determining the per-volume content indexing space requirement:
Per-Volume Content Indexing Space = (average database size x (databases on the volume + 1) x 0.20)
As a simple example, if we had 2 mailbox databases on a single volume and each database consumed 100GB of space, we would compute the per-volume content indexing space requirement like this:
100GB database size x (2 databases + 1) x 0.20 = 60GB
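The per-volume formula is easy to wrap in a helper (a sketch; the function name is mine, not from the product):

```python
def ci_space_per_volume(avg_db_size_gb, dbs_on_volume):
    """Content indexing space: 20% per database, plus one extra
    index's worth of space for master merge maintenance."""
    return avg_db_size_gb * (dbs_on_volume + 1) * 0.20

print(ci_space_per_volume(100, 2))  # two 100GB databases need 60GB of index space
```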
The amount of space required for ESE transaction log files can be computed using the same method as Exchange 2010. You can find details on the process in the Exchange 2010 TechNet guidance. To summarize the process, you must first determine the base guideline for number of transaction logs generated per-user, per-day, using the following table. As in Exchange 2010, log files are 1MB in size, making the math for log capacity quite straightforward.
Once you have the appropriate value from the table which represents guidance for a 75KB average message size, you may need to adjust the value based on differences in the target average message size. Every time you double the average message size, you must increase the logs generated per day by an additional factor of 1.9. For example:
Transaction logs at 200 messages/day with 150KB average message size = 40 logs/day (at 75KB average message size) x 1.9 = 76
Transaction logs at 200 messages/day with 300KB average message size = 40 logs/day (at 75KB average message size) x (1.9 x 2) = 152
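That scaling rule (x1.9 when the average message size doubles to 150KB, then linear growth beyond that) can be expressed as a small function (a sketch; the function name is mine):

```python
def logs_per_day(base_logs_at_75kb, avg_msg_kb):
    # Base guidance assumes a 75KB average message size. Doubling to
    # 150KB multiplies log volume by 1.9; larger sizes scale linearly
    # from there (1.9 x 2 at 300KB, and so on).
    if avg_msg_kb <= 75:
        return base_logs_at_75kb
    return base_logs_at_75kb * 1.9 * (avg_msg_kb / 150)

print(logs_per_day(40, 150))  # 76.0
print(logs_per_day(40, 300))  # 152.0
```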
While daily log volume is interesting, it doesn’t represent the entire requirement for log capacity. If traditional backups are being used, logs will remain on disk for the interval between full backups. When mailboxes are moved, that volume of change to the target database will result in a significant increase in the amount of logs generated during the day. In a solution where Exchange native data protection is in use (e.g., you aren’t using traditional backups), logs will not be truncated if a mailbox database copy is failed or if an entire server is unreachable unless an administrator intervenes. There are many factors to consider when sizing for required log capacity, and it is certainly worth spending some time in the Exchange 2010 TechNet guidance mentioned earlier to fully understand these factors before proceeding. Thinking about our example scenario, we could consider log space required per database if we estimate the number of users per database at 65. We will also assume that 1% of our users are moved per week in a single day, and that we will allocate enough space to support 3 days of logs in the case of failed copies or servers.
Log Capacity to Support 3 Days of Truncation Failure = (65 mailboxes/database x 40 logs/day x 1MB log size) x 3 days = 7.62GB
Log Capacity to Support 1% Mailbox Moves per Week = 65 mailboxes/database x 0.01 x 10.63GB mailbox size = 6.91GB
Total Log Capacity Required per Database = 7.62GB + 6.91GB = 14.53GB
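Those three lines of arithmetic, as a script (example numbers carried over from the scenario above):

```python
users_per_db = 65
logs_per_user_day = 40        # 200 messages/day profile at 75KB
log_size_gb = 1 / 1024        # transaction logs are 1MB each
mailbox_gb = 10.63            # mailbox size on disk from earlier

truncation_reserve = users_per_db * logs_per_user_day * log_size_gb * 3
move_reserve = users_per_db * 0.01 * mailbox_gb   # 1% of users moved in a day
total = truncation_reserve + move_reserve

print(round(truncation_reserve, 2))  # 7.62
print(round(move_reserve, 2))        # 6.91
print(round(total, 2))               # 14.53
```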
The easiest way to think about sizing for storage capacity without having a calculator tool available is to make some assumptions up front about the servers and storage that will be used. Within the product group, we are big fans of 2U commodity server platforms with ~12 large form-factor drive bays in the chassis. This allows for a 2 drive RAID array for the operating system, Exchange install path, transport queue database, and other ancillary files, and ~10 remaining drives to use as mailbox database storage in a JBOD direct attached storage configuration with no RAID. Fill this server up with 4TB SATA or midline SAS drives, and you have a fantastic Exchange 2013 server. If you need even more storage, it’s quite easy to add an additional shelf of drives to the solution.
Using the large deployment example and thinking about how we might size this on the commodity server platform, we can consider a server scaling unit that has a total of 24 large form-factor drive bays containing 4TB midline SAS drives. We will use 2 of those drives for the OS & Exchange, and the remaining drive bays will be used for Exchange mailbox database capacity. Let’s use 12 of those drive bays for databases – that leaves 10 remaining drive bays that could contain spares or remain empty. For this sizing exercise, let’s also plan for 4 databases per drive. Each of those drives has a formatted capacity of ~3725GB. The first step in figuring out the number of mailboxes per database is to look at overall capacity requirements for the mailboxes, content indexes, and required free space (which we will set to 5%).
To calculate the maximum amount of space available for mailboxes, let’s apply a formula (note that this doesn’t consider space for logs – we will make sure that the volume will have enough space for logs later in the process). First, we can remove our required free space from the available storage on the drive:
Available Space (excluding required free space) = Formatted capacity of the drive x (1 – free space percentage)

Then we can remove the space required for content indexing. As discussed above, the space required for content indexing will be 20% of the database size, with an additional 20% of one database for content indexing maintenance tasks. Given the additional 20% requirement, we can’t model the overall space requirement as a simple 20% of the remaining space on the volume. Instead we need to compute a new factor that takes the number of databases per-volume into consideration:

Content Indexing Space Factor = 0.20 x (databases per volume + 1) / databases per volume

Now we can remove the space for content indexing from our available space on the volume:

Space Available for Databases = Available Space / (1 + Content Indexing Space Factor)

And we can then divide by the number of databases per-volume to get our maximum database size:

Maximum Database Size = Space Available for Databases / databases per volume

In our example scenario, we would obtain the following result:

Available Space = 3725GB x (1 – 0.05) = 3538.75GB
Space Available for Databases = 3538.75GB / (1 + (0.20 x 5 / 4)) = 2831GB
Maximum Database Size = 2831GB / 4 = 707.75GB

Given this value, we can then calculate our maximum users per database (from a capacity perspective, as this may change when we evaluate the IO requirements):

Maximum Users per Database = 707.75GB / 10.63GB per mailbox = 66 users (rounded down)
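One way to reproduce the volume arithmetic (a sketch under the stated assumptions: 4TB drives formatted to ~3725GB, 5% free space, 4 databases per volume):

```python
formatted_gb = 3725
free_space = 0.05
dbs_per_volume = 4

available = formatted_gb * (1 - free_space)
# Databases plus their indexes consume n*D + 0.2*D*(n+1),
# so the space left for database files is available / (1 + 0.2*(n+1)/n).
db_space = available / (1 + 0.2 * (dbs_per_volume + 1) / dbs_per_volume)
max_db_size = db_space / dbs_per_volume

print(round(available, 2))       # 3538.75
print(round(max_db_size, 2))     # 707.75
print(int(max_db_size / 10.63))  # 66 users per database, capacity-wise
```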
Let’s see if that number is actually reasonable given our 4 copy configuration. We are going to use 16-node DAGs for this deployment to take full advantage of the scalability and high-availability benefits of large DAGs. While we have many drives available on our selected hardware platform, we will be limited by the maximum of 50 database copies per-server in Exchange 2013. Considering this maximum and our desire to have 4 databases per volume, we can calculate the maximum number of drives for mailbox database usage as:

Maximum Database Volumes = 50 database copies per-server / 4 databases per volume = 12.5, rounded down to 12 volumes
With 12 database volumes and 4 database copies per-volume, we will have 48 total database copies per server.
With 66 users per database and 100,000 total users, we end up with the following required DAG count for the user population:
In this very large deployment, we are using a DAG as a unit of scale or “building block” (e.g. we perform capacity planning based on the number of DAGs required to meet demand, and we deploy an entire DAG when we need additional capacity), so we don’t intend to deploy a partial DAG. If we round up to 8 DAGs we can compute our final users per database count:
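As a sanity check, the DAG "building block" math can be scripted as well (a sketch using the values from this example):

```python
import math

total_users = 100_000
copies_per_server = 48     # 12 volumes x 4 database copies per volume
copies_per_db = 4          # 4-copy DAG configuration
servers_per_dag = 16
max_users_per_db = 66      # capacity-bound value computed earlier

# Unique databases hosted by one DAG
dbs_per_dag = copies_per_server * servers_per_dag // copies_per_db  # 192

# DAGs needed for the population, rounded up to whole building blocks
dags_needed = math.ceil(total_users / (dbs_per_dag * max_users_per_db))  # 8

# With 8 full DAGs, users spread across all databases: ~65.1, plan for 65
users_per_db = total_users / (dags_needed * dbs_per_dag)
```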
With 65 users per-database, that means we will expect to consume the following space for mailbox databases:
Estimated Database Size = 65 users x 10.63GB = 690.95GB
Database Consumption / Volume = 690.95GB x 4 databases = 2763.8GB
Using the formula mentioned earlier, we can compute our estimated content index consumption as well:
690.95GB database size x (4 databases + 1) x 0.20 = 690.95GB
You’ll recall that we computed transaction log space requirements earlier, and it turns out that we magically computed those values with the assumption that we would have 65 users per-database. What a pleasant coincidence! So we will need 14.53GB of space for transaction logs per-database, or to get a more useful result:
Log Space Required / Volume = 14.53GB x 4 databases = 58.12GB
To sum it up, we can estimate our total per-volume space utilization and make sure that we have plenty of room on our target 4TB drives:
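Here's a quick Python sketch of that per-volume summation, using this example's numbers:

```python
users_per_db = 65
mailbox_size_gb = 10.63
dbs_per_volume = 4
log_gb_per_db = 14.53                      # transaction log space per database
available_gb = 3725 * (1 - 0.05)           # usable space after 5% free space

db_size_gb = users_per_db * mailbox_size_gb              # 690.95GB
db_space_gb = db_size_gb * dbs_per_volume                # 2763.8GB
ci_space_gb = db_size_gb * (dbs_per_volume + 1) * 0.20   # 690.95GB
log_space_gb = log_gb_per_db * dbs_per_volume            # 58.12GB

total_gb = db_space_gb + ci_space_gb + log_space_gb      # ~3512.87GB
fits = total_gb <= available_gb                          # True
```

The total of ~3512.87GB comes in under the ~3538.75GB available on each 4TB drive after free space is reserved.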
Looks like our database volumes are sized perfectly!
To determine the IOPS requirements for a database, we look at the number of users hosted on the database and consider the guidance provided in the following table to compute total required IOPS when the database is active or passive.
For example, with 50 users in a database, with an average message profile of 200, we would expect that database to require 50 x 0.134 = 6.7 transactional IOPS when the database is active, and 50 x 0.134 = 6.7 transactional IOPS when the database is passive. Don’t forget to consider database placement which will impact the number of databases with IOPS requirements on a given storage volume (which could be a single JBOD drive or might be a more complex storage configuration).
Going back to our example scenario, we can evaluate the IOPS requirement of the solution, recalling that the average user profile in that deployment is the 200 message/day profile. We have 65 users per database and 4 databases per JBOD drive, so we can estimate our IOPS requirement in worst-case (all databases active) as:
65 mailboxes x 4 databases per-drive x 0.134 IOPS/mailbox at 200 messages/day profile = ~34.84 IOPS per drive
Midline SAS drives typically provide ~57.5 random IOPS (based on our own internal observations and benchmark tests), so we are well within design constraints when thinking about IOPS requirements.
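The IOPS headroom check amounts to very little code, but it's worth writing down (the 57.5 IOPS figure is the typical midline SAS observation mentioned above):

```python
users_per_db = 65
dbs_per_drive = 4
iops_per_mailbox = 0.134   # 200 messages/day profile
drive_iops = 57.5          # typical random IOPS for a midline SAS drive

required_iops = users_per_db * dbs_per_drive * iops_per_mailbox  # ~34.84
headroom = drive_iops - required_iops                            # ~22.66
```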
While IOPS requirements are usually the primary storage throughput concern when designing an Exchange solution, it is possible to run up against bandwidth limitations with various types of storage subsystems. The IOPS sizing guidance above is looking specifically at transactional (somewhat random) IOPS and is ignoring the sequential IO portion of the workload. One place that sequential IO becomes a concern is with storage solutions that are running a large amount of sequential IO through a common channel. A common example of this type of load is the ongoing background database maintenance (BDM) which runs continuously on Exchange mailbox databases. While this BDM workload might not be significant for a few databases stored on a JBOD drive, it may become a concern if all of the mailbox database volumes are presented through a common iSCSI or Fibre Channel interface. In that case, the bandwidth of that common channel must be considered to ensure that the solution doesn’t bottleneck due to these IO patterns.
In Exchange 2013, we expect BDM to consume approximately 1MB/sec per database copy, which is a significant reduction from Exchange 2010. This reduction helps make it possible to store multiple mailbox databases on the same JBOD drive spindle, and it also helps avoid bottlenecks on networked storage deployments such as iSCSI. This bandwidth utilization is in addition to the bandwidth consumed by the transactional IO activity associated with user and system workload processes, as well as the storage bandwidth consumed by the log replication and replay process in a DAG.
Since transport components (with the exception of the front-end transport component on the CAS role) are now part of the Mailbox role, we have included CPU and memory requirements for transport with the general Mailbox role requirements described later. Transport also has storage requirements associated with the queue database. These requirements, much like I described earlier for mailbox storage, consist of capacity factors and IO throughput factors.
Transport storage capacity is driven by two needs: queuing (including shadow queuing) and Safety Net (which is the replacement for transport dumpster in this release). You can think of the transport storage capacity requirement as the sum of message content on disk in a worst-case scenario, consisting of three elements:
Of course, all three of these factors are also impacted by shadow queuing in which a redundant copy of all messages is stored on another server. At this point, it would be a good idea to review the TechNet documentation on Transport High Availability if you aren’t familiar with the mechanics of shadow queuing and Safety Net.
In order to figure out the messages per day that you expect to run through the system, you can look at the user count and messaging profile. Simply multiplying these together will give you a total daily mail volume, but it will be a bit higher than necessary since it is double counting messages that are sent within the organization (i.e. a message sent to a coworker will count towards the profile of the sending user as well as the profile of the receiving user, but it’s really just one message traversing the system). The simplest way to deal with that would be to ignore this fact and oversize transport, which will provide additional capacity for unexpected peaks in message traffic. An alternative way to determine daily message flow would be to evaluate performance counters within your existing messaging system.
To determine the maximum size of the transport database, we can look at the entire system as a unit and then come up with a per-server value.
Overall Daily Message Traffic = number of users x message profile
Overall Transport DB Size = average message size x overall daily message traffic x (1 + (percentage of messages queued x maximum queue days) + Safety Net hold days) x 2 copies for high availability
Let’s use the 100,000 user sizing example again and size the transport database using the simple method.
Overall Transport DB Size = 75KB x (100,000 users x 200 messages/day) x (1 + (50% x 2 maximum queue days) + 2 Safety Net hold days) x 2 copies = 11,444GB
In our example scenario, we have 8 DAGs, each containing 16-nodes, and we are designing to handle double node failures in each DAG. This means that in a worst-case failure event we would have 112 servers online with 2 failed servers in each DAG. We can use this value to determine a per-server transport DB size:
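Putting the transport capacity math together in Python (a sketch using this example's inputs):

```python
users = 100_000
profile = 200               # messages/day
avg_msg_kb = 75
pct_queued = 0.50
max_queue_days = 2
safety_net_days = 2
ha_copies = 2               # shadow queuing keeps a redundant copy of each message

daily_messages = users * profile                       # 20,000,000
overall_kb = (avg_msg_kb * daily_messages
              * (1 + pct_queued * max_queue_days + safety_net_days)
              * ha_copies)
overall_gb = overall_kb / 1024 / 1024                  # ~11,444GB

# 8 DAGs x 16 nodes, minus 2 failed servers per DAG in worst case
servers_online = 8 * 16 - 8 * 2                        # 112
per_server_gb = overall_gb / servers_online            # ~102.2GB
```

So each server should plan for roughly 102GB of transport database capacity in the worst-case failure event.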
Sizing for transport IO throughput requirements is actually quite simple. Transport has taken advantage of many of the IO reduction changes to the ESE database that have been made in recent Exchange releases. As a result, the number of IOPS required to support transport is significantly lower. In the internal deployment we used to produce this sizing guidance, we see approximately 1 DB write IO per message and virtually no DB read IO, with an average message size of ~75KB. We expect that as average message size increases, the amount of transport IO required to support delivery and queuing would increase. We do not currently have specific guidance on what that curve looks like, but it is an area of active investigation. In the meantime, our best practices guidance for the transport database is to leave it in the Exchange install path (likely on the OS drive) and ensure that the drive supporting that directory path is using a protected write cache disk controller, set to 100% write cache if the controller allows optimization of read/write cache settings. The write cache allows transport database log IO to become effectively “free” and allows transport to handle a much higher level of throughput.
Once we have our storage requirements figured out, we can move on to thinking about CPU. CPU sizing for the Mailbox role is done in terms of megacycles. A megacycle is a unit of processing work equal to one million CPU cycles. In very simplistic terms, you could think of a 1 MHz CPU performing a megacycle of work every second. Given the guidance provided below for megacycles required for active and passive users at peak, you can estimate the required processor configuration to meet the demands of an Exchange workload. Following are our recommendations on the estimated required megacycles for the various user profiles.
The second column represents the estimated megacycles required on the Mailbox role server hosting the active copy of a user’s mailbox database. In a DAG configuration, the required megacycles for the user on each server hosting passive copies of that database can be found in the fourth column. If the solution is going to include multi-role (Mailbox+CAS) servers, use the value in the third column rather than the second, as it includes the additional CPU requirements for the CAS role.
It is important to note that while many years ago you could make an assumption that a 500 MHz processor could perform roughly double the work per unit of time as a 250 MHz processor, clock speeds are no longer a reliable indicator of performance. The internal architecture of modern processors is different enough between manufacturers as well as within product lines of a single manufacturer that it requires an additional normalization step to determine the available processing power for a particular CPU. We recommend using the SPECint_rate2006 benchmark from the Standard Performance Evaluation Corporation.
The baseline system used to generate this guidance was a Hewlett-Packard DL380p Gen8 server containing Intel Xeon E5-2650 2 GHz processors. The baseline system SPECint_rate2006 score is 540, or 33.75 per-core, given that the benchmarked server was configured with a total of 16 physical processor cores. Please note that this is a different baseline system than what was used to generate our Exchange 2010 guidance, so any tools or calculators that make assumptions based on the 2010 baseline system would not provide accurate results for sizing an Exchange 2013 solution.
Using the same general methodology we have recommended in prior releases, you can determine the estimated available Exchange workload megacycles available on a different processor through the following process:
Using the example HP platform with E5-2630 processors mentioned previously, we would calculate the following result:
2,123 megacycles per-core x 12 processor cores = 25,479 available megacycles per-server
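That normalization step can be sketched in Python. Note that the per-core SPECint_rate2006 value of 35.83 for the E5-2630 platform is an assumption here for illustration; look up the actual published result for your target processor:

```python
baseline_per_core = 33.75   # DL380p Gen8 with E5-2650 2GHz: score 540 / 16 cores
baseline_mcycles = 2000     # 2GHz baseline = 2000 megacycles per core

target_per_core = 35.83     # assumed per-core SPECint_rate2006 for the E5-2630
cores = 12

# Scale the baseline megacycles by the ratio of per-core benchmark scores
mcycles_per_core = target_per_core / baseline_per_core * baseline_mcycles  # ~2123
server_mcycles = mcycles_per_core * cores                                  # ~25,479
```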
Keep in mind that a good Exchange design should never plan to run servers at 100% of CPU capacity. In general, 80% CPU utilization in a failure scenario is a reasonable target for most customers. Given that caveat that the high CPU utilization occurs during a failure scenario, this means that servers in a highly available Exchange solution will often run with relatively low CPU utilization during normal operation. Additionally, there may be very good reasons to target a lower CPU utilization as maximum, particularly in cases where unanticipated spikes in load may result in acute capacity issues.
Going back to the example I used previously of 100,000 users with the 200 message/day profile, we can estimate the total required megacycles for the deployment. We know that there will be 4 database copies in the deployment, and that will help to calculate the passive megacycles required. We also know that this deployment will be using multi-role (Mailbox+CAS) servers. Given this information, we can calculate megacycle requirements as follows:
100,000 users x ((11.69 mcycles per active mailbox) + (3 passive copies x 2.74 mcycles per passive mailbox)) = 1,991,000 total mcycles required
You could then take that number and attempt to come up with a required server count. I would argue that it’s actually a much better practice to come up with a server count based on high availability requirements (taking into account how many component failures your design can handle in order to meet business requirements) and then ensure that those servers can meet CPU requirements in a worst-case failure scenario. You will either meet CPU requirements without any additional changes (if your server count is bound on another aspect of the sizing process), or you will adjust the server count (scale out), or you will adjust the server specification (scale up).
Continuing with our hypothetical example, if we knew that the high availability requirements for the design of the 100,000 user example resulted in a maximum of 16 databases being active at any time out of 48 total database copies per server, and we know that there are 65 users per database, we can determine the per-server CPU requirements for the deployment.
(16 databases x 65 mailboxes x 11.69 mcycles per active mailbox) + (32 databases x 65 mailboxes x 2.74 mcycles per passive mailbox) = 12157.6 + 5699.2 = 17,856.8 mcycles per server
Using the processor configuration mentioned in the megacycle normalization section (E5-2630 2.3 GHz processors on an HP DL380p Gen8), we know that we have 25,479 available mcycles on the server, so we would estimate a peak average CPU in worst-case failure of:
17,856.8 mcycles / 25,479 available mcycles = 70.1%
That is below our guidance of 80% maximum CPU utilization (in a worst-case failure scenario), so we would not consider the servers to be CPU bound in the design. In fact, we could consider adjusting the CPU selection to a cheaper option with reduced performance, bringing us closer to a peak average CPU of 80% in the worst-case failure scenario and reducing the cost of the overall solution.
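The worst-case CPU check for this example can be expressed as:

```python
active_dbs = 16
passive_dbs = 32
users_per_db = 65
active_mcycles = 11.69      # per active mailbox, multi-role (Mailbox+CAS), 200 profile
passive_mcycles = 2.74      # per passive mailbox
server_mcycles = 25_479     # normalized available megacycles per server

required = (active_dbs * users_per_db * active_mcycles
            + passive_dbs * users_per_db * passive_mcycles)  # ~17,856.8

utilization = required / server_mcycles                      # ~70.1%
cpu_bound = utilization > 0.80                               # False
```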
To calculate memory per server, you will need to know the per-server user count (both active and passive users) as well as determine whether you will run the Mailbox role in isolation or deploy multi-role servers (Mailbox+CAS). Keep in mind that regardless of whether you deploy roles in isolation or deploy multi-role servers, the minimum amount of RAM on any Exchange 2013 server is 8GB.
Memory on the Mailbox role is used for many purposes. As in prior releases, a significant amount of memory is used for ESE database cache and plays a large part in the reduction of disk IO in Exchange 2013. The new content indexing technology in Exchange 2013 also uses a large amount of memory. The remaining large consumers of memory are the various Exchange services that provide either transactional services to end-users or handle background processing of data. While each of these individual services may not use a significant amount of memory, the combined footprint of all Exchange services can be quite large.
Following is our recommended amount of memory for the Mailbox role on a per mailbox basis that we expect to be used at peak.
To determine the amount of memory that should be provisioned on a server, take the number of active mailboxes per-server in a worst-case failure and multiply by the value associated with the expected user profile. From there, round up to a value that makes sense from a purchasing perspective (i.e. it may be cheaper to configure 128GB of RAM compared to a smaller amount of RAM depending on slot options and memory module costs).
Mailbox Memory per-server = (worst-case active database copies per-server x users per-database x memory per-active mailbox)
For example, on a server with 48 database copies (16 active in worst-case failure), 65 users per-database, expecting the 200 profile, we would recommend:
16 x 65 x 48MB = 48.75GB, round up to 64GB
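The same calculation in Python, for completeness:

```python
active_dbs_worst_case = 16
users_per_db = 65
mem_per_mailbox_mb = 48    # recommended per-mailbox memory, 200 messages/day profile

mailbox_mem_gb = active_dbs_worst_case * users_per_db * mem_per_mailbox_mb / 1024
# 48.75GB -> round up to 64GB as a sensible purchasable configuration
```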
It’s important to note that the content indexing technology included with Exchange 2013 uses a relatively large amount of memory to allow both indexing and query processing to occur very quickly. This memory usage scales with the number of items indexed, meaning that as the number of total items stored on a Mailbox role server increases (for both active and passive copies), memory requirements for the content indexing processes will increase as well. In general, the guidance on memory sizing presented here assumes approximately 15% of the memory on the system will be available for the content indexing processes which means that with a 75KB average message size, we can accommodate mailbox sizes of 3GB at 50 message profile up to 32GB at the 500 message profile without adjusting the memory sizing. If your deployment will have an extremely small average message size or an extremely large average mailbox size, you may need to add additional memory to accommodate the content indexing processes.
Multi-role server deployments will have an additional memory requirement beyond the amounts specified above. CAS memory is computed as a base memory requirement for the CAS components (2GB) plus additional memory that scales based on the expected workload. This overall CAS memory requirement on a multi-role server can be computed using the following formula:
Essentially this is 2GB of memory for the base requirement, plus 2GB of memory for each processor core (or fractional processor core) serving active load at peak in a worst-case failure scenario. Reusing the example scenario, if I have 16 active databases per-server in a worst-case failure and my processor is providing 2123 mcycles per-core, I would need:
If we add that to the memory requirement for the Mailbox role calculated above, our total memory requirement for the multi-role server would be:
48.75GB for Mailbox + 5.12GB for CAS = 53.87GB, round up to 64GB
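Working the multi-role memory number end to end in Python (a sketch; the 8.5 megacycles per active mailbox is the Mailbox-role-only figure from this example, and 2,123 is the normalized megacycles per core):

```python
active_users = 16 * 65      # active mailboxes per-server in worst-case failure
active_mcycles = 8.5        # Mailbox-role-only active megacycles, 200 profile
cas_ratio = 0.375           # CAS CPU is 37.5% of active Mailbox CPU
mcycles_per_core = 2123

# Fractional processor cores serving CAS load at peak in worst-case failure
cas_cores = active_users * active_mcycles * cas_ratio / mcycles_per_core  # ~1.56

cas_mem_gb = 2 + 2 * cas_cores            # ~5.12GB
total_gb = 48.75 + cas_mem_gb             # ~53.87GB -> round up to 64GB
```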
Regardless of whether you are considering a multi-role or a split-role deployment, it is important to ensure that each server has a minimum amount of memory for efficient use of the database cache. There are some scenarios that will produce a relatively small memory requirement from the memory calculations described above. We recommend comparing the per-server memory requirement you have calculated with the following table to ensure you meet the minimum database cache requirements. The guidance is based on total database copies per-server (both active and passive). If the value shown in this table is higher than your calculated per-server memory requirement, adjust your per-server memory requirement to meet the minimum listed in the table.
In our example scenario, we are deploying 48 database copies per-server, so the minimum physical memory to provide necessary database cache would be 16GB. Since our computed memory requirement based on per-user guidance including memory for the CAS role (53.87GB) was higher than the minimum of 16GB, we don’t need to make any further adjustments to accommodate database cache needs.
With the new architecture of Exchange, Unified Messaging is now installed and ready to be used on every Mailbox and CAS. The CPU and memory guidance provided here assumes some moderate UM utilization. In a deployment with significant UM utilization with very high call concurrency, additional sizing may need to be performed to provide the best possible user experience. As in Exchange 2010, we recommend using a 100 concurrent call per-server limit as the maximum possible UM concurrency, and scale out the deployment if the sizing of your deployment becomes bound on this limit. Additionally, voicemail transcription is a very CPU-intensive operation, and by design will only transcribe messages when there is enough available CPU on the machine. Each voicemail message requires 1 CPU core for the duration of the transcription operation, and if that amount of CPU cannot be obtained, transcription will be skipped. In deployments that anticipate a high amount of voicemail transcription concurrency, server configurations may need to be adjusted to increase CPU resources, or the number of users per server may need to be scaled back to allow for more available CPU for voicemail transcription operations.
In the case where you are going to place the Mailbox and CAS roles on separate servers, the process of sizing CAS is relatively straightforward. CAS sizing is primarily focused on CPU and memory requirements. There is some disk IO for logging purposes, but it is not significant enough to warrant specific sizing guidance.
CAS CPU is sized as a ratio from Mailbox role CPU. Specifically, we need to get 37.5% of the megacycles used to support active users on the Mailbox role. You could think of this as a 3:8 ratio (CAS CPU to active Mailbox CPU) compared to the 3:4 ratio we recommended in Exchange 2010. One way to compute this would be to look at the total active user megacycles required for the solution, take 37.5% of that, and then determine the required CAS server count based on high availability requirements and multi-site design constraints. For example, consider the 100,000 user example using the 200 message/day profile:
Total CAS Required Mcycles = 100,000 users x 8.5 mcycles x 0.375 = 318,750 mcycles
Assuming that we want to target a maximum CPU utilization of 80% and the servers we plan to deploy have 25,479 available megacycles, we can compute the required number of servers quite easily:
Obviously we would then need to consider whether the 16 required servers meet our high availability requirements, taking into account the maximum number of CAS server failures we must design for given business requirements, as well as the site configuration, where some of the CAS servers may be in different sites handling different portions of the workload. Since we specified in our example scenario that we want to survive a double failure in the single site, we would increase our 16 CAS servers to 18 so that we could sustain 2 CAS failures and still handle the workload.
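The CAS server count calculation looks like this in Python (using this example's numbers):

```python
import math

total_cas_mcycles = 100_000 * 8.5 * 0.375   # 318,750 mcycles
server_mcycles = 25_479                     # normalized available megacycles per server
max_util = 0.80                             # target maximum CPU utilization

servers = math.ceil(total_cas_mcycles / (server_mcycles * max_util))  # 16

# Add 2 servers so the site can sustain a double failure and still carry the load
servers_with_ha = servers + 2               # 18
```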
To size memory, we will use the same formula that was used for Exchange 2010:
Per-Server CAS Memory = 2GB + 2GB per physical processor core
Using the example scenario we have been using, we can calculate the per-server CAS memory requirement as:
In this example, 20.77GB would be the guidance for required CAS memory, but obviously you would need to round-up to the next highest possible (or highest performing) memory configuration for the server platform: perhaps 24GB.
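In Python, that per-server CAS memory number works out as follows (a sketch; it assumes the CAS load is spread across the 16 servers carrying it in a worst-case failure, which is how the 20.77GB figure above is derived):

```python
total_cas_mcycles = 318_750
cas_servers = 16            # servers carrying the CAS load in worst-case failure
mcycles_per_core = 2123

# Fractional processor cores utilized per server at peak
utilized_cores = total_cas_mcycles / cas_servers / mcycles_per_core  # ~9.38

cas_mem_gb = 2 + 2 * utilized_cores                                  # ~20.77GB
```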
Active Directory sizing remains the same as it was for Exchange 2010. As we gain more experience with production deployments we may adjust this in the future. For Exchange 2013, we recommend deploying a ratio of 1 Active Directory global catalog processor core for every 8 Mailbox role processor cores handling active load, assuming 64-bit global catalog servers:
If we revisit our example scenario, we can easily calculate the required number of GC cores required.
Assuming that my Active Directory GCs are also deployed on the same server hardware configuration as my CAS & Mailbox role servers in the example scenario with 12 processor cores, then my GC server count would be:
In order to sustain double failures, we would need to add 2 more GCs to this calculation, which would take us to 7 GC servers for the deployment.
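The GC sizing steps above can be sketched as:

```python
import math

total_active_mcycles = 100_000 * 8.5   # active Mailbox megacycles for the population
mcycles_per_core = 2123                # normalized megacycles per core
gc_ratio = 8                           # 1 GC core per 8 active Mailbox cores
cores_per_gc_server = 12

mailbox_cores = total_active_mcycles / mcycles_per_core   # ~400 active cores
gc_cores = mailbox_cores / gc_ratio                       # ~50 GC cores
gc_servers = math.ceil(gc_cores / cores_per_gc_server)    # 5

gc_servers_with_ha = gc_servers + 2                       # 7, to sustain double failures
```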
As a best practice, we recommend sizing memory on the global catalog servers such that the entire NTDS.DIT database file can be contained in RAM. This will provide optimal query performance and a much better end-user experience for Exchange workloads.
Turn it off. While modern implementations of simultaneous multithreading (SMT), also known as hyperthreading, can absolutely improve CPU throughput for most applications, the benefits to Exchange 2013 do not outweigh the negative impacts. It turns out that there can be a significant impact to memory utilization on Exchange servers when hyperthreading is enabled due to the way the .NET server garbage collector allocates heaps. The server garbage collector looks at the total number of logical processors when an application starts up and allocates a heap per logical processor. This means that the memory usage at startup for one of our services using the server garbage collector will be close to double with hyperthreading turned on vs. when it is turned off. This significant increase in memory, along with an analysis of the actual CPU throughput increase for Exchange 2013 workloads in internal lab tests, has led us to a best practice recommendation that hyperthreading should be disabled for all Exchange 2013 servers.
There’s an important caveat to this recommendation for customers who are virtualizing Exchange. Since the number of logical processors visible to a virtual machine is determined by the number of virtual CPUs allocated in the virtual machine configuration, hyperthreading will not have the same impact on memory utilization described above. It’s certainly acceptable to enable hyperthreading on physical hardware that is hosting Exchange virtual machines, but make sure that any capacity planning calculations for that hardware are based purely on physical CPUs. Follow the best practice recommendations of your hypervisor vendor on whether or not to enable hyperthreading. Note that the extra logical CPUs that appear when hyperthreading is enabled must not be counted when allocating virtual machine resources during the sizing and deployment process. For example, on a physical host running Hyper-V with 40 physical processor cores and hyperthreading enabled, 80 logical processor cores will be visible to the root operating system. If your Exchange design required 16-core servers, you could place 2 Exchange VMs on that host; those 2 VMs would consume 32 physical processor cores, leaving only 8 physical cores free, which is not enough to host another 16-core VM.
Now that you have digested all of this guidance, you are probably thinking about how much more of a pain it will be to size a deployment compared to using the Mailbox Role Requirements Calculator for Exchange 2010. UPDATE: You can now read about and download the calculator from here.
Hopefully that leaves you with enough information to begin to properly size your Exchange 2013 deployments. If you have further questions, you can obviously post comments here, but I’d also encourage you to consider attending one of the upcoming TechEd events. I’ll be at TechEd North America as well as TechEd Europe with a session specifically on this topic, and would be happy to answer your questions in person, either in the session or at the “Ask the Experts” event. Recordings of those sessions will also be posted to MSDN Channel9 after the events have concluded.
Jeff Mealiffe Principal Program Manager Lead Exchange Customer Experience
Windows 8 and Windows RT include a built-in email app named Mail (also referred to as Windows 8 Mail or the Windows 8 Mail app). The Windows 8 Mail app includes support for IMAP and Exchange ActiveSync (EAS) accounts.
This article includes some key technical details of the Windows 8 Mail app. Use the information to help you support the use of Windows 8 Mail app in your organization. Read this article start to finish, or jump to the topic that interests you. Use the reference links throughout the article for more information.
NOTE Mail, Calendar, People, and Messaging are apps that are built in to Windows 8 and Windows RT. Although this article discusses the Windows 8 Mail app, much of the information also applies to the Calendar, People, and Messaging apps. This is because, when connected to a server that supports Exchange ActiveSync, the Calendar and People apps may also display data that was downloaded over the Exchange ActiveSync connection.
The Windows 8 Mail app lets users connect to any service provider that supports either of the following two protocols:
POP is not currently supported.
Exchange ActiveSync can be used to sync data for email, contacts, and calendar. The Windows 8 Mail app supports EAS versions 2.5, 12.0, 12.1, and 14.0. For detailed protocol documentation, see Exchange Server Protocol Documents on MSDN.
NOTE All Windows Communications apps (Mail, Calendar, and People) can use the data that is synchronized with Exchange ActiveSync. After a user connects to their account in the Windows 8 Mail app, their contacts and calendar data are available in the other Windows Communications Apps and vice versa.
The Mail app does not support certificate-based authentication of clients for Exchange ActiveSync.
The Windows 8 Mail app supports the following IMAP and SMTP standards:
IMAP/SMTP can be used to send and receive email only. Contacts data and calendar data are not synchronized when IMAP/SMTP is used. Microsoft Exchange does not support Public Folders via IMAP. For more details about IMAP support in Exchange, see POP3 and IMAP4 (for Exchange 2010, see Understanding POP3 and IMAP4).
The Windows 8 Mail app can be configured to synchronize data at different times as follows:
If a push email connection can’t be established, the Mail app will automatically switch to polling at fixed intervals.
Push email requires that accounts are either Exchange ActiveSync (which all support Push) or IMAP with the IDLE extension. Not all IMAP servers support IDLE, and it is supported only for the Inbox folder.
When a push connection can’t be established, Mail will change to polling at 30-minute intervals. Push email on Exchange ActiveSync requires that HTTP connections be maintained for up to 60 minutes, and IMAP IDLE requires that TCP connections be maintained for up to 30 minutes.
Windows 8 and Windows RT users can add email accounts to the Windows 8 Mail app using the Settings charm. The Settings charm is always available on the right side of the Windows 8 and Windows RT screen. (For more visual details about Charms & the Windows 8 user interface, see Search, share & more.)
NOTE This section provides an overview of Windows 8 Mail app account setup. For step-by-step procedures for setting up an account in the Windows 8 Mail app, see What else do I need to know? at the end of this guide.
To make it as easy as possible to add accounts, account setup only prompts the user to enter the email address and password for the account they want to set up. From that data, Mail attempts to automatically configure the account as follows:
Figure 1: Exchange ActiveSync (EAS) configuration in Windows Mail
Full details needed to connect to an Exchange server – needed only if Autodiscover failed
The information required to connect to a server via Exchange ActiveSync is:
Figure 2: IMAP/SMTP configuration in Windows Mail
The information required to connect to a server via IMAP/SMTP is:
Mail provides administrators with some level of security through Exchange ActiveSync policies. It doesn’t support any means of managing or securing PCs that are connected via IMAP.
Exchange ActiveSync devices can be managed using Exchange ActiveSync policies. Windows 8 Mail supports the following EAS policies:
Note that if AllowNonProvisionableDevices is set to false in an EAS policy and the policy contains settings that are not part of this list, the device won’t be able to connect to the Exchange server.
Most of the policies listed above can be automatically enabled by Mail, but there are certain cases where the user has to take action first. These are:
If a Windows 8 PC is joined to an Active Directory domain and controlled by Group Policy, there may be conflicting policy settings between Group Policy and an Exchange ActiveSync policy. In the event of any conflict, the strictest rule in either policy takes precedence. The only exception is password complexity rules for domain accounts. Group policy rules for password complexity (length, expiry, history, number of complex characters) take precedence over Exchange ActiveSync policies – even if group policy rules for password complexity are less strict than Exchange ActiveSync rules, the domain account will be deemed in compliance with Exchange ActiveSync policy.
Mail supports the Exchange ActiveSync remote wipe directive, but unlike Windows Phones, the data deleted by this directive is scoped to the specified Exchange ActiveSync account. The user's personal data is not deleted. For example, if a user has an Outlook.com account for personal use and a Contoso.com account for work use, a remote wipe directive from the Contoso.com server would impact Windows 8 and Windows Phone 7 as follows:
To make it as easy as possible for users to have all of their accounts set up on all of their devices, Windows 8 uploads vital account information to the user’s Microsoft account. This information includes email address, server, server settings, and password. When a user signs into a new PC with their Microsoft account, their email accounts are automatically set up for them.
Passwords are not uploaded from a PC for any accounts which are controlled by any Exchange ActiveSync policies. Users will have to enter their password to begin syncing a policy-controlled account on a new PC.
Users are required to have a Microsoft account, formerly known as a Windows Live ID, to use the Windows Communications apps. This will usually be the Microsoft account the user signs into Windows with, but if they have not signed in with one, they will be prompted to provide one before proceeding.
Microsoft accounts will automatically sync to Microsoft services using Exchange ActiveSync 14.0 when Mail starts. This will synchronize:
If the user’s Microsoft account is not an Outlook.com or Hotmail account (for example, dave@contoso.com), Mail will prompt the user for the password for that email account, which will then be added automatically.
By default, Mail only downloads the last two weeks of email. This is user configurable and can be set to download the user’s entire mailbox. For Exchange ActiveSync accounts, all contacts are downloaded, and calendar events are downloaded only from three months behind the current date to 18 months ahead.
Additionally, messages are only partially downloaded to reduce bandwidth use as follows:
Embedded images in email messages are downloaded on-demand as the user reads them, and attachments are downloaded on-demand as the user attempts to open them.
By default, Mail only downloads the user’s Inbox and Sent folders. Other folders are downloaded once the user accesses them for the first time.
Mail does not enforce any limits on how many attachments users can send, or on how large they can be.
The following features are currently not supported by Mail:
Mailbox connections using POP: IMAP and EAS are supported.
(Note: this does not mean that Windows 8 does not support POP3. This post is about the Windows 8 Mail app.)
Servers that require self-signed certificates: Users can work around the self-signed certificate limitation by manually installing the certificate on their Windows 8 or Windows RT device. For additional information about self-signed certificates, see the Self-Signed Certificates section below.
Opaque-Signed and Encrypted S/MIME messages: When an S/MIME message is received, Windows 8 Mail displays an email item with a message body that begins with “This encrypted message can’t be displayed.”
To view email items in the S/MIME format, users must open the message using Outlook Web App, Microsoft Outlook, or another email program that supports S/MIME messages. For more information, see Opaque-Signed and Encrypted S/MIME Message on MSDN.
Users may experience connectivity errors when trying to connect to an Exchange server that requires a self-signed certificate. The user may receive the following error messages.
Unable to connect. Ensure the information entered is correct.
<Email address> is unavailable
NOTE This issue may occur because the Mail app cannot connect to Exchange by using self-signed certificates.
Consider the following options to resolve this issue.
Option 1: Install a certificate that is signed by a Microsoft-trusted root certification authority (CA) on the server
This enables Exchange to work for all clients without prompting. For more information about trusted root CAs, see the following topics on TechNet:
Option 2: Install a server’s self-signed certificate on a device
This enables Exchange to work for Windows 8 devices that have the certificate installed.
Note To install a self-signed certificate for a domain’s certification authority, the administrator must provide a certificate file (.cer). The certificate can be installed to the trusted root certification authority store using either of the following options:
The user or the system administrator can use the .cer file to install the certificate. To do this, use one of the following methods:
Command-line tool
At an elevated command prompt, run the following command:
certutil.exe -f -addstore root <name_of_certificatefile>.cer
NOTE The command installs the certificate for all users on the device.
User interface
If Windows 8 Mail users can't successfully connect to their accounts, consider the following:
TIP If the user hasn't registered their account, Windows 8 Mail will display the following message: “We couldn’t find the settings for. Provide us with more info and we’ll try connecting again.”
For information about signing into Outlook Web App or the Office 365 Portal, see Sign In to Outlook Web App.
After the user signs in to their account using Outlook Web App, they should sign out, and then try to connect using Windows 8 Mail.
EDIT 12/10/2010: Added a permissions section.
As was discussed in the previous (related) blog post "Troubleshooting Exchange 2010 Management Tools startup issues", the Exchange 2010 management tools are dependent on IIS. We have seen situations where the management tool connection to the target Exchange server can fail, and the error that is returned can be difficult to troubleshoot. This generally (but not always) happens when Exchange 2010 is installed on an IIS server that is already in service, or when changes are made to the IIS server settings after Exchange is installed. These changes are usually made when the IIS administrator is attempting to "tighten up" IIS security by editing the Default Web Site or PowerShell vdir settings.
The situation is further complicated by the fact that some of the errors presented have similar wording; most seem to originate with WinRM (Windows Remote Management), and in some cases different root problems can produce the exact same error message. In other words, depending on how knowledgeable you are with these errors, troubleshooting them is all around... not much fun.
I was approached by a good friend of mine and he asked what I thought we could do to make these errors a little easier to troubleshoot. I was studying PowerShell and PowerShell programming at the time (I just happened to be reading Windows PowerShell for Exchange Server 2007 SP1), and I thought that this would be a perfect opportunity to try and apply what I was learning.
This is the result.
Introducing the Exchange Management Troubleshooter (or EMTshooter for short).
What it does:
The EMTshooter runs on the local (target) Exchange server and attempts to identify potential problems with the management tools' connection to it.
The troubleshooter runs in two stages. First, it looks at the IIS Default Web Site, the PowerShell vdir, and other critical areas to identify known causes of connection problems. If it identifies a problem with one of the pre-checks, it will make a recommendation for resolving the problem. If the pre-checks pass, the troubleshooter will go ahead and try to connect to the server in exactly the same way the management tools would. If that connection attempt still results in a WinRM-style error, the troubleshooter will attempt to compare that error to a list of stored strings taken from the related support cases we have seen. If a match is found, the troubleshooter will display the known causes of that error in the CMD window. Here is an example of how this might look:
When I was designing the troubleshooter, I could have just written a little error lookup tool that handed over the appropriate content for the error you were getting, but I felt that was not as robust a solution as I was aiming for (and not much of a learning experience for me). So the tool runs active pre-checks before moving on to the error look-up. The number of pre-checks it can run depends on the flavor of OS you are running and the options you have installed on it, such as WMI Compatibility.
Basically, I have taken all of the documentation that has been created on these errors to date, and created a tool that will make the information available to you based on the error or problem it detects. Hopefully this will cut down on the amount of time it takes to resolve those problems.
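The error look-up stage is essentially a string match against a stored list of known error fragments. As a rough illustration only (the actual tool is a PowerShell script; the error fragments and causes below are invented placeholders, not the real stored list), the idea looks like this:

```python
# Simplified sketch of the EMTshooter's error look-up stage.
# The real tool is written in PowerShell; the fragments and causes
# below are placeholders, not the tool's actual stored strings.

KNOWN_ERRORS = {
    "The WinRM client cannot process the request": [
        "The PowerShell vdir settings may have been modified.",
        "The Default Web Site bindings may have been changed.",
    ],
    "Access is denied": [
        "The user may lack permissions to run remote PowerShell.",
    ],
}

def lookup_causes(winrm_error: str) -> list[str]:
    """Return the stored known causes whose fragment appears in the error text."""
    causes = []
    for fragment, known_causes in KNOWN_ERRORS.items():
        if fragment in winrm_error:
            causes.extend(known_causes)
    return causes

print(lookup_causes("Connecting to remote server failed: Access is denied."))
```

The pre-checks run first precisely so that obvious misconfigurations are caught before this lookup is ever needed.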
Event reporting:
When you run the EMTshooter it will log events in the event log. All results that are displayed in the CMD window are also logged in the event log for record keeping.
Events are logged to the Microsoft-Exchange-Troubleshooters/Operational event log and are pretty self-explanatory.
Things to remember:
Depending on your current settings, you may need to adjust the execution policy on your computer to run the troubleshooter, using:
Set-ExecutionPolicy RemoteSigned
Or
Set-ExecutionPolicy Unrestricted
Remember to set it back to your normal settings after running the troubleshooter.
This version of the troubleshooter needs to run on the Exchange Server that the management tools are failing to connect to. While our final goal is that the troubleshooter will be able to run anywhere the Exchange Management tools are installed, the tool isn't quite there yet.
We have seen instances where corruption in the PowerShell vdir or in IIS itself has resulted in errors that seemed to be caused by something else. For instance, we worked on a server that had an error that indicated a problem with the PowerShell vdir network path. But the path was correct. Then we noticed that the PowerShell vdir was missing all its modules, and quite a few other things. Somehow the PowerShell vdir on that Exchange Server had gotten severely... um... modified beyond repair. WinRM was returning the best error it could, and the troubleshooter took that error and listed the causes. None of which solved the problem. So be aware that there are scenarios that even this troubleshooter cannot help at this time.
The troubleshooter is still a bit rough around the edges, and we plan to improve and expand its capabilities in the future. We also hope to be able to dig a little deeper into the PowerShell vdir settings as time goes on. Also note that the troubleshooter will NOT make any modification to your IIS configuration without explicitly asking you first.
Permissions required:
In order to run the troubleshooter, the user must have the rights to log on locally to the Exchange server (due to the local nature of the troubleshooter at this time) and will need permissions to run Windows PowerShell.
Installing the troubleshooter:
First, you will need to download the troubleshooter ZIP file, which you can find here.
Installing the EMTshooter is pretty easy. Just drop the four files from the ZIP file into one folder, rename them to .ps1, and run EMTshooter.ps1 from a normal (and local) PowerShell window. I personally just created a shortcut for it on my desktop with the following properties:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noexit -command ". 'C:\EMTshooter\EMTshooter.ps1'"
However, as most users probably won't run this more than a few times you might not need or want an icon. Just remember that EMTshooter.ps1 is the main script to run.
Providing feedback:
As I mentioned before, the troubleshooter is still a work in progress. If you wish to provide feedback on it, please post a comment to this blog post. I will be monitoring it and replying as time allows, and also making updates to the troubleshooter if needed. If you run into errors that are not covered by the troubleshooter, please run the troubleshooter, reproduce the error through it and send me the transcript.txt file (you will find it in the folder where the 4 scripts have been placed), along with what you did to resolve the error (if the problem has been resolved). My email is sbryant AT Microsoft DOT com.
Errors currently covered:
- Steve Bryant
EDIT: This post was updated on 5/24/2013 for the new version of the calculator. For the list of the latest major changes, please see THIS.
The Exchange Mailbox Server role is arguably one of the most important roles within an Exchange deployment for it stores the data that users will ultimately access on a daily basis. Therefore, ensuring that you design the mailbox server role correctly is critical to your design.
With Exchange 2010 you can deploy a solution that leverages mailbox resiliency and has multiple database copies deployed across datacenters, implements single item recovery for data recovery, and has the flexibility in storage design to allow you to deploy on storage area networks utilizing fibre-channel or SATA class disks or on direct attached storage utilizing SAS or SATA class disks with or without RAID protection. But, in order to design your solution, you need to understand the following criteria:
Previous versions of Exchange were somewhat rigid in terms of the choices you had in designing your mailbox server role. The flexibility in the Exchange 2010 architecture allows you the freedom to design the solution to meet your needs. Prior to making any decisions, please review the following topics from the Exchange 2010 Online Help:
After you have determined the design you would like to implement, you can follow the steps in the Exchange 2010 Mailbox Server Role Design Example article within the Exchange 2010 Online Help to calculate your solution's CPU, memory, and storage requirements, or you can leverage the Exchange 2010 Mailbox Server Role Requirements Calculator.
The calculator is broken out into the following sections (worksheets):
Important: The data points provided in the calculator are an example configuration. As such, any data points entered into the Input worksheet are specific to that particular configuration and do not apply to other configurations. Please ensure you are using the correct data points for your design.
Input
When you launch the Exchange 2010 Mailbox Server Role Requirements Calculator, you are presented with the Input worksheet. This worksheet is broken down into 5 key areas. This section is where you enter in all the relevant information regarding your intended design, so that the calculator can generate what you need in order to achieve it.
Note: There are many input factors that need to be accounted for before you can design your solution. Each input factor is briefly listed below; there are additional notes within the calculator that explain them in more detail.
Within Step 1 you will enter in the appropriate information concerning your messaging environment's configuration - the high availability architecture and database copy configuration, the data and I/O configuration, and CPU inputs.
Note: For optimal sizing, choose a multiple of the total number of database copies you have selected for the number of mailbox servers.
Exchange Environment Configuration
Site Resilience Configuration
Mailbox Database Copy Configuration
Lagged Database Copy Configuration
Exchange Data Configuration
Database Configuration
IOPS Configuration
Mailbox Configuration
Within Step 2 you will define your user profile for up to four different tiers of user populations.
Backup Configuration
Within Step 3 you will define your backup model and your tolerance settings, as well as choose whether to isolate the transaction logs from the database.
Storage Configuration
Within Step 4 you will define your storage configuration.
Storage Options
Primary Datacenter Disk Configuration
Secondary Datacenter Disk Configuration
Processor Configuration
Within Step 5, you will define the number of processor cores you have deployed for each mailbox server within your primary and secondary datacenters, as well as enter the SPECint2006 rate value for the system you have selected.
When you enable virtualization, you must be sure to configure the processor architecture correctly. In particular, you must enter the correct number of processor cores that the guest machine will support, as well as the correct SPECInt2006 rating for these virtual processor cores. To calculate the SPECInt2006 rate value, you can utilize the following formula:
X/(N*Y) = per virtual processor SPECInt2006 Rate value
Where X = the SPECInt2006 rate value for the hypervisor host server
Where N = the number of physical cores in the hypervisor host
Where Y = 1 if you will be deploying 1:1 virtual processor-to-physical processor on the hypervisor host
Where Y = 2 if you will be deploying up to 2:1 virtual processor-to-physical processor on the hypervisor host
For example, let’s say I am deploying an HP ProLiant DL580 G7 (2.27 GHz, Intel Xeon X7560) system, which includes four sockets, each containing an 8-core processor; the SPECInt2006 rate value for the system is 757.
If I am deploying my Mailbox server role as a guest machine using Hyper-V, and following best practices where I do not oversubscribe virtual CPUs to physical processors, then
757/(32*1) = 23.66
Since each Mailbox server will have a maximum of 4 virtual processors, the SPECInt2006 rate value I would enter into the calculator would be 23.66 * 4 ≈ 95.
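The calculation above can be sketched in a few lines (a worked restatement of the formula from the text, not part of the calculator itself):

```python
# Worked example of the per-virtual-processor SPECInt2006 formula, X / (N * Y).
def specint_per_vcpu(host_rate: float, physical_cores: int, ratio: int) -> float:
    """Host SPECInt2006 rate divided by physical cores times the
    virtual-to-physical processor ratio (1 for 1:1, 2 for up to 2:1)."""
    return host_rate / (physical_cores * ratio)

per_vcpu = specint_per_vcpu(757, 32, 1)   # the HP DL580 G7 example from the text
print(round(per_vcpu, 2))                 # 23.66
print(round(per_vcpu * 4))                # 4 virtual processors -> 95
```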
In addition, if you are deploying your Exchange servers as guest machines, you can specify the Hypervisor CPU Adjustment Factor to take into account the overhead of deploying guest machines.
Server Configuration
Log Replication Configuration
Within Step 6, you will define your hourly log generation rate, the network link, and the network link latency you expect to have within your site resilient architecture.
Now you may be wondering how you can collect this data. We've written a simple VBS script that will list all of the files in a folder and output them to a log file. You can use Task Scheduler to execute this script at certain intervals in the day (e.g., every 15 minutes). Once you have generated the log file for a 24-hour period, you can import it into Excel, massage the data (i.e., remove duplicate entries), and determine how many logs are generated each hour. If you do this for each storage group, you will be able to determine your log generation rate for each hour of the day. The script is named collectlogs.vbsrename (just rename it to collectlogs.vbs) and you can find it here: Collectlogs VBS script
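The same idea can be sketched in a few lines of Python instead of VBS (this is my own illustrative equivalent, not the downloadable script; the folder path is a placeholder you would point at a storage group's log directory):

```python
# Rough Python equivalent of the collectlogs.vbs idea: snapshot the number of
# transaction log files in a folder so you can later work out how many logs
# appear per hour. LOG_FOLDER is a hypothetical path.
import os
import time

LOG_FOLDER = r"C:\ExchangeLogs\SG1"   # placeholder: your storage group's log folder
OUTPUT = "logcount.csv"

def snapshot(folder: str, output: str) -> None:
    """Append a timestamped count of .log files in the folder to a CSV."""
    count = len([f for f in os.listdir(folder) if f.lower().endswith(".log")])
    with open(output, "a") as out:
        out.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')},{count}\n")
```

Scheduled every 15 minutes via Task Scheduler, the resulting CSV imports into Excel the same way as the VBS output.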
Network Configuration
This section provides the solution's I/O, capacity, memory, and CPU requirements.
Based on the above input factors the calculator will recommend the following architecture, broken down into four sections:
Processor Core Ratio Requirements
This table identifies the number of processor cores required to support the activated databases. This table is only populated if you populate the processor core megacycle information on the Input tab.
Client Access Server Requirements
This table identifies the memory and CPU requirements for dedicated Client Access servers if you choose to not co-locate the server roles. This table is only populated if you populate the processor core megacycle information on the Input tab.
Hub Transport Server Requirements
This table identifies the memory and CPU requirements for dedicated Hub Transport servers if you choose to not co-locate the server roles. This table is only populated if you populate the processor core megacycle information on the Input tab.
Environment Configuration
The Environment Configuration table identifies the number of mailboxes being deployed in each datacenter, as well as how many mailbox servers and lagged copy servers you will deploy in each datacenter. This table also identifies the minimum number of dedicated Hub Transport and Client Access servers you should deploy in each datacenter (taking into account a worst-case failure mode of two simultaneous server failures).
User Mailbox Configuration
The Mailbox Configuration table provides you with:
Database Copy Instance Configuration
This table highlights how many HA mailbox database copy instances and lagged database copy instances your solution will have within each datacenter for a given DAG.
The Database Configuration table provides you with:
Database Copy Configuration
The Database Copy Configuration table provides you with the number of database copies being deployed within each server and the total number of database copies within the DAG.
The Server Configuration table provides you with the following:
Transaction Log Requirements
The Transaction Log Requirements table provides you with:
Disk Space Requirements
The Disk Space Requirements table provides you with:
Host IO and Throughput Performance Requirements
The Host IO and Throughput Performance Requirements table provides you with:
Special Notes
The Special Notes table will provide you with additional information about your design:
If you are deploying a highly available and/or site resilient architecture, then this section will break down the failure scenarios. The section is broken up into two scenarios:
Important: For the purposes of this calculator, the term "primary datacenter" refers to the datacenter that is preferred for hosting the active copies for a given set of databases, while the term "secondary datacenter" refers to the disaster recovery datacenter that is used for datacenter activation and cross-site database failover events.
Single Datacenter and Active/Passive Environments
The DAG Member Layout table identifies the number of Active Mailbox servers (those that are hosting active mailboxes within the primary datacenter), the Disaster Recovery Mailbox Servers (those that host passive database copies in the second datacenter), and any Lagged Copy Mailbox servers you may be deploying.
There are two tables that provide data around the Active Database configuration, one for the primary datacenter, which outlines the single or double server events, and one for the secondary datacenter, which outlines the activation of that datacenter when the primary datacenter is lost. Both tables provide you with:
Active/Active Environments
This section breaks out the architecture into two perspectives, the layout of Datacenter 1 and the layout of Datacenter 2 with respect to the DAG architecture. Recall that
The calculator includes a new worksheet, Distribution. Within the Distribution worksheet, you will find the layout we recommend based on the database copy layout principles.
The Distribution worksheet includes several new options to help you with designing and deploying your database copies:
Important: The database copy layout the tool provides assumes that each server and its associated database copies are isolated from every other server and its copies. It is important to take failure domain aspects into account when planning your database copy layout architecture so that you can avoid having multiple copy failures for the same database.
The LUN Requirements section is really a continuation of the Storage Requirements section. It outlines what we believe is the appropriate LUN design based on the input factors and the analysis performed in the previous sections.
Note: The term LUN as used in the calculator refers only to the representation of the disk that is exposed to the host operating system. It does not define the disk configuration.
The LUN Design highlights the LUN architecture chosen for this server solution. The architecture is derived from the backup type, backup frequency, and high availability architecture that were chosen in the Storage Requirements section.
There are three types of LUN architecture that can be leveraged within Exchange 2010:
1 LUN / Database
A single LUN per Database architecture means that both the database and its corresponding log files are placed on the same LUN. In order to deploy a LUN architecture that only utilizes a single LUN per database, you must have a Database Availability Group that has 2 or more copies and not be utilizing a hardware based VSS solution.
Some of the benefits of this strategy include:
Some of the concerns with this strategy include:
2 LUNs / Database
With Exchange 2010, in the maximum case of 100 Databases, the number of LUNs you provision will depend upon your backup strategy. If your recovery time objective (RTO) is very small, or if you use VSS clones for fast recovery, it may be best to place each Database on its own transaction log LUN and database LUN. Because doing this will exceed the number of available drive letters, volume mount points must be used.
2 LUNs / Backup Set
A backup set is the number of databases that are fully backed up in a night. A solution that performs a full backup on 1/7th of the databases nightly (i.e. using a weekly or bi-monthly full backup with daily incrementals or differentials) can reduce complexity by placing all of the databases to be backed up on the same log and database LUN. This can reduce the number of LUNs on the server.
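As a rough worked example of the arithmetic (my own illustration with hypothetical numbers, not from the calculator): with 14 databases and a weekly full backup cycle, 2 databases are backed up each night, so placing each backup set on a shared log LUN and database LUN needs 14 LUNs rather than the 28 a 2-LUNs-per-database design would require.

```python
# Hypothetical LUN arithmetic for the "2 LUNs / Backup Set" model: 1/nights of
# the databases are fully backed up each night, and each set shares one log
# LUN and one database LUN.
import math

def luns_required(databases: int, nights_in_cycle: int) -> tuple[int, int]:
    """Return (databases per backup set, total LUNs) for 2 LUNs per set."""
    per_set = math.ceil(databases / nights_in_cycle)
    sets = math.ceil(databases / per_set)
    return per_set, sets * 2

print(luns_required(14, 7))   # (2, 14) -- versus 28 LUNs at 2 LUNs per database
```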
Based on the above input factors the calculator will recommend the following architecture:
LUN Design
The LUN Design table highlights the recommended LUN architecture.
LUN Configuration
The LUN Configuration table highlights the number of databases that should be placed on a single LUN. This is derived from the LUN Architecture model.
This section also documents how many LUNs will be required for the entire solution, broken out by Database and Log sets, and the number of restore LUNs per server.
The Database Configuration table outlines the number of databases (or copies) per server, the number of mailboxes per database, the size of each database, and the transaction log size required for each database.
Database and Log LUN Design
The Database and Log LUN Design table outlines the physical LUN layout and follows the recommended number of databases per LUN based on the LUN Architecture model. It also documents the LUN size required to support the layout (this is where we factor in the additional capacity for content indexing, the LUN Free Space Percentage, and whether you are using a Restore LUN), as well as the transaction log LUN.
Important: The DB and Log LUN Design table identifies databases by a unique number. However, database copies are distributed across the servers, so these numbers hold no significance and are used solely as an example to show a server's LUN layout.
The Backup Requirements section is really a continuation of the Role Requirements section. It outlines what we believe is the appropriate backup design based on the input factors and the analysis performed in the previous sections.
The Backup Configuration table outlines the number of databases that will be placed within a single LUN and the type of backup methodology and frequency in which the backups will occur.
Backup Frequency Configuration
The Backup Frequency Configuration section will provide you with an outline on how you should perform the backups for each server, utilizing either a daily full backup or weekly or bi-monthly full backup frequency.
The Log Replication Requirements section is another continuation of the Role Requirements section. It outlines what we believe is the throughput required to replicate the transaction logs to each target database copy in the secondary datacenter.
Peak Log and Content Index Replication Throughput Requirements
The Peak Log and Content Index Replication Throughput Requirements table provides you with:
RPO Log and Content Index Replication Throughput Requirements
In terms of log replication, RPO defines how far behind you can fall in log shipping. The lower the RPO (a value of 0 or 1 essentially means you are only willing to lose the open log file), the more bandwidth you need, because you cannot fall behind in log replication. The higher the RPO (approaching 24), the less bandwidth is needed, as you expect to fall behind (by up to x hours) in log replication and catch up at some point in the day.
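To make the relationship concrete, here is a simplified illustration (my own model, not the calculator's actual computation): with an RPO of N hours, replication may lag by up to N hours, so the link only has to keep up with the worst-case average log generation over any N-hour window, rather than the single peak hour.

```python
# Simplified illustration of why a lower RPO needs more replication bandwidth.
# Not the calculator's model: it just averages log generation over the RPO window.

def required_rate(hourly_logs: list[float], window: int) -> float:
    """Worst-case average log generation over any `window` consecutive hours."""
    window = max(1, min(window, len(hourly_logs)))
    return max(
        sum(hourly_logs[i:i + window]) / window
        for i in range(len(hourly_logs) - window + 1)
    )

profile = [10, 10, 80, 120, 60, 10]        # hypothetical logs generated per hour
print(required_rate(profile, 1))           # 120.0: near-zero RPO must match the peak hour
print(round(required_rate(profile, 6), 1)) # 48.3: a 6-hour RPO only needs the average
```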
The RPO Log and Content Index Replication Throughput Requirements table provides you with:
Chosen Network Link Suitability
The Chosen Network Link Suitability table indicates whether the chosen network link has sufficient capacity to sustain the peak replication throughput requirements and/or the RPO replication throughput requirements. If the network link cannot sustain the log replication traffic, you will need either to upgrade the network link to the recommended network link throughput or to adjust the design appropriately.
Recommended Network Link
The Recommended Network Link table recommends an appropriate network link if the chosen network link does not have sufficient capacity to sustain log replication for the solution for both the peak and RPO throughput requirements.
Note: The Network Link recommendations do not take into account database seeding or any other data that may also utilize the link.
The Storage Design worksheet is designed to take the data collected from the Input worksheet and Storage Requirements worksheet and help you determine the number of physical disks needed to support the databases, transaction logs, and Restore LUN configurations.
In order to determine the physical disk requirements, you must enter some basic information about your storage solution.
RAID Parity Configuration
For the Database/Log RAID Parity Configuration table you need to select the type of RAID building block your storage solution utilizes. For example, some storage vendors build the underlying storage in sets of data+parity (d+p) groups. A RAID-5 3+1 configuration means that 3 disks will be used for capacity and 1 disk will be used for parity, even though parity is distributed across all the disks. So if you had a capacity requirement that would utilize 15 disks, then you would need to deploy 5 3+1 groups to build that RAID-5 array.
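The building-block arithmetic from the 3+1 example above can be expressed as follows (a simple restatement for illustration, not part of the calculator):

```python
# The 15-disk RAID-5 3+1 example from the text, generalized: round the
# capacity-disk requirement up to whole d+p building blocks.
import math

def raid_groups(capacity_disks: int, d: int, p: int) -> tuple[int, int]:
    """Return (number of d+p groups, total physical disks required)."""
    groups = math.ceil(capacity_disks / d)
    return groups, groups * (d + p)

print(raid_groups(15, 3, 1))   # (5, 20): five 3+1 groups, 20 physical disks
```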
Database/Log RAID Rebuild Overhead
When a disk is lost, it needs to be replaced and rebuilt. During this time, the performance of the RAID group is affected, which in turn can affect user actions. Therefore, to ensure that RAID rebuilds do not affect the overall performance of the mailbox server, Microsoft recommends that you provision sufficient overhead into the performance calculations when designing for RAID parity. Most RAID-1/0 implementations will suffer a 25% performance penalty during a rebuild. Most RAID-5 and RAID-6 implementations will suffer a 50% performance penalty during a rebuild.
The calculator defaults to the following Microsoft recommendations, but they are adjustable:
In addition, you should consult with your storage vendor to determine the appropriate RAID rebuild penalty.
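One way to picture the rebuild overhead (my own simplification, not the calculator's exact model): if a rebuild consumes a fraction of the array's throughput, the array must be provisioned so the remaining fraction still covers the required user IOPS.

```python
# Hypothetical illustration of IOPS headroom for a RAID rebuild: size the array
# so that (1 - penalty) of its throughput still meets the user IOPS demand.

def provisioned_iops(required_iops: float, penalty: float) -> float:
    """IOPS the array must deliver so demand is met even mid-rebuild."""
    return required_iops / (1.0 - penalty)

print(round(provisioned_iops(1000, 0.25)))  # RAID-1/0, 25% penalty: ~1333 IOPS
print(provisioned_iops(1000, 0.50))         # RAID-5/6, 50% penalty: 2000.0 IOPS
```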
Database RAID Configuration
By default, for RAID storage solutions, the calculator will recommend either RAID-1/0 or RAID-5 by evaluating capacity and I/O factors and determining which configuration utilizes the least amount of disks while satisfying the requirements. If you would like to override this and force the calculator to utilize a particular RAID configuration for your databases (e.g., RAID-0 or RAID-6), select "Yes" to this option and then select the appropriate RAID configuration in the cell labeled "Desired RAID Configuration." Note that while you can potentially override the database RAID configuration, you cannot do so for the log RAID configuration - that will always be RAID-1/0.
Note: The calculator prevents the use of RAID-5 or RAID-6 with 5.2K, 5.4K, 5.9K and 7.2K disk types, due to performance implications.
Restore LUN RAID Configuration
You can select the type of parity you will be utilizing and the RAID configuration you will be deploying for your Restore LUN.
The Storage Design Results section outputs the recommended configuration for the solution. The recommendations cover implementing the solution on RAID storage, JBOD storage, or potentially both.
RAID Storage Architecture
The RAID Storage Architecture Table outlines which servers (primary datacenter servers, secondary datacenter servers, or lagged copy servers) should be deployed on RAID storage.
The RAID Storage Architecture / Server table recommends the optimum RAID configuration and number of disks for each LUN (database, log, and restore) on each mailbox server, ensuring that the performance and capacity requirements of the design are met.
JBOD Storage Architecture
The JBOD Storage Architecture Table outlines which servers (primary datacenter servers, secondary datacenter servers, or lagged copy servers) could be deployed on JBOD storage.
The JBOD Storage Architecture / Server table recommends the optimum JBOD configuration and number of disks for each LUN (database, log, and restore) on each mailbox server, ensuring that the performance and capacity requirements of the design are met.
Total Disks Required
By default, the calculator will determine the storage architecture that reduces the total number of disks required to support the design while still minimizing single points of failure, using RAID and/or JBOD based on the decisions found in the "RAID Storage Architecture" and "JBOD Storage Architecture" tables. However, you can change the storage architecture to be built entirely on RAID or entirely on JBOD by selecting the appropriate value in the "Storage Architecture will be Deployed:" drop-down. This assumes the design supports JBOD as a possible solution; also keep in mind that certain scenarios (e.g., a single database copy in a datacenter) may result in a single point of failure.
The Storage Configuration table will output the total number of disks required for each mailbox server that requires RAID or JBOD storage, as well as identify the total number of disks requiring RAID or JBOD storage in each datacenter.
Hopefully you will find this calculator invaluable in helping you determine the requirements for your Exchange 2010 mailbox servers. If you have any questions or suggestions, please email strgcalc AT microsoft DOT com.
For the calculator itself, please see the following link:
Exchange 2010 Server Role Requirements Calculator
Ross Smith IV
The Exchange team is announcing today the availability of our most recent quarterly servicing update to Exchange Server 2013. Cumulative Update 3 for Exchange Server 2013 and updated UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 3 includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved in Exchange Server 2013 Cumulative Update 3 can be found in Knowledge Base Article KB 2892464.
Note: Some article links may not be available at the time of this post's publication. Updated Exchange 2013 documentation, including Release Notes, will be available on TechNet soon.
We would like to call attention to an important fix in Exchange Server 2013 Cumulative Update 3 which impacts customers who rely upon Backup and Recovery mechanisms to protect Exchange data. Cumulative Update 3 includes a fix for an issue which may randomly prevent a backup dataset taken from Exchange Server 2013 from restoring correctly. Customers who rely on Backup and Recovery in their day-to-day operations are encouraged to deploy Cumulative Update 3 and initiate backups of their data to ensure that data contained in backups may be restored correctly. More information on this fix is available in KB 2888315.
In addition to the customer-reported fixes in Cumulative Update 3, the following new enhancements and improvements to existing functionality have also been added for Exchange Server 2013 customers:
More information on these topics can be found in What’s New in Exchange Server 2013, Release Notes and Exchange 2013 documentation on TechNet.
Here are some things to consider before you deploy Exchange 2013 CU3.
Our next update for Exchange 2013, Cumulative Update 4, will be released as Exchange 2013 Service Pack 1. Customers who are accustomed to deploying Cumulative Updates should consider Exchange 2013 SP1 to be equivalent to CU4 and deploy as normal.
The Exchange Team
For a long time, Forefront TMG (and ISA before it) has been the go-to Microsoft reverse proxy solution for many applications, including Exchange Server. However, with no further development planned for TMG 2010, a lot of customers are looking for an alternative solution that works well with Exchange Server 2013.
The Windows team has added a component called Application Request Routing (ARR, or as Greg the pirate says, ARR!) 2.5 to the Internet Information Services (IIS) role, which enables IIS to handle reverse proxy requests. By using the URL Rewrite Module and Application Request Routing, you can implement complex and flexible load balancing and reverse proxy configurations.
There are two options when implementing this solution, and each has its pros and cons, which I'll cover in three posts. In this first post, we'll take a look at:
In the next two posts in the series, we'll cover the second option and some troubleshooting steps. The troubleshooting steps will also help you verify whether you have implemented the reverse proxy solution correctly.
Here's a diagram of the environment we'll use when discussing how to implement ARR.
TIP To make sure you're configuring and using the right network interface, rename the NICs to Internal and External.
Requirements: IIS ARR is supported on Windows Server 2008, Windows Server 2008 R2, and Windows Server 2012. It is also supported on Windows Vista, Windows 7, and Windows 8 with the Web services features installed. Note that IIS ARR does not require IIS 6.0 compatibility mode.
Note: As with all such changes, we recommend that you test this in a non-production environment before deploying it in your production environment.
To install IIS with the ARR module on the server identified as the reverse proxy:
Import-Module ServerManager Add-WindowsFeature Web-Static-Content,Web-Default-Doc,Web-Dir-Browsing,Web-Http-Errors,Web-Net-Ext,Web-Http-Logging,Web-Request-Monitor,Web-Http-Tracing,Web-Filtering,Web-Stat-Compression,Web-Mgmt-Console,NET-Framework-Core,NET-Win-CFAC,NET-Non-HTTP-Activ,NET-HTTP-Activation,RSAT-Web-Server
If you don’t have internet access on the IIS ARR server, you can use the steps highlighted in How to install Application Request Routing (ARR) 2.5 without Web Platform Installer (WebPI).
This is the simplest way of implementing IIS ARR as a reverse proxy solution for Exchange Server 2013. This implementation requires a minimum number of SAN entries in your certificate and a minimum number of DNS entries.
This setup assumes that all protocols (OWA, ECP, EWS, etc.) have been published with the mail.tailspintoys.com namespace.
On the Server Farm settings node make the configuration changes as detailed below:
In Exchange 2013 there is a new component called Managed Availability, which uses various checks to make sure that each of the protocols (OA, OWA, EWS, etc.) is up and running. If any protocol fails a check, an appropriate recovery action is taken automatically. (That is, of course, a very simplified explanation of Managed Availability; for a more detailed understanding, watch Ross Smith IV's TechEd 2013 session.) We are going to leverage one of these checks to make sure that the service/protocol is available.
https://<fqdn>/<protocol>/HealthCheck.htm is a default web page present in Exchange 2013. These URLs are specific to each protocol and do not have to be created by the administrator.
Examples:
https://autodiscover.tailspintoys.com/Autodiscover/HealthCheck.htm
https://mail.tailspintoys.com/EWS/HealthCheck.htm
https://mail.tailspintoys.com/OAB/HealthCheck.htm
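The URL pattern is uniform across protocols, which makes it easy to generate probe URLs for the health test. A small sketch (the helper function is ours, purely illustrative):

```python
def health_check_url(namespace, protocol_vdir):
    """Per-protocol Managed Availability health-check URL for a
    published namespace (e.g. "mail.tailspintoys.com")."""
    return f"https://{namespace}/{protocol_vdir}/HealthCheck.htm"

for vdir in ("OWA", "EWS", "OAB"):
    print(health_check_url("mail.tailspintoys.com", vdir))
```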
Configure the Health Test with the following settings:
URL: https://mail.tailspintoys.com/OWA/HealthCheck.htm
Interval: 5 seconds
Time-Out: 30 seconds
Acceptable Status Code: 200
Response Buffer threshold: 0
Note: Make sure the option “Stop processing of subsequent rules” is selected. This is to make sure that the validation process stops once the requested URL finds a match.
That’s it!!!! ....You are now all set and have a reverse-proxy-with-load-balancing solution for your Exchange 2013 environment!
Give it a try and see how it works. Make sure DNS for mail.tailspintoys.com resolves to your reverse proxy and try connecting a client. And if it doesn’t work, go back through the steps and see where you went wrong. And if it still doesn’t work, post a comment here, or wait for Part 3, Troubleshooting (so please don’t do all this for the first time in a production environment! Really, we mean it!).
Finally, here are a couple of additional changes we recommend you review and optionally consider making to your IIS ARR configuration.
We've spent time testing this configuration and found it to work as we hoped and expected. Note that support for IIS ARR is provided by the Windows/IIS team, not Exchange. That's no different than support for TMG or UAG (if you use either of these products to publish Exchange).
We would really appreciate any feedback on your implementation and/or any configuration where this doesn’t seem to work.
Keep your eyes peeled for the next set of articles, where we'll talk about slightly more complex and interesting implementations of IIS ARR for Exchange 2013.
I would like to thank Greg Taylor (Principal PM Lead) for his help in reviewing this article.
Part 2 | Part 3
B. Roop Sankar Premier Field Engineer, UK
In a previous article, I discussed the new server role architecture in Exchange 2013. This article continues the series by discussing the Client Access server role.
While this Exchange server role shares the same name as a server role that existed in the last two Exchange Server releases, it is markedly different. In Exchange 2007, the Client Access server role provided authentication, proxy/redirection logic, and performed data rendering for the Internet protocol clients (Outlook Web App, EAS, EWS, IMAP and POP). In Exchange 2010, data rendering for MAPI was also moved to the Client Access server role.
In Exchange 2013, the Client Access server (CAS) role no longer performs any data rendering. It now provides only authentication and proxy/redirection logic, supporting the client Internet protocols, transport, and Unified Messaging. As a result of this architectural shift, the CAS role is stateless from a protocol session perspective (naturally, log data that can be used for troubleshooting or trending analysis is still generated).
As I alluded to in the server role architecture blog post, Exchange 2013 no longer requires session affinity at the load balancer. To understand this better, we need to look at how CAS2013 functions. From a protocol perspective, the following will happen:
The protocol used in step 6 depends on the protocol used to connect to CAS. If the client leverages the HTTP protocol, then the protocol used between the Client Access server and Mailbox server is HTTP (secured via SSL using a self-signed certificate). If the protocol leveraged by the client is IMAP or POP, then the protocol used between the Client Access server and Mailbox server is IMAP or POP.
Telephony requests are unique, however. Instead of proxying the request at step 6, CAS will redirect the request to the Mailbox server hosting the active copy of the user’s database, as the telephony devices support redirection and need to establish their SIP and RTP sessions directly with the Unified Messaging components on the Mailbox server.
Figure 1: Exchange 2013 Client Access Protocol Architecture
In addition to no longer performing data rendering, step 5 is the fundamental change that enables the removal of session affinity at the load balancer. For a given protocol session, CAS now maintains a 1:1 relationship with the Mailbox server hosting the user’s data. In the event that the active database copy is moved to a different Mailbox server, CAS closes the sessions to the previous server and establishes sessions to the new server. This means that all sessions, regardless of their origination point (i.e., CAS members in the load balanced array), end up at the same place, the Mailbox server hosting the active database copy.
Now many of you may be thinking, wait, how does authentication work? Well, for HTTP, POP, or IMAP requests that use basic, NTLM, or Kerberos authentication, the authentication request is passed as part of the payload, so each CAS will authenticate the request naturally. Forms-based authentication (FBA) is different. FBA was one of the reasons why session affinity was required for OWA in previous releases of Exchange – the cookie used a per-server key for encryption, so if another CAS received a request, it could not decrypt the session. In Exchange 2013, we no longer leverage a per-server session key; instead, we leverage the private key from the certificate that is installed on the CAS. As long as all members of the CAS array share the exact same certificate (remember, we actually recommend deploying the same certificate across all CAS in both datacenters in site resilience scenarios as well), they can decrypt the cookie.
In the previous section, I spoke about CAS proxying the data to the Mailbox server hosting the active database copy. Prior to that, CAS has to make a decision whether it will perform the proxy action or perform a redirection action. CAS will only perform a redirection action under the following circumstances:
Figure 2: Exchange 2013 Client Access Proxy and Redirection Behavior Examples
For those of you paying attention, you may have noticed I only spoke about HTTP, POP, and IMAP. I didn’t mention RPC/TCP as a connectivity solution that CAS supports. And that is for a very specific reason – CAS2013 does not support RPC/TCP as a connectivity solution; it only supports RPC/HTTP (aka Outlook Anywhere). This architecture change is primarily to drive a stable and reliable connectivity model.
To understand why, you need to keep the following tenets in the back of your mind:
The last item is tied to this discussion of why we have moved away from RPC/TCP as a connectivity solution. In all prior releases the RPC endpoint was a FQDN. In fact, the shift to the middle tier for RPC processing in CAS2010 introduced a new shared namespace, the RPC Client Access namespace. By moving RPC Client Access back to the MBX2013 role, this would have forced us to use either the MBX2013 FQDN for the RPC endpoint (thus forcing an Outlook client restart for every database *over event) or a shared namespace for the DAG.
Neither option is appropriate and adds to the complexity and support of the infrastructure. So instead, we changed the model. We no longer use a FQDN for the RPC endpoint. Instead we now use a GUID. The mailbox GUID, to be precise (along with a domain suffix to support multi-tenant scenarios). The mailbox GUID is unique within the (tenant) organization, so regardless of where the database is activated and mounted, CAS can discover the location and proxy the request to the correct MBX2013 server.
Figure 3: RPC Endpoint Changes
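The endpoint format itself is simply the mailbox GUID with a domain suffix appended; a minimal sketch (the GUID and domain here are hypothetical, for illustration only):

```python
import uuid

def rpc_endpoint(mailbox_guid, smtp_domain):
    """Exchange 2013 RPC endpoint form: mailbox GUID plus a domain
    suffix (the suffix supports multi-tenant scenarios)."""
    return f"{mailbox_guid}@{smtp_domain}"

# Hypothetical mailbox GUID and domain, for illustration only.
guid = uuid.UUID("0d9b3e54-8f2a-4c6e-9b1d-2f7c5a1e8d44")
print(rpc_endpoint(guid, "tailspintoys.com"))
# 0d9b3e54-8f2a-4c6e-9b1d-2f7c5a1e8d44@tailspintoys.com
```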
This architectural change means that we have a very reliable connection model – for a given session that is routed to CAS2013, CAS2013 will always have a 1:1 relationship with the MBX2013 server hosting the user’s mailbox. This means that the Mailbox server hosting the active copy of the user’s database is the server responsible for de-encapsulating the RPC stream from the HTTP packets. In the event a *over occurs, CAS2013 will proxy the connection to the MBX2013 server that assumes responsibility for hosting the active database copy. Oh, and this means that in a native Exchange 2013 environment, Outlook won’t require a restart for things like mailbox moves, *over events, etc.
The other architectural change we made in this area is the support for internal and external namespaces for Outlook Anywhere. This means you may not need to deploy split-brain DNS or deal with all Outlook clients using your external firewall or load balancer due to our change in MAPI connectivity.
I am sure that a few of you are wondering what this change means for third-party MAPI products. The answer is relatively simple – these third-party solutions will need to leverage RPC/HTTP to connect to CAS2013. This will be accomplished via a new MAPI/CDO download that has been updated to include support for RPC/HTTP connectivity. It will be released in the first quarter of calendar year 2013. To leverage this updated functionality, the third-party vendor will either have to programmatically edit the dynamic MAPI profile or set registry key values to enable RPC/HTTP support.
I do also want to stress one key item with respect to third-party MAPI support. Exchange 2013 is the last release that will support a MAPI/CDO custom solution. In the future, third-party products (and custom in-house developed solutions) will need to move to Exchange Web Services (EWS) to access Exchange data.
Another benefit with the Exchange 2013 architecture is that the namespace model can be simplified (especially for those of you upgrading from Exchange 2010). In Exchange 2010, a customer that wanted to deploy a site-resilient solution for two datacenters required the following namespaces:
As I previously mentioned, we have removed two of these namespaces in Exchange 2013 – the RPC Client Access namespaces.
Recall that CAS2013 proxies requests to the Mailbox server hosting the active database copy. This proxy logic is not limited to the Active Directory site boundary. A CAS2013 in one Active Directory site can proxy a session to a Mailbox server that is located in another Active Directory site. If network utilization, latency, and throughput are not a concern, this means that we do not need the additional namespaces for site resilience scenarios, thereby eliminating three other namespaces (secondary Internet protocol and both Outlook Web App failback namespaces).
For example, let’s say I have a two datacenter deployment in North America that has a network configuration such that latency, throughput and utilization between the datacenters is not a concern. I also wanted to simplify my namespace architecture with the Exchange 2013 deployment so that my users only have to use a single namespace for Internet access regardless of where their mailbox is located. If I deployed an architecture like below, then the CAS infrastructure in both datacenters could be used to route and proxy traffic to the Mailbox servers hosting the active copies. Since I am not concerned about network traffic, I configure DNS to round-robin between the VIPs of the load balancers in each datacenter. The end result is that I have a site resilient namespace architecture while accepting that half of my proxy traffic will be out-of-site.
Figure 4: Exchange 2013 Single Namespace Example
Early on, I mentioned that the Client Access server role can proxy SMTP sessions. This is handled by a new component on the CAS2013 role, the Front-End Transport service. The Front-End Transport service handles all inbound and outbound external SMTP traffic for the Exchange organization and can also be a client endpoint for SMTP traffic. The Front-End Transport service functions as a layer 7 proxy and has full access to the protocol conversation. Like the client Internet protocols, the Front-End Transport service does not have a message queue and is completely stateless. In addition, the Front-End Transport service does not perform message bifurcation.
The Front-End Transport service listens on TCP25, TCP587, and TCP717 as seen in the following diagram:
Figure 5: Front-End Transport Service Architecture
The Front-End Transport service provides network protection – a centralized, load balanced egress/ingress point for the Exchange organization, whether it be POP/IMAP clients, SharePoint, other third-party or custom in-house mail applications, or external SMTP systems.
For outgoing messages, the Front-End Transport service is used as a proxy when the Send Connectors (that are located on the Mailbox server) have the FrontEndProxyEnabled property set. In this situation, the message will appear to have originated from CAS2013.
For incoming messages, the Front-End Transport service must quickly find a single, healthy Transport service on a Mailbox server to receive the message transmission, regardless of the number or type of recipients:
The Exchange 2013 Client Access server role simplifies the network layer. Session affinity at the load balancer is no longer required, as CAS2013 handles the affinity aspects. CAS2013 introduces more deployment flexibility by allowing you to simplify your namespace architecture, potentially consolidating to a single worldwide or regional namespace for your Internet protocols. The new architecture also simplifies the upgrade and interoperability story, as CAS2013 can proxy or redirect to multiple versions of Exchange, whether they are a higher or lower version, allowing you to upgrade your Mailbox servers at your own pace.
Ross Smith IV Principal Program Manager Exchange Customer Experience
The Exchange team is announcing today the availability of our most recent quarterly servicing update to Exchange Server 2013. Cumulative Update 5 for Exchange Server 2013 and updated UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 5 represents the continuation of our Exchange Server 2013 servicing and builds upon Exchange Server 2013 Service Pack 1. The release includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved in Exchange Server 2013 Cumulative Update 5 can be found in Knowledge Base Article KB2936880. Customers running any previous release of Exchange Server 2013 can move directly to Cumulative Update 5 today. Customers deploying Exchange Server 2013 for the first time may skip previous releases and start their deployment with Cumulative Update 5 as well.
We would like to call your attention to a couple of items in particular about the Cumulative Update 5 release:
For the latest information and product announcements, please read What’s New in Exchange Server 2013, Release Notes, and product documentation available on TechNet.
Cumulative Update 5 includes Exchange-related updates to the Active Directory schema and configuration. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation. Also, to prevent installation issues, you should ensure that the Windows PowerShell script execution policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted, you should use the resolution steps in KB981474 to adjust the settings.
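As a quick local check before running Setup (a sketch only; the supported resolution steps are those in KB981474):

```powershell
# Verify the current execution policy on the server being upgraded.
Get-ExecutionPolicy

# If it is not Unrestricted, adjust it (see KB981474 for the
# supported steps, e.g. when the policy is enforced via Group Policy).
Set-ExecutionPolicy Unrestricted
```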
Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU5) or the prior (e.g., CU4) Cumulative Update release.
The steps for accomplishing this are documented in various places in Exchange documentation, but it can be difficult to refer to multiple sources if you have a mixed environment containing several versions of Exchange Server. We wanted to provide a single place with somewhat generic instructions on how to accomplish these tasks across all currently supported versions of Exchange Server - Exchange 2010, Exchange 2007, and Exchange 2003.
The versatile Export-Mailbox cmdlet can export mailbox content based on specific folder names, date and time range, attachment file names, and many other filters. A narrow search will go a long way in preventing accidental deletion of legitimate mail. For more details, syntax, and parameter descriptions, see the following topics:
The account used to export the data must be an Exchange Server Administrator, a member of the local Administrators group of the target server, and have Full Access mailbox permission assigned on the source and target mailboxes. The target mailbox you specify must already be created; the target folder you specify is created in the target mailbox when the command runs.
This example retrieves all mailboxes from an Exchange organization and assigns the Full Access mailbox permission to the MyAdmin account. You must run this before exporting or deleting messages from user mailboxes. Note, if you need to export or delete messages only from a few mailboxes, you can use the Get-Mailbox cmdlet with appropriate filters, or specify each source mailbox.
Get-Mailbox -ResultSize unlimited | Add-MailboxPermission -User MyAdmin -AccessRights FullAccess -InheritanceType all
Get-Mailbox -ResultSize unlimited | Remove-MailboxPermission -User MyAdmin -AccessRights FullAccess -InheritanceType all
This example removes all messages with the subject keyword "Friday Party" and received between Sept 7 and Sept 9 from the Inbox folder of mailboxes on Server1. The messages will be deleted from the mailboxes and copied to the folder DeleteMsgs of the MyBackupMailbox mailbox. The Administrator can now review these items or delete them from the MyBackupMailbox mailbox. The StartDate and EndDate parameters must match the date format setting on the server, whether it is mm-dd-yyyy or dd-mm-yyyy.
Get-Mailbox -Server Server1 -ResultSize Unlimited | Export-Mailbox -SubjectKeywords "Friday Party" -IncludeFolders "\Inbox" -StartDate "09/07/2010" -EndDate "09/09/2010" -DeleteContent -TargetMailbox MyBackupMailbox -TargetFolder DeleteMsgs -Confirm:$false
This example removes all messages that contain the words "Friday Party" in the body or subject from all mailboxes.
Depending on the size of your environment, it is better to do the extraction/deletion in batches by using the Get-Mailbox cmdlet with the Server or Database parameters (Get-Mailbox -Server servername -ResultSize Unlimited or Get-Mailbox -Database DB_Name -ResultSize Unlimited), or specifying a filter using the Filter parameter. You can also use the Get-DistributionGroupMember cmdlet to perform this operation on members of a distribution group.
Get-Mailbox -ResultSize Unlimited | Export-Mailbox -ContentKeywords "Friday Party" -TargetMailbox MyBackupMailbox -TargetFolder 'Friday Party' -DeleteContent
It is recommended to always use a target mailbox (by specifying the TargetMailbox and TargetFolder parameters) so you have a copy of the data. You can review messages before purging them so any legitimate mail returned by the filter can be imported back to its owner mailbox. However, it is possible to outright delete all messages without temporarily copying them to a holding mailbox.
This example deletes all messages that contain the string "Friday Party" in the message body or subject, without copying them to a target mailbox.
Get-Mailbox | Export-Mailbox -ContentKeywords "Friday Party" -DeleteContent
The ExMerge utility can be used to extract mail items from mailboxes located on legacy Exchange Server versions. Follow the steps in KB 328202 HOW TO: Remove a Virus-Infected Message from Mailboxes by Using the ExMerge.exe Tool to remove unwanted messages from user mailboxes.
The following posts have more details:
There may be times where you need to purge messages from Exchange Server's mail queues to prevent delivery of unwanted mail. For more details about mail queues, see Understanding Transport Queues.
Removing a message from the queue is a two-step process. First, the message itself must be suspended. Once the messages have been suspended, you can proceed with removing them from the queue. The commands below suspend and remove messages based on the Subject of the message.
Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | where{$_.Subject -eq "Friday Party" -and $_.Queue -notlike "*\Submission*"} | Suspend-Message
On Exchange 2007 RTM through SP2, you will not be able to suspend or remove messages that are held in the Submission queue, so the command will not run against the messages in the Submission queue.
This command removes all suspended messages from queues other than the Submission queue.
Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | where{$_.status -eq "suspended" -and $_.Queue -notlike "*\Submission*"} | Remove-Message -WithNDR $False
This command suspends messages that have the string "Friday Party" in the message subject in all queues on Hub Transport servers.
Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | where {$_.Subject -eq "Friday Party"} | Suspend-Message
This command removes messages that have the string "Friday Party" in the message subject in all queues on Hub Transport servers:
Get-TransportServer | Get-Queue | Get-Message -ResultSize unlimited | Where {$_.Subject -eq "Friday Party"} | Remove-Message -WithNDR $False
Note, you can run the command against an individual Hub Transport server by specifying the server name after Get-TransportServer.
You can also suspend and remove messages from a specified queue. To retrieve a list of queues on a transport server, use the Get-Queue cmdlet.
This example suspends messages with the string "Friday Party" in the message subject in a specified queue.
Get-Message -Queue "server\queue" -ResultSize unlimited | where{$_.Subject -eq "Friday Party"} | Suspend-Message
This example removes messages with the string "Friday Party" in the message subject in the specified queue.
Get-Message -Queue "server\queue" -ResultSize unlimited | where{$_.Subject -eq "Friday Party" } | Remove-Message -WithNDR $False
In Exchange 2003/2000, you can use MFCMapi to clear the queues. For details, see KB 906557 How to use the Mfcmapi.exe utility to view and work with messages in the SMTP TempTables in Exchange 2000 Server and in Exchange Server 2003.
If there are a large number of messages in the queue, you may want to limit how many are displayed at a time. From the tool bar select Other > Options and under Throttle Level change the value to a more manageable number (for example, 1000).
On Exchange 2010 and Exchange 2007, you can use the New Transport Rule wizard from the EMC to easily create transport rules. The following examples illustrate how to accomplish this using the Shell. Note the variation in syntax between the two versions. (The Exchange 2010 transport rule cmdlets have been simplified, allowing you to create or modify a transport rule using a one-line command.)
This example creates a transport rule to delete messages that contain the string "Friday Party" in the message subject.
New-TransportRule -Name "purge Friday Party messages" -Priority '0' -Enabled $true -SubjectContainsWords 'Friday Party' -DeleteMessage $true
$condition = Get-TransportRulePredicate SubjectContains
$condition.Words = @("Friday Party")
$action = Get-TransportRuleAction DeleteMessage
New-TransportRule -Name "purge Friday Party messages" -Conditions @($condition) -Actions @($action) -Priority 0
Note: If your Exchange Organization has mixed Exchange 2007 and Exchange 2010 you will have to create a rule for each Exchange version.
Angelique Conde, Ed Bringas
From time to time, you need to allow an application server to relay off of your Exchange server. You might need to do this if you have a SharePoint, a CRM application like Dynamics, or a web site that sends emails to your employees or customers.
You might also need to do this if you are getting the SMTP error message "550 5.7.1 Unable to relay".
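One quick way to check whether relay is currently allowed from a given application server is to attempt a submission from that machine. A sketch using PowerShell's built-in Send-MailMessage (the server name and addresses below are placeholders for illustration):

```powershell
# Run from the application server; mail.contoso.com and both addresses are hypothetical.
# If relay is not permitted for this source IP, the submission is rejected and the
# error text includes "550 5.7.1 Unable to relay".
Send-MailMessage -SmtpServer mail.contoso.com `
    -From "app@contoso.com" `
    -To "user@external-domain.com" `
    -Subject "Relay test" `
    -Body "Testing relay from the application server."
```

A successful send from the application server but a 550 from an arbitrary workstation is the behavior you want: relay scoped to only the hosts that need it.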
The top rule is that you want to keep relay restricted as tightly as possible, even on servers that are not connected to the Internet. Usually this is done with authentication and/or restricting by IP address. Exchange 2003 provides the following relay restrictions on the SMTP VS:
Here are the equivalent options for how to configure this in Exchange 2007.
Allow all computers which successfully authenticate to relay, regardless of the list above
Like its predecessor, Exchange 2007 is configured to accept and relay email from hosts that authenticate by default. Both the "Default" and "Client" receive connectors are configured this way out of the box. Authenticating is the simplest method to submit messages, and preferred in many cases.
The Permissions Group that allows authenticated users to submit and relay is the "ExchangeUsers" group. The permissions that are granted with this permissions group are:
NT AUTHORITY\Authenticated Users {ms-Exch-SMTP-Submit}
NT AUTHORITY\Authenticated Users {ms-Exch-Accept-Headers-Routing}
NT AUTHORITY\Authenticated Users {ms-Exch-Bypass-Anti-Spam}
NT AUTHORITY\Authenticated Users {ms-Exch-SMTP-Accept-Any-Recipient}
The specific ACL that controls relay is the ms-Exch-SMTP-Accept-Any-Recipient.
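To audit which principals hold this right on a particular connector, you can inspect the connector's AD permissions from the Exchange Management Shell (the connector identity below is an example; use the `Server\Connector` name from your environment):

```powershell
# List the permission entries that grant the relay right on a receive connector.
Get-ReceiveConnector "EXCH01\Default EXCH01" | Get-ADPermission |
    Where-Object { $_.ExtendedRights -like "*ms-Exch-SMTP-Accept-Any-Recipient*" } |
    Format-Table User, ExtendedRights -AutoSize
```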
Only the list below (specify IP address)
This option is for those who cannot authenticate with Exchange. The most common example of this is an application server that needs to be able to relay messages through Exchange.
First, start with a new custom receive connector. You can think of receive connectors as protocol listeners. The closest equivalent to Exchange 2003 is an SMTP Virtual Server. You must create a new one because you will want to scope the remote IP Address(es) that you will allow.
The next screen you must pay particular attention to is the "Remote Network settings". This is where you will specify the IP ranges of servers that will be allowed to submit mail. You definitely want to restrict this range down as much as you can. In this case, I want my two web servers, 192.168.2.55 & 192.168.2.56 to be allowed to relay.
The next step is to create the connector, and open the properties. Now you have two options, which I will present. The first option will probably be the most common.
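The same scoped connector can also be created from the Exchange Management Shell instead of the wizard; a minimal sketch (the connector name, server name, and bindings are examples matching the scenario above):

```powershell
# Create a custom receive connector scoped to only the two web servers.
# -RemoteIPRanges is the critical part: it restricts which hosts can connect.
New-ReceiveConnector -Name "CRM Application" -Usage Custom -Server EXCH01 `
    -Bindings 0.0.0.0:25 `
    -RemoteIPRanges 192.168.2.55,192.168.2.56
```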
Option 1: Make your new scoped connector an Externally Secured connector
This option is the most common option, and preferred in most situations where the application that is submitting will be submitting email to your internal users as well as relaying to the outside world.
Before you can perform this step, it is required that you enable the Exchange Servers permission group. Once in the properties, go to the Permissions Groups tab and select Exchange servers.
Next, continue to the authentication mechanisms page and add the "Externally secured" mechanism. What this means is that you have complete trust that the previously designated IP addresses will be trusted by your organization.
Caveat: If you do not perform these two steps in order, the GUI blocks you from continuing.
Do not use this setting lightly. You will be granting several rights including the ability to send on behalf of users in your organization, the ability to ResolveP2 (that is, make it so that the messages appear to be sent from within the organization rather than anonymously), bypass anti-spam, and bypass size limits. The default "Externally Secured" permissions are as follows:
MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Authoritative-Domain}
MS Exchange\Externally Secured Servers {ms-Exch-Bypass-Anti-Spam}
MS Exchange\Externally Secured Servers {ms-Exch-Bypass-Message-Size-Limit}
MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Exch50}
MS Exchange\Externally Secured Servers {ms-Exch-Accept-Headers-Routing}
MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Submit}
MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Any-Recipient}
MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Authentication-Flag}
MS Exchange\Externally Secured Servers {ms-Exch-SMTP-Accept-Any-Sender}
Basically you are telling Exchange to ignore internal security checks because you trust these servers. The nice thing about this option is that it is simple and grants the common rights that most people probably want.
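If you prefer the shell to the GUI, both Option 1 steps can be applied in a single command (the connector identity below follows the example above):

```powershell
# Enable the Exchange servers permission group and mark the connector as
# externally secured in one step; the shell enforces the same ordering
# requirement as the GUI, so both parameters are set together here.
Set-ReceiveConnector "EXCH01\CRM Application" `
    -PermissionGroups ExchangeServers `
    -AuthMechanism ExternalAuthoritative
```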
Option 2: Grant the relay permission to Anonymous on your new scoped connector
This option grants the minimum amount of required privileges to the submitting application.
Taking the new scoped connector that you created, you have another option. You can simply grant the ms-Exch-SMTP-Accept-Any-Recipient permission to the anonymous account. Do this by first adding the Anonymous Permissions Group to the connector.
This grants the most common permissions to the anonymous account, but it does not grant the relay permission. This step must be done through the Exchange shell:
Get-ReceiveConnector "CRM Application" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "ms-Exch-SMTP-Accept-Any-Recipient"
In addition to being more difficult to complete, this step does not allow the anonymous account to bypass anti-spam, or ResolveP2.
Although it is completely different from the Exchange 2003 way of doing things, hopefully you find the new SMTP permissions model to be sensible.
More information
See the following for more information:
- Scott Landry
A user is out of office for some reason – on vacation, sick, on a sabbatical or extended leave of absence, or traveling to a remote location on business, and forgets to set an automatic reply, also known as an Out Of Office message or OOF in Exchange/Outlook lingo. As an Exchange administrator, you get an email from the user’s manager asking you to configure an OOF for the user.
In previous versions of Exchange, you would need to access the user’s mailbox to be able to do this. Out of Office messages are stored in the Non-IPM tree of a user’s mailbox along with other metadata. Without access to the mailbox, you can’t modify data in it. Two ways for an admin to access a mailbox:
1. Grant yourself Full Access permission to the user’s mailbox and open it
2. Reset the user’s account password and log on as the user
It is safe to say that either of these options is potentially dangerous. The first option grants the administrator access to all of the data in the user’s mailbox. The second option grants the administrator access to all of the data that the user account can access within your company and locks the user out of their own account (as the user in question no longer knows the account password).
In Exchange 2010, you can configure auto-reply options for your users without using either of the above options. You must be a member of a role group that has either the Mail Recipients or User Options management roles.
To configure an auto-reply using the ECP:
1. From Mail > Options, select Another User (the default is My Organization).
Figure 1: Select Another User
2. Select the user you want to configure the auto-reply for.
3. In the new window, ensure the user's name is displayed in the alert message, and then click Tell people you’re on vacation.
Figure 2: When managing another user in the ECP, an alert near the top of the page displays the name of the user you're managing
4. On the Automatic Replies tab, configure the auto-reply options for the user (see screenshot).
In Exchange 2007, we introduced the ability to create different Out of Office messages for external and internal recipients. You can also disable or enable Out of Office messages on a per-user basis and on a per-remote domain basis in Remote Domain settings. For details, see previous post Exchange Server 2007 Out of Office (OOF).
This command schedules internal and external auto-replies from 9/8/2011 to 9/15/2011:
Set-MailboxAutoReplyConfiguration bsuneja@e14labs.com -AutoReplyState Scheduled -StartTime "9/8/2011" -EndTime "9/15/2011" -ExternalMessage "External OOF message here" -InternalMessage "Internal OOF message here"
To configure auto-replies to be sent until they're disabled (i.e. without a schedule), set the AutoReplyState parameter to Enabled and do not specify the StartTime and EndTime parameters. For detailed syntax and parameter descriptions, see Set-MailboxAutoReplyConfiguration.
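For example, to turn on auto-replies immediately with no end date (the mailbox and message text are the same placeholders used above):

```powershell
# Enabled state: auto-replies are sent until AutoReplyState is set back to Disabled.
Set-MailboxAutoReplyConfiguration bsuneja@e14labs.com -AutoReplyState Enabled `
    -InternalMessage "Internal OOF message here" `
    -ExternalMessage "External OOF message here"
```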
This command retrieves auto-reply settings for a mailbox.
Get-MailboxAutoReplyConfiguration bsuneja@e14labs.com
This command disables auto-reply configured for a mailbox:
Set-MailboxAutoReplyConfiguration bsuneja@e14labs.com -AutoReplyState Disabled -ExternalMessage $null -InternalMessage $null
This article is part 3 in a series that discusses namespace planning, load balancing principles, client connectivity, and certificate planning.
Over the last several months, we have routinely fielded questions on how various clients connect to the infrastructure once Exchange 2013 is deployed. Our goal with this article is to articulate the various connectivity scenarios you may encounter in your designs. To that end, this article will begin with a walk through of a deployment that consists of Exchange 2007 and Exchange 2010 in a multi-site architecture and show how the connectivity changes with the introduction of Exchange 2013.
Figure 1: Exchange 2007 & Exchange 2010 Multi-Site Architecture
As you can see from the above diagram, this environment contains three Active Directory sites:
To understand the client connectivity before we instantiate Exchange 2013 into the environment, let’s look at the four users.
The Autodiscover namespace, autodiscover.contoso.com, as well as the internal SCP records, resolve to the CAS2010 infrastructure located in Site1. Outlook clients and ActiveSync clients (on initial configuration) will submit Autodiscover requests to the CAS2010 infrastructure and retrieve configuration settings based on their mailbox’s location. The Autodiscover service on Exchange 2010 can process Autodiscover requests for both Exchange 2007 and Exchange 2010 mailboxes.
For more information on how Autodiscover requests are performed, see the whitepaper, Understanding the Exchange 2010 Autodiscover Service.
For internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2010, they will connect to the Exchange 2010 RPC Client Access array endpoint (assuming one exists). Keep in mind the importance of configuring the RPC Client Access array endpoint correctly, as documented in Ambiguous URLs and their effect on Exchange 2010 to Exchange 2013 Migrations.
For internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2007, they will connect directly to the Exchange 2007 Mailbox server instance hosting the mailbox.
For more information, see the article Upgrading Outlook Web App to Exchange 2010.
For more information, see the article Upgrading Exchange ActiveSync to Exchange 2010.
Exchange 2013 has now been deployed in Site1 following the guidance documented within the Exchange Deployment Assistant. As a result, Outlook Anywhere has been enabled on all Client Access servers within the infrastructure and the mail.contoso.com and autodiscover.contoso.com namespaces have been moved to resolve to Exchange 2013 Client Access server infrastructure.
Figure 2: Exchange 2013 Coexistence with Exchange 2007 & Exchange 2010 in a Multi-Site Architecture
To understand the client connectivity now that Exchange 2013 exists in the environment, let’s look at the four users.
The Autodiscover external namespace, autodiscover.contoso.com, as well as the internal SCP records, resolve to the CAS2013 infrastructure located in Site1. Outlook clients and ActiveSync clients (on initial configuration) will submit Autodiscover requests to the CAS2013 infrastructure and, depending on the mailbox version, different behaviors occur:
For internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2010, they will still connect to the Exchange 2010 RPC Client Access array endpoint.
For internal Outlook clients using RPC/TCP connectivity whose mailboxes exist on Exchange 2007, they will still connect directly to the Exchange 2007 Mailbox server instance hosting the mailbox.
When you have an Exchange 2013 mailbox, you use Outlook Anywhere both within the corporate network and outside of it; RPC/TCP connectivity no longer exists for Exchange 2013 mailboxes.
In Exchange 2007/2010, Outlook Anywhere was implemented with a single configurable namespace. In Exchange 2013, you have both an internal host name and an external host name. Think of it as having two sets of Outlook Anywhere settings: one for when you are connected to the corporate domain, and another for when you are not. You will see this returned to the Outlook client in the Autodiscover response via what looks like a new provider, ExHTTP. However, ExHTTP isn’t an actual provider; it is a calculated set of values from the EXCH (internal Outlook Anywhere) and EXPR (external Outlook Anywhere) settings. To correctly use these settings, the Outlook client must be patched to the appropriate levels (see the Exchange 2013 System Requirements for more information). Outlook will process the ExHTTP values in order – internal first, then external.
The default Exchange 2013 internal Outlook Anywhere settings don’t require HTTPS. By not requiring SSL, the client should be able to connect and not get a certificate pop-up for the mail and directory connections. However, you will still have to deploy a certificate that is trusted by the client machine for Exchange Web Services and OAB downloads.
In order to support access for Outlook Anywhere clients whose mailboxes are on legacy versions of Exchange, you will need to make some changes to your environment which are documented in the steps within the Exchange Deployment Assistant. Specifically, you will need to enable Outlook Anywhere on your legacy Client Access servers and enable NTLM in addition to basic authentication for the IIS Authentication Method.
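On an Exchange 2010 Client Access server, for instance, these changes might look like the following (the server name and host name are examples; follow the Exchange Deployment Assistant for the exact steps in your environment):

```powershell
# Enable Outlook Anywhere on a legacy CAS (skip if it is already enabled).
Enable-OutlookAnywhere -Server CAS2010-01 -ExternalHostname legacy.contoso.com `
    -ClientAuthenticationMethod Basic -SSLOffloading $false

# Add NTLM alongside Basic as an accepted IIS authentication method,
# so CAS2013 can proxy Outlook Anywhere traffic to this server.
Get-OutlookAnywhere -Server CAS2010-01 |
    Set-OutlookAnywhere -IISAuthenticationMethods Basic,NTLM
```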
The Exchange 2013 Client Access server’s RPC proxy component sees the incoming connections, authenticates and chooses which server to route the request to (regardless of version), proxying the HTTP session to the endpoint (legacy CAS or Exchange 2013 Mailbox server).
For Outlook Web App, the user experience will depend on the mailbox version and where the mailbox is located.
For Exchange ActiveSync clients, the user experience will depend on the mailbox version and where the mailbox is located. In addition, Exchange 2013 no longer supports the 451 redirect response – Exchange 2013 will always proxy ActiveSync requests.
Coexistence with Exchange Web Services is rather simple.
Offline Address Book
Like with Exchange Web Services, Autodiscover will provide the Offline Address Book URL.
It’s important to understand that when CAS2013 proxies to a legacy Exchange Client Access server, it constructs a URL based on the server FQDN, not a load-balanced namespace or the InternalURL value. But how does CAS2013 choose which legacy Client Access server to proxy the connection to?
When a CAS2013 starts up, it connects to Active Directory and enumerates a topology map to understand all the Client Access servers that exist within the environment. Every 50 seconds, CAS2013 will send a lightweight request to each protocol endpoint on all the Client Access servers in the topology map; these requests have a user agent string of HttpProxy.ClientAccessServer2010Ping (yes, even Exchange 2007 servers are targeted with that user agent string). CAS2013 expects a response: a 200/300/400 series response indicates the target server is up for the protocol in question; a 502, 503, or 504 response indicates a failure. If a failure response occurs, CAS2013 immediately retries to determine if the error was transient. If this second attempt fails, CAS2013 marks the target CAS as down and excludes it from being a proxy target. At the next interval (50 seconds), CAS2013 will re-check the health state of the down CAS to determine if it is available again.
The IIS log on a legacy Client Access server will contain the ping events. For example:
2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 HEAD /ecp - 443 - 192.168.1.42 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 302 0 0 277 170 0
2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 HEAD /PowerShell - 443 - 192.168.1.27 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 0 0 309 177 15
2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 HEAD /EWS - 443 - 192.168.1.134 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 0 0 245 170 0
2014-03-11 14:00:00 W3SVC1 DF-C14-02 157.54.7.76 GET /owa - 443 - 192.168.1.220 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 301 0 0 213 169 171
2014-03-11 14:00:01 W3SVC1 DF-C14-02 157.54.7.76 HEAD /Microsoft-Server-ActiveSync/default.eas - 443 - 192.168.1.29 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 2 5 293 194 31
2014-03-11 14:00:04 W3SVC1 DF-C14-02 157.54.7.76 HEAD /OAB - 443 - 10.166.18.213 HTTP/1.1 HttpProxy.ClientAccessServer2010Ping - - coe-e14-1.coe.lab 401 2 5 261 170 171
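A quick way to confirm the pings are arriving on a legacy Client Access server is to search its IIS logs for the user agent string. A sketch (the log path is the IIS default and may differ in your environment; the field index assumes the W3C column layout shown above):

```powershell
# Count health-check pings per protocol endpoint in the W3SVC1 logs.
# Index 6 of the space-split line is cs-uri-stem in the sample layout above;
# adjust it if your configured W3C log columns differ.
Select-String -Path "C:\inetpub\logs\LogFiles\W3SVC1\*.log" `
    -Pattern "HttpProxy.ClientAccessServer2010Ping" |
    Group-Object { ($_.Line -split ' ')[6] } |
    Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize
```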
If for some reason, you would like to ensure a particular CAS2010 is never considered a proxy endpoint (or want to remove it for maintenance activities), you can do so by executing the following cmdlet on Exchange 2010 (note that this feature does not exist on Exchange 2007):
Set-ClientAccessServer <server> -IsOutofService $True
All this discussion about HTTP-based clients is great, but what about POP and IMAP clients? Like their HTTP-based counterparts, IMAP and POP clients are also proxied from the Exchange 2013 Client Access server to a target server (whether that is an Exchange 2013 Mailbox server or a legacy Client Access server). However, there is one key difference: there is no health-checking of the target IMAP/POP services.
When the Exchange 2013 Client Access server receives a POP or IMAP request, it will authenticate the user and perform a service discovery.
CAS2013 will choose a server to proxy the request to based on the incoming connection’s configuration. If the incoming connection is over an encrypted channel, CAS2013 will try to locate an SSL proxy target first, then TLS, and finally plaintext. If the incoming connection is over a plaintext channel, CAS2013 will try to locate a plaintext proxy target first, then SSL, and finally TLS.
Hopefully this information dispels some of the myths around proxying and redirection logic for Exchange 2013 when coexisting with either Exchange 2007 or Exchange 2010. Please let us know if you have any questions.
Ross Smith IV Principal Program Manager Office 365 Customer Experience
It’s been a long road, but the initial release of the Exchange 2013 Server Role Requirements Calculator is here. No, that isn’t a mistake, the calculator has been rebranded. Yes, this is no longer a Mailbox server role calculator; this calculator includes recommendations on sizing Client Access servers too! Originally, marketing wanted to brand it as the Microsoft Exchange Server 2013 Client Access and Mailbox Server Roles Theoretical Capacity Planning Calculator, On-Premises Edition. Wow, that’s a mouthful and reminds me of this branding parody. Thankfully, I vetoed that name (you’re welcome!).
The calculator supports the architectural changes made possible with Exchange 2013:
Like Exchange 2010, the recommendation in Exchange 2013 is to deploy multi-role servers. There are very few reasons you would need to deploy dedicated Client Access servers (CAS): CPU constraints, the use of Windows Network Load Balancing in small deployments (even with our architectural changes in client connectivity, we still do not recommend Windows NLB for any large deployments), and certificate management are a few examples that may justify dedicated CAS.
When deploying multi-role servers, the calculator will take into account the impact that the CAS role has and make recommendations for sizing the entire server’s memory and CPU. So when you see the CPU utilization value, this will include the impact both roles have!
When deploying dedicated server roles, the calculator will recommend the minimum number of Client Access processor cores and memory per server, as well as the minimum number of CAS you should deploy in each datacenter.
Now that the Mailbox server role includes additional components like transport, it only makes sense to include transport sizing in the calculator. This release does just that and will factor in message queue expiration and Safety Net hold time when calculating the database size. The calculator even makes a recommendation on where to deploy the mail.que database, either the system disk, or on a dedicated disk!
Exchange 2010 introduced the concept of 1 database per JBOD volume when deploying multiple database copies. However, this architecture did not ensure that the drive was utilized effectively across all three dimensions – throughput, IO, and capacity. Typically, the system was balanced from an IO and capacity perspective, but throughput was where we saw an imbalance, because during reseeds only a portion of the target disk’s total capable throughput was utilized. In addition, capacity on 7.2K disks continues to increase, with 4TB disks now available, impacting our ability to remain balanced along that dimension. Moreover, Exchange 2013 includes a 33% reduction in IO when compared to Exchange 2010. Naturally, the concept of 1 database / JBOD volume needed to evolve. As a result, Exchange 2013 made several architectural changes in the store process, ESE, and the HA architecture to support multiple databases per JBOD volume. If you would like more information, please see Scott’s excellent TechEd session in a few weeks on Exchange 2013 High Availability and Site Resilience or the High Availability and Site Resilience topic on TechNet.
By default, the calculator will recommend multiple databases per JBOD volume. This architecture is supported for single datacenter deployments and multi-datacenter deployments when there is copy and/or server symmetry. The calculator supports highly available database copies and lagged database copies with this volume architecture type. The distribution algorithm will lay out the copies appropriately, as well as generate the deployment scripts correctly to support AutoReseed.
The calculator has been improved in several ways for high availability architectures:
Over the years, a few vocal members of the community have requested that I add more mailbox tiers to the calculator. As many of you know, I rarely recommend sizing multiple mailbox tiers, as that simply adds operational complexity, and I am all about removing complexity in your messaging environments. While I haven’t specifically added additional mailbox tiers, I have added the ability for you to define a percentage of the mailbox tier population that should have the IO and Megacycle Multiplication Factors applied. In a way, this allows you to define up to eight different mailbox tiers.
I’ve received a number of questions regarding processor sizing in the calculator. People are comparing the Exchange 2010 Mailbox Server Role Requirements Calculator output with the Exchange 2013 Server Role Requirements Calculator. As mentioned in our Exchange 2013 Performance Sizing article, the megacycle guidance in Exchange 2013 leverages a new server baseline; therefore, you cannot directly compare the output from the Exchange 2010 calculator with the Exchange 2013 calculator.
There are many other minor improvements sprinkled throughout the calculator. We hope you enjoy this initial release. All of this work wouldn’t have occurred without the efforts of Jeff Mealiffe (for without our sizing guidance there would be no calculator!), David Mosier (VBA scripting guru and the master of crafting the distribution worksheet), and Jon Gollogy (deployment scripting master).
As always we welcome feedback and please report any issues you may encounter while using the calculator by emailing strgcalc AT microsoft DOT com.
There might be times when an Exchange administrator needs to export the contents of individual mailboxes to offline files in order to give specific users a format that is easily portable and ready to consume in Outlook. To fulfill this need, Exchange 2007 SP1 will have a new set of features to export and import mailboxes to and from PST files. As I know you will ask - yes, those PST files can be bigger than 2 GB, which was a limitation of the ExMerge tool used for this purpose in previous versions of Exchange.
Export/Import to PST Requirements
In order to export or import mailboxes to PST files the following requirements must be met:
Exporting mailboxes to PST files
The most basic cmdlet to export a mailbox to a PST file is as follows:
Export-Mailbox –Identity <mailboxUser> -PSTFolderPath <pathToSavePST>
PSTFolderPath must be a full path pointing either to a directory or to a (.pst) file. If a directory is specified a PST file named after the mailbox alias will be used as the target of the export. Note that if the PST file already exists the contents of the mailbox will be merged into it.
Example:
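A concrete instance of the cmdlet above (the alias and path are illustrative):

```powershell
# Exports the mailbox for alias "john" to D:\PSTs\john.pst
# (the file is named after the mailbox alias when a directory is given).
Export-Mailbox -Identity john -PSTFolderPath D:\PSTs
```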
After the cmdlet finishes execution, the .pst file will be ready in the specified location:
To export multiple mailboxes to their respective .pst files at once, you can pipe the identities of those mailboxes to the export task. Notice that when bulk exporting, the PSTFolderPath parameter must point to a directory, since one .pst file will be created for each mailbox.
Get-Mailbox -Database 'MDB' | Export-Mailbox -PSTFolderPath D:\PSTs
Importing mailboxes from PST files
The process for importing mailbox contents from a PST file is quite similar:
Import-Mailbox -Identity <mailboxUser> -PSTFolderPath <PSTFileLocation>
Again, PSTFolderPath must be the full path to the directory where the .pst file lives or to the (.pst) file itself. In the case where PSTFolderPath points to a directory the cmdlet will try to match the mailbox alias with the name of an existing .pst file in the specified directory and import the content of that file.
Just as with the export to PST scenario, when bulk importing mailboxes the PSTFolderPath must point to a directory, and the task logic will try to match mailbox aliases with the .pst file names under that location. If no match is found for a particular mailbox, that mailbox will be skipped.
Get-Mailbox -Database 'MDB' | Import-Mailbox -PSTFolderPath D:\PSTs
Filtering content in Export/Import to PST
When only specific content is desired in the PST file (or back into the mailbox) a common set of filters can be used to leave out the rest of the messages. Export/Import to PST support the following filters: Locale, StartDate, EndDate, ContentKeywords, SubjectKeywords, AttachmentFileNames, AllContentKeywords, SenderKeywords, and RecipientKeywords.
Example: Import only those messages that were created between 1/1/06 and 12/1/06 and contain the word "review" in the subject and any of the words {"project","alpha"} in the body.
Import-mailbox -Identity ricardr -PSTFolderPath D:\PSTs -StartDate 1/1/06 -EndDate 12/1/06 -SubjectKeywords:'review' -ContentKeywords:'project','alpha'
Now, we realize that you would like to try this today, but please be patient!
- Ricardo Rosales Guerrero
Among the many new features delivered in Exchange 2013 SP1 is a new method of connectivity to Outlook we refer to as MAPI over HTTP (or MAPI/HTTP for short). We’ve seen a lot of interest about this new connection method and today we’ll give you a full explanation of what it is, what it provides, where it will take us in the future, and finally some tips of how and where to get started enabling this for your users.
MAPI over HTTP is a new transport used to connect Outlook and Exchange. MAPI/HTTP was first delivered with Exchange 2013 SP1 and Outlook 2013 SP1 and begins gradually rolling out in Office 365 in May. It is the long term replacement for RPC over HTTP connectivity (commonly referred to as Outlook Anywhere). MAPI/HTTP removes the complexity of Outlook Anywhere’s dependency on the legacy RPC technology. Let’s compare the architectures.
MAPI/HTTP moves connectivity to a true HTTP request/response pattern and no longer requires two long-lived TCP connections to be open for each session between Outlook and Exchange. Gone are the twin RPC_DATA_IN and RPC_DATA_OUT connections required in the past for each RPC/HTTP session. This change reduces the number of concurrent TCP connections established between the client and server: MAPI/HTTP generates a maximum of two concurrent connections, one long-lived connection and an additional on-demand, short-lived connection.
Outlook Anywhere also essentially double-wrapped all of the communications with Exchange, adding to the complexity. MAPI/HTTP removes the RPC encapsulation within HTTP packets sent across the network, making MAPI/HTTP a better-understood and more predictable HTTP payload.
An additional network-level change is that MAPI/HTTP decouples the client/server session from the underlying network connection. With Outlook Anywhere connectivity, if a network connection was lost between client and server, the session was invalidated and had to be reestablished all over again, which is a time-consuming and expensive operation. In MAPI/HTTP, when a network connection is lost the session itself is not reset for 15 minutes, and the client can simply reconnect and continue where it left off before the network-level interruption took place. This is extremely helpful for users who might be connecting from low quality networks. Additionally, in the past an unexpected server-side network blip would result in all client sessions being invalidated and a surge of reconnections being made to a mailbox server. Depending on the number of Outlook clients reconnecting, re-establishing so many RPC/HTTP connections could strain the resources of the mailbox server and possibly extend both the scope (to Outlook clients connected to multiple servers) and the duration of an outage caused by a single server-side network blip.
You are probably asking yourself why the Exchange team would create a complete replacement for something so well-known and used. Let us explain.
The original Outlook Anywhere architecture wasn’t designed for today’s reality of clients connecting from a wide variety of network types – many of these are not as fast or reliable as what was originally expected when Outlook Anywhere was designed. Consider connections from cellular networks, home networks, or in-flight wireless networks as a few examples. The team determined the best way to meet current connection needs and also put Exchange in the position to innovate more quickly was to start with a new simplified architecture.
The primary goal of MAPI/HTTP is to provide a better user experience across all types of connections by providing faster connection times to Exchange – yes, getting email to users faster. Additionally, MAPI/HTTP will improve connection resiliency when the network drops packets in transit. Let’s quantify a few of the improvements your users can expect. These results represent what we have seen in our own internal Microsoft user testing.
When starting Outlook users often see the message “Connecting to Outlook” in the Outlook Status bar. MAPI/HTTP can reduce the amount of time a user waits for this connection. In the scenario when a user first launches Outlook the time to start synchronization improved to 30 seconds vs. 90 seconds for Outlook Anywhere for 70% of the monitored clients.
Improvements are also delivered when clients are resuming from hibernation or simply re-connecting to a new network. Testing showed that 80% of the clients using MAPI/HTTP started syncing in less than 30 seconds vs. over 40 seconds for Outlook Anywhere clients when resuming from hibernation. This improvement was made possible as MAPI/HTTP implements a pause/resume feature enabling clients to resume using an existing connection rather than negotiating a new connection each time. Current sessions for MAPI/HTTP are valid for 15 minutes, but as we fine tune and expand this duration, these improvements will be even more noticeable.
Improvements aren’t limited to end users. IT administrators will gain greater protocol visibility, allowing them to identify and remediate issues faster and with more confidence. Because MAPI/HTTP moves to a more traditional HTTP payload, administrators can use the well-known tools common to HTTP debugging. IIS and HttpProxy logs will now contain information similar to other HTTP-based protocols, like Outlook Web App, and can pass information via headers. In the past, certain debug procedures for RPC/HTTP were available only via proprietary internal Microsoft tools. This move puts all customers on a level playing field as far as which tools are available for debugging.
Exchange administrators will also find that the response returned by Autodiscover for MAPI/HTTP to Outlook is greatly simplified. The settings returned are just the protocol version and the endpoint URLs Outlook uses to connect to the Exchange mailbox and directory from inside or outside the customer’s corporate network. Outlook treats the returned URLs as opaque and uses them as-is, minimizing the risk of connectivity breaking with future endpoint changes. Since MAPI/HTTP, like any other web protocol, simply sends an anonymous HTTP request to Exchange and gets back the authentication settings, there is no need for Autodiscover to advertise the authentication settings. This makes it easier to roll out changes in authentication settings for Outlook.
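To make the “opaque URLs” point concrete, here is a sketch of how a client might consume such a simplified settings payload. The XML shape and element names below are illustrative only (they are not the exact Autodiscover schema); the point is that the client extracts a version and endpoint URLs and uses the URLs verbatim.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified shape of the MAPI/HTTP settings Outlook receives:
# just a protocol version and opaque endpoint URLs (element names here are
# invented for illustration).
response = """
<Protocol>
  <Type>mapiHttp</Type>
  <Version>1</Version>
  <MailStore>
    <InternalUrl>https://mail.contoso.com/mapi/emsmdb/</InternalUrl>
    <ExternalUrl>https://mail.contoso.com/mapi/emsmdb/</ExternalUrl>
  </MailStore>
  <AddressBook>
    <InternalUrl>https://mail.contoso.com/mapi/nspi/</InternalUrl>
    <ExternalUrl>https://mail.contoso.com/mapi/nspi/</ExternalUrl>
  </AddressBook>
</Protocol>
"""

root = ET.fromstring(response)
# The client treats these URLs as opaque strings and uses them as-is.
endpoints = {
    "mailstore": root.findtext("MailStore/InternalUrl"),
    "addressbook": root.findtext("AddressBook/InternalUrl"),
}
print(endpoints["mailstore"])  # https://mail.contoso.com/mapi/emsmdb/
```

Because the client never parses or reassembles the URLs, the server side can change endpoint layout later without breaking connectivity.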
MAPI/HTTP puts the Exchange team in a position to innovate more quickly. It simplifies the architecture, removing the dependency on RPC technologies, which are no longer evolving as quickly as customers demand, and it provides a path for extending connection capabilities. One new capability on the Outlook roadmap is multi-factor authentication for users in Office 365. This capability is made possible by MAPI/HTTP and is targeted to be delivered later this year. For a deeper look at this upcoming feature, review the recent Multi-Factor Authentication for Office 365 blog post. This won’t stop with Office 365 MFA; it provides the extensibility foundation for third-party identity providers.
Let’s walk through the scenario of an Outlook 2013 SP1 client connecting to Exchange Server 2013 SP1 after MAPI/HTTP has been enabled.
Now that we have a clear set of advantages you can offer users, let’s review the requirements to enable MAPI/HTTP.
Server Requirements: Use of MAPI/HTTP requires all Exchange 2013 Client Access servers to be updated to Exchange Server 2013 SP1 (or later). The feature is disabled by default in SP1, so you can get the servers updated without anyone noticing any changes. If you are an Office 365 Exchange Online customer, you won’t have anything to worry about on the service side of deployment.
Client Requirements: Outlook clients must be updated to use MAPI/HTTP. Office 2013 SP1 or the Office 365 ProPlus February update (the SP1 equivalent for ProPlus) is required for MAPI/HTTP. It is recommended that you deploy the May Office 2013 public update or the April update for Office 365 ProPlus to eliminate the restart prompt when MAPI/HTTP is enabled for users.
Prior version clients will continue to work as-is using Outlook Anywhere. Outlook Anywhere is the supported connection method for those clients. We do plan to add MAPI/HTTP support to Outlook 2010 in a future update. We will announce timing when we are closer to its availability.
Part one of getting ready is getting the required updates onto your servers and clients as described in the prior section. Part two is evaluating the potential impacts MAPI/HTTP might have on your on-premises servers. Again, if you are an Office 365 customer, you can ignore this part.
When you implement MAPI/HTTP in your organization, it will have an impact on your Exchange server resources, so review those impacts before you go any further. The Exchange 2013 Server Role Requirements Calculator has been updated to factor in the use of MAPI/HTTP; use the most recent version of the calculator (v6.3 or later) before you proceed. MAPI/HTTP increases the CPU load on Exchange Client Access servers by about 50% over Exchange 2013 RTM; however, this is still lower than Exchange 2010 requirements. The higher CPU use is due to the higher request rate of many short-lived connections, with each request handling authentication and proxying. As you plan, be mindful that deploying in a multi-role configuration will minimize the sizing impact; again, use the calculator to review the potential impacts in your environment.
To provide the best MAPI/HTTP performance you need to install .NET 4.5.1 on your Exchange 2013 servers. Installing .NET 4.5.1 avoids long wait times for users thanks to a fix that ensures the notification channel remains asynchronous, avoiding queued requests.
The change in communication between Exchange and Outlook has a small impact on the bytes sent over the wire. The header content in MAPI/HTTP is responsible for an increase in bytes transferred. In typical message communications we have observed an average packet size increase of 1.2% versus Outlook Anywhere for a 50 KB average packet. For data transfers over 10 MB, the increase in bytes over the wire is 5-10%. These increases assume an ideal network where connections are not dropped or resumed. Under real-world conditions you may actually find that MAPI/HTTP data on the wire is lower than Outlook Anywhere. Outlook Anywhere lacks the ability to resume connections, and the cost of re-syncing items can quickly outweigh the increase from the MAPI/HTTP header information.
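The overhead figures quoted above work out to small absolute numbers; a quick back-of-the-envelope calculation, using only the percentages from the text, makes that concrete:

```python
# 1.2% header overhead on a typical 50 KB packet
avg_packet_kb = 50
extra_bytes = avg_packet_kb * 1024 * 0.012
print(round(extra_bytes))  # 614 extra bytes per packet

# 5-10% overhead on a 10 MB transfer
transfer_mb = 10
print(transfer_mb * 0.05, transfer_mb * 0.10)  # 0.5 1.0 (extra MB, low/high end)
```

A single resumed-instead-of-resynced mailbox session can easily dwarf these numbers, which is why real-world MAPI/HTTP traffic can come out lower than Outlook Anywhere despite the larger headers.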
Now that you have prepared your servers with SP1, updated your clients, and reviewed potential sizing impacts, you are ready to implement MAPI/HTTP. It is disabled by default in SP1 and you must take explicit action to configure and enable it. These steps are well covered in the MAPI over HTTP TechNet article.
A few important things to remember in your deployment.
NOTE: If you require more specific control, you can control the client behavior with a registry key on each client machine. This is not recommended or required, but is included in case your situation demands this level of control. The registry entry prevents Outlook from sending the MAPI/HTTP capable flag to Exchange in the Autodiscover request. A change to this registry key does not take effect until the next time the client performs an Autodiscover query against Exchange.

To disallow MAPI/HTTP and force RPC/HTTP to be used:

[HKEY_CURRENT_USER\Software\Microsoft\Exchange]
"MapiHttpDisabled"=dword:00000001

To allow MAPI/HTTP, simply delete the MapiHttpDisabled DWORD, or set it to a value of 0 as below:

[HKEY_CURRENT_USER\Software\Microsoft\Exchange]
"MapiHttpDisabled"=dword:00000000
There are a few quick ways to verify your configuration is working as expected.
1. Test with the Test-OutlookConnectivity cmdlet
Use this command to test MAPI/HTTP connectivity:
Test-OutlookConnectivity -RunFromServerId Contoso -ProbeIdentity OutlookMapiHttpSelfTestProbe
This test is detailed in the MAPI over HTTP TechNet article.
2. Inspect MAPI/HTTP server logs
Administrators can review the following MAPI/HTTP log files to validate how the configuration is operating:
- CAS: %ExchangeInstallPath%Logging\HttpProxy\Mapi\HTTP
- Mailbox: %ExchangeInstallPath%Logging\MAPI Client Access\
- Mailbox: %ExchangeInstallPath%Logging\MAPI Address Book Service\
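Because these are plain text logs, they lend themselves to quick ad-hoc analysis. The sketch below parses a synthetic CSV-style sample in the spirit of those files; the column names and row contents are invented for illustration (the real logs carry many more fields), so adjust the field names to whatever your log headers actually contain.

```python
import csv
import io
from collections import Counter

# Hypothetical sample rows in the style of the CSV HttpProxy Mapi logs;
# field names here are illustrative, not the real log schema.
sample_log = """DateTime,AuthenticatedUser,UrlStem,HttpStatus
2014-03-01T10:00:01,contoso\\alice,/mapi/emsmdb/,200
2014-03-01T10:00:02,contoso\\bob,/mapi/nspi/,200
2014-03-01T10:00:03,contoso\\alice,/mapi/emsmdb/,401
"""

requests_per_user = Counter()
failures = []
for row in csv.DictReader(io.StringIO(sample_log)):
    requests_per_user[row["AuthenticatedUser"]] += 1
    if row["HttpStatus"] != "200":
        failures.append(row)   # keep non-200 responses for closer inspection

print(requests_per_user["contoso\\alice"])  # 2
print(len(failures))                        # 1
```

The same pattern (DictReader over the log file, then filter on status or user) scales to the real files; point it at the paths listed above instead of the inline sample.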
3. Check Outlook connection status on clients
You can also quickly verify that the client is connected using MAPI/HTTP. The Outlook Connection Status dialog can be launched by Ctrl+right-clicking the Outlook icon in the notification area and selecting Connection Status. Here are a few key fields to quickly confirm the connection is using MAPI/HTTP.
- Protocol: HTTP (vs. RPC/HTTP for Outlook Anywhere)
- Proxy Server: Empty
- Server name: Actual server name (vs. a GUID for Outlook Anywhere connections)
MAPI/HTTP provides a simplified transport, and resulting architecture, for Outlook to connect with Exchange. It improves the user experience by giving users faster access to mail and more resilient Outlook connections. These investments are the foundation for future capabilities such as multi-factor authentication in Outlook. It also helps IT support and troubleshoot client connection issues using standard HTTP protocol tools.
As with anything new, you must properly plan your implementation. Use the deployment guidance available on TechNet and the updated sizing recommendations in the calculator before you start your deployment; they will guide you to a smooth rollout of MAPI/HTTP.
Special thanks to Brian Day and Abdel Bahgat for extensive contributions to this blog post.
Brian Shiers | Technical Product Manager
We collected a number of questions which frequently came up during the development, internal dogfooding, and customer TAP testing of MAPI/HTTP. We hope these answer most of the questions you may have about MAPI/HTTP.
No, it is an organization-wide Exchange setting. Because of the user experience during database failovers when one server is not yet MAPI/HTTP capable, turning MAPI/HTTP on and off per server was not a viable solution.
No, there is not currently a Set-CasMailbox parameter to enable/disable MAPI/HTTP for a single mailbox.
The registry entry simply controls what Outlook tells Exchange about its MAPI/HTTP capability during an Autodiscover request. It does not immediately change the connection method Outlook is using, nor will a simple restart of Outlook change it. Remember, the Autodiscover response Outlook receives contains either MAPI/HTTP or RPC/HTTP settings, so Outlook has no way to immediately switch types. After setting this registry entry, you must allow Outlook to perform its next Autodiscover request and get a response from Exchange before the change takes effect. If you need to speed this process along, there are two options.
For example, when not all Mailbox servers in a DAG are MAPI/HTTP capable, MAPI/HTTP has already been enabled in the organization, and a mailbox failover takes place between an SP1 (or later) server and a pre-SP1 server. This could also happen if you move a mailbox from a MAPI/HTTP capable Mailbox server to a server that is not MAPI/HTTP capable.
In the above example, Outlook would fail to connect, and when Autodiscover next ran the user would get an Outlook restart notification warning, because MAPI/HTTP is no longer a viable connection method once the mailbox is mounted on a pre-SP1 server. After the client restart, the client profile would be back to using RPC/HTTP.
Note: While a mix of MAPI/HTTP capable and non-capable Mailbox servers in the same DAG is supported in an environment with MAPI/HTTP enabled, it is strongly discouraged because of the possible user experience outlined above. We suggest the entire organization be upgraded to SP1 or later before enabling MAPI/HTTP in the organization.
Outlook profiles can continue to access additional resources using non-MAPI/HTTP connectivity methods even if the user’s primary mailbox uses MAPI/HTTP. For example, a user can continue to access legacy Public Folders or shared mailboxes on other Exchange servers not using MAPI/HTTP. During the Autodiscover process, Exchange determines and hands back to Outlook the proper connectivity method for each resource being accessed.
No, a user’s profile will never attempt to use RPC/HTTP if MAPI/HTTP becomes unavailable, because the original Autodiscover response contained only one connection method. There is no fallback from MAPI/HTTP to RPC/HTTP or vice versa. Normal high availability design considerations should ensure the MAPI/HTTP endpoint remains accessible in the event of server or service failures.
No, MAPI/HTTP is not a replacement for EWS and there are no plans to move current EWS clients to MAPI/HTTP.
Over time this may take place as non-MAPI/HTTP capable Outlook versions age out of their product support lifecycle, but there are no immediate plans to remove RPC/HTTP as a valid connection method.
A huge architectural improvement of moving to MAPI/HTTP is that MAPI/HTTP is abstracted from authentication. In short, authentication is done at the HTTP layer, so whatever HTTP can do, MAPI/HTTP can use.
No, the Lync client uses the same profile as configured by Outlook and will connect via whatever connectivity method is in use by Outlook.
The MAPI/HTTP protocol is publicly documented (PDF download) and has the same level of documentation support as RPC/HTTP. There are no plans to update the MAPI CDO library for MAPI/HTTP, and third-party companies are still encouraged to use Exchange Web Services as the long-term protocol for interacting with Exchange, as discussed in the Exchange 2013 Client Access Server Role article.
Use of MAPI/HTTP requires Outlook 2013 clients to obtain Office 2013 Service Pack 1 or the February update for Office 365 ProPlus clients. MAPI/HTTP is planned to be ported to Outlook 2010 at a future date. At this time no other version of Windows Outlook supports MAPI/HTTP.
Publishing Exchange 2013 with MAPI/HTTP in use does not change very much. You will need to ensure that devices in front of Exchange handling user access to CAS allow access to the Default Web Site’s /mapi/ virtual directory.
At this time UAG SP3 is not compatible with MAPI/HTTP even with all filtering options disabled. UAG plans to add support for MAPI/HTTP in a future update.
Still want more information? Review the following sessions on this topic from the Microsoft Exchange Conference 2014:
Outlook Connectivity: Current and Future
What's New in Outlook 2013 and Beyond
What's New in Authentication for Outlook 2013
EDIT 12/7/2010: For additional help resolving those issues, please see our newer blog post Resolving WinRM errors and Exchange 2010 Management tools startup failures.
In this blog post, we will be highlighting some of the most common errors that may be seen when attempting to open the Exchange Management tools (Exchange Management Console and Exchange Management Shell).
To start off, you first need to be aware that in Exchange 2010, all management is done via Remote PowerShell, even when opening the Management Tools on an Exchange server. Where this differs from Exchange 2007 is that there is now a much larger dependency on IIS, as Remote PowerShell requests are sent via the HTTP protocol and use IIS as the mechanism for connections. IIS works with the WinRM (Windows Remote Management) service, and the WSMan (Web Services for Management) protocol to initiate the connection.
When you click on the Exchange Management Shell shortcut, a Remote PowerShell session is opened. Instead of simply loading the Exchange snap-in (as we did with Exchange 2007), PowerShell connects using IIS to the closest Exchange 2010 server via WinRM. WinRM then performs authentication checks, creates the remote session and presents to you the cmdlets that you have access to via RBAC (Role Based Access Control).
Since all Remote PowerShell connections go through IIS, we have identified some of the most common errors that may be exhibited when attempting to open the Exchange Management tools along with the most common causes of those errors and how to address these issues. We have attempted to list these in order of frequency.
Issue:
Connecting to remote server failed with the following error message: The WinRM client cannot process the request. It cannot determine the content type of the HTTP response from the destination computer. The content type is absent or invalid. For more information, see the about_Remote_Troubleshooting Help topic.
Possible causes:
1. Remote PowerShell uses Kerberos to authenticate the user connecting. IIS implements this Kerberos authentication method via a native module. In IIS Manager, if you go to the PowerShell Virtual Directory and then look at the Modules, you should see Kerbauth listed as a Native Module, with the dll location pointing to C:\Program Files\Microsoft\Exchange Server\v14\Bin\kerbauth.dll. If the Kerbauth module shows up as a Managed module instead of Native, or if the Kerbauth module has been loaded on the Default Web Site level (instead of, or in addition to, the PowerShell virtual directory), you can experience this issue. To correct this, make sure that the Kerbauth module is not enabled on the Default Web Site, but is only enabled on the PowerShell virtual directory. The entry type of "Local" indicates that the Kerbauth module was enabled directly on this level, and not inherited from a parent.
2. The WSMan module entry is missing from the global modules section of the C:\Windows\System32\Inetsrv\config\ApplicationHost.config file. The correct entry looks like this:
<globalModules>
    ...
    <add name="WSMan" image="C:\Windows\system32\wsmsvc.dll" />
</globalModules>
This will result in the WSMan module displaying as a Managed module on the PowerShell virtual directory.
To correct this, make sure that the WSMan module has been registered (but not enabled) at the Server level, and has been enabled on the PowerShell virtual directory.
3. If the user that is attempting to connect is not Remote PowerShell enabled. To check if a user is enabled for Remote PowerShell, you need to open the Exchange Management Shell with an account that has been enabled, and run the following query.
(Get-User <username>).RemotePowerShellEnabled
This will return a True or False. If the output shows False, the user is not enabled for Remote PowerShell. To enable the user, run the following command.
Set-User <username> -RemotePowerShellEnabled $True
Connecting to the remote server failed with the following error message: The WinRM client sent a request to an HTTP server and got a response saying the requested HTTP URL was not available. This is usually returned by a HTTP server that does not support the WS-Management protocol. For more information, see the about_Remote_Troubleshooting Help topic.
Possible Causes:
1. The HTTP binding has been removed from the Default Web Site. A common scenario for this: you are running multiple web sites and have set up a redirect to https://mail.company.com/owa by requiring SSL on the Default Web Site, then created another web site to redirect back to the SSL-enabled site.
Remote PowerShell requires port 80 to be available on the Default Web Site. If you want to set up an automatic redirect to /owa and redirect http requests to https, you should follow the instructions located at http://technet.microsoft.com/en-us/library/aa998359(EXCHG.80).aspx and follow the directions under the section "For a Configuration in Which SSL is Required on the Default Web Site or on the OWA Virtual Directory in IIS 7.0".
2. The HTTP binding on the Default Web Site has been modified and the Host name field configured. To correct this issue, clear the Host name field under the port 80 binding on the Default Web Site.
Connecting to remote server failed with the following error message: The WinRM client received an HTTP server error status (500), but the remote service did not include any other information about the cause of the failure. For more information, see the about_Remote_Troubleshooting Help topic. It was running the command 'Discover-ExchangeServer -UseWIA $true -SuppressError $true'.
In addition, you may see the following warning event in the System log:
Source: Microsoft-Windows-WinRM
Event ID: 10113
Level: Warning
Description: Request processing failed because the WinRM service cannot load data or event source: DLL="%ExchangeInstallPath%Bin\Microsoft.Exchange.AuthorizationPlugin.dll"
Possible causes:
1. The ExchangeInstallPath variable may be missing. To check this, go to the System Properties, Environment variables, and look under the System variables. You should see a variable of ExchangeInstallPath with a value pointing to C:\Program Files\Microsoft\Exchange Server\V14\.
2. The Path of the Powershell virtual directory has been modified. The PowerShell virtual directory must point to the \Program Files\Microsoft\Exchange Server\v14\ClientAccess\PowerShell directory or you will encounter problems.
Connecting to remote server failed with the following error message: The connection to the specified remote host was refused. Verify that the WS-Management service is running on the remote host and configured to listen for requests on the correct port and HTTP URL. For more information, see the about_Remote_Troubleshooting Help topic.
1. Make sure the MSExchangePowerShellAppPool is running. If it is, try recycling the Application Pool and check for errors or warnings in the Event logs.
2. Make sure that the user that is trying to connect is Remote PowerShell Enabled (see the first error for details on how to check this).
3. Make sure WinRM is properly configured on the server.
a. Run winrm quickconfig on the server and ensure that both tests pass and no actions are required. If any actions are required, answer Yes to the prompt to allow the WinRM configuration changes to be made.
b. Run winrm enumerate winrm/config/listener and ensure that a listener is present for the HTTP protocol on port 5985, listening on all addresses.
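If you capture the listener output to a file, a quick script can check it for the expected configuration. The sample text below is a trimmed, illustrative rendering of winrm enumerate output, not a verbatim capture, so treat the parsing as a sketch to adapt to what your server actually prints.

```python
# Trimmed, illustrative sample of "winrm enumerate winrm/config/listener"
# output (real output includes more properties).
sample = """Listener
    Address = *
    Transport = HTTP
    Port = 5985
    Enabled = true
    ListeningOn = 127.0.0.1, 192.168.1.10
"""

# Parse the indented "Key = Value" lines into a dict.
settings = {}
for line in sample.splitlines()[1:]:
    if "=" in line:
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()

# The checks recommended above: HTTP transport, port 5985, all addresses.
ok = (settings.get("Transport") == "HTTP"
      and settings.get("Port") == "5985"
      and settings.get("Address") == "*")
print(ok)  # True
```

If the check fails, re-run winrm quickconfig as described in step (a) and compare the listener output again.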
Connecting to remote server failed with the following error message: The WinRM client received an HTTP status code of 403 from the remote WS-Management service.
1. The "Require SSL" option has been enabled on the PowerShell Virtual Directory. To resolve this, remove the "Require SSL" option from this Virtual Directory. The Exchange Management Tools connect over port 80, not 443, so if Require SSL is set, when a connection is attempted on port 80, IIS will return a 403 error indicating SSL is required.
- Ben Winzenz, Solange Trombini
Windows 8.1 and Windows RT include a built-in email app named Windows Mail. Mail includes support for IMAP and Exchange ActiveSync (EAS) accounts.
This article includes some key technical details of Windows Mail in Windows 8.1. (See Supporting Windows 8 Mail in your organization for Windows 8.0.) Use the information to help you support the use of Mail in your organization. Read this article start to finish, or jump to the topic that interests you. Use the reference links throughout the article for more information.
NOTE The Mail, Calendar, and People apps run on Windows 8.1 and Windows RT. Although this article discusses the Mail app, much of the information also applies to the Calendar and People apps. When connected to a server that supports Exchange ActiveSync, the Calendar and People apps may also display data that was downloaded over the Exchange ActiveSync connection.
Mail lets users connect to any service provider that supports either of the following two protocols:

- Exchange ActiveSync (EAS)
- Internet Message Access Protocol (IMAP)

Post Office Protocol (POP) is not supported.
NOTE All Windows Communications apps (Mail, Calendar, and People) can use the data that is synchronized using Exchange ActiveSync. After a user connects to their account in the Mail app, their contacts and calendar data is available in the other Windows Communications Apps and vice versa.
Mail can be configured to synchronize data at different intervals.
Windows 8.1 and Windows RT users can add email accounts to Mail using the Settings charm. The Settings charm is always available on the right side of the Windows 8.1 and Windows RT screen. (For more visual details about Charms & the Windows 8.1 user interface, see Search, share, print & more.)
NOTE This section provides an overview of account setup in Mail. For step-by-step procedures for setting up an account, see What else do I need to know? at the end of this guide.
If automatic configuration fails, additional account information is required to connect to a server via Exchange ActiveSync.
Mail provides administrators with some level of security through Exchange ActiveSync policies (Mobile Device Mailbox Policies in Exchange 2013). It doesn’t support any means of managing or securing PCs that are connected via IMAP. EAS includes support for certificate-based authentication and remote wipe.
Exchange ActiveSync devices can be managed using Exchange ActiveSync policies. Mail supports the following EAS policies:
Important If AllowNonProvisionableDevices is set to false in an EAS policy and the policy contains settings that are not part of this list, the device won’t be able to connect to the Exchange server.
If a Windows 8.x user uses a picture password and Exchange ActiveSync policy requires a password, the user will still need to create and enter a password in accordance with the policy.
If a Windows 8.1 PC is joined to an Active Directory domain and controlled by Group Policy, there may be conflicting policy settings between Group Policy and an Exchange ActiveSync policy. In the event of any conflict, the strictest rule in either policy takes precedence. The only exception is password complexity rules for domain accounts. Group policy rules for password complexity (length, expiry, history, number of complex characters) take precedence over Exchange ActiveSync policies – even if group policy rules for password complexity are less strict than Exchange ActiveSync rules, the domain account will be deemed in compliance with Exchange ActiveSync policy.
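The merge rule above ("strictest wins, except password complexity, where Group Policy wins for domain accounts") can be sketched as a small function. The policy names, and the choice of which direction counts as "stricter" for each setting, are assumptions made for this illustration:

```python
def effective_policy(group_policy, eas_policy):
    # Password-complexity settings always come from Group Policy for domain
    # accounts, even when the EAS policy is stricter (the documented exception).
    password_keys = {"min_password_length", "password_expiry_days"}
    # For the rest, the strictest rule wins; for lock timeouts a lower value
    # is stricter, for everything else here a higher value is stricter.
    lower_is_stricter = {"inactivity_lock_minutes"}
    merged = {}
    for key in group_policy.keys() | eas_policy.keys():
        values = [v for v in (group_policy.get(key), eas_policy.get(key))
                  if v is not None]
        if key in password_keys and key in group_policy:
            merged[key] = group_policy[key]
        elif key in lower_is_stricter:
            merged[key] = min(values)
        else:
            merged[key] = max(values)
    return merged

merged = effective_policy(
    {"min_password_length": 6, "inactivity_lock_minutes": 15},
    {"min_password_length": 8, "inactivity_lock_minutes": 5, "require_encryption": 1},
)
print(merged["min_password_length"])      # 6  (Group Policy wins despite stricter EAS)
print(merged["inactivity_lock_minutes"])  # 5  (strictest of the two wins)
```

The practical takeaway is the same as the paragraph above: domain accounts can satisfy the EAS policy with a Group Policy password rule that looks weaker on paper, while every other conflicting setting resolves to the stricter of the two.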
Communications applications can connect to a corporate Exchange service configured to require certificate-based authentication. User authentication certificates can be provisioned to Windows 8.1 devices by administrators, or end users can browse to a certificate and install it to the user certificate store.
A user can add and connect an email account using a certificate. (For account setup, password entry is required per standard account setup.) The user may be prompted to give the Mail application permission to access their user certificate, and should accept the prompt to enable certificate usage. In cases where multiple certificates are available, the user can go to account Settings to select the desired certificate.
Non-PIN protected software certificates are supported.
Mail supports the Exchange ActiveSync remote wipe directive, but unlike Windows Phone (which deletes all data on the device), Mail scopes the data deleted to the specified Exchange ActiveSync account for which the remote wipe command is issued. The user's personal data is not deleted. Additionally, attachments saved from that account are made inaccessible.
For example, if a user has an Outlook.com account for personal use and a Contoso.com account for work use, a remote wipe directive from the Contoso.com server would remove only the Contoso.com account data from Mail on Windows 8.1, whereas on Windows Phone it would wipe the entire device.
To make it as easy as possible for users to have all of their accounts set up on all of their devices, Windows 8.1 uploads vital account information to the user’s Microsoft account. This information includes email address, server, server settings, and password. When a user signs into a new PC with their Microsoft account, their email accounts are automatically set up for them.
If using client certificate authentication, the client certificate and the certificate selection for an account will not be roamed. Users will have to select their desired client certificate to begin syncing a client-certificate account on a new PC.
By default, users are required to have a Microsoft account, formerly known as Windows Live ID, to use the Windows Communications apps. This will usually be the Microsoft account that the user is signed into Windows with, but if they are not signed in with one, they will be prompted to provide one before proceeding.
You can apply a Group Policy to a device to make a Microsoft Account optional for the Windows Communications apps.
Note: The Group Policy setting is configured in the Computer Configuration node of Group Policy and applies to all users of the computer or device to which it is applied. The policy setting lets you control whether Microsoft accounts are optional for Windows Store apps that require an account to sign in. This policy only affects Windows Store apps that support it. Windows RT devices can use Local Group Policy.
To apply the Group Policy setting:
If the Group Policy is applied and a Microsoft account is not used, the Communications apps will:
A user can add additional accounts if desired. You can use corporate firewalls or other mechanisms to block access to any consumer email services as needed.
The following functionality will be unavailable to a user without a Microsoft Account:
By default, Mail only downloads one month of email (up from two weeks in Windows 8.0). This is user-configurable, and users can potentially download their entire mailbox. For Exchange ActiveSync accounts, all contacts are downloaded, and calendar events are downloaded only from three months behind the current date to 18 months ahead.
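The default Exchange ActiveSync download windows above can be expressed as simple date arithmetic. This is a minimal sketch that approximates a "month" as 30 days, which is an assumption of the example rather than anything the app guarantees:

```python
from datetime import date, timedelta

# Default download windows described above (Exchange ActiveSync accounts):
# one month of mail, and calendar data from 3 months back to 18 months ahead.
def sync_windows(today):
    mail_since = today - timedelta(days=30)      # ~1 month of email
    cal_start = today - timedelta(days=3 * 30)   # ~3 months behind
    cal_end = today + timedelta(days=18 * 30)    # ~18 months ahead
    return mail_since, cal_start, cal_end

mail_since, cal_start, cal_end = sync_windows(date(2014, 3, 1))
print(mail_since)  # 2014-01-30
print(cal_start)   # 2013-12-01
print(cal_end)     # 2015-08-23
```

Anything older than mail_since is simply not on the device, which is worth remembering when users report "missing" mail after setting up a new PC.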
Additionally, messages can be only partially downloaded to reduce bandwidth use as follows:
Embedded images in email messages are downloaded on-demand as the user reads them, and attachments which are not downloaded can be downloaded on-demand as the user attempts to open them.
Mail downloads all folders for an account. Users can configure the period of email which is downloaded to adjust the size of data for an account. Mail does not enforce any limits on number and size of attachments users can send.
Mail allows users to view and set their automatic reply messages (aka Out of Office or OOF messages). There is a visual indication when auto-reply is enabled. Users can view and set automatic reply plain text content. For corporate accounts, separate internal and external auto-reply messages are supported.
There is no date/time support for specifying start or end time for automatic replies.
The communications applications can connect over LAN or WiFi connections via authenticated proxies that use standard authentication methods, including NTLM, Digest, Negotiate, and Basic authentication.
Any user credentials entered can be cached for the session, or remembered persistently.
The communications applications warn the user with a prompt providing an option to connect anyway when trying to connect to services with common service certificate issues. See Self-Signed Certificates in Limitations below for details and recommendations.
Direct mailbox connections using POP: Only EAS and IMAP protocols are supported.
Note This does not mean that Windows 8.1 does not support POP. This post is about the Mail app. See Using email accounts over POP on Windows 8.1 and Windows RT 8.1 for workarounds.
Opaque-signed and encrypted S/MIME messages: When S/MIME messages are received in Mail, it displays an email item with a message body that begins with “This encrypted message can’t be displayed.”
Users may experience connectivity errors when trying to connect to an Exchange server that uses a self-signed certificate or a certificate with other common issues. The user may receive the following error message.
There’s a problem with a server’s security certificate. It might not be safe to connect to the server because… <details>.
You can use one of the following options to resolve this issue.
At the prompt, users can connect anyway to ignore common service certificate issues, such as self-signed certificates, allowing the communications applications to use an encrypted connection to the email service despite the certificate issue. If users choose to connect anyway, their selection will be remembered (it can be viewed and changed at any time via the account’s Settings).
We recommend that users select Cancel when they receive a certificate-related error and contact the administrator to fix the issue (option 1).
See Digital Certificates and SSL for more information.
This enables Exchange to work for Windows 8.1 devices that have the certificate installed.
Note The administrator must provide a certificate file (.cer). The certificate can be installed to the trusted root certificate authority store for either of the following options:
certutil.exe -f -addstore root certificate.cer
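The same import can also be done from PowerShell on Windows 8.1/Windows Server 2012 R2. A minimal sketch for the machine-wide store, where the file name and path are illustrative (the administrator supplies the actual .cer file):

```powershell
# Import the provided .cer file into the local machine's Trusted Root store.
# Run from an elevated PowerShell prompt; the path below is an assumption.
Import-Certificate -FilePath "C:\Certs\contoso-root.cer" `
                   -CertStoreLocation Cert:\LocalMachine\Root

# Verify that the certificate landed in the store (subject filter is illustrative).
Get-ChildItem Cert:\LocalMachine\Root |
    Where-Object { $_.Subject -like "*contoso*" }
```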
If a Mail user can't successfully connect to an account, consider the following:
TIP The user will see the following message if they haven't registered their account: “We couldn’t find the settings for. Provide us with more info and we’ll try connecting again.”
In 2005, we set out developing Exchange 2007. During the preliminary planning of Exchange 2007, we realized that the server architecture model had to evolve to deal with hardware constraints, like CPU, that existed at the time and would exist for a good portion of the lifecycle of the product. The architectural change resulted in five server roles, three of which were core to the product:
Figure 1: Server role architecture in Exchange 2007 & Exchange 2010
Our goal was to make these server roles autonomous units. Unfortunately, that goal did not materialize in either Exchange 2007 or Exchange 2010. The server roles were tightly coupled along four key dimensions:
The functionality and versioning aspects are the key issues. To understand this better, let’s look at the following diagram:
Figure 2: Inter-server communication across different layers of functionality
As you can see from the above diagram, there are three layers: 1) protocols, 2) business logic, and 3) storage. If you are familiar with the OSI model, you might expect the protocol layer to behave like the application layer, with data flowing from the application layer through the other layers to get to the physical layer (or vice versa). However, that is not the case in Exchange 2007 or Exchange 2010; a protocol can interact directly with the storage layer. For example, the transport instance on Server1 (Exchange 2010 SP1) can deliver mail to the Store service on Server2 (Exchange 2010 SP2). The reverse is also true: the store can submit mail via the Store Submission service on Server2 to the transport service running on Server1. In either scenario, content conversion happens on the lower-version server (as content conversion in this example happens within the transport components). While a newer version interacting with an older version may not be problematic, the same cannot be said when the reverse is true, as the older version simply does not know about any changes in the newer version that may break functionality (hence why we put blockers in place in certain circumstances, or provide guidance on upgrade procedures).
The end result is that in Exchange 2007 and Exchange 2010 deployments, the server roles are deployed as a single monolithic entity.
When we started our planning for Exchange Server 2013, we focused on a single architectural tenet – improve the architecture to serve the needs of deployments at all scales, from the small 100-user shop to hosting hundreds of millions of mailboxes in Office 365. This single tenet drove us to make major architectural changes and investments across the entire product. This shift is, in part, because unlike when we were designing Exchange 2007, we are no longer CPU bound; in fact, there is so much readily available CPU on modern server hardware that the notion of dedicated server roles no longer makes sense, as hardware is ultimately wasted (hence the recommendation in Exchange 2010 to deploy multi-role servers).
In Exchange Server 2013, we have two basic building blocks – the Client Access array and the Database Availability Group (DAG). Each provides a unit of high availability and fault tolerance that are decoupled from one another. Client Access servers make up the CAS array, while Mailbox servers comprise the DAG.
Figure 3: New server role architecture simplifies deployments
So what is the Client Access server in Exchange 2013? The Client Access server role comprises three components: client protocols, SMTP, and a UM Call Router. The CAS role is a thin, protocol-session-stateless server that is organized into a load balanced configuration. Unlike previous versions, session affinity is not required at the load balancer (but you still want a load balancer to handle connection management policies and health checking). This is because logic now exists in CAS to authenticate the request, and then route the request to the Mailbox server that hosts the active copy of the mailbox database.
The Mailbox server role now hosts all the components and/or protocols that process, render and store the data. No clients ever connect directly to the Mailbox server role; all client connections are handled by the Client Access server role. Mailbox servers can be added to a Database Availability Group, thereby forming a highly available unit that can be deployed in one or more datacenters.
Unlike the past two generations, these two server roles do not suffer from the same constraints:
If we return to the layering diagram, we can see how this changes:
Figure 4: Inter-server communication in Exchange 2013
Instead of allowing communication between servers to occur at any layer in the stack, communication must occur between servers at the protocol layer. This ensures that for a given mailbox’s connectivity, the protocol being used is always served by the protocol instance that is local to the active database copy. In other words, if my mailbox is located on Server1 and I want to send a message to a mailbox on Server2, the message must be sent from Server1’s transport components to the transport components on Server2; content conversion of the message then occurs on Server2 as the message is injected into the store. If I upgrade the Mailbox server with a service pack or cumulative update, then for a given mailbox hosted on that server, all data rendering and content conversions for that mailbox will be local, thus removing version constraints and functionality issues that arose in previous releases.
This article is the start of several articles focused on architecture and the investments we have made in Exchange Server 2013. Over the next several weeks:
And for those of you that did not get a chance to attend MEC 2012, feel free to visit www.iammec.com/video and watch my technical architecture keynote.
Exchange Server 2013 introduces a new building block architecture that facilitates deployments at all scales. All core Exchange functionality for a given mailbox is always served by the Mailbox server where the mailbox’s database is currently activated. The changes introduced with the Client Access server role enable you to move away from complicated session affinity load balancing solutions, simplifying the network stack. From an upgrade perspective, all components on a given server are upgraded together, thereby virtually eliminating the need to juggle Client Access and Mailbox server versions. And finally, the new server role architecture aligns Exchange with hardware trends for the foreseeable future – core count will continue to increase, disk capacity will continue to increase, and RAM prices will continue to drop.
The Exchange team is announcing the availability of the following updates:
Exchange Server 2010 Service Pack 3 Update Rollup 5 resolves customer reported issues and includes previously released security bulletins for Exchange Server 2010 Service Pack 3. A complete list of the issues resolved in this rollup is available in KB2917508.
Exchange Server 2007 Service Pack 3 Update Rollup 13 provides recent DST changes and adds the ability to publish a 2007 Edge Server from Exchange Server 2013. Update Rollup 13 also contains all previously released security bulletins and fixes and updates for Exchange Server 2007 Service Pack 3. More information on this rollup is available in KB2917522.
Neither release is classified as a security release but customers are encouraged to deploy these updates to their environment once proper validation has been completed.
Note: KB articles may not be fully available at the time of publishing of this post.
During my session at the recent Microsoft Exchange Conference (MEC), I revealed Microsoft’s preferred architecture (PA) for Exchange Server 2013. The PA is the Exchange Engineering Team’s prescriptive approach to what we believe is the optimum deployment architecture for Exchange 2013, and one that is very similar to what we deploy in Office 365.
The PA is designed with several business requirements in mind. For example, requirements that the architecture be able to:
The specific prescriptive nature of the PA means of course that not every customer will be able to deploy it (for example, customers without multiple datacenters). And some of our customers have different business requirements or other needs, which necessitate an architecture different from that shown here. If you fall into those categories, and you want to deploy Exchange on-premises, there are still advantages to adhering as closely as possible to the PA and deviating only where your requirements differ widely. Alternatively, you can consider Office 365, where you can take advantage of the PA without having to deploy or manage servers.
Before I delve into the PA, I think it is important that you understand a concept that is the cornerstone for this architecture – simplicity.
Failure happens. There is no technology that can change this. Disks, servers, racks, network appliances, cables, power substations, generators, operating systems, applications (like Exchange), drivers, and other services – there is simply no part of an IT services offering that is not subject to failure.
One way to mitigate failure is to build in redundancy. Where one entity is likely to fail, two or more entities are used. This pattern can be observed in Web server arrays, disk arrays, and the like. But redundancy by itself can be prohibitively expensive (simple multiplication of cost). For example, the cost and complexity of the SAN based storage system that was at the heart of Exchange until the 2007 release, drove the Exchange Team to step up its investment in the storage stack and to evolve the Exchange application to integrate the important elements of storage directly into its architecture. We recognized that every SAN system would ultimately fail, and that implementing a highly redundant system using SAN technology would be cost-prohibitive. In response, Exchange has evolved from requiring expensive, scaled-up, high-performance SAN storage and related peripherals, to now being able to run on cheap, scaled-out servers with commodity, low-performance SAS/SATA drives in a JBOD configuration with commodity disk controllers. This architecture enables Exchange to be resilient to any storage related failure, while enabling you to deploy large mailboxes at a reasonable cost.
By building the replication architecture into Exchange and optimizing Exchange for commodity storage, the failure mode is predictable from a storage perspective. This approach does not stop at the storage layer; redundant NICs, power supplies, etc., can also be removed from the server hardware. Whether it is a disk, controller, or motherboard that fails, the end result should be the same, another database copy is activated and takes over.
The more complex the hardware or software architecture, the more unpredictable failure events can be. Managing failure at any scale is all about making recovery predictable, which drives the necessity of having predictable failure modes. Examples of complex redundancy are active/passive network appliance pairs, aggregation points on the network with complex routing configurations, network teaming, RAID, multiple fiber pathways, etc. Removing complex redundancy seems unintuitive on its face – how can removing redundancy increase availability? Moving away from complex redundancy models to a software-based redundancy model creates a predictable failure mode.
The PA removes complexity and redundancy where necessary to drive the architecture to a predictable recovery model: when a failure occurs, another copy of the affected database is activated.
The PA is divided into four areas of focus:
In the Namespace Planning and Load Balancing Principles articles, I outlined the various configuration choices that are available with Exchange 2013. From a namespace perspective, the choices are to either deploy a bound namespace (having a preference for the users to operate out of a specific datacenter) or an unbound namespace (having the users connect to any datacenter without preference).
The recommended approach is to utilize the unbound model, deploying a single namespace per client protocol for the site resilient datacenter pair (where each datacenter is assumed to represent its own Active Directory site - see more details on that below). For example:
Figure 1: Namespace Design
Each namespace is load balanced across both datacenters in a configuration that does not leverage session affinity, resulting in fifty percent of traffic being proxied between datacenters. Traffic is distributed equally across the datacenters in the site resilient pair via DNS round-robin, geo-DNS, or a similar solution you may have at your disposal. From our perspective, the simplest solution is the least complex and the easiest to manage, so our recommendation is to leverage DNS round-robin.
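As an illustration of what DNS round-robin looks like in practice, here is a sketch using the Windows DnsServer module; the zone, host name, and the two load balancer VIP addresses are all hypothetical:

```powershell
# Two A records for the same name, one per datacenter load balancer VIP.
# DNS round-robin rotates the order of addresses returned to clients,
# distributing traffic across both datacenters in the pair.
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "mail" `
    -IPv4Address "10.1.0.50"   # VIP of the load balancer in datacenter 1
Add-DnsServerResourceRecordA -ZoneName "contoso.com" -Name "mail" `
    -IPv4Address "10.2.0.50"   # VIP of the load balancer in datacenter 2
```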
In the event that you have multiple site resilient datacenter pairs in your environment, you will need to decide if you want to have a single worldwide namespace, or if you want to control the traffic to each specific datacenter pair by using regional namespaces. Ultimately your decision depends on your network topology and the associated cost with using an unbound model; for example, if you have datacenters located in North America and Europe, the network link between these regions might not only be costly, but it might also have high latency, which can introduce user pain and operational issues. In that case, it makes sense to deploy a bound model with a separate namespace for each region.
To achieve a highly available and site resilient architecture, you must have two or more datacenters that are well-connected (ideally, you want a low round-trip network latency, otherwise replication and the client experience are adversely affected). In addition, the datacenters should be connected via redundant network paths supplied by different operating carriers.
While we support stretching an Active Directory site across multiple datacenters, for the PA we recommend having each datacenter be its own Active Directory site. There are two reasons:
In the PA, all servers are physical, multi-role servers. Physical hardware is deployed rather than virtualized hardware for two reasons:
By deploying multi-role servers, the architecture is simplified as all servers have the same hardware, installation process, and configuration options. Consistency across servers also simplifies administration. Multi-role servers provide more efficient use of server resources by distributing the Client Access and Mailbox resources across a larger pool of servers. Client Access and Database Availability Group (DAG) resiliency is also increased, as there are more servers available for the load-balanced pool and for the DAG.
Commodity server platforms (e.g., 2U servers that hold 12 large form-factor drive bays within the server chassis) are used in the PA. Additional drive bays can be deployed per-server depending on the number of mailboxes, mailbox size, and the server’s scalability.
Each server houses a single RAID1 disk pair for the operating system, Exchange binaries, protocol/client logs, and transport database. The rest of the storage is configured as JBOD, using large capacity 7.2K RPM serially attached SCSI (SAS) disks (while SATA disks are also available, the SAS equivalent provides better IO and a lower annualized failure rate). BitLocker is used to encrypt each disk, thereby providing data encryption at rest and mitigating concerns around data theft via disk replacement.
To ensure that the capacity and IO of each disk is used as efficiently as possible, four database copies are deployed per-disk. The normal run-time copy layout (calculated in the Exchange 2013 Server Role Requirements Calculator) ensures that there is no more than a single copy activated per-disk.
Figure 2: Server Design
At least one disk in the disk pool is reserved as a hot spare. AutoReseed is enabled and quickly restores database redundancy after a disk failure by activating the hot spare and initiating database copy reseeds.
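AutoReseed is configured at the DAG level. A sketch of the relevant settings, matching the four-copies-per-disk layout described above (the DAG name and folder paths are assumptions, not prescriptions):

```powershell
# Mount points for the disk pool and the databases, plus copies per volume.
# AutoReseed uses these paths to map a spare volume in after a disk failure.
Set-DatabaseAvailabilityGroup -Identity DAG1 `
    -AutoDagVolumesRootFolderPath   "C:\ExchangeVolumes" `
    -AutoDagDatabasesRootFolderPath "C:\ExchangeDatabases" `
    -AutoDagDatabaseCopiesPerVolume 4
```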
Within each site resilient datacenter pair you will have one or more DAGs.
As with the namespace model, each DAG within the site resilient datacenter pair operates in an unbound model with active copies distributed equally across all servers in the DAG. This model provides two benefits:
Each datacenter is symmetrical, with an equal number of member servers within a DAG residing in each datacenter. This means that each DAG contains an even number of servers and uses a witness server for quorum arbitration.
The DAG is the fundamental building block in Exchange 2013. With respect to DAG size, a larger DAG provides more redundancy and resources. Within the PA, the goal is to deploy larger DAGs (typically starting out with an eight-member DAG and adding servers as needed) and only create new DAGs when scalability introduces concerns over the existing database copy layout.
Since the introduction of continuous replication in Exchange 2007, Exchange has recommended multiple replication networks for separating client traffic from replication traffic. Deploying two networks allows you to isolate certain traffic along different network pathways and ensure that during certain events (e.g., reseed events) the network interface is not saturated (which is an issue with 100Mb, and to a certain extent, 1Gb interfaces). However, for most customers, having two networks operating in this manner was only a logical separation, as the same copper fabric was used by both networks in the underlying network architecture.
With 10Gb networks becoming the standard, the PA moves away from the previous guidance of separating client traffic from replication traffic. A single network interface is all that is needed because ultimately our goal is to achieve a standard recovery model despite the failure - whether a server failure occurs or a network failure occurs, the result is the same, a database copy is activated on another server within the DAG. This architectural change simplifies the network stack, and obviates the need to eliminate heartbeat cross-talk.
Ultimately, the placement of the witness server determines whether the architecture can provide automatic datacenter failover capabilities or whether it will require a manual activation to enable service in the event of a site failure.
If your organization has a third location with a network infrastructure that is isolated from network failures that affect the site resilient datacenter pair in which the DAG is deployed, then the recommendation is to deploy the DAG’s witness server in that third location. This configuration gives the DAG the ability to automatically failover databases to the other datacenter in response to a datacenter-level failure event, regardless of which datacenter has the outage.
Figure 3: DAG (Three Datacenter) Design
If your organization does not have a third location, then place the witness server in one of the datacenters within the site resilient datacenter pair. If you have multiple DAGs within the site resilient datacenter pair, then place the witness server for all DAGs in the same datacenter (typically the datacenter where the majority of the users are physically located). Also, make sure the Primary Active Manager (PAM) for each DAG is also located in the same datacenter.
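Configuring the witness as described is a one-liner per DAG. A sketch, where the DAG, file server, and directory names are illustrative:

```powershell
# Place the witness in the datacenter hosting the majority of users.
Set-DatabaseAvailabilityGroup -Identity DAG1 `
    -WitnessServer "fs01.contoso.com" -WitnessDirectory "C:\DAG1\Witness"

# Confirm the witness settings and see which server currently holds the PAM.
Get-DatabaseAvailabilityGroup DAG1 -Status |
    Format-List Name, WitnessServer, PrimaryActiveManager
```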
Data resiliency is achieved by deploying multiple database copies. In the PA, database copies are distributed across the site resilient datacenter pair, thereby ensuring that mailbox data is protected from software, hardware and even datacenter failures.
Each database has four copies, with two copies in each datacenter, which means at a minimum, the PA requires four servers. Out of these four copies, three of them are configured as highly available. The fourth copy (the copy with the highest Activation Preference) is configured as a lagged database copy. Due to the server design, each copy of a database is isolated from its other copies, thereby reducing failure domains and increasing the overall availability of the solution as discussed in DAG: Beyond the “A”.
The purpose of the lagged database copy is to provide a recovery mechanism for the rare event of system-wide, catastrophic logical corruption. It is not intended for individual mailbox recovery or mailbox item recovery.
The lagged database copy is configured with a seven day ReplayLagTime. In addition, the Replay Lag Manager is also enabled to provide dynamic log file play down for lagged copies. This feature ensures that the lagged database copy can be automatically played down and made highly available in the following scenarios:
When using the lagged database copy in this manner, it is important to understand that the lagged database copy is not a guaranteed point-in-time backup. The lagged database copy will have an availability threshold, typically around 90%, due to periods where the disk containing the lagged copy is lost to disk failure, periods where the lagged copy becomes an HA copy (due to automatic play down), and periods where the lagged database copy is rebuilding its replay queue.
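For reference, a lagged copy along the lines described above might be created and play down enabled as follows; the database, server, and DAG names are hypothetical:

```powershell
# Fourth copy of the database: lagged seven days, least preferred for activation.
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer EX04 `
    -ReplayLagTime 7.00:00:00 -ActivationPreference 4

# Let Replay Lag Manager play the lag down automatically when redundancy drops.
Set-DatabaseAvailabilityGroup -Identity DAG1 -ReplayLagManagerEnabled $true
```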
To protect against accidental (or malicious) item deletion, Single Item Recovery or In-Place Hold technologies are used, and the Deleted Item Retention window is set to a value that meets or exceeds any defined item-level recovery SLA.
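As an illustration, enabling Single Item Recovery and aligning the Deleted Item Retention window with, say, a 30-day item-level recovery SLA might look like this (the mailbox identity and the 30-day window are assumptions for the example):

```powershell
# Keep deleted items recoverable for 30 days and enable Single Item Recovery,
# so items purged by the user remain recoverable for the full window.
Set-Mailbox -Identity "kim@contoso.com" `
    -SingleItemRecoveryEnabled $true -RetainDeletedItemsFor 30
```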
With all of these technologies in play, traditional backups are unnecessary; as a result, the PA leverages Exchange Native Data Protection.
The PA takes advantage of the changes made in Exchange 2013 to simplify your Exchange deployment, without decreasing the availability or the resiliency of the deployment. And in some scenarios, when compared to previous generations, the PA increases availability and resiliency of your deployment.
Last week we released Exchange Server 2010 Service Pack 1. It has received some great feedback and reviews from customers, experts, analysts, and the Exchange community.
The starting point for SP1 setup/upgrade should be the What's New in SP1, SP1 Release Notes, and Prerequisites docs. As with any new release, there are some frequently asked deployment questions, and known issues, or issues reported by some customers. You may not face these in your environment, but we're posting these here along with some workarounds so you're aware of them as you test and deploy SP1.
The order of upgrade from Exchange 2010 RTM to SP1 hasn’t changed from what was done in Exchange 2007. Upgrade server roles in the following order:
The Edge Transport server role can be upgraded at any time; however, we recommend upgrading Edge Transport either before or after all other server roles have been upgraded. For more details, see Upgrade from Exchange 2010 RTM to Exchange 2010 SP1 in the documentation.
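During a rolling upgrade it helps to keep track of which servers are already on SP1. A quick, read-only inventory along these lines shows each server's roles and build number:

```powershell
# List every Exchange server with its roles, site, and version/build number.
# Makes no changes; SP1 servers show a higher AdminDisplayVersion build.
Get-ExchangeServer |
    Sort-Object AdminDisplayVersion |
    Format-Table Name, ServerRole, Site, AdminDisplayVersion -AutoSize
```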
Note: Due to the shared code base for these updates, Windows Server 2008 and Windows Vista share the same updates. Similarly, Windows Server 2008 R2 and Windows 7 share the same updates. Make sure you select the x64 versions of each update to be installed on your Exchange 2010 servers.
Update 2/11/2011: Windows 2008 R2 SP1 includes all the required hotfixes listed in this table — 979744, 983440, 979099, 982867 and 977020. If you're installing Exchange 2010 SP1 on a server running Windows 2008 R2 SP1, you don't need to install these hotfixes separately. For a complete list of all updates included in Windows 2008 R2 SP1, see Updates in Win7 and WS08R2 SP1.xls.
Here’s a matrix of the updates required, including download locations and file names.
KB979099 — An update is available to remove the application manifest expiry feature from AD RMS clients
KB977020 — x64: Windows6.1-KB977020-v2-x64.msu
(Depending on the hotfix, the source is either Microsoft Connect or a request from CSS; see the corresponding KB article for the download location.)
Some of these hotfixes will eventually be rolled up into a Windows update or service pack. Because the Exchange team released SP1 earlier than originally planned and announced, the release did not align with some of this work on the Windows platform. As a result, some hotfixes are available from MSDN/Connect, and some require that you request them online using the links in the corresponding KB articles. The administrator experience when initially downloading these hotfixes may be a little odd. However, once you download the hotfixes, and receive two of them from CSS, you can reuse them for subsequent installs on other servers. In due course, all of these updates may become available on the Download Center, and also through Windows Update.
These hotfixes have been tested extensively as part of Exchange 2010 SP1 deployments within Microsoft and by our TAP customers. They are fully supported by Microsoft.
When installing Exchange Server 2010 SP1, the prerequisite check may turn up some required hotfixes to install. The message includes a link to click for help, but clicking this link currently redirects you to a page saying that the content does not exist.
We're working to update the linked content.
Meanwhile, please refer to the TechNet article Exchange 2010 Prerequisites to download and install the prerequisites required for your server version (the hotfixes are linked to in the above table, but you'll still need to install the usual prerequisites such as .Net Framework 3.5 SP1, Windows Remote Management (WinRM) 2.0, and the required OS components).
Some customers have reported that after upgrading an Exchange Server 2010 server to Exchange 2010 SP1, the Exchange Management Shell shortcut is missing from program options. Additionally, the .ps1 script files associated with the EMS may also be missing.
We’re actively investigating this issue. Meanwhile, here’s a workaround:
NOTE: If these files are missing, you can copy the files from the Exchange Server 2010 Service Pack 1 installation media to the %ExchangeInstallPath%\bin directory. These files are present in the \setup\serverroles\common folder.
Note: if the Exchange installation folder or drive name is different than the default, you need to change the path accordingly.
If you upgrade a server with the Edge Transport server role running with Forefront Threat Management Gateway (TMG) and Forefront Protection for Exchange (FPE) enabled for SMTP protection, the Forefront TMG Managed Control Service may fail to start and e-mail policy configuration settings cannot be applied.
The TMG team is working on this issue. See Problems when installing Exchange 2010 Service Pack 1 on a TMG configured for Mail protection on the Forefront TMG (ISA) Team Blog. The Exchange 2010 SP1 Release Notes have been updated with the above information.
Update: The Forefront TMG product team has released a software update to address this issue. See Software Update 1 for Microsoft Forefront Threat Management Gateway (TMG) 2010 Service Pack 1 now available for download.
The location for setting the port the Address Book service should use has changed in SP1. In Exchange 2010 RTM you had to edit the Microsoft.exchange.addressbook.service.exe.config file to configure the service port. In SP1 you must use the following registry value:
Path: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchangeAB\Parameters
Value name: RpcTcpPort
Type: REG_SZ (String)
When you apply SP1 to a machine where you had previously configured a static port by editing the Microsoft.exchange.addressbook.service.exe.config file, the upgrade process will not carry forward your static port assignments. Following a restart, the Address Book Service will revert to using a dynamic port instead of a static port specified in the config file. This may cause interruptions in service.
As with all upgrades where servers are in load-balanced pools, we recommend you perform a rolling upgrade: remove servers from the pool, update them, and then move the pool to the newly upgraded machines. Alternatively, we recommend that you upgrade an array of servers by draining connections from any one machine before you upgrade it.
There are times when these approaches may not be possible. You can maintain your static port configuration, and have it take effect the moment the address book service starts for the first time following the application of the service pack, by creating the registry key BEFORE you apply SP1 to your server. The registry key has no impact pre SP1, and so by configuring it before you apply the Service Pack you can avoid the need to make changes to set the port post install, and avoid any service interruptions.
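Creating the value ahead of the upgrade can be scripted. A sketch, assuming a static port of 59600 (the port number is an example, not a recommendation):

```powershell
$path = "HKLM:\SYSTEM\CurrentControlSet\services\MSExchangeAB\Parameters"

# Create the Parameters key if it doesn't exist, then the RpcTcpPort string value.
# Safe to run pre-SP1; the value only takes effect once SP1 is installed.
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
New-ItemProperty -Path $path -Name "RpcTcpPort" `
    -PropertyType String -Value "59600" -Force | Out-Null
```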
After applying E2010 SP1:
iPhone users may not be able to view the content of incoming messages in their Inboxes, and when they try to open a message, they get an error saying:
Admins may see the following event logged in the Application Event Log on Exchange 2010 CAS Server:
Watson report about to be sent for process id: 1234, with parameters: E12, c-RTL-AMD64, 14.01.0218.011, AirSync, MSExchange ActiveSync, Microsoft.Exchange.Data.Storage.InboundConversionOptions.CheckImceaDomain, UnexpectedCondition:ArgumentException, 4321, 14.01.0218.015.
OWA Premium users may not be able to reply or forward a message. They may see the following error in OWA:
An unexpected error occurred and your request couldn't be handled. Exception type: System.ArgumentException, Exception message: imceaDomain must be a valid domain name.
POP3 & IMAP4 users may also not be able to retrieve incoming mail and Admins will see the following event logged in Event Log:
ERR Server Unavailable. 21; RpcC=6; Excpt=imceaDomain must be a valid domain name.
Please run the following command in the Exchange Management Shell and verify that there is one domain marked as ‘Default’ and that its DomainName and Name values are valid domain names. We were able to reproduce the issue by setting a domain name with a space in it, like "aa bb".
Get-AcceptedDomain | fl
If you also have an invalid domain name there (for example, a domain name with a space in it), then removing the space and restarting the server will fix the EAS (iPhone), OWA, POP3 & IMAP4 issues as mentioned above.
Command to run under EMS would be:
Set-AcceptedDomain –Identity <AcceptedDomainIdentity> -Name “ValidSMTPDomainName”
These examples update the Name parameter of the "My Company" and "ABC Local" accepted domains (the space is removed from both):
Set-AcceptedDomain –Identity “My Company” –Name “MyCompany.Com”
Set-AcceptedDomain –Identity “ABC Local” –Name “ABC.Local”
If a server running Exchange 2010 RTM (or Exchange 2010 SP1 Beta) is upgraded to Exchange 2010 SP1, administrators may experience an error when using the Add-MailboxDatabaseCopy or Remove-MailboxDatabaseCopy cmdlets to add or remove mailbox database copies.
When you try to add a DAG member, you may see the following error:
Add-MailboxDatabaseCopy DAG-DB0 -MailboxServer DAG-2
The result:
WARNING: An unexpected error has occurred and a Watson dump is being generated: Registry key has subkeys and recursive removes are not supported by this method.
Registry key has subkeys and recursive removes are not supported by this method.
    + CategoryInfo          : NotSpecified: (:) [Add-MailboxDatabaseCopy], InvalidOperationException
    + FullyQualifiedErrorId : System.InvalidOperationException,Microsoft.Exchange.Management.SystemConfigurationTasks.AddMailboxDatabaseCopy
The command is not successful in adding the copy or updating Active Directory to show the copy was added. This happens due to presence of the DumpsterInfo registry key.
Workaround: Delete the DumpsterInfo key, as shown below.
Identify the GUID of the database that is being added using this command:
Get-MailboxDatabase DAG-DB0 | fl name,GUID
Name : DAG-DB0
Guid : 8d3a9778-851c-40a4-91af-65a2c487b4cc
On the server specified in the add command, using the database GUID identified, remove the following registry key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v14\Replay\State\<db-guid>\DumpsterInfo
The GUID identified in this case is 8d3a9778-851c-40a4-91af-65a2c487b4cc. With this information you can now export and delete the DumpsterInfo key on the server where you are attempting to add the mailbox database copy. This can be easily done using the registry editor, but if you have more than a handful of DAG members, this is best automated using the Shell.
This example removes the DumpsterInfo key from the 8d3a9778-851c-40a4-91af-65a2c487b4cc key:
Remove-Item HKLM:\Software\Microsoft\ExchangeServer\V14\Replay\State\8d3a9778-851c-40a4-91af-65a2c487b4cc\DumpsterInfo
To automate this across all servers in your organization, use the DeleteDumpsterRegKey.ps1 script.
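If you can't locate that script, something along these lines could serve as a starting point. This is a hedged sketch, not the official script: it assumes PowerShell remoting is enabled on each Mailbox server, and that the GUID is the one returned by Get-MailboxDatabase as shown earlier:

```
# Sketch only: remove the DumpsterInfo key for one database GUID on every Mailbox server.
# Assumes PowerShell remoting is enabled; test in a lab before using in production.
$dbGuid = "8d3a9778-851c-40a4-91af-65a2c487b4cc"
$keyPath = "HKLM:\SOFTWARE\Microsoft\ExchangeServer\v14\Replay\State\$dbGuid\DumpsterInfo"

Get-MailboxServer | ForEach-Object {
    Invoke-Command -ComputerName $_.Name -ScriptBlock {
        param($path)
        if (Test-Path $path) { Remove-Item $path -Recurse }
    } -ArgumentList $keyPath
}
```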
For more info, see Tim McMichael’s blog post Exchange 2010 SP1: Error when adding or removing a mailbox database copy.
Thanks to all the folks in CSS and Exchange teams who helped identify, validate and provide workarounds for some of the issues mentioned above, and to the Exchange community and MVPs for their feedback.
Bharat Suneja, Nino Bilic, M. Amir Haque, Greg Taylor, & Tim McMichael
Over the years Exchange Server architecture has gone through a number of changes. As a product matures over time you may see us change what is supported as we react to changes in the product architecture, the state of technology as a whole, or major support issues we see come in through our support infrastructure.
Over the years, a large volume of support calls have turned out to be caused by communication issues between Exchange servers, or between Exchange servers and domain controllers. Often this results from a network device between the servers not allowing some port or protocol through to the other servers.
I tried to get Harrison Ford to co-write this article with me given his specific talents, but alas he was busy and regretfully couldn’t partake. Please allow me to start with the short version up front, so there is no confusion about what we currently DO and DO NOT support, before I lose some of you to TL;DR.
Image Courtesy of: http://knowyourmeme.com/memes/tldr
For Exchange Server 2010 this is already articulated at http://technet.microsoft.com/en-us/library/bb331973(v=EXCHG.141).aspx, under Client Access Server Connectivity in the Client Access Server section, in the following paragraph:
“In addition to having a Client Access server in every Active Directory site that contains a Mailbox server, it’s important to avoid restricting traffic between Exchange servers. Make sure that all defined ports that are used by Exchange are open in both directions between all source and destination servers. The installation of a firewall between Exchange servers or between an Exchange 2010 Mailbox or Client Access server and Active Directory isn’t supported. However, you can install a network device if traffic isn’t restricted and all available ports are open between the various Exchange servers and Active Directory.”
Why has this seemingly simple support statement become so muddied and confusing over the years? Maybe we didn’t make it blunt enough to start with, but there could be some other compounding points adding to the confusion.
Exchange Server 2003 was the last version of Exchange Server to allow deploying (at the time) a Front-End server in a perimeter network (aka DMZ) while locating the Back-End server in the intranet. While this could be made to work, it required a specialized set of rules that essentially turned your perimeter network security model into the following:
Image Courtesy of: The Internet
During the time of Exchange Server 2003 adoption of reverse proxies within perimeter networks was on the rise. Reverse proxies allowed customers to more securely publish Exchange Server for remote access while only allowing a single port and protocol to traverse from the Internet to the perimeter network, and then a single port and protocol to traverse from the perimeter network to the intranet.
You could go from something complicated like this with endless port and protocol requirements….
Figure 1: Legacy Exchange 2003 deployment with Front-End server in a perimeter network. What a mess. Who’s hungry?
To something simple like this…
Figure 2: Reverse proxy in the perimeter network and all Exchange servers within the intranet. Simplicity at its best!
The resulting increase in simplicity, as well as the drop in support cases, was strong enough for Microsoft to determine, during the lifecycle of the next major version of Exchange Server (2007), that we would no longer support deploying what is now the Client Access server role in a perimeter network. From that time on, all Exchange server roles except the Edge Transport server role were to be deployed on the intranet with unfettered access to each other. This is documented in http://technet.microsoft.com/en-us/library/bb232184.aspx.
TechNet includes a number of articles that document many if not all of the ports and protocols Exchange Server utilizes during the course of normal operation. These documents are often misunderstood as “configure your firewall this way” articles. The information is only being provided for educational purposes on the inner-workings of Exchange Server, or to aid with the configuration of load balancing or service monitoring mechanisms which often require specific port/protocol definitions to perform their functions correctly. In case you come across them in the future, here is a list of most of those articles.
Exchange Network Port References
Understanding Protocols, Ports, and Services in Unified Messaging
This is a different story, and yes, there are things you can do here to remain supported. Exchange Server has, for a number of releases, supported configuring static client communication ports for Windows-based Outlook clients. After the client contacts the endpoint mapper service running on Windows on TCP port 135, it is handed back the static TCP port you have chosen to use in your environment. For Exchange Server 2010, you may be familiar with the following article describing how to configure static client communication ports for the Address Book Service and the RPC Client Access Service, leaving you with four ports required for clients to operate in MAPI/RPC mode.
http://social.technet.microsoft.com/wiki/contents/articles/864.configure-static-rpc-ports-on-an-exchange-2010-client-access-server.aspx
TechNet also has resources for versions prior to Exchange Server 2010: http://support.microsoft.com/kb/270836.
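For context, the static ports described in those articles are set in the registry on each server. The sketch below shows the general shape of that configuration for Exchange 2010 SP1 and later, using the same example ports as the diagrams that follow; treat the key paths and value names as something to verify against the wiki article before applying, and restart the affected services afterward:

```
# Sketch – verify key paths and value names against the article above before use.
# RPC Client Access service: static port as a REG_DWORD named "TCP/IP Port".
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem" `
    -Name "TCP/IP Port" -Value 59531 -Type DWord

# Address Book service: static port as a REG_SZ named "RpcTcpPort".
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeAB\Parameters" `
    -Name "RpcTcpPort" -Value "59532" -Type String
```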
Starting in Exchange Server 2013, the only protocol supported for Windows Outlook clients is RPC over HTTPS. This architectural change reduces your required port count to one: TCP 443 for HTTPS, utilized by Autodiscover, Exchange Web Services, and RPC over HTTPS (aka Outlook Anywhere). This is going to make your life easy, but don’t tell your boss as they’ll expect you to start doing other things as well. It’ll be our secret. Promise.

I’ll go through some examples of supported deployments, but will keep it easy and only use Outlook clients. The same ideas apply to POP/IMAP/EAS clients as well; just don’t restrict Exchange servers from talking to each other. A setup like the following Outlook 2010 / Exchange 2010 diagram would be entirely supported, where we have a firewall between the clients and the servers. In all of the following examples I have chosen static TCP port 59531 for my RPC Client Access Service on CAS and Mailbox, and static TCP port 59532 for my Address Book Service on CAS. UDP notifications are also thrown in for fun for those of you running Outlook 2003 in Online Mode, which I hope is very few and far between these days. Domain controllers were left out of these diagrams to focus on communication directly between clients and Exchange, and load balancers were also kept out for simplicity.
Figure 3: Firewall between clients and all Exchange servers. Supported if firewall is configured correctly to allow all necessary client access. AD not shown.
However, if you attempted to do something naughty like the following diagram and, for reasons unknown to us, put a firewall between CAS and Mailbox, then there had better be ANY/ANY rules in place allowing conversations to originate from either side between the Exchange servers.
Figure 4: Firewall between CAS and other Exchange servers. Supported only if the firewall is configured for unfettered access between Exchange servers, and Exchange servers and AD. AD not shown.
Well, what if you have multiple datacenters with Exchange and want to firewall everything everywhere, because you believe that as the number of firewalls goes up your security must exponentially increase? We’ve got you covered there too: deploy it like this, where you’ll see both MAPI/RPC and RPC/HTTPS user examples. I didn’t bother putting load balancers or Domain Controllers into any of these diagrams, by the way. I’m putting faith in all of you that you know where those go.
Figure 5: Firewalls between users and Exchange servers as well as between datacenters. Supported if the firewalls are configured to allow unfettered access between Exchange servers, between Exchange servers and AD, and appropriate client rules. AD not shown.
Boy this is going to be easy when all of you migrate to Exchange Server 2013 and are only dealing with RPC/HTTPS connections from clients and SMTP or HTTPS between servers. Except for maybe those pesky POP/IMAP/UM clients…
Figure 6 below depicts what Exchange 2013 network conversations may look like at a high level. A load balancer and an additional CAS were introduced to show that we don’t care which CAS a client’s traffic goes through, as it all ends up at the same Mailbox server where the user’s database is mounted. You may have read previously that Exchange Server 2013 does not require session affinity for client traffic, and hopefully this visual helps show why.
The one tricky bit to consider, if placing a firewall between clients and Exchange Server 2013, would be UM traffic, as it is not all client-to-CAS in nature. In Exchange Server 2013, a telephony device first makes a SIP connection through CAS (orange arrows); after speaking with the UM service on the Mailbox server, CAS redirects the client so it can establish a direct SIP+RTP session (blue arrow) to the Mailbox server holding the user’s active database copy.
Figure 6: Showing at a high level SMTP, Windows Outlook Client, and UM traffic with a firewall between users and Exchange Server 2013.
The key here is to not block traffic between Exchange servers, or between Exchange servers and Domain Controllers. As long as no traffic blocking is performed between these servers, you will be in a fully supported deployment and will not have to waste time with our support staff proving you really do have all necessary communications open before you can start to troubleshoot an issue. We know many customers will continue to test the boundaries of supportability regardless, but be aware this may drag out your troubleshooting experience and possibly extend an active outage. We prefer to help our customers resolve any and all issues as fast as possible. Staying within support guidelines does in fact help us help you as expeditiously as possible, and in the end will save you time, support costs, labor costs, and last but not least, aggravation.
Brian Day
Program Manager, Exchange Customer Experience
Over the last several months there has been significant chatter around what background database maintenance is and why it is important for Exchange 2010 databases. Hopefully this article will answer those questions.
The following tasks need to be routinely performed against Exchange databases: database compaction, database defragmentation, database checksumming, page patching, and database page zeroing. Each of these is discussed below.
The primary purpose of database compaction is to free up unused space within the database file (however, it should be noted that this does not return that unused space to the file system). The intention is to free up pages in the database by compacting records onto the fewest number of pages possible, thus reducing the amount of I/O necessary. The ESE database engine does this by taking the database metadata, which is the information within the database that describes tables in the database, and for each table, visiting each page in the table, and attempting to move records onto logically ordered pages.
Maintaining a lean database file footprint is important for several reasons, including the following:
Prior to Exchange 2010, database compaction operations were performed during the online maintenance window. This process produced random IO as it walked the database and re-ordered records across pages. In previous versions this process was, in a sense, too good – by freeing up database pages and re-ordering the records, it left the pages in a random physical order. Coupled with the store schema architecture, this meant that any request to pull a set of data (like downloading the items within a folder) always resulted in random IO.
In Exchange 2010, database compaction was redesigned such that contiguity is preferred over space compaction. In addition, database compaction was moved out of the online maintenance window and is now a background process that runs continuously.
Database defragmentation is new to Exchange 2010 and is also referred to as OLD v2 and B+ tree defragmentation. Its function is to compact as well as defragment (make sequential) database tables that have been marked/hinted as sequential. Database defragmentation is important to maintain efficient utilization of disk resources over time (make the IO more sequential as opposed to random) as well as to maintain the compactness of tables marked as sequential.
You can think of the database defragmentation process as a monitor that watches other database page operations to determine whether there is work to do. It monitors all tables for free pages, and if a table reaches a threshold where a sufficiently high percentage of the total B+ tree page count is free, it gives the free pages back to the root. It also works to maintain contiguity within tables created with sequential space hints (tables created with a known sequential usage pattern). If database defragmentation sees a scan/pre-read on a sequential table and the records are not stored on sequential pages within the table, the process will defragment that section of the table by moving all of the impacted pages to a new extent in the B+ tree. You can use the performance counters (mentioned in the monitoring section) to see how little work database defragmentation performs once a steady state is reached.
Database defragmentation is a background process that analyzes the database continuously as operations are performed, and then triggers asynchronous work when necessary. Database defragmentation is throttled under two scenarios:
Database checksumming (also known as Online Database Scanning) is the process where the database is read in large chunks and each page is checksummed (checked for physical page corruption). Checksumming’s primary purpose is to detect physical corruption and lost flushes that may not be getting detected by transactional operations (stale pages).
With Exchange 2007 RTM and all previous versions, checksumming operations happened during the backup process. This posed a problem for replicated databases, as the only copy to be checksummed was the copy being backed up. For the scenario where the passive copy was being backed up, this meant that the active copy was not being checksummed. So in Exchange 2007 SP1, we introduced a new optional online maintenance task, Online Maintenance Checksum (for more information, see Exchange 2007 SP1 ESE Changes – Part 2).
In Exchange 2010, database scanning checksums the database and performs post-crash operations for the Exchange 2010 Store. Space can be leaked due to crashes, and online database scanning finds and recovers that lost space. Database checksumming reads approximately 5 MB per second for each actively scanning database (both active and passive copies) using 256 KB IOs. The IO is 100 percent sequential. The system in Exchange 2010 is designed with the expectation that every database is fully scanned once every seven days.
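Those two numbers imply a practical ceiling on database size. A quick back-of-the-envelope calculation in the Shell (nothing Exchange-specific here, just arithmetic on the figures above):

```
# At ~5 MB/s, how much database can a checksum pass cover in seven days?
$scanRateMBperSec = 5
$secondsInSevenDays = 7 * 24 * 60 * 60                 # 604,800 seconds
$mbPerPass = $scanRateMBperSec * $secondsInSevenDays   # 3,024,000 MB
$tbPerPass = $mbPerPass / 1MB                          # 1MB = 1,048,576, so MB -> TB
"{0:N2} TB" -f $tbPerPass                              # roughly 2.88 TB
```

In other words, a database much larger than roughly 2.8 TB cannot complete a full pass within the seven-day expectation at that scan rate.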
If the scan takes longer than seven days, an event is recorded in the Application Log:
Event ID: 733
Event Type: Information
Event Source: ESE
Description: Information Store (15964) MDB01: Online Maintenance Database Checksumming background task is NOT finishing on time for database 'd:\mdb\mdb01.edb'. This pass started on 11/10/2011 and has been running for 604800 seconds (over 7 days) so far.
If it takes longer than seven days to complete the scan on the active database copy, the following entry will be recorded in the Application Log once the scan has completed:
Event ID: 735
Event Type: Information
Event Source: ESE
Description: Information Store (15964) MDB01 Database Maintenance has completed a full pass on database 'd:\mdb\mdb01.edb'. This pass started on 11/10/2011 and ran for a total of 777600 seconds. This database maintenance task exceeded the 7 day maintenance completion threshold. One or more of the following actions should be taken: increase the IO performance/throughput of the volume hosting the database, reduce the database size, and/or reduce non-database maintenance IO.
In addition, an in-flight warning will also be recorded in the Application Log when it takes longer than 7 days to complete.
In Exchange 2010, there are now two modes for running database checksumming on active database copies: as a background task that runs continuously (the default), or as a scheduled task that runs during the online maintenance window.
Regardless of the database size, our recommendation is to leverage the default behavior and not configure database checksum operations against the active database as a scheduled process (i.e., don’t configure it as a process within the online maintenance window).
For passive database copies, database checksums occur during runtime, continuously operating in the background.
Page patching is the process where corrupt pages are replaced by healthy copies. As mentioned previously, corrupt page detection is a function of database checksumming (in addition, corrupt pages are also detected at run time when the page is stored in the database cache). Page patching works against highly available (HA) database copies. How a corrupt page is repaired depends on whether the HA database copy is active or passive.
Page patching process
Database Page Zeroing is the process where deleted pages in the database are written over with a pattern (zeroed) as a security measure, which makes discovering the data much more difficult.
With Exchange 2007 RTM and all previous versions, page zeroing operations happened during the streaming backup process and, because they occurred during the backup, were not logged operations (that is, page zeroing did not generate log files). This posed a problem for replicated databases, as the passive copies never had their pages zeroed, and the active copies only had their pages zeroed if you performed a streaming backup. So in Exchange 2007 SP1, we introduced a new optional online maintenance task, Zero Database Pages during Checksum (for more information, see Exchange 2007 SP1 ESE Changes – Part 2). When enabled, this task would zero out pages during the online maintenance window, logging the changes so they would be replicated to the passive copies.
With the Exchange 2007 SP1 implementation, there is a significant lag between when a page is deleted and when it is zeroed, because the zeroing process occurs during a scheduled maintenance window. So in Exchange 2010 SP1, the page zeroing task is now a runtime event that operates continuously, typically zeroing out pages at transaction time when a hard delete occurs.
In addition, database pages can also be scrubbed during the online checksum process. The pages targeted in this case are:
For more information on page zeroing in Exchange 2010, see Understanding Exchange 2010 Page Zeroing.
Requiring a scheduled maintenance window for page zeroing, database defragmentation, database compaction, and online checksum operations poses significant problems, including the following:
Due to the aforementioned issues, it was critical in Exchange 2010 that the database maintenance tasks be moved out of a scheduled process and be performed during runtime continuously in the background.
We’ve designed these background tasks such that they're automatically throttled based on activity occurring against the database. In addition, our sizing guidance around message profiles takes these maintenance tasks into account.
In previous versions of Exchange, events in the Application Log would be used to monitor things like online defragmentation. In Exchange 2010, there are no longer any events recorded for the defragmentation and compaction maintenance tasks. However, you can use performance counters to track the background maintenance tasks under the MSExchange Database ==> Instances object:
You'll find the following page zeroing counters under the MSExchange Database object:
You will need to dismount the database and use ESEUTIL /MS to check the available whitespace in a database. For an example, see http://technet.microsoft.com/en-us/library/aa996139(v=EXCHG.65).aspx (note that you have to multiply the number of pages by 32K).
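As a worked example of that 32K-page math (the page count here is hypothetical):

```
# Hypothetical: ESEUTIL /MS reports 12,800 free pages for this database.
$freePages = 12800
$whitespaceMB = ($freePages * 32KB) / 1MB    # 32 KB per page; 12,800 pages = 400 MB
```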
Note that there is a status property available on databases within Exchange 2010, but it should not be used to determine the amount of total whitespace available within the database:
Get-MailboxDatabase MDB1 -Status | FL AvailableNewMailboxSpace
AvailableNewMailboxSpace tells you how much space is available in the root tree of the database. It does not factor in the free pages within mailbox tables, index tables, and so on, so it is not representative of the whitespace within the database.
Naturally, after seeing the available whitespace in the database, the question that always ensues is – how can I reclaim the whitespace?
Many assume the answer is to perform an offline defragmentation of the database using ESEUTIL. However, that's not our recommendation. When you perform an offline defragmentation, you create an entirely new database, and the operations performed to create it are not logged in transaction logs. The new database also has a new database signature, which means that you invalidate the database copies associated with that database.
In the event that you do encounter a database with significant whitespace that you don't expect normal operations to reclaim, our recommendation is to create a new database, move the mailboxes to it, and then remove the original database (and any of its copies).
Much of the confusion lies in the term background database maintenance. Collectively, all of the aforementioned tasks make up background database maintenance. However, the Shell, EMC, and JetStress all refer to database checksumming as background database maintenance, and that's what you're configuring when you enable or disable it using these tools.
Figure 1: Enabling background database maintenance for a database using EMC
Enabling background database maintenance using the Shell:
Set-MailboxDatabase -Identity MDB1 -BackgroundDatabaseMaintenance $true
Figure 2: Running background database maintenance as part of a JetStress test
Database checksumming can become an IO tax burden if the storage is not designed correctly (even though it's sequential) as it performs 256K read IOs and generates roughly 5MB/s per database.
As part of our storage guidance, we recommend you configure your storage array stripe size (the size of stripes written to each disk in an array; also referred to as block size) to be 256KB or larger.
It's also important to test your storage with JetStress and ensure that the database checksum operation is included in the test pass.
In the end, if a JetStress execution fails due to database checksumming, you have a few options: improve the IO throughput of the storage, reduce the database size, or configure checksumming on the active copies to run as a scheduled task during the maintenance window instead of continuously.
If you choose to run checksumming in the maintenance window, we do recommend smaller database sizes. Also keep in mind that the passive copies will still perform database checksumming as a background process, so you still need to account for this throughput in your storage architecture. For more information on this subject, see Jetstress 2010 and Background Database Maintenance.
The architectural changes to the database engine in Exchange Server 2010 dramatically improve its performance and robustness, but they change the behavior of database maintenance tasks from previous versions. Hopefully this article has helped your understanding of what background database maintenance is in Exchange 2010.