The Past: Exchange 2007 and Exchange 2010

In 2005, we set out developing Exchange 2007. During the preliminary planning of Exchange 2007, we realized that the server architecture model had to evolve to deal with hardware constraints, like CPU, that existed at the time and would exist for a good portion of the lifecycle of the product. The architectural change resulted in five server roles, three of which were core to the product:

  1. Edge Transport for routing and anti-malware from the edge of the organization
  2. Hub Transport for internal routing and policy enforcement
  3. Mailbox for storage of data
  4. Client Access for client connectivity and web services
  5. Unified Messaging for voice mail and voice access

Figure 1: Server role architecture in Exchange 2007 & Exchange 2010

Our goal was to make these server roles autonomous units. Unfortunately, that goal did not materialize in either Exchange 2007 or Exchange 2010. The server roles were tightly coupled along four key dimensions:

  1. Functionality was scattered across all the server roles, requiring the Hub Transport and Client Access server roles to be deployed in every Active Directory site that contained Mailbox servers.
  2. This also necessitated tight versioning alignment – a down-level Hub Transport or Client Access server should not communicate with a higher-version Mailbox server. In some cases this was enforced (e.g., Exchange 2007 Hub Transport servers cannot deliver messages to Exchange 2010 Mailbox servers), but in others it was not (e.g., an Exchange 2010 SP1 Client Access server can render data from an Exchange 2010 SP2 Mailbox server). The versioning restriction also meant that you could not simply upgrade a single server role to take advantage of new functionality – you had to upgrade all of them.
  3. The server roles were also coupled from a geographical affinity perspective, because the middle-tier server roles used RPC as the communication mechanism to the Mailbox server role.
  4. Similarly, the server roles were tightly coupled from a user perspective – the users served by a given Mailbox server were always served by a given set of Client Access and Hub Transport servers.

The functionality and versioning aspects are the key issues. To understand them better, let’s look at the following diagram:

Figure 2: Inter-server communication across different layers of functionality

As you can see from the above diagram, there are three layers: 1) protocols, 2) business logic, and 3) storage. If you are familiar with the OSI model, you might expect the protocol layer to behave like the application layer, with data flowing from the application layer through the other layers to reach the physical layer (or vice versa). However, that is not the case in Exchange 2007 or Exchange 2010; a protocol can interact directly with the storage layer. For example, the transport instance on Server1 (Exchange 2010 SP1) can deliver mail to the Store service on Server2 (Exchange 2010 SP2). The reverse is also true: the store on Server2 can submit mail via its Store Submission service to the transport service running on Server1. In either scenario, content conversion happens on the lower-version server (because content conversion in this example happens within the transport components). While a newer version interacting with an older version may not be problematic, the same cannot be said of the reverse: the older version simply does not know about changes in the newer version that may break functionality (hence the blockers we put in place in certain circumstances, and the guidance we provide on upgrade procedures).

The end result is that in Exchange 2007 and Exchange 2010 deployments, the server roles are deployed as a single monolithic entity.

The Present: Exchange Server 2013

When we started our planning for Exchange Server 2013, we focused on a single architectural tenet – improve the architecture to serve the needs of deployments at all scales, from the small 100-user shop to hosting hundreds of millions of mailboxes in Office 365. This single tenet drove us to make major architectural changes and investments across the entire product. This shift is, in part, because unlike when we were designing Exchange 2007, we are no longer CPU bound; in fact, there is so much readily available CPU on modern server hardware that the notion of dedicated server roles no longer makes sense, as hardware is ultimately wasted (hence the recommendation in Exchange 2010 to deploy multi-role servers).

In Exchange Server 2013, we have two basic building blocks – the Client Access array and the Database Availability Group (DAG). Each provides a unit of high availability and fault tolerance that is decoupled from the other. Client Access servers make up the CAS array, while Mailbox servers comprise the DAG.

Figure 3: New server role architecture simplifies deployments

So what is the Client Access server in Exchange 2013? The Client Access server role consists of three components: client protocols, SMTP, and a UM Call Router. The CAS role is a thin server, stateless at the protocol session level, that is organized into a load-balanced configuration. Unlike previous versions, session affinity is not required at the load balancer (though you still want a load balancer to handle connection management policies and health checking). This is because CAS now contains the logic to authenticate a request and then route it to the Mailbox server that hosts the active copy of the mailbox database.
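The routing step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the lookup tables, server names, and function are illustrative, not real Exchange APIs): because the destination depends only on directory data, any CAS in the array can answer any request, which is why no session affinity is needed.

```python
# Hypothetical sketch of stateless CAS routing. In a real deployment the
# mappings below would be resolved from Active Directory, not hard-coded.

# Mailbox database -> Mailbox server currently hosting its active copy.
ACTIVE_COPY = {
    "DB01": "MBX-A",
    "DB02": "MBX-B",
}

# User -> mailbox database that holds the user's mailbox.
MAILBOX_DATABASE = {
    "alice": "DB01",
    "bob": "DB02",
}

def route_request(user: str) -> str:
    """Return the Mailbox server a CAS should proxy this user's request to.

    The answer is derived purely from directory data, so every CAS in the
    array computes the same result -- no session state is required.
    """
    database = MAILBOX_DATABASE[user]
    return ACTIVE_COPY[database]
```

If the active copy of DB01 fails over to another Mailbox server, only the `ACTIVE_COPY` mapping changes; clients keep talking to the same load-balanced CAS endpoint.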

The Mailbox server role now hosts all the components and protocols that process, render, and store the data. No clients ever connect directly to the Mailbox server role; all client connections are handled by the Client Access server role. Mailbox servers can be added to a Database Availability Group, thereby forming a highly available unit that can be deployed in one or more datacenters.

Unlike the past two generations, these two server roles do not suffer from the same constraints:

  1. Functionality is not dispersed across both server roles. All data rendering now occurs on the Mailbox server that hosts the active mailbox database copy. As we will discuss in more detail in a future article, the Client Access server role is merely a proxy.
  2. Versioning is also decoupled as a result of moving the data rendering stack to a single server role.
  3. Client Access servers and the Mailbox servers no longer leverage the chatty RPC protocol for client sessions, thereby removing the geographical affinity issues.
  4. From a user perspective, users no longer need to be served by Client Access servers located in the same Active Directory site as the Mailbox servers hosting their mailboxes.

If we return to the layering diagram, we can see how this changes:

Figure 4: Inter-server communication in Exchange 2013

Instead of allowing communication between servers to occur at any layer in the stack, server-to-server communication must occur at the protocol layer. This ensures that for a given mailbox’s connectivity, the protocol being used is always served by the protocol instance that is local to the active database copy. In other words, if my mailbox is located on Server1 and I want to send a message to a mailbox on Server2, the message must be sent from Server1’s transport components to the transport components on Server2; content conversion of the message then occurs on Server2 as the message is injected into the store. If I upgrade a Mailbox server with a service pack or cumulative update, then for a given mailbox hosted on that server, all data rendering and content conversion will be local, thus removing the version constraints and functionality issues that arose in previous releases.
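The invariant above can be made concrete with a small sketch. This is a hypothetical model (the class, method names, and "CU" version strings are illustrative, not Exchange code): the only cross-server entry point is the protocol layer, so content conversion always runs on the destination server with that server's own, version-matched business logic.

```python
# Hypothetical model of the Exchange 2013 rule that inter-server traffic
# stays at the protocol layer. Names and versions are illustrative only.

class MailboxServer:
    def __init__(self, name: str, version: str):
        self.name = name
        self.version = version
        self.store = []  # delivered messages

    def convert_content(self, message: dict) -> dict:
        # Business logic runs locally, so it is always the same version
        # as the store it feeds.
        return {**message, "converted_by": self.version}

    def deliver_via_smtp(self, message: dict) -> dict:
        # The only cross-server entry point: protocol layer (SMTP) in,
        # then local content conversion, then local storage.
        converted = self.convert_content(message)
        self.store.append(converted)
        return converted

def send(source: MailboxServer, destination: MailboxServer, message: dict) -> dict:
    # The source server never reaches into the destination's store or
    # business logic directly; it can only speak SMTP to it.
    return destination.deliver_via_smtp(message)
```

Under this model, upgrading Server2 changes which code converts messages destined for Server2's mailboxes, but Server1 never needs to know or match that version.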

Looking Ahead

This article is the start of several articles focused on architecture and the investments we have made in Exchange Server 2013. Over the next several weeks, we will:

  • Delve into more specifics with respect to the server roles.
  • Discuss our investments with respect to storage.
  • Discuss sizing Exchange Server 2013.
  • Discuss load balancing and namespace planning.

And for those of you who did not get a chance to attend MEC 2012, feel free to visit www.iammec.com/video and watch my technical architecture keynote.

Conclusion

Exchange Server 2013 introduces a new building-block architecture that facilitates deployments at all scales. All core Exchange functionality for a given mailbox is always served by the Mailbox server where the mailbox’s database is currently activated. The changes to the Client Access server role enable you to move away from complicated session-affinity load balancing solutions, simplifying the network stack. From an upgrade perspective, all components on a given server are upgraded together, thereby virtually eliminating the need to juggle Client Access and Mailbox server versions. And finally, the new server role architecture positions Exchange to take advantage of hardware trends for the foreseeable future – core counts will continue to increase, disk capacity will continue to increase, and RAM prices will continue to drop.

Ross Smith IV
Principal Program Manager
Exchange Customer Experience