• Running IIS 32-bit Applications on IIS 64-bit

    Do you have IIS on 64-bit Windows and want to run a 32-bit application? No problem. From the Inetpub AdminScripts folder, run the following:

    "cscript.exe adsutil.vbs set W3SVC/AppPools/Enable32BitAppOnWin64 true"

    Here are the details.

    Windows Server 2003 Service Pack 1 enables IIS 6.0 to run 32-bit Web applications on 64-bit Windows using the Windows-32-on-Windows-64 (WOW64) compatibility layer. IIS 6.0 under WOW64 is intended to run the 32-bit personal productivity applications needed by software developers and administrators, including 32-bit Internet Information Services (IIS) Web applications.

    On 64-bit Windows, 32-bit processes cannot load 64-bit DLLs, and 64-bit processes cannot load 32-bit DLLs. If you plan to run 32-bit applications on 64-bit Windows, you must configure IIS to create 32-bit worker processes. Once you have configured IIS to create 32-bit worker processes, you can run the following types of IIS applications on 64-bit Windows:

    • Internet Server API (ISAPI) extensions
    • ISAPI filters
    • Active Server Page (ASP) applications (specifically, scripts calling COM objects where the COM object can be 32-bit or 64-bit)
    • ASP.NET applications

    IIS can, by default, launch Common Gateway Interface (CGI) applications on 64-bit Windows, because CGI applications run in a separate process.
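    Since these loading rules come down to the image type of each binary, it can be handy to check whether a given DLL (an ISAPI extension or filter, say) was built for x86 or x64 before deciding which worker process bitness you need. A minimal sketch in Python that reads the machine field from the PE header (the offsets come from the standard PE/COFF format; the function name is mine, not an IIS tool):

```python
import struct

# Subset of PE machine values (from the PE/COFF specification)
MACHINE = {0x014C: "x86", 0x8664: "x64"}

def pe_machine(data: bytes) -> str:
    """Return the target architecture of a PE image (e.g. an ISAPI DLL)."""
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]   # e_lfanew field
    machine = struct.unpack_from("<H", data, pe_offset + 4)[0]
    return MACHINE.get(machine, "unknown")
```

    With a real file you would pass open(path, "rb").read(); a 32-bit worker process can only load DLLs that report x86.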

    Before you configure IIS to run 32-bit applications on 64-bit Windows, note the following:

    • On 64-bit Windows, IIS supports 32-bit worker processes only in Worker Process Isolation mode
    • On 64-bit Windows, the World Wide Web Publishing service can run 32-bit and 64-bit worker processes. Other IIS services, such as the IIS Admin service, the SMTP service, the NNTP service, and the FTP service, run 64-bit processes only
    • On 64-bit Windows, the World Wide Web Publishing service does not support running 32-bit and 64-bit worker processes concurrently on the same server
  • Please welcome the DAG Active/Active model in the Mailbox Server Role Requirements Calculator

    Yes, there is a new version of the Mailbox Server Role Requirements calculator (v12.3) that supports the Active/Active DAG model.

    Enhancements in this version:

    • Incorporated the megacycle adjustment formula changes documented in Guidance Change: Calculating the Megacycles for Different Processor Configurations Formula.
    • The calculator no longer requires you to enter the adjusted megacycles per core for the server architecture you are deploying. Instead, you simply need to obtain the SPECint2006 Rate Value for your server platform.
    • Added a Megacycle Multiplication Factor. This works exactly like the IOPS Multiplication Factor and was added as a result of RIM providing E2010 guidance on the megacycle impact of BlackBerry devices.
    • Active/Active user distribution scenarios.  Yes, really!  An Active/Active user distribution architecture has the user population dispersed across both datacenters (usually evenly) with each datacenter being the primary datacenter for its specific user population.  In the event of a failure, the user population can be activated in the secondary datacenter (either via cross-datacenter single database *over or via full datacenter activation).
    • Added a new worksheet/section that documents the Activation Scenarios for DAG deployments. 
    • Added validation logic that reports an error and suppresses results if the HA solution would require more than 16 servers in a DAG, since such a design is invalid.
    • Dumpster size calculations have been optimized as calendar versioning storage has been reduced from 5.8% impact to 3% impact in SP1.

    A blog post explaining the calculator (updated for this new version) is here, or you can download the calculator directly.

  • Lync 2010 Address Book Normalization

    Excellent article by Jeff Schertz about Address Book normalization; read it at http://blog.schertz.name/2010/09/lync-2010-address-book-normalization/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed:+jschertz+(Jeff+Schertz)

  • Who said that transactions go from the logs to the DB?!

    I've heard a lot of discussion about how Exchange ESE works, and the claim that transactions go from the logs to the DB based on the checkpoint depth. I've collected some information from different sources to explain how ESE actually writes transactions to the DB.

    The following five subcomponents of ESE work together to move the data into the database and to its static form:

    • Log buffers   When ESE first receives a transaction, it stores it in the log buffers. These log buffers hold information in memory before it is written to the transaction logs. By default, each buffer unit is the size of a disk sector, which means it is 512 bytes. JET does some sanitization to make sure that the number of buffers is a minimum of 128 sectors and a maximum of 10,240 sectors, and aligns it down to the largest 64 KB boundary. So, for Exchange 2000 Server (and all service packs) the default number of log buffers is 84, which JET sanitizes to 128, so the actual buffer area is 64 KB. For Exchange Server 2003, the default number of log buffers is 500, which JET sanitizes to 384, so the actual buffer area is 192 KB.
    • Log writer   As the buffers fill up, ESE moves the data from the buffers onto disk and into the log files. In this operation, the transactions are committed to disk to the logs in a synchronous fashion. This process is fast, because it is crucial to move the data from memory and into the transaction logs quickly in case of a system failure.
    • IS buffers   The IS or cache buffers are the first step toward turning a transaction into actual data. The IS buffers are a group of 4-kilobyte (KB) pages allocated from memory by Exchange for the purpose of caching database pages before they are written to disk. When first created, these pages are clean, because they have yet to have any transactions written to them. ESE then plays the transactions from the logs into these empty pages in memory, thereby changing their status to dirty. The default maximum size these buffers can reach is 900 MB in Exchange 2000 Server SP3.
    • Version store   ESE writes multiple, different transactions to a single page in memory. The version store keeps track of and manages these transactions. It also structures the pages as the transactions occur.
    • Lazy writer   At this point, ESE must flush the dirty pages out of memory. The lazy writer is responsible for moving the pages from the cache buffers to disk. Because there are so many transactions coming in at once and so many pages getting dirtied, the job of the lazy writer is to prioritize them and subsequently handle the task of moving them there without overloading the disk I/O subsystem. This is the last phase and the point at which the transactions have officially become static data. It is also at this point that the dirty pages are cleaned and ready for use again.
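    The log buffer sanitization rules described above (minimum 128 sectors, maximum 10,240 sectors, aligned down to a 64 KB boundary) are mechanical enough to sketch as code. The constants are taken from the text; the function name is mine:

```python
SECTOR_BYTES = 512
MIN_SECTORS, MAX_SECTORS = 128, 10240
ALIGN_SECTORS = (64 * 1024) // SECTOR_BYTES   # a 64 KB boundary = 128 sectors

def sanitize_log_buffers(sectors: int) -> int:
    """Clamp a configured log buffer count and align it down to 64 KB."""
    sectors = max(MIN_SECTORS, min(MAX_SECTORS, sectors))
    return (sectors // ALIGN_SECTORS) * ALIGN_SECTORS
```

    This reproduces both examples from the text: the Exchange 2000 default of 84 sanitizes to 128 (64 KB), and the Exchange 2003 default of 500 sanitizes to 384 (192 KB).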

    What about the checkpoint file and other components?

    The Checkpoint File. The database engine maintains a checkpoint file, Edb.chk, for every log file sequence in order to keep track of the data that has not yet been written to the database file on disk. The checkpoint file is a pointer in the log sequence that indicates where in the log file the information store needs to start recovery in case of a failure. The checkpoint file is essential for efficient recovery. Without it, the information store would have to start from the beginning of the oldest log file on disk and check every page in every log file to determine whether it had already been written to the database, a time-consuming process, especially if all you want to do is make the database consistent.

    The Checkpoint depth. The checkpoint depth is a threshold that defines when ESE begins to aggressively flush dirty pages to the database.

    ESE Cache. An area of memory reserved to the Information Store process (Store.exe) that is used to store database pages in memory. Caching pages in memory reduces read I/Os, especially when a page is accessed many times. The cache also reduces write I/Os in two ways: by holding a dirty page in memory, multiple changes can be made to the page before it is flushed to disk; in addition, the engine can write multiple pages together (up to 1 MB) using write coalescing.

    In Exchange 2000 and Exchange 2003, each Exchange database consists of two main files:

    The EDB file. The .edb file is the main repository for the mailbox data. The fundamental construct of the .edb file is the b-tree structure, which is only present in this file, and not in the .stm file. The b-tree is designed for quick access to many pages at once. The .edb file design permits a top level node and many child nodes. Tree depth has the greatest effect on performance. A uniform tree depth across the entire structure, where every leaf node or data page is equidistant from the root node, means database performance is consistent and predictable. In this way, the ESE 4 KB pages are arranged into tables that form a large database file containing Exchange data. The database is actually made up of multiple b-trees. These other ancillary trees hold indexing and views that work with the main tree. The .edb file is accessed by ESE directly.

    The STM file. The .stm or streaming media file is used in conjunction with the .edb file to comprise the Exchange database. The purpose of the .stm file is to store streamed native Internet content. Unlike the .edb file mentioned previously, the .stm file does not store data in a b-tree structure. When a message arrives through the Internet or Simple Mail Transfer Protocol (SMTP), it always arrives as a stream of bytes. In Exchange Server 2003 and Exchange 2000 Server, these messages are streamed directly to the .stm file where they are held until accessed by a MAPI client. So the content is not converted. That way, if the end user is consistently accessing mail through POP3, the mail items are pulled directly from the .stm file and are already in the proper state for delivery. In the case that the message is accessed by a MAPI client, however, the message is moved over to the .edb file and converted to Exchange native form, and is never moved back to the .stm file.

    Note that the .stm file was removed in Exchange 2007.

    So how do the above components work together?

    1. An operation occurs against the database (e.g. client sends a new message), and the page that requires updating is read from the file and placed into the ESE cache (if it is not already in memory), while the log buffer is notified and records the operation in memory.
    2. The changes are recorded by the database engine but are not immediately written to disk; instead, these changes are held in the ESE cache and are known as "dirty" pages because they have not been committed to the database. The version store is used to keep track of these changes, thus ensuring isolation and consistency are maintained.
    3. As the database pages are changed, the log buffer is notified to commit the change, and the transaction is recorded in a transaction log file (which may or may not require a log roll and a start of a new log generation).
    4. Eventually the dirty database pages are flushed to the database file.
    5. The checkpoint is advanced.
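    The five steps above can be sketched as a toy model (illustrative only; the class and attribute names are invented, and real ESE is far more sophisticated):

```python
class MiniEse:
    """Toy model of the ESE write path: log first, flush dirty pages later."""

    def __init__(self):
        self.log = []         # committed transaction log records (on disk)
        self.cache = {}       # page_id -> (data, is_dirty): the ESE cache
        self.database = {}    # pages in the database file (on disk)
        self.checkpoint = 0   # log records already reflected in the database

    def update(self, page_id, data):
        # Steps 1-3: dirty the cached page and commit the change to the log.
        self.cache[page_id] = (data, True)
        self.log.append((page_id, data))

    def flush(self):
        # Steps 4-5: lazily write dirty pages, then advance the checkpoint.
        for page_id, (data, dirty) in self.cache.items():
            if dirty:
                self.database[page_id] = data
                self.cache[page_id] = (data, False)
        self.checkpoint = len(self.log)
```

    Note how the change is durable in the log before the database file ever sees the page; that is why recovery only needs to replay the log from the checkpoint forward.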

    In Exchange 2007, we changed the database architecture in four significant respects to take advantage of the 64-bit platform:

    1. The streaming database (.stm) file has been removed from Exchange 2007.
    2. Longer log file names are used, thereby enabling each storage group to generate as many as 2 billion log files before log file generation must be reset.
    3. Transaction log file size has been reduced from 5 MB to 1 MB to support the new continuous replication features in Exchange 2007.
    4. The database page size has increased from 4 KB to 8 KB.

    And to push the ESE engine another step forward in terms of performance, Exchange 2007 SP1 introduced two major updates and some enhancements to the online maintenance process. I recommend reading Part I at http://msexchangeteam.com/archive/2007/11/30/447640.aspx and Part II at http://msexchangeteam.com/archive/2007/12/06/447695.aspx

  • Demystifying Exchange 2010 database availability group (DAG)

    What is a DAG?

    A database availability group (DAG) is the base component of the high availability and site resilience framework that is built into Exchange 2010. A DAG is a group of up to 16 Mailbox servers that host a set of databases and provide automatic database-level recovery from failures that affect individual servers or databases. Exchange 2010 uses the same continuous replication technology found in Exchange 2007. Exchange 2010 combines on-site data replication (CCR) and off-site data replication (SCR) into a single framework which is the DAG. Once servers have been added to a DAG, administrators can add replicated database copies incrementally, and Exchange 2010 switches between these copies automatically, as needed, to maintain availability.

    When can I create a DAG?

    After you've deployed Exchange 2010, you can create a DAG, add Mailbox servers to the DAG, and then replicate mailbox databases between the DAG members. A DAG can be created using the New Database Availability Group wizard in the Exchange Management Console, or by running the New-DatabaseAvailabilityGroup cmdlet in the Exchange Management Shell. When creating a DAG, you provide a name for the DAG, and optional witness server and witness directory settings. In addition, one or more IP addresses are assigned to the DAG, either by using static IP addresses or by allowing the DAG to be automatically assigned the necessary IP addresses using Dynamic Host Configuration Protocol (DHCP).

    Do I need to setup windows cluster for the DAG to work?

    No. There is no longer a distinction between a standalone and a clustered Exchange 2010 installation. After you install a normal Exchange 2010 Mailbox server, you run the New-DatabaseAvailabilityGroup cmdlet to create a DAG; once the DAG has been created, Mailbox servers can be added to it. When the first server is added to the DAG, a cluster is formed for use by the DAG. DAGs make limited use of Windows Failover Clustering technology, namely the cluster heartbeat, cluster networks, and the cluster database (for storing data that changes or can change quickly, such as database state changes from active to passive or vice versa, or from mounted to dismounted and vice versa). As each subsequent server is added to the DAG, it is joined to the underlying cluster (and the cluster's quorum model is automatically adjusted by the system, as needed), and the server is added to the DAG object in Active Directory. Because DAGs rely on Windows Failover Clustering, they can only be created on Exchange 2010 Mailbox servers running Windows Server 2008 Enterprise Edition or Windows Server 2008 R2 Enterprise Edition. In addition, it is recommended that each Mailbox server in the DAG have at least two network interface cards.

    What's happening when I create a DAG or join a server to an existing DAG?

    When the first Mailbox server is added to a DAG, the following occurs:

    • The Windows Failover Clustering component is installed, if it is not already installed.
    • A failover cluster is created using the name of the DAG.
    • A cluster network object (CNO) is created in the default Computers container.
    • The name and IP address of the DAG are registered as a Host (A) record in DNS.
    • The server is added to the DAG object in Active Directory.
    • The cluster database is updated with information about the databases mounted on the added server.

    When the second and subsequent servers are added to the DAG, the following occurs:

    • The server is joined to the Windows Failover Cluster for the DAG.
    • The quorum model is automatically adjusted:
      • A Node Majority quorum model is used for DAGs with an odd number of members.
      • A Node and File Share Majority quorum model is used for DAGs with an even number of members.
    • The witness directory and share are automatically created by Exchange when needed.
    • The server is added to the DAG object in Active Directory.
    • The cluster database is updated with information about the mounted databases.
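    The quorum adjustment rule above is a simple parity check. A sketch of the stated rule (not of any cluster internals; the function name is mine):

```python
def dag_quorum_model(member_count: int) -> str:
    """Quorum model Exchange selects as DAG membership changes."""
    if member_count % 2 == 1:
        return "Node Majority"            # odd number of members
    return "Node and File Share Majority" # even number; witness breaks ties
```

    The file share witness gives an even-membered cluster an extra tie-breaking vote, which is why the model switches with parity.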

    Can I have DAG members from different subnets?

    Yes. During cluster creation, the Add-DatabaseAvailabilityGroupServer task retrieves the IP address(es) you configured when creating the DAG, keeps the appropriate ones, and ignores any that don't match a subnet found on the server. This gives you the flexibility to have a DAG with members on the same or on different subnets (for example, when you will have a DAG node in another datacenter).
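    That filtering behavior amounts to checking each configured DAG address against the subnets present on the member. A sketch using Python's ipaddress module (usable_dag_ips is a hypothetical helper, not an Exchange API):

```python
import ipaddress

def usable_dag_ips(candidate_ips, server_subnets):
    """Keep only the DAG IPs that fall inside one of the server's subnets."""
    networks = [ipaddress.ip_network(s) for s in server_subnets]
    return [ip for ip in candidate_ips
            if any(ipaddress.ip_address(ip) in net for net in networks)]
```

    A member in a 10.0.0.0/24 datacenter would keep a 10.0.0.x DAG address and ignore one assigned for the other site's subnet.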

    Can I use a 3rd party replication tool to replicate the databases in the DAG?

    By default, a DAG is designed to use the built-in continuous replication feature to replicate mailbox databases between servers in the DAG. If you are using third-party data replication that supports the Third Party Replication API in Exchange 2010, you must create the DAG in third-party replication mode by using the New-DatabaseAvailabilityGroup cmdlet with the ThirdPartyReplication parameter. Note that once this mode is enabled, it cannot be disabled.

    Can I encrypt the DAG network traffic?

    DAGs support the use of encryption by leveraging the encryption capabilities of the Windows Server operating system. DAGs use Kerberos authentication between Exchange servers, and the Microsoft Kerberos SSP’s EncryptMessage/DecryptMessage APIs handle encryption of DAG network traffic. The Microsoft Kerberos SSP supports multiple encryption algorithms; the Kerberos authentication handshake picks the strongest encryption protocol supported in the list: typically AES 256-bit, potentially with a SHA Hash Message Authentication Code (HMAC) to maintain the integrity of the data.

    Can I compress the DAG network communication?

    DAGs also support built-in compression. When compression is enabled, DAG network communication uses XPRESS, which is Microsoft’s implementation of the LZ77 algorithm. This is the same type of compression used in many Microsoft protocols, in particular MAPI RPC compression between Outlook and Exchange.

    What is the minimum number of networks required for a DAG?

    Although a single network is supported, we recommend that each DAG have at least two networks: a single MAPI network and a single Replication network. This provides redundancy for the network and the network path, and enables the system to distinguish between a server failure and a network failure. Using a single network adapter prevents the system from distinguishing between these two types of failures.

    What will happen if one of my DAG networks encountered a failure?

    In the event of a failure affecting the MAPI network, a server failover will occur (assuming there are healthy mailbox database copies that can be activated). In the event of a failure affecting the Replication network, if the MAPI network is unaffected by the failure, log shipping and seeding operations will revert to using the MAPI network. When the failed Replication network is restored, log shipping and seeding operations will revert back to the Replication network. To increase the availability of your DAG, additional MAPI and/or Replication networks can be added, as needed. You can also prevent an individual network from being a single point of failure by using network adapter teaming or similar technology.
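    The failure handling described above boils down to a small decision table. A sketch of the described behavior (the function name is mine):

```python
def log_shipping_path(mapi_healthy: bool, replication_healthy: bool) -> str:
    """Which path log shipping and seeding take, given network health."""
    if not mapi_healthy:
        return "server failover"       # healthy copies activate elsewhere
    if replication_healthy:
        return "Replication network"   # the normal path
    return "MAPI network"              # fallback until replication returns
```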

    Can I host other roles on a mailbox server that is member of a DAG?

    Unlike Exchange 2007, where clustered mailbox servers required dedicated hardware, Mailbox servers in a DAG can host other Exchange roles (Client Access, Hub Transport, Unified Messaging), providing full redundancy of Exchange services and data with just two servers. This can be an excellent option for small and medium organizations where the number of mailboxes and the email traffic don't require dedicated hardware for each role.