Is this thing on?

Scott Schnoll's weblog

  • Microsoft Exchange Server 2007 Edge Transport and Messaging Protection

    There's some new content from the Microsoft IT Showcase on how Microsoft uses the Exchange 2007 Edge Transport server role in production.  Check out the white paper and PowerPoint presentation below.

     

    Microsoft Exchange Server 2007 Edge Transport and Messaging Protection

    In deploying the Exchange 2007-based messaging protection solution, Microsoft IT used all messaging protection features of the Edge Transport server role and Forefront Security for Exchange Server to block, delete, reject, or quarantine unwanted messages. To further increase security, servers were hardened and audited for vulnerabilities to ensure readiness for Internet visibility. The many steps that Microsoft IT took to design a network environment, combined with the messaging protection features of Exchange Server 2007, resulted in greater flexibility, fewer false positives, and reduced TCO.
    Technical White Paper | PowerPoint Presentation

     

  • Exchange 2007 32-bit Management Tools Available for Download

    A short time ago, we released a 32-bit package containing the Exchange 2007 management tools, including the Exchange Management Console, the Exchange Management Shell, the Exchange Help file, the Microsoft Exchange Best Practices Analyzer Tool, and the Exchange Troubleshooting Assistant Tool.

    This download is available from the Microsoft Download Center at http://www.microsoft.com/downloads/details.aspx?FamilyID=6be38633-7248-4532-929b-76e9c677e802&DisplayLang=en.

    Enjoy!

  • Exchange 2007: Platforms, Editions, Product Keys and Versions

    Now that the RTM version of Exchange 2007 is available, I'm seeing a lot of questions in the newsgroups, Web forums and other Exchange community areas related to SKUs, platforms and product keys.  People are wondering about the differences between the 32-bit and 64-bit versions of Exchange 2007, and about the differences between the Standard and Enterprise Editions, particularly for the 32-bit version. People are also wondering what they can do with the trial version of Exchange 2007 posted for download on microsoft.com.

    Editions and Licenses

    First, let's talk about editions.  Exchange 2007 comes in two server editions: Standard Edition and Enterprise Edition. These editions are described and compared at http://www.microsoft.com/exchange/evaluation/editions.mspx. As you can see in the Exchange 2007 Edition Offerings table on that page, the primary differences are:

    1. Only the Enterprise edition can scale to 50 databases per server; the Standard edition is limited to 5 databases per server.
    2. In a production environment, only the Enterprise edition is supported in a Windows failover cluster; the Standard edition is not supported in a Windows failover cluster in production; therefore, Single Copy Clusters and Cluster Continuous Replication are only supported on the Enterprise Edition.  Notice that I said supported in production.  More on this in a bit.

    Even though Exchange comes in two edition offerings, these are licensing editions only, and controlled by the use of a product key. There is a single set of binary files for each platform (one for x64 systems, and one for x86 systems), and the same binaries are used for both editions. It is when you enter a valid, licensed product key that the supported edition for the server is established.

    Note     One important nuance of product keys is that they are for same edition key swaps and upgrades only, and they cannot be used for downgrades.  You can use a valid product key to go from the evaluation version (Trial Edition) to either the Standard Edition or the Enterprise Edition; you can also use a valid product key to go from the Standard Edition to the Enterprise Edition.  You can also re-license the server using the same edition product key.  For example, if you had two Standard Edition servers with two keys, but you accidentally used the same key on both servers, you can change the key for one of them to be the other key that you were issued.  These things can be done without having to reinstall or reconfigure anything.  Simply enter the product key and restart the Microsoft Exchange Information Store service and the edition corresponding to that product key will be reflected. However, you cannot use product keys to downgrade from the Enterprise Edition to the Standard Edition, nor can you use them to revert back to the Trial Edition. These types of downgrades can only be done by uninstalling Exchange 2007, reinstalling Exchange 2007, and entering in the correct product key.
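    To make the allowed transitions concrete, here's a minimal sketch (mine, not anything Exchange actually runs) of the key-change rule: same-edition swaps and upgrades are permitted, downgrades are not:

```python
# Illustrative model of Exchange 2007 product key transitions.
# Allowed: Trial -> Standard, Trial -> Enterprise, Standard -> Enterprise,
# and re-keying within the same edition. Downgrades require a reinstall.
RANK = {"Trial": 0, "Standard": 1, "Enterprise": 2}

def key_change_allowed(current: str, new: str) -> bool:
    """True if entering a product key can take the server from `current` to `new`."""
    return RANK[new] >= RANK[current]

assert key_change_allowed("Trial", "Standard")
assert key_change_allowed("Trial", "Enterprise")
assert key_change_allowed("Standard", "Standard")        # same-edition key swap
assert key_change_allowed("Standard", "Enterprise")
assert not key_change_allowed("Enterprise", "Standard")  # downgrade: reinstall required
assert not key_change_allowed("Standard", "Trial")       # no reverting to Trial
```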

    Exchange 2007 also comes in two client access license (CAL) editions, which are also called the Standard Edition and the Enterprise Edition. You can mix and match the server editions with the CAL editions.  For example, you can use Enterprise CALs against the Standard server edition.  Similarly, you can use Standard CALs against the Enterprise server edition.  The Enterprise CAL is an additive CAL, which means that you buy the Standard CAL, and then add on an Enterprise CAL on top of it. An Enterprise CAL gets you all of the features listed in the last column of the Exchange 2007 CAL Offerings table (note that, as that page says, some of the listed features can only be purchased through a volume license program, and they are not available as retail purchases).

    When you're ready to buy Exchange 2007, visit http://www.microsoft.com/exchange/howtobuy/default.mspx for details.  BTW, please note that the above text is my interpretation of what is stated at http://www.microsoft.com/exchange/evaluation/editions.mspx as of 12/31/06, and my interpretation could be totally wrong. I encourage you to read the page yourself, and if you have any questions, feel free to contact Microsoft Sales using the contact information listed at http://www.microsoft.com/exchange/howtobuy/default.mspx.

    32-bit vs. 64-bit 

    Next, let's answer the platform question: why is there a 32-bit version and a 64-bit version of Exchange 2007?  We are working on some product documentation that will provide complete details, but until then, I've compiled a bunch of information that should answer all of the questions I've seen on this issue.  We made two platform versions of Exchange 2007 with the intent that one (the 64-bit version) would be used in production environments and the other (the 32-bit version) would be used in non-production environments (such as labs, training facilities, demo and evaluation environments, etc.). You cannot purchase the 32-bit version; you can only purchase the 64-bit version. Everyone should know the difference between a production and non-production environment, but in case you don't, KC Lemson and Paul Bowden give a great description of what we mean in their Exchange Queue & A debut article for TechNet Magazine at http://www.microsoft.com/technet/technetmag/issues/2007/01/ExchangeQA/. As KC and Paul also explain, the lines between production and non-production use of the 32-bit version are a little blurred, because we do allow minimal supported use of 32-bit code in production environments. Specifically, as they state, you can use the 32-bit version in production to administer Exchange 2007 servers and to extend your Active Directory schema. All other uses of the 32-bit version of Exchange 2007 in production environments are unsupported.  At this time, you cannot use either the 32-bit version or the 64-bit version on Windows Vista, or on Windows Server code-named "Longhorn". One reason is that the Exchange management components (namely the Exchange Management Console and the Exchange Management Shell) rely on Windows PowerShell, and at this time there is no RTM version of Windows PowerShell for Vista or Longhorn.  See http://www.microsoft.com/windowsserver2003/technologies/management/powershell/download.mspx for details on the RTM version of Windows PowerShell.

    While the 64-bit version can be the Standard Edition or the Enterprise Edition, the 32-bit version is always and only the Standard Edition.  As I mentioned earlier, Single Copy Clusters (SCC) and Cluster Continuous Replication (CCR) are only supported in production on the Enterprise Edition of Exchange 2007; however, we have made an exception in the 32-bit version code to allow SCC and CCR to be used for non-production use on the 32-bit version, even though the 32-bit version is the Standard Edition. This means that you can set up a 32-bit test lab for trying out SCC and CCR in non-production environments.  Because it's 32-bit, you can even create the non-production environments using Microsoft Virtual Server.  I use Exchange 2007 in virtual environments for all of my blogcasts, Webcasts, demos, etc. and it works really well.  If you're not sure how to build up such an environment, check out my step-by-step instructions. Also, check out http://msexchangeteam.com/archive/2006/08/09/428642.aspx for a blogcast on CCR that uses a virtual environment.

    Note   We also allow you to install Unified Messaging (UM) with the 32-bit version so you can check out UM-related features in a non-production environment. You can even use the software-based UM Test Phone described at http://www.microsoft.com/technet/prodtechnol/exchange/e2k7help/08e67a99-e37f-4afd-bd58-455b62580af7.mspx.

    Exchange 2007 and Virtualization

    Speaking of virtual environments and production environments, be aware that it will be quite some time before Exchange 2007 is supported in production in a virtual environment. Exchange Server 2007 is only supported in production using the 64-bit version, and neither Microsoft Virtual Server nor Microsoft Virtual PC supports 64-bit guest systems. Our first 64-bit guest support will come with Hypervisor, which is coming for Longhorn within 180 days of Longhorn's release (note that is within 180 days, meaning it could ship the same day as Longhorn, or it could ship 180 days after Longhorn ships). Exchange 2007 does not yet support Longhorn server (nor does it support Longhorn directory servers, so AD sites with Longhorn directory servers need to be isolated from AD sites that include Exchange 2007 servers). Support for Longhorn will arrive in a service pack (most likely SP1) for Exchange 2007. In summary, there won't be virtualization support for Exchange 2007 in production for some time.

    Evaluations and Product Keys

    When you install Exchange 2007, it is unlicensed and referred to as a Trial Edition. Unlicensed (Trial Edition) servers appear as the Standard Edition, and they are not eligible for support from Microsoft Product Support Services. The Trial Edition expires 120 days after the date of installation, although expiration does not remove any functionality. When you start the Exchange Management Console, if you have any unlicensed Exchange 2007 servers in your organization, Exchange will display a list of all unlicensed Exchange 2007 servers and the number of days remaining until the Trial Edition expires. If you have expired unlicensed Exchange 2007 servers, you will also see a separate warning for each expired server.  For lab, demo and test environments, unless you have a valid reason for rebuilding the environment, or unless you just love our new Setup wizard so much that you can't stop uninstalling and installing server roles, I recommend that you get used to dealing with the expiration nag dialog and not rebuild your servers every 120 days. Either way, the choice is yours, but again, you won't lose any functionality when running on an expired Trial Edition.

    You can upgrade from a 64-bit Trial Edition to a 64-bit retail version by purchasing the appropriate license(s) and by entering the Product Key that you get when you make the purchase. You can find the product key on the Exchange 2007 DVD case. It's a 25-character alphanumeric string, grouped in sets of five characters separated by hyphens. Step-by-step instructions for entering your product key can be found at http://www.microsoft.com/technet/prodtechnol/exchange/e2k7help/40d9e583-69cd-4363-807f-43e02e03ca78.mspx.  These steps include instructions for entering the key using either the Exchange Management Console or the Exchange Management Shell. However, in the 32-bit version, there is no Exchange Management Console interface for this because you can't purchase 32-bit licenses.
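    For illustration only, here's a quick sketch of what that key format looks like; it checks only the shape of the string, not whether a key is valid (the sample key below is made up):

```python
import re

# Shape of an Exchange 2007 product key: 25 alphanumeric characters in five
# hyphen-separated groups of five. The sample key below is invented.
KEY_PATTERN = re.compile(r"^[A-Z0-9]{5}(?:-[A-Z0-9]{5}){4}$")

def looks_like_product_key(key: str) -> bool:
    """Check only the format; this says nothing about whether the key is valid."""
    return bool(KEY_PATTERN.match(key.strip().upper()))

assert looks_like_product_key("abcde-fghij-klmno-pqrst-uvwxy")
assert not looks_like_product_key("ABCDE-FGHIJ-KLMNO-PQRST")    # only 4 groups
assert not looks_like_product_key("ABCDEFGHIJKLMNOPQRSTUVWXY")  # no hyphens
```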

    Using either the Exchange Management Console or the Exchange Management Shell, you can see what Edition you're running, and using the Exchange Management Shell, you can also see how many days, hours, minutes, seconds, and yes, milliseconds, are left on the 120-day trial period.  Use the Get-ExchangeServer cmdlet and look for the Edition and RemainingTrialPeriod values.
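    The remaining trial period is just date arithmetic against the 120-day window; here's a sketch (assuming, as described above, that the clock starts at installation):

```python
from datetime import datetime, timedelta

TRIAL_LENGTH = timedelta(days=120)

def remaining_trial_period(installed_on: datetime, now: datetime) -> timedelta:
    """Time left in the 120-day trial window; never negative."""
    return max(installed_on + TRIAL_LENGTH - now, timedelta(0))

# Installed January 1, 2007; checked March 1, 2007: 61 days remain (expiry May 1).
assert remaining_trial_period(datetime(2007, 1, 1), datetime(2007, 3, 1)) == timedelta(days=61)
# An expired trial reports zero rather than a negative value.
assert remaining_trial_period(datetime(2006, 1, 1), datetime(2007, 1, 1)) == timedelta(0)
```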

    What's Missing from the 32-bit Version

    There are some things that are not available in the 32-bit version:

    1. Automatic anti-spam updates.  Only a licensed 64-bit version will be able to get automatic anti-spam updates from Microsoft Update.
    2. Storage groups and databases.  You can have a maximum of 5 databases per server in as many as 5 storage groups on the 32-bit version.

    Final Build - Version Confusion

    You may have heard that the final RTM build of Exchange 2007 is build 685.25.  You may have also heard that it's 685.24.  Both are correct, actually.

    When you view the Version information in the Exchange Management Console or examine the value of the AdminDisplayVersion property for Exchange servers in the Exchange Management Shell, it shows the version as 685.24. When you view the Exchange version information in the Windows registry, it shows 685.25. If you use Microsoft Operations Manager, it will also show version 685.25, but if you view version information in Microsoft Office Outlook, it will say 685.24.

    An exception to this version mismatch problem is present on the Edge Transport server.  That will always and only display 685.25 for the version.  This makes things interesting when looking at a bunch of Exchange servers in the Exchange Management Console that include one or more sync'd Edge Transport servers because the Version column will show both 685.24 (for non-Edge Transport servers) and 685.25 (for Edge Transport servers).

    Also, if you click Help | About Exchange Server 2007, you'll see a different version number altogether: 685.018. This happens on all Exchange 2007 servers.

    Finally, if you use the Get-ExchangeServer cmdlet and examine the ExchangeVersion property, you'll notice yet another different version number: 0.1 (8.0.535.0).  However, this one does not refer to the version of an installed product, but rather the minimum version of the product that can read the object. In this case any Exchange server that is version 8.0.535.0 or later will be able to read this object (because the last changes to this object's schema were made in build 8.0.535.0).
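    The minimum-version semantics boil down to an ordinary dotted-version comparison; here's a hypothetical sketch of that rule:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def can_read_object(server_version: str, object_min_version: str) -> bool:
    """A server can read the object if its build is at least the object's minimum."""
    return parse_version(server_version) >= parse_version(object_min_version)

assert can_read_object("8.0.685.25", "8.0.535.0")     # the RTM build can read it
assert not can_read_object("8.0.100.0", "8.0.535.0")  # an older build cannot
```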

    The versioning is very confusing, but these discrepancies should be fixed in Service Pack 1.

    ISOs and EXEs

    If you download Exchange 2007 from MSDN, you get a large ISO file.  It's larger than 4.7 GB, which means that if you want to burn it to a DVD, you must use a double-layer drive and disc.  If you do not have a double-layer drive or disc, you can use an ISO mounting tool to mount the ISO file and then extract the files to the file system.

    If you download Exchange 2007 from microsoft.com, you get a self-extracting EXE which will extract itself to the file system.

  • Halo 3 Video Documentary Available!

    Over at Bungie.net, Frankie posted details about a video documentary (Vidoc) called 'Et Tu Brute' which is about the Brutes in Halo 3.  I really enjoyed the Vidoc and it will be good to have such an interesting challenge when Halo 3 is released.  Even though the graphics in the Vidoc are not final, and even though it looks like some of the material was from a pre-alpha build, the detail is stunning and impressive. Clearly, Bungie has a lot of quality artistic talent on their hands.

    You can check out the Vidoc in Windows Media and Quicktime formats:

    Quicktime:

    Et Tu Brute - small
    Et Tu Brute - Large (640x480)

    Windows Media:

    Et Tu Brute - small
    Et Tu Brute - Large (640x480)

    The folks over at on10.net also have it in iPod, PSP, and Zune formats.

     

  • Continuous Replications and Exchange Backups

    In Microsoft Exchange Server 2007, continuous replication, also known as log shipping, is the process of automating the replication of closed transaction log files from a production storage group (called the "active" storage group) to a copy of that storage group (called the "passive" storage group) that is located on a second set of disks (Local Continuous Replication, or LCR) or on another server altogether (Cluster Continuous Replication, or CCR). Once copied to the second location, the log files are then replayed into the passive copy of the database, thereby keeping the storage groups in sync with a slight time lag.

    In simple terms, log shipping follows these steps at the storage group level, with each storage group containing a maximum of one database:

    • Seed the passive copy database directory with a current copy of the active database.
    • When there is a new log file in the active copy log directory, copy it to the passive copy log directory.
    • Replay the log file from the passive copy log directory into the passive database.
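    Here's a toy model of those three steps, with a dict standing in for the database and lists of key/value updates standing in for log files; it's purely illustrative, not how ESE works internally:

```python
# Step 1: seed the passive copy with the current active database.
def seed(active_db: dict) -> dict:
    return dict(active_db)

# Step 2: copy any closed logs that haven't been copied yet.
def copy_logs(active_logs: list, passive_logs: list) -> None:
    passive_logs.extend(active_logs[len(passive_logs):])

# Step 3: replay copied-but-unreplayed logs into the passive database.
def replay(passive_db: dict, passive_logs: list, replayed: int) -> int:
    for log in passive_logs[replayed:]:
        for key, value in log:
            passive_db[key] = value
    return len(passive_logs)

active_db = {"inbox": 1}
passive_db = seed(active_db)
active_logs = [[("inbox", 2)], [("sent", 1)]]   # two closed log generations
passive_logs, replayed = [], 0
copy_logs(active_logs, passive_logs)
replayed = replay(passive_db, passive_logs, replayed)
assert passive_db == {"inbox": 2, "sent": 1} and replayed == 2
```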

    One of the benefits of using continuous replication is the ability to offload Volume Shadow Copy Service (VSS)-based backups from the active storage groups to the passive storage groups. Exchange-aware VSS backups are supported for both the active and passive storage groups and databases. The passive copy backup solution we provide is VSS-only, and it's implemented by the Exchange Replica VSS Writer that is part of the Replication service. Streaming backups are only supported from the active storage groups; you cannot use the streaming backup APIs to back up the database on the passive side. You also need to use a third-party backup application that supports Exchange VSS, as NT Backup is not Exchange VSS-aware.

    When you're making a VSS backup off the passive copy, what happens to the transaction logs?  A common task during Exchange-aware backups is the truncation of transaction log files after the backup has completed successfully. The replication feature in Exchange 2007 guarantees that logs that have not been replicated are not deleted.

    The challenge with taking a backup on the passive is that backups modify the header of the database; for example, they add information about the time of the last backup of the database. The VSS backup is made possible by the Exchange Replica VSS Writer in the Replication service, and the Replication service has a copy of the data, but it can only get its data and modifications from the store. It can't independently modify its copy of the database; that would produce divergence. Therefore, it can't modify the header of its database copy.

    The solution is to have the Replication service coordinate its backups with the store. As soon as you start a backup on the passive, the Replication service contacts the store on the active and tells it that a backup is about to start. This is done to prevent the same storage group on both the active and the passive from being backed up simultaneously. Once the backup is finished, the Replication service contacts the store and lets it know that the backup completed.

    The database header modifications resulting from the backup are then made by the store on the active. This action generates a log record, which through continuous replication is copied to the passive. When it is replayed, the database header on the passive is then updated.

    This is a little more complex than traditional backups, and it has some interesting side effects. For example, if you back up the passive and then immediately afterward look at the database on the passive, it will not reflect the backup. The database on the active, however, will reflect it.
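    That coordination and its side effect can be sketched like this (a toy model; the real protocol between the Replication service and the store is far more involved):

```python
class ActiveStore:
    """Toy stand-in for the store on the active node."""
    def __init__(self):
        self.backup_in_progress = False
        self.header = {}

    def begin_backup(self):
        # Prevents the same storage group being backed up from both sides at once.
        if self.backup_in_progress:
            raise RuntimeError("a backup of this storage group is already running")
        self.backup_in_progress = True

    def complete_backup(self, when):
        # The header is stamped on the ACTIVE copy; the change reaches the
        # passive copy only later, via a replicated-and-replayed log record.
        self.header["last_backup"] = when
        self.backup_in_progress = False
        return ("set_header", "last_backup", when)   # log record to ship

active = ActiveStore()
passive_header = {}

active.begin_backup()                          # Replication service notifies the store
record = active.complete_backup("2007-01-01")
assert active.header == {"last_backup": "2007-01-01"}
assert passive_header == {}                    # passive not updated yet...

_, key, value = record                         # ...until the log record is replayed
passive_header[key] = value
assert passive_header == {"last_backup": "2007-01-01"}
```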

    So if you are backing up databases in a continuous replication environment, looking at the database on the active is the most accurate way to determine when the last backup was made.

    Another side effect is that, if the store is not running, you can't back up the passive. The store must be running so that backups can be coordinated and the database header can be updated.

    With log files being copied around and required by the Replication service, getting rid of them becomes a little more complicated. The conventional way to get rid of log files is to run a backup: the backup runs and, on successful completion, deletes the logs you no longer need.

    The challenge now is that the definition of "need" is different because now it takes into account the state of replication. If a log file has not been copied, then you still need it (even though the store might not need it). So now a log file should not be deleted until (1) it isn't needed for crash recovery, (2) it has been replayed on the passive, and (3) it has been backed up.
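    Those three conditions amount to a simple predicate over log generation numbers; here's an illustrative sketch (the parameter names are mine):

```python
def can_delete_log(gen: int, checkpoint_gen: int, replayed_gen: int,
                   backed_up_gen: int) -> bool:
    """True if generation `gen` is no longer needed by anyone on the active."""
    return (gen < checkpoint_gen       # (1) not needed for crash recovery
            and gen <= replayed_gen    # (2) already replayed on the passive
            and gen <= backed_up_gen)  # (3) already captured by a backup

assert can_delete_log(10, checkpoint_gen=12, replayed_gen=11, backed_up_gen=10)
assert not can_delete_log(10, checkpoint_gen=12, replayed_gen=9, backed_up_gen=10)  # not replayed yet
assert not can_delete_log(10, checkpoint_gen=12, replayed_gen=11, backed_up_gen=8)  # not backed up yet
```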

    To coordinate all of this, whenever the Replication service finishes a replay, it contacts the store and says that it replayed storage group X up to Y generation number. At that point, the store knows that log files up to that generation number are no longer needed by the Replication service. It can then analyze the state of the last backup and the state of crash recovery and work out which log files are no longer needed on the active.

    Fortunately, on the passive, things are a lot simpler. The passive can analyze its own log files and determine which ones are needed for recovery, and which ones are needed for backup.

  • Exchange Server 2007 has RTM'd!

    On Friday, December 8, 2006, Exchange Server 2007 was released to manufacturing!  w00t!  See http://msexchangeteam.com/archive/2006/12/07/431782.aspx for a signoff note from our General Manager, Terry Myerson.  The final build number is 8.0.685.25 (note that the build will show in some places as 8.0.685.24; this is known issue and nothing to be worried about).

    After 2 ½ years of upgrading Microsoft's internal Exchange servers, we now have 125,319 mailboxes on Exchange 2007 at Microsoft, with 30,372 enabled for Unified Messaging. We also have over 200 Technology Adoption Partners and Rapid Deployment Partners with over 55,000 mailboxes in production operating within their enterprise SLAs. You can read more about how we deployed Exchange 2007 internally at http://msexchangeteam.com/archive/2006/12/07/431793.aspx.

    So now it's your turn to check it out, to learn about it, to deploy it, and to let us know what you think.  I don't have a date or a link for you, but rest assured it will be available very soon!  So keep checking http://technet.microsoft.com/exchange, http://www.microsoft.com/exchange and http://msexchangeteam.com for more information!

  • Windows Vista has RTM'd!

    As the press release at http://www.microsoft.com/presspass/features/2006/nov06/11-08VistaRTM.mspx states, Windows Vista has been released to manufacturing. w00t!

    Windows Vista becomes available to volume license customers this month, and to consumers on January 30, 2007.

    After reading the press release, head on over to http://www.microsoft.com/windowsvista/ and start getting ready for Windows Vista.

    It's time.  :-)

  • 2007 Microsoft Office has RTM'd!

    As the press release at http://www.microsoft.com/presspass/press/2006/nov06/11-062007OfficeRTMPR.mspx states, the 2007 Microsoft Office system has released to manufacturing!  w00t!

    If you've not had a chance to use the 2007 Microsoft Office system, check out the video at http://office.microsoft.com/search/redir.aspx?AssetID=XT101752491033&CTT=5&Origin=HA101752721033. Note that this video requires Windows Media Player. And for even more information, check out the new Office Online site at http://office.microsoft.com/en-us/products/HA101752721033.aspx.

    For you IT Pros out there looking for technical information on the 2007 Microsoft Office system, check out the Office TechCenter at http://office.microsoft.com/search/redir.aspx?AssetID=XT101642801033&Origin=HH101662121033&CTT=5.

  • Exchange 2007 at Microsoft Tech·Ed Europe: IT Forum

    As you may know, we've reorganized Microsoft Tech·Ed and Microsoft IT Forum into two conferences: Microsoft Tech·Ed Europe: IT Forum, which is specifically for IT professionals, and Microsoft Tech·Ed Europe: Developers, which is specifically for software developers.  IT Forum is sold out, but there are still seats available at Developers.

    As you might imagine, there will be a lot of content related to what I and other folks have referred to as LOVE: Longhorn, Office 2007, Vista, Exchange 2007.  And of course, there will be great sessions on other Microsoft and related products and technologies. You can search by tracks, content types, and other criteria using the Session Search Tool.

    If you don't have tickets, it's too late for you to get them, but you still can enjoy some of this great content.  Check out the Virtualside for more information.

    I'll be at Tech·Ed Europe: IT Forum, delivering two presentations: UCM310 - High Availability and Clustering in Exchange Server 2007, and UCM309 - Microsoft Exchange Server 2007: Storage Changes.  If you happen to be at the show, stop by and say hello!  I'll also be in the Ask the Experts area frequently, as well.

    Hope to see you there!

  • More on Continuous Replication

    In my last blog entry, I talked about the internals of the continuous replication feature in Exchange 2007.  We went into a lot of technical details about the Replication service, its DLL companion files, its object model, etc.  Deep stuff.

    For this blog, I thought it might be useful to step back a bit and cover some more of the basics of continuous replication.

    Why Continuous Replication?

    You may be wondering: why do we have continuous replication at all?  The problem we're trying to solve is data outages. We're trying to provide data availability, the observation being that if you lose your data, recovery is very expensive. Restoring from backup takes a long time, there might be significant data loss, and you're going to be offline for a long period before you get your data back.

    What is Continuous Replication?

    A simple way to describe the solution is that we keep a second copy of your data. If you have a copy of your data, you can use that copy should you lose your original.  The thing that makes this hard is that the copy of your data has to be up-to-date.

    The theory of continuous replication is actually quite simple. The idea is that we make a copy of your data, and then as the original is modified, we make the exact same modifications to the copy. This is going to be far less expensive than copying all of the data each time it is modified. And, this gives you an up-to-date copy of the data which you can then use, should you lose your original.

    How does Continuous Replication Work?

    The way that we keep this data up-to-date is through Extensible Storage Engine (ESE) logging.  ESE is the database engine for Exchange Server. As ESE modifies the database, it generates a log stream (a stream of 1 MB log files) containing a list of physical modifications to the database. The log stream is normally used for crash recovery. If the server blue screens, if a process dies, etc., the database can be made consistent by using the changes described in these log files. The basic technology for this is industry standard.  For example, SQL Server and other database engines all use write-ahead logging.  In Exchange, though, there are a lot of complexities and subtleties, which I won't go into in this blog.

    Log files contain a list of physical modifications to database pages. When an update is made to the database, an in-memory copy of the page is modified. Then, the log record describing that modification is written to the log file. Only once that is done can the page be written to the database.
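    In miniature, that write-ahead ordering looks like this (an illustrative Python sketch, not ESE code):

```python
# Write-ahead logging in miniature: the log record is written before the
# matching page write, so a crash can always be repaired from the log.
log, database, page_cache = [], {}, {}

def update(page: str, value: str) -> None:
    page_cache[page] = value            # 1. modify the in-memory copy of the page
    log.append((page, value))           # 2. write the log record describing it
    database[page] = page_cache[page]   # 3. only now may the page hit the database

update("page7", "hello")
assert log == [("page7", "hello")]
assert database["page7"] == "hello"
```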

    To implement continuous replication, we make a copy of the database, and then as log files are created that describe modifications to the original, we copy the log files and then replay them into the database copy.

    Continuous Replication Behavior

    This leads us to the basic architecture of continuous replication. A new service, called the Microsoft Exchange Replication service, is responsible for keeping the copy of the database up-to-date. It does this by copying log records that the store generates, inspecting them, and then replaying them into the copy of the database.

    Having a copy of the data is only useful if you have some way to use it, preferably in a way that is transparent to the user.  For CCR, the Cluster service provides that: it moves the network address and identity to the passive node and starts the services.  For LCR, activation is manual, but it is generally a very quick operation, since the copy of the data is already available to the server.

    Replication Pipeline 

    The replication pipeline is illustrated in the following figure.

    Continuous Replication - Replication Pipeline

    To briefly recap what happens in the replication pipeline - the Store modifies the source database and generates log files in its log directory (the log directory for the storage group containing the database). The Microsoft Exchange Replication service, which "listens" for new logs by using Windows File System Notification events, is responsible for first copying the log files, inspecting them, and then applying them to the copy of the database.

    ESE Logging and Log Files

    To go deeper on this subject, we need to talk about ESE log files.  Each storage group is assigned a prefix number, starting with 00 for the first storage group.  Each log file in each storage group is assigned a generation number, starting with generation 1. Log files are a fixed size; 1 MB in Exchange 2007.  The current log file is always Exx.log, where xx is the storage group prefix number. Exx is the only log file which is modified, and it is the only log file to which log records can be added. Once it fills up, it is renamed to a filename that incorporates its generation number. In Exchange 2007, the generation number is an 8-digit hexadecimal number.
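    The naming scheme can be sketched as follows (illustrative only; I'm assuming uppercase hexadecimal, which is how the names typically appear on disk):

```python
def current_log_name(prefix: int) -> str:
    """The open log for a storage group, e.g. E00.log for the first one."""
    return "E%02X.log" % prefix

def closed_log_name(prefix: int, generation: int) -> str:
    """A filled log, renamed with its 8-digit hexadecimal generation number."""
    return "E%02X%08X.log" % (prefix, generation)

assert current_log_name(0) == "E00.log"             # first storage group
assert closed_log_name(0, 1) == "E0000000001.log"   # generation 1
assert closed_log_name(1, 0x1A2B) == "E0100001A2B.log"
```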

    Log Copying

    Log copying is a pull model. The Exchange store on the active copy (sometimes referred to as the source) creates log files normally.  Exx.log is always in use, and log records are being added to it, so that log file cannot be copied.  However, as soon as it fills up and is renamed with the next generation number, the Replication service on the passive side (sometimes referred to as the target) is notified through WFSN and copies the log file.

    On a move (scheduled outage) or failover (unscheduled outage), once the store is stopped, Exx.log becomes available for copying and the Replication service will try to copy it.  If the file is unavailable (perhaps because, in the case of CCR, the active node blue-screened), then you have what we call a "lossy failover."  It's called "lossy" because not all of the data (e.g., Exx.log and any other log files in the copy queue) could be copied. In this case, the administrator-configured loss setting for the storage group is consulted to see if the amount of data loss is within the acceptable range for mounting the database.
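    The mount decision on a lossy failover reduces to comparing the estimated loss against the configured limit; here's a sketch with made-up numbers (the real setting is richer than a single count):

```python
def can_mount_after_lossy_failover(lost_log_count: int,
                                   max_acceptable_loss: int) -> bool:
    """Mount the database only if the estimated loss is within the limit."""
    return lost_log_count <= max_acceptable_loss

assert can_mount_after_lossy_failover(2, max_acceptable_loss=3)      # mount
assert not can_mount_after_lossy_failover(5, max_acceptable_loss=3)  # stay dismounted
```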

    Log Verification

    Log files are copied by the Replication service to an Inspector directory. The idea is that we want to look at the log files and make sure they are correct.  There are physical checksums to be verified, as well as the logical properties of the log file (for example, its signature is checked to make sure it matches the database).  The intention is that once a log file is inspected, we have a high degree of confidence that replay will succeed.

    If there is an inspection failure, the log file is recopied.  This is to try to deal with any network issues that might have resulted in a non-valid log file.  If the log file can't be copied successfully, then a re-seed is going to be required.
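    The recopy-then-reseed decision can be sketched like this (a toy model, not the actual service logic; the function names and the two-attempt limit are my assumptions):

```python
def inspect_with_retry(copy_log, inspect, max_attempts=2):
    """Copy a log file and inspect it; on inspection failure, recopy
    and try again. If the file still fails inspection after the last
    attempt, signal that a re-seed will be required."""
    for attempt in range(max_attempts):
        data = copy_log()       # pull the log file from the source again
        if inspect(data):       # checksum and logical verification
            return "ok"
    return "reseed-required"
```

    The recopy exists to absorb transient network corruption; only a persistent failure escalates to a re-seed.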

    After a log file is successfully inspected, it is moved to the proper log directory where it becomes available for replay.

    Log Replay

    As log files are copied and inspected, a log re-player applies the changes to the database. This is actually a special recovery mode, which is different from the replay performed by Eseutil /r.  Among other differences, the undo phase of recovery is skipped.  There's a little more to this, but I won't go into it in this blog.

    If possible, log files are replayed in batches.  We'll wait a little bit of time for more log files to appear, and that's because replaying several log files together improves performance.

    Monitoring the Replication Pipeline

    Now let's look at how the Get-StorageGroupCopyStatus cmdlet reflects the status of the different phases in the pipeline.  If you run this cmdlet, some of the information that is returned can be used to track the status of the replication pipeline:

    • LastLogCopyNotified is the last generation that was seen in the source directory.  This file has not even been copied yet, but it's the last file that the Replication service saw appear in this directory that the store created.
    • LastLogCopied is the last log file that was successfully copied into the Inspector directory by the Replication service.
    • As a log file is validated and moved from the Inspector directory to its target log file directory, LastLogInspected is updated.
    • Finally, as the changes are applied to the database, LastLogReplayed is updated.

    The following figure illustrates the replication pipeline with these values shown:

    Replication Pipeline with Status Shown 

    These numbers are also available in Performance Monitor.

    Looking at the Replication Pipeline figure once more, we have our database which is modified by the store.  That generates a log file.  The Replication service sees the log file created, and updates LastLogCopyNotified.  It copies the log file to the Inspector directory and updates LastLogCopied. After inspecting the log file, it is moved to the log directory used by the Replication service for the storage group copy, and then LastLogInspected is updated.  Finally, the changes are applied to the copy of the database and LastLogReplayed is updated.  And these two databases now have these changes in common.
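    Here's a toy Python model of those four counters as a generation moves through the pipeline (the queue-length arithmetic is my own simplification, not the exact Performance Monitor definition):

```python
class CopyStatus:
    """Toy model of the four Get-StorageGroupCopyStatus pipeline counters."""
    def __init__(self):
        self.last_log_copy_notified = 0   # seen in the source directory
        self.last_log_copied = 0          # copied into the Inspector directory
        self.last_log_inspected = 0       # verified and moved to the log directory
        self.last_log_replayed = 0        # applied to the passive database

    def notify(self, gen):    self.last_log_copy_notified = gen
    def copied(self, gen):    self.last_log_copied = gen
    def inspected(self, gen): self.last_log_inspected = gen
    def replayed(self, gen):  self.last_log_replayed = gen

    def copy_queue_length(self):
        # generations seen on the source but not yet inspected on the target
        return self.last_log_copy_notified - self.last_log_inspected

    def replay_queue_length(self):
        # generations inspected but not yet applied to the database copy
        return self.last_log_inspected - self.last_log_replayed
```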

    Cluster Continuous Replication and Failovers

    Let's talk about failover in a CCR environment.  The Cluster service's resource monitor keeps tabs on the resources in the cluster.  Keep in mind that failure detection is not instantaneous.  Depending on the type of failure, it could be a fraction of a second to several seconds before the failure is noticed.

    Failover behavior is dependent on which resource(s) failed:

    • In the case of the failure of an IP address or network name resource, the behavior is to assume that a machine, or network access to a machine, has failed, and the services are moved over from the active node to the passive node.
    • If Exchange services fail or timeout, they are restarted on the same node, and failover does not occur.
    • Should a database fail, or should a database disk go offline, it will not trigger failover. The reason for this is that you now have the ability to have as many as 50 databases on a single mailbox server, including a clustered mailbox server. Moving all of the databases because one database failed would result in a lot of downtime for the storage groups/databases that are still running.

    Lossy Failovers

    A Move-ClusteredMailboxServer operation is called a "handoff," or a scheduled outage.  This is something an administrator does when they need to move the clustered mailbox server from one node to the other. A failover, often referred to as a lossy failover, is an unscheduled outage.

    Consider the example of a CCR cluster. The active and passive are running along normally, and then suddenly the active node dies and goes offline. Because it is offline, the passive node cannot copy log files from it. Once the passive is the active, it starts making modifications to the database.  The problem that occurs here is that without knowledge of the log files that were on Node 1, Node 2 starts generating log files with the same generation number.  But of course, these files have different content.

    So what happens when Node 1 comes back online?  Node 1 will come online as the passive, and it will want to copy log files from Node 2.  But you've now got two different log files with the same generation number, and potentially conflicting modifications.  It literally could be the case that the modifications made on Node 1 before it died are the complete opposite of the modifications made on Node 2 after Node 1 died.

    In this case, the log files have different content, the databases are different, and the storage group copies are in a state of divergence.

    Divergence

    Divergence is a case where the copy of the data has information that is not in the original. We expect the copy will run behind the original a little bit in time.  So the original will have more data than the copy. If the copy has more data, or different data from the original, then we are in a state of divergence; the diverged data may be in the database, or it may be in the log files.

    A lossy failover is always going to produce divergence. You can also get into a diverged state if "split-brain" syndrome happens in the cluster. Split brain is the condition where all network connectivity between the nodes is lost, and both nodes believe they are the active node. In this case, the Store is running on both nodes, and both nodes are making changes to their copies of the database. This means that, even though clients might only be able to connect to the Store on one of the nodes, or even if clients cannot connect to either of the Stores/nodes, background maintenance will still be occurring, and that is a logged operation.  In other words, even if the Store is isolated from the network, logged physical changes to the database can and do occur.

    Divergence can also be caused by administrator action. Remember that the recovery logic used by the Replication service is different from Eseutil /r.  So if an administrator went to the passive node and ran Eseutil /r, they will end up in a diverged state.  The same is true if an administrator performs an offline defragmentation of the active or the passive copy.

    Detecting Divergence

    To deal with divergence, we first need to be able to reliably detect it, so we can then correct it. Detecting divergence is the job of the Replication service.  Divergence checking runs when the first log file is copied by the Replication service. It compares the last log file on the passive that was copied by the Replication service with its equivalent on the active node. If the files are the same, then we can continue copying log files.

    Every log file has a header and the header contains not only the creation time of the log file, but also the creation time of the previous log file in the sequence.  This means that all log files are linked together by a chain of modification times which allows us to know that we have the correct set of log files.
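    That chain check can be sketched as follows (a simplified model where each log header is reduced to a (creation_time, previous_creation_time) pair; the real headers contain much more):

```python
def chain_is_intact(logs):
    """`logs` is an ordered list of (creation_time, prev_creation_time)
    pairs, one per log header. Each log must record the creation time of
    its predecessor, linking the sequence into an unbroken chain."""
    for prev, cur in zip(logs, logs[1:]):
        if cur[1] != prev[0]:
            return False    # a log from a different sequence slipped in
    return True
```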

    The last thing we do, before replacing Exx.log, is make sure that the log file replacing it is a superset of its data.
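    Here's a simplified sketch of the backwards search for the first diverged log file (log contents are modeled as plain strings keyed by generation number; the real comparison is of course against on-disk files):

```python
def first_diverged_generation(active_logs, passive_logs):
    """Each argument maps generation number -> log file content.
    Walk backwards from the highest generation on the passive until a
    log matches its counterpart on the active; the generation above the
    match is the first diverged log. Returns None if nothing diverged."""
    for gen in sorted(passive_logs, reverse=True):
        if active_logs.get(gen) == passive_logs[gen]:
            first = gen + 1
            # If the passive has nothing above the match, it's merely
            # behind the active, not diverged.
            return first if first in passive_logs else None
    # No log matched at all: everything on the passive diverged.
    return min(passive_logs) if passive_logs else None
```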

    Correcting Divergence

    The first thing to note about divergence is that a re-seed will always correct it. You can always re-seed a storage group copy to correct everything.  But the problem is that this is a very expensive operation when dealing with large databases and/or constrained networks. So we tried to come up with some solutions. Looking at the common case, we expect a lossy failover in which only a few log files are lost; for example, the passive node was 1, 2, or 3 log files behind the active (that is, only a few log files failed to copy). The solutions we implemented include decreasing the log file size, so that the amount of data loss was smaller, and implementing a new feature called Lost Log Resilience.

    Lost Log Resilience

    Lost Log Resilience (LLR) is a new ESE feature in Exchange 2007.  Remember, with write-ahead logging, the log record is written to disk before the modified database page is written to disk. Normally, as soon as the log record is written, it becomes possible that the page can be written to the database file. LLR introduces the ability to force the database modification to be held in memory until some more log generations have been created.

    LLR only runs on the active copy of a database; if you analyze a passive copy's database header, you'll see that its database is always up-to-date.

    As an example, if a database page is modified, and if the log record describing the modification is written in log generation 10, we might enforce something such that the database cannot be modified until log generation 12 is created.  Essentially, we're forcing the database on disk to remain a few generations behind the log files we created.

    Log Stream Landmarks

    For readers familiar with ESE logging recovery, LLR introduces a new marker within the log stream.  You have the Checkpoint (the minimum generation that is required - the first log file required for recovery).  And now at the other end of the log stream, there are two markers:

    • Waypoint, or the maximum log required.  This is the log file that is required for recovery. Without it, even with all of the log files up to this point, you cannot successfully recover.
    • Committed log, which is further out. This is created data which is not technically needed for recovery of your database. However, if you lose the logs, you have lost some modifications.
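    The waypoint constraint from the earlier example (a page logged in generation 10 can't be flushed until generation 12 exists) boils down to a one-line check; this is a conceptual sketch, and the llr_depth parameter name is mine:

```python
def can_flush_page(logged_in_gen, newest_gen, llr_depth):
    """With Lost Log Resilience, a page whose modification was logged in
    generation `logged_in_gen` may only be written to the database file
    once `llr_depth` further log generations have been created. This is
    what keeps the database on disk a few generations behind the logs."""
    return newest_gen >= logged_in_gen + llr_depth
```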

    Recovering from Divergence

    It's through LLR that we can recover from divergence. The divergence correction code that uses this runs inside the Replication service on the passive.  After realizing there is a divergence, the first thing it does is find the first diverged log file.  It starts with the highest generation number and works backwards until it finds a log file that is exactly the same as its counterpart on the active. The log file above the one that matches is the first diverged log file.

    The nice thing is, if the diverged log file is not required by the database, then we can just throw it away.  We'll throw it away and copy the new data from the active.  If the diverged file is required by the database, then re-seed will be required to recover from divergence.

    Loss Calculations

    When you failover in a CCR environment, there is a loss calculation that occurs.  For example, you just failed over in your CCR cluster, and you know Exx.log was not copied, so there was some loss.  Now you want to quantify the loss.  There are two numbers that you use for this.

    Remember, the Replication service keeps track of the last log that the store generated.  But the store, just in case the Replication service is down, also updates, in the cluster database, the last log generation that it created.  When you run the Get-StorageGroupCopyStatus cmdlet, LastLogGenerated represents the maximum of those two numbers.

    So when you do a failover, we compare the last log generation with the last log that was copied.  The gap between them is how many log files you just lost.  The lossy-ness setting (AutoDatabaseMountDial) on your storage group is compared to that number to determine whether the database can mount automatically.

    If you cannot mount a specific storage group, the Replication service will run on the active (which was the old passive).  It will "wake up" every once in a while, try to contact the passive (which was the old active), and copy the missing log files.  If it can copy enough log files to reduce the "lossy-ness" to an acceptable amount, then the storage group will come online.

    There are three settings for AutoDatabaseMountDial: Lossless (0 logs lost); GoodAvailability (3 logs lost) and BestAvailability (default; 6 logs lost).
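    The mount decision is essentially a comparison of lost generations against the dial's threshold. Here's a sketch using the thresholds above (the function and dictionary names are mine):

```python
# Maximum number of lost log generations each dial setting tolerates.
DIAL_THRESHOLDS = {
    "Lossless": 0,
    "GoodAvailability": 3,
    "BestAvailability": 6,   # the default
}

def can_auto_mount(last_log_generated, last_log_copied, dial):
    """Compare the gap between the last log the store generated and the
    last log the passive copied against the AutoDatabaseMountDial setting."""
    lost = last_log_generated - last_log_copied
    return lost <= DIAL_THRESHOLDS[dial]
```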

    Say, for example, you have the dial set to Lossless, and then for some reason, the active node dies. The passive node will become the active node, but the database won't come online. Should the original active appear, its log files will be copied, and one-by-one, the storage groups will start coming online.

    Transport Dumpster

    Finally, there is also the Transport Dumpster.  After a lossy failover, the Replication service can look at the time stamp on the last log file it copied. And then it can ask Transport Dumpster to redeliver all email since that time stamp. So, although you might lose data representing some actions (for example, making messages read/unread, moving messages, accepting meeting requests), all of the incoming mail can be re-delivered to the clustered mailbox server.
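    Conceptually, the redelivery request is just a timestamp filter over retained mail (a toy model; timestamps here are plain numbers, and the dumpster is modeled as a list of (timestamp, message) pairs):

```python
def redeliver_after(dumpster, last_copied_log_time):
    """Return the messages the Transport Dumpster would redeliver:
    everything received since the timestamp of the last log file the
    passive copy managed to pull before the failover."""
    return [msg for ts, msg in dumpster if ts > last_copied_log_time]
```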

  • Exchange 2007 - Continuous Replication Architecture and Behavior

    I've previously blogged about the two forms of continuous replication that are built into Exchange 2007: Local Continuous Replication (LCR) and Cluster Continuous Replication (CCR).  In those blogcasts, you can see replication at work, but we really don't get into the architecture under the covers. So in this blog, I'm going to describe exactly how replication works, what the various components are, and what the replication pipeline looks like.

    As you may have heard or read, continuous replication is also known as "log shipping." In Exchange 2007, log shipping is the process of automating the replication of closed transaction log files from a production storage group (called the "active" storage group) to a copy of that storage group (called the "passive" storage group) that is located on a second set of disks (LCR) or on another server altogether (CCR). Once copied to the second location, the log files are then replayed into the passive copy of the database, thereby keeping the storage groups in sync with a slight time lag.

    In simple terms, log shipping follows these steps:

    1. Seed the destination with a copy of the source database to create the target database.
    2. Monitor the source log directory for new logs by subscribing to Windows file system notification events for the directory.
    3. Copy any new log files to the destination's Inspector directory.
    4. Inspect the copied log files.
    5. After inspection passes, move the log files to the destination log directory and replay them into the copy of the database.
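    Steps 3 through 5 for a single closed log file can be sketched like this (directories are modeled as plain dicts mapping file name to content; this is a conceptual sketch, not the service's implementation):

```python
def ship_one_log(name, source_dir, inspector_dir, target_dir, inspect):
    """Move one closed log file through copy, inspect, and replay-ready
    stages. Returns True if the log reached the target log directory."""
    inspector_dir[name] = source_dir[name]        # step 3: copy to the Inspector directory
    if not inspect(inspector_dir[name]):          # step 4: inspect the copy
        inspector_dir.pop(name)                   # a failed log would be recopied or set aside
        return False
    target_dir[name] = inspector_dir.pop(name)    # step 5: move to the log directory for replay
    return True
```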

    Microsoft Exchange Replication Service

    Exchange 2007 implements log shipping using the Microsoft Exchange Replication Service (the "Replication service"). This service is installed by default on the Mailbox server role. The executable behind the Replication service is called Microsoft.Exchange.Cluster.ReplayService.exe, and it's located in <install path>\bin. The Replication service is dependent upon the Microsoft Exchange Active Directory Topology Service. The Replication service can be stopped and started using the Services snap-in or from the command line. The Replication service is also configured to be automatically restarted in case of a failure or exception.

    Running Replication Service in Console Mode

    The Replication service can be started as a service or as a console application. Note, however, that running the service as a console application is strictly for troubleshooting and debugging purposes; this is not something that would be done as a regular administrative task. In console mode, the replication process checks for two parameters: -console and -noprompt.

    -Console

    If the -console switch is specified, or no default parameter is provided, the process checks to see whether it was started as a service or as a console application. This is done by looking at the SIDs in the token of the process. If the process has a service SID, or no interactive SID, the process is considered to be running as a service.

    -NoPrompt

    By default, a shutdown prompt is on. You use the -noprompt switch to disable the shutdown prompt.

    The Replication Service Internals

    The Replication service is a managed code application that runs in the Microsoft.Exchange.Cluster.ReplayService.exe process.

     

    Replication Service Registry Values

    The Replication service keeps track of storage groups that are enabled for replication by keeping that information in the registry. The storage group replica information is stored in the registry under the object GUID of the storage group.

    State 

    The replay state of a storage group that has continuous replication enabled is stored at HKLM\Software\Microsoft\Exchange\Replay\State\GUID.

    StateLock

    Each replica state is controlled via a StateLock to make sure that access to the state information is gated. As its name implies, StateLock is used to manipulate a state lock from inside the Replication service. There are two StateLocks created per storage group: one for the database file and one for the log files. These lock states are stored at HKLM\Software\Microsoft\Exchange\Replay\StateLock\GUID.

    Replication Service Diagnostics Key

    The Replication service stores its configuration information regarding diagnostics at HKLM\System\CCS\Services\MSExchange Repl\Diagnostics.

    You can query the current diagnostic level for the Replication service using an Exchange Management Shell command: get-EventLogLevel -Identity "MsExchange Repl".  This will also return the diagnostic level for the Replication service's Exchange VSS Writer, which is another subject altogether (maybe something for a future blog).

    Replication Service Configuration Information in Active Directory

    The Replication service uses the msExchHasLocalCopy attribute to identify which storage groups are enabled for replication in an LCR environment. msExchHasLocalCopy is set at the database level, as well.

    In a CCR environment, the Replication service uses the cluster database to store this information.

    The Replication service uses an algorithm to search Active Directory for replica information:

    1. Find the Exchange server object in Active Directory using the computer name. If there is no server object, then return.
    2. Enumerate all storage groups that are on this Exchange server.
    3. For each storage group with msExchHasLocalCopy set to true:

    a. Read the msExchESEParamCopySystemPath and msExchESEParamCopyLogFilePath attributes of the storage group.

    b. Read the msExchCopyEdbFile attribute for each database in the storage group.

    Replication Components

    The Replication Service implements log shipping by using several components to provide replication between the active and passive storage groups.

    Replication Service Object Model

    The Replication service is responsible for creating an instance of the replica associated with a storage group. The object model below shows the different objects that are created for each storage group copy.

    Continuous Replication - Replication Object Model

    In a CCR environment, the Replication service runs on both the active node and the passive node.  As a result, both an active and a passive replica instance will be created.

    Copier 

    The copier is responsible for copying closed log files from the source to the destination. This is an asynchronous operation in which the Replication service continuously monitors the source. As soon as a new log file is closed on the source, the copier copies it to the Inspector location on the target.

    Inspector

    The inspector is responsible for verifying that the log files are valid. It checks the destination inspector directory on a regular basis. When a new log file is available, it will be checked (checksummed for validity) and then copied to the database subdirectory. If a log file is found to be corrupt, the Replication service will request a re-copy of the file.

    LogReplayer

    The log replayer is responsible for replaying log files into the passive database. It also has the ability to batch multiple log files into a single batch replay. In LCR, replay is performed on the local machine, whereas with CCR, replay is performed on the passive node. This means that the performance impact of replay is higher for LCR than for CCR.

    Truncate Deleter

    The truncate deleter is responsible for deleting log files that have been successfully replayed into the passive database. This is especially important after an online backup is performed on the active copy, because online backups delete log files that are not required for recovery of the active database. The truncate deleter makes sure that log files that have not yet been replicated and replayed into the passive copy are not deleted by an online backup of the active copy.

    Incremental Reseeder

    The incremental reseeder is responsible for ensuring that the active and passive database copies are not diverged after a database restore has been performed, or after a failover in a CCR environment.

    Seeder

    The seeder is responsible for creating the baseline copy of a storage group that is used to start replay processing. The Replication service performs automatic seeding for new storage groups.

    Replay Manager

    The replay manager is responsible for keeping track of all replica instances. It creates and destroys replica instances on demand based on the online status of the storage group. The configuration of a replica instance is intended to be static; therefore, when a replica instance's configuration is changed, the replica will be restarted with the updated configuration. In addition, during shutdown of the Replication service, the configuration is not saved; as a result, each time the Replication service starts, it has an empty replica instance list. When the Replication service starts, the replay manager discovers the storage groups that are currently online to create a "running instance" list.

    The replay manager periodically runs a "configupdater" thread to scan for newly configured replica instances. The configupdater thread runs in the Replication service process every 30 seconds. It will create and destroy a replica instance based on the current database state (e.g., whether the database is online or offline). The configupdater thread uses the following algorithm:

    1. Read instance configuration from Active Directory
    2. Compare list of configurations found in Active Directory against running storage groups/databases
    3. Produce a list of running instances to stop and a list of configurations to start
    4. Stop running instances on the stop list
    5. Start instances on the start list
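    The diffing step at the heart of that algorithm can be sketched as follows (names are mine; configurations and running instances are modeled as simple sets of storage group identifiers):

```python
def plan_config_update(configured, running):
    """Compare the replica configurations found in Active Directory
    (`configured`) with the currently running instances (`running`) and
    produce a (to_stop, to_start) pair, as the configupdater thread does."""
    to_stop = sorted(set(running) - set(configured))    # running but no longer configured
    to_start = sorted(set(configured) - set(running))   # configured but not yet running
    return to_stop, to_start
```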

    Effectively, therefore, the replay manager always has a dynamic list of the replica instances.

    Replication Pipeline

    The replication pipeline implemented by the Replication service is shown below. In an LCR environment, the source database and target database are on the same machine. In a CCR environment, the source and target database are on different machines (different nodes in the same failover cluster).

    Continuous Replication - Replication Pipeline

    Log Shipping and Log File Management

    The Replication service uses an Extensible Storage Engine (ESE) API to inspect and replay log files that are copied over from the active storage group to the passive storage group. Once the log files are successfully copied to the inspector directory, the log inspector object associated with the replica instance verifies the log file header. If the header is correct, the log file will be moved to the target log directory and then replayed into the passive copy of the database.

    Log Shipping Directory Structure

    The Replication service creates a directory structure for each storage group copy. This per-storage group directory structure is identical in both LCR and CCR environments, with one exception: in a CCR environment, a content index catalog directory is also created.

    Inspector Directory

    The Inspector directory contains log files copied by the Copier component. Once the log inspector has verified that a log file is not corrupt, the log file will be copied to the storage group copy directory and replayed in the passive copy of the database.

    IgnoredLogs Directory

    The IgnoredLogs directory is used to keep valid files that cannot be replayed for any reason (e.g., the log file is too old, the log file is corrupt, etc.). The IgnoredLogs might also have the following subdirectories:

    E00OutofDate

    This is the subdirectory that holds any old E00.log file that was present on the passive copy at the time of failover. An E00.log file is created on the passive if it was previously running as an active. An event 2013 is logged in the Application event log to indicate the failure.

    InspectionFailed

    This is the subdirectory that holds log files that have failed inspection. An event 2013 is logged when a log file fails inspection. The log file is then moved to the InspectionFailed directory. The log inspector uses Eseutil and other methods to verify that a log file is physically valid. Any exception returned by these checks will be considered as a failure and the log file will be deemed to be corrupt.

    Well, there you have it.  I hope you found this useful and informative.

  • Blogcast: Cluster Continuous Replication

    We've posted my blogcast on Cluster Continuous Replication on the Exchange Team blog today.  Enjoy!
  • Exchange Server Troubleshooting Assistant 1.0 Available!

    Earlier today, we released the Microsoft Exchange Server Troubleshooting Assistant (ExTRA) 1.0 to the Web. This new tool is a union of Exchange Server Performance Troubleshooting Analyzer (ExPTA), the Exchange Server Disaster Recovery Analyzer (ExDRA), and a new tool called the Exchange Server Mail Flow Analyzer (ExMFA). Now Exchange admins have a single tool for troubleshooting performance, database and mail flow issues.

    ExTRA includes the following troubleshooting functionality.

    • Performance Troubleshooter: Includes the post 1.1 ExPTA functionality. New features include the new FCL analysis capability to troubleshoot problems with back-ups in RPC requests.
    • Database Recovery Management: Also includes the post 1.1 ExDRA functionality. New features include the wizards to reset transaction log generation and repair databases.
    • Mail Flow Troubleshooter: First release. Designed to help an Exchange admin to tackle major mail flow problems such as non-delivery reports, queue back-ups and slow deliveries and identify the root causes.

    For more information, and to obtain ExTRA, visit the following pages:

  • TechNet Forums for Exchange Server 2007

    Got a question about Exchange Server 2007?  Check out the forums at http://forums.microsoft.com/TechNet/default.aspx?ForumGroupID=235&SiteID=17, which include discussions about deployment, high availability, unified messaging, transport, development, and more!

    Hope to see you there!

  • Got Content? New and Updated Exchange 2003 Content Available!

    Cathy Anderson just posted a nice article on the Exchange Team blog that the Exchange 2003 Technical Library was just updated with many changes:

    The following topics are new:

    The following topics have had major updates (for example: procedures changed, support policies changed, best practices guidance changed):

    The following topics have had minor updates (for example: spelling errors corrected, URLs updated):

    The following documentation downloads are new:

    To read the most current version online of the Exchange Server 2003 documentation, see the Microsoft Exchange Server TechCenter. 

    Content Available Only in English

    The following documentation downloads have been updated:

  • Exchange Management Console in Exchange 2007

    Exchange MVP Henrik Walther posted a nice article on the all new Exchange Management Console in Exchange 2007.  I especially like Figure 4, which shows a side-by-side comparison between Exchange System Manager and the Exchange Management Console.

    Enjoy!

  • Exchange 2007 Cluster Continuous Replication: Can I Get a Witness?

    As of today, you can!  A file share witness, that is.  Let me start from the beginning.

    Exchange Server 2007 introduces a new feature called cluster continuous replication (CCR).  CCR combines the log shipping and replay functionality in Exchange 2007 with the failover functionality in the Microsoft cluster service (MSCS). Unlike previous versions of Exchange that also supported clustering, CCR does not require shared storage. Instead, each node in the cluster has its own locally connected storage (e.g., SAS, DAS, iSCSI). On the active node in the cluster is the production clustered mailbox server (CMS). On the passive node is a relatively up-to-date copy of the CMS that is created and maintained by log shipping and replay.

    Because CCR does not use shared storage for the Exchange data, we wanted to make sure we didn't need shared storage anywhere in the cluster. The only other resource in the cluster besides the CMS is the default cluster group, which contains the quorum resource. To eliminate the need for shared storage to host the quorum resource, Exchange 2007 supports the Majority Node Set (MNS) quorum.

    Traditionally, MNS is a three-node solution where one of the nodes is known as a "voter" node. One purpose of the voter node is to prevent a condition known as split brain syndrome, where each node in the cluster thinks it is in charge of the cluster.  Another purpose is to prevent a problem known as a partition in time. I won't go into detail here, but let's just say that from the cluster's perspective, both problems are tantamount to world chaos.

    But requiring three cluster nodes for a CCR solution does not exactly help reduce costs of ownership, which is one of the original themes for Exchange 2007. So enter a new type of MNS quorum - the MNS quorum with file share witness. The MNS quorum with file share witness means that CCR only requires two cluster nodes. The voter node requirement is replaced with the new file share witness variant to the MNS quorum.

    The Exchange team worked very closely with the Windows Cluster team to create a new type of quorum device based on the MNS quorum model. Instead of using a third voter node, a file share on a non-clustered system is used. We recommend that a share on a Hub Transport server be used.

    The Windows Cluster team delivered the new quorum device in the form of a QFE (hotfix), which is documented in Knowledge Base article 921181. The QFE, which requires Windows Server 2003 with Service Pack 1 or Windows Server 2003 R2 in order to be installed, actually contains two significant updates for MSCS:

    1. File share witness. The file share witness feature is an improvement to the current MNS quorum model. This feature lets you use a file share that is external to the cluster as an additional "vote" to determine the status of the cluster in a two-node MNS quorum cluster deployment.
    2. Configurable cluster heartbeats. The configurable cluster heartbeats feature enables you to configure cluster heartbeat parameters. This may help avoid unnecessary cluster failovers. These failovers occur because of a temporary network problem that may cause packets to be dropped or delayed. The configurable cluster heartbeats feature may help in an environment where cluster nodes are geographically dispersed.

    Knowledge Base article 921181 goes into a lot of detail on the new private MNS resource configuration properties that are added by the new file share witness feature, including MNSFileShare, MNSFileShareCheckInterval, and MNSFileShareDelay. It also contains details on configurable heartbeats, as well as step-by-step instructions for configuring heartbeats.

    Once you've obtained the QFE and after you've finished reading 921181, the process for creating an MNS quorum with file share witness for use with CCR is as follows:

    1. Before the cluster has been formed, install the QFE on each node, one at a time. Each node will need to be restarted after the installation in order to complete the installation. Once both nodes in the cluster have the QFE installed, proceed to step 2.
    2. Form a new cluster with an MNS quorum.
    3. Create and permission the file share. Log in as an administrator on the machine that is going to host the file share (we recommend that you use a Hub Transport server for the share) and perform steps 4, 5 and 6:
    4. Start a command session and make a directory to serve as the share. We recommend using the naming convention MNS_FSQ_DIR_<clusterName>.  This can be done with the following command: mkdir <shareDirectory>.  For example, mkdir C:\MNS_FSQ_DIR_EXCLUS.
    5. Create the share by running the following command (we recommend using the naming convention MNS_FSQ_<clusterName>): net share <shareName>=<shareDirectory> /GRANT:<ClusterServiceAccount>,FULL.  For example, net share MNS_FSQ_EXCLUS=C:\MNS_FSQ_DIR_EXCLUS /GRANT:DOMAIN\CLUSSVC,FULL.
    6. Permission the share by running the following command: cacls <shareDirectory> /G BUILTIN\Administrators:F <ClusterServiceAccount>:F.  For example, cacls C:\MNS_FSQ_DIR_EXCLUS /G BUILTIN\Administrators:F DOMAIN\CLUSSVC:F.
    7. Next, configure MNS to use the file share. The share name is stored in a private property on the MNS resource.  The name of the resource can be changed by an administrator, but the default is "Majority Node Set".  To set the property, run the following command from a command prompt: Cluster res "Majority Node Set" /priv MNSFileShare=<UNCPathToShare>.  For example, if the server name is E2K7 and the share name is MNS_FSQ_EXCLUS, the command would be:

      Cluster res "Majority Node Set" /priv MNSFileShare=\\E2K7\MNS_FSQ_EXCLUS

      When this command completes, a message is produced indicating that the resource must be restarted for the change to take effect.  The following command will accomplish this: Cluster group "Cluster Group" /move
    8. Perform the move command a second time to complete the configuration of the MNS file share witness. To verify that the property is set, run the following command: Cluster res "Majority Node Set" /priv
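
    For convenience, steps 4 through 6 above (which all run on the machine hosting the file share) can be collected into a single batch script.  This is just a sketch using the example names from above (EXCLUS, E2K7, DOMAIN\CLUSSVC are placeholders); substitute your own cluster name, share directory, and cluster service account:

    ```bat
    rem Run on the machine hosting the file share witness
    rem (we recommend a Hub Transport server)
    set SHAREDIR=C:\MNS_FSQ_DIR_EXCLUS
    set SHARENAME=MNS_FSQ_EXCLUS
    set CLUSACCT=DOMAIN\CLUSSVC

    rem Step 4: create the directory that will back the share
    mkdir %SHAREDIR%

    rem Step 5: share the directory, granting the cluster
    rem service account full access to the share
    net share %SHARENAME%=%SHAREDIR% /GRANT:%CLUSACCT%,FULL

    rem Step 6: set NTFS permissions on the directory itself
    cacls %SHAREDIR% /G BUILTIN\Administrators:F %CLUSACCT%:F
    ```

    Note that step 7 runs on a cluster node, not on the witness host, so it is not included in the script above.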

    Once the MNS quorum with file share witness is created and validated, you can configure the cluster heartbeat as desired using the instructions in Knowledge Base article 921181.

    Then, proceed with the installation of Exchange 2007 in a CCR environment. For detailed instructions on how to do that, see the online content at http://go.microsoft.com/fwlink/?LinkId=69434. Specifically, look under Operations | High Availability | Cluster Continuous Replication.

  • More Offline Content for Exchange 2003 Available

    Today we released even more of our Exchange 2003 guides in offline (Word document) format.  The following guides are now available:

    Enjoy!

  • Downloadable Offline Content for Exchange 2003 Available

    We released downloadable (.DOC files) versions of several Exchange 2003 guides yesterday.  If you've been waiting for offline content, now's your chance to get some!  The following guides were released in Microsoft Word format:

    Enjoy!

  • Exchange Server 2007 Beta 2 Documentation Download Available

    A downloadable compiled HTML Help (CHM) file containing the product documentation for Exchange 2007 Beta 2 is now available at http://www.microsoft.com/downloads/details.aspx?FamilyID=555f5974-9258-475a-b150-0399b133fede&DisplayLang=en. This is an offline version of the content at http://go.microsoft.com/fwlink/?LinkId=69434.

    As I mentioned in my previous blog entry on this subject, even if you don't have access to Exchange 2007, the content is still very valuable as a learning tool. For example, there is a glossary to get you up-to-speed on Exchange terminology, a great discussion of high availability, best practices for deployment, an extensive technical reference, and much more.

    And when you're done reading each page, feel free to use the Feedback control on the bottom of each page to give us your rating and comments.  We review every piece of feedback that is sent to us, so if you have an idea, suggestion, correction, or comment, don't be shy!  We'd love to hear it.

  • Exchange 2007 Beta 2 SDK Available

    Get it now at http://www.microsoft.com/downloads/details.aspx?FamilyID=8c32d5ee-a071-459e-843d-8ede0a9582b3&DisplayLang=en.

    This preliminary release of the Exchange 2007 SDK Documentation and Samples provides new and updated documentation and samples for building applications that use Exchange Server 2007 Beta 2. Use the Exchange 2007 SDK to help you develop collaborative enterprise applications with Exchange.

    The online version of the Exchange 2007 SDK is at http://go.microsoft.com/fwlink/?LinkID=66989.

  • Exchange Server 2007 Product Documentation Available Online

    Check it out - http://go.microsoft.com/fwlink/?LinkId=69434.

    I'm really excited about this, as I have been working on some of this content for more than a year.  It's great to see that it is publicly available.

    Even if you don't have access to Exchange 2007, the content is still very valuable as a learning tool. For example, there is a glossary to get you up-to-speed on Exchange terminology, a great discussion of high availability, best practices for deployment, an extensive technical reference, and much more.

    And when you're done reading each page, feel free to use the Feedback control on the bottom of each page to give us your rating and comments.  We review every piece of feedback that is sent to us, so if you have an idea, suggestion, correction, or comment, don't be shy!  We'd love to hear it.

  • The Future of Exchange Server Public Folders

    We get a lot of questions on Microsoft's strategy for public folders in Exchange Server.  For those of you who are still wondering what the fate of public folders is, be aware that:

    1. Public folders are present and FULLY SUPPORTED in Exchange 2007.
    2. The next major version of Exchange Server after Exchange 2007 will also likely include public folders.

    This may or may not contradict what you have heard previously. For details on our public folder strategy in Exchange Server, check out this interview with Terry Myerson, who is the General Manager of the Exchange Product Group.  Around 7:50 into the video, Terry talks about public folders.

  • Exchange 2007 Demo Videos Available

    We released several demo videos for Exchange 2007.  Check out http://msexchangeteam.com/archive/2006/06/20/428030.aspx for more details and links.

  • Exchange Server 2003 Tips, Tricks and Shortcuts

    I delivered this deck recently at Tech Ed 2006.  This deck has been updated for 2006, but also includes tips/tricks from previous years.

    Enjoy!