SSL certificates are issued for a fixed period, typically one, two, or more years; eventually they expire and need to be renewed.
The renewal process involves generating a fresh CSR (Certificate Signing Request) on one of your Exchange Client Access servers. This is then sent to a certification authority (for example, VeriSign), which signs the request and returns a valid SSL certificate.
Creating a Certificate
In order to generate a CSR file on a Client Access Server running Windows Server 2008, open the Exchange Management Shell and type the following command:
New-ExchangeCertificate -GenerateRequest -Path c:\myReq.csr -KeySize 1024 -SubjectName "c=GB, s=Middx, o=MyCompany, ou=IT, cn=mail.mydomain.com" -PrivateKeyExportable $True
The string that you provide after the -SubjectName switch is very important and is made up of the following values:

c – the two-letter country code (e.g. GB)
s – the state or county (e.g. Middx)
o – the organization (company) name
ou – the organizational unit (e.g. IT)
cn – the common name, which must match the FQDN that clients use to connect (e.g. mail.mydomain.com)
This will produce a file called myReq.csr in the root of the C: drive on the CAS server. This file should be sent to your certification authority.
When the CSR has been processed you will be provided with a CRF (Certificate Response File), which looks like the following (it will be returned to you via email):
-----BEGIN CERTIFICATE-----JJkbbssCCAuucgAwIBAgIQcyE6jZgwnFgAq0d7onjMFzANBgkqhkiG9w0BAQUFADCBzj
EEWNNNEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJQ2FwZSBUb3du
MR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UECxMfQ2VydGlmaWNhdGlv
biBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhhd3RlIFByZW1pdW0gU2VydmVyIENB
MSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNlcnZlckB0aGF3dGUuY29tMB4XDTA4MDcxMTE2M
DU0OFoXDTEwMDcyNjE1NTcxN1owgYYxCzAJBgNVBAYTADDDDDDjujjjjjw87666cvNxMJkeDE
PMA0GA1UEBxMGTG9uZG9uMSswKQYDVQQKEyJMb25kb24gQm9yb3VnaCBvZiBIb3Vuc2xvdyBD
b3VuY2lsMQswCQYDVQQLEwJJVDEcMBoGA1UEAxMTb3dhLmhvdW5zbG93Lmdvdi51azCBnzANB
gkqhkiG9w0BAQEFAAOBjQAwgYkCgYEAolvn0lT1W+cdRFjqOn56tPwHNULjq7LDA/G4ZAIVf9
cl7y4jLKR/6/3x2O/1st8OEcFDFKElmn8dzoA3pG14JL8ZmBTh0RLxtGRw9fHB2ARuYplagoD
LqgA5mzEPo3a3wCKboTaEwKwoeQ9dAp2bGcvs4lMPptI48eoSDhFs/u0CAwEAAaOBpjCBozAd
BgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwQAYDVR0fBDkwNzA1oDOgMYYvaHR0cDovL
2NybC50aGF3dGUuY29tL1RoYXd0ZVByZW1pdW1TZXJ2ZXJDQS5jcmwwMgYIKwYBBQUHAQEE
JjAkMCIGCCsGAQUFBzABhhZodHRwOi8vpgthennn/ss88877a222129tMAwGA1UdEwE
B/wQCMAAwDQYJKoZIhvcNAQEFBQADgYEAuYSyeOUx53TkjCfol2psVY3E9uzMb6P6nrgs2U
uG8BBQlshPkv+te8G2JpaaaaCmcrCV8J0WQN8mRm5443vbdasafJTBxB2PAZfl3GSWEgDIH
q/lg3IOxG43YK4qDWYTu3j/Ngymq8g/d+0VrqkF/AmXWnGMGIQmE3GUnUDXeZKOR8SM=
-----END CERTIFICATE-----
You should copy the CRF (including the BEGIN CERTIFICATE and END CERTIFICATE lines) into a text file called owa.txt and then rename the file to owa.cer. You should then copy this file to a drive on the CAS server where you are working.
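Before copying the file across, it is worth sanity-checking that the saved .cer file parses as a certificate. A quick way to do this from a command prompt (certutil ships with Windows; the file name assumes the owa.cer created above) is:

```powershell
# Dump the certificate details; if the Base64 block was pasted
# incorrectly, certutil reports a decode error instead of the
# subject, issuer, and validity dates.
certutil -dump owa.cer
```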
Installing a Certificate (CAS)
Firstly you need to remove the existing (expired) SSL certificate from your Client Access Server. In order to accomplish this you need to open the Exchange Management Shell and then type in the following command:
Get-ExchangeCertificate | fl | Out-File -FilePath c:\certs.txt
This will create a text file in the root of the C: drive called certs.txt, which contains the details of every certificate installed on the server.
The key property that identifies the certificate you wish to replace is the Not After field, which is essentially the expiry date; it should already have passed or be very close. Make a note of the thumbprint (the long number after the Thumbprint field) and then type in the following command:
Remove-ExchangeCertificate -Thumbprint <thumbprint that you noted down>
A tip here is to copy the thumbprint from the text file above and paste it into the PowerShell window. When you have typed the command and pressed Enter you will be presented with the confirmation message:
Are you sure you want to perform this action?
Remove certificate with thumbprint 138B6EC5AAE868F495ECCBDA05C1F011B08A7CD3?
[Y] Yes [A] Yes to All [N] No [L] No to All [S] Suspend [?] Help (default is "Y"):
Confirm the action by entering A and then press ENTER. You are now ready to import the new certificate onto the Client Access Server. In order to do this type in the following command within the PowerShell window (ensure that the path you specify to the certificate file matches the location where you placed the new certificate in the earlier steps):
Import-ExchangeCertificate -Path e:\certificates\owa.cer -FriendlyName "owa.mydomain.com"
You should then be presented with the following output (again here you will need to make a note of the thumbprint):
Thumbprint Services Subject
———- ——– ——-
B52842F7408772B7151FF74FDAE914EA7B59B53A ….. CN=owa.mydomain.com,…
Now that the certificate has been imported into the certificates repository you need to enable it for OWA. In order to do this run the following command in the PowerShell window:
Enable-ExchangeCertificate -Thumbprint B52842F7408772B7151FF74FDAE914EA7B59B53A -Services IIS
The new certificate should now be installed; you can confirm this by running the following command:
Get-ExchangeCertificate
The output of which should be:
B52842F7408772B7151FF74FDAE914EA7B59B53A …W. CN=owa.mydomain.com,…
The key things to note here are the W under Services (this signifies that the certificate has been enabled for OWA via IIS) and that the thumbprint matches what you typed in previously.
This is a case study of a customer who wanted to migrate from Exchange Server 2000/2003 to Exchange Server 2007 while consolidating a number of remote sites into a centralised hub site. The main premise of this engagement was to ascertain Microsoft's best practice methodologies for migrating very large mailboxes, moving large amounts of data with the least risk of data loss, and consolidating globally dispersed messaging environments.
I'd like to send a HUGE THANKS to my peer, Ray Khan, for putting all of this information together; he was the one involved in the project and kindly presented this awesome case study, which will hopefully encourage some other customers!
Problem Statement
This customer needed to migrate all of their regional site data into a centralised site in the UK with the least amount of downtime. They had users with very large mailboxes in China and Japan, of up to 40 GB per mailbox, although there were only 30 to 40 users at each of these sites. In total, around 600 GB of mailbox data needed to be migrated from the regional sites to the UK.
The objective was to archive most of the data from these mailboxes onto a new Enterprise Vault platform, so once on Exchange Server 2007 they would only have around two gigabytes of "online" mail.
They had reviewed several migration approaches but wanted our advice on these approaches and any others that have been proven in other environments.
As part of their upgrade project they were also investigating all other messaging platforms, including Microsoft's BPOS solution, partner and other third-party solutions, and the option of continuing to run on-premises. This aspect was out of scope for this engagement.
Infrastructure before Upgrade
The company has just under two thousand employees. Over 75% of their staff are located in the UK and Switzerland; the other 25% are located in the major financial centres around the world, including China, Japan, and the US.
Their Exchange server organization exists inside a single Active Directory Forest running Exchange Server 2003 SP2 in native mode. However there were several Exchange 2000 SP3 mailbox servers situated at some of their key branch offices such as China, Japan, UAE, Canada and Uruguay.
Exchange Estate Summary
Exchange 2003 Integrations
Many of their users have very large mailboxes, averaging 2 to 4 GB, but in some regions where no archiving exists there are users with mailboxes of up to 40 GB.
Our client estate is based on Windows XP SP3 and Outlook 2003 with SP3 (new workstations with 2GB RAM).
Networks (Global MPLS WAN)
Target Architecture
They planned to transition their entire organization to Exchange Server 2007 SP2 in a consolidated and centralised architecture hosted in their new Global Data Centre in the UK. The network will be supported by Cisco WAN accelerators situated at all of their branch offices.
They were also planning to consolidate and centralise their archiving and BES estate into the UK Global Data Centre.
They were in the planning/design phase of this programme and were scheduled to commence deployment at the beginning of 2010, for completion by the end of summer 2010.
Proposed Solution
This customer designed a number of solutions to address the problem statements listed above. They decided to go for the following approach to migrate the Exchange Server 2000 / 2003 users with very large mailboxes and archive their data to the centralised site using SCR data replication technologies... Please refer to the steps below for more detail.
Migration Steps
Summary
This customer decided to migrate to Exchange Server 2007 and to continue managing their infrastructure on-premises using the above approach. They have managed to migrate their remote site data using the strategy outlined above. They are continuing with us from a support perspective and will be booking further proactive engagements, looking forward to migrating to Exchange Server 2010.
Overview
Exchange Server 2010 helps organizations guard their messaging with built-in protective technologies; offers anywhere access to email, voice mail, calendars, and contacts; and enables new levels of operational efficiency. Help develop your expertise in this advanced messaging system with state-of-the-art training from Exchange Server 2010 product specialists. Choose a certification path that is relevant to your current job role or one that prepares you for your next career step.
Microsoft Certified Technology Specialist certification
Whether you are new to Microsoft Certification or a Microsoft Certified Professional (MCP) certified on Microsoft Exchange Server 2003, consider earning the Microsoft Certified Technology Specialist (MCTS): Microsoft Exchange Server 2010 – Configuration certification. This certification highlights your area of expertise and helps validate the knowledge and skills required to deploy and administer an enterprise messaging environment by using Exchange Server.
Exam 70-662 – TS: Exchange Server 2010, Configuring
Microsoft Certified IT Professional certification
The Microsoft Certified IT Professional (MCITP): Enterprise Messaging Administrator certification is also appropriate for MCPs who are certified on Microsoft Exchange Server 2003 as well as IT professionals who are new to Microsoft Certification. This certification helps demonstrate your professional expertise in using Microsoft Exchange Server 2010 to excel in a specific job role, such as the lead engineer for messaging solutions within an enterprise organization.
Exam 70-662 – TS: Microsoft Exchange Server 2010, Configuring
Exam 70-663 – Pro: Designing and Deploying Messaging Solutions with Microsoft Exchange Server 2010
Microsoft Certified Master program
Differentiate yourself as the technical expert. The Microsoft Certified Master (MCM) program helps the best professionals in the IT industry become even better. Whether you want to enhance and help validate your advanced skills or take your career to the next level, achieving a Microsoft Certified Master certification will help differentiate you from others in the competitive ranks of senior IT professionals.
This unique program consists of three weeks of mandatory, hands-on training led by experts, and extensive written and lab-based testing. Candidates' practical product knowledge, technical acumen, knowledge of best practices, personal and professional stamina, and communication skills are constantly challenged as they work toward attaining this premier Microsoft technical certification.
MCM Certification
Microsoft Certified Architect certification
Validate your capability to translate business problems into technology solutions. When you earn the Microsoft Certified Architect (MCA) certification, you can be recognized by Microsoft and the IT industry worldwide as an expert who holds the highest level of professional certification from Microsoft.
MCA Certification
Data Protection Manager
As this blog's main technologies are Exchange and Data Protection Manager, I couldn't miss the Data Protection Manager exam information:
Exam 70-658 – TS: System Center Data Protection Manager 2010, Configuring
If you are concerned about knowing exactly what you need to do in order to migrate your current Exchange environment to Exchange Server 2010, whatever the complexity (multiple firewall rules, multiple certificates, multiple external URLs/ports for clients), don't be, as there is good news. We completely understand that this complexity creates opportunity for mistakes, which causes deployments to stall, not to mention a lot of frustration.
The solution is Exchange Server 2010 Deployment Assistant!!!
It allows you to create Exchange Server 2010 on-premises deployment instructions that are customized to your environment and your specific situation. It starts by collecting some information from you and, based on your answers, provides a finite set of instructions designed to get you up and running on Exchange Server 2010.
The main idea is to avoid wading through the endless information you can find on the Internet, especially in the Exchange Server 2010 library, and go straight to what you need to do.
Here is a screenshot from the tool, after the initial set of questions were answered and instructions generated.
Happy Exchange Server 2010 Migration!!!
There have been one too many conversations about Data Protection Manager and the cloud… some of them correct and others a little bit less so, so I decided to write this post to clear things up a bit and hopefully help someone with it.
So apart from being able to protect to disk, to tape, and disk-to-disk-to-tape, we can actually protect our information online. Now, how can we do that?
We basically have two great partners who have been with us on this: Iron Mountain, who provide the CloudRecovery solution, and i365, who provide a brilliant solution called EVault (DPM):
CloudRecovery
The new release of Iron Mountain's CloudRecovery will provide support for DPM 2010, in addition to continued support for DPM 2007 with Service Pack 1. To support rapid corporate data growth, the solution also provides increased scalability, protecting multi-terabyte servers.
“We need to have systems that are always on, and always available. This is especially true when it comes to the backup and recovery needs of our Microsoft applications – including Exchange, SQL, and SharePoint,” said Alan Bourassa, CIO, Empire CLS. “The seamless integration between Microsoft DPM and Iron Mountain CloudRecovery is ideal for addressing these requirements and we are looking forward to taking advantage of the new features that will be available with the new release.”
EVault (DPM)
EVault (DPM) is an all-in-one backup and recovery appliance for Microsoft-centric, mixed computing environments. And, believe it or not, it even extends to non-Microsoft platforms… all in the same box!
It integrates with your Microsoft business applications and offers the most up-to-date protection for your Microsoft systems. You also gain broad non-Microsoft system protection; optimized cloud connectivity for fast, secure, affordable offsite disaster recovery; and the simple deployment and maintenance of an all-in-one appliance.
Jason Buffington recently blogged about the DPM 2010 release and the i365 partnership.
So basically these are the solutions we have with these two brilliant partners, and they are what is considered recommended; any other information as of today may not be entirely accurate…
And finally it is out… the "kind of" expected piece that allows Exchange Server 2010 (including BPOS) to coexist with Lotus Notes: the Quest solution!
As you are probably already aware, there is no support from Microsoft for making both environments coexist natively in an Exchange Server 2010 organization, and as far as I am aware there won't be, so Quest will provide that capability. If you want to do it using Microsoft products only, you will need Exchange Server 2007 as a bridge to the Lotus Connector; however, the Exchange Server 2007 server needs to be in place before the Exchange Server 2010 boxes… :)
“Quest Software is one of Dimension Data’s premier partners for our Microsoft Solutions line of business. We are pleased to partner with Quest to combine our specialist services with their best-of-breed Lotus Notes to Microsoft Exchange migration tools and coexistence solutions. Together, we have helped our clients enable high performance workplaces by improving competitiveness, increasing return on IT assets and providing better regulatory compliance. ”
Phil Aldrich, Practice Director of Microsoft Solutions, Dimension Data
“A Lotus Notes transition to Microsoft Exchange Server can be a complex project with considerable risk. To ensure business continuity and minimal user impact, organizations need a tool that keeps end users collaborating together throughout the coexistence period. With the new functionality of Quest Coexistence Manager for Notes, we enable organizations to establish interoperability between Notes e-mail and applications and Exchange/Outlook by ensuring the accuracy of mail and calendar data.”
Bill Evans, Vice President and General Manager, SharePoint and Notes Transition, Quest Software
As we heard, on the 19th of April we finally released DPM 2010 RTM; however, for customers it will only be available in June…
For our customers who already have previous versions of DPM in place, the main question will be how they can migrate their systems.
A great utility we came up with is the DPM 2010 Upgrade Advisor. It is an Excel spreadsheet where you fill in the version you are coming from, the version you want to go to, and what you are using for disaster recovery, tape library sharing, and so on, and it outputs a checklist to walk you through the upgrade process.
Imagine the following scenario where you have your Exchange Organization split in three major sites, where each site has their own domain, for instance EMEA, APAC and AMERICAS.
In each site we have CCR clusters, Hub Transport, and CAS servers, all on Service Pack 1. Service Pack 2 has meanwhile been released, and we are left with a major question about how to approach such an upgrade.
The doubt generally resides between:
The answer to this is quite simple; it may be long, as we need to cover a few caveats, but it is definitely not as hard as it may initially seem.
It doesn't matter which choice we take, as long as we upgrade the CAS servers first, especially if we are using CAS-CAS proxying. We can have some servers on Service Pack 1 and others on Service Pack 2, as long as the CAS version is greater than (or equal to) the Mailbox version. In terms of delays, that is, the timeframe between patching the first and last servers, it doesn't really matter, as long as the versions involved are supported; in this case Service Pack 1 is still supported!
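To see at a glance which servers are on which build before and during the rollout, a sketch along these lines (using the standard Get-ExchangeServer cmdlet from the Exchange Management Shell) can help verify the CAS-first ordering per site:

```powershell
# Inventory server roles and versions so the rule
# "CAS version >= Mailbox version" can be checked per site.
Get-ExchangeServer | Sort-Object Site |
    Format-Table Name, Site, ServerRole, AdminDisplayVersion -AutoSize
```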
Another recommendation: two CAS servers can have different versions as long as they are not in an NLB array or a CAS array. In that case the RPC Client Access attribute on the databases points to a virtual name, so everything behind that virtual name (NLB or CAS array) needs to be consistent in terms of versions, excluding, obviously, the time it takes to update all servers in the NLB or CAS array.
This applies to all builds, or at least to all the Update Rollups and Service Packs supported at the moment. Obviously, the CAS servers in the Internet-facing site must always be the first servers to upgrade; otherwise you will definitely break OWA.
Finally, if you upgrade a whole site first and it is the Internet-facing site, there is no issue in having the other sites on lower versions.
Best Practices Analyzer (BPA) is a server management tool that is available in Windows Server 2008 R2. BPA reports best practice violations to the administrator after BPA scans the roles that are installed on Windows Server 2008 R2. Administrators can filter out unnecessary information or exclude results from BPA reports. Administrators can also perform BPA tasks with either the Server Manager GUI, or Windows PowerShell cmdlets. For more information about Best Practices Analyzer and scans, see the Best Practices Analyzer Help.
When you select the entry for Hyper-V under the Roles node you should see a new section called Best Practice Analyzer:
Here you can choose to scan the Hyper-V role and see how you are doing against common best practices. One other neat feature of the Best Practices Analyzer is the ability to exclude results. This way you can remove best practices that you do not believe apply to your environment, so you will not have to deal with a large number of unnecessary errors/warnings.
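On Windows Server 2008 R2 the same scan can also be driven from PowerShell via the BestPractices module. A rough sketch follows; the Hyper-V model ID shown is my assumption, so check the output of Get-BpaModel for the exact ID on your box:

```powershell
# Run the Hyper-V BPA scan and list only non-informational findings.
Import-Module BestPractices
Get-BpaModel                       # discover the installed model IDs
Invoke-BpaModel -BestPracticesModelId Microsoft/Windows/Hyper-V
Get-BpaResult -BestPracticesModelId Microsoft/Windows/Hyper-V |
    Where-Object { $_.Severity -ne 'Information' }
```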
Imagine the scenario where you want to set an execution policy for a specific user on a machine. The per-user setting is nothing more than a key in the registry, something like:
[HKEY_USERS\S-1-5-21-REST-OF-SID\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell]
"ExecutionPolicy"="RemoteSigned"
So all you have to do is grab your users’ SIDs (easy enough with AD cmdlets or ADSI I suppose) and modify the registry directly (as long as you have admin rights).
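As a sketch of the registry approach described above (the SID here is a made-up example; substitute the real one, and note that a user's hive under HKEY_USERS is only present while the user is logged on or the hive is explicitly loaded):

```powershell
# Hypothetical SID for illustration only.
$sid = 'S-1-5-21-1004336348-1177238915-682003330-1001'
$key = "Registry::HKEY_USERS\$sid\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell"

# Create the key if it does not exist, then set the per-user policy.
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
Set-ItemProperty -Path $key -Name ExecutionPolicy -Value 'RemoteSigned'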
The proper way of doing this, of course, would be using Group Policy, see Set-ExecutionPolicy for details on that.
You can alter who is affected by the setting by using the -Scope option. The setting is persistent, though; you shouldn't need to run it each time PowerShell is started.
None of the transport rule predicates support matching on only BCC, so I don’t believe a basic transport rule will block outgoing BCC emails. I think you’d have to do a transport agent (routing agent) to do what you’re looking for. Something like this might get you started:
This is just some sample code which was quickly modified. It wasn’t even put in Visual Studio, so it’s NOT TESTED AT ALL! Sample code courtesy of Matt Stehle.
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.IO;
using Microsoft.Exchange.Data.Transport;
using Microsoft.Exchange.Data.Transport.Email;
using Microsoft.Exchange.Data.Transport.Routing;

namespace CustomNDR
{
    public sealed class CustomNDRAgentFactory : RoutingAgentFactory
    {
        public override RoutingAgent CreateAgent(SmtpServer server)
        {
            return new CustomNDR();
        }
    }

    public class CustomNDR : RoutingAgent
    {
        public const string NdrSubject = "Rejected Message - Original Attached";
        public const string NdrFromDisplayName = "System Administrator";
        public const string NdrFromAddress = "administrator@contoso.com";
        public const string OutputPath =
            @"C:\Users\Administrator\Desktop\CustomNDR\output\";
        public const string PickupPath =
            @"C:\Program Files\Microsoft\Exchange Server\TransportRoles\Pickup\";

        public CustomNDR()
        {
            LogMessage("CustomNDR()", "Enter");
            this.OnSubmittedMessage += new SubmittedMessageEventHandler(CustomNDR_OnSubmittedMessage);
            LogMessage("CustomNDR()", "Leave");
        }

        void CustomNDR_OnSubmittedMessage(SubmittedMessageEventSource source, QueuedMessageEventArgs e)
        {
            LogMessage("OnSubmittedMessage", "Enter");
            try
            {
                // Check for BCC recipients...
                if (e.MailItem.Message.Bcc.Count > 0)
                {
                    EmailMessage origMsg = e.MailItem.Message;
                    LogMessage("OnSubmittedMessage", String.Concat("Message: ", origMsg.Subject));
                    if (Directory.Exists(OutputPath))
                    {
                        // Save a copy of the original message for debug
                        //SaveMessage(origMsg, String.Concat(OutputPath, "Original.eml"));

                        EmailMessage newMsg = EmailMessage.Create();
                        newMsg.To.Add(new EmailRecipient(origMsg.Sender.DisplayName, origMsg.Sender.SmtpAddress));
                        newMsg.From = new EmailRecipient(NdrFromDisplayName, NdrFromAddress);
                        newMsg.Subject = NdrSubject;

                        // Setting the contentType to 'message/rfc822' is the key to avoid an InvalidOperationException
                        // with a message 'Cannot set the property, the attachment is not an embedded message.'
                        Attachment attach = newMsg.Attachments.Add("RejectedMessage", "message/rfc822");
                        attach.EmbeddedMessage = origMsg;

                        // Save a copy of the NDR message for debug
                        //SaveMessage(newMsg, String.Concat(OutputPath, "newMsg.eml"));

                        // Save the message to the pickup directory to send it
                        SaveMessage(newMsg, String.Concat(PickupPath, "newMsg.eml"));

                        // Cancel the original message
                        source.Delete();
                    }
                    else
                    {
                        LogMessage("OnSubmittedMessage", "OutputPath does not exist.");
                    }
                }
            }
            catch (Exception ex)
            {
                LogMessage("OnSubmittedMessage",
                    String.Format("EXCEPTION: Type={0}, Message='{1}'", ex.GetType().FullName, ex.Message));
            }
            LogMessage("OnSubmittedMessage", "Leave");
        }

        public static void SaveMessage(EmailMessage msg, string filePath)
        {
            try
            {
                FileStream file = new FileStream(filePath, System.IO.FileMode.Create);
                msg.MimeDocument.WriteTo(file);
                file.Close();
            }
            catch
            {
                LogMessage("SaveMessage",
                    String.Format("Failed to save message, {0}, to {1}.",
                        msg.Subject, filePath));
                throw;
            }
        }

        public static void LogMessage(string methodName, string message)
        {
            StringBuilder traceMessage = new StringBuilder();
            traceMessage.Append(System.DateTime.Now.ToString("s"));
            traceMessage.Append("|");
            traceMessage.Append(methodName);
            traceMessage.Append("|");
            traceMessage.Append(message);
            traceMessage.Append("\r\n");
            Debug.WriteLine(traceMessage.ToString());
            try
            {
                File.AppendAllText(
                    String.Concat(OutputPath, "log.txt"),
                    traceMessage.ToString());
            }
            catch (Exception ex)
            {
                Debug.WriteLine(String.Format("Failed to log message, {0}, to 'log.txt' in {1}.", message, OutputPath));
                Debug.WriteLine(String.Format("Exception: {0}", ex.Message));
            }
        }
    }
}
This code will block ALL messages with ANY BCC recipients, including internal to internal, so additional modification would be required to process only messages leaving the organization, if that is the desired goal. Unless, of course, you deploy this on the Edge Transport role…
While searching the Internet for this piece of information I couldn't really find a place where it was all together, so I decided to blog it… mostly because a team mate asked me… :)
DPM 2007 Server Services

Listed as Service Name (Display Name): Description.

MSDPM (DPM): Implements and manages synchronization and shadow copy creation for protected file servers.
DPMAC (DPM Agent Coordinator): Manages the installation, uninstallation, and upgrade of protection agents on remote servers.
DPMWriter (DPM Writer): Manages backup shadow copies of Data Protection Manager replicas, and manages backups of the DPM and DPM Report databases, for purposes of data archival.
DPMLA (DPM Library Agent): Manages the tape library.
DPMRA (DPM Recovery Agent): Helps back up and recover file and application data to the Data Protection Manager server.
SwPrv (Microsoft Software Shadow Copy Provider): Manages software-based volume shadow copies taken by the Volume Shadow Copy service. If this service is stopped, software-based volume shadow copies cannot be managed. If this service is disabled, any services that explicitly depend on it will fail to start.
VSS (Volume Shadow Copy): Manages and implements Volume Shadow Copies used for backup and other purposes. If this service is stopped, shadow copies will be unavailable for backup and the backup may fail. If this service is disabled, any services that explicitly depend on it will fail to start.
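As a quick way to check the state of these services on a DPM 2007 server, a sketch like the following should work (Get-Service is a standard PowerShell cmdlet; -ErrorAction SilentlyContinue skips any service that is not installed on the box):

```powershell
# List the DPM-related services described above and their current state.
Get-Service -Name MSDPM, DPMAC, DPMWriter, DPMLA, DPMRA, SwPrv, VSS `
    -ErrorAction SilentlyContinue |
    Format-Table Name, DisplayName, Status -AutoSize
```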
Protected Server Services

VSS (Volume Shadow Copy): Manages and implements Volume Shadow Copies used for backup and other purposes. If this service is stopped, shadow copies will be unavailable for backup and the backup may fail. If this service is disabled, any services that explicitly depend on it will fail to start.
Imagine a scenario where we would have an Exchange Server 2007 CCR on Windows Server 2008. In this setup we only have two network cards.
Initially, our thinking would be that it is a best practice to configure the private/heartbeat NIC as the primary network for log shipping and seeding, thereby moving that traffic away from the public/MAPI NIC, as per Tim McMichael's blog.
So the main question here would be whether we can use such a configuration. Unfortunately there is no support statement on this, just best practices. But we cannot configure log traffic on a network that does not support client traffic, so a purely private network would be excluded from log shipping anyway.
As per the above blog's instructions, the heartbeat NIC needs to be set to mixed as well, which brings us back to our initial question:
Exchange 2007 SP1 CCR (Cluster Continuous Replication) – Two Network Ports

When using a cluster continuous replication cluster with two network ports, your options are limited. Consider the following design:

First port set to "allow the cluster to use this network" and "allow clients to connect through this network".
Second port set to "allow the cluster to use this network" and "allow clients to connect through this network".

You'll notice that in this configuration both networks are set to "allow clients to connect through this network". This is necessary in order to establish the private network for use with log shipping functions.

To establish this network for log shipping functions, refer to the Enable-ContinuousReplicationHostName cmdlet:
http://technet.microsoft.com/en-us/library/bb690985.aspx
http://technet.microsoft.com/en-us/library/bb629629.aspx
If used, the replication service will prefer to perform log shipping functions over the "private" network. Should the private network be unavailable, the replication service will resume log shipping functions over the public network.
However, in the above case, if we didn't enable both cluster and client connectivity for both networks, we'd have at least one single point of failure again, which is something to avoid in a two-port configuration. Tim McMichael's blog generally recommends a minimum of three ports for CCR systems: two networks set for public and private, and one for private only.
Last but definitely not least, for Windows Server 2008 clusters there really is no such thing as a dedicated private network anymore. As soon as you enable a network for client use you must also enable it for cluster use; you can't enable it for client use until you have set it for cluster use. Additionally, there is no way to control the priority of the networks for cluster use; the failover cluster adapter determines the routing paths. Also, the "allow clients to connect through this network" option is apparently only used for automating and/or controlling the wizards that configure services and applications; this setting tells the wizards which networks to create the service IPs on. So it no longer matters if you don't have a dedicated heartbeat network; even ExBPA has been updated to not fire the "Dedicated heartbeat network not found" warning for Windows 2008 failover clusters. The key thing is that the networks must be on different subnets, so client traffic would never go over the cluster/log shipping network.
The main reason we wanted dedicated private networks in Windows Server 2003 was that the cluster service was not very tolerant of missed heartbeats. Windows Server 2008 is far more tolerant of missed heartbeats thanks to the new network heartbeat improvements, especially when using a Majority Node Set with File Share Witness (no bus resets to rely on, with their buggy disk drivers and firmware). But if you want to be extra, extra safe, then go with three networks.
Hopefully you have already heard about this and have changed your deployment plans; otherwise, here it goes. Recently the performance of the Exchange 2010 Client Access role supporting Outlook Anywhere users was compared on Windows 2008 SP2 and Windows 2008 R2, and it was found that the improvements the Windows team has made in R2 more than double the number of concurrent users a given server can support, assuming CPU is the limiting resource.
For more information refer to the Microsoft Messaging Team Blog…
These results will be included in an upcoming TechNet whitepaper on Exchange 2010 CAS Guidance.
First of all a HUGE THANKS to Yuval Sinay for putting this information together.
The following article provides a summary of common issues in Exchange 2010 deployments in the enterprise. The information in this article consists of my own experience and official Microsoft Knowledge Base articles. Because your Exchange environment may vary, please read the information carefully and test the suggestions from this article in a lab that mirrors your current Exchange infrastructure.
The article is divided into the following chapters:
General issues in Exchange 2010 deployment.
Best practices in Exchange 2010 deployment.
Common issues in Exchange 2007 migration to Exchange 2010.
Common issues in Exchange 2003 migration to Exchange 2010.
To all the DPM 2010 fans out there, here are some notes/transcriptions of the sessions which were delivered at MMS 2010 in Las Vegas by Jason Buffington!
MMS 2010 – BB12 – Technical Introduction to DPM 2010
MMS 2010 – BB13 – Protecting Applications with DPM 2010
MMS 2010 – BB14 – Protecting Windows Clients with DPM 2010
MMS 2010 – BB15 – Virtualization and Data Protection, better together with DPM 2010
MMS 2010 – BB37 – Disaster Recovery and Advanced Scenarios with DPM 2010
MMS 2010 – Partner Announcements (DPM 2010)
A huge THANKS to Mike Resseler for taking these notes as Jason was presenting his slide decks…
Load balancing serves two primary purposes. It reduces the impact of a single Client Access server failure within one of your Active Directory sites. In addition, load balancing ensures that the load on your Client Access server and Hub Transport computers is evenly distributed.
A great document regarding Exchange Server 2010 load balancing has been published online, which you can see in the link below. Even so, it is worth keeping in mind the load balancing requirements of each Exchange protocol… http://technet.microsoft.com/en-us/library/ff625248.aspx.
Understanding Load Balancing in Exchange 2010: http://technet.microsoft.com/en-us/library/ff625247.aspx
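For the RPC Client Access side specifically, load balancing in Exchange 2010 is usually paired with a Client Access array. A minimal sketch follows; the FQDN, array name, and site name are hypothetical, so substitute your own values:

```powershell
# Create a Client Access array for the AD site, then point the mailbox
# databases at the load-balanced FQDN instead of an individual CAS.
# "CASArray01", outlook.mydomain.com and "HeadOffice" are made-up examples.
New-ClientAccessArray -Name "CASArray01" -Fqdn "outlook.mydomain.com" -Site "HeadOffice"
Get-MailboxDatabase | Set-MailboxDatabase -RpcClientAccessServer "outlook.mydomain.com"
```

The FQDN itself should resolve to the virtual IP of your load balancer, not to a single Client Access server.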
It is a bit late to announce, however that is just due to my blog’s maintenance!
Finally this day has arrived on the 19th of April. It has been so long in the making, and yet it seems like time flew by.
Check out the official blog announcing the RTM launch: http://www.microsoft.com/systemcenter/en/us/data-protection-manager.aspx.
Here are a few of my favourite features in DPM 2010:
The RTM Evaluation Software for DPM 2010 is now available at: http://technet.microsoft.com/en-us/evalcenter/bb727240.aspx
Shadow redundancy minimizes message loss due to server outages. When a transport server comes back online after an outage, there are two scenarios:
The following table summarizes how transport reacts to these two scenarios when shadow redundancy is enabled. For clarity, assume that the server that had an outage is named Hub01.
Scenario 1: Hub01 comes back online with a new database.
With alternative routes: When Hub01 becomes unavailable, each server that has shadow messages queued for Hub01 will assume ownership of those messages and resubmit them. The messages then get delivered to their destinations using alternative routes. The total delay for messages is equal to the product of the heartbeat time-out interval and the heartbeat retry count configured in your organization.
With no alternative routes: The messages remain in the shadow queue on each server that has shadow messages queued for Hub01. When Hub01 comes back online with a new database ID, the shadow servers detect that it’s a new database and resubmit the messages in their shadow queues to Hub01. This is equivalent to suddenly discovering an alternative route for these messages. The total delay for the messages depends on the duration of the outage.
Scenario 2: Hub01 comes back online with the same database.
With alternative routes: Hub01 will deliver the messages in its queues, which results in duplicate delivery of these messages. Exchange mailbox users won’t see duplicate messages thanks to duplicate message detection; however, recipients on foreign systems may receive duplicate copies.
With no alternative routes: Hub01 will deliver the messages in its queues and then send discard notifications to the shadow servers.
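The “total delay” mentioned above is simple arithmetic. As a sketch (the interval and retry count below are made-up illustration values, not Exchange defaults; check the shadow heartbeat settings on Get-TransportConfig for your organization’s actual configuration):

```powershell
# Worst-case resubmission delay = heartbeat time-out interval x retry count.
# Both values below are hypothetical examples.
$heartbeatTimeoutSeconds = 300   # assumed heartbeat time-out interval
$heartbeatRetryCount     = 3     # assumed heartbeat retry count
$totalDelaySeconds = $heartbeatTimeoutSeconds * $heartbeatRetryCount
# With these example values, shadow servers would resubmit after 900 seconds.
```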
However, regarding duplicate messages, we should always refer to this very interesting link, which actually applies since previous versions and explains how Exchange can avoid message duplication in some scenarios:
http://msexchangeteam.com/archive/2004/07/14/183132.aspx
I was trying to pull some material together for a customer who is contemplating going from Exchange 2003 to Exchange 2010 and looking for slides/charts comparing the two. This may be useful to some of you, so here it is.
Awesome news, as the much-awaited SP1 for Exchange Server 2010 will be out, hopefully soon… And the good news continues, as these should be some of the improvements:
http://msexchangeteam.com/archive/2010/04/05/454533.aspx
The main reason this guidance was developed was that customers kept requesting that CAS be placed in a perimeter network. See http://msexchangeteam.com/archive/2009/10/21/452929.aspx for more information on why that is not supported. As the article indicates, the same guidance is true for the other server roles, with the exception of Edge Transport, which was designed to communicate from the perimeter network to the internal network via SMTP only, and to accept connections (LDAP and SMTP) from the internal network.
As for what customers can do: if their plan is to open up all the defined ports between the Exchange and AD servers in Site A and all Exchange and AD servers in Site B, then this would be supported, since one could argue that there isn't really a "firewall" between the two sets of servers anymore. They will be able to get support after deploying like that. And if an issue arises and, while debugging it, it is found that network traffic was being blocked in a way that breaks some aspect of Exchange communications with other server roles in the AD sites, they would at least be helped to get into a supported state (without traffic limitations between the Exchange servers and DCs).
Yes this adds complexity. Yes it could break things. Yes customers have reasons to do it.