I’m sure this has been covered by many other sources on the internet, but I thought I’d put down my thoughts on the matter as many people still don’t understand why the correct load balancing configuration is important.
I’ve been involved in a number of Exchange Server 2010 deployments during my last couple of months and most of the deployments were upgrades on hosted platforms from Exchange Server 2007 to Exchange Server 2010.
What I noticed in these Exchange Server 2007 deployments was that the load on the Client Access Servers (CAS) was somewhat skewed. And this makes sense, because the load balancing was configured for source IP persistence.
What does this mean in a hosting environment?
Well, firstly all clients are connecting to the messaging platform over the Internet behind a NATed IP.
You could potentially have a tenant with 1,000 users behind a single IP. The hosting environment has no visibility of the internal IPs and thus only sees the source IP as the external interface on the tenant's firewall. If source IP persistence is configured on the load balancer, it will basically send all traffic for that source IP to one CAS server (give or take a few connections).
This concept also applies to corporate enterprises running their own on-premise Exchange Server 2010 solution. The reason I'm saying it affects corporate deployments as well is that most mobile phones connect to the internet via NATed IPs behind the carrier firewall, so mobile phone ActiveSync connections from a specific carrier will all be sent to one CAS box.
So how do we fix this? Configure the correct persistence.
First we check that the load balancer is on the Exchange qualification program for load balancers. The main reason for this is that we’ll know if it was tested and reviewed by Microsoft and the partner for the type of load balancing we want to do. It’s also a very good resource to find deployment guides on the specific load balancer.
When I deployed the Exchange Server 2010 solution we incorporated cookie-based persistence on the load balancers for the customer. We did not configure SSL offloading. To keep things simple we configured an SSL bridge, whereby the load balancer decrypts the packets, reads the cookies, then re-encrypts the packets before sending them to the CAS boxes.
Implementing cookie-based persistence can be tricky, but it can also be very easy; it really depends on the person responsible for the load balancer, which usually falls to the networking or security team. Personally, I put in a lot of effort to understand how the specific vendor's load balancer works. I find that this makes discussions with the network engineer easier. If the engineer understands the concepts on the Exchange side and the impact, it makes it very easy to implement the correct solution.
What protocols require persistence?
I’ve detailed the recommendations on the specific services below that will help you determine the correct persistence method for optimal load balancing.
Hopefully, this helps some administrators/implementers understand the concept better. As I mentioned earlier, I personally do a lot of research during my planning and deployment phases to help ease configuration on firewalls, load balancers and the like.
Some references that you will find very valuable:
Until next time…..happy load balancing :-)
Michael
I've been busy with a small Exchange 2007 to Exchange 2010 engagement the last few days and we decided to go with Windows Server 2012 for the base operating system. Luckily for us Exchange 2010 SP3 has been released and I was very excited to deploy on a new Windows Server 2012 OS.
I thought it would be a good idea to share my experiences, as I picked up some issues during the deployment, specifically with the configuration of the Database Availability Groups.
I’m not going to cover the actual migration, just the Windows Server 2012 part and the issues I picked up during the Database Availability Group configuration. There are many great TechNet articles that cover the coexistence:
http://technet.microsoft.com/en-us/library/dd638158(v=exchg.141).aspx
First things first – let’s read the release notes on Exchange 2010 SP3:
http://technet.microsoft.com/en-us/library/jj965774(v=exchg.141).aspx
Two important points:
There are also some minor changes in the prerequisites for Windows Server 2012.
.NET Framework 3.5 and Windows PowerShell 2.0 need the Windows Server 2012 image mounted, as the side-by-side store (sxs) source files are not available locally after install. You only need to use the source files if your box doesn't have internet access to Windows Update, which was the case for me. You don't have to uninstall .NET Framework 4.5 or Windows PowerShell 3.0.
To install .NET Framework 3.5 and Windows PowerShell 2.0 via PowerShell using the SXS source files:
Import-Module ServerManager
Install-WindowsFeature NET-Framework-Core,PowerShell-V2 -Source E:\Sources\sxs
Where E:\ is the drive where you mounted the Windows Server 2012 image file.
The rest of the MultiRole prerequisites (I install telnet client additionally for troubleshooting purposes):
Install-WindowsFeature Telnet-Client,RSAT-ADDS,RSAT-Clustering,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Request-Monitor,Web-Static-Content,Web-Mgmt-Console,Web-Lgcy-Mgmt-Console,Web-WMI,WAS-Process-Model,Web-Asp-Net,Web-Client-Auth,Web-ISAPI-Ext,Web-ISAPI-Filter,Web-Http-Errors,Web-Http-Logging,Web-Http-Redirect,Web-Http-Tracing,Web-Digest-Auth,Web-Dir-Browsing,Web-Dyn-Compression,NET-HTTP-Activation,RPC-Over-HTTP-Proxy -Restart
Install the Office 2010 Filter Packs found here: http://www.microsoft.com/en-us/download/details.aspx?id=17062
Install Office 2010 Filter Pack Service Pack 1 found here: http://www.microsoft.com/en-us/download/details.aspx?id=26604
The next step is the actual install – which I won’t cover as there are tons of content on the web around that.
After my brand spanking new multirole servers were deployed and the base configuration was completed, it was time for the DAG configuration.
The first thing to know when creating the DAG on Windows Server 2012 is that your cluster name object (CNO) needs to be pre-staged, because of the permission changes in Windows Server 2012 with regards to computer objects.
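As a rough sketch of what that pre-staging looks like (the DAG name, OU and domain below are examples, and the permission grant can just as easily be done through Active Directory Users and Computers):

```powershell
# Sketch: pre-stage and disable the DAG's cluster name object (CNO)
# DAG name, OU and domain are placeholders - adjust for your environment
Import-Module ActiveDirectory
New-ADComputer -Name "DAG01" -Path "OU=Exchange Servers,DC=contoso,DC=com" -Enabled $false

# Grant Full Control on the CNO to the Exchange Trusted Subsystem group
# (alternatively, grant it to the computer account of the first DAG member)
dsacls "CN=DAG01,OU=Exchange Servers,DC=contoso,DC=com" /G "CONTOSO\Exchange Trusted Subsystem:GA"
```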
It’s important to ensure the above CNO pre-staging is correct as the cluster is only formed once you add the first mailbox server to the DAG. This is where my second issue popped up.
I noticed that during Add-DatabaseAvailabilityGroupServer the process got stuck at installing the Failover Cluster Components. I’m not entirely sure if this is Windows Server 2012 related or only happened to me on that day (it has never happened on Windows Server 2008 R2), but I killed the process and noticed that the components were indeed installed on the server. When I reran Add-DatabaseAvailabilityGroupServer it finished successfully and the cluster was created.
I don’t like processes getting stuck and then killed in mid configuration, so before my second Add-DatabaseAvailabilityGroupServer I pre-installed the Failover Cluster components by using the following PowerShell cmdlet:
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
At this stage I encountered my third issue (yeah it was a tough day at work :-) ):
When the second server is added to the cluster, the cluster quorum is changed to Node and File Share Majority – thus using the predefined file share witness (FSW) server and witness directory that you specified in New-DatabaseAvailabilityGroup. For some reason my FSW cluster resource would just not come online, failing with error 0x8007052e: "unknown user name or bad password".
Trying to avoid unnecessary time wasting I decided to just remove the cluster completely and start again (luckily PowerShell makes this very easy):
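For reference, a rough sketch of that teardown and rebuild (server, DAG and witness names are examples, not the actual environment's values):

```powershell
# Sketch: remove the mailbox servers from the DAG, delete it and recreate it
Remove-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX02
Remove-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Remove-DatabaseAvailabilityGroup -Identity DAG01

# Recreate the DAG (with the pre-staged CNO in place) and re-add the servers
New-DatabaseAvailabilityGroup -Name DAG01 -WitnessServer FS01 -WitnessDirectory C:\DAG01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX01
Add-DatabaseAvailabilityGroupServer -Identity DAG01 -MailboxServer EX02
```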
Moral of the story: sometimes it's better and faster to start again than to troubleshoot for hours on end. I might have been able to get the FSW online with a little digging in the cluster logs (I have a feeling that the CNO had some problem in AD), but with minimal deployment time available I decided to reconfigure, and luckily it worked out with minimal troubleshooting this time around.
The rest of the deployment went without a hitch. I'll be migrating mailboxes soon and hopefully I won't pick up any other anomalies along the way (if I do, I'll definitely add them in a blog post :-) ).
Hopefully someone will find the information above helpful when they deploy on Windows Server 2012!
Until next time.
If you haven’t seen my mailbox migration tool yet, go and have a look at it here. It’s been working great for me and my customers that have been using it. If you have a large number of mailboxes to move it definitely helps with the management and automation of the process.
I'm also working on one for on-premise migrations and will hopefully release it soon.
In a previous article, I mentioned that I’ll be giving you some tips on what to do with Failed move requests...the time has arrived.
First, it’s important to understand how move requests work. I highly recommend that you read the TechNet article on mailbox move requests.
http://technet.microsoft.com/en-us/library/dd298174(v=exchg.141).aspx
Then, understand the parameters of the new-moverequest cmdlet:
http://technet.microsoft.com/en-us/library/dd351123(v=exchg.150).aspx
First, I need to check why my move requests failed. Let's use the following PowerShell code:
Get-MoveRequest -MoveStatus Failed | Get-MoveRequestStatistics | Select DisplayName,SyncStage,Failure*,Message,PercentComplete,LargeItemsEncountered,BadItemsEncountered | ft -AutoSize
Let’s cover a few scenarios (there are obviously many other scenarios that you might experience):
So, what do we do in the above scenarios? Let’s cover each one and you’ll get the picture of where I’m trying to go with this article.
Scenario 1:
This issue occurs when the number of bad items encountered exceeds the bad item limit. By default if you don’t specify a bad item limit the move request will fail if it encounters any bad items.
I’m not going to go into much detail around bad items, but from my understanding these items usually stem from MIME properties created by e-mail clients that do not comply with the MIME standard. In the past, Exchange (E2k3 and E12) didn’t understand these illegal properties, so when it tried to index them it could encounter corruption. Exchange 2010/2013 simply excludes these items in new messages and blocks the corrupted properties when mailbox items are moved from older Exchange versions, hence the bad items.
You can see a report of the bad items by running the following:
Get-MoveRequest 'Jackie Chan' | Get-MoveRequestStatistics -IncludeReport
Pheeww, ok let’s get back to the issue at hand. We need to skip these bad items, but we don’t want the move to start from scratch. Huh? That doesn’t make sense.
Well, many folk think that when a move request fails the data disappears on the target side, when in actual fact the move request is still available and the synchronized data will not be removed unless you clear the move request for that specific mailbox.
Let’s say we encountered 9 bad items. We checked the move request report and confirmed they were bad properties or calendar properties.
So we modify the current move request to skip those items.
Get-MoveRequest 'Jackie Chan' | Set-MoveRequest -BadItemLimit 10
Next, we resume our move request to continue where it failed:
Resume-MoveRequest 'Jackie Chan'
You'll notice the move request just continues and doesn't start with an InitialSeeding. Happy days !
Let’s move on to the next scenario.
Scenario 2:
I’m sure you already know that moving a mailbox to Exchange Online requires that individual item sizes be less than 35 MB. If you kick off a move request, it will fail with a large item limit exception when it hits an item that’s larger than 35 MB.
I covered the large items in this article where I gave you a PowerShell script that you can run to report on any mailboxes that have items larger than 35MB. You can use the report to find the item and get the user to back up the item somewhere safe.
So you’ve cleared all the large items (let’s say there were 3 items) in the mailbox and you don’t want to restart the 10GB mailbox move…aargh!
Don’t fret, let’s modify the move request.
Get-MoveRequest 'Jackie Chan' | Set-MoveRequest -LargeItemLimit 3
Happy days are here again...
Scenario 3:
I personally haven’t encountered this scenario a lot, but what seems to happen is the connection to the remote server encounters some kind of issue and the move request fails with a HTTP request unauthorized error.
In this scenario, I re-credential the move request with the remote credentials for the Exchange On-Premise organization.
$onpremcreds = Get-Credential
Get-MoveRequest 'Jackie Chan' | Set-MoveRequest -RemoteCredential $onpremcreds
Resume-moverequest 'Jackie Chan'
Are you getting excited about this? I hope so, because this saved me endless migration issues, especially migrations to Exchange Online where you don’t have gigabit connectivity to the target database.
The above approaches can obviously be used with on-premise migrations as well.
Until next time,
Happy mailbox migrations!!
Michael Hall
Got some great news – Windows Azure Active Directory Sync Agent (DirSync) has a new welcome feature - Password Synchronization - whooohoo.
This is great for hybrid and staged migrations and simplifies things tremendously during these types of migrations.
If you already have DirSync running you’ll need to update it to get the new feature set.
Check out Alex Simons’ blog post here:
http://blogs.technet.com/b/ad/archive/2013/06/03/making-it-simple-to-connect-windows-server-ad-to-windows-azure-ad-with-password-hash-sync.aspx
Check out TechNet here:
http://technet.microsoft.com/en-us/library/dn246918.aspx
UPDATE: Some of you might experience issues with password sync and finding the following exception in the event logs:
Microsoft.Online.PasswordSynchronization.DirectoryReplicationServices.DrsException: RPC Error 8440 : The naming context specified for this replication operation is invalid. There was an error calling _IDL_DRSGetNCChanges.
I have been providing the Dev team with logs and feedback on the above issue. They are aware of it and are hard at work determining the root cause.
UPDATE 25 June 2013: The Dev team has informed me that a new version of the DirSync tool is now available for download on the Admin portal - version number 6411.0007.
Please use this version as it contains the fix for the RPC Error 8440 Exception that was caused in Windows 2003 Domain Controller environments.
See also -
DirSync/WAAD Sync Tool wiki - http://social.technet.microsoft.com/wiki/contents/articles/18096.dirsyncwindows-azure-ad-password-sync-frequently-asked-questions.aspx
DirSync/WAAD Sync Tool release history: http://social.technet.microsoft.com/wiki/contents/articles/18429.windows-azure-active-directory-sync-tool-version-release-history.aspx
Happy DirSync’ing
UPDATE :
1. Please take note that we have increased the size limit to 150MB during onboarding, so the previous 25MB limit should no longer be an issue - see the announcement here: http://blogs.office.com/2015/01/12/whats-new-december-2014/
2. I've handed this script over to someone else for maintenance and support. Please use the following link to get the latest version: https://gallery.technet.microsoft.com/PowerShell-Script-Office-54d367e
There have been a lot of requests from my customers for an easy way to find large mail items in mailboxes that are planned to be moved to Exchange Online.
As you are probably aware, items larger than 25MB are not moved to Exchange Online, and this causes the move request to fail if LargeItemLimit is not specified and/or the items larger than 25MB aren’t removed from the mailbox (backed up to a PST).
You can read more about it here: http://support.microsoft.com/kb/2584294
It’s obviously not very feasible to start your move requests knowing that you are going to hit these kinds of problems, because you want those moves to be as smooth as possible. I’ll show you some tricks that I use to avoid starting the move request from scratch when this occurs, but that will be another article.
Initially, I wanted to start from scratch and write a PowerShell script based on Exchange Web Services, but luckily one of my colleagues (Dmitry Kazantsev) already had a great bunch of scripts based on EWS that he created. I have however made some extensive changes by consolidating all the scripts to functions in one single script and adding some additional horsepower to the script. It’s been working great for me.
The script uses Exchange Web Services to impersonate a user account and essentially gain access to each mailbox, scanning the items in each folder against the large item limit you specify. All the results are then dumped to a CSV file.
So how does this work?
First, download and install the Exchange Web Services 2.0 API here
Make sure you have Exchange Management Tools installed on the machine you will be running the script. If you are running the script on an Exchange Server you need to install the API on the server.
Now we need an account with impersonation rights to be able to read the mailboxes.
Now we need to assign impersonation permissions for this account.
For Exchange Server 2007 mailboxes:
Get-ExchangeServer | where {$_.AdminDisplayVersion.Major -lt 14 -and $_.IsClientAccessServer} | ForEach-Object {Add-ADPermission -Identity $_.DistinguishedName -User svc_ews@contoso.com -ExtendedRights ms-Exch-EPI-Impersonation}
Get-MailboxDatabase | ForEach-Object {Add-ADPermission -Identity $_.DistinguishedName -User svc_ews@contoso.com -ExtendedRights ms-Exch-EPI-May-Impersonate}
For Exchange Server 2010/2013 mailboxes:
New-ManagementRoleAssignment -Name:impersonationAssignmentName -Role:ApplicationImpersonation -User:svc_ews@contoso.com
Let’s cover the parameters that the script uses:
Mandatory parameters:
Parameters that are not mandatory:
If you want to target a subset of users you can use the ImportCSV parameter to specify a CSV file to read. The file needs a header called PrimarySMTPAddress and then the primary SMTP addresses of the mailboxes you want to target.
If you don’t specify the ImportCSV parameter the script will scan all mailboxes in the organization.
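If you need to generate such a CSV file, here's a quick sketch (the OU filter and file name are just examples):

```powershell
# Sketch: build a targeting CSV with the required PrimarySMTPAddress header
Get-Mailbox -OrganizationalUnit "contoso.com/Sales" -ResultSize Unlimited |
    Select-Object @{Name="PrimarySMTPAddress";Expression={$_.PrimarySmtpAddress}} |
    Export-Csv .\targets.csv -NoTypeInformation
```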
The URI parameter can also be specified if you want to use a specific EWS endpoint like https://webmail.contoso.com/ews/exchange.asmx.
If you do not specify the URI parameter the script will use Autodiscover for the correct Web Services URI for each mailbox.
And that’s it. Now you’re ready to rumble.
Copy the script to a folder of your choice and open Exchange Server 2010/2013 Management Shell.
The following are examples of the usage:
.\LargeItemChecks.ps1 -serviceAccountName svc_ews -serviceAccountDomain contoso.com -servicePassword P@5sword2 -resultsFile .\exportResultSet.csv -ItemSizeLimit 25
.\LargeItemChecks.ps1 -archiveCheck -serviceAccountName svc_ews -serviceAccountDomain contoso.com -servicePassword P@5sword2 -resultsFile .\exportResultSet.csv -ItemSizeLimit 25
It will run for a while depending on the size of your organization. The transcript log will also be created in the current directory.
The results file will look like this:
I decided to extend the Insight into the Hybrid Configuration Wizard article into another 2 parts. I've been getting numerous requests on troubleshooting the dreaded Get-FederationInformation Exception.
Let’s recap the high-level steps of the HCW:
I’m going to skip the Recipient Configuration Task here and cover that in my next article. I want to focus on step 4, the Organization Relationship Task for this article.
So let’s get right into it.
As the task name suggests, this step will:
Now, from what I’ve seen and heard in the field, most of the issues occur at step 3.
Step 3 uses a process called ProvisionOrganizationRelationship. The very first thing this function does is try to get the federation information for the domain used in the organization relationship settings – let’s use uclabz.com.
Get-FederationInformation -DomainName uclabz.onmicrosoft.com -BypassAdditionalDomainValidation
New-OrganizationRelationship -Name <organization relationship name> -TargetApplicationUri *.outlook.com -TargetAutodiscoverEpr <the Exchange Online Autodiscover URL> -Enabled:$true -DomainNames uclabz.mail.onmicrosoft.com
Get-FederationInformation -DomainName uclabz.com -BypassAdditionalDomainValidation
Let’s pause here for a moment.
So why is the code doing this? Well, it’s simple. By using Get-FederationInformation, it’s very easy to get the correct values for TargetApplicationUri, TargetAutodiscoverEpr and DomainNames, which are required for the New-OrganizationRelationship task.
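As a side note, the output of Get-FederationInformation can be piped straight into New-OrganizationRelationship, which populates those values for you (the domain and relationship name below are examples):

```powershell
# Sketch: build the organization relationship from the federation information
Get-FederationInformation -DomainName uclabz.com -BypassAdditionalDomainValidation |
    New-OrganizationRelationship -Name "On-premises to Exchange Online" -Enabled:$true
```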
The issues occur, because many customers have different ways of doing things, like Autodiscover, Certificates and Reverse Proxy etc.
Let’s take an example – Autodiscover:
Execution of the Get-FederationInformation cmdlet had thrown an exception. This may indicate invalid parameters in your Hybrid Configuration settings. Federation information could not be received from the external organization. at Microsoft.Exchange.Management.Hybrid.RemotePowershellSession.RunCommand(String cmdlet, Dictionary`2 parameters, Boolean ignoreNotFoundErrors) '.
See, the way Get-FederationInformation cmdlet works is that the discovery process only uses the following logic to determine the correct settings (in this order):
So as you can see from the above, you need to have the correct DNS records in public DNS for this step to work.
Here are some more tips on what to check for when you run into this problem:
Get-AutodiscoverVirtualDirectory -Server <hybridcas> | Set-AutodiscoverVirtualDirectory -WSSecurityAuthentication $true
Get-FederationInformation -DomainName domain.onmicrosoft.com -BypassAdditionalDomainValidation
Get-FederationInformation -DomainName domain.onmicrosoft.com
/EWS/Exchange.asmx/wssecurity
/Autodiscover/Autodiscover.svc
/Autodiscover/Autodiscover.svc/wssecurity
Allow all users and no authentication, so that users can authenticate directly. TMG will need to pass the traffic straight through to the Hybrid CAS instead of pre-authenticating it as specified above. Confirm that traffic to Autodiscover.svc is not being blocked by checking the TMG logs. See this article on TMG - http://support.microsoft.com/kb/2821214
Get-FederationInformation -DomainName uclabz.com -BypassAdditionalDomainValidation
Phew, I think that’s that for this article. Good luck with your hybrid configurations, I hope the above helps.
2014/10/02 Update: New Version 1.1 has been released - fixed a bug that always causes moves to be suspended.
If you are going to use the Large Item Limit switch, please use it cautiously and ensure your users have the large items backed up.
I wanted to share a small personal project I've been working on with you.
I was finding it cumbersome and painful to keep track of my customers' migrations into the cloud, so I decided to build something quick and easy that can be executed from anywhere.
The aim of this was to allow me to:
The outcome was a simple spreadsheet that uses VBA to call PowerShell code depending on the functionality you want.
It’s relatively straightforward, the Excel file contains the following main sheets:
Mailboxes – this sheet contains the main mailbox data that the other areas use to build csv files for the PowerShell scripts.
Planned_Batch – this sheet is where most of the functionality is. In here you can Activate Licenses, Export Scripts for manual execution, initiate the mailbox migrations, show current moves, resume any suspended moves or just open a remote PowerShell session to Office 365.
The following buttons exist on this sheet:
You will need to populate all the areas marked with yellow with your relevant tenant details:
There are two option buttons:
Moved – Once your planned batch has been successfully migrated you mark the batch completed by changing the Move_Variable to “Moved”. This will allow the moved sheet to be updated with the relevant mailboxes that have been migrated to help you keep track of your migrations.
Move_Request_Statistics – This sheet allows you to track the data on all your move requests. When you click the Get-Statistics button it opens a remote PowerShell session, gets some data from your current move requests and exports it to a csv file at the location you specified. You can then import this file and use the average MB/min to help with the timing calculation in the Planned_Batch sheet.
License Report - The two licensing sheets allow you to export the raw license data from your tenant and import the data into the file. You can then use this report to reconcile your licenses and get an overview of the license situation.
Ok, so that covers the sheets and their functionality. So how would you get started?
First download and install the Windows Azure Active Directory Module for Windows PowerShell:
When you open the Excel spreadsheet you will first need to populate the mailboxes sheet by running the following within the Exchange 2010/2013 management shell:
$mbx=Get-Mailbox -ResultSize Unlimited; $mbx | ForEach-Object {$UPN = $_.UserPrincipalName; $EmailAddress = $_.PrimarySmtpAddress; $OU = $_.OrganizationalUnit; $Type = $_.RecipientTypeDetails; $_ | Get-MailboxStatistics | select @{Name="UPN";expression={$UPN}},@{Name="EmailAddress";expression={$EmailAddress}},@{Name="Type";expression={$Type}},@{Name="OU";expression={$OU}},DisplayName,@{Name="TotalItemSize(MB)";expression={$_.TotalItemSize.Value.ToMB()}},LastLogonTime} | Export-Csv .\Mailboxes_Output.csv -NoTypeInformation
The above will export the required data to a CSV file which you can then import into the Mailboxes sheet by using the button provided.
When you want to target a couple of mailboxes for migration you simply mark the Move_Variable column in the mailboxes sheet with an “X”. Refreshing the Planned_Batch sheet will populate the targeted mailboxes for migration.
I didn’t want to build an elaborate system that uses a database and a front end, because I wanted something quick and easy to use and for me this works. I’m sure there are some brilliant minds out there that will definitely have something to add so feel free to add wherever you want and share. The code is open and available to modify as you see fit.
One more thing, this spreadsheet is in no way an Official Microsoft Office 365 Tool and not maintained by Microsoft, it’s essentially a Michael maintained tool developed to make my life easier during mailbox migrations :-)
Download the tool here.
If you have any questions or want to give some feedback feel free to e-mail me here - technicalramblings@outlook.com or ask in the comment section below.
Happy mailbox migrations!
I got a mail the other day from a colleague requesting assistance on behalf of a partner around an Office 365 certificate error in Outlook.
The scenario was that a certificate expired – I’m not sure what certificate they referred to so I assumed the ADFS/TLS certificate. They renewed the certificate, but Outlook clients were still popping up with a certificate validation error.
The first thing you need to understand with this is that you don't manage the certificates for Exchange in Office 365 for your Outlook Anywhere connections. Microsoft manages this on the Office 365 backend. You will only manage your ADFS and TLS certificates from your side.
I did some checks on the domain and noticed that they have an A record on the root domain pointing to their www site, which listens on 443 and 80.
This site had a web server certificate loaded which was expired (not the same certificate the guy was talking about initially).
So why is Outlook popping up a certificate validation issue from their website? Easy….
The Outlook Autodiscover process will first check the root domain for any Autodiscover service points – see here: http://technet.microsoft.com/en-us/library/cc539049.aspx
Outlook will also run Autodiscover during startup, at refreshes as often as the TTL period specifies (usually 1 hour), and also during network connectivity issues to a server.
Essentially the request will see that the root domain record is listening on 443, but the certificate is expired. This results in Certificate validation errors on the first step when Outlook goes through the Autodiscover process.
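A quick way to see the problem for yourself is to check whether the root domain resolves and answers on 443 (the domain name below is an example):

```powershell
# Sketch: confirm the root domain has an A record and listens on 443
$domain = "contoso.com"
Resolve-DnsName $domain -Type A
Test-NetConnection $domain -Port 443
# If the port is open, browse to https://contoso.com and inspect the
# certificate the site presents - an expired or mismatched certificate here
# surfaces as a validation error during Outlook's Autodiscover attempt
```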
There are two ways to resolve this:
Happy Office 365’ing!!!
In this post I want to cover the Hybrid Configuration process - specifically detail on the Configure Legacy Exchange Support step.
The Hybrid Configuration Wizard has 6 tasks that it executes:
Let's cover the first two tasks, where most of the problems usually occur (from what I’ve seen in the field).
Global prerequisites task – this task does the following checks:
Legacy Exchange Support task – This task covers legacy Public Folder configuration to allow Free/Busy lookup where Public Folder Databases exist in an organization. It can cause some major headaches if your Public Folder infrastructure is not healthy and the way it looks up Exchange servers in the organization.
Herewith the logic of this task:
I want to pause here for a moment and highlight that the code executes Get-ExchangeServer and loops through each server. The impact of this is that the first Exchange 2010 server in the Get-ExchangeServer results will be the oldest Exchange 2010 server in your organization; your brand-spanking-new Exchange 2010 Hybrid servers will be the last servers in the list. So be aware that if you have any firewalls between any of your Exchange servers, you need clear traffic between the Hybrid servers and all the Exchange 2010 mailbox servers hosting Public Folders in that list – otherwise you might get the ‘Subtask ValidateConfiguration execution failed: Configure Legacy Exchange Support’ error. If the Hybrid servers are your only Exchange 2010 servers, you need the Mailbox role and Public Folder databases replicated to them for the above to work (see below).
It’s also important that your Public Folder replication is working and healthy – otherwise you might experience problems with the Install-FreeBusyFolder cmdlet.
The Mailbox role is a requirement on the Hybrid server in the event that you have Public Folders in the organization for Exchange 2003 mailboxes.
You will need to create a Public Folder database on the hybrid servers and ensure (force) that the hierarchy (the \NON_IPM_SUBTREE folders and subfolders) is replicated to this database by using the AddReplicaToPFRecursive.ps1 script.
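Something along these lines, assuming the script is run from the Exchange 2010 Scripts folder and the hybrid server name is an example:

```powershell
# Sketch: replicate the NON_IPM_SUBTREE hierarchy to the hybrid server's
# Public Folder database ($exscripts points to the Exchange Scripts folder)
cd $exscripts
.\AddReplicaToPFRecursive.ps1 -TopPublicFolder "\NON_IPM_SUBTREE" -ServerToAdd "HYBRID01"
```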
Why?
The above isn't very detailed, but to understand the actual process and why the legacy step is so important in Exchange 2003 environments, check out this great article from the Exchange Team.
http://blogs.technet.com/b/exchange/archive/2011/06/28/cross-org-availability-using-federation-trust-and-organization-relationship.aspx#scenario3
Some more information on Hybrid Servers with Public Folders - http://technet.microsoft.com/en-us/library/hh757251(v=exchg.141).aspx
Remember to size your storage correctly for the Hybrid Servers if they will be hosting Public Folder Databases and the usual Public Folder guidance applies - http://technet.microsoft.com/en-us/library/bb629523(v=exchg.141).aspx
Hopefully the above can help with your troubleshooting steps if you receive the dreaded ‘Subtask ValidateConfiguration execution failed: Configure Legacy Exchange Support’ error during your Hybrid Configuration.
PS: And remember if your organization contains Exchange 2003 users, you must manually populate the TargetSharingEPR property (ex: https://hybrid.contoso.com/ews/exchange.asmx) on the Organization Relationship on the Exchange Online side :-)
UPDATE: The Exchange Team released the Hybrid Free/Busy Troubleshooter, which is an awesome tool to help with Free/Busy issues - http://aka.ms/hybridfreebusy
A couple of months ago I was assisting a customer with a very strange Group Policy problem. The first sign of the problem was when some users complained that they weren’t notified of upcoming password expiries.
My initial thought was that the Default Domain Policy was not being applied properly, as this was the only place where the password policies were set for this customer (the settings in the Password Policy section were correct).
Trying to recreate the issue, we started up a test box and executed GPRESULT /R, but didn’t notice anything out of the ordinary.
I then suggested we get hold of one of the affected machines to determine the cause – maybe that way we could replicate the issue on a clean machine.
These were my troubleshooting steps:
The question now is why the entry was enabled on a few machines.
I found some blog articles around automated OS deployment where the deployment process would get stuck on the Interactive Logon text, which would cause the process to stop. Disabling the security client-side extension by adding the NoMachinePolicy=1 entry would get around this and the process would then be able to continue. The deployment team at this customer confirmed that this was not the case during their deployment process, so the only other explanation could be malware/virus, but I'm still not sure.
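For reference, a sketch of how you could check for (and remove) the entry with PowerShell, assuming it was set under the Security client-side extension key as in my case:

```powershell
# Security CSE key under Winlogon\GPExtensions
$key = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{827D319E-6EAC-11D2-A4EA-00C04F79F83A}'

# A value of 1 stops machine policy processing for this extension
Get-ItemProperty -Path $key -Name NoMachinePolicy -ErrorAction SilentlyContinue

# Remove the override so security policy (including password policy) applies again
Remove-ItemProperty -Path $key -Name NoMachinePolicy -ErrorAction SilentlyContinue
```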
Luckily the number of affected machines was very small in this case.
Hopefully someone else out there will come across this article if they experience this very interesting issue.
Until next time......
UPDATE 2014/09/14: New version 0.6 has been released to support Power BI for Office 365 licenses.
I’ve been looking for some way of reporting on Office 365 licenses assigned to users and creating a simple Pivot Chart to get an overview of the licenses.
There are a lot of PowerShell scripts out there, but the ones I found didn't really fit my requirements.
I decided to throw something together to help me with my requirements – nothing fancy – and came up with the following.
Basically, I’m using Excel to call PowerShell to export the raw license data to CSV. Once the export process has finished you just use the import button to import the CSV into the Excel sheet. The Pivot Chart will then update with the relevant data.
The Chart will look like this:
You can manipulate the Chart by filtering on AccountSKU, ServiceName and ProvisioningStatus.
I re-used some code in the PowerShell from Alan Byrne's script, but I needed the licensing data in each row to create a report on different SKUs. So my script basically loops through each user and exports SKUs and service names against every user.
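The core of the loop is roughly this – a simplified sketch using the MSOnline cmdlets, not the actual tool's code (the CSV path is an example):

```powershell
# Export one row per user/SKU/service so the data pivots cleanly in Excel
Connect-MsolService

$rows = foreach ($user in Get-MsolUser -All) {
    foreach ($license in $user.Licenses) {
        foreach ($service in $license.ServiceStatus) {
            New-Object PSObject -Property @{
                UserPrincipalName  = $user.UserPrincipalName
                AccountSKU         = $license.AccountSkuId
                ServiceName        = $service.ServicePlan.ServiceName
                ProvisioningStatus = $service.ProvisioningStatus
            }
        }
    }
}

$rows | Export-Csv -Path .\LicenseExport.csv -NoTypeInformation
```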
I hope this helps you to get an overview on your licenses assigned on your tenant.
You can download the tool here
This requires the Windows Azure PowerShell module to work.
I also included this feature in my Migration Management tool to help keep track of licenses during a migration.
If you have any feedback, don’t hesitate to ping me at technicalramblings@outlook.com
Michael Hall (MCS)
It’s been a while since my last article; things have been quite hectic on the work front and I recently moved half way around the world to pursue some new and exciting challenges.
Seeing that cobwebs have been building in my little space on the Interwebs I decided that it’s time to throw something cool into the wild.
Now that Exchange 2013 SP1 is out, I know from experience that customers are planning hard to get onto Exchange 2013. SP1 in any product is “believed” to be the “stable, what RTM should’ve been” version… I'm not sure I always agree with that, but everyone has their opinions; that’s what makes life so interesting.
Let’s jump right into it….Exchange 2013 maintenance.
You need to do some maintenance on your Exchange 2013 environment by setting a server into maintenance mode safely and efficiently.
First, you need to understand the supported process, there’s great content on TechNet covering maintenance here: http://technet.microsoft.com/en-us/library/dd298065(v=exchg.150).aspx#Pm
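To give an idea of what the documented procedure involves, here is a condensed sketch of the maintenance commands (server names are examples; read the article before running any of this):

```powershell
# Drain transport and redirect queued messages to another server
Set-ServerComponentState EX15-02 -Component HubTransport -State Draining -Requester Maintenance
Redirect-Message -Server EX15-02 -Target EX15-01.uclabz.com

# Pause the DAG node and move active databases off the server
Suspend-ClusterNode EX15-02
Set-MailboxServer EX15-02 -DatabaseCopyActivationDisabledAndMoveNow $true
Set-MailboxServer EX15-02 -DatabaseCopyAutoActivationPolicy Blocked

# Finally take the whole server out of service
Set-ServerComponentState EX15-02 -Component ServerWideOffline -State Inactive -Requester Maintenance
```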
Now that you’ve gone through the process it might become obvious that you could potentially automate a whole lot of the commands you need to run. And of course you could add some cool checks and tests and bells and whistles to make the process easier, more efficient and less error prone.
Sounds cool, right? Well, I’ve been working on something like this and I think it’s ready for some worldly love, attention and testing. Now many of you would say there are already scripts out there that do this, and I would say we have a lot of cars on the road… they all get you from point A to point B, but we all like to drive in style from point A to point B, right?
So, let’s talk about this script and most importantly the parameters it accepts.
The script is a one stop shop to add/remove your Exchange 2013 server to/from production rotation. The script will follow the procedures as outlined in the maintenance article in TechNet with a few additional checks and validations.
The process at a high level:
The following parameters are accepted by the script, depending on what you want to do and the script is able to execute everything from a central location.
The only requirement for the script is the Exchange Management Tools.
Remove a server from production rotation:
.\E2013-Maintenance.ps1 –server ex15-02.uclabz.com –StartMaintenance –LogFolderPath c:\temp\logs
Add a server back into production rotation:
.\E2013-Maintenance.ps1 –server ex15-02.uclabz.com –StopMaintenance –LogFolderPath c:\temp\logs
Add a server back into production rotation and balance databases:
.\E2013-Maintenance.ps1 –server ex15-02.uclabz.com –StopMaintenance –balanceMailboxDatabases –LogFolderPath c:\temp\logs
Let’s take a look at some output examples.
If you follow the high-level process depicted above, you’ll see in the screenshot below that the script output is very verbose and the operator knows exactly what’s going on at any given moment. The output shows the steps the script follows when removing a server from production rotation and the corresponding point in the code at each step.
If any database move fails the script will bail, catching the Active Manager exception and request operator intervention. It will also test MAPI connectivity after databases have been moved to ensure they are mounted. If any databases are not mounted operator intervention is required at this stage, unless Active Manager fixes the issue automatically.
The below shows what the script will do when adding the server back into production and the operator forgot to check if all services are started on the target machine. The test health function will essentially try to start these services. If any service is still unable to start the server will not be added back into rotation.
In the below example the script was able to start the services not running and then continued to add the server into production rotation.
The below shows an example where the logic was unable to find enough healthy copies during database pre-validation prior to moving the databases to a new target server, thus alerting the operator and then exiting.
In the final example below, the server was added back into production rotation and the balanceMailboxDatabases switch was specified which resulted in the script balancing the databases in the DAG using the built in database balancing script.
It’s important to note that if you haven’t set up load balancing for your Client Access Servers to be in sync with Managed Availability, i.e. you’re not checking for healthcheck.htm, then CAFE roles will not be removed from the load balancing pool and client connections will still be directed to the server. Read more here: http://blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx
Now, with any script that moves databases, changes component states and restarts services, I wouldn't advise anyone to run it straight in production without making 100% sure it fits your needs, you understand what the output is saying, you are comfortable running it, and you've tested it in your QA/LAB/DEV or whatever environment first.
I've only had a chance to test this in my lab environment with 3 servers and 3 database copies. So if you have some larger test beds out there, please give it a bash and let me know how it goes.
Grab the script here.
Like always, I appreciate any feedback/improvements/ideas/fixes! #smileyface
Michael Hall, Service Engineer
This short article covers the process of an in-place upgrade of the DirSync database to a full standard edition of SQL 2008 R2.
Why would we want to do this? Well, you might find yourself in an environment where the total number of objects is less than 50,000, but a sudden increase in objects still causes your SQL 2008 R2 Express database to hit the 10 GB limit.
First things first – let’s back up the current SQL database.
BACKUP DATABASE [FIMSynchronizationService]
TO DISK = N'C:\DirSyncSQLBackup\FIMSynchronizationService.bak'
WITH NOFORMAT, NOINIT, NAME = N'FIMSynchronizationService-Full Database Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
Next step is to upgrade the SQL Express instance to SQL 2008 R2 Standard.
Insert/Mount the SQL 2008 R2 Standard ISO.
On the SQL Server Installation Center select Maintenance and Edition Upgrade
Follow the on-screen steps for Product Key and instance selection (MSONLINE), then click Upgrade.
The process is relatively quick and I didn’t experience issues during my instance upgrade.
You can verify the upgrade by running the following in SQLCMD.
SELECT SERVERPROPERTY('edition')
GO
Check your DirSync services:
Kick off a Directory Synchronization:
Check if all is good on the Synchronization Service Manager:
C:\Program Files\Microsoft Online Directory Sync\SYNCBUS\Synchronization Service\UIShell\MiisClient.exe
Life is good again!
It’s been a while since my last blog, but the world of Microsoft Consulting has been keeping me very busy.
In this post I’m going to cover an issue I picked up during an Office 365 Hybrid deployment for a customer.
I’m not going to go into much detail around the actual deployment as there’s great documentation available on the deployment process, but I couldn’t find any reference to the issue I picked up with the certificates on Hybrid Configuration Wizard.
When you start planning your Office 365 hybrid solution you'll need to plan for a trusted third party certificate. It's required for ADFS and Transport Layer Security to work between the on-premise environment and Office 365.
During my deployment my customer requested the certificate and everything was looking good until I wanted to configure mail flow. The Hybrid Configuration Wizard wasn’t picking up the certificates on the transport servers.
After a little (actually... A LOT) of digging I found that my customer was pushing certificates via GPO, even the third-party trusted Root CA certificates.
Upon checking the certificate in EMS I noticed that the Exchange Certificate was showing a RootCAType of GroupPolicy which made me uncomfortable.
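You can check this yourself with something like the following (run in the Exchange Management Shell):

```powershell
# RootCAType shows where the root certificate of the chain came from
Get-ExchangeCertificate | Format-List Subject, Services, RootCAType
```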
I then removed the certificates from the GPO and manually imported the trusted CA, which resulted in the RootCAType showing Registry.
This still didn’t solve my issue, so I removed the Root CA certificate from the machine and opened a secure HTTPS session to the third-party certificate site, and the machine downloaded the root CA.
When I checked the certificate after this it displayed ThirdParty, which then resulted in the HCW picking up the certificates on the transport servers... yay!
You might also get an error message when the HCW runs:
Execution of the Get-FederationInformation cmdlet had thrown an exception. This may indicate invalid parameters in your Hybrid Configuration settings.
In my case Autodiscover was working properly and all pre-flight checks were good so I retried it a couple of times and it went through. :-)
I hope the above assists anyone with the same issue….it was a very interesting one to say the least.
Some more posts coming....
Cheers,
Have you ever wondered where Exchange looks to find the certificate for inbound and outbound TLS for the SMTP service?
Well, it’s actually documented in detail here and here. If you read the TechNet articles you can come to the conclusion that Enable-ExchangeCertificate for the SMTP service will stamp the msExchServerInternalTLSCert attribute on the transport server object in AD with the certificate thumbprint you specify in the command.
But what happens if you run Enable-ExchangeCertificate and choose not to overwrite the certificate for SMTP?
If you check the certificate list now, you’ll notice two certificates; both are valid, and apparently both are assigned to SMTP (which isn't really the case, but the output seems to confuse a lot of people). So which one will Exchange use, since I didn’t overwrite the current certificate?
Luckily we know from the TechNet articles that Exchange queries AD to match the thumbprint in the msExchServerInternalTLSCert attribute during the certificate selection process. So, how can I quickly check which one of the certificates is actually being used?
Import-Module ActiveDirectory

$transportServers = Get-TransportServer | Select-Object -ExpandProperty Name
$forest = [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain().Forest.ToString()
$searchBase = "CN=Configuration,DC=" + $forest.Replace('.', ',DC=')
$results = @()

foreach ($transportServer in $transportServers)
{
    $certThumbprint = Get-ADObject -Filter "ObjectClass -eq 'msExchExchangeServer' -and name -eq '$transportServer'" -Properties * -SearchBase $searchBase -Server $forest |
        ForEach-Object { [Security.Cryptography.X509Certificates.X509Certificate2]$_.msExchServerInternalTLSCert } |
        Select-Object -ExpandProperty Thumbprint

    $results += New-Object PSObject -Property @{"TransportServer"=$transportServer;"SMTPCertificate"=$certThumbprint}
}

$results | Select-Object TransportServer, SMTPCertificate
Michael Hall Service Engineer Office 365
I've been working with the Exchange Team on some additions to the Exchange 2013 Role Requirements calculator scripts lately and I picked up an interesting issue with the Information Store service on Exchange 2013 while testing my changes to the scripts.
I wanted to share this experience to increase visibility to our customers in case someone encounters the same problem as it took me a while to figure out what the root cause of the problem was.
As you probably already know, the Exchange 2013 Role Requirements calculator scripts create the databases, the DAG and the mailbox database copies depending on the input you provide to the calculator. This makes deployment way easier and allows you to deploy servers exactly the same way, because the PowerShell scripting engine does all the work for you.
So here’s the lowdown:
Once I reboot the server or restart the Information Store service, the service will just fail to start with the errors seen below.
Error: 0x8004010f
Event log error: -2147221233
Event log error: No databases found on this server.
So on to troubleshooting:
With the above all good, I then take a look at the logic that the Managed Information Store service code uses to start the service and I find a piece of code that refers to maximum active databases in the start-up process.
I get that “aha” moment and check my MaximumActiveDatabase parameter on the Exchange Server.
Get-MailboxServer UCLABZ-EX15-01 | Select MaximumActiveDatabases
Lo and behold, it’s 0. I change the value to 1 to test:
Get-MailboxServer UCLABZ-EX15-01 | Set-MailboxServer -MaximumActiveDatabases 1
Test the service and it starts successfully.
Now why on earth is it 0?
After double-checking the PowerShell code and the input files I find a small issue which results in the Maximum Active Databases value always being 0.
I fix the code and life is good again…for real this time. :-)
So, would you ever want to set MaximumActiveDatabases to 0 on an Exchange Server?
The simple answer is NO. We have better means of limiting database activations on servers – think DatabaseCopyActivationDisabledAndMoveNow and DatabaseCopyAutoActivationPolicy.
You want to use MaximumActiveDatabases to limit the number of databases that can be active on the server at any one time, but you don’t want to use this parameter to disable activation of mailbox database copies.
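For example (the server name is an example):

```powershell
# Preferred ways to prevent activations, instead of setting MaximumActiveDatabases to 0
Set-MailboxServer EX15-01 -DatabaseCopyAutoActivationPolicy Blocked
Set-MailboxServer EX15-01 -DatabaseCopyActivationDisabledAndMoveNow $true
```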
I hope the above helps someone out there, because it took me a while to find the root cause of this very interesting issue.
PS: A bug has been filed for the above, but we’ll have to wait and see if it will make the “cut”.
Office 365 customers running on Wave 14 tenants are gradually being upgraded to Wave 15 tenants. Although communication is being sent out to customers, I wanted to post this to my blog to raise visibility of the post-upgrade requirements. Customers need to plan for the dates below and need to be prepared. You should be aware of the following prerequisites, which are required by the specified deadlines, or you might experience service interruptions after those dates:
Action: Please make sure you have deployed the following Service Packs.
Public updates for Office 2007 and Office 2010:
For detailed information see here: http://community.office365.com/en-us/wikis/upgrade/guide-to-the-office-365-service-upgrade-for-2013.aspx#SystemRequirementsCheck
The above updates are also required for any customer starting out directly with a new Wave 15 tenant. I found that if the Office suite is not updated as per the prerequisites the Outlook 2010 profile might get corrupted once the mailbox has been moved from on-premise to Exchange Online. This essentially then requires that a new profile be created.
I also urge all current Office 365 Wave 14 tenants to regularly check the FOPE Administration Console for any FOPE-related notifications – these notifications are not shown on the Office 365 Portal, and you might miss important upcoming FOPE changes that could have an impact on your environment.
I had some time recently to fine tune my lab environment on my laptop and thought I would share my experiences.
Microsoft was kind enough to provide me with a drive upgrade a couple of months back – a 250 GB Solid State Drive – Thank you Microsoft!!
I also took the liberty of purchasing an additional 128 GB solid state drive to act as my OS drive.
So my setup is as follows:
I didn’t want to utilize my precious SSD space for ISO images, so I still have my 1 TB SATA external HDD that contains the bulky ISOs I use during my deployments.
My goal in my lab was very simple –
First order of business was creating the virtual switches.
At this stage I didn’t want to create a situation where my VMs might affect a customer production network where I might be working – and yes, this has happened to me once before… many, many moons ago… I won’t go into detail, but I started handing out DHCP IPs to machines… hehehe, sounds funny now!
Anyway, the vSwitch configuration on Hyper-V is configured as follows:
External vSwitches:
Internal vSwitch:
As seen above I planned on having two subnets internally within the Hyper-v environment and then my external home subnet for Internet access. I’ll then configure routing (I’ll cover this later) between these subnets to allow internet access and RDP from my home network.
Next step was to create my template VM.
This VM’s VHD will be used as the parent disk for all my other VMs in the lab environment by using the differencing disk feature.
Start deploying my VMs as follows:
Save the following as a PowerShell script and run it in a PowerShell window (Run as Administrator, and change paths where required):
Import-Module Hyper-V
$LABVMs = @('DC01','DC02','EX01','EX02','EX03','ADFS','SQL','LYNC')

foreach ($LABVM in $LABVMs)
{
    New-VHD -ParentPath "C:\LAB\TEMPLATE\Template.vhd" -Differencing -Path "D:\LAB\$LABVM\$LABVM.vhd"
    New-VM -VHDPath "D:\LAB\$LABVM\$LABVM.vhd" -VMName $LABVM -MemoryStartupBytes 1024MB -SwitchName vSwitch-Internal-Access
    Start-VM -Name $LABVM
}
While the above executes go and download Vyatta Virtual Router here: http://www.vyatta.com/download/trial_software/VyattaCore
The reason I’m using Vyatta is that it has a very small footprint – less than 800 MB.
Vyatta will act as my router between the subnets so it will require three interfaces:
IMPORTANT: There are currently some issues with Windows 8 Hyper-v and Wireless LAN adapters – specifically routing via Wireless LAN adapters. If you assign the Vyatta interface to your Wireless LAN adapter the routing will NOT work. On top of that if you use the Wireless adapter and assign a static route on your Windows 8 machine to other subnets you will encounter an OS crash – I’ve replicated this behaviour a couple of times. Use the physical LAN adapter or you’ll be troubleshooting for hours. I’ve communicated this to the Hyper-v dev team as well – I’ll update this blog if I get a fix for this issue.
So to continue….
Create the Vyatta VM with the downloaded ISO and log in with vyatta/vyatta.
My config is very simple – no weird NATs or anything like that, just plain routing. I assign IPs to the interfaces I created:
#configure
#set interfaces ethernet eth0 address 192.168.41.254/24
#set interfaces ethernet eth2 address 192.168.42.254/24
#set interfaces ethernet eth3 address 192.168.43.254/24
#set system gateway-address 192.168.41.1
#set service ssh port 22
#save
#commit
As this is lab environment I didn’t bother with any other config as I didn’t require it.
Config looks like this:
interfaces {
    ethernet eth0 {
        address 192.168.41.254/24
        duplex auto
        hw-id 00:15:5d:29:04:3f
        smp_affinity auto
        speed auto
    }
    ethernet eth1 {
        address 192.168.42.254/24
        hw-id 00:15:5d:29:04:3e
    }
    ethernet eth2 {
        address 192.168.43.254/24
        hw-id 00:15:5d:29:04:40
    }
    loopback lo {
    }
}
service {
    ssh {
        port 22
    }
}
Next step was to add some static routes on my home router:
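The routes amount to something like this (shown here with Windows route.exe as an illustration – your home router's UI will differ):

```powershell
# Send the internal lab subnets via the Vyatta external interface
route -p add 192.168.42.0 mask 255.255.255.0 192.168.41.254
route -p add 192.168.43.0 mask 255.255.255.0 192.168.41.254
```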
Next was configuring all my VM’s with static IP’s and using gateway IP of 192.168.42.254 or 192.168.43.254 depending on which subnet I want the VM in.
Test network – life is good:
And then you can create and configure your AD DS, Exchange Server 2010, SQL, Lync etc. as per usual. If you configure DHCP on your internal subnets it won’t leak onto the physical network, but you still have access to those VMs via RDP from your physical network at home.
I can get about 12 VMs running on my laptop with this parent/child configuration on the VHDs, while the SSD drives are still idling along nicely and the laptop is still as responsive as always. My bottleneck at this stage is RAM.
Hope this helps someone. I read that a lot of people struggle with Vyatta, but it’s really that simple.
Until next time…..
If you've been following the Exchange Team blog you'd know that I released version 2.0 of PelNet recently.
I thought I’d share some info on my blog to cover some of the benefits in using this tool for any Exchange environment.
PelNet v2.0 has been updated to optimize execution time in large environments, and the new code allows an administrator to test SMTP from a large list of transport servers to multiple smarthosts in a matter of minutes.
In the Exchange Online Dedicated environment I’m able to validate transport on thousands of transport servers in a matter of minutes. This increases the efficiency of the change team and incident response teams tremendously when dealing with send connector changes, smarthost changes and mail flow issues. This is amazing considering that most SMTP validation and/or troubleshooting in the community still happens over Telnet.
The other new feature is the welcomed addition of TLS validation. The administrator has the ability to test TLS from all the transport servers to a specific smarthost (this can actually be any remote organization server, such as Exchange Online Protection in a hybrid scenario). PelNet will essentially try and find the TLS certificate assigned for SMTP and use that certificate for the encryption stream. The other cool thing about this is that an administrator has the ability to override the certificate logic by providing a thumbprint of a certificate to test prior to assigning the SMTP service.
With the new performance enhancements, PelNet could also be used to check SMTP mail flow daily if an administrator sets up some daily task. This way the administrator is being proactive in mitigating any mail flow issues.
PelNet can also be used in change management processes, such as pre- and post-change validation. If a transport change occurs, such as a new send connector, smarthost or MX change, PelNet can be used to test all these scenarios to ensure a successful change prior to production switchover.
If you are starting to get excited about this, head over to the Exchange Team blog, read about and download the tool, and then go do some PelNetting!
Michael Hall Service Engineer Office 365: Exchange Online
Got a call from a customer informing me that inbound mail was bouncing with 550 Unable to relay – but only on one of their Exchange 2010 Edge servers. The other server worked fine.
First thing I ask is whether the receive connector on the Edge server has Anonymous permissions allowed – upon which they say Yes…..sigh!
Anyway, I decide to have a look, and everything looks good on the receive connectors. I decide to renew the Edge subscription and test the Edge subscription between the Edge server and the AD site – all tests are successful. Yet mail is still “Unable to relay” inbound… sigh again!
I test with the FullCompareMode parameter and notice that the AcceptedDomainStatus shows NotSynchronized. I check the accepted domains on the internal server and all looks good, but nothing gets synchronized to that one Edge server.
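For reference, the check looks like this from the internal Hub Transport server:

```powershell
# Force a sync, then do a full object-by-object comparison against the Edge server
Start-EdgeSynchronization
Test-EdgeSynchronization -FullCompareMode
```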
I create a test accepted domain, force synchronization and notice the domain is being synchronized… hmmm, what now… I can’t just delete and recreate the authoritative domain!
I decide to rename the display names of the accepted domains and force synchronization. This populated the accepted domains on the affected Edge server.
Renamed the display name back and confirmed the name changes on the Edge server.
Tested inbound mail….working…sorted….YAY!!
I’m not sure why this occurred, but I’m sure someone else out there will experience the same issue.
Happy Edge’ing.
I’ve been involved in relatively large Exchange Server 2010 multi-tenanted solutions recently and during my deployments we decided to move away from a dedicated server for Offline Address Book (OAB) generation to increase the efficiency of OAB generation while decreasing the time it takes for the OABs to generate.
In a multi-tenanted environment you have many address lists to cater for all the tenants you’re providing services to. This can cause a great deal of OAB generation processing on a dedicated server, which slows down the rate at which OABs are generated, because they are all being generated by the same server. Depending on the size of the environment you could potentially have hundreds of OABs that get generated daily.
To get around this problem we distributed the OABs to multiple generation servers based on a random pattern. It’s relatively easy to script the move to a random server with PowerShell:
Write-Host "Getting all Exchange 2007 address books."
[array]$OABs = Get-OfflineAddressBook | ?{$_.ExchangeVersion -lt 14}

foreach ($OAB in $OABs)
{
    $OABName = $OAB.Name
    $OABGenServer = Get-MailboxServer | ?{$_.AdminDisplayVersion.Major -ge 14} | Get-Random -Count 1
    Write-Host "Moving Offline Address Book $OABName to $OABGenServer"
    Move-OfflineAddressBook "$OABName" -Server $OABGenServer -Confirm:$false -ea SilentlyContinue
}
To keep this randomized distribution you need to ensure that the Control Panel doesn’t pass a server value to the new-offlineaddressbook cmdlet. This will ensure that the cmdlet chooses a random server during Offline Address Book creation and keeps the OABs distributed over all your mailbox servers.
Some references to very valuable Exchange Server 2010 multi-tenancy documentation:
Until next time....
I just received feedback from the Dev Team that an updated version of DirSync Tool has been released that fixes the below Exception in Windows 2003 Domain Controller environments.
Grab the updated tool in the Admin Portal - version number is 6411.0007.
I can confirm that this version fixed the above issue in my environment.
One giant leap for mankind