Exchange 2010 added multiple features to improve the resiliency of messaging services. Notable additions included client throttling to ensure that a single mailbox would not consume excessive resources and mailbox quarantine.
Mailbox quarantine is enabled by default, and many admins first discover the feature when a mailbox gets quarantined and the user loses access to it. In Exchange 2010 the default quarantine value is 6 hours. Think about that for a minute: if a mailbox gets quarantined at 09:00, it will not exit quarantine until 15:00. Whilst a mailbox is quarantined there is no access to it; it can only be opened by passing the open-as-admin flag. The mailbox cannot be moved, indexed, or opened in OWA/EAS/Outlook whilst it is quarantined. Quarantined really does mean quarantined…
Some customers may be OK with the Exchange 2010 6-hour default quarantine duration; others, not so much. TechNet also states that Exchange 2013 has a 24-hour default quarantine duration.
Let's take a look at the feature to investigate what we can configure, and answer a few other questions along the way.
Exchange 2010 has a single store.exe process into which all the databases are loaded, so it is imperative that this critical process is as well defended as possible. If the store were to crash or get hung up on a single thread, all mailboxes would be affected. (Exchange 2013 implements multiple store.exe processes to mitigate the impact.) By analysing the status of mailbox threads, Exchange can determine if a single mailbox is impacting the store. It is possible that a single mailbox with corrupted data could cause store to crash or become unresponsive. If this happens repeatedly, that would be considered a poison mailbox. As described on TechNet, there are a couple of behaviours that store considers naughty.
A mailbox that exhibits these behaviours is tagged, and a count is kept. So that this data is non-volatile and made available to multiple servers in a DAG, it is persisted in the registry. In a DAG the cluster service replicates this information via the cluster database. If a mailbox does get tagged with one of these issues you will see the entry here:
HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-{dbguid}\QuarantinedMailboxes\{mailbox guid}
With CrashCount or LastCrashTime holding the necessary data.
The key is not created until a mailbox has crashed the store at least once.
The default behaviour is to quarantine a mailbox identified as causing a failure or deadlock three times within a two hour timespan. Store tags the mailbox as quarantined in the registry and the user cannot access the mailbox. The only access allowed is if the Open_As_Admin flag is passed; you can do this with MFCMapi, for example, to take a look at the mailbox contents.
The QuarantineState and QuarantineTime registry keys are used to keep track of the quarantine status.
Mailboxes are automatically released from quarantine once they have been quarantined for longer than the quarantine duration (MailboxQuarantineDurationInSeconds) since their LastCrashTime.
If the mailbox causes no further issues, the registry is cleaned up: when there are no failures in the previous two hours and the mailbox is not currently quarantined, the entries are removed.
This is where it gets a little bit interesting! There are a couple of registry keys that we want to examine:
MailboxQuarantineCrashThreshold - number of failures which cause mailbox to be quarantined. By default this is three (3).
MailboxQuarantineDurationInSeconds – amount of time a mailbox will stay quarantined. This is specified in seconds. By default the Exchange 2010 value is 21,600 (6 hours).
TechNet documents the time period for resetting quarantined mailboxes is controlled by the registry key:
HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-{dbguid}\MailboxQuarantineDurationInSeconds.
Unfortunately there is a lot of content out on them interwebs stating that the registry value must be created in other locations. For example, a quick search suggested these locations:
HKLM\SYSTEM\CurrentControlSet\Services\MSexchangeIS\ParameterSystem\Servername\Private-dbguid\Quarantined Mailboxes\MailboxQuarantineDurationInSecond
or
HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Private-{dbguid}\QuarantinedMailboxes\MailboxQuarantineDurationInSecond
Note that the MailboxQuarantineDurationInSeconds value is NOT below the QuarantinedMailboxes key, it is actually above it.
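As a sketch, the value can be created with PowerShell on the mailbox server; the server name and database GUID below are placeholders (Get-MailboxDatabase can supply the real GUID), and the key layout is the Exchange 2010 one described above.

```powershell
# Hypothetical example - substitute your own server name and database GUID.
# Get-MailboxDatabase "DB01" | Select-Object Guid   # returns the {dbguid}
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\EX2010-01\Private-{11111111-2222-3333-4444-555555555555}"

# Create (or update) the quarantine duration: 7200 seconds = 2 hours
New-ItemProperty -Path $path -Name "MailboxQuarantineDurationInSeconds" `
    -PropertyType DWord -Value 7200 -Force
```

Note that the value sits directly under the Private-{dbguid} key, not under QuarantinedMailboxes.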
Exchange 2013 has the Enable-MailboxQuarantine and Disable-MailboxQuarantine cmdlets, which allow the admin to easily place a mailbox into and out of quarantine. These cmdlets are not available in Exchange 2010, so there is no simple way to test the quarantine values or to validate that a change was applied, apart from waiting for the next occurrence.
For Exchange 2010, which does not have the Disable-MailboxQuarantine cmdlet, we have to take a different approach. KB 2603736 states that to take a mailbox out of quarantine immediately, all we need to do is delete the mailbox's GUID entry from under the QuarantinedMailboxes registry key. Store should process the registry key deletion, and since the registry is the authoritative source of quarantine state, the mailbox should be released. If no action is taken, the mailbox will exit quarantine after the MailboxQuarantineDurationInSeconds period has expired.
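A hedged sketch of the KB 2603736 approach follows; the server name and GUIDs are placeholders, and Get-MailboxStatistics can supply the real mailbox GUID.

```powershell
# Hypothetical values - substitute your own server name, database GUID and mailbox GUID.
# (Get-MailboxStatistics "user-1").MailboxGuid   # returns the mailbox GUID
$quarantineKey = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\EX2010-01\Private-{11111111-2222-3333-4444-555555555555}\QuarantinedMailboxes"

# Delete the mailbox's GUID entry to release it from quarantine immediately
Remove-Item -Path "$quarantineKey\{aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee}" -Recurse
```

As always with direct registry edits, test in a lab first and have a backup of the key.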
There are a few ways to look at the status of a given mailbox:
When a mailbox is quarantined, EventID 10018 will be logged into the application event log and this can be easily picked up by monitoring tools.
We can take a peek at the registry to see if there are any mailbox GUIDs listed there under:
HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\<Server Name>\Private-{db guid}
Then run Get-Mailbox <GUID> to see which mailbox it is.
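Those two steps can be combined into a quick sketch; the server name and the exact key layout are assumptions based on the paths shown above.

```powershell
# Enumerate any quarantined mailbox GUIDs for every database on this server
$base = "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeIS\EX2010-01"
Get-ChildItem $base | Where-Object { $_.PSChildName -like "Private-*" } | ForEach-Object {
    $qKey = Join-Path $_.PSPath "QuarantinedMailboxes"
    if (Test-Path $qKey) {
        Get-ChildItem $qKey | ForEach-Object {
            # Resolve each registry GUID to the actual mailbox
            Get-Mailbox ($_.PSChildName.Trim('{}'))
        }
    }
}
```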
Get-MailboxStatistics for a mailbox also has a property to indicate if a mailbox is quarantined.
Get-MailboxStatistics Administrator | Select DisplayName, IsQuarantined | Format-Table -AutoSize
ExBPA will also check to see if a mailbox is quarantined.
Finally, Exchange also exposes a performance monitor counter indicating the number of quarantined mailboxes. SCOM will pick this up with the Exchange Management Pack, or you can look at the counter manually:
MSExchangeIS Mailbox\Quarantined Mailbox Count
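For example, Get-Counter can read it from any PowerShell session on the mailbox server; the counter instances are per-database, so treat the instance name below as a sketch.

```powershell
# Read the quarantined mailbox count performance counter for all instances
Get-Counter -Counter "\MSExchangeIS Mailbox(*)\Quarantined Mailbox Count" |
    Select-Object -ExpandProperty CounterSamples |
    Select-Object InstanceName, CookedValue
```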
Cheers,
Rhoderick
Security is an integral aspect of running modern IT operations. There is a clear understanding that we need to protect our IT assets, company data and personal identifiable information. So when we discuss a migration to Office 365, security is an inevitable topic. One aspect that we need to discuss is around account lockout, and how to protect our Active Directory accounts as part of the overall cloud solution.
Methods to protect user accounts can be broken down into a few categories that include:
Customers wish to look at such options to mitigate the impact from:
In a future post I'll circle back on the underlying account lockout policy discussion, so let's park that one for right now. What I do want to cover in this post is ADFS and how it can impact account lockouts should you have an aggressive lockout policy enabled.
Update 3-9-2014: Please also review this post for an issue requiring a hotfix to resolve with Extranet Account Lockout Protection
In previous versions of ADFS there was no native mechanism within ADFS itself to prevent brute force attacks. If AD has an account lockout policy set, then an external entity hammering the ADFS logon page could lock out an AD account. If an entity knew the user account name, they could access the ADFS proxy page and enter a bad password for that account. Below is an example for ADFS 2.0 running on Windows 2008 R2.
In order to mitigate this, the external firewall in front of the ADFS server could be set to only allow HTTPS traffic to the ADFS endpoint from the IP address ranges that are part of Office 365. Since this is a manual configuration, the onus is on the on-premises firewall administrator to keep the IP ranges up to date, else authentication may fail. In this traffic flow, the HTTPS traffic coming to the on-premises ADFS proxy server is initiated from Office 365. As discussed at MEC, this will have to be a planning point for the upcoming OAuth changes in Q2 this CY. As part of the authentication changes, by default clients will connect directly to the ADFS servers; either the firewall rules will need to be changed, or modifications made to the clients to use the legacy behaviour. More on that when the team announces the details later this year! This was discussed publicly at MEC in the What's new in Authentication for Outlook 2013 session.
If you did not get to MEC, then the content is available here for your viewing pleasure!
Apart from locking down the firewall, Windows Server 2012 R2 ADFS now adds a feature to natively allow the ADFS proxy to prevent AD DS accounts from being locked out! This is the Extranet Lockout feature. This is similar to the TMG 2010 Soft Account Lockout feature that was introduced in TMG 2010 SP2. It is said to be "soft" as the AD DS account is not locked, and after a period of time the ADFS server then automatically allows the account to retry the authentication.
Only Windows Server 2012 R2 has the Extranet Lockout feature. For this and other reasons you want to look at deploying Windows Server 2012 R2 for your ADFS infrastructure. Some reasons include:
As mentioned above, only ADFS 2012 R2 has the Extranet Lockout feature, so the ADFS infrastructure must be upgraded to, or installed at, this version. For upgrade steps, please check out the excellent AskPFEPlat blog!
The Extranet Lockout feature is enabled on the ADFS server, but you must also deploy an ADFS proxy for it to function.
Traffic must hit the ADFS proxy. If you publish the ADFS server instead or your network misroutes the traffic and bypasses the proxy, the Extranet Lockout feature will not work as expected. Trust me, I’ve been there – but more on that later in a separate blog post!!
The other base ADFS requirements and prerequisites are also documented on TechNet.
As with the other articles in the recent ADFS posts, this is again in the Tailspintoys.ca lab. The ADFS namespace is adfs.tailspintoys.ca. The environment looks like the diagram below. The ADFS server is deployed on the internal corporate network and is joined to AD. The ADFS proxy is deployed in the DMZ, and is in a workgroup. Since we are using ADFS 2012 R2, the ADFS proxy uses Web Application Proxy (WAP) rather than a dedicated ADFS proxy role as in older versions.
For the details in building this lab please see the previous series of three posts.
The diagram was drawn with the April 2014 Visio Stencils for Office 365.
AD DS is set with a domain account lockout policy that states an account will lock out after 10 invalid logon attempts. This can be seen in the GPO Management Console:
And for those LAN Manager freaks out there the command prompt too!
PS -- This was taken from a DC that does not have the PDC emulator role
So we know that after 10 attempts the account will lock out. What happens if we launch a mini-DoS attack on some guy called user-1@tailspintoys.ca via the ADFS sign in page?
Browse to the ADFS sign in page in IE11 at https://adfs.tailspintoys.ca/adfs/ls/idpinitiatedsignon.htm
And we enter a bad password 11 times…
Staying with the LAN Manager freak show, look what happened to that poor user, their account is now locked out.
On the ADFS server we see the 10 failed logon attempts before the account locked out:
Zooming in on one event, we see that the response from AD is that this is an unknown user name or bad password. Well, that's the generic text string. If we really want to know what is going on, we look at the status and sub status codes. In this case the status 0xC000006D indicates a logon failure, and the sub status 0xC000006A tells us that the password was not correct. Well, that's because I was making like Jean-Michel Jarre on the keyboard to produce a random string in the password entry field. Well, less the light show…
Not good! A malicious person ( moi ! ) managed to do a denial of service on this account.
AD FS Extranet Lockout to the rescue!
In the context of AD FS in Windows Server 2012 R2, Web Application Proxy functions as a federation server proxy. Web Application Proxy also serves as a barrier between the Internet and your corporate applications.
Web Application Proxy provides a number of security features to protect your corporate network, such as your users and your resources, from external threats. One of these features is AD FS extranet lockout. In case of an attack in the form of authentication requests with invalid (bad) passwords that come through the Web Application Proxy, AD FS extranet lockout enables you to protect your users from an AD FS account lockout. In addition to protecting your users from an AD FS account lockout, AD FS extranet lockout also protects against brute force password guessing attacks.
There are three ADFS settings that we need to look at with respect to the Extranet Lockout feature.
The intent is that the ADFS administrator will define a maximum number of failed authentication requests that the ADFS proxy will allow in a certain time period. Once these authentication attempts have been used up for that specific user, then the ADFS server will go into <Seinfeld> soup Nazi -- no auth for you!!! </Seinfeld>. The ADFS proxy server will then cease attempting to log the user on. By doing so, it does not hammer on the AD account thereby locking the AD account out. This protects the AD account from losing access to all resources, i.e. it is still functional on the corporate network and can get to file and print resources etc.
One thing to note: the value for ExtranetLockoutThreshold on the ADFS server must be set lower than the AD DS account lockout threshold, else the AD DS account will lock out before the ADFS proxy ceases to attempt authentication, and enabling this on ADFS is pretty pointless!!
This is a global setting on the ADFS server, and the settings apply to all domains that the ADFS server can authenticate. Please plan accordingly.
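A quick sanity check of the two thresholds can be sketched with the ActiveDirectory and ADFS modules; this assumes the ActiveDirectory module is installed on the ADFS server and that the default domain password policy applies.

```powershell
Import-Module ActiveDirectory

# ADFS stops proxy authentication attempts at this threshold
$adfsThreshold = (Get-AdfsProperties).ExtranetLockoutThreshold

# AD DS locks the account at this threshold (0 = accounts never lock out)
$adThreshold = (Get-ADDefaultDomainPasswordPolicy).LockoutThreshold

if ($adThreshold -ne 0 -and $adfsThreshold -ge $adThreshold) {
    Write-Warning "ExtranetLockoutThreshold ($adfsThreshold) is not lower than the AD DS lockout threshold ($adThreshold)"
}
```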
To configure the AD FS extranet lockout, you must set three properties on the AD FS service object. To set the configuration, use Set-ADFSProperties and Get-ADFSProperties to verify.
For example, you can use the following oneliner PowerShell command to set the AD FS extranet lockout:
Set-AdfsProperties -EnableExtranetLockout $true -ExtranetLockoutThreshold 15 -ExtranetObservationWindow (New-Timespan -Minutes 30)
(The command is one line, please ensure that it does not word wrap)
You could split it out into multiple commands if desired:
$Timespan = New-TimeSpan -Minutes 30
Set-AdfsProperties -EnableExtranetLockout $True -ExtranetLockoutThreshold 15 -ExtranetObservationWindow $Timespan
Get-AdfsProperties | Format-List *extranet*
(Each command is one line, please ensure that it does not word wrap)
Opening up PowerShell on the ADFS server, and querying for the *Extranet* values we can see the default Extranet Lockout settings. Extranet Lockout is disabled by default.
Where is the default value for the lockout threshold coming from? Since it is disabled, 2147483647 is the maximum value in an Int32 data type. Run [int32]::maxValue in PowerShell to see.
Let’s now configure the ADFS server so that the ADFS proxy will lock out after 4 bad attempts in a 60 minute observation window.
$Timespan = New-TimeSpan -Minutes 60
Set-AdfsProperties -EnableExtranetLockout $True -ExtranetLockoutThreshold 4 -ExtranetObservationWindow $Timespan
Get-AdfsProperties | Fl *extranet*
When I first tried to configure this feature, I ran into this wonderful error:
Set-AdfsProperties : A parameter cannot be found that matches parameter name 'ExtranetLockoutEnabled'.
At line:1 char:20
+ Set-AdfsProperties -ExtranetLockoutEnabled $True
+                    ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Set-AdfsProperties], ParameterBindingException
    + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.IdentityServer.Management.Commands.SetServicePropertiesCommand
Huh???
As we saw above, there is definitely a property on the ADFS object that is called ExtranetLockoutEnabled – so why was I unable to set it?
This is probably because I have been spoilt with Exchange PowerShell since 2006. The attributes are carefully thought out: after running a Get cmdlet we just change it to the matching Set cmdlet and change what we need. For that reason I get frustrated with Windows PowerShell, especially the AD cmdlets. Why do I have to have a separate cmdlet for each tiny task? Anyway, I digress…
In this case the developer chose a different parameter name: to set the ExtranetLockoutEnabled property we have to use the EnableExtranetLockout parameter. The two names are different.
It’s always the little things that get me……
After waiting a minute for the ADFS proxy to pick up the change, we can test to make sure this is working!
Remember that AD DS is set to lockout after 10 invalid logons, and AD FS will cease after 4 failed authentication attempts.
Again we browse to the ADFS sign in page in IE11 at https://adfs.tailspintoys.ca/adfs/ls/idpinitiatedsignon.htm
This time we will pick on user-2@tailspintoys.ca, just so that it is easy to distinguish the two scenarios in the event logs.
Again, the account is hammered with 11 bad logon attempts.
This time however, there are only 4 failed audit events on the AD FS server:
Please note: the events from 02:10 to 02:11 were the user-1 logon attempts at the top of this blog post.
Let’s check the status of the User-2 account
Even after 11 bad logon attempts via the ADFS proxy, the account is still active – boyashaka !
Just to prove what is in the security event log of the ADFS server, let's look for audit failure events over the last day for each of these test accounts. To look for this data, PowerShell will be the weapon of choice. Note that Get-EventLog is not used, as it is lame when it comes to filtering, so we will use Get-WinEvent, which is way more powerful. Why the difference? Get-EventLog was in the initial PowerShell release and Get-WinEvent was added in PowerShell 2.0…
The code used below is:
$StartTime = (Get-Date).AddDays(-1)
Get-WinEvent -FilterHashtable @{Logname="Security"; ProviderName="Microsoft-Windows-Security-Auditing"; Data="user-2@tailspintoys.ca"; StartTime=$StartTime} | Measure-Object
Get-WinEvent -FilterHashtable @{Logname="Security"; ProviderName="Microsoft-Windows-Security-Auditing"; Data="user-2@tailspintoys.ca"; StartTime=$StartTime}
The $StartTime variable goes back 24 hours from when it was created, i.e. one day. We then create a hashtable and look only for failure security audits matching the given username; in the example above this is the user-2@tailspintoys.ca account. Measure-Object is used to save us having to count…
First up is user-1. Note that there are 10 failed logon attempts, which corresponds to the AD DS account lockout policy. The timeframe of the "attack" was 02:10 – 02:11.
For User-2, note that there are only 4 failed logon attempts. This correlates to the AD FS Extranet Lockout protection setting. Also note that since the AD FS lockout setting is lower than the AD DS account lockout policy the AD DS account is not locked out.
In addition to the content and links in the previously published ADFS blog posts there is also the following:
Troubleshooting AD FS
The AD FS 2012 R2 Extranet Lockout feature makes it very easy to provide protection from AD DS account lockout scenarios where the internal AD account is locked out due to malicious or fat-fingered end user logon attempts.
One thing to note is that applications that require ADFS to federate the authentication request will not be able to do so whilst the account is in a state of Extranet Lockout. Because of this, some organisations may still choose to restrict access to their ADFS proxy via firewall rules and to set "reasonable" AD account lockout policies. We can talk more next time about why locking an AD account out after 3 bad attempts is not so good…
An updated System Center Operations Manager (SCOM) Management Pack (MP) for Exchange 2010 has been released to the Microsoft Download Center. This is build 14.03.0038.004 of the MP.
Edit 3-9-2012 Updated MP build 14.03.0038.004 released
EDIT 9-8-2012 - Updated MP build 14.03.0038.003 is currently unavailable due to an issue.
EDIT 8-8-2012 - Updated MP build 14.03.0038.003 released.
EDIT 27-6-2012 - NOTE that the MP is currently unavailable due to an issue. Please see the comments
The Microsoft Exchange Server 2010 Management Pack includes a complete health model, extensive protocol synthetic transaction coverage, and a full complement of diagnostics-based alerts and service-oriented reporting, including mail flow statistics. Alerts are classified by impact and recovery action, and are now processed by a new component called the Correlation Engine. The Correlation Engine suppresses duplicate alerts whenever possible to help front-line monitoring technicians monitor Exchange more efficiently. Most diagnostic information used in the Exchange 2010 Management Pack, including events and performance counters, is specifically engineered for monitoring. Very little tuning is required to monitor your Exchange organization. The Exchange 2010 Management Pack will scale with your environment.
The Exchange 2010 Management Pack is engineered for organizations that include servers running Exchange 2010. It isn't based on the Exchange 2007 Management Pack. Therefore, you'll notice some differences in the way you deploy and configure the Exchange 2010 Management Pack if you used the Exchange 2007 Management Pack in the past.
Please ensure that you download and thoroughly read over the Exchange 2010 Management Pack guide before installing the Management Pack. There are important considerations including:
Via this blog we have discussed the fundamentals of Exchange Autodiscover, and also issues around the Set-AutodiscoverVirtualDirectory cmdlet.
At this point the message should be out there with regards to how Outlook functions internally and externally to locate Autodiscover and the difference that having the workstation domain joined makes. Lync on the other hand is a different beastie!
Both the Outlook Client and the Lync client want to get to the Exchange Autodiscover endpoint, but they differ in how to get to Sesame Street. **
At one of my recent engagements the customer experienced a situation around Lync 2010 and Exchange 2010 integration. Exchange was successfully upgraded to Exchange 2010, and OCS was still in use. When piloting Lync 2010 and the Lync 2010 client they noted errors in the Lync client. There were a couple of reasons for this. The required configuration on the load balancer was not in place, and the device’s firmware was not at the required build level.
When we investigated what Lync and Exchange Autodiscover were doing, we noted that Lync was not locating the Exchange Autodiscover endpoint. Hmm. That's a bit strange, innit? Outlook was running perfectly, and all the domain joined clients were always able to locate Autodiscover by querying for the SCP. The Lync client, on the other hand, does not leverage SCP when locating Exchange Autodiscover.
Dave Howe’s whitepaper Understanding and Troubleshooting Microsoft Exchange Server Integration discusses this in more detail and is a great read! The one line that distils the important message is:
Unlike Outlook, which uses an SCP object to locate the Autodiscover URL, UC clients and devices will only use the DNS-based discovery method.
There is also a flow diagram in the whitepaper showing the DNS records used.
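As a quick illustration, the DNS record a UC client would try can be verified from a client machine; autodiscover.&lt;smtpdomain&gt; is the standard pattern, and the domain below is a placeholder.

```powershell
# Check the DNS-based discovery record a Lync client would use for
# Exchange Autodiscover (substitute your own SMTP domain)
Resolve-DnsName -Name "autodiscover.contoso.com"
```

Resolve-DnsName requires Windows 8 / Server 2012 or later; on older clients, nslookup serves the same purpose.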
Note that nowhere in Dave’s article does he change or view the properties of the Autodiscover virtual directory. The same is also true in Prerequisites for Integrating Microsoft Lync Server 2013 and Microsoft Exchange Server 2013.
There are some differences between Exchange 2007 and 2010 with regards to how the requests get serviced. Exchange 2007 only does POX (Plain Old XML), whereas newer Exchange versions also do SOAP (Simple Object Access Protocol). Lync can leverage SOAP; Outlook kicks it old school with POX.
The customer above had deployed Exchange, but had not created any internal DNS records for Autodiscover.domain.com. Technically this was not needed for their Exchange + Outlook design, as they have an environment with HA load balancers and multiple CAS servers behind each load balancer. Their Autodiscover namespace had been set as the load balancer FQDN, so the FQDN Autodiscover.domain.com was not on any of the Exchange CAS certificates. And as mentioned in the busting Autodiscover myths post on Set-AutodiscoverVirtualDirectory, their Autodiscover URI was previously configured by running:
Set-ClientAccessServer -AutoDiscoverServiceInternalUri "https://lb.contoso.com/Autodiscover/Autodiscover.xml"
In order to change this they:
** - That 8 foot tall yellow bird still freaks me out!!
One of my favourite features in Windows 8 is client side Hyper-V. This allows me to run multiple virtual machines on my Lenovo W530 laptop without having to remote to other machines when I am on site with customers.
As you can imagine I have a plethora of VMs for a variety of situations, and I ran into a bit of a pickle when moving my Exchange 2003/2007/2010 migration workshop lab over to Windows 8. This was a different series of VMs from the one that would not import.
I thought that I’d share this to save others the time. So if you are seeing issues around:
Then you may be interested in seeing what was going on with my naughty integration components! Oooohh matron…
Update 20-1-2014: For a giggle I repeated the exercise with Windows Server 2012 R2 to see if that would upgrade the original VM's ICs correctly. It did not and I still had the same behaviour as shown here.
The machine that we will look at is a Windows Server 2003 x86 Enterprise Edition VM with SP2 installed. Upon starting the VM up we can see that the Windows 2008 R2 SP1 Hyper-V Integration Components (6.1.7601.17514) are currently installed as this VM was last running on that version of Hyper-V.
The VM was copied to my Windows 8 laptop, imported and then powered on.
Device Manager shows that there is a virtual network card present, and as expected there are a couple of unrecognised devices. This is because the Windows 2008 R2 version of the Integration Components will allow the machine to start and function, but does not recognise all of the virtual hardware in a Windows 8/2012 hosted VM.
Looking at the properties of one of the unknown devices we can see the following:
The above is something that updating the Integration Components to a supported version will fix. So moving on…
The virtual network card has the following version information. If your name is Norman Potter, Hyper-V version spotter, then you might notice that something is not quite pukka with this……
And to check the version information of the file on disk: netvsc50.sys 6.0.6001.18004
Hold those thoughts for now, and we’ll come back to them.
From the VM’s Action menu when “Insert Integration Services Setup Disk” is selected, the expected prompt appears in the VM.
So far so good, so let’s hit OK to start the upgrade.
The upgrade commences and then states that it has successfully completed. The VM is then immediately restarted as requested.
After the reboot, things are not looking good. In fact, it’s all gone a bit Pete Tong…..
In the event logs multiple services have failed to start. There is also no network connectivity to any other machines. IPCONFIG shows no IPs are present and there is no valid TCP/IP configuration.
It looks like a new party game has been invented, it’s called “Hunt the network card”. Can you see the network card listed below in Device Manager?
Nope, neither can I…..
When showing system devices, some of the system devices are unable to start.
Properties of the above devices shows the following:
In the C:\Windows folder there was nothing of interest in the new Integration Services Setup Logs
C:\Windows\VMGuestSetup.log -- Debug log written by setup.exe. A new section is appended to the log for each installation or uninstallation
C:\Windows\VMGcoInstall.log -- Debug log written by the guest components co-installer. A new section is appended to the log for each installation or uninstallation
I also had an older log VMGInst.log dating from April 2008. Opening this log would reveal the issue, but I’ll keep that to the end else it will spoil the surprise
When troubleshooting virtual machine issues it sometimes helps to think what would you do if there were a physical machine? Well on Windows 2003 I would look at the setupapi.log. And sure enough the setupapi.log had errors within it:
#I443 No installed Authenticode(tm) catalogs matching catalog name "oem0.CAT" were found that validated file "C:\WINDOWS\system32\DRVSTORE\gencounter_46FBE6659A8242F714DAEF17D05DB56E79E85446\gencounter.inf" (key "gencounter.inf"). Error 1168: Element not found.
#I443 No installed Authenticode(tm) catalogs matching catalog name "oem16.CAT" were found that validated file "C:\WINDOWS\system32\DRVSTORE\gencounter_46FBE6659A8242F714DAEF17D05DB56E79E85446\gencounter.inf" (key "gencounter.inf"). Error 1168: Element not found.
#I443 No installed Authenticode(tm) catalogs matching catalog name "oem17.CAT" were found that validated file "C:\WINDOWS\system32\DRVSTORE\gencounter_46FBE6659A8242F714DAEF17D05DB56E79E85446\gencounter.inf" (key "gencounter.inf"). Error 1168: Element not found. [2012/07/04 18:35:17 2808.202 Driver Install]
Are those files there, and are they corrupt?
Looking at the system showed that these oem*.inf files were still present on the file system. When opened in Notepad it is possible to see that they relate to the Hyper-V ICs, and the dates indicate that they are old and outdated. Windows is not attempting to install the later versions of the Hyper-V drivers; it is getting stuck on the old files, which are failing Authenticode checks.
Again placing this in context, on a physical system with driver issues one technique is to purge the offending driver from the system and then re-install it fresh. In the case of this Windows 2003 VM, I then uninstalled the latest Hyper-V ICs and restarted the machine.
Once logged back on after the reboot, I did a quick search to see what other remnants have been left over from previous installs. The results can be seen below. After checking that these are all Hyper-V files I moved all of them out from their current location to a backup location.
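On a system with PowerShell available, the hunt for leftover Hyper-V driver files can be sketched like this (the Server 2003 guest itself predates PowerShell, so there the built-in search served instead; the file name patterns are examples of typical IC driver names):

```powershell
# Look for old Hyper-V IC driver packages left in the INF store
Get-ChildItem "C:\Windows\inf\oem*.inf" |
    Select-String -List -Pattern "vmbus|netvsc|gencounter|storvsc" |
    Select-Object Path

# And anything lingering in the driver store used by the IC installer
Get-ChildItem "C:\Windows\system32\DRVSTORE" -Recurse -ErrorAction SilentlyContinue |
    Select-Object FullName, LastWriteTime
```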
Now that all traces of the previous Integration Components have been removed, let’s install the latest Integration Components. This was done using the same method as originally discussed above.
After the required restart, the mouse integration, networking and all unknown hardware elements are working fine. The gods of plug & pray are on our side.
All the virtual hardware components were correctly enumerated and installed on the first attempt. This also includes the virtual network card which is rather handy as this was the Domain Controller for this particular lab.
Troubleshooting certain issues inside a VM involves the same thought process as on a physical machine. VMs <James Doohan> Cannae Change the Laws of Physics ! </James Doohan> so physical reality still matters.
In the case of this VM, it had existed on every single version of Hyper-V starting from the initial beta build, and had managed to retain one of the older drivers. In Windows Vista and later, the move to Component Based Servicing (CBS) alleviated a lot of the issues with chaining multiple .inf files and the dependency problems that they create. Take a peek at this for more details on CBS.
The VMGInst.log showed that the initial installation of the Hyper-V beta ICs was the start of this issue. This log had entries like:
C:\WINDOWS\system32\CatRoot\{F750E6C3-38EE-11D1-85E5-00C04FC295EE}\netvsc.cat with error 0x57
KB article You cannot install some updates or programs discusses some of the issues around such errors.
And for some more historical fun, remember that the initial release of Hyper-V was announced as coming < 180 days after the release of Windows 2008. It was installed through the update Description of the update for the release version of the Hyper-V technology for Windows Server 2008 (950050), which is dated the 26th of June 2008. This DC VM was built using the beta version in April 2008.
Note that you may not have the VMGInst.log present on your system as this log is not used in the more recent Hyper-V versions.
Exchange 2007 & 2010 use a different message routing design than Exchange 2003. This is an important aspect to understand when transitioning from Exchange 2003 upwards due to the change in behaviour.
TechNet has articles that discuss these concepts:
Exchange 2010 http://technet.microsoft.com/en-us/library/aa998825.aspx
Exchange 2007 http://technet.microsoft.com/en-us/library/aa998825(EXCHG.80).aspx
Exchange 2003 http://technet.microsoft.com/en-us/library/aa998800(EXCHG.65).aspx
Exchange 2007 and 2010 base their routing topology off the defined AD site design, and do not carry forward the Exchange 2000/2003 concept of Routing Groups (though mail can still be delivered to servers in the older Routing Groups during a transition).
When sending a message from one Exchange site to another, the Hub Transport server will determine the least cost route and use only that one route. This means:
Determining the least cost route is straightforward if the site link costs sum to different totals. But should you design an Exchange 2007 environment with two paths between sites that have the same cumulative cost, you may be led to think that both connectors will be used in a load balancing scenario. Even in this case, a single least cost path is determined. Having two paths with the same cost does NOT mean that both connections will be used! Least cost routing really does mean least cost, i.e. use the one route that has the lowest cost and only that route. Let's dig into this a little.
Here is an example of an Exchange organisation that has 5 sites. The relevant cost of each link is shown on the respective segment.
Now, let’s review three examples to see how the message path is determined:
Example 1 A message that is being relayed from Site A to Site D can follow two possible routing paths: Site A-Site B-Site D and Site A-Site C-Site D. The costs assigned to the IP site links in each routing path are added to determine the total cost to route the message. In this example, the routing path Site A-Site B-Site D has an aggregate cost of 20. The routing path Site A-Site C-Site D has an aggregate cost of 10. Routing selects path Site A-Site C-Site D.
Example 2 A message is being relayed from Site B to Site D. There are three possible routing paths: Site B-Site D with a cost of 15, Site B-Site E-Site C-Site D with a cost of 15, and Site B-Site A-Site C-Site D with a cost of 15. Because more than one routing path results in the same cost, routing selects the routing path Site B-Site D, because it has the least number of hops.
Example 3 A message is being relayed from Site A to Site E. There are two possible routing paths: Site A-Site B-Site E with a cost of 10, and Site A-Site C-Site E with a cost of 10. Both routing paths have the same cost and same number of hops. The alphanumeric order of the Active Directory sites immediately before Site E is compared. Site B has a lower alphanumeric value than Site C. Therefore, routing selects the routing path Site A-Site B-Site E.
After the least cost routing path has been determined, Exchange 2007 & 2010 routing does not consider alternative routing paths.
Why did Example 3 behave the way that it did? Multiple factors come into play when choosing the least cost path, which include:
This can be summarised as:
So, least cost really does mean least cost!
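The selection rules above can be sketched in a few lines of Python. The site link costs below are reconstructed so that they reproduce the three examples; the original diagram is an image, so treat the exact values as assumptions:

```python
# Site link costs between the five AD sites (assumed values,
# chosen to be consistent with Examples 1-3 above).
links = {
    ("A", "B"): 5, ("A", "C"): 5,
    ("B", "D"): 15, ("B", "E"): 5,
    ("C", "D"): 5, ("C", "E"): 5,
}

def cost(a, b):
    # Site links are bidirectional, so look the edge up both ways.
    return links.get((a, b), links.get((b, a)))

def neighbours(site):
    return [y for x, y in links if x == site] + [x for x, y in links if y == site]

def simple_paths(path, dst):
    # Enumerate every loop-free path from path[-1] to dst.
    if path[-1] == dst:
        yield path
        return
    for n in neighbours(path[-1]):
        if n not in path:
            yield from simple_paths(path + [n], dst)

def least_cost_path(src, dst):
    # Tie-breakers mirror the routing rules: lowest total cost first,
    # then fewest hops, then the lowest alphanumeric site name working
    # backwards from the site immediately before the destination.
    return min(
        simple_paths([src], dst),
        key=lambda p: (sum(cost(a, b) for a, b in zip(p, p[1:])),
                       len(p),
                       tuple(reversed(p[:-1]))),
    )
```

Running this against the three examples returns a single path each time, which is the whole point: even with equal-cost alternatives, exactly one route is ever chosen.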
Exchange 2013 CU6 has been released to the Microsoft Download Center! Exchange 2013 has a different servicing strategy than Exchange 2007/2010 and utilises Cumulative Updates (CUs) rather than the Rollup Updates (RU/UR) which were used previously. CUs are a complete installation of Exchange 2013 and can be used to install a fresh server or to update a previously installed one. Exchange 2013 SP1 was in effect CU4, and CU6 is the second post-SP1 release. CU6 contains AD DS schema changes, so please test and plan accordingly!
Update 1-9-2014: If you are deploying into a mixed environment with Exchange 2007, you need to review KB2997209 Exchange Server 2013 databases unexpectedly fail over in a co-existence environment with Exchange Server 2007
Update 1-9-2014: Please also review the comments here for an issue that affects Hybrid mailboxes.
Update 9-9-2014: If you are deploying into a mixed environment with Exchange 2007, you also need to review KB 2997847 You cannot route ActiveSync traffic to Exchange 2007 mailboxes after you upgrade to Exchange 2013 CU6
This is build 15.00.0995.029 of Exchange 2013, and the update is helpfully named Exchange2013-x64-cu6.exe, which is a great improvement over the initial CUs that all had the same file name! Details for the release are contained in KB2961810.
As with previous CUs, CU6 follows the new servicing paradigm that was previously discussed on the blog. The CU6 package can be used to perform a new installation, or to upgrade an existing Exchange Server 2013 installation to CU6. You do not need to install Cumulative Update 1 or 2 for Exchange Server 2013 before installing CU6. Cumulative Updates are, well, cumulative. What else can I say…
After you install this cumulative update package, you cannot uninstall the cumulative update package to revert to an earlier version of Exchange 2013. If you uninstall this cumulative update package, Exchange 2013 is removed from the server.
Note that customised configuration files are overwritten on installation. Make sure you have any changes fully documented!
CU6 contains AD Schema updates – please test and plan accordingly!
What do I mean by that? Well, you need to ensure that you are fully informed about the caveats with the CU and are aware of all of the changes that it will make within your environment. Additionally, you will need to test the CU in a lab which is representative of your production environment.
Edit: 24-1-2013: A second article using PowerShell 3.0 is here
Edit: 30-1-2013: A third article using advanced PowerShell 3.0 is here.
Edit 28-8-2013: – A similar issue with the setting being removed is present in Windows Server 2012. Article with workaround is here.
The previous behaviour in Windows was to register all IP addresses that were entered on the network card’s property sheet into DNS if the “Register this connection’s address in DNS” option was selected (Which is the default).
For servers with a single NIC which has one IP bound to it this works great, as we can dynamically register changes in IP addressing into DNS and all is well in the world. What happens, though, when you start to complicate matters and have additional IPs and additional NICs?
In the two NIC scenario, it is easy to set one NIC to register into DNS and then clear the register in DNS option for the second NIC. That allows for the IP on the first NIC to be registered and the IP on the second NIC will not. This would be a common scenario for a server that had multiple interfaces where one would be used for a management/backup purpose and end users should not be able to resolve the server’s name to the management IP as their traffic would not be allowed to route to that interface. That’s fine but what about the scenario of a single NIC with multiple IPs bound to it? An example would be a web server with multiple IPs for different web sites.
Previously, if you did not want the server to register all of its IPs into DNS, then the register in DNS option would have to be disabled and the administrator would have to manually maintain the DNS registration information in the DNS zone. If this was not done then all the IPs that were bound to the server would be registered in DNS and clients potentially would be returned an incorrect IP.
Windows 2008 and 2008 R2 now have the option to selectively register IPs into DNS. This capability was first released as an update for Windows 2008 and 2008 R2. After you install this hotfix, you can use a new flag of the Netsh command to assign IP addresses that will not be registered in DNS and will not be used as the source address for outgoing traffic. This new flag is the skipassource flag.
For example, the following command adds an IPv4 address that will not be registered in DNS or used as a source address:
Netsh Int IPv4 Add Address <Interface Name> <IP Address> SkipAsSource=True
"Interface Name" is the name of the interface for which you want to add a new IP address.
"IP Address" is the IP address you want to add to this interface.
For Example:
Netsh Int IPv4 Add Address Team-1 172.16.5.10 SkipAsSource=True
How can I see what IPs have this flag set? To list the IPv4 addresses that have the skipassource flag set to true, run the following command:
Netsh int ipv4 show ipaddresses level=verbose
Note the “Skip As Source” entries in the below:
That’s all pretty neat, but if you are wondering what your interface name is, check the GUI or run the following Netsh command to show the interfaces:
Netsh Interface Show Interface
Note the Interface Name column on the right hand side.
Which corresponds to the GUI:
Note that once you have configured the above, if you then go to the regular GUI and make changes there, the SkipAsSource flag is overwritten unless you have installed the update to correct this known issue.
Consider the following scenario:
The issue occurs because the GUI does not recognize the skipassource flag, and the GUI uses an incorrect method to handle changes of IP settings. When IP settings are changed, the GUI deletes all the old IP addresses from the old list and then adds new IP addresses to the new list. Because the GUI does not know the skipassource flag, the GUI does not copy the flag when IP addresses are added to the list. Therefore, the skipassource flag is cleared.
As mentioned 6 months ago, Exchange 2003 is rapidly approaching the end of its extended product lifecycle. The ship is getting ready to sail, and hopefully this should not surprise anyone.
Exchange 2003 certainly has had a good run over the last 10 years since it was released back in October 2003. It made ActiveSync available to the mainstream (Mobile Information Server was not required), implemented anti-spam measures and also introduced RPC/HTTPS, which was later renamed to Outlook Anywhere. Do you still remember the "fun" of manually configuring the OA settings in the registry, prior to SP1?
Exchange had certainly reached a point during the Exchange 2003 lifecycle which meant radical changes were required to let Exchange grow and overcome certain limitations. Moving to x64 based architecture allowed Exchange to leverage more memory and move beyond the painful limitation of only being able to work with 4GB of memory. PowerShell simplified the administrator's life compared to writing vbscript, and the Exchange System Manager with options buried at numerous levels was also replaced.
While we can now look back on those halcyon days, it’s almost time to say goodbye to this old friend....
Six months from today, Exchange 2003 will reach the end of its extended support window. Additionally, other products will also cease to be supported on the same date. Please make sure that the 8th of April 2014 is in your calendar!
Outlook 2003 will transition out of extended support on 8th of April 2014
Exchange Server 2003 will transition out of extended support on 8th of April 2014
Windows XP will transition out of extended support on 8th of April 2014
Exchange 2010 SP2 will transition out of support on 8th of April 2014
And as a non Exchange specific item, please also note Windows 2003:
Windows Server 2003 will transition out of extended support on 14th of July 2015
The Lifecycle site’s FAQ has more information and details on support options if you are not able to complete your migration prior to the end of support dates.
Make sure that you are able to migrate to a supported product prior to the support expiration date. Security updates will not be provided for products that are not supported.
Generally the Exchange external Autodiscover DNS entry is configured as a regular A record. Sometimes a service record (SRV) is used instead. Since I have a habit of forgetting the syntax for quickly querying the SRV record, this is one of those shared bookmark posts!
Nslookup is the tool of choice here! Its documentation can be found on TechNet.
There are two ways to run nslookup – interactive and noninteractive. Noninteractive is good when you know that you only want to query a single piece of data. Let’s take a peek at an example of each. We will check for the _autodiscover SRV record in the Tailspintoys.ca domain. The record points to a host called autod.tailspintoys.ca. The full format of this record is:
_autodiscover._tcp.tailspintoys.ca
For more reading on SRV records, take a peek at this article. And for Autodiscover in general please review this post.
Open a cmd prompt and run
nslookup -q=srv _autodiscover._tcp.tailspintoys.ca
You should see the below output. Note that the SRV record's hostname will be the Autodiscover target.
In this example we launched Nslookup in noninteractive mode. The query type is set to SRV and then we checked for the _autodiscover._tcp.tailspintoys.ca record.
Open a cmd prompt and run nslookup. At the interactive prompt, set the query type to SRV and then enter the record name:
nslookup
set q=srv
_autodiscover._tcp.tailspintoys.ca
In this example we launched Nslookup in interactive mode, so we can interact with it. The query type is set to SRV and then we checked for the _autodiscover._tcp.tailspintoys.ca record.
For reference purposes, the steps to add an Autodiscover SRV record will be something like the below. They are intended to be general, so please follow any specific notes or items from the DNS registrar you are using!
In your DNS zone editor add an SRV record with the following information:
Service: _autodiscover
Protocol: _tcp
Name: Enter one of the following values:
Enter @ if your registered domain is your cloud-based domain. For example, if your registered domain is contoso.com and your cloud-based domain is contoso.com, enter @.
Enter the subdomain name if your cloud-based domain is a subdomain of your registered domain. For example, if your registered domain is contoso.com, but your cloud-based domain is the subdomain test.contoso.com, enter test.
Priority: 10 (or as per your design)
Weight: 10 (or as per your design)
Port: 443
Target: server.contoso.com (in the example above this was autod.tailspintoys.ca)
TTL: Verify that an appropriate TTL is selected; 1 hour is a common default. (If you are approaching a migration, this should be decremented to allow for a quicker cutover.)
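As an aside, Priority and Weight only matter once multiple SRV records exist. How a client is expected to use them (per RFC 2782) can be sketched as follows; the two records below are hypothetical, modelled on a primary plus a backup Autodiscover endpoint:

```python
import random

# Sketch of SRV record selection (RFC 2782): the lowest Priority
# wins outright; Weight only load-balances proportionally among
# records that share that lowest priority.
def pick_srv_target(records, rng=random):
    """records: list of (priority, weight, port, target) tuples."""
    lowest = min(priority for priority, _, _, _ in records)
    candidates = [r for r in records if r[0] == lowest]
    total_weight = sum(weight for _, weight, _, _ in candidates)
    if total_weight == 0:
        return candidates[0]
    pick = rng.randint(1, total_weight)
    running = 0
    for record in candidates:
        running += record[1]
        if running >= pick:
            return record

# Hypothetical records: a primary and a backup Autodiscover endpoint.
records = [
    (10, 10, 443, "autod.tailspintoys.ca"),
    (20, 10, 443, "autod-dr.tailspintoys.ca"),
]
```

With these values the priority-20 record is only ever consulted if the priority-10 target is unreachable, which is why "10 (or as per your design)" is a sensible default for a single-endpoint deployment.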
In addition to the SRV record pointing us to the correct location, we also have to ensure that there is a valid certificate installed which is published to the Internet. This could be something as simple as a NAT rule with the appropriate firewall rule for TCP 443 or it could involve TMG or a load balancer's APM.
The choice as they say - is yours!!
Previously we discussed how to customise Exchange 2010 RBAC to delegate creating mail enabled contacts. The intent of that original post was to allow for the creation of simple mail enabled contacts that would facilitate sharing the SMTP address of a person outside the Exchange organisation.
Marc commented on that post, as the provided solution did not fit his requirements. There was no intent in the original post to go and modify the details of the contact objects. Phone number, office and location, amongst others, were not required. Marc, on the other hand, does want these fields to be editable. So what to do? Time for some more RBAC fun!!!
Let’s assume that we are at the end state of the previous post: all those steps were followed and the custom RBAC role of “AD-Contact-Editors” exists as documented there. This would involve running the following PowerShell commands:
New-ManagementRole -Name AD-Contact-Editors -Parent "Mail Recipient Creation"
Get-ManagementRoleEntry -Identity AD-Contact-Editors\* | Where-Object {$_.Name -ne 'Get-MailContact'} | Remove-ManagementRoleEntry
Add-ManagementRoleEntry -Identity "AD-Contact-Editors\New-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors\Remove-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors\Get-Recipient"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors\Set-Recipient"
New-ManagementRoleAssignment -Role AD-Contact-Editors -User User-1
Each command should be entered on a single line, though it may wrap when displayed.
Note that the Management Role has been assigned to an individual account – please see the note below on assigning to a group for production usage.
The AD-Contact-Editors custom management role should contain the following cmdlets:
Opening up ECP shows that User-1, who is assigned this custom RBAC role, can create and delete contacts. Note that there is no details button, thus a contact cannot be edited once created; additionally, the capability to edit other properties of the contact is not exposed.
As mentioned in the other post, AD-Contact-Editors is a copy from the built in “Mail Recipient Creation” role since that was the only role which has the New-MailContact cmdlet. However, it does not contain the Set-MailContact cmdlet, and since the cmdlet does not exist in the parent role it can never be added to the child role. So if we want to provide the capability to run Set-MailContact then we will need to do some more delegation work in RBAC!
As before, let’s see where the Set-MailContact cmdlet lives:
Get-ManagementRole –Cmdlet Set-MailContact
We can see that Set-MailContact lives in three places. In this case we want to leverage the built-in Mail Recipients role, so we shall make a copy of that to work with! For lack of imagination, this new custom role will be called AD-Contact-Editors-Recipients.
Let’s create the role by copying the parent role:
New-ManagementRole -Name AD-Contact-Editors-Recipients -Parent "Mail Recipients"
The Mail Recipients role contains a lot of cmdlets that are unwanted for this task, and since AD-Contact-Editors-Recipients is a direct copy, it too will contain the same unwanted cmdlets. Let’s remove all cmdlets apart from Get-MailContact.
Get-ManagementRoleEntry -Identity AD-Contact-Editors-Recipients\* | Where-Object {$_.Name -ne 'Get-MailContact'} | Remove-ManagementRoleEntry
The above should be on one line, but may wrap.
After pressing “A” to accept that all the cmdlets will be removed, let’s check the current contents of our custom AD-Contact-Editors-Recipients role:
Get-ManagementRoleEntry -Identity AD-Contact-Editors-Recipients\*
That looks good! It only contains the Get-MailContact cmdlet; all the others were removed. Now we can add back the couple of cmdlets that we need by running all of these commands:
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Set-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Enable-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Disable-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Set-Contact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Get-Contact"
I won’t screenshot you to death, so here is just one image showing the above being added back in:
Again, let’s check to see the cmdlets contained within the role:
Looking good!
Update 1-6-2014: The focus of the post was on the above items, since creating the custom RBAC role is the hardest part of the process. Initially this role was directly assigned to an end user called “User-1”, but I have also added the steps so that the role assignment can be done to a Role Group as well. Thanks for the feedback, folks! For testing purposes individual assignment is fine, though in production groups should be used. Just the same as for NTFS permission assignment…
If you want to assign directly to an individual account, then execute:
New-ManagementRoleAssignment -Role AD-Contact-Editors-Recipients -User User-1
Alternatively if you want to assign to a brand new Role Group called “AD-Contact-Editors-RG” then execute:
New-RoleGroup AD-Contact-Editors-RG -Description "Contact Creators" -Roles "AD-Contact-Editors-Recipients"
If assigning the Management Role to a group, we also need to ensure that the test account is added to the Role Group:
Add-RoleGroupMember -Identity AD-Contact-Editors-RG -Member User-1
And then we can run Get-RoleGroupMember to verify the membership addition.
Get-RoleGroupMember -Identity AD-Contact-Editors-RG
Moving on now to the most important part, testing!
Logging onto ECP as the test account (User-1), now shows that the details button has been enabled when looking at the contact objects:
We can edit the contact, and fill in some meaningless data!
Once the changes have been saved, AD users and computers then displays the updated fields:
Since our test user now has RBAC Role Assignments to both the AD-Contact-Editors and AD-Contact-Editors-Recipients custom roles, they are now able to create, delete and modify contact objects! The two RBAC Role Assignments can be seen below:
To summarise the commands used:
New-ManagementRole -Name AD-Contact-Editors-Recipients -Parent "Mail Recipients"
Get-ManagementRoleEntry -Identity AD-Contact-Editors-Recipients\* | Where-Object {$_.Name -ne 'Get-MailContact'} | Remove-ManagementRoleEntry
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Set-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Enable-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Disable-MailContact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Set-Contact"
Add-ManagementRoleEntry -Identity "AD-Contact-Editors-Recipients\Get-Contact"
New-ManagementRoleAssignment -Role AD-Contact-Editors-Recipients -User User-1
If needed we could have scoped RBAC down even further and limited the actual contact fields they were allowed to modify. Maybe that’s a post for another day!
* – The super eagle eyed out there may notice the deliberate image issue above
Exchange 2013 SP1 has now been released to the Microsoft Download Center!
The build number for Exchange Server 2013 SP1 is 15.00.0847.032
Update 5-3-2014: If you are using custom transport agents please see Third-party transport agents cannot be loaded correctly in Exchange Server 2013 The script you need to remediate the issue is linked from that KB, and is available directly from the download center.
Update 14-4-2014: As discussed in post “Patching Exchange? Don’t Overlook Outlook”, make sure to keep Outlook updated. KB 2863911 Outlook 2013 profile might not update after mailbox is moved to Exchange 2013
Update 14-4-2014: Please see KB 2958434 if deleting Exchange 2013 databases. Users cannot access mailboxes in OWA or EAS when mailbox database is removed.
As always please read the release notes! Exchange 2013 SP1 contains schema changes and you will need to go through testing and validation to ensure a smooth rollout!
As noted at the bottom of the Exchange Team post, the next Exchange 2013 update will be CU5. Thus we could call SP1 “CU4”, but Service Packs mark an important milestone for support lifecycle events, so do think of this as a Service Pack!
You can download Exchange 2013 SP1 from here.
Scroll down below for details on each of these features!
KB 2926248 contains the description for Exchange 2013 SP1.
Windows Server 2012 R2 is now a supported operating system in Exchange 2013 SP1. Exchange 2013 SP1 also supports installation in Active Directory environments running Windows Server 2012 R2. For more information, see Exchange 2013 System Requirements.
Edge Transport servers minimize attack surface by handling all Internet-facing mail flow, which provides SMTP relay and smart host services for your Exchange organization, including connection filtering, attachment filtering and address rewriting. For more information, see Edge Transport Servers.
OWA users can report missed spam in the Inbox (false negatives) and messages misclassified as spam (false positives) to Microsoft for analysis by using the built-in junk email reporting options. Depending on the results of the analysis, the anti-spam filter rules for the Exchange Online Protection (EOP) service can then be adjusted. For more information, see Junk Email Reporting in OWA.
Microsoft Exchange Online and Exchange 2013 SP1 now support S/MIME-based message security. Secure/Multipurpose Internet Mail Extensions (S/MIME) allows people with Office 365 mailboxes to help protect sensitive information by sending signed and encrypted email within their organization. Administrators can enable S/MIME for Office 365 mailboxes by synchronizing user certificates between Office 365 and their on-premises server and then configuring Outlook Online to support S/MIME. For more information, see S/MIME for Message Signing and Encryption and the Get-SmimeConfig cmdlet reference.
Data loss prevention (DLP) Policy Tips are informative notices that are displayed to senders in Outlook when they try sending sensitive information. In Exchange 2013 SP1, this functionality has been extended to both the desktop version of Outlook Web App and the mobile version (named OWA for Devices). You’ll see it in action if you have an existing DLP policy with Policy Tips turned on for Outlook. If your policy already includes Policy Tips for Outlook, you don't need to set up anything else. Go ahead and try it out!
Not currently using Policy Tips? To get started, Create a DLP Policy From a Template, then add a policy tip by editing the policy and adding a Notify the sender with a Policy Tip action.
Deep content analysis is a cornerstone of DLP in Exchange. Document Fingerprinting expands this capability to enable you to identify standard forms used in your organization, which may contain sensitive information. For example, you can create a fingerprint based on a blank employee information form, and then detect all employee information forms with sensitive content filled in.
SP1 provides an expanded set of standard DLP sensitive information types covering an increased set of regions, which makes it easier to start using the DLP features. SP1 adds region support for Poland, Finland and Taiwan. To learn more about the new DLP sensitive information types, see Sensitive Information Types Inventory.
Deploying and configuring Active Directory Federation Services (AD FS) using claims means that multifactor authentication can be used with Exchange 2013 SP1, including support for smartcard and certificate-based authentication in Outlook Web App. In a nutshell, to implement AD FS to support multifactor authentication:
Install and configure Windows Server 2012 R2 AD FS (this is the most current version of AD FS and contains additional support for multifactor authentication). To learn more about setting up AD FS, see Active Directory Federation Services (AD FS) Overview
Create relying party trusts and the required AD FS claims.
Publish Outlook Web App through Web Application Proxy (WAP) on Windows Server 2012 R2.
Configure Exchange 2013 to use AD FS authentication.
Configure the Outlook Web App virtual directory to use only AD FS authentication. All other methods of authentication should be disabled.
Restart Internet Information Services on each Client Access server to load the configuration.
For details, see Using AD FS claims-based authentication with Outlook Web App and EAC
SSL offloading is supported for all of the protocols and related services on Exchange 2013 Client Access servers. By enabling SSL offloading, you terminate the incoming SSL connections on a hardware load balancer instead of on the Client Access servers. Using SSL offloading moves the SSL workloads that are CPU and memory intensive from the Client Access server to a hardware load balancer.
SSL offloading is supported with following protocols and services:
Outlook Web App
Exchange Admin Center (EAC)
Outlook Anywhere
Offline Address Book (OAB)
Exchange ActiveSync (EAS)
Exchange Web Services (EWS)
Autodiscover
Mailbox Replication Proxy Service (MRSProxy)
MAPI virtual directory for Outlook clients
If you have multiple Client Access servers, each Client Access server in your organization must be configured identically. You need to perform the required steps for each protocol or service on every Client Access server in your on-premises organization. For details, see Configuring SSL Offloading in Exchange 2013
Although there are both private (internal network) and public (external network) settings to control attachments using Outlook Web App mailbox policies, admins require more consistent and reliable attachment handling when a user signs in to Outlook Web App from a computer on a public network such as at a coffee shop or library. Go here for details, Public Attachment Handling in Exchange Online.
Internet Explorer 10 and Windows Store apps using JavaScript support the Application Cache API (or AppCache), as defined in the HTML5 specification, which allows you to create offline web applications. AppCache enables webpages to cache (or save) resources locally, including images, script libraries, style sheets, and so on. In addition, AppCache allows URLs to be served from cached content using standard Uniform Resource Identifier (URI) notation. The following is a list of the browsers that support AppCache:
Internet Explorer 10 or later versions
Google Chrome 24 or later versions
Firefox 23 or later versions
Safari 6 or later versions (OS X/iOS only)
Information workers in Exchange on-premises organizations need to collaborate with information workers in Exchange Online organizations when they are connected via an Exchange hybrid deployment. New in Exchange 2013 SP1, this connection can now be enabled and enhanced by using the new Exchange OAuth authentication protocol. The new Exchange OAuth authentication process will replace the Exchange federation trust configuration process and currently enables the following Exchange features:
Exchange hybrid deployment features, such as shared free/busy calendar information, MailTips, and Message Tracking.
Exchange In-place eDiscovery
For more information, see Configure OAuth Authentication Between Exchange and Exchange Online Organizations.
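Once the OAuth configuration steps from the linked article have been run, the trust can be validated from the on-premises side. The target URI and mailbox below are placeholders for your environment:

```powershell
# Sketch: validate the OAuth trust from on-premises Exchange to Exchange Online.
# TargetUri and Mailbox are placeholders - substitute your own values.
Test-OAuthConnectivity -Service EWS `
    -TargetUri "https://outlook.office365.com/ews/exchange.asmx" `
    -Mailbox "user@contoso.com" -Verbose
```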
New in Exchange 2013 SP1, hybrid deployments are now supported in organizations with multiple Active Directory forests. For hybrid deployment features and considerations, multi-forest organizations are defined as organizations having Exchange servers deployed in multiple Active Directory forests. Organizations that utilize a resource forest for user accounts, but maintain all Exchange servers in a single forest, aren’t classified as multi-forest in hybrid deployment scenarios. These types of organizations should consider themselves a single forest organization when planning and configuring a hybrid deployment.
For more information, see Hybrid Deployments with Multiple Active Directory Forests.
Windows Server 2012 R2 enables you to create a failover cluster without an administrative access point. Exchange 2013 SP1 introduces the ability to leverage this capability and create a database availability group (DAG) without a cluster administrative access point. Creating a DAG without an administrative access point reduces complexity and simplifies DAG management. In addition, it reduces the attack surface of a DAG by removing the cluster/DAG name from DNS, thereby making it unresolvable over the network.
For more information, see High Availability and Site Resilience.
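Creating such a DAG is done by passing None as the DAG IP address. A minimal sketch, assuming Exchange 2013 SP1 and Windows Server 2012 R2 DAG members (the DAG, witness server and directory names are placeholders):

```powershell
# Sketch: create a DAG without a cluster administrative access point.
# Requires Windows Server 2012 R2 members; names below are placeholders.
New-DatabaseAvailabilityGroup -Name "DAG1" `
    -WitnessServer "FS01" -WitnessDirectory "C:\DAG1" `
    -DatabaseAvailabilityGroupIpAddresses ([System.Net.IPAddress]::None)
```

Note that because the DAG name is no longer registered in DNS, it cannot be used as a connection point by backup or management tools that expect to resolve it.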
As with previous CUs, SP1 follows the new servicing paradigm that was previously discussed on the blog. This package can be used to perform a new installation, or to upgrade an existing Exchange Server 2013 installation to SP1. You do not need to install Cumulative Update 1 or 2 for Exchange Server 2013 RTM when you are installing SP1.
After you install this service pack, you cannot uninstall it to revert to an earlier version of Exchange 2013. If you uninstall the service pack, Exchange 2013 is removed from the server.
Once the service pack installation has completed, restart the server. The server should be restarted even if you are not prompted to do so.
Edit 23-5-2013: Added Headers as getting too long
Edit 23-5-2013: Added reference for 940012
Edit 24-7-2013: Added reference for 2557323
While Outlook 2007, 2010 and now 2013 offer many, many, many * improvements over the older Outlook 2003 client there are still many, many, many * large enterprises that use Outlook 2003. At this point in the lifecycle of Outlook 2003, customers should be looking to migrate to a newer version. Most customers that I talk to are doing that; typically in conjunction with a desktop refresh. Exchange 2013 will not support the Outlook 2003 client, and in addition there are upcoming support expiration dates that we should all be familiar with:
The Lifecycle site’s FAQ has more information and details on support options if you are not able to complete your migration prior to the end of support dates. And while you are there, also take a look at the date that Exchange 2010 SP2 will transition out of support: Exchange 2010 SP2 will transition out of support on 8th April 2014. Why, you may ask? Well, as per the lifecycle policy, once Exchange 2010 SP3 shipped there was a 12 month period for customers to move to the new service pack.
For those organisations that are still using Outlook 2003 there are some considerations when coexisting with Exchange 2010. They are listed in no particular order, and I’ll come back and periodically update this list based on comments to the blog, and also add other issues that I see and hear about. Please do not read this as a definitive list; consider it more a public bookmark that we can share.
"Cannot open your default e-mail folder" error when users try to open their mailboxes in Outlook after migration to Exchange 2010 – users unable to logon to Outlook after migrating their mailbox to Exchange 2010. Exchange 2010 OWA works OK. This is due to duplicated addresses.
Outlook 2003 connects to Exchange differently than later versions. So when running into issues, try to isolate by comparing Outlook 2003 results with Outlook 2007, Outlook 2010 and OWA. For example, you may see Outlook 2003 running into issues with Exchange 2010 throttling policies.
Error message when an Outlook 2003 client tries to open multiple shared calendars in Exchange Server 2010: "The connection to the Microsoft Exchange server is unavailable. Outlook must be online or connected to complete this action"
This problem occurs because of Outlook 2003 dependencies on referral Mailbox Database support, which is not included in Exchange Server 2010. Outlook 2003 clients must now reference the Exchange Server 2010 Address Book service when they open shared calendars.
In order to make Outlook 2003 connections easier to complete, we changed the mailbox server name to give the appearance of connections to different mailboxes on different servers. Only the AddressBook service understands this changed mailbox server name. Therefore, clients that try to connect directly to Active Directory will fail to make the connection.
However, if many delegate mailboxes are being used, clients that are accessing the Address Book Service will reach a limit on the number of connections any single user can have. This exhausts the maximum number of connections available (20) specified by the default throttling policy that is associated with the user mailbox. In this situation, Outlook 2007 clients and later-version clients do not open multiple additional connections.
The original release of Exchange Server 2010 allows a maximum parameter value for RCAMaxConcurrency of 100. Exchange Server 2010 Service Pack 1 increases the maximum value for RCAMaxConcurrency to 2147483647.
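Where heavy delegate usage exhausts the default RCA connection limit, one option is a dedicated throttling policy for the affected users. A minimal sketch, assuming Exchange 2010 SP1 or later; the policy name, value and mailbox are illustrative, not a recommendation:

```powershell
# Sketch: raise the RCA concurrency limit for heavy delegate users via a
# dedicated throttling policy (name, value and mailbox are placeholders).
New-ThrottlingPolicy -Name "DelegateHeavyUsers" -RCAMaxConcurrency 100

# Associate the policy with an affected mailbox:
Set-Mailbox -Identity "user@contoso.com" -ThrottlingPolicy "DelegateHeavyUsers"

# Review the effective RCA limits:
Get-ThrottlingPolicy "DelegateHeavyUsers" | Format-List RCA*
```

As always, raise throttling limits only for the users who need it rather than changing the default policy for everyone.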
Assume that you configure public folder replication in a mixed Microsoft Exchange Server 2003 and Microsoft Exchange Server 2010 environment. When an Exchange Server 2010 user tries to view an Exchange Server 2003 user’s free/busy information, the user intermittently cannot view the free/busy information, or the user encounters a long delay when he or she tries to view the free/busy information. This is addressed in 2557323.
This was the #1 support call generator when Exchange 2010 was released:
Outlook connection issues with Exchange 2010 mailboxes because of the RPC encryption requirement - Discusses changes to RPC Client Access encryption requirements between Exchange 2010 RTM and SP1. This should never have been an issue, as Outlook should be managed through GPO, right? Well, not so much…
On the Topic of GPOs, make sure that they are correctly configured as part of the planning process for deploying Kerberos authentication in Exchange 2010.
An error occurs when an Exchange server 2003 user tries to open more than one delegate mailbox of Exchange Server 2010 in Outlook 2003 - Delegate issue resolved with update for Exchange 2003.
A stub object is left behind in the source database for certain users after a move mailbox operation is complete in Exchange 2003 Service Pack 2 - Logs to the Event Log when a stub mailbox is left behind after a move to Exchange 2010. This does not fix the underlying root cause, which is still to be addressed by the admin, but they now know the details of the issue and the monitoring system can alert them to it.
Folders take a long time to update when an Exchange Server 2010 user uses Outlook 2003 in online mode – Has the details around the changes introduced in Exchange 2010 SP1 RU3.
Note that the article states that *YOU* must manually create a registry key to enable this feature, and then restart the RPC Client Access service for the change to take effect.
After you install this update, you have to create a registry subkey to enable the UDP notifications support feature.
Create the following registry subkey to enable the UDP notifications support feature:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem
Subkey name: EnablePushNotifications
Type: REG_DWORD
Value: 1
Note If this registry key does not exist, or if its value is set to 0, the UDP notification support feature is not enabled.
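The steps above can be scripted from an elevated Exchange Management Shell on the CAS; this is simply the documented registry value and service restart expressed in PowerShell:

```powershell
# Create the EnablePushNotifications value described above and restart the
# RPC Client Access service so the change takes effect.
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\MSExchangeRPC\ParametersSystem" `
    -Name EnablePushNotifications -Value 1 -PropertyType DWORD -Force

Restart-Service MSExchangeRPC
```

Restarting the RPC Client Access service will briefly interrupt online mode Outlook clients connected to that server, so schedule this accordingly.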
“Unknown Error” In Outlook 2003. Work was done to improve the Outlook 2003 online mode experience in Exchange 2010 SP2. The Man (AKA Ross Smith IV) mentions this in the comments section and is also documented in KB 2579172 Items that are deleted or moved still appear in the original folder when you use Office Outlook in online mode to access an Exchange Server 2010 mailbox
“The UDP notification work we delivered has been working correctly since its release in E2010 SP1 RU3. The underlying issue that many customers have seen with Online Mode clients has been due to view change notification issues; specifically, view change notifications are not returned in the same RPC buffer that included the move/deletion RPC operation response. This issue affected all Outlook versions operating in online mode. In the case of OL2003, this results in extra roundtrips for a client to pull notification information from the server as the original call has completed, so the RPC Client Access service has to fire a UDP notification to get the client’s attention that a change within the folder has occurred.
We have addressed this view change notification issue in E2010 SP2.”
Concern: Is Having Outlook 2003 Clients Going to Prevent Me from Deploying Exchange 2010? TechNet Wiki page discussing coexistence issues.
Description of the Outlook 2003 hotfix package (Outlook.msp): July 1, 2010 Update to Outlook 2003 to resolve Exchange 2010 coexistence issue in body formatting.
Description of the Office Outlook 2003 hotfix package (Olkintl.msp, Engmui.msp): March 9, 2011 Update to Outlook 2003 to resolve Exchange 2010 coexistence issue where server name changes to a GUID.
Office Outlook 2003 does not connect to two or more additional mailboxes in a mixed Exchange Server 2007 and Exchange Server 2010 environment – Exchange 2007 legacy issue. Resolved in SP2 RU2 for Exchange 2007. This service pack is no longer supported, and all customers must now be on Exchange 2007 SP3.
Update Center for Office, Office servers, and related products – central page containing links to the latest Office product updates and assistance in installing them. See also Distributing Office 2003 Product Updates.
Common Client Access Considerations for Outlook 2003 and Exchange 2010 – Exchange team blog post with multiple client issues that have to be considered. Some are mentioned above, but it is very worthwhile reading!
How to configure Outlook to a specific global catalog server or to the closest global catalog server - This is not supported when the mailbox is on Exchange 2010 as NSPI should be on the CAS server’s Address Book Service.
By design Outlook 2003 does not use Autodiscover. Only Outlook 2007 and newer are able to leverage the Autodiscover web service. This should not be a surprise, but like most elephants in the room let’s put it to bed…..
By design Outlook 2003 stores Free Busy information in Public Folders and does not natively use the Exchange Availability web service. Be aware of the replication latency that is inherent in Public Folder replication. This is typically an issue due to room booking conflicts.
Unable to view attachments in OWA 2003, when sent from OWA 2010 – Coexistence issue for Exchange 2003 OWA users. They will see the paper clip icon indicating an attachment is present but will be unable to view the attachment.
Please also do leave a comment or get in touch via the “Email Blog Author” link on the right hand side of the post if you have items to share or discuss.
Oh, and for the sharp eyed out there who were wondering about the * reference it is here as it did not fit into the flow above.
* To those who remember seeing Police Academy when it was originally released (yes I’m getting old, that was 1984 –eek! ) this was a reference to Commandant Lassard.
Time flies, and we are now at the end of the Exchange 2010 SP2 support lifecycle. As previously discussed, Windows XP and Office 2003 left extended support yesterday. It seems like only yesterday that Exchange 2010 SP2 was released in November 2011.
The support lifecycle marker is the Exchange 2010 service pack; Exchange 2010 Rollup Updates (RUs) are not milestones in the support lifecycle. So even if you have Exchange 2010 SP2 RU8 installed, that build of Exchange 2010 will no longer receive security or code updates. To receive the support you are entitled to, please ensure that all of your Exchange 2010 servers have SP3 installed. Ideally they will have a recent RU installed as well; at the time of writing this should be Exchange 2010 SP3 RU4 or RU5, since there is a security issue resolved in Exchange 2010 SP3 RU4.
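To quickly confirm which build every server in the organization is running, the installed version can be pulled with Get-ExchangeServer and compared against the published SP3 RU build numbers:

```powershell
# Report the build running on every Exchange server in the organization;
# compare AdminDisplayVersion against the published SP3 RU build numbers.
Get-ExchangeServer | Sort-Object Name |
    Format-Table Name, ServerRole, AdminDisplayVersion -AutoSize
```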
One note on EdgeSync and reported Exchange version information. If you do have Exchange 2010 Edge servers installed, and EdgeSync is configured, then after installing Exchange 2010 SP3 onto the Edge servers you will not see the version information change when you run Get-ExchangeServer on the internal Exchange servers. This is because the version information is only written when EdgeSync is configured. To increment the version information in the internal AD, please re-subscribe the Edge servers.
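Re-subscribing follows the standard Edge subscription pattern; a sketch is below, with the file path and Active Directory site name as placeholders for your environment:

```powershell
# Sketch: re-subscribe an Edge server after installing SP3.
# Step 1 - on the Edge server, export a fresh subscription file:
New-EdgeSubscription -FileName "C:\Temp\EdgeSubscription.xml"

# Step 2 - copy the file to an internal Hub Transport server in the
# subscribed site, import it, and force a synchronization.
# (Path and site name below are placeholders.)
New-EdgeSubscription -FileData ([byte[]](Get-Content -Encoding Byte `
    -ReadCount 0 -Path "C:\Temp\EdgeSubscription.xml")) `
    -Site "Default-First-Site-Name"
Start-EdgeSynchronization
```

After the sync completes, Get-ExchangeServer on the internal servers should show the updated Edge version information.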
Please review the lifecycle chart here for full details
So at this point please ensure that you are on SP3.
For details on SP3 – you can take a peek at these articles.
I also blogged about the expiration of Exchange 2010 RTM and Exchange 2010 SP1 support previously.
Full details about the Microsoft lifecycle policy can be viewed here
http://support.microsoft.com/lifecycle/
I would also encourage you to sign up to the quarterly lifecycle update newsletter to ensure that you have the knowledge to keep all of your products in a supported state, and continue to receive the support that you are entitled to!
I recently blogged about using PAL to analyse an existing performance monitor log file. That is great if you have an existing log, but what if this is not the case? How can we easily capture the correct counters in the log file? Again PAL can come to the rescue!
Let's assume that PAL is installed as detailed in the previous blog post; if not, hit that post and get the tool running. I’ll wait for you - you’re back – good! Now that the tool is installed and running, go to the Threshold File tab as shown in the picture below.
Select the threshold file that you are interested in, in this case let’s choose “Microsoft Exchange 2010”. This is shown in the capture above. Then click the "Export to Perfmon template file" button, and save the file somewhere safe. This is an XML file which can be viewed in IE or another XML editing tool.
This saved XML file can then be copied to the Exchange server where it can be used as the template for a Perfmon (Performance Monitor) Data Collector Set. The Data Collector Set contains one or more Data Collectors which are the actual elements containing the Perfmon counters to capture. The Data Collector Set is responsible for the scheduling of the individual Data Collectors.
Under Administrative Tools open up the Performance Monitor MMC, then navigate down to Data Collector Sets\User Defined. On a newly installed server it will look like this, i.e. empty.
Right click on User Defined and go New –> Data Collector Set. This will bring up the wizard to create the Data Collector.
Ensure the “Create from a Template” option is selected then click next.
Click Browse to locate the PAL XML file that you previously copied to the server.
At this point you can hit Finish and take the default collection location and account to run the collector.
Selecting Next from the screen above (instead of Finish) will allow the location of the Data Collector to be changed. By default this will be saved to the root of the system drive, i.e. C:\PerfLogs\...
Again, this screen is bypassed if Finish was chosen at one of the previous stages, in which case Perfmon assumes the Data Collector is to run under the default user context. Here we can modify the account, and also choose either to start the Data Collector Set immediately or to open its properties and set a schedule.
Now you will have something resembling the below, which is the Data Collector that contains the Exchange 2010 PAL counters.
Right click the Data Collector entry in the right hand pane, and select properties to review the list of PerfMon counters.
Note that the sample interval, log file location and log file type (.blg, CSV, TAB, SQL) can be set here.
In the main Performance Monitor console, right-clicking the Data Collector Set and selecting properties allows the start and stop conditions to be set. This allows for automated data collection, where you can start the collector at specific times on given days of the week. Of course, the Collector Set can also be started and stopped manually; the choice, as they say, is all yours!
Now that the Data Collector Set is created, you can use it to capture performance data to help troubleshoot issues.
So we have the ability to use PAL to help with creating Performance Monitor logs by using it to create a template file that allows for easy Data Collector Set creation. Once the log has been captured, PAL can then be used to analyse it as described in the previous blog.
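For servers without easy console access, the same template can also be imported and driven from the command line with the built-in logman tool; the collector name and template path below are placeholders:

```powershell
# Alternative to the GUI wizard: import the PAL template and manage the
# Data Collector Set from the command line (name and path are placeholders).
logman import -name "PAL-E2010" -xml "C:\Temp\Exchange2010-PAL.xml"
logman start "PAL-E2010"
# ...let it capture the busy period, then:
logman stop "PAL-E2010"
```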
When resolving issues with on-premises Exchange, sometimes the issue may be directly within Exchange; other times the root cause may lie outside it. Depending upon the exact nature of the case we may have to investigate network switches, load balancers or storage. When Exchange is virtualized, the hypervisor and its configuration may also require attention.
This was the case with a recent customer engagement. Initially the scope was on Exchange, with symptoms including Exchange servers dropping out of the DAG, databases failing over and poor performance for users. As with most cases that get escalated to me, there is rarely only a single issue in play and multiple items have to be addressed. The customer was using ESX 5 update 1 as a hypervisor solution, and Exchange 2010 SP3. Exchange was deployed in a standard enterprise configuration with a DAG, CASArray and a third party load balancer.
In this case, one of the biggest issues was that of the hypervisor discarding valid packets. Within this environment an Exchange DAG server that was restarted had discarded approximately 35,000 packets during the restart. Exchange servers that had been running for a couple of days had discarded 500,000 packets. That’s a whole lot of packets to lose. This was the cause of servers dropping out of the cluster and generating EventID 1135 errors. This issue is discussed in detail in this previous post, which also contains a PowerShell script that will easily retrieve the performance monitor counter from multiple servers. The script allows you to monitor and track the impact of the issue easily.
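For a quick spot check without the full script, the relevant counter can be pulled from several nodes at once with Get-Counter; the server names below are placeholders:

```powershell
# Spot-check discarded inbound packets across DAG nodes; a steadily
# climbing value points at the packet loss issue described above.
# Server names are placeholders.
$Nodes = "EXCH01", "EXCH02", "EXCH03"
Get-Counter -ComputerName $Nodes `
    -Counter "\Network Interface(*)\Packets Received Discarded" |
    Select-Object -ExpandProperty CounterSamples |
    Sort-Object CookedValue -Descending |
    Format-Table Path, CookedValue -AutoSize
```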
Yay – we found the issue and all was well. Time to close the case? NO!
There were multiple other issues involved here and not all of them were immediately obvious when troubleshooting so I wanted to share these notes for awareness purposes. All software needs maintenance, Exchange itself is no exception and it is critical to keep code maintained with the vendors' updates. This ensures that you address known issues, and proactively maintain the system. As always this must be tempered with adequately testing any update in your lab prior to deploying it in production.
This post is only to raise awareness of the below issues and is not intended to be negative to the hypervisor in question. As stated above Exchange, Windows and Hyper-V all require updates. Hyper-V experienced network connectivity issues previously and required an update.
The customer reported that the DAG IP address was causing conflicts on the network. The typical cause of this is an administrator manually adding the DAG IP to one or more cluster nodes. The DAG IP is an address that can be bound to any node, and the cluster service will perform the required steps itself; the administrator should only add it as a DAG IP address and do no more. In this case the DAG was correctly configured, and servers had only their unique host IP addresses assigned.
Initially there seemed to be a correlation with the duplicate DAG IP address and backups. However this was quickly discarded as the duplicate IP issue would only happen once every several weeks and could not be reproduced on demand by initiating a backup.
There is an issue documented in KB 1028373 – False duplicate IP address detected on Microsoft Windows Vista and later virtual machines on ESX/ESXi when using Cisco devices in the environment. This issue occurs when the Cisco switch has gratuitous ARPs enabled, or when the ArpProxySvc replies to all ARP requests incorrectly.
This was the initial issue discussed above and is covered here.
It is always prudent to keep working an issue until it is proven that the root cause has been addressed. In this case additional research was done to investigate networking issues on the hypervisor and the below links are included for reference.
The symptom of large guest OS packet loss can include servers being dropped from the cluster. When a node is removed from cluster membership, EventID 1135 is logged into the system event log.
To report on such errors, I wrote a script to enumerate the instances of this EventID. Please see this post for details on the script.
KB 2055853- VMXNET3 resets frequently when RSS is enabled in a Windows virtual machine
Disabling RSS within the guest OS is not ideal for high volume machines as this could lead to CPU contention on the first core. Please work to install the requisite update for the hypervisor.
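If the workaround is unavoidable while the hypervisor update is scheduled, RSS can be checked and disabled inside the guest as sketched below; treat this as a temporary measure only, and note the adapter name is a placeholder:

```powershell
# Temporary workaround sketch only - prefer installing the hypervisor fix.
# Check the current global RSS state, then disable it:
netsh int tcp show global
netsh int tcp set global rss=disabled

# On Windows Server 2012 and later, RSS can also be disabled per adapter
# (adapter name is a placeholder):
Disable-NetAdapterRss -Name "Ethernet"
```

Remember to re-enable RSS once the underlying VMXNET3 issue has been patched, as leaving it off concentrates network processing on a single CPU core.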
KB 2058692- Possible data corruption after a Windows 2012 virtual machine network transfer
Modern versions of Windows will typically not be using this virtual NIC – currently they will typically use VMXNet3. However be aware of the other issues on this page affecting VMXNet3 vNICs.
When installing the VMware tools in ESXi5, selecting the FULL installation option will also install the vShield filter driver. There is a known issue with this filter driver that is discussed in KB 2034490- Windows network file copy performance after full ESXi 5 VMware Tools installation.
Starting with ESXi 5.0, VMware Tools ships with the vShield Endpoint filter driver. This driver is automatically loaded when VMware Tools is installed using the Full option, rather than the Typical default.
I also saw this TechNet forum post with a related issue to what was observed onsite. Servers would discard a very high number of packets which would severely impact the application users were trying to access.
There are some important items to review when configuring NLB on VMware.
It is critical to discuss the NLB implementation with the hypervisor team and also the network team. Be very specific with what is being implemented and what is expected of both of these teams. Some network teams do not like NLB unicast as it leads to switch flooding, whilst others do not appreciate having to load static ARP entries into routers to ensure remote users can access the NLB VIP. Cisco has Catalyst NLB documentation here. Avaya has some interesting documentation on this page.
For this and other reasons Exchange recommends the use of a third party load balancer. This could be a physical box in a rack or a VM which can run inside Hyper-V or ESX. Please consult with your load balancer vendor so they can best meet your business, technical and price requirements.
Leading on where the previous post left off, here are the Exchange 2010 tips of the day from number 26 to 50.
For the related articles in this series please see:
Tips 1 – 25
Tips 51 – 75
Tips 76 - 101
Forget a property name? Not a problem because you can use wildcard characters to retrieve all properties that match the part of the name that you specify:
Get-Mailbox | Format-Table Name,*SMTP*
Want to work with data contained in a CSV file? Use Import-CSV to assign the data to an object. For example, type:
$MyCSV = Import-CSV TestFile.CSV
You can then manipulate the data easily in the Exchange Management Shell. For example, if there is a column called Mailboxes in the CSV data, you can use the following commands to sort or group the data by the Mailboxes column:
To sort: $MyCSV | Sort Mailboxes
To group: $MyCSV | Group Mailboxes
This command spins through all your mailbox servers and reconnects all the uniquely identified but disconnected mailboxes in any one of the mailbox stores:
$Servers = Get-ExchangeServer
$Servers | Where { $_.IsMailboxServer -Eq $True } |
    ForEach { Get-MailboxStatistics -Server $_.Name |
        Where { $_.DisconnectDate -NotLike '' } |
            ForEach { Connect-Mailbox -Identity $_.DisplayName -Database $_.DatabaseName } }
Tab completion reduces the number of keystrokes required to complete a cmdlet. Just press the TAB key to complete the cmdlet you are typing. Tab completion kicks in whenever there is a hyphen (-) in the input. For example:
Get-Send<tab>
should complete to Get-SendConnector. You can even use wildcards, such as:
Get-U*P*<tab>
Pressing the TAB key when you enter this command cycles through all cmdlets that match the expression, such as the Unified Messaging Mailbox policy cmdlets.
Want to create a group of test users in your lab? Use this command:
1..100 | ForEach { Net User "User$_" MyPassword=01 /ADD /Domain; Enable-Mailbox "User$_" }
Like the Exchange Management Shell Tip of the Day? Try this:
Get-Tip
Want to change the authentication settings on an Outlook Web Access virtual directory? Try the following command as an example. It changes authentication from forms-based authentication to Windows authentication:
Set-OwaVirtualDirectory -Identity "OWA (Default Web Site)" -FormsAuthentication 0 -WindowsAuthentication 1
Want to set the properties on all or some Outlook Web Access virtual directories? Pipe the output of Get-OwaVirtualDirectory to the Set-OwaVirtualDirectory cmdlet. For example, the following command sets the Gzip level for all Outlook Web Access virtual directories:
Get-OwaVirtualDirectory | Set-OwaVirtualDirectory -GzipLevel High
Want to remove an ActiveSync device from a user's device list? Type:
Remove-ActiveSyncDevice
This cmdlet can be helpful for troubleshooting devices that don't synchronize successfully with the server.
Want to clear all data from a mobile device? Use:
Clear-ActiveSyncDevice
Specify a time of day to clear the device, or let the task complete the next time that the device connects to the server.
Want to see a list of all devices that synchronize with a user's mailbox? Type:
Get-ActiveSyncDeviceStatistics
A variety of information is returned including device name, operating system, and last sync time.
Has one of your users asked you to recover their mobile device synchronization password? To return the user's password, type:
Get-ActiveSyncDeviceStatistics -ShowRecoveryPassword
Want to move your database path to another location? Type:
Move-DatabasePath -EdbFilePath DestFileName
To change the file path setting without moving data, use this command together with the ConfigurationOnly parameter. This command is especially useful for disaster recovery. Caution: Misuse of this cmdlet will cause data loss.
Need an easy way to add a new primary SMTP address to a group of mailboxes? The following command creates a new e-mail address policy that assigns the @contoso.com domain to the primary SMTP address of all mailboxes with Contoso in the company field:
New-EmailAddressPolicy -Name Contoso -RecipientFilter {Company -Eq "Contoso"} -EnabledPrimarySMTPAddressTemplate "@contoso.com"
Want to retrieve a group of objects that have similar identities? You can use wildcard characters with the Identity parameter to match multiple objects. Type:
Get-Mailbox *John*
Get-ReceiveConnector *toso.com
Get-JournalRule *discovery*
Want to configure a group of objects that have similar identities? You can use wildcard characters with the Identity parameter when you use a Get cmdlet and pipe the output to a Set cmdlet. Type:
$Mailboxes = Get-Mailbox *John*
$Mailboxes | Set-Mailbox -ProhibitSendQuota 100MB -UseDatabaseQuotaDefaults $False
This command matches all mailboxes with the name John in the mailbox's identity and sets the ProhibitSendQuota parameter to 100MB. It also sets the UseDatabaseQuotaDefaults parameter to $False so that the server uses the new quota you specified instead of the database default quota limits.
Forgot what the available parameters are on a cmdlet? Just use tab completion! Type:
Set-Mailbox -<tab>
When you type a hyphen (-) and then press the TAB key, you cycle through all the available parameters on the cmdlet. Want to narrow your search? Type part of the parameter's name and then press the TAB key. Type:
Set-Mailbox -Prohibit<tab>
Want to add an alias to multiple distribution groups that have a similar name? Type:
$Groups = Get-DistributionGroup *Exchange*
$Groups | Add-DistributionGroupMember -Member kim
This command adds the user with the alias kim as a member of all distribution groups whose name contains the word Exchange.
Want to record exactly what happens when you're using the Exchange Management Shell? Use the Start-Transcript cmdlet. Anything that you do after you run this cmdlet will be recorded to a text file that you specify. To stop recording your session, use the Stop-Transcript cmdlet.
Notice that the Start-Transcript cmdlet overwrites the destination text file by default. If you want to append your session to an existing file, use the Append parameter:
Start-Transcript c:\MySession.txt -Append
Do you have a user who has network access but maintains an external mail account outside your Exchange organization? With Exchange Server 2010, you can now create mail-enabled users that are regular Active Directory accounts, but also behave like mail-enabled contacts. By using the Enable-MailUser cmdlet, you can add e-mail contact attributes to any existing Active Directory user who doesn't already have a mailbox on an Exchange server. Users in your Exchange organization will then be able to send e-mail messages to that user's external mail account. Type:
Enable-MailUser -Identity <Active Directory Alias> -ExternalEmailAddress <Destination SMTP Address>
Want to change the default prohibit send quota for a mailbox database? Type:
Set-MailboxDatabase <Mailbox Database Name> -ProhibitSendQuota <New Quota Size> -UseDatabaseQuotaDefaults $False
You can specify a bytes qualifier when you use the ProhibitSendQuota parameter. For example, if you want to set the prohibit send quota to 200 megabytes, type:
Set-MailboxDatabase <Mailbox Database Name> -ProhibitSendQuota 200MB -UseDatabaseQuotaDefaults $False
You can also configure the IssueWarningQuota parameter and the ProhibitSendReceiveQuota parameter in the same way.
Want to know what version of Exchange Server each of your servers is running? Type:
Get-ExchangeServer | Format-Table Name, *Version*
Scanning Exchange databases with file system antivirus is a recipe for disaster. This really should not come as a surprise for admins running Exchange services within the enterprise, since this has been the field requirement for a long time. The documentation provided by Microsoft is very clear in what exclusions are required for file system antivirus and Exchange to coexist. For reference the relevant articles are:
If this is so well documented, then what could possibly go wrong? Plenty….
Update 30-6-2014: Please also see this post on a related issue.
Every vendor who writes a file system AV product will implement theirs in a different way. Because of this, and the fact that I will not identify vendors by name, this article will be written in a generic style. The concepts however will apply to the vast majority of AV products.
TechNet does a good job of listing the types of file system antivirus scanners:
Other terminology that may be encountered is the term On-Access. This is where AV will process a file when it is accessed. Unlike an On-Demand scan, if a file is never opened then it is never scanned. Conversely, if it is opened multiple times then it will likely get scanned each time it is accessed. The exact details of this are at the discretion of the AV vendor.
The heuristics contained within each AV product vary greatly, and products behave differently on the above point and many others. Some do not show the configured file system exclusions in their admin tool graphical interface, and you have to look at the registry to see what file system paths are actually being excluded. Others allow the AV team to lock down the management application on the Exchange server so that it is harder or impossible to see what scans are running, to troubleshoot issues, and to terminate an AV scan (if required) without waiting for the AV team to respond.
Please consult with your AV team and review their vendor's documentation to understand how their product works.
Regrettably there are multiple issues that can and will arise if you allow file system AV to scan Exchange. Note that this is not just the mailbox database file; there is a range of other locations that must also be exempted from file system AV scanning. For details see the links at the start of this post.
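One hedged way to start the conversation with the AV team is to enumerate the database and log paths in use so they can be compared against the configured exclusions. This is only a sketch and is deliberately not exhaustive; the Microsoft exclusion documentation linked at the start of this post remains the authoritative list:

```powershell
# Sketch: gather database and log folder paths to hand to the AV team
# (NOT a complete exclusion list -- binaries, queue, search and other
# paths in the Microsoft documentation must also be excluded)
Get-MailboxDatabase | Select-Object Name, EdbFilePath, LogFolderPath |
    Export-Csv C:\Temp\ExchangePaths.csv -NoTypeInformation
```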
File-level scanners may scan a file when the file is being used or at a scheduled interval. This can cause the scanners to lock or quarantine an Exchange log file or a database file while Exchange tries to use the file. This behaviour may cause a severe failure in Microsoft Exchange and may also cause -1018 ESE errors.
One thing to note is that file-level scanners do not provide protection against e-mail viruses, such as the Storm Worm. Storm Worm was a backdoor Trojan horse virus that propagated itself through e-mail messages. The worm joined the infected computer to a botnet, where the computer was used to send spam e-mail messages in periodic bursts. Such viruses can affect the performance of the computer and the network that it is attached to.
This is not a new issue. As my friend Dave McGarr puts it over on his blog, Friends don’t let friends scan the M: drive! Exchange 2000, which introduced the M:\ drive, was often negatively impacted by file system AV scanning M:\, and because of this the drive was hidden by default in Exchange 2003.
This is the story of a recent engagement where I ran into some serious AV issues. The customer in question had recently completed an Exchange Server Risk Assessment (ExRAP). ExRAP looks at both technical and process aspects of managing messaging services. One interview question specifically asks if the correct AV exclusions have been implemented. The customer stated that they were.
Fast forward 4 months. The customer’s stable Exchange environment started to exhibit strange behaviours all of a sudden. Issues included degraded database performance, database failover issues and very poor Outlook client response times. As part of initial troubleshooting Microsoft requested that the AV exclusions be checked to ensure that they are correct and were not causing any issues. Again they were stated as correct. Screen shots and remote assistance sessions showed that the settings were entered. So what was causing databases not to failover between DAG members?
Well it turns out that only half of the puzzle was validated. Unbeknownst to the Exchange admins, the AV team had implemented a weekly On-Demand scan that started late Sunday evening and scanned every single file on the server. Yes, that's right: zero exclusions… It gets better! These scans were taking a very long time to complete, and in some cases the scan did not complete until Wednesday or Thursday!
The AV product in use has a feature where it will lock a file that looks suspicious for an unspecified amount of time. The lock duration is controlled by the AV engine and is entirely at its discretion. This is what caused the database failover issues. When trying to mount a database on a server, AV locked the Exchange database as it thought that MDB01.edb was suspicious. Since the file was locked, Exchange was unable to gain access to the database and mount it. If enough time elapsed then AV would release the file and the database could be mounted. Reviewing traces corroborated this, as we would see Exchange starting to read the database but not progressing further.
Not only was this unsupported as far as Microsoft is concerned, the impact to the customer was tremendous. Some of the issues experienced were:
Rather than just state that the required exclusions be implemented, I thought it would be more beneficial to discuss some of the areas which typically contribute to the above situation, and some resolutions.
All teams must be tightly aligned on how AV is deployed and configured. While server teams like Exchange do not need to know the exact details of implementing AV on the backend, they must understand how to communicate with the other teams effectively, more on this in a minute! For example how do the Exchange servers get the correct AV policy assigned? Is it based on server name, location in AD or are Exchange servers manually tagged with a policy? This sounds minor, but this knowledge is critical in understanding the impact of choosing a different server name or the steps required if reinstalling an Exchange server from scratch.
To assist with communicating effectively, all teams should communicate using the same terminology to minimise any potential misunderstandings. In the above example, the Exchange team understood an AV exclusion to apply to any and all AV scans. However the AV teams did not share this viewpoint, and their terminology was more granular.
There must be a detailed discussion on the configuration of the AV policies that are applied to the Exchange infrastructure. Some examples include:
The AV agent health must be monitored by the AV team to ensure that an agent does not “go native” and ignore its configuration. The worst possible case here would be for an agent to revert back to its default configuration, which typically means that there are no exclusions and all files and processes are scanned.
The AV team must accept that Exchange requires certain file system exclusions to operate in a manner supported by Microsoft. There is a tendency for AV teams to perceive a security risk in the fact that MDB01.edb is never scanned by file system AV. Their concern that NaughtyFile.edb will be stored on the Exchange server needs to be tempered with:
The above are only a few points in a typical discussion on this topic. Please engage with a security consultant to fully discuss such issues, as each enterprise will have different business requirements which translate into the underlying technical configuration. Some customers track these activities through a security sign off or waiver process.
Finally, do not assume that since a previous version of Exchange ran in a given environment, the AV conversation can be skipped! Take the time to ensure that all teams are on the same page, and that the correct exclusions are applied. Exchange 2010 has different exclusions compared to Exchange 2003! Additionally there will likely have been staff changes over the years since older AV policies were defined so have this critical conversation to prevent a critical situation – aka a CritSit!
In the previous posts on RBAC we have looked at customising various roles to ensure that the role contained the minimum amount of cmdlets. RBAC provides even more granularity, and we can add or remove specific parameters from a cmdlet. Since some folks asked for examples on this topic here are a couple of quick examples and some considerations….
If you want to use ECP, please read all the way down to the bottom. The initial examples will work with the Exchange Management Shell but there are a couple of extra considerations for ECP.
As a scenario, let’s invent a crappy work task that some unlucky person could be asked to perform. Let’s say there is a mobile phone management team within a large business and their sole responsibility is to assign mobile phones and numbers to people. As part of this they need to be able to update the MobilePhone property of all users. They should not be able to modify any other attributes at all.
The cmdlet that we want is Set-User and the MobilePhone parameter.
This section builds and creates the custom RBAC role that we will use. First, let’s see where the relevant cmdlets and parameters live!
We can use the Get-ManagementRole cmdlet to see where the cmdlet lives, using the -Cmdlet and -CmdletParameters parameters respectively. Below we can see where both Set-User and MobilePhone exist:
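The lookups referred to above would be along these lines:

```powershell
# Which built-in roles contain the Set-User cmdlet at all?
Get-ManagementRole -Cmdlet Set-User

# Which roles contain Set-User with the MobilePhone parameter?
Get-ManagementRole -Cmdlet Set-User -CmdletParameters MobilePhone
```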
Since the Set-User with the MobilePhone parameter exists in the Mail Recipients role, let’s use that role. As before the built-in roles are read only and we cannot make any changes to them. So we need to make a copy of the role, and since this copy is writable we can make the necessary changes. Our custom role will be called Mobile-Phone-Jockeys.
New-ManagementRole -Name "Mobile-Phone-Jockeys" -Parent "Mail Recipients"
If we check the contents of our new Management Role, we will see that it contains all the cmdlets and parameters present in the original parent role – not surprising since it was a copy….
Get-ManagementRoleEntry "Mail Recipients\*" | Measure-Object
Get-ManagementRoleEntry "Mobile-Phone-Jockeys\*" | Measure-Object
We need to remove all the cmdlets that we do not want, so the quickest way to do this is to get a list of cmdlets that do not match Get-User and then once we check the list, remove them.
Note: Always check the objects returned are as expected prior to piping to the remove cmdlet!
So in this case we would run
Get-ManagementRoleEntry -Identity "Mobile-Phone-Jockeys\*" | Where-Object {$_.Name -ne 'Get-User'}
Only when we are happy with what is returned should we run:
Get-ManagementRoleEntry -Identity "Mobile-Phone-Jockeys\*" | Where-Object {$_.Name -ne 'Get-User'} | Remove-ManagementRoleEntry
If we check to see what’s now in the Mobile-Phone-Jockeys Role, it only contains the Get-User cmdlet.
Get-ManagementRoleEntry –Identity "Mobile-Phone-Jockeys\*"
At this point the Get-User cmdlet is the only entry left in the Management Role.
Let’s add back the Set-User cmdlet, with *ONLY* the ability to set the MobilePhone parameter, and the identity parameter.
Add-ManagementRoleEntry "Mobile-Phone-Jockeys\Set-User" -Parameters MobilePhone, Identity
Get-ManagementRoleEntry "Mobile-Phone-Jockeys\*"
This is a great example of adding in a specific parameter, when the cmdlet did not already exist.
If you are wondering about the Identity parameter that is also present with MobilePhone, keep reading!
Taking the example above, we could have specified multiple parameters in the same command. If we deviate from the example slightly and pretend that our Mobile-Phone-Jockeys role has now also become responsible for assigning office phones, then they also need to set the Phone attribute. You can specify multiple parameters, separated with commas.
Add-ManagementRoleEntry "Mobile-Phone-Jockeys\Set-User" -Parameters MobilePhone, Phone, Identity
If we want to modify a cmdlet’s list of parameters, a good guess would be that we would use Add-ManagementRoleEntry or Remove-ManagementRoleEntry, but neither will do what we need; they just add or remove the entire Management Role Entry. To change the available parameters on an existing role entry we need to use the Set-ManagementRoleEntry cmdlet. Its -Parameters parameter has the following modes:
$Null
So if we want to remove all parameters from the Set-User cmdlet apart from MobilePhone and Identity we could run:
Set-ManagementRoleEntry "Mobile-Phone-Jockeys\Set-User" -Parameters MobilePhone, Identity
An alternative would have been to remove just the Phone parameter:
Set-ManagementRoleEntry "Mobile-Phone-Jockeys\Set-User" -Parameters Phone -RemoveParameter
Get-ManagementRoleEntry "Mobile-Phone-Jockeys\*"
The end result is the same in this case, and you can choose the method that makes the most sense!
< Common People courtesy of Pulp >
One interesting thing to note is that when we try to prune out the parameters that we do not want, the common ones should typically be left. They are needed to ensure that we can perform the basic aspects of the cmdlet. This includes parameters like WhatIf, DomainController or Verbose. None of these apply changes to a mailbox or user, but they allow us to control how the operation is performed.
Trying to see them can be a little tricky. Look at the truncated display here indicated by the three periods:
To review all the parameters present in a Management Role Entry I like to do the following:
$Params = Get-ManagementRoleEntry "Mail Recipients\Set-User" | Select Parameters
$Params.Parameters
$Params.Parameters will output the entire content of the attribute to the screen. If we split the output to see what parameters are the common ones and what are specific to the Set-User cmdlet we will see:
AssistantName, CertificateSubject, City, Company, CountryOrRegion, Department, DisplayName, Fax, FirstName, HomePhone, Initials, LastName, Manager, MobilePhone, Name, Notes, Office, OtherFax, OtherHomePhone, OtherTelephone, Pager, Phone, PhoneticDisplayName, PostalCode, PostOfficeBox, ResetPasswordOnNextLogon, SamAccountName, SeniorityIndex, SimpleDisplayName, StateOrProvince, StreetAddress, TelephoneAssistant, Title, UserPrincipalName, WebPage, WindowsEmailAddress
Confirm, Debug, DomainController, ErrorAction, ErrorVariable, Identity, IgnoreDefaultScope, LinkedCredential, LinkedDomainController, LinkedMasterAccount, OutBuffer, OutVariable, RemotePowerShellEnabled, Verbose, WarningAction, WarningVariable, WhatIf
Why is this worth mentioning? Well, if the necessary common parameters to run a cmdlet are not present then the cmdlet does not run, savvy?
If Identity is not specified for the Set-User cmdlet then you are going to see this fun error:
The operation couldn't be performed because object 'test-6' couldn't be found on 'exch-dc.tailspintoys.com'. + CategoryInfo : NotSpecified: (0:Int32) [Set-User], ManagementObjectNotFoundException + FullyQualifiedErrorId : 98454A26,Microsoft.Exchange.Management.RecipientTasks.SetUser
Why can we not find the account? It is clearly there as we can see with the first command Get-User.
The Identity parameter is not present on the Set-User cmdlet in the custom Management Role. This can be a little confusing, as we can type "Set-User -Identity", so where is the Identity parameter coming from? The -Identity parameter is visible because it is present within the Role Assignment Policy. The Role Assignment Policy is just for end user RBAC and is scoped to the individual user, so it cannot be used to modify other users. When we want to modify a user account that is not our own, the Role Assignment Policy does not apply as the target does not fall within the scope of "Self", and we look to the other Management Role, which is Mobile-Phone-Jockeys in this case. Since Mobile-Phone-Jockeys does not have the Identity parameter, the command fails to change the properties of user account "Test-6".
Moral of the story? Be sure to include the necessary common parameters, else you will get some “interesting” results…
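A quick hedged sanity check along these lines (using the role name from this example) can catch a stripped Identity parameter before a user does:

```powershell
# Sketch: verify the role entry kept the parameters the cmdlet needs
# to run at all (Identity at minimum for Set-User)
$Entry = Get-ManagementRoleEntry "Mobile-Phone-Jockeys\Set-User"
if ($Entry.Parameters -notcontains 'Identity') {
    Write-Warning "Set-User is missing the Identity parameter and will fail!"
}
```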
Testing time, and arguably the most important section!! Let’s assign the Mobile-Phone-Jockeys management role to a test user called User-15.
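The assignment itself would look something like this; the assignment name below is arbitrary and just needs to be unique:

```powershell
# Assign the custom role directly to the test user
New-ManagementRoleAssignment -Name "Mobile-Phone-Jockeys-User-15" `
    -Role "Mobile-Phone-Jockeys" -User "User-15"
```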
And to confirm the Mobile-Phone-Jockeys management role contains:
Testing via Exchange Management Shell on User-15’s Windows 7 x64 desktop shows that RBAC is working and they can modify the Mobile number for a separate user (Test-6):
Happy days? Well almost…
You think you are done, and the testing was OK. But then you get a call saying that ECP cannot be used, as the option to view other mailboxes is not present. Huh? Are the permissions not correct? Permissions are correct as we tested them, so what gives? This is what User-15 sees in ECP; note there is no Manage My Organization option along the top, and the shortcut on the right pane does not take us there either. The shortcut has this link:
https://mail.tailspintoys.com/ecp/PersonalSettings/HomePage.aspx?showhelp=false&#
How to fix this? We need to make ECP display the Manage My Organization option. We do this by adding cmdlet(s) that will force the option to be displayed, and in this case we will add Get-Mailbox and Get-Recipient. Let’s add them in:
Add-ManagementRoleEntry Mobile-Phone-Jockeys\Get-Mailbox
Add-ManagementRoleEntry Mobile-Phone-Jockeys\Get-Recipient
Get-ManagementRoleEntry Mobile-Phone-Jockeys\*
With the modified Management Role Entries, User-15 now sees:
Note that Manage My Organization is visible, and so are the mailboxes in the organization. The Manage your organization shortcut link now takes us to the organization management page in ECP.
Double clicking to edit one shows that User-15 has the permissions to edit the Mobile Phone field, as it is not locked out. Fields in grey are locked out.
We can see that RBAC allows for a great deal of customisation, right down to the individual parameters that are available! This is an amazing amount of flexibility, and there is one glaring thing that should be striking Exchange 2003 and 2007 admins at this point. Did we change a single ACL to the objects in AD? NO! Not a single one. We focussed on the business requirement, which was to allow the phone team to update the mobile number assigned to users, and did not have to think about ACLs. This is a huge win as having to get change records cut to modify AD ACLs is a challenge.
This is made possible by the Exchange Trusted Subsystem universal security group as it is that group that has the permissions to the AD objects. RBAC enforces and controls the cmdlets and parameters that you can see and thus use thereby controlling what changes are made to the Exchange objects.
When I was running Windows 8 I ran into an issue importing VMs with the error “Did Not Find Virtual Machine To Import” and discussed it on the blog. The presence of the .exp file was causing the import to fail. Since I had to rebuild my corporate laptop, I hit the same thing in Windows 8.1, but with a different error message.
Importing the same set of lab VMs to Windows 8.1 gave me a different error:
Hyper-V encountered an error while loading the virtual machine configuration. Importing the virtual machine from file <GUID>.exp failed. The operation was passed a parameter that was not valid.
Since the last post was based around Oasis, let’s pay homage to the other Britpop icon of the 90’s – Blur.
<Courtesy link to Blur>
Since my SSD took a dive like an Italian football player in the penalty box, I had to rebuild my laptop. The previously running VMs were on a second SSD but could not be exported gracefully. After correcting the hardware issue, installing Windows 8.1 and then installing the Hyper-V feature, it was time to get the lab VMs back up and running. After all, if I did not run any VMs, what would be the point of carrying around the beast that is a Lenovo W530?
Starting to import the VM went swimmingly
So far, so good
Still good:
And then we are the lucky winner of the big fat error below:
OK, so we got an error, and we can see that it relates to the .exp file; the wizard also tells us exactly where it is located. This is a great improvement over the previous behaviour on Windows 8. Checking the contents of the folder shows that there is indeed an .exp file present:
Out of curiosity I wanted to see what was being accessed on the file system, so our trusty friend Process Monitor came out to play!
The above error message states that we ran into an issue with the .exp file. What was slightly puzzling initially was that there are no individual lines in the Process Monitor output showing the .exp file being accessed directly, yet it must have been touched to generate the error. The previous example is here.
Click to enlarge the capture below.
The import process is looking for the presence of the file; it’s just a little more hidden than in the previous example! In the previous post, Windows 8 Hyper-V performed actions on the .exp file because Windows 8 / Server 2012 supports importing VMs using the legacy WMI API; this is not the case with Windows 8.1 / Server 2012 R2. The Windows 8.1 import process enumerates the contents of the \Virtual Machines directory. You can see this in the right hand detail column:
As in the previous post, let’s archive the .exp from the \Virtual Machines folder and then re-try the import process. You can see that the .exp is no longer present.
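One hedged way to do the archiving from an elevated PowerShell prompt; the path below is purely illustrative:

```powershell
# Illustrative path; move the .exp out of the way rather than deleting it,
# so it can be restored if needed
$VMPath = "D:\VMs\Exported-VM\Virtual Machines"
New-Item -ItemType Directory -Path "$VMPath\Archive" -Force | Out-Null
Move-Item -Path "$VMPath\*.exp" -Destination "$VMPath\Archive"
```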
Since the Hyper-V Import Wizard parses the folder structure and already knows about the .exp file, we need to restart the wizard; else you will get the same error.
After archiving the .exp, we can restart the wizard and successfully complete the import process.
Now the curious readers out there will be thinking “how did he manage to get into this state in the first place? How is that .exp file present when the VM was previously running on Windows 8.1?” Well that’s a good question.
Normally I do not use the GUI to do imports as I have too many VMs; the Import-VM cmdlet is used instead. PowerShell has a slightly different VM import behaviour.
In the below example, note the .exp file is present. Import-VM must be told which .xml file to import; it is not pointed at a directory, and it parses the .xml directly.
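The cmdlet invocation is along these lines; the path is illustrative and the GUID file name is whatever your VM configuration file is called:

```powershell
# Point Import-VM directly at the VM's .xml configuration file;
# the .exp file in the same folder is never consulted
Import-VM -Path "D:\VMs\Exported-VM\Virtual Machines\<GUID>.xml"
```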
You can see this in Process Monitor – the import completes before any IO is performed on the .exp file.
Since I have been asked a few times about how I tweaked my TechNet Forum signature, I thought it would be prudent to write it up here. This was one of the main drivers for creating this blog in the first place – so I could avoid having to write the same email several times and instead say “here’s the link which contains the answer!”.
My current TechNet signature is shown below, and this is how it appears when rendered in the browser.
Updated 30-4-2014: Since the forums have had a revamp – the image was updated to reflect the new layout.
Assuming that you have a TechNet forum account, you can edit the signature associated with it. Make sure you are signed in to the forums by navigating to one of the many forums and using the sign-in link at the top right hand side.
On the mid-right hand side, as shown in the red box below, under the “Quick Access” link you can find the “My Settings” link.
Clicking on “My Settings” will take you to the settings page.
There are various options on the settings page where you can edit and tweak your preferences.
For this discussion we are going to be interested in the box at the bottom that contains the signature.
Most people put plain text into this box, which would be fine if this were 1995 and we were all still using Gopher. Feel free to drift off back into nostalgia… How I miss IIS 3.0…
You will notice that mine does not contain plain text, and actually has basic HTML. This is the secret sauce that allows me to have the links to LinkedIn, Twitter and so on in my signature.
Note one thing: Do not go crazy with the content in this box as there is a limit to the number of characters that you can enter. The editor will allow you to enter above the limit, but any excess characters will be removed when you click submit. This is where we have to do some jiggery-pokery to get the signature down to a smaller size so that we can have multiple entries.
You will want to carefully choose your URLs, and they may include links to a blog, Twitter or LinkedIn profile. Within each of these services obtain and write down the URL.
There are defined locations where the major social media sites have a link to their icons. Again work out what you want, and write them down.
As you read above, the signature field is of a set size, and this is why I use a URL shortening service. Since I am a Microsoft FTE I use the corporate service, http://aka.ms, though you can use any of the popular services for this. Be sure to research the service and ensure that it is secure/reputable, as you do not want the links to be altered so that instead of the LinkedIn image appearing under your name, all of a sudden the Backstreet Boys appear. That would be most uncool!
Create a .HTML file on your local machine and edit it with Notepad to create the HTML. By editing a local file it is easy to load it up in IE, and to see how it renders.
As an example this is my current signature contents.
Note I am not a developer, and am just a cable-plugger!
<p>Cheers,</p> <p>Rhoderick </p> <p>Microsoft Senior Exchange PFE</p> <b>Blog:</b> <a href="http://blogs.technet.com/b/rmilne/rss.aspx" target="_blank"> <img src="http://aka.ms/gmvm2k" border="0"></a> <a href="http://aka.ms/RMILNE">http://blogs.technet.com/rmilne</a>  <b>Twitter:</b> <a href="http://aka.ms/RMILNE-TW" target="_blank"><img src="http://aka.ms/iejgcv" border="0"></a>   <b>LinkedIn:</b> <a href="http://aka.ms/RMILNE-LinkedIn" target="_blank"><img src="http://aka.ms/glr0s5" border="0"></a>   <b>Facebook:</b> <a href="http://aka.ms/RMILNE-FB" target="_blank"><img src="http://aka.ms/x0lwhi" border="0"></a>   <b>XING:</b> <a href="http://aka.ms/RMilne-XING" target="_blank"><img src="http://aka.ms/nnqy4c" border="0"></a> <p><font size="1">Note: Posts are provided “AS IS” without warranty of any kind, either expressed or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.</font></p>
I’ll let you pick through that, and compare the HTML to the image at the top of this post. But it should be clear that I pull the image in from a shortened URL, and then add a target to it so that when someone clicks the picture it takes them to that link.
There are also plenty of sites that discuss the HTML tags and what they do; so pop any questions you have on them into www.bing.com
Note there is no need for opening and closing HTML tags, as that is taken care of by the page itself; we are only injecting content into an existing page.
When you are happy with the results of the local .HTML file and how it looks in IE, copy and paste the contents into the online signature field and hit submit. Ensure that it was not too long and that the end was not truncated.
Now enjoy your colourful HTML signature! Since we have moved on from Gopher we can party like its 1999. Oh wait…
The blog post on how to integrate Office 365 with Windows 2012 R2 ADFS raised an interesting question from a reader (Hi Eric!) on how he should request a certificate for the ADFS instance, since there is no longer an IIS dependency. This means that there is no longer an IIS console with which to generate a certificate request. What to do?
You could generate a certificate request, complete it and then export it to a .pfx file on an Exchange server. The exported certificate can then be copied over to the ADFS server[s] and then imported to the local computer certificate store to make it available for ADFS purposes.
What if you don’t want to, or can’t, do this? If you want to do this on the ADFS server directly, then certreq.exe can help us out! This also applies to other servers; the steps here are not just for ADFS. However, the question raised means that more folks in the field are probably thinking about the same thing, so it forced me to polish off yet another one of those draft blog posts!
This post is using a venerable utility that has been present in Windows for a long time. In a future post we can then look at the new features in PowerShell for this task.
Certreq.exe is built into the underlying OS. In the examples below we will use a Windows 2008 R2 SP1 server. To see the options execute “certreq.exe /?” This is shown in the image below, and the full command line parameters are at the bottom of this post for reference:
The goal of this exercise is to generate a certificate that will contain multiple Subject Alternative Names (SANs) in addition to the subject name (common name) of the certificate. If you don’t want a SAN certificate, also called a Unified Communications certificate by various vendors, then simply comment out that line in the process below.
We want to end up with a certificate that has the following Subject name:
Along with the Subject Alternative Names of:
We can break this down into three basic steps:
The syntax is to use certreq.exe with the -New parameter, specifying the request file that we can take to the issuing CA. Once the signed CA response has been obtained and copied back to the server, we can then import it using the -Accept parameter to complete the certificate request process.
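The two ends of the process look like this. The policy.inf and newcert.req file names are the ones used later in this post; the newcert.cer name for the signed CA response is an assumption, as your CA may return a differently named file:

```powershell
# Step 1: create the request (and key pair) from the .inf policy file
certreq.exe -New policy.inf newcert.req

# Step 2: submit newcert.req to the issuing CA; the process varies by CA.
# Assume the signed response comes back as newcert.cer.

# Step 3: import the CA response to complete the pending request
certreq.exe -Accept newcert.cer
```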
Let’s go get crazy and request us some certificate! *
Before we can generate the certificate request, we must be absolutely sure that we know the exact names that we want to include. Once the certificate has been issued by the CA, it cannot be changed. Some 3rd party CAs will charge a nominal amount to re-issue with a different/additional name; some will charge for a net new certificate. It is always best to do it right the first time around!
Once we are locked on the names, then we can create the .inf file that we will feed to certreq.exe – there is a sample below for Windows 2008 and up. Copy the content between the lines to the server, save it as policy.inf and then open it up in Notepad.
========================== Copy all below this line =============================
[Version]
Signature="$Windows NT$"
[NewRequest]
Subject = "CN=sts.tailspintoys.ca" ; Remove to use an empty Subject name.
;Because SSL/TLS does not require a Subject name when a SAN extension is included, the certificate Subject name can be empty.
;If you are using another protocol, verify the certificate requirements.
;EncipherOnly = FALSE ; Only for Windows Server 2003 and Windows XP. Remove for all other client operating system versions.
Exportable = TRUE ; TRUE = Private key is exportable
KeyLength = 2048 ; Valid key sizes: 1024, 2048, 4096, 8192, 16384
KeySpec = 1 ; Key Exchange – Required for encryption
KeyUsage = 0xA0 ; Digital Signature, Key Encipherment
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
RequestType = PKCS10 ; or CMC.
[EnhancedKeyUsageExtension]
; If you are using an enterprise CA the EnhancedKeyUsageExtension section can be omitted
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication
OID=1.3.6.1.5.5.7.3.2 ; Client Authentication
[Extensions]
; If your client operating system is Windows Server 2008, Windows Server 2008 R2, Windows Vista, or Windows 7
; SANs can be included in the Extensions section by using the following text format. Note 2.5.29.17 is the OID for a SAN extension.
2.5.29.17 = "{text}"
_continue_ = "dns=sts.tailspintoys.ca&"
_continue_ = "dns=legacy.tailspintoys.ca&"
_continue_ = "dns=zorg.tailspintoys.ca&"
; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP
; SANs can be included in the Extensions section only by adding Base64-encoded text containing the alternative names in ASN.1 format.
; Use the provided script MakeSanExt.vbs to generate a SAN extension in this format.
; RMILNE – the below line is remmed out else we get an error since there are duplicate sections for OID 2.5.29.17
; 2.5.29.17=MCaCEnd3dzAxLmZhYnJpa2FtLmNvbYIQd3d3LmZhYnJpa2FtLmNvbQ
[RequestAttributes]
; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP
; and you are using a standalone CA, SANs can be included in the RequestAttributes
; section by using the following text format.
;SAN="dns=not.server2008r2.com&dns=stillnot.server2008r2.com&dns=meh.2003server.com"
; Multiple alternative names must be separated by an ampersand (&).
CertificateTemplate = WebServer ; Modify for your environment by using the LDAP common name of the template.
;Required only for enterprise CAs.
========================== Copy all above this line =============================
Please Note: In the above sample, the lines that you will typically modify for Windows 2008 and up are highlighted. Also note that in the SAN lines there are no spaces between the FQDNs, and the ampersand symbol is the separator. Since we are using Windows 2008 R2 the SAN entries are placed in the [Extensions] section. If we were running this on a Server 2003 box then we would use the [RequestAttributes] section or encode the SAN names using MakeSanExt.vbs.
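Since the `_continue_` lines are fiddly to get right by hand (every entry, including the last, needs the trailing ampersand and no spaces), they can be generated programmatically. A minimal sketch in Python – the helper function name is mine, not part of certreq.exe:

```python
def san_extension_lines(fqdns):
    """Build the [Extensions] SAN block for a certreq.exe .inf file.

    Each FQDN becomes a _continue_ line; every entry, including the
    last, ends with an ampersand and contains no spaces.
    """
    lines = ['2.5.29.17 = "{text}"']
    for fqdn in fqdns:
        lines.append('_continue_ = "dns=%s&"' % fqdn)
    return "\n".join(lines)

print(san_extension_lines(
    ["sts.tailspintoys.ca", "legacy.tailspintoys.ca", "zorg.tailspintoys.ca"]))
```

Paste the output straight into the [Extensions] section of policy.inf.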
Save this file with a .inf extension. In this post we will call it policy.inf. The below shows the file in the C:\Certs folder. Note the elevated cmd prompt!
Now that we have the required .inf file in place we can then create the certificate request:
Certreq.exe -New policy.inf newcert.req
This will generate the certificate request, and in the folder there is now a file called newcert.req that we can provide to the issuing CA.
The newcert.req contains the public key of the certificate we just created – the private key does not leave the server. You can see this certificate in the certificate MMC under Pending Enrolment Requests.
If you look at the properties of the certificate, in the Certificate Enrolment Requests folder, note that the private key is present, the certificate is not trusted and that it does not chain to an issuing CA.
And if we review the Details tab, the SAN entries are filled in:
In this step the newcert.req was provided to the public CA. For external-facing ADFS certificates you will need to go and follow the process with your chosen CA. The choice is all yours!
Once the request process was completed, the signed response file was copied into the C:\Certs folder on the same server. We can then mate the pending certificate request with the signed CA response:
certreq.exe -accept certnew.cer
Please ensure that all of the documentation from your CA provider has been followed. There might be steps to remove built-in certificates from Windows, modify their purpose, or add new intermediate CA certificates. This varies vendor by vendor, by where the certificate was issued, and over time. Please follow their instructions for the most up-to-date information!
If the necessary CA certificates have not been updated as per the CA documentation you may receive the below:
Certificate Request Processor: A certificate chain could not be built to a trusted root authority. 0x800b010a (-2146762486)
Please follow the provided documentation to import the necessary certificates etc that was provided to you by the CA and then re-attempt the import.
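As an aside, the decimal value in parentheses in these certreq errors is just the signed 32-bit view of the hex HRESULT. A quick sketch to convert between the two forms:

```python
def hresult_to_signed(code):
    """Interpret a 32-bit HRESULT as a signed integer, as certreq prints it."""
    return code - 0x100000000 if code & 0x80000000 else code

# CERT_E_CHAINING, shown in the error above as 0x800b010a (-2146762486)
print(hresult_to_signed(0x800B010A))
```

Handy when searching for an error code that a tool only shows in one of the two forms.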
If you use the default .inf file then chances are you will experience the lovely error below, then pull some hair out wondering where the issue lies.
The entry already exists. 0x800706e0 (WIN32: 1760) <inf file name> [Extensions] 2.5.29.17 =
The sample .inf file includes multiple SAN sections and, just like Highlander, there can be only one! In the example provided in this post note that all of the lines in this section are remmed out. The issue is with the highlighted line, as it is not remmed out in the default sample. It then conflicts with the previous 2.5.29.17 section.
; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP
; SANs can be included in the Extensions section only by adding Base64-encoded text containing the alternative names in ASN.1 format.
; Use the provided script MakeSanExt.vbs to generate a SAN extension in this format.
; 2.5.29.17=MCaCEnd3dzAxLmZhYnJpa2FtLmNvbYIQd3d3LmZhYnJpa2FtLmNvbQ
Note the semi-colon at the start of the highlighted line above so that we do not conflict with the initial 2.5.29.17 section.
If you are trying to generate a SAN certificate on Windows 2008 R2, but the SAN fields are disappearing and only the common name entry remains when you provide the certificate request to the CA vendor then please check that you are specifying the SAN names in the right section.
Windows 2003 servers – place SAN names in the [RequestAttributes] section. The sample line is commented out above as we are using Server 2008 R2. Un-comment it, place your SAN names there, and then comment out the 2.5.29.17 section. In the sample I added the following names to convey that 2008 does not use this section.
SAN="dns=not.server2008r2.com&dns=stillnot.server2008r2.com&dns=meh.2003server.com"
Edit this to reflect correct values. For example:
SAN="dns=sts.tailspintoys.ca&dns=legacy.tailspintoys.ca&dns=zorg.tailspintoys.ca"
Windows 2008 / 2008 R2 servers – place the SAN names in the [Extensions] section using the 2.5.29.17 field. Do not place them in the [RequestAttributes] section. Else quite simply this will no workey workey!
Please refer to the documentation on TechNet.
The below certreq.exe options are from a Windows 2008 R2 SP1 server:
Usage:
CertReq -?
CertReq [-v] -?
CertReq [-Command] -?
CertReq [-Submit] [Options] [RequestFileIn [CertFileOut [CertChainFileOut [FullResponseFileOut]]]]
Submit a request to a Certification Authority.
Options: -attrib AttributeString -binary -PolicyServer PolicyServer -config ConfigString -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -crl -rpc -AdminForceMachine -RenewOnBehalfOf
CertReq -Retrieve [Options] RequestId [CertFileOut [CertChainFileOut [FullResponseFileOut]]] Retrieve a response to a previous request from a Certification Authority.
Options: -binary -PolicyServer PolicyServer -config ConfigString -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -crl -rpc -AdminForceMachine
CertReq -New [Options] [PolicyFileIn [RequestFileOut]] Create a new request as directed by PolicyFileIn
Options: -attrib AttributeString -binary -cert CertId -PolicyServer PolicyServer -config ConfigString -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -user -machine -xchg ExchangeCertFile
CertReq -Accept [Options] [CertChainFileIn | FullResponseFileIn | CertFileIn] Accept and install a response to a previous new request.
Options: -user -machine
CertReq -Policy [Options] [RequestFileIn [PolicyFileIn [RequestFileOut [PKCS10FileOut]]]] Construct a cross certification or qualified subordination request from an existing CA certificate or from an existing request.
Options: -attrib AttributeString -binary -cert CertId -PolicyServer PolicyServer -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -noEKU -AlternateSignatureAlgorithm -HashAlgorithm HashAlgorithm
CertReq -Sign [Options] [RequestFileIn [RequestFileOut]] Sign a certificate request with an enrollment agent or qualified subordination signing certificate.
Options: -binary -cert CertId -PolicyServer PolicyServer -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -crl -noEKU -HashAlgorithm HashAlgorithm
CertReq -Enroll [Options] TemplateName CertReq -Enroll -cert CertId [Options] Renew [ReuseKeys] Enroll for or renew a certificate.
Options: -PolicyServer PolicyServer -user -machine
* – I was led to believe that this was correct US grammar
After seeing several posts and some folks discussing whether or not Exchange supports the Hyper-V Replica feature I thought it would be prudent to address the following:
For some background reading on Hyper-V and the Replica feature; the component poster and downloadable documents can be found in this post. In short the Hyper-V Replica feature tracks changes to the specified VHDs, then ships these changes to keep a copy of the virtual machine synchronised on a second host. Windows Server 2012 R2 will enhance this further. The diagram below from the replica whitepaper shows a 20,000 foot view.
In short, Exchange does not support the Hyper-V Replica feature. Exchange has a long history of supporting virtualisation from Exchange 2003 onwards. It is fully supported to install Exchange 2007, 2010 or 2013 as a virtual machine on Hyper-V, but using the Hyper-V replica feature is not supported.
The Exchange 2010 virtualization support requirements and Exchange 2013 virtualization requirements pages are rather detailed in what must and must not be done on an Exchange virtual machine or hypervisor. If what you want to do is not listed, then that should set an alarm bell off….
The Exchange virtualization support statement has never listed Hyper-V Replica, though some of the Hyper-V content has alluded to running Exchange as a replicated VM. Some workloads have announced support for Hyper-V Replica, including:
Jeff Mealiffe delivered a great session on Exchange & Virtualization at TechEd. You can find this session here, and a list of all of the other Exchange sessions here.
In his session Jeff highlighted what was supported and what was not supported. Here is his slide for Hyper-V replica.
Exchange has specific supportability requirements, plus additional virtualization-specific stipulations. Please see the system requirements page for the appropriate version; you can find them at the bottom of this post. The Server Virtualization Validation Program (SVVP) should also be consulted to ensure that the hypervisor itself is supported.
So until the Exchange Product Group folks have signed off and stated that Hyper-V Replica is supported, please do not leverage it for Exchange VMs.
In the Exchange 2003 world and below, those administrators looking to automate and control the behaviour of MAPI profiles on users’ desktops quickly became familiar with tools like:
For a refresher on such joys of .PRF files etc. take a peek at:
Whitepaper: Configuring Outlook Profiles by Using a PRF File
Automate Outlook Profile Creation Using PRFPATCH
The Exchange Profile Update tool
Ouch, those were some painful days! Thankfully with Exchange 2007/2010 and Outlook 2007/2010 we are able to move on from such tasks. Exchange 2007 introduced the Autodiscover web service, which is used by Outlook 2007 and above to automatically configure the required Outlook settings. This covers not only the initial connection to Exchange; if the administrator later changes URLs, Outlook will detect and apply those changes too. This is a great boon to administrators and will reduce user and configuration issues.
Sounds good does it not?
It is, but I typically see this as one of the most misunderstood and misconfigured services in Exchange. As far as I am concerned, if Autodiscover is broken then Exchange is broken in your environment and needs immediate remediation.
Take 10 minutes to carefully read through these links:
Exchange 2007 -- White Paper: Exchange 2007 Autodiscover Service
Exchange 2010 -- Understanding the Autodiscover Service
White Paper: Understanding the Exchange 2010 Autodiscover Service
You’re back? Good reading – right? If you didn’t read it, shame on you!
The key issue that I encounter when working on Exchange engagements is the perception that Outlook ONLY uses DNS for Autodiscover. This is a very common falsehood: if your workstation is domain joined and you are connected to the internal network, you should NOT be using DNS to determine which server to contact for Autodiscover. Instead, you should be leveraging a Service Connection Point (SCP) in AD. The SCP is published into AD when a CAS server is installed; this is done automatically by the Exchange setup routine. You can see the value in ADSIEDIT as the serviceBindingInformation attribute, and in PowerShell via the AutoDiscoverServiceInternalUri property returned by Get-ClientAccessServer. For example in my lab, the serviceBindingInformation attribute is found under the server object here:
CN=EXCH-1,CN=Autodiscover,CN=Protocols,CN=EXCH-1,CN=Servers,CN=Exchange Administrative Group (FYDIBOHF23SPDLT),CN=Administrative Groups,CN=Tailspintoys,CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=tailspintoys,DC=com
While we are here, you will also see the keywords attribute. This is the underlying attribute that holds the information set by Set-ClientAccessServer –AutoDiscoverSiteScope. The two attributes can be seen here:
By default this will be the FQDN of the server. This should be changed to a Load Balanced URL as per your Exchange design to achieve HA.
To show this in a diagram:
Outlook will build a list of CAS servers either in-site or out-of-site (but not both). The AutoDiscoverSiteScope value is used to determine site membership. It will then sort them and connect to the first one in the list, which means you will typically connect to the CAS that was installed first. If Outlook fails to contact any CAS server from its SCP lookup, then it will fall back to DNS.
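The selection logic above can be modelled in a few lines. This is an illustrative sketch only – the function, variable names and lab URLs are mine, and the real sort key is the SCP record's creation date rather than the alphabetical sort used here for brevity:

```python
def pick_autodiscover_scp(scps, client_site):
    # scps: list of (autodiscover_url, site_scope_keywords) tuples read from AD.
    # Build the in-site list if any SCP scopes this client's AD site;
    # otherwise fall back to the full (out-of-site) list -- never both.
    in_site = [url for url, sites in scps if client_site in sites]
    candidates = in_site or [url for url, _ in scps]
    # Sorted alphabetically for brevity; Outlook actually orders by the SCP
    # record's creation date, so the oldest (first-installed) CAS usually wins.
    # An empty result signals the DNS fallback.
    return sorted(candidates)[0] if candidates else None

# Hypothetical lab values -- not real endpoints.
scps = [
    ("https://exch-2.tailspintoys.com/Autodiscover/Autodiscover.xml", ["Branch"]),
    ("https://exch-1.tailspintoys.com/Autodiscover/Autodiscover.xml", ["HQ"]),
]
print(pick_autodiscover_scp(scps, "HQ"))  # the in-site CAS
```

Setting AutoDiscoverSiteScope correctly is what keeps branch clients off a CAS on the other side of a WAN link.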
External Outlook clients sitting in Starbucks are not able to directly contact AD (I sure hope that you don’t have a DC exposing TCP 389 to the Internet…) and thus will use DNS to locate the Autodiscover endpoint. This is illustrated here:
A new feature is available that enables Outlook 2007 to use DNS Service Location (SRV) records to locate the Exchange Autodiscover service
http://support.microsoft.com/?kbid=940881
Prior to this update Outlook would perform these DNS queries by default:
With this update installed the SRV query is added:
As an example:
GET http://autodiscover.contoso.com/Autodiscover/Autodiscover.xml
This fails.
Note: If your Internet-facing DNS provider does not support SRV records then you cannot use this feature.
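Putting the DNS-based sequence together, the lookups Outlook walks through when no SCP is available can be sketched as a simple ordered list. This sketch only builds the candidate list; it performs no network calls, and the function name is mine:

```python
def autodiscover_candidates(smtp_domain):
    """Candidate lookups Outlook attempts, in order, when no SCP is available.

    The first HTTPS endpoint that answers with valid Autodiscover XML wins;
    the HTTP attempt is only checked for a redirect, and the SRV query
    requires the update discussed above.
    """
    return [
        ("POST", "https://%s/Autodiscover/Autodiscover.xml" % smtp_domain),
        ("POST", "https://autodiscover.%s/Autodiscover/Autodiscover.xml" % smtp_domain),
        ("GET (redirect check)", "http://autodiscover.%s/Autodiscover/Autodiscover.xml" % smtp_domain),
        ("SRV lookup", "_autodiscover._tcp.%s" % smtp_domain),
    ]

for method, target in autodiscover_candidates("contoso.com"):
    print(method, target)
```

Note how everything is derived from the SMTP domain of the user's e-mail address, which is why the certificate names on the Autodiscover endpoint matter so much.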
You may not want your users to see the redirect warning as mentioned in step 5 above. If so, then please review:
You cannot suppress the Autodiscover redirect warning in Outlook 2007
http://support.microsoft.com/kb/956528
(Note the change in Registry path for the recent updates)
BONUS CHAT
There are other really interesting things you can do with the registry to tune and alter the default behaviour of Autodiscover on the Outlook client machine.
The registry key for Outlook 2007 is:
HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\12.0\Outlook\AutoDiscover
The registry key for Outlook 2010 is:
HKEY_CURRENT_USER\Software\Policies\Microsoft\Office\14.0\Outlook\AutoDiscover
By changing the values below you alter the default behaviour of Autodiscover.
Value name: PreferLocalXML Value type: DWORD Value data: 0 or 1
Value name: ZeroConfigExchange Value type: DWORD Value data: 0 or 1
Value name: DisableAutoStartup Value type: DWORD Value data: 0 or 1
Value name: ExcludeHttpRedirect Value type: DWORD Value data: 0 or 1
Value name: ExcludeHttpsAutodiscoverDomain Value type: DWORD Value data: 0 or 1
Value name: ExcludeHttpsRootDomain Value type: DWORD Value data: 0 or 1
Value name: ExcludeScpLookup Value type: DWORD Value data: 0 or 1
Value name: ExcludeSrvRecord Value type: DWORD Value data: 0 or 1
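Rather than touching each machine by hand, these policy values can be rolled out as a .reg file (or the equivalent GPO). A small sketch that generates the .reg content for the key paths listed above – the function name is mine, and 14.0 is the Outlook 2010 path:

```python
def autodiscover_reg(office_version, **values):
    """Emit .reg file content for the Autodiscover policy values listed above.

    office_version is "12.0" for Outlook 2007 or "14.0" for Outlook 2010;
    each keyword argument becomes a DWORD value under the policy key.
    """
    key = ("HKEY_CURRENT_USER\\Software\\Policies\\Microsoft\\Office\\"
           "%s\\Outlook\\AutoDiscover" % office_version)
    lines = ["Windows Registry Editor Version 5.00", "", "[%s]" % key]
    for name, data in values.items():
        lines.append('"%s"=dword:%08x' % (name, data))
    return "\n".join(lines)

print(autodiscover_reg("14.0", ExcludeHttpRedirect=1, ExcludeSrvRecord=0))
```

Save the output as a .reg file and import it, or use it as a reference for the matching Group Policy preference.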
To expand on the Local XML option, when Autodiscover functionality is available on your e-mail server, Outlook 2007 initiates the Autodiscover process to obtain server connectivity settings. Once a server that supports Autodiscover is located, the server returns XML data that provides the information needed for Office Outlook 2007 to automatically configure your e-mail account.
The Local XML registry value allows you to specify a local path to an .xml file that Outlook 2007 can additionally use to configure its e-mail account. The name of the registry value is the host name of the e-mail address that is provided to Outlook. In the following example, the specified path to the .xml file would be used for any e-mail addresses ending in contoso.com. The path in the first case is to a file named Autodiscover.xml located on a server named server1. A local option is then shown.
Key: HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\AutoDiscover
Value Type: DWORD
Name: contoso.com
Data: \\server1\share\autodiscover.xml
Data: C:\autodiscover.xml
See http://technet.microsoft.com/en-us/library/cc837949(office.12).aspx for additional registry entries that you may wish to deploy with the Outlook client.
When troubleshooting various Exchange issues it can be very beneficial to get a network capture to look at the actual packets going over the wire. For example, when looking at Outlook connectivity issues we can enable Outlook client logging and RPC Client Access logging on the Exchange server. Both are great troubleshooting tools and, while we can solve a lot of issues with that information, there can still be great value in looking at the actual network packets. Is there some inline WAN optimisation device that is causing an issue, or do we see retransmits?
Unlike in days of old * we do not install the full version of NetMon from the SMS installation media; it can be downloaded from the Microsoft Download Centre.
Note that there are separate packages for x86, x64 and Itanium installations. Ensure that you choose the correct type, else it will not install.
Installing Network Monitor is straightforward. It will install the tool itself, and then the parsers, which are required to split and analyse the traffic. The parsers that come with the Netmon download are a bit outdated nowadays; the latest parsers can be downloaded from Codeplex. Installing the newer parsers will overwrite the older ones.
By default Netmon 3.4 will install into C:\Program Files\Microsoft Netmon 3. This folder is used for both x86 and x64 installations since the image type is native to the OS.
I won’t go into detail on using the Netmon GUI; rather, let’s focus on the command line aspects of capturing, as it can be easier to capture data this way than to explain which buttons or options someone has to select in the UI. From a Microsoft perspective this allows us to send out a command that we know will be correctly executed and the required data gathered. Otherwise the data could be missed and we would have to wait for another occurrence, thus delaying the troubleshooting.
The GUI Network Monitor executable file is Netmon.exe. For command line work we need to use NMCap.exe.
Some quick examples before we go down the rabbit hole……
NMCap.exe /Network * /Capture /File C:\Netmon.cap:100MB
NMCap.exe /Network “Local Area Connection” /Capture /File C:\NetMon.cap:100MB
NMCap.exe /Network * /Capture /File C:\Netmon.chn:100MB
To explore all of the command line options open an elevated CMD prompt, and change to the C:\Program Files\Microsoft Netmon 3 directory. Once there, run:
NMCap.exe /?
To see examples run:
NMCap.exe /Examples
I’ll let you pick through all of the syntax, but there are a couple of items to note. First up, the file extension controls whether circular logging is used.
/File <Capture File>[:<File Size Limit>]
Name of capture file to save frames to. Extensions are used to determine the behavior of NMCap.
.cap -- Netmon 2 capture file
.chn -- Series of Netmon 2 capture files: t.cap, t(1).cap, t(2).cap...
<File Size Limit> is optional. It limits the file size of each capture file generated. Default single capture file size limit is 20 MB. The upper bound of the file size limit is 500 MB. The lower bound of the file size limit depends on the frame size captured. (Note that the maximal size of Ethernet frames is 1500 bytes). The files are circular, so once the size limit is reached, new data overwrites older data.
Example Usage: /File t.cap:50M
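The extension-driven behaviour is easy to misremember, so here is a small sketch that models how a .chn path fans out into the chained series while a .cap path stays a single file. The function name and the parts parameter are mine, purely for illustration:

```python
def capture_files(path, parts):
    """Model NMCap's extension behaviour: t.chn -> t.cap, t(1).cap, t(2).cap...

    A .cap path always yields a single (circular) file; a .chn path yields
    one file per chained segment, each capped at the size limit.
    """
    base, ext = path.rsplit(".", 1)
    if ext == "cap":
        return [path]
    return [base + ".cap"] + ["%s(%d).cap" % (base, i) for i in range(1, parts)]

print(capture_files("t.chn", 3))
```

So a long-running capture written to a .chn file with a 100 MB limit leaves you a tidy series of 100 MB segments rather than one file that has overwritten its own history.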
There are various start and stop conditions that can be used. They can be time or action based. If nothing is specified, use Ctrl + C to stop the capture. Take a look at the help for more details on /TerminateWhen, /StopWhen and /StartWhen.
In short, the command line steps are:
One thing to note! There are options to limit the networks that data will be captured on. For example, Nmcap.exe /Network 3 could be used. How do we know what networks are present, and which interface is which?
NMCap.exe /DisplayNetwork to the rescue!
This example starts capturing all TCP frames, which will be saved in a capture file named tcp.cap. If you want to stop capturing, press Ctrl + C.
nmcap /network * /capture tcp /File tcp.cap
This example starts capturing network frames that DO NOT contain ARP, ICMP, NBTNS and BROWSER frames. If you want to stop capturing, press Ctrl + C.
nmcap /network * /capture (!ARP AND !ICMP AND !NBTNS AND !BROWSER) /File NoNoise.cap
This example starts capturing network frames that are TCP Continuations. The capture filter searches for the string "Continuation" in the TCP Frame Summary Description. To see the complete list of Netmon properties that are filterable, type ".Property" in the Netmon Filter UI.
nmcap /network * /capture contains(.Property.Description, \"Continuation\") /File TCPContinuations.cap
This example starts capturing network frames at 3:17 PM on September 10, 2002. All DNS frames that contain the QRecord Questions name 'my_computer' will be saved in a capture file named dns.cap. The size of the capture file will not exceed 6 megabytes. If the user presses x at any time during this capture, the program will terminate; otherwise the capture will stop 10 minutes after it has begun.
nmcap /network * /startwhen /time 3:17:00 PM 9/10/2002 /capture contains(dns.qrecord.questionname,'my_computer') /file dns.cap:6M /stopwhen /timeafter 10Min /TerminateWhen /KeyPress x
This example starts capturing network frames after 10 seconds have passed. All IPv4 frames received by the local machine with IP address 192.168.0.1 will be saved in a capture file named ip.cap. The size of the capture file will not exceed the default size limit. If the user presses c at any time during this capture, the program will terminate; otherwise the capture will stop 10 minutes after it has begun.
nmcap /network * /startwhen /timeafter 10 /capture ipv4.destinationaddress == 192.168.0.1 /file ip.cap /stopwhen /timeafter 10 min /TerminateWhen /KeyPress c
Starts capturing network frames immediately. All TCP frames that have a source port or destination port of 80 are saved to the chained capture files named test.cap, test(1).cap, test(2).cap, ... When the user presses the 'x' key the program stops.
nmcap /network * /capture tcp.port == 80 /file c:\temp\test.chn:6M /stopwhen /keypress x
Starts capturing network frames immediately. All TCP SYN frames that have the specified IPv4 network address are stored in the capture file t.cap. The program stops when the TCP connection ends.
nmcap /network * /startwhen /frame tcp.flags.syn == TRUE AND ipv4.Address == 192.168.0.1 /capture /file t.cap:8M /stopwhen /frame (tcp.flags.fin == TRUE OR tcp.flags.reset == TRUE) AND ipv4.Address == 192.168.0.1
This example reassembles fragmented frames of capture.cap at all layers possible. The resultant capture file, Reassembled.cap, will contain the reassembled payloads along with the original unfragmented frames.
nmcap /inputcapture capture.cap /reassemblecapture /file Reassembled.cap
This example starts capturing frames, which will be saved in a capture file named result.cap. If you want to stop capturing, press Ctrl + C. When the free disk space is less than 20% of the total space of the current disk, the capture will stop as well.
nmcap /network * /capture /File result.cap /MinDiskQuotaPercentage 20
This example starts capturing frames and also tracks processes that generated network traffic. The resultant output file is ProcessTraffic.cap.
nmcap /network * /capture /File ProcessTraffic.cap /CaptureProcesses
The examples below are from real troubleshooting incidents. Note that %computername% is embedded into the output file name so that we can easily identify which capture is which. Just like in 1994, when we had CV~1.doc, CV~2.doc and CV~3.doc on a floppy disk, it’s a real pain looking at a bunch of files called capture.cap. Yes, we can place them in folders, but they often get messed up!
One of the neat things is that NMCAP uses the same filter syntax as Netmon.exe. That means you can tweak and develop the capture filter in the UI and then transpose it.
In the examples below we are using the Blob mechanism to determine the data that we want to capture. While this is not explicitly documented in the NMCap help content, it does fall within the [FrameFilter] section. As discussed on the NetMon team blog, to create a filter using the Blob syntax you need to know the offset and length of the pattern you are matching. Often the simplest way to do this is to open a trace you’ve taken from the network you are interested in, and click on the field in question. Then look in the hex details for that location and offset.
Below is an example of this. We have highlighted the IPv4 Destination Address field. Note that in the Hex Details pane, the Frame Offset is shown as 30 with Sel Bytes of 4.
For this example the filter would be Blob(FrameData, 30, 4) == 192.168.2.40
Neat, eh? I have to thank Curtis Houck for introducing this to me!
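If you would rather derive the offsets than click around in the UI, they follow directly from the frame layout on Ethernet: a 14-byte Ethernet header, then the field's offset within the IPv4 header (12 bytes in for the source address, 16 for the destination, assuming no 802.1Q tag and a standard 20-byte IP header). A sketch that computes and formats the filter:

```python
ETHERNET_HEADER = 14   # bytes before the IPv4 header on plain Ethernet
IPV4_SRC_OFFSET = 12   # source address offset within the IPv4 header
IPV4_DST_OFFSET = 16   # destination address offset within the IPv4 header

def blob_filter(field_offset, ip):
    """Build an NMCap Blob() capture filter for a 4-byte IPv4 address field."""
    return "Blob(FrameData, %d, 4) == %s" % (ETHERNET_HEADER + field_offset, ip)

print(blob_filter(IPV4_DST_OFFSET, "192.168.2.40"))
print(blob_filter(IPV4_SRC_OFFSET, "192.168.16.5"))
```

This reproduces the offsets of 30 and 26 used in the filters below; on a link with VLAN tagging or a non-Ethernet medium the header length would differ, which is why checking the Hex Details pane remains the safest approach.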
Capture all data between Source IP 192.168.16.5 and Destination IP 131.107.2.200.
In this example we will capture on any network, limiting FrameLength to 256 bytes, with a series of chained output files, each of which is limited to 100 MB. The capture filter uses the Blob methodology described above for high-performance parsing. Blob(framedata,26,4)==192.168.16.5 is the filter for the IPv4 Source Address. Blob(framedata,30,4)==131.107.2.200 is the IPv4 Destination Address filter.
Nmcap /network * /maxframelength 256 /capture Blob(framedata,26,4)==192.168.16.5 or Blob(framedata,30,4)==131.107.2.200 /file C:\NetMon\%computername%.chn:100MB
Capture all data sent to Destination IP 192.168.2.15 on network interface 6.
This example is similar to the previous one, but restricts the capture to network interface 6 and filters only on the Destination IP.
Nmcap /network 6 /maxframelength 256 /capture Blob(framedata,30,4)==192.168.2.15 /file C:\NetMon\Capture\%computername%.chn:100MB
* – I still think that I miss having to re-install the service pack on NT and getting prompted to restart just by looking at the network connection properties. I think….