A virtual machine from Windows Server 2012 R2 cannot be directly imported into Windows Server 2012. This is discussed in this previous post, along with the import error 32784.
There will be times when we want to take a 2012 R2 VM and import it into 2012. This happened to me recently when setting up one of the workshops that I deliver to customers: while the setup guide said to deploy 2012 R2, what was installed was another story, as the setup team had not prepared 2012 R2 builds for this environment.
So what to do?
While the 2012 R2 VM configuration file cannot be imported into 2012, the virtual hard disk can be reused!
This post will walk through reusing the original VHD with a new 2012 configuration file. We will assume that the VM in question is a generation 1 VM, since generation 2 VMs were first introduced in Windows Server 2012 R2! Please note that the intent of this post is to provide workaround steps for lab or test VMs, and is not intended to be taken as a supportability statement.
We will focus on one VM that is called EMSDC and is currently stored in the C:\VMs-EXT folder. This is the location where the VM should reside so there is no need to move files around.
The files on disk are shown below. Note that the directory structure is pretty straightforward and that there are only two files:
The files that you see are the XML configuration file and the VHD virtual hard disk. If you prefer to see the directory in Explorer, that is also shown:
Since we cannot import the Server 2012 R2 XML file, we shall back it up and then delete the original from the Virtual Machines folder.
Note that in the bottom section of the screenshot below, the F5164EF0***.XML file has been deleted.
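If you prefer the command line, the backup and deletion can be done from PowerShell. This is a sketch: C:\Backup is an illustrative destination, the wildcard matches the GUID-named XML file, and the exact path to the Virtual Machines subfolder may differ in your layout.

Copy-Item 'C:\VMs-EXT\EMSDC\Virtual Machines\F5164EF0*.XML' -Destination C:\Backup

Remove-Item 'C:\VMs-EXT\EMSDC\Virtual Machines\F5164EF0*.XML'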
Now that we have removed the original configuration file, we can make a new one.
In this example we will use the Hyper-V Manager to create a new VM configuration which will reuse the original VHD.
First up, start the New VM wizard.
Specify the same VM name in the wizard. Note that the location is also entered so that we build the complete configuration in the same set of folders. You could leave the default if you wanted, but I wanted to mimic the original configuration. Note that the location path specified is C:\VMs-Ext and is not C:\VMs-Ext\EMSDC.
Specify the memory and network configuration.
This leads to an important point: you need to know what values to enter here, as this wizard has no knowledge of what the original VM was set to. Documentation to the rescue!
The default options are shown below for the Connect Virtual Hard Disk screen. The default is to create a brand new VHDX within the VM folder. This is not what we want, as that disk would not contain the OS and applications from the original VM’s VHD.
Thus we have to change the options so that we will use the existing VHD. This is shown below:
Finally we complete the new VM wizard.
And it is shown in the 2012 Hyper-V manager.
If we review the settings for the VM we can see that the original VHD is attached to the IDE controller.
Referencing the original disk layout, we can see that the starting configuration XML file was removed then a new XML file was created.
At this point we can now power the VM on, though we still have work to do!
At this point it is very likely that the installed version of the Hyper-V Integration Components (IC or ICs) is not what we require. In this case the 2008 R2 ICs are installed, which causes the 2012 devices to be improperly recognized. If you open Device Manager, the below will likely be observed.
To check the Integration Component version we can look at the vmbus.sys file.
6.1.7601.17514 is one of the Windows 2008 R2 IC builds. In this case we need to upgrade to a 2012 IC version. Depending upon your VM, you may instead need to uninstall the existing ICs and downgrade them.
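One quick way to read the file version is with PowerShell (the path assumes a default Windows installation):

(Get-Item C:\Windows\System32\Drivers\vmbus.sys).VersionInfo.ProductVersion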
When updating the ICs I have had better success when the OS has had sufficient time to complete the hardware detection process. Once this process has finished, Windows will then prompt for a restart.
Be sure to wait for the hardware detection to complete. This is indicated with the prompt to restart. Be patient – you may need to wait for various services and applications to time out – and the desktop may go black as this happens.
Once restarted, log on and then bring the Integration Components to the correct level. Again – this example is one where the ICs need to be upgraded.
When the install has completed you will be prompted to restart:
I noted that on some of my VMs, if I did not allow the initial hardware detection to complete and immediately tried to upgrade the Hyper-V Integration Components, the machine would get stuck at the Preparing to Configure Windows screen. This is why I would wait for the initial hardware detection to complete and, when prompted, restart the VM prior to changing the Integration Components.
At this point we have a new VM configuration which uses the original VHD, and the Integration Components are at the right build level. The virtual hardware inside the VM should all be recognized.
One issue caused by replacing the VM configuration is that Windows sees a new NIC. This means that any static IPs inside the VM must be re-assigned. When you do so, Windows will warn that you are assigning an existing IP to the new NIC and ask if you want to overwrite the configuration.
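On a Windows Server 2012 guest, the re-assignment can be sketched with the built-in NetTCPIP cmdlets. The interface alias and addresses below are purely illustrative – substitute your own values:

New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.2.10 -PrefixLength 24 -DefaultGateway 192.168.2.1

Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.2.5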
Cheers,
Rhoderick
A VM that was exported from Windows Server 2012 R2 cannot be imported into Windows Server 2012. If you import the VM using PowerShell’s Import-VM cmdlet, the error message is not very descriptive – “The operation cannot be performed because the object is not in a valid state”. You will also find EventID 15040 in the VMMS event log.
The full error message for reference:
Import-VM '.\Virtual Machines\F5164EF0-5F87-40F1-9872-C669406A18A5.XML'
Import-VM : Failed to import a virtual machine.
The operation cannot be performed because the object is not in a valid state.
At line:1 char:1
+ Import-VM '.\Virtual Machines\F5164EF0-5F87-40F1-9872-C669406A18A5.XML'
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (Microsoft.HyperV.PowerShell.VMTask:VMTask) [Import-VM], Virtualizatio
nOperationFailedException
+ FullyQualifiedErrorId : InvalidObjectState,Microsoft.HyperV.PowerShell.Commands.ImportVMCommand
If you look in the Microsoft-Windows-Hyper-V-VMMS/Admin event log, EventID 15040 should be present, stating that “Failed to import a virtual machine”.
Not much to go on and troubleshoot there…
Let’s retry this using the GUI…
If you used the Hyper-V Manager to import the same VM it provides a clue:
Now that we have an error code, we can plug that into our favourite search engine, which should lead you to the following KB article:
KB 2868279 – Moving a virtual machine (VM) from a Windows Server 2012 R2 Hyper-V host to a Windows Server 2012 Hyper-V host is not a supported scenario under any circumstances.
I wanted to publish this quick post so that folks who gravitate to importing via PowerShell can search for and easily find the cause of the issue, since it is the GUI that provides the clue.
Exchange 2013 CU6 has been released to the Microsoft download centre! Exchange 2013 has a different servicing strategy than Exchange 2007/2010 and utilises Cumulative Updates (CUs) rather than the Rollup Updates (RU/UR) which were used previously. CUs are a complete installation of Exchange 2013 and can be used to install a fresh server or to update a previously installed one. Exchange 2013 SP1 was in effect CU4, and CU6 is the second post SP1 release. CU6 contains AD DS schema changes so please test and plan accordingly!
Update 1-9-2014: If you are deploying into a mixed environment with Exchange 2007, you need to review KB2997209 Exchange Server 2013 databases unexpectedly fail over in a co-existence environment with Exchange Server 2007
Update 1-9-2014: Please also review the comments here for an issue that affects Hybrid mailboxes.
Update 9-9-2014: If you are deploying into a mixed environment with Exchange 2007, you also need to review KB 2997847 You cannot route ActiveSync traffic to Exchange 2007 mailboxes after you upgrade to Exchange 2013 CU6
This is build 15.00.0995.029 of Exchange 2013, and the update is helpfully named Exchange2013-x64-cu6.exe – a great improvement over the initial CUs, which all had the same file name! Details for the release are contained in KB2961810.
As with previous CUs, CU6 follows the new servicing paradigm that was previously discussed on the blog. The CU6 package can be used to perform a new installation, or to upgrade an existing Exchange Server 2013 installation to CU6. You do not need to install Cumulative Update 1 or 2 for Exchange Server 2013 when you are installing CU6. Cumulative Updates are, well, cumulative. What else can I say…
After you install this cumulative update package, you cannot uninstall the cumulative update package to revert to an earlier version of Exchange 2013. If you uninstall this cumulative update package, Exchange 2013 is removed from the server.
Note that customised configuration files are overwritten on installation. Make sure you have any changes fully documented!
CU6 contains AD Schema updates – please test and plan accordingly!
What do I mean by that? Well, you need to ensure that you are fully informed about the caveats with the CU and are aware of all of the changes that it will make within your environment. Additionally, you will need to test the CU in a lab which is representative of your production environment.
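As a sketch, the schema and AD preparation can be run ahead of the server upgrade using the standard Exchange 2013 unattended setup switches, run from the extracted CU6 files (Schema Admins/Enterprise Admins rights are assumed for the preparation steps):

Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms

Setup.exe /Mode:Upgrade /IAcceptExchangeServerLicenseTerms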
The Exchange team today announced the availability of Update Rollup 7 for Exchange Server 2010 Service Pack 3. RU7 is the latest rollup of customer fixes available for Exchange Server 2010. The release contains fixes for customer reported issues and previously released security bulletins. For example, the security issue that was addressed in Exchange 2010 SP3 RU4 is contained in RU7.
This is build 14.03.0210.002 of Exchange 2010, and KB2961522 has the full details for the release.
Note that this is for the Service Pack 3 branch of Exchange code. Why? Exchange 2010 SP2 exited out of support on the 8th of April 2014 and will no longer receive updates.
Now, before we rush off to download and install this there are a couple of items to mention!
Test the update in your lab before installing in production. If in doubt test…
If the Exchange server does not have Internet connectivity, there will be a significant delay in building the native images for the .NET assemblies, as the server is unable to reach http://crl.microsoft.com. To resolve this issue, follow these steps:
On the Tools menu in Windows Internet Explorer, click Internet Options, and then click the Advanced tab.
In the Security section, click to clear the Check for publisher's certificate revocation check box, and then click OK.
We recommend that you clear this security option in Internet Explorer only if the computer is in a tightly controlled environment. When setup is complete, click to select the Check for publisher’s certificate revocation check box again.
Update Internet facing CAS servers first
Backup any OWA customisations as they will be removed
Test (yes technically this is in here for a second time but it is important!)
You may have noticed that I’ve been a bit quiet on the blog, TechNet forums and also replying to comments. I took some time off to go back to Europe for a few weeks travelling around Malta and Scotland. Will get back to normal soon, just have to slash through my inbox and get that under control!
After enjoying this:
It was time for the below views at Glencoe:
While this is all nice and green (even without the filter applied to it), just think about the amount of rain that this takes to keep it so green. And in Scotland rain = midgies. *
* – For those who are lucky enough not to have experienced midgies they are just as annoying as mosquitoes and sometimes even more so!
After using various types, architectures and generations of computers over the years there is always the habit of “you go to what you know!” In other words once you figure out a solution to an issue, you then use that repeatedly in the future as you know the process/steps involved. This adroitly describes me when it comes to doing certain command line tasks. If I were being a bit more unkind to myself, then I could also use the saying “If all you have is a hammer, all you see are nails”.
Sometimes I like to mix it up and combine PowerShell commands with output from the cmd prompt since I have known ways of doing certain tasks. This is all good, until you start parsing output from PowerShell in the cmd prompt and get no matches/returns/hits on the data even though you know that there are matches within the data.
For example you might do the following:
However there are no results returned from the cmd prompt.
This is a case of data types and their evolution over time. Let’s take a look at an example and how to address it.
In this example, we shall use PowerShell to parse the IIS logs to get a list of all the Outlook 2010 users who hit the /Autodiscover virtual directory from a particular domain in the forest. We will look for hits to the Autodiscover.xml file from Outlook version 14.0 and ensure that the user is from one specific domain in our AD forest – the “Contoso” domain. The results will be outputted to a text file called Autodiscover.txt. This is in $PWD – the present working directory:
Get-ChildItem -Recurse -Filter *.log | Get-Content | Where-Object {$_ -Match "Microsoft\+Outlook\+14.0" -And $_ -Match "Contoso" -And $_ -Match "POST /Autodiscover/autodiscover.xml"} | Out-File $PWD\Autodiscover.txt
As you can see, the command completes and the output file exists in the current folder - All good! You may be wondering about the –Recurse option. If we were parsing IIS logs from lots of servers, then typically they would be copied to one central location in the format of TopFolder\Server\IIS logs. Or expressed another way:
TopFolder
    Server1
        Log1.log
        Log2.log
    Server2
Using Get-Content, we can then look at the content of the Autodiscover.txt file. The content of the file is what we’d expect, lines containing the phrases we specified in the PowerShell search:
An example line would be:
2010-11-12 09:07:02 192.168.2.15 POST /Autodiscover/Autodiscover.xml – - 443 CONTOSO\DFunker 192.168.5.10 Microsoft+Office/14.0+(Windows+NT+6.1;+Microsoft+Outlook+14.0.4760;+Pro) 200 0 0 3411
Now, since I have a penchant for old-school cmd commands, let’s try to search the output from the above command using the cmd prompt. The below example parses the file, splitting on the specified delimiters and then retrieving tokens %a and %b:
FOR /F "Tokens=10,13 Delims=,; " %a IN (Autodiscover.txt) DO @ECHO %a %b
Hmm. No results, but we already saw that there is content in the file. What gives?
The issue is that we are looking at utilities that were created in different computing eras. A lot has changed in computing, and localisation of content is one. Previously ASCII could be used quite happily, but nowadays UNICODE is typically the default option as it supports double byte characters.
To detect the current format of the file we can use PowerShell to inspect it with a script. Alternatively, open the file in Notepad and do a “Save As” – Notepad will default to the current encoding type and file location. We can see that the output file from PowerShell was encoded in Unicode.
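As a quick PowerShell sketch, we can read the first two bytes of the file. A UTF-16 LE (“Unicode”) file starts with the byte order mark 0xFF 0xFE, which displays as 255 254:

Get-Content .\Autodiscover.txt -Encoding Byte -TotalCount 2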
UNICODE is the default encoding from the Out-File cmdlet, but we can change this quite easily by adding the Encoding parameter. Valid values are "Unicode", "UTF7", "UTF8", "UTF32", "ASCII", "BigEndianUnicode", "Default", and "OEM".
"Default" uses the encoding of the system's current ANSI code page.
So this time around, let’s tell Out-File to encode as ASCII and save as Autodiscover-v2.txt:
Get-ChildItem -Recurse -Filter *.log | Get-Content | Where-Object {$_ -Match "Microsoft\+Outlook\+14.0" -And $_ -Match "Contoso" -And $_ -Match "POST /Autodiscover/autodiscover.xml"} | Out-File $PWD\Autodiscover-v2.txt -Encoding ASCII
Now, if we look at the contents of Autodiscover-v2.txt the FOR command gets results:
The net result is that I get to keep on running batch file commands from the 90s!
I was recently reminded of a simple yet effective Event Viewer filtering tip. If you have thousands of event entries that are polluting/flooding the log, it becomes very difficult to see the real issues.
For example this tip has proved very useful when the application event log is full of these types of noise:
The third example will log once a minute if it detects that the server was sending SMTP email. If the server is an Exchange box then SMTP is its raison d'être. Logging one of these a minute will generate 1440 events a day that you then have to ignore. Talk about not seeing the wood for the trees.
This all came to mind from looking at one of my lab servers that has a very chatty RAID card. The application event log typically looks like this, from source MR_Monitor:
In amongst all of this, how do we narrow down and look for just the relevant information?
Right-clicking the event log name and selecting “Filter Current Log” will display the various options, which are shown in the screen capture below.
There are various options for filtering the event log, and depending upon your exact circumstance different options will make sense.
There are several fields to filter on, and they are fairly self-explanatory:
You can also flip to the XML tab and specify an event filter in XPath, but we can leave that for another post.
There are a few things that I have a habit of doing in here.
As discussed at the start of this post, how do we stop the noise events from showing up in the list? Quite simply, we can punch these events out by using the EventID field. This is documented right in the middle of the above screen capture, but people tend not to read it for whatever reason.
Using this field we can include/exclude:
If you know the EventID(s) that you want to see, then simply specify them as the only included ones; all others will be filtered out. This is great if you know that, but as mentioned above we want to remove the noise from the log and see all the other events. In that case we will exclude only the noise EventIDs.
Events are excluded by using the negative operator – the minus sign. If we want to filter the EventID 2080 MSExchange ADAccess information events from the log, specify –2080 as the filter:
Using the above example of the chatty RAID card, I want to see all messages relating to the battery and nothing about disk detection. The disk detection events are EventID 247 and 236. Another one that I’d like to suppress is EventID 44, as that reports the runtime of the card. To exclude EventID 44 and the range of 230 to 250 (which covers the disk detection events), the below filter can be used:
-44,-230-250
When first looking at a server with issues, I like to use the Administrative Events view. This is a built-in view that surfaces warning, error and critical events from all administrative logs on the server.
This allows me to get a decent overview before diving into one specific log.
Knowing some EventIDs helps when investigating issues.
For example, knowing to filter the system log looking for EventID 6009 shows immediately the server restart dates.
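The same filter can also be expressed in PowerShell with Get-WinEvent. A quick sketch – EventID 6009 is logged by the EventLog source at every startup, so each hit corresponds to a boot:

Get-WinEvent -FilterHashtable @{LogName='System'; Id=6009} | Select-Object TimeCreated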
What other filtering tips do you have? Care to share?
Quick post for a Friday!
A customer had an “interesting” issue where the DAG networks were not being displayed inside the Exchange Management Console. The underlying reason was that firewalls had been implemented between the Exchange servers, which also restricted network traffic between Exchange and the DCs. This is not a supported configuration. For details on the firewall restricting traffic aspect, please see TechNet and the EHLO blog post.
In the Exchange Management Console they expected to see the DAG networks like this:
However they saw something like this – note that the networks pane is blank:
In order to work around this, we used the Exchange Management Shell to target an individual server in each DAG, directing the commands to a specific server that was local and whose network traffic was not restricted. We used the Get-DatabaseAvailabilityGroupNetwork cmdlet with the –Server parameter. For example:
Get-DatabaseAvailabilityGroupNetwork –Server MailboxServer01
Since the customer had switched data centres and added and removed some interfaces, there were 9 DAG networks in this single DAG. Yes, nine – EEEK! We really should only have two: one for MAPI and one for REPL. Some customers may also have iSCSI interfaces, but that is not too common.
Once we had worked through the interfaces, we deleted the old stale ones that had no assigned subnets, and then adjusted the remaining two. As part of this I requested that the customer rename the interfaces, since DAGNetwork01 and DAGNetwork02 are neither intuitive nor descriptive names.
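Deleting a stale network that has no assigned subnets can be done with the matching Remove cmdlet; the network name below is illustrative:

Remove-DatabaseAvailabilityGroupNetwork -Identity 'DAG01\DAGNetwork05'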
This then got a deer-in-the-headlights look from the admin, who wondered: how can I specify the name of something and then rename it in the same command? Does that not go into a circular loop? Thankfully no! The cunning developers allow us to use the Identity parameter to call out which DAG network we are working with, and then the –Name parameter to supply the new name. For example:
Set-DatabaseAvailabilityGroupNetwork -Identity 'DAG01\DAGNetwork01' -Name 'MAPI'
If you do have iSCSI networks on a DAG server, then those iSCSI networks should not be used for Exchange or cluster operations. This configuration must not be done using the cluster tools – use the Exchange tools, which will then call the cluster APIs for you. For example you would run:
Set-DatabaseAvailabilityGroupNetwork –Identity DAG01\iSCSI -IgnoreNetwork $True -ReplicationEnabled $False
Note that the iSCSI network was cunningly renamed to iSCSI. It’s the simple things in life that make it easy…
In a little under 6 months, multiple products will experience a support life cycle change. As indicated by the space shuttle being readied, the countdown has started and the dates are set. Make sure you are aware of what is happening on the 13th of January 2015.
2014 has already been a busy year for product transitions, with Exchange 2003, Office 2003 and Windows XP Pro all exiting out of extended support.
While not changing at the same time as the products below, please also make sure that the Windows Server 2003 and 2003 R2 end of extended support date is on your calendars and project plans. Please plan to move off Windows Server 2003 by the 14th of July 2015. I also blogged about the upcoming change for Office 2010, where SP2 will be required after the 14th of October 2014 since SP1 will no longer be supported.
There are multiple products that we need to be aware of that will experience support status changes in January 2015. They include:
Mainstream support for Exchange 2010 will end on the 13th of January 2015, and Exchange 2010 will then enter its extended support phase.
Windows 7 will also experience a state change. As with Exchange 2010, it will leave mainstream support on the 13th of January 2015 and enter extended support phase.
Windows Server 2008’s mainstream support will also end on the 13th of January 2015.
As will Windows Server 2008 R2’s.
Forefront Unified Access Gateway (UAG) customers need to update to SP4 since UAG 2010 SP3 support will end on the 13th of January 2015.
Finally, not that I see this much in the wild since Hyper-V was launched, Virtual Server 2005 and Virtual Server 2005 R2 support will end on the 13th of January 2015.
The Lifecycle site’s FAQ has more information and details on support options if you are not able to complete your migration prior to the end of support dates.
Make sure that you are able to migrate to a supported product prior to the support expiration date. Security updates will not be provided for products that are not supported.
When resolving issues with on-premises Exchange, sometimes the issue may be directly within Exchange; other times the root cause may lie outside Exchange. Depending upon the exact nature of the case we may have to investigate network switches, load balancers or storage. When Exchange is virtualized, then the hypervisor and its configuration may also require attention.
This was the case with a recent customer engagement. Initially the scope was upon Exchange with symptoms including Exchange servers dropping out of the DAG, databases failing over and poor performance for users. As with most cases that get escalated to me, there is rarely only a single issue in play and multiple items have to be addressed. The customer was using ESX 5 update 1 as a hypervisor solution, and Exchange 2010 SP3. Exchange was deployed in a standard enterprise configuration with a DAG, CASArray and a third party load balancer.
In this case, one of the biggest issues was that of the hypervisor discarding valid packets. Within this environment, an Exchange DAG server that was restarted had discarded ~35,000 packets during the restart. Exchange servers that had been running for a couple of days had discarded 500,000 packets. That’s a whole lot of packets to lose. This was the cause of servers dropping out of the cluster and generating EventID 1135 errors. This issue is discussed in detail in this previous post, which also contains a PowerShell script that will easily retrieve the performance monitor counter from multiple servers. The script allows you to monitor and track the impact of the issue easily.
Yay – we found the issue and all was well. Time to close the case? NO!
There were multiple other issues involved here, and not all of them were immediately obvious when troubleshooting, so I wanted to share these notes for awareness purposes. All software needs maintenance; Exchange itself is no exception, and it is critical to keep code maintained with the vendors' updates. This ensures that you address known issues and proactively maintain the system. As always, this must be tempered with adequately testing any update in your lab prior to deploying it in production.
This post is only to raise awareness of the below issues and is not intended to be negative to the hypervisor in question. As stated above Exchange, Windows and Hyper-V all require updates. Hyper-V experienced network connectivity issues previously and required an update.
The customer reported that the DAG IP address was causing conflicts on the network. The typical cause for this is the administrator manually adding the DAG IP to one or more cluster nodes. This is an IP address that can be bound to any node, and the cluster service will perform the required steps; the administrator should only add it as a DAG IP address and do no more. In this case the DAG was correctly configured, and servers only had their unique host IP addresses assigned.
Initially there seemed to be a correlation with the duplicate DAG IP address and backups. However this was quickly discarded as the duplicate IP issue would only happen once every several weeks and could not be reproduced on demand by initiating a backup.
There is an issue documented in KB 1028373 – False duplicate IP address detected on Microsoft Windows Vista and later virtual machines on ESX/ESXi when using Cisco devices on the environment. This issue occurs when the Cisco switch has gratuitous ARPs enabled, or when ArpProxySvc replies to all ARP requests incorrectly.
This was the initial issue discussed above and is covered here.
It is always prudent to keep working an issue until it is proven that the root cause has been addressed. In this case additional research was done to investigate networking issues on the hypervisor and the below links are included for reference.
The symptom of large guest OS packet loss can include servers being dropped from the cluster. When a node is removed from cluster membership, EventID 1135 is logged into the system event log.
To report on such errors, I wrote a script to enumerate the instances of this EventID. Please see this post for details on the script.
KB 2055853 – VMXNET3 resets frequently when RSS is enabled in a Windows virtual machine
Disabling RSS within the guest OS is not ideal for high volume machines as this could lead to CPU contention on the first core. Please work to install the requisite update for the hypervisor.
KB 2058692 – Possible data corruption after a Windows 2012 virtual machine network transfer
Modern versions of Windows will typically not be using this virtual NIC – they will generally use VMXNET3. However, be aware of the other issues on this page affecting VMXNET3 vNICs.
When installing the VMware Tools in ESXi 5, selecting the Full installation option will also install the vShield filter driver. There is a known issue with this filter driver, discussed in KB 2034490 – Windows network file copy performance after full ESXi 5 VMware Tools installation.
Starting with ESXi 5.0, VMware Tools ships with the vShield Endpoint filter driver. This driver is automatically loaded when VMware Tools is installed using the Full option, rather than the Typical default.
I also saw this TechNet forum post with a related issue to what was observed onsite. Servers would discard a very high number of packets which would severely impact the application users were trying to access.
There are some important items to review when configuring NLB on VMware.
It is critical to discuss the NLB implementation with the hypervisor team and also the network team. Be very specific with what is being implemented and what is expected of both of these teams. Some network teams do not like NLB unicast as it leads to switch flooding, whilst others do not appreciate having to load static ARP entries into routers to ensure remote users can access the NLB VIP. Cisco has Catalyst NLB documentation here. Avaya has some interesting documentation on this page.
For this and other reasons Exchange recommends the use of a third party load balancer. This could be a physical box in a rack or a VM which can run inside Hyper-V or ESX. Please consult with your load balancer vendor so they can best meet your business, technical and price requirements.
Whilst working on a customer’s Exchange 2010 DAG issue, I wrote a script to quickly grab some performance monitor counters from all of their Exchange servers. The issue that we were investigating was related to discarded packets when the VM was running on a certain hypervisor host. The customer had moved their Exchange VMs to a new host, and after doing so they were experiencing cluster issues. Randomly, nodes would be dropped from cluster membership, which would impact the Exchange 2010 DAG as any active copies of those Exchange databases would then have to be mounted on another server. The activation was happening automatically (as expected), but it is still not a desired state.
On the Exchange servers we observed EventID 1135 – Cluster node was removed from the active failover cluster membership.
At this point we did not do the typical knee-jerk reaction that normally happens -- which is to simply crank up the cluster timeout values. Why, you ask? Well, that does not address the root cause; it only masks the symptom.
Update 18-11-2014: Please see this post for a script to retrieve the number of 1135 EventId errors on multiple servers.
We quickly checked the basics, and made sure that the Exchange 2010 recommended DAG update (it’s a cluster update but Exchange recommends it strongly) was installed, and also the generic updates recommended for the version of the OS Exchange was installed onto. They are discussed in this post along with other Exchange 2010 deployment tweaks.
None of this made a difference. The cluster still experienced EventID 1135 cluster disconnects. Since this only started after the VMs were moved to the new host, known issues for those hosts were then reviewed. In VMware KB 1010071 and 2039495 these symptoms are discussed and Exchange 2010 is specifically tagged in the second article.
While the hypervisor admins have their own tools to report and investigate such issues, we can use Performance Monitor to see how Windows perceives the lay of the land.
The counter that we were looking at was “Packets Received Discarded”. The sample image below is from Hyper-V and shows the location:
From the Perfmon description: Packets Received Discarded is the number of inbound packets that were chosen to be discarded even though no errors had been detected to prevent their delivery to a higher-layer protocol. One possible reason for discarding packets could be to free up buffer space.
This is great – we can use this counter to look at the issue, but how do we do it easily across multiple servers? And then potentially across every single VM that the customer has, since if we are hitting the issue on one set of VMs, what other VMs are affected?
To see if we were experiencing the issue across multiple Exchange servers, and to gauge the severity I wrote a quick PowerShell script that would pull in the required performance counters from multiple servers quickly and easily. This uses the Get-Counter cmdlet as shown here:
Get-Counter "\Network Interface(*)\Packets Received Discarded" -ComputerName $Server
The script will get a collection of NICs from the specified server, and then loop through them, removing the pseudo interfaces. For example, we do not want to see Teredo, ISATAP or 6to4 interfaces. For the purposes of this script we are concerned with the physical ones, and that includes the "physical" NICs that are made visible in virtual guest operating systems. NIC names are not hardcoded into the script, else it would not be portable across physical server types and hypervisors.
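As an illustration, the core of that logic might look like the below. This is a simplified sketch rather than the Gallery script itself, and the server names are placeholders:

```powershell
# Sketch: collect "Packets Received Discarded" from several servers,
# filtering out pseudo-interfaces such as ISATAP, Teredo and 6to4.
$Servers = "EXCH-01", "EXCH-02"    # placeholder server names

foreach ($Server in $Servers)
{
    Get-Counter "\Network Interface(*)\Packets Received Discarded" -ComputerName $Server |
        Select-Object -ExpandProperty CounterSamples |
        Where-Object { $_.InstanceName -notmatch 'isatap|teredo|6to4' } |
        Select-Object @{Name = "Server"; Expression = { $Server }}, InstanceName, CookedValue
}
```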
You can obtain the script from the TechNet Gallery under the name Get Perfmon Counter Packets Received Discarded On Multiple Servers.
Update 22-10-2014: Updated script to also include OS uptime and OS installation date
Using the script, we were able to quickly check all of the customer’s servers and pinpoint trends in the environment. One half of the DAG servers were experiencing discarded packets many times higher than the others, and the trend was present on both the MAPI and REPL networks. This allowed us to focus on particular hypervisor hosts.
Armed with this data, we could then ask why specific VMs were more impacted than others, and prove it. It turns out that the Exchange VMs had 64GB of RAM assigned and had been placed onto hosts which had 64GB of physical memory. Since there was no free memory for the hypervisor, this placed pressure on the hypervisor and exacerbated the issue.
This is an issue that has received attention in the past from the Exchange community. In addition to the other great posts out there on this topic, I posted this to set the context around the script, as we all want an easy way to check lots of servers and potentially monitor for this issue.
Want to grow and expand your technical, customer and soft skills? Do you dream of working with the product groups at Microsoft to influence future product updates? Want to have fun while delivering true mission critical support to top tier Microsoft customers? Then you are in luck!
Microsoft Canada GBS is looking for Exchange and Office 365 experts. Microsoft's Global Business Support (GBS) group is part of our Customer Service and Support team which in turn is part of our worldwide Services Organisation.
You may also know us as Premier Field Engineering (PFE) #MSPFE
As an example of what we do please see:
If you have reached that point in your career where you are craving to expand, grow your sphere of influence and opportunities are limited this is a great opportunity to join the team at Microsoft Canada.
The job posting can be found on the Microsoft Canada careers site.
Update 20-7-2014: Replaced placeholder text with the exact job description link.
ADFS 2012 R2 provides an interesting feature called Extranet Lockout Protection, where the intent is to protect AD accounts from malicious lockout from external access attempts. Previous versions of ADFS had no native mechanism to protect AD from such hammering attempts. For details on the feature please review this post.
One issue that can occur when extranet lockout protection is enabled is around how it deals with AD accounts that have had no bad passwords submitted against them. Bad password attempts are stored in the BadPwdCount attribute in AD, and are stored on the server that processed the failed logon request.
In this post we will look at the account called Vanilla-1 - this is an account that has not had a single wonky password submitted to it. Making sure there were no password typos was the most stressful part of writing this post! To see BadPwdCount on the Windows 2012 R2 DC, we can use the Get-ADDomainController cmdlet to enumerate all domain controllers. This lab has two domain controllers, which is why there are two lines returned. This collection is then used in the ForEach loop to enumerate the user’s properties on each DC passed to it as shown below:
Get-ADDomainController -Filter * | ForEach { Get-ADuser "Vanilla-1" -Properties * -Server $_ } | Format-Table Name, PasswordLastSet, BadPwdCount
For comparison, note that a separate account User-2 has 4 and 1 BadPwdCount reported from different DCs.
In the above example we are looking at one specific account, if you wanted to dump all user objects, then change the filter for the Get-ADUser cmdlet:
Get-ADDomainController -Filter * | ForEach { Get-ADuser -Filter {(ObjectClass -eq "user")} -Properties * -Server $_} | Format-Table Name, PasswordLastSet, BadPwdCount
Now that we have verified that the BadPwdCount is not set for Vanilla-1, let’s try and logon to ADFS 2012 R2 using this account. The URL we will hit is:
https://adfs.tailspintoys.ca/adfs/ls/idpinitiatedsignon.htm
And we get the lovely error below:
An error occurred
An error occurred. Contact your administrator for more information
Activity ID: 00000000-0000-0000-0b00-0080000000d2
Error time: Tue, 15 Jul 2014 19:08:55 GMT
Cookie: enabled
User agent string: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.3; WOW64; Trident/7.0; .NET4.0E; .NET4.0C; .NET CLR 3.5.30729; .NET CLR 2.0.50727; .NET CLR 3.0.30729; BRI/2; MS-RTC EA 2; MS-RTC LM 8; InfoPath.3)
(I’ll come back to why the Activity ID is highlighted in a moment)
Since ADFS auditing is enabled (a post on that is coming up soon), we can see the below in the security event log on the ADFS server:
Looking at the details of the failed ADFS events we can see EventId 300 stating that there was an error enumerating the AD account.
Then EventID 413 is logged when processing the request:
How do I know for sure that these events map to the failed logon shown in the IE screen capture? Well, apart from the fact that this is my lab and no-one else is using it? Remember the Activity ID that was highlighted in red? Go back and look at the IE screen capture. Notice that they are the same? This is how we can correlate a client error with the many events logged on the server.
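If you are wading through a busy security log, you can let PowerShell do the Activity ID matching for you. A rough sketch, using the Activity ID from this lab:

```powershell
# Find security events whose message contains the client's Activity ID
$ActivityId = "00000000-0000-0000-0b00-0080000000d2"

Get-WinEvent -LogName Security -MaxEvents 1000 |
    Where-Object { $_.Message -match $ActivityId } |
    Format-Table TimeCreated, Id, ProviderName -AutoSize
```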
Now that we have the background, how to address this? Brian Reid, who has forgotten more about transport than I know, kindly added a comment to the initial blog post. As Brian points out, this issue is discussed in KB 2971171 - A new Active Directory user cannot log on from the AD FS server when the server is running from a GMSA account. Before this can be installed, the April 2014 update rollup for Windows Server 2012 R2 must be installed – this is KB 2919355. This lab machine was updated with this when the April update was first released:
What is not totally clear though is the fact that this lab is *NOT* using a GMSA account – it is using a standard service account. However, installing the update did resolve the issue.
After installing the ADFS 2012 R2 update, let’s try this again. Things are much better, and the account Vanilla-1 is able to logon to ADFS.
In the security event log, we now see successful events when Vanilla-1 logs on to ADFS.
EventID 4624 shows Vanilla-1 as a successful logon:
And to make sure that I did not fnangle anything in the background (like making a typo in the password), note that the bottom command was taken 3 minutes after the screenshot above. The BadPwdCount is the same as the start. Phew – I did not typo the password!!
Just like all software components ADFS needs maintenance. In the Office 365 portal, the notification page has been alerting admins that a performance issue exists with ADFS 2012 R2 and that a hotfix must be installed. This is one of the items below.
Not related to ADFS 2012 but the same maintenance is also needed for ADFS 2.0 – Updates are also available for those older builds, for example Update Rollup 3 for ADFS 2.0.
In addition to break fix, KB 2927690 also lights up the alternative logon capability. It pays to stay current on updates!
After a recent video driver update, my corporate Outlook client started to do some strange things. Within Office 2013, the screen output would be distorted. Menu bars were not painted properly until I mouse-over them again, or moved Office programs around. Other times the display would look corrupted and the navigation tree would not be properly rendered.
Update 16-7-2014: Adding link to Office 2013 known issues. This discusses implications of disabling the hardware acceleration and some issues with certain drivers.
Update 5-11-2014: Adding link to Windows 10 Preview display issue.
I would see issues like this in Outlook -- note that the left hand navigation tree is unreadable in places
Then when composing/reading an email, the ribbon would be corrupted and/or distorted:
The underlying issue is discussed in KB 2768648 - Performance and display issues in Office 2013 client applications.
While the underlying issue is with the video card driver, there is a workaround until the video driver is updated.
On an individual machine, the user can open up the Outlook advanced properties and select the option to “Disable hardware graphics acceleration”
Doing so immediately fixed up my errant display issues. I’ll update this post when I see a video card release that resolves the issue.
There are some alternative methods of implementing this via the registry and GPO.
We can also set a registry key to disable the feature:
HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Common\Graphics
REG_DWORD: DisableHardwareAcceleration
Value: 0x1
To query this via cmd prompt:
REG.exe Query HKCU\Software\Microsoft\Office\15.0\Common\Graphics /V DisableHardwareAcceleration
To set this via cmd prompt:
REG.exe Add HKCU\Software\Microsoft\Office\15.0\Common\Graphics /T REG_DWORD /V DisableHardwareAcceleration /D 0x1 /F
(Note that the above is one line that may wrap)
We can retrieve the current configuration using the first command, whilst the second sets the value:
Get-ItemProperty -Path HKCU:\Software\Microsoft\Office\15.0\Common\Graphics -Name DisableHardwareAcceleration | Select-Object DisableHardwareAcceleration | FT –AutoSize
New-ItemProperty -Path HKCU:\Software\Microsoft\Office\15.0\Common\Graphics -Name DisableHardwareAcceleration -PropertyType DWORD -Value "0x1" –Force
(Note that the above are all one line that may wrap)
In the Office 2013 Administrative Template files there is an option to disable hardware acceleration. To do this:
In a previous post we saw the Microsoft requirements for the exclusions that must be added to file system AV on Exchange servers. In a recent CritSit, basically an uber urgent support request where the customer is down or as good as down, I also got to examine some of the other causes for file system AV not being correctly configured for Exchange.
In the aforementioned post the majority of the issues were caused by the lack of exclusions to a scheduled scan task. The issue below was not related to that but how different processes are identified and what exclusions get applied to the various processes.
Please note that this post is not intended to slight the AV vendor’s product in any way whatsoever. The product was performing as designed, it was how the customer’s AV team had configured the product which was the issue. The underlying intent for this and the other post is to raise field awareness of the types of issues that we see, and to facilitate better and more focussed discussions with the various AV and security teams that we work with on a daily basis.
The file system AV product in question has the option to categorise processes into different risk levels. By default this feature is not enabled, and the customer must explicitly enable it. The different process levels that you may see are Default, Low Risk and High Risk. The below is a brief description:
The key concept to note is that the level a process is defined at will dictate which set of exclusions will apply. For example a process like Trojan.exe can be defined at the high risk level. This means that the exclusions applied to what Trojan.exe touches will be the exclusions defined at the high risk level. Typically by default there will be minimal exclusions at the high risk level.
What happened to cause the issue to get me onsite in a hurry?
(Subject should read Pete Tong MBE, and refers to cockney rhyming slang “It’s gone wrong”.)
The customer’s Exchange team correctly identified that file system AV exclusions were required as part of the design. The required exclusions were passed to the customer’s AV team. Consider this the WHAT of this story. The exclusions are WHAT is required. HOW they get implemented varies depending upon the file system AV product the customer has implemented. AV products each have their own best practices and implementation requirements; for details on this you must consult with your AV team and their vendor. Microsoft cannot provide guidance on HOW a 3rd party vendor’s product should be configured to achieve the required results.
In this case, the customer started off by defining all of the required exclusions in the default process section. As noted above, this will apply to all processes on the system uniformly. What happened next was a bit baffling. For some reason that was not well understood, they then enabled the low risk process section (and by extension this also enabled high risk). All of the Exchange processes were then added to the low risk section. Job done, no?
<Borat> Not so much </Borat>
Since the Exchange processes were now defined as low risk processes, they picked up the exclusions that were defined at the low risk level. In the paragraph above, note that there was no mention of the exclusions being copied over from the default process section, and that was the crux of the issue. The Exchange content was now being scanned by file system AV since it was not excluded at the same level as the defined process. In this case every read and write to the database was intercepted by file system AV. The performance on the system was terrible, CPU consumption was through the roof, and since the business was so dissatisfied with Exchange performance I won a free trip to go and fix it.....
Again, the AV product was working as designed. Absolutely no issues were identified with it apart from the configuration the customer had applied. After I noted that not all of the required exclusions were present, I requested the customer’s AV team, the AV vendor and the Exchange team get on a conference call to thrash this out. I have to applaud the level of support we got from the AV support person on the call, she was fantastic! In the space of 60 minutes she clearly and precisely identified the configuration issues, stated what needs to be corrected and then provided multiple other items the customer should address.
What can we take away from this?
Please also refer to the previous post for the other learning items also presented there.
Every so often I see folks run into issues with scripts/one-liners that they obtained from a blog or crafted themselves. One common issue is when they think the command is perfect and then when they go to dump the output to a file, the content is mince. **
Imagine your surprise when you open up the output file expecting pristine data, and it starts with:
#TYPE Microsoft.PowerShell.Commands.Internal.Format.FormatStartData
As an example, we can use the below script that I saw a customer try last week:
Get-Mailbox –Database DB01 –Resultsize Unlimited | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -GT 50000} | Sort-Object -Property ItemsInFolder -Descending | Format-List Identity, ItemsInFolder | Export-CSV $PWD\NaughtyMailboxes.csv
This is meant to get a list of mailboxes in a given database, then look at each folder in turn to see if any of them have a number of items in excess of 50,000. This code looks to run successfully and produces the following output.
You proclaim “This is excellent – job done!” However, when you open up the .CSV file in Excel, the results appear to be less than excellent…..
The output below is not really what you wanted to see…
#TYPE Microsoft.PowerShell.Commands.Internal.Format.FormatStartData"ClassId2e4f51ef21dd47e99d3c952918aff9cd","pageHeaderEntry","pageFooterEntry","autosizeInfo","shapeInfo","groupingEntry""033ecb2bc07a4d43b5ef94ed5a35d280",,,,"Microsoft.PowerShell.Commands.Internal.Format.ListViewHeaderInfo","9e210fe47d09416682b841769c78b8a3",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"27c87ef9bbda4f709f6b4002fa4af63c",,,,,"4ec4f0187cb04f4cb6973460dfe252df",,,,,"cf522b78d86c486691226b40aa69e95c",,,,,
Where did the wheels fall off the bus??
<courtesy link to Gerry Rafferty and Stealers Wheel>
Taking a closer look at the PowerShell code, carefully read through it and think about what each cmdlet does. The heading for this section is a clue….
At this point you should be asking why there is a Format-List in the middle of this. If so, then you are on the money.
As discussed previously in this series of posts, PowerShell does not pass raw text down the pipeline; it passes .NET objects. Format-List, Format-Table and Format-Wide convert the underlying objects so that they can then be rendered for output. The format cmdlets, such as Format-List, arrange the data to be displayed but do not display it. The data is displayed by the output features of Windows PowerShell and by the cmdlets that contain the Out verb (the Out cmdlets), such as Out-Host, Out-File, and Out-Printer. If you do not use a format cmdlet, Windows PowerShell applies the default format for each object that it displays.
Whilst this looks OK on the screen, as soon as you pipe this to the next cmdlet that expects .NET objects bad things happen….
The Format-List, Format-Table and Format-Wide cmdlets should be the last ones that are on the pipeline, not in the middle.
With that in mind, how do we then select particular properties in the middle of the pipeline if Format-List cannot be used? We use the Select-Object cmdlet instead. This does not prepare the objects for output; they are left as objects, which allows them to be piped to the next cmdlet.
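You can see the difference with any cmdlet, not just the Exchange ones. A quick illustration in plain PowerShell, no Exchange required:

```powershell
# Format-List emits formatting records; Select-Object preserves the objects.
$viaFormat = Get-Process | Format-List Name, Id
$viaSelect = Get-Process | Select-Object Name, Id

# The first object in the formatted stream is a FormatStartData record...
$viaFormat[0].GetType().FullName    # Microsoft.PowerShell.Commands.Internal.Format.FormatStartData

# ...whereas Select-Object output still carries a usable type name.
$viaSelect[0].PSTypeNames[0]        # Selected.System.Diagnostics.Process
```

That FormatStartData type is exactly the one littering the broken CSV output shown earlier.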
<courtesy link to Queen>
This is why if you remove the Export-CSV cmdlet, the output looks OK on the screen:
My lab does not have humungous mailboxes which is why the above item count was changed to 5, but that is irrelevant for the issue here.
And if you pipe to Get-Member to look at the objects at the end of the pipeline, you will notice that once they have passed through Format-List they are no longer the native Exchange objects:
TypeName: Microsoft.PowerShell.Commands.Internal.Format.FormatStartData
Now compare this with replacing the Format-List with Select-Object. Note that the output object type is still a native Exchange class:
TypeName: Selected.Microsoft.Exchange.Management.Tasks.MailboxFolderConfiguration
If we change the PowerShell code, replacing the Format-List with Select-Object we get:
Get-Mailbox –Database DB01 –Resultsize Unlimited | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -GT 50000} | Sort-Object -Property ItemsInFolder -Descending | Select-Object Identity, ItemsInFolder | Export-CSV $PWD\NaughtyMailboxes.csv
And the output looks like what we need/expect:
Adding –NoTypeInformation to the end of the above command means that you will not see the type information in the output CSV file. In this case this would be:
#TYPE Selected.Microsoft.Exchange.Management.Tasks.MailboxFolderConfiguration
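Putting the pieces together, the corrected command with the type header suppressed would be:

```powershell
Get-Mailbox -Database DB01 -ResultSize Unlimited |
    Get-MailboxFolderStatistics |
    Where-Object { $_.ItemsInFolder -gt 50000 } |
    Sort-Object -Property ItemsInFolder -Descending |
    Select-Object Identity, ItemsInFolder |
    Export-CSV $PWD\NaughtyMailboxes.csv -NoTypeInformation
```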
** – This is a Scottish technical term stating the said item does not meet or exceed the functional spec. There are indeed other more colourful phrases, but I can’t really post them here!
Exchange 2010 introduced a very interesting feature – the Scripting Agent. The intent for this component is to provide extensibility to the base management tools and ensure consistency for the execution of cmdlets in the environment. The feature is not enabled by default and you must manually enable it if you want to leverage the Scripting Agent.
If you are looking for a way to set default options on mailboxes that do not inherit that specific configuration item from the database or server level, then this is for you!
As TechNet describes: when you enable the Scripting Agent cmdlet extension agent, the agent is called every time a cmdlet is run on a server running Exchange 2010. This includes not only cmdlets run directly by you in the Exchange Management Shell, but also cmdlets run by Exchange services, the Exchange Management Console (EMC), and the Exchange Control Panel (ECP). We strongly recommend that you test your scripts and any changes you make to the configuration file, before you copy your updated configuration file to your Exchange 2010 servers and enable the Scripting Agent cmdlet extension agent.
To summarise -- Every time an Exchange cmdlet is executed the list of cmdlets and actions contained within the Scripting Agent configuration is checked. If there are actions defined for the cmdlet in the Scripting Agent configuration then those actions are automagically added to the cmdlet being executed prior to the actual command doing anything.
This means that the Scripting Agent is a great tool to ensure that certain options are set in the environment. For example this can be used to:
The last one will be of interest to Blackberry users who run into the issue where they need to allow Exchange 2010 to process external meeting messages. You can run the below to enable for one mailbox:
Set-CalendarProcessing -ProcessExternalMeetingMessages $True
Or the below to change all the mailboxes on a given server.
Get-Mailbox -Server "servername” -ResultSize Unlimited | Set-CalendarProcessing -ProcessExternalMeetingMessages $True
But this is all after the fact. Some customers have implemented scheduled scripts to go back and re-set such configuration items but that still leaves a period of time when the configuration is not what it should be. The Scripting Agent can fix you up here! Additional filtering examples for PowerShell are in this post.
How does this good stuff all work then?
The purpose of the Scripting Agent is to insert your own custom values and logic into the Exchange workflow. This applies to the Exchange Management Console, Exchange Management Shell and Exchange Control Panel. The cmdlets underpin actions taken in the GUI, and every time an Exchange cmdlet is called the Scripting Agent cmdlet extension agent is invoked. This agent checks to see if there are any additional actions to be added to the cmdlet.
Note that the Scripting Agent is only for Exchange cmdlets, it will not fire on Get cmdlets and does not exist on the Exchange Edge role.
There is a sample Scripting Agent file on a default Exchange 2010 installation. This file can be found in the Exchange Installation Folder\Bin\CmdletExtensionAgents folder. By default this is:
C:\Program Files\Microsoft\Exchange Server\V14\Bin\CmdletExtensionAgents
The file is called ScriptingAgentConfig.xml.sample and to allow Exchange to use it, the file must be renamed to remove the .sample suffix. For those who had to endure it, it is the same concept as “LMHosts.sam” – but let’s not go down the #PRE and #DOM silly road again….
There are four APIs that are available and are called in the following order:
ProvisionDefaultProperties, UpdateAffectedIConfigurable, Validate and OnComplete.
Here are some items to consider before going live with the feature:
Michel also has a great end to end solution for enabling archive mailboxes using the Scripting Agent – check that out too.
Let’s look at an example of enabling the Scripting Agent and a sample configuration file that overcomes some of the common issues with writing to multiple domain controllers.
In the below screen shot the Scripting Agent is still in its default configuration and is disabled:
The sample ScriptingAgentConfig.xml.sample file is present and is dated the 21st July 2009.
Copy the ScriptingAgentConfig.xml to all Exchange servers, and administrator workstations. Ensure that you have a process to keep the files in lock step else you will get varying results.
We can then enable the Scripting Agent using the PowerShell cmdlet Enable-CmdletExtensionAgent, and check that the Scripting Agent’s status is now enabled.
For reference, the above command is:
Enable-CmdletExtensionAgent "Scripting Agent"
Now that the Scripting Agent is enabled and the same ScriptingAgentConfig.xml copied to all machines, we can start to test it out!
Let’s test out the Scripting Agent. To do this we will make a mailbox using the Exchange Management Shell and then another using the Exchange Management Console. The custom configuration file that was deployed will enable Single Item Recovery for all newly created mailboxes. Please see the end of this post for the contents of the XML.
First up, creating a new mailbox (SA-Test-1) using Exchange Management Shell:
Secondly, using the Exchange 2010 Management Console to create mailbox SA-Test-2. For reference only the completion screen is shown here, so that we can see the cmdlet properties that were specified:
If you look at the details of the cmdlet Executed in the above screenshot there is no mention of SingleItemRecovery. And the same is also true when examining the contents of the Exchange Management Console log file.
As you can see, when creating these mailboxes, there has been absolutely no reference to Single Item Recovery. But let’s go and check the properties of these newly created mailboxes!
You can see that both accounts have SingleItemRecoveryEnabled set to $True, which means the feature is enabled despite not specifying this in the New-Mailbox cmdlet.
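If you prefer to check this from the shell rather than eyeballing screenshots, something like the below will do (assuming the SA-Test naming used in this walkthrough):

```powershell
# Confirm the Scripting Agent set SingleItemRecoveryEnabled on the test mailboxes
Get-Mailbox "SA-Test-*" | Format-Table Name, SingleItemRecoveryEnabled -AutoSize
```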
Round of applause here!
For comparison, user account (User-1) that was created months ago does not have this feature enabled.
When multiple domain controllers are present in the same AD site, then some of the commands will fire against DC-1, some against DC-2 and so on. You will get errors along the lines of:
The cmdlet extension agent with the index 5 has thrown an exception in OnComplete. The exception is: Microsoft.Exchange.Provisioning.ProvisioningException: ScriptingAgent exception thrown while invoking scriptlet for OnComplete API. The operation couldn't be performed because object 'objectname' couldn't be found.
As an added bonus you will also get errors from Microsoft.Exchange.Provisioning.ProvisioningLayer.OnComplete.
There are a few ways around this:
I typically use option three, and store the DC that was automatically selected in a variable that can then be used consistently throughout the script. This would look like:
$DC = [string]($readOnlyIConfigurable.originatingserver)
Thus when running the Set-Mailbox cmdlet we will then specify that domain controller using the DomainController parameter:
Set-Mailbox -Identity $Newmailbox -SingleItemRecoveryEnabled $True -DomainController $DC.domain.com
Since seeing sample files makes it easier to understand this feature, and some folks will be able to use the examples below directly, there are a few included. As always, note that any and all sample code follows the terms of use described here.
My lab servers are a tad slow, so the Start-Sleep is in there for my purposes, you can remove or decrease the timeout.
This example enables SingleItemRecovery and also sets custom default calendar permissions for the Enable-Mailbox and New-Mailbox cmdlets.
Be sure to change domain.com to match your domain suffix.
<?xml version="1.0" encoding="utf-8" ?>
<Configuration version="1.0">
  <Feature Name="NewMailbox" Cmdlets="New-Mailbox">
    <ApiCall Name="OnComplete">
      if ($succeeded)
      {
        Start-Sleep 20
        $DC = [string]($readOnlyIConfigurable.OriginatingServer)
        $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
        Set-Mailbox -Identity $NewMailbox -SingleItemRecoveryEnabled $True -DomainController $DC.domain.com

        $AccessRights = "Reviewer"
        $Mailbox = Get-Mailbox $NewMailbox
        $Calendar = (($Mailbox.SamAccountName) + ":\" + (Get-MailboxFolderStatistics -Identity $Mailbox.SamAccountName -FolderScope Calendar | Select-Object -First 1).Name)
        Set-MailboxFolderPermission -User "Default" -AccessRights $AccessRights -Identity $Calendar -DomainController $DC.domain.com
      }
    </ApiCall>
  </Feature>
  <Feature Name="EnableMailbox" Cmdlets="Enable-Mailbox">
    <ApiCall Name="OnComplete">
      if ($succeeded)
      {
        Start-Sleep 20
        $DC = [string]($readOnlyIConfigurable.OriginatingServer)
        $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Identity"]
        Set-Mailbox -Identity "$NewMailbox" -SingleItemRecoveryEnabled $True -DomainController $DC.domain.com

        $AccessRights = "Reviewer"
        $Mailbox = Get-Mailbox -Identity "$NewMailbox"
        $Calendar = (($Mailbox.SamAccountName) + ":\" + (Get-MailboxFolderStatistics -Identity $Mailbox.SamAccountName -FolderScope Calendar | Select-Object -First 1).Name)
        Set-MailboxFolderPermission -User "Default" -AccessRights $AccessRights -Identity $Calendar -DomainController $DC.domain.com
      }
    </ApiCall>
  </Feature>
</Configuration>
This example disables POP and IMAP access to newly created mailboxes
<?xml version="1.0" encoding="utf-8" ?>
<Configuration version="1.0">
  <Feature Name="NewMailbox" Cmdlets="New-Mailbox">
    <ApiCall Name="OnComplete">
      if ($succeeded)
      {
        Start-Sleep 20
        $DC = [string]($readOnlyIConfigurable.OriginatingServer)
        $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
        Set-CASMailbox -Identity $NewMailbox -ImapEnabled $false -PopEnabled $false -DomainController $DC.domain.com
      }
    </ApiCall>
  </Feature>
  <Feature Name="EnableMailbox" Cmdlets="Enable-Mailbox">
    <ApiCall Name="OnComplete">
      if ($succeeded)
      {
        Start-Sleep 20
        $DC = [string]($readOnlyIConfigurable.OriginatingServer)
        $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Identity"]
        Set-CASMailbox -Identity $NewMailbox -ImapEnabled $false -PopEnabled $false -DomainController $DC.domain.com
      }
    </ApiCall>
  </Feature>
</Configuration>
The following example disables Outlook Anywhere:
<?xml version="1.0" encoding="utf-8" ?>
<Configuration version="1.0">
<Feature Name="NewMailbox" Cmdlets="New-Mailbox">
<ApiCall Name="OnComplete">
if($succeeded)
{
Start-Sleep 20
$DC = [string]($readOnlyIConfigurable.OriginatingServer)
$NewMailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
Set-CASMailbox -Identity $NewMailbox -MAPIBlockOutlookRpcHttp $True -DomainController $DC
}
</ApiCall>
</Feature>
<Feature Name="EnableMailbox" Cmdlets="Enable-Mailbox">
<ApiCall Name="OnComplete">
if($succeeded)
{
Start-Sleep 20
$DC = [string]($readOnlyIConfigurable.OriginatingServer)
$NewMailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
Set-CASMailbox -Identity $NewMailbox -MAPIBlockOutlookRpcHttp $True -DomainController $DC
}
</ApiCall>
</Feature>
</Configuration>
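As a reminder for anyone trying these samples: the scripting agent configuration must be distributed and the agent enabled before any of the XML above takes effect. A hedged sketch, assuming a default Exchange install path and the standard ScriptingAgentConfig.xml file name:

```powershell
# Save the XML as ScriptingAgentConfig.xml in the CmdletExtensionAgents folder
# on every Exchange server, e.g. <install path>\Bin\CmdletExtensionAgents\

# Then enable the agent organisation-wide from the Exchange Management Shell
Enable-CmdletExtensionAgent "Scripting Agent"

# And confirm it is enabled
Get-CmdletExtensionAgent "Scripting Agent" | Format-List Name, Enabled
```

Remember that the file must be identical on every server, else cmdlet behaviour will vary depending on which server services the request.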
Please feel free to leave suggestions in the comments for other great use cases for this feature.
The blog post on how to integrate Office 365 with Windows 2012 R2 ADFS raised an interesting question from a reader (Hi Eric!) on how he should request a certificate for the ADFS instance, since there is no longer an IIS dependency. This means that there is no longer an IIS console to generate a certificate request with. What to do?
You could generate a certificate request, complete it and then export it to a .pfx file on an Exchange server. The exported certificate can then be copied over to the ADFS server[s] and then imported to the local computer certificate store to make it available for ADFS purposes.
What if you don’t want to, or can’t, do this? If you want to do this on the ADFS server directly then certreq.exe can help us out! This also applies to other servers, and the steps here are not just for ADFS. However, the question raised means that more folks in the field are probably thinking about the same thing, so that forced me to polish off yet another one of those draft blog posts!
This post uses a venerable utility that has been present in Windows for a long time. In a future post we can look at the newer PowerShell features for this task.
Certreq.exe is built into the underlying OS. In the examples below we will use a Windows 2008 R2 SP1 server. To see the options, execute “certreq.exe /?”. This is shown in the image below, and the full command line parameters are at the bottom of this post for reference:
The goal of this exercise is to generate a certificate that will contain multiple Subject Alternative Names (SAN) in addition to the subject name (common name) of the certificate. If you don’t want a SAN certificate, also called a Unified Communications certificate by various vendors, then simply comment out that line in the process below.
We want to end up with a certificate that has the following Subject name: sts.tailspintoys.ca
Along with the Subject Alternative Names of: sts.tailspintoys.ca, legacy.tailspintoys.ca and zorg.tailspintoys.ca
We can break this down into three basic steps: create the .inf file that describes the certificate, generate the request and submit it to the issuing CA, and finally import the signed CA response.
The syntax is to use certreq.exe with the -New parameter, specifying the request file that we can take to the issuing CA. Once the signed CA response has been obtained and copied back to the server, we can import it using the -Accept parameter to complete the certificate request process.
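Putting those steps together, the end-to-end flow looks like this (the file names are the examples used later in this post; the -Submit line is only needed if you submit to an internal CA from the command line, and its config string is a placeholder):

```powershell
# 1. Generate the request from the .inf policy file (run from an elevated prompt)
certreq.exe -New policy.inf newcert.req

# 2. Submit newcert.req to the issuing CA and obtain the signed response.
#    For a third party CA this is done via their web portal; for an internal CA:
# certreq.exe -Submit -config "CAServer\CAName" newcert.req certnew.cer

# 3. Mate the signed response with the pending request to install the certificate
certreq.exe -Accept certnew.cer
```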
Let’s go get crazy and request us some certificate! *
Before we can generate the certificate request we must be absolutely sure that we know the exact names that we want to include. Once the certificate has been issued by the CA, it cannot be changed. Some 3rd party CAs will charge a nominal amount to re-issue with a different/additional name, some will charge for a net new certificate. It is always best to do it right – the first time around!
Once we are locked on the names, then we can create the .inf file that we will feed to certreq.exe – there is a sample below for Windows 2008 and up. Copy the content between the lines to the server, save it as policy.inf and then open it up in Notepad.
========================== Copy all below this line =============================
[Version]
Signature="$Windows NT$"
[NewRequest]
Subject = "CN=sts.tailspintoys.ca" ; Remove to use an empty Subject name.
;Because SSL/TLS does not require a Subject name when a SAN extension is included, the certificate Subject name can be empty.
;If you are using another protocol, verify the certificate requirements.
;EncipherOnly = FALSE ; Only for Windows Server 2003 and Windows XP. Remove for all other client operating system versions.
Exportable = TRUE ; TRUE = Private key is exportable
KeyLength = 2048 ; Valid key sizes: 1024, 2048, 4096, 8192, 16384
KeySpec = 1 ; Key Exchange – Required for encryption
KeyUsage = 0xA0 ; Digital Signature, Key Encipherment
MachineKeySet = True
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
RequestType = PKCS10 ; or CMC.
[EnhancedKeyUsageExtension]
; If you are using an enterprise CA the EnhancedKeyUsageExtension section can be omitted
OID=1.3.6.1.5.5.7.3.1 ; Server Authentication
OID=1.3.6.1.5.5.7.3.2 ; Client Authentication
[Extensions]
; If your client operating system is Windows Server 2008, Windows Server 2008 R2, Windows Vista, or Windows 7
; SANs can be included in the Extensions section by using the following text format. Note 2.5.29.17 is the OID for a SAN extension.
2.5.29.17 = "{text}"
_continue_ = "dns=sts.tailspintoys.ca&"
_continue_ = "dns=legacy.tailspintoys.ca&"
_continue_ = "dns=zorg.tailspintoys.ca&"
; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP
; SANs can be included in the Extensions section only by adding Base64-encoded text containing the alternative names in ASN.1 format.
; Use the provided script MakeSanExt.vbs to generate a SAN extension in this format.
; RMILNE – the below line is remmed out else we get an error since there are duplicate sections for OID 2.5.29.17
; 2.5.29.17=MCaCEnd3dzAxLmZhYnJpa2FtLmNvbYIQd3d3LmZhYnJpa2FtLmNvbQ
[RequestAttributes]
; If you are using a standalone CA, SANs can be included in the RequestAttributes
; section by using the following text format.
; SAN="dns=not.server2008r2.com&dns=stillnot.server2008r2.com&dns=meh.2003server.com"
; Multiple alternative names must be separated by an ampersand (&).
CertificateTemplate = WebServer ; Modify for your environment by using the LDAP common name of the template.
;Required only for enterprise CAs.
========================== Copy all above this line =============================
Please Note: In the above sample, the lines that you will typically modify for Windows 2008 and up are highlighted. Also note that in the SAN line there are no spaces between the FQDNs, and the ampersand symbol is the separator. Since we are using Windows 2008 R2, the SAN entries are placed in the [Extensions] section. If we were running this on a Server 2003 box then we would use the [RequestAttributes] section or encode the SAN names using MakeSanExt.vbs.
Save this file with a .inf extension. In this post we will call it policy.inf. The below shows the file in the C:\Certs folder. Note the elevated cmd prompt!
Now that we have the required .inf file in place we can then create the certificate request:
Certreq.exe -New policy.inf newcert.req
This will generate the certificate request, and in the folder there is now a file called newcert.req that we can provide to the issuing CA.
The newcert.req contains the public key of the certificate we just created – the private key does not leave the server. You can see this certificate in the certificate MMC under Pending Enrolment Requests.
If you look at the properties of the certificate, in the Certificate Enrolment Requests folder, note that the private key is present, the certificate is not trusted and that it does not chain to an issuing CA.
And if we review the Details tab, the SAN entries are filled in:
In this step the newcert.req was provided to the public CA. For external-facing ADFS certificates you will need to go and follow the process with your chosen CA. The choice is all yours!
Once the request process was followed, the response file was copied into the C:\Certs folder on the same server.
Once we have obtained the signed response from the issuing CA, copy it to the server. Then we can mate the pending certificate request with the signed CA response:
certreq.exe -accept certnew.cer
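Once the -Accept completes, it is worth confirming that the certificate landed in the local computer store with its private key and SAN entries intact. A quick sketch of that check in PowerShell (the Subject below is this post’s example value):

```powershell
# Inspect the newly installed certificate in the local computer Personal store
Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -eq "CN=sts.tailspintoys.ca" } |
    ForEach-Object {
        $_ | Format-List Subject, HasPrivateKey, NotAfter
        # Dump the Subject Alternative Name extension (OID 2.5.29.17), if present
        ($_.Extensions | Where-Object { $_.Oid.Value -eq "2.5.29.17" }).Format($true)
    }
```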
Please ensure that all of the documentation from your CA provider has been followed. There might be steps to remove built-in certificates from Windows, modify their purposes, or add brand new intermediate CA certificates. This varies from vendor to vendor, by where the certificate was issued, and over time. Please follow their instructions for the most up-to-date information!
If the necessary CA certificates have not been updated as per the CA documentation you may receive the below:
Certificate Request Processor: A certificate chain could not be built to a trusted root authority. 0x800b010a (-2146762486)
Please follow the documentation provided to you by the CA to import the necessary certificates, and then re-attempt the import.
If you use the default .inf file then chances are you will experience the lovely error below, and then pull your hair out wondering where the issue lies.
The entry already exists. 0x800706e0 (WIN32: 1760) <inf file name> [Extensions] 2.5.29.17 =
The sample .inf file includes multiple SAN sections and, just like Highlander – there can be only one! In the example provided in this post, all of the lines in this section are remmed out. The issue is with the highlighted line, which is not remmed out in the default sample. This then conflicts with the previous 2.5.29.17 section.
; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP
; SANs can be included in the Extensions section only by adding Base64-encoded text containing the alternative names in ASN.1 format.
; Use the provided script MakeSanExt.vbs to generate a SAN extension in this format.
; 2.5.29.17=MCaCEnd3dzAxLmZhYnJpa2FtLmNvbYIQd3d3LmZhYnJpa2FtLmNvbQ
Note the semi-colon at the start of the highlighted line above so that we do not conflict with the initial 2.5.29.17 section.
If you are trying to generate a SAN certificate on Windows 2008 R2, but the SAN fields are disappearing and only the common name entry remains when you provide the certificate request to the CA vendor then please check that you are specifying the SAN names in the right section.
Windows 2003 servers – place SAN names in the [RequestAttributes] section. The sample line is commented out above as we are using Server 2008 R2. Un-comment it, place your SAN names here, and then comment out the 2.5.29.17 section. In the sample I added the following names to convey that 2008 does not use this section.
SAN="dns=not.server2008r2.com&dns=stillnot.server2008r2.com&dns=meh.2003server.com"
Edit this to reflect correct values. For example:
SAN="dns=sts.tailspintoys.ca&dns=legacy.tailspintoys.ca&dns=zorg.tailspintoys.ca"
Windows 2008 / 2008 R2 servers – place the SAN names in the [Extensions] section using the 2.5.29.17 OID. Do not place them in the [RequestAttributes] section. Else quite simply this will no workey workey!
Please refer to the documentation on TechNet.
The below certreq.exe options are from a Windows 2008 R2 SP1 server:
Usage:
CertReq -?
CertReq [-v] -?
CertReq [-Command] -?

CertReq [-Submit] [Options] [RequestFileIn [CertFileOut [CertChainFileOut [FullResponseFileOut]]]]
Submit a request to a Certification Authority.
Options: -attrib AttributeString -binary -PolicyServer PolicyServer -config ConfigString -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -crl -rpc -AdminForceMachine -RenewOnBehalfOf

CertReq -Retrieve [Options] RequestId [CertFileOut [CertChainFileOut [FullResponseFileOut]]]
Retrieve a response to a previous request from a Certification Authority.
Options: -binary -PolicyServer PolicyServer -config ConfigString -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -crl -rpc -AdminForceMachine

CertReq -New [Options] [PolicyFileIn [RequestFileOut]]
Create a new request as directed by PolicyFileIn.
Options: -attrib AttributeString -binary -cert CertId -PolicyServer PolicyServer -config ConfigString -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -user -machine -xchg ExchangeCertFile

CertReq -Accept [Options] [CertChainFileIn | FullResponseFileIn | CertFileIn]
Accept and install a response to a previous new request.
Options: -user -machine

CertReq -Policy [Options] [RequestFileIn [PolicyFileIn [RequestFileOut [PKCS10FileOut]]]]
Construct a cross certification or qualified subordination request from an existing CA certificate or from an existing request.
Options: -attrib AttributeString -binary -cert CertId -PolicyServer PolicyServer -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -noEKU -AlternateSignatureAlgorithm -HashAlgorithm HashAlgorithm

CertReq -Sign [Options] [RequestFileIn [RequestFileOut]]
Sign a certificate request with an enrollment agent or qualified subordination signing certificate.
Options: -binary -cert CertId -PolicyServer PolicyServer -Anonymous -Kerberos -ClientCertificate ClientCertId -UserName UserName -p Password -crl -noEKU -HashAlgorithm HashAlgorithm

CertReq -Enroll [Options] TemplateName
CertReq -Enroll -cert CertId [Options] Renew [ReuseKeys]
Enroll for or renew a certificate.
Options: -PolicyServer PolicyServer -user -machine
* – I was led to believe that this was correct US grammar
After doing Exchange Risk Assessments (ExRAP) and Exchange Risk Assessment As A Service for almost four years, one thing continues to irritate my OCD personality! When I look at the event logs on an Exchange server, the logs should be a sea of blue. That is, there should be no errors, as an error indicates something is not quite right and should be addressed.
Opening up the system event log on numerous customers’ servers, I’m pretty much guaranteed to see errors related to mapping printer drivers in the Terminal Services/Remote Desktop session. As you would expect, this is fluff that I just do not want to see. How to make this disappear then?
Since our venerable friend, Windows Server 2003, is going to exit extended support in a little over a year, I’ve based this post on a Server 2012 R2 box, but the principles still hold true for our trusty friend!
Let’s look at manually getting rid of the errors, and then using Group Policy to effectively manage multiple servers. First up, let’s see the error:
Opening up the system event log, we are greeted with the below errors:
This is EventID 1111 on a Windows Server 2012 server. In this case the Exchange server does not have a printer driver for OneNote installed (as expected), and is unable to create a printer mapping to the printer that is installed on the workstation that just RDP’ed to the Exchange server.
If you look for the Remote Desktop Session Host Configuration tool in Windows Server 2012 R2, you will not find it present. Either set the desired configuration using the registry directly or the Local Group Policy Object which is located here:
Computer Configuration –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection
The below is a screenshot of the Local Group Policy Object where we can configure printer redirection:
Select Enabled to activate the policy setting and click OK.
This will set the following registry key for fDisableCpm, which we can also set manually.
HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services
RegDWORD fDisableCpm 1
To automate this we can use reg.exe – note that the command is a single line and may wrap.
REG.exe ADD "HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services" /V fDisableCpm /t REG_DWORD /D 1 /F
To check the status of the Registry value we can run:
REG.exe QUERY "HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services" /V fDisableCpm
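If you prefer PowerShell to reg.exe, the same key and value can be set and checked like this (a sketch of an equivalent approach, not an additional requirement):

```powershell
$key = "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services"

# Create the policy key if it does not already exist
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

# Disable client printer redirection (fDisableCpm = 1)
Set-ItemProperty -Path $key -Name fDisableCpm -Value 1 -Type DWord

# Verify the value
Get-ItemProperty -Path $key -Name fDisableCpm
```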
That’s great for a couple of test boxes, but we do not want to do that on 100,000 servers in the enterprise and we will typically look to Group Policy in such cases.
In a typical enterprise environment we do not want to change lots of individual servers when it is easy to leverage GPO for this purpose. Again the configuration is stored in the same location as the Local GPO discussed above. Open the Group Policy Management Console, create a new GPO or edit an existing one – the choice is yours! Then navigate to the same Printer Redirection policy location shown above.
Select Enabled to activate the policy setting and click OK.
Once the GPO has refreshed on the Exchange server, the pesky printer mapping errors should be banished!
There are many settings contained in GPO that can be applied to an Exchange server to tune the Windows installation that it sits upon. One other area that is typically identified in an ExRAP/ExRaaS is that of event log size and retention. We can set the maximum size of event logs and how to retain data under the following GPO location:
Computer Configuration –> Policies –> Windows Settings –> Security Settings –> Event Log
Again we can use local policies, command line tasks and PowerShell to configure this but GPO will typically be the best bet for larger customers.
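As a sketch of the scripted alternative for the classic event logs, the built-in Limit-EventLog cmdlet can set the maximum size and retention (the 100 MB figure below is purely illustrative, not a sizing recommendation):

```powershell
# Review the current maximum sizes and retention settings for the classic logs
Get-EventLog -List

# Set the Application log to a 100 MB maximum, overwriting old events as needed
Limit-EventLog -LogName Application -MaximumSize 100MB -OverflowAction OverwriteAsNeeded
```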
Since some of my test machines are a touch low on resources, having to wait for the server manager process to finish loading can be a little bit painful.
There are group policy and local policy options to suppress this in addition to editing the registry.
There is always the option to disable it directly from Server Manager, but I’d rather automate this. For completeness’ sake, the below shows how to manually disable Server Manager on a Windows Server 2012 R2 server.
Select “Manage” in the top right-hand corner, then Server Manager Properties.
In the Server Manager Properties Window, you can choose to disable it from starting up automatically at logon.
The option to control via a GPO is contained here:
Computer Configuration\Administrative Templates\System\Server Manager
Using the Group Policy Management Console on Windows 2012 R2, we can set the policy as follows:
When the GPO is refreshed on the machines that fall under the scope of the policy, the settings will be applied. This then greys out the “Do not start Server Manager automatically at logon” option.
In addition, we can also set the required registry key automatically via a script.
HKCU\Software\Microsoft\ServerManager\DoNotOpenServerManagerAtLogon
REG_DWORD 0x1
REG.exe Query HKCU\Software\Microsoft\ServerManager /V DoNotOpenServerManagerAtLogon
REG.exe Add HKCU\Software\Microsoft\ServerManager /V DoNotOpenServerManagerAtLogon /t REG_DWORD /D 0x1 /F
We can retrieve the current configuration using the first two commands, whilst the third one sets the value:
Get-Item HKCU:\Software\Microsoft\ServerManager
Get-ItemProperty -Path HKCU:\Software\Microsoft\ServerManager -Name DoNotOpenServerManagerAtLogon | Select-Object DoNotOpenServerManagerAtLogon | Format-Table -AutoSize
New-ItemProperty -Path HKCU:\Software\Microsoft\ServerManager -Name DoNotOpenServerManagerAtLogon -PropertyType DWORD -Value "0x1" -Force
Exchange 2013 CU5 has been released to the Microsoft download centre! Exchange 2013 has a different servicing strategy than Exchange 2007/2010 and utilises Cumulative Updates (CUs) rather than the Rollup Updates (RU/UR) which were used previously. CUs are a complete installation of Exchange 2013 and can be used to install a fresh server or to update a previously installed one. Exchange 2013 SP1 was in effect CU4, and CU5 is the first post SP1 release. CU5 contains AD DS schema changes so please test and plan accordingly!
This is build 15.00.0913.022 of Exchange 2013, and the update is helpfully named Exchange2013-x64-cu5.exe, which is a great improvement over the initial CUs that all had the same file name! Details for the release are contained in KB2936880.
As Ross discussed on the Exchange team blog, CU5 changes OAB behaviour in Exchange 2013. Please read his excellent post, and the issues with OAB as discussed by Herr Brian Day at MEC 2014.
The Exchange 2007 fixes will be very welcome for some of the Dr Pepper loving customers I have recently visited, who upgraded directly from Exchange 2007 to 2013. Lync clients were not functioning correctly, as Exchange 2013 was not correctly generating the Autodiscover XML for the client when the mailbox was located on Exchange 2007.
As with previous CUs, CU5 follows the new servicing paradigm that was previously discussed on the blog. The CU5 package can be used to perform a new installation, or to upgrade an existing Exchange Server 2013 installation to CU5. You do not need to install Cumulative Update 1 or 2 for Exchange Server 2013 when you are installing CU5. Cumulative Updates are, well, cumulative. What else can I say…
CU5 contains AD Schema updates – please test and plan accordingly!
What do I mean by that? Well, you need to ensure that you are fully informed about the caveats with the CU and are aware of all of the changes that it will make within your environment. Additionally you will need to test the CU in your lab, which should be representative of your production environment.
The Exchange team today announced the availability of Update Rollup 6 for Exchange Server 2010 Service Pack 3. RU6 is the latest rollup of customer fixes available for Exchange Server 2010. The release contains fixes for customer reported issues and previously released security bulletins. For example, the security issue that was addressed in Exchange 2010 SP3 RU4 is contained in RU6.
This is build 14.03.0195.001 of Exchange 2010, and KB2936871 has the full details for the release.
Note that this is for the Service Pack 3 branch of Exchange code. Why? Exchange 2010 SP2 exited support on the 8th of April 2014 and will no longer receive updates.
When I was in Seattle for some internal training in January, one of the chaps delivering a demo used a feature that I wish I’d known about previously. When he was demonstrating some of the update mechanics for Office Pro Plus he immediately skipped to the correct portion of the registry by using a shortcut feature in Registry Editor.
Yes, there is a favourites bookmark feature!
Update 20-8-2014: Added Windows 2000 information and screenshots
Update 6-9-2014: Link to other post on sharing this between machines is here
When you open up Registry Editor (and yes, I still compel myself to use regedt32.exe rather than regedit.exe – those NT habits die hard…), look along the top and you will see a Favourites menu.
To add a key to the favourites, highlight it, and then choose “Add To Favourites”. In the case below we are adding the Exchange 2010 MSExchangeAB key as a favourite.
This allow me to easily come back, and skip directly to the section in the registry that controls how the Address Book service is configured.
Now all I have to do, is to select the MSExchangeAB entry from the Favourites menu and I get instantly teleported there – nice!
I’ve also added the MSExchangeRPC key, as we typically set both to static in Exchange 2010 enterprise deployments.
One of my AD colleagues (thanks Pierre!) mentioned that he had a Windows 2000 DC running, so I asked him to check for the feature there, and yes, it was present way back in build 2195!
Just for a giggle, this is the Start screen from a Windows Server 2012 R2 server. I’m searching for regedt32.exe
And when we zoom in on the upper right hand portion of the screen, it’s the old school NT registry editor icon.
The young pup, Windows 95 registry editor, regedit.exe is shown here for comparison.
Security is an integral aspect of running modern IT operations. There is a clear understanding that we need to protect our IT assets, company data and personal identifiable information. So when we discuss a migration to Office 365, security is an inevitable topic. One aspect that we need to discuss is around account lockout, and how to protect our Active Directory accounts as part of the overall cloud solution.
Methods to protect user accounts can be broken down into a few categories that include:
Customers wish to look at such options to mitigate the impact from:
In a future post I'll circle back on the underlying account lockout policy discussion, so let's park that one for right now. What I do want to cover in this post is ADFS and how it can impact account lockouts should you have an aggressive lockout policy enabled.
Update 3-9-2014: Please also review this post for an issue requiring a hotfix to resolve with Extranet Account Lockout Protection
In the previous versions of ADFS there was no native mechanism within ADFS itself to prevent brute force attacks upon ADFS. If AD has a password lockout policy set, then an external entity hammering the ADFS logon page could then lockout an AD account. If an entity knew the user account name, they could access the ADFS proxy page and enter a bad password for the user account. The below is an example for ADFS 2.0 running on Windows 2008 R2.
In order to mitigate this, the external firewall in front of the ADFS server could be set to only allow HTTPS traffic to the ADFS endpoint from the IP address ranges that are part of Office 365. Since this is a manual configuration, the onus is on the on-premises firewall administrator to keep the IP ranges up to date, else authentication may fail. In the traffic flow, the HTTPS traffic coming to the on-premises ADFS proxy server is initiated from Office 365. As discussed at MEC, this will have to be a planning point for the upcoming OAuth changes in Q2 this CY. As part of the authentication changes, by default clients will connect directly to the ADFS servers. Either the firewall rules will need to be changed, or modifications made to the clients to use the legacy behaviour. More on that when the team announces the details later this year! This was discussed publicly at MEC in the What’s new in Authentication for Outlook 2013 session.
If you did not get to MEC, then the content is available here for your viewing pleasure!
Apart from locking down the firewall, Windows Server 2012 R2 ADFS now adds a feature to natively allow the ADFS proxy to prevent AD DS accounts from being locked out! This is the Extranet Lockout feature. This is similar to the TMG 2010 Soft Account Lockout feature that was introduced in TMG 2010 SP2. It is said to be "soft" as the AD DS account is not locked, and after a period of time the ADFS server then automatically allows the account to retry the authentication.
Only Windows Server 2012 R2 has the Extranet Lockout feature. For this and other reasons you want to look at deploying Server 2012 R2 for your ADFS infrastructure. Some reasons include:
As mentioned above, only ADFS 2012 R2 has the Extranet Lockout feature. Thus the ADFS infrastructure must be upgraded or installed as this version. For upgrade steps, please check out the excellent ASKPFE PLAT blog!
While the Extranet Lockout feature is enabled on the ADFS server, an ADFS proxy must also be deployed for it to take effect.
Traffic must hit the ADFS proxy. If you publish the ADFS server instead or your network misroutes the traffic and bypasses the proxy, the Extranet Lockout feature will not work as expected. Trust me, I’ve been there – but more on that later in a separate blog post!!
The other base ADFS requirements and prerequisites are also documented on TechNet.
As with the other articles in the recent ADFS posts, this is again in the Tailspintoys.ca lab. The ADFS namespace is adfs.tailspintoys.ca. The environment looks like the diagram below. The ADFS server is deployed on the internal corporate network and is joined to AD. The ADFS proxy is deployed in the DMZ, and is in a workgroup. Since we are using ADFS 2012 R2, the ADFS proxy uses Web Application Proxy (WAP) rather than a dedicated ADFS proxy role as in older versions.
For the details in building this lab please see the previous series of three posts.
The diagram was drawn with the April 2014 Visio Stencils for Office 365.
AD DS is set with a domain account lockout policy that states an account will lock out after 10 invalid logon attempts. This can be seen in the GPO Management Console:
And for those LAN Manager freaks out there the command prompt too!
PS -- This was taken from a DC that does not have the PDC emulator role
So we know that after 10 attempts the account will lock out. What happens if we launch a mini-DoS attack on some guy called user-1@tailspintoys.ca via the ADFS sign in page?
Browse to the ADFS sign in page in IE11 at https://adfs.tailspintoys.ca/adfs/ls/idpinitiatedsignon.htm
And we enter a bad password 11 times…
Staying with the LAN Manager freak show, look what happened to that poor user, their account is now locked out.
On the ADFS server we see the 10 failed logon attempts before the account locked out:
Zooming in on one event we see that the response from AD is that this is an unknown user name and bad password. Well, that’s the generic text string. If we really want to know what is going on, then we look at the status and sub status codes. In this case the status 0xC000006D is the generic logon failure (“unknown user name or bad password”), but the sub status 0xC000006A tells us that the password was not correct. Well, that’s because I was making like Jean-Michel Jarre on the keyboard to make up a random string in the password entry field. Well, less the light show…
Not good! A malicious person ( moi ! ) managed to do a denial of service on this account.
AD FS Extranet Lockout to the rescue!
In the context of AD FS in Windows Server 2012 R2, Web Application Proxy functions as a federation server proxy. Web Application Proxy also serves as a barrier between the Internet and your corporate applications.
Web Application Proxy provides a number of security features to protect your corporate network, such as your users and your resources, from external threats. One of these features is AD FS extranet lockout. In case of an attack in the form of authentication requests with invalid (bad) passwords that come through the Web Application Proxy, AD FS extranet lockout enables you to protect your users from an AD FS account lockout. In addition to protecting your users from an AD FS account lockout, AD FS extranet lockout also protects against brute force password guessing attacks.
There are three ADFS settings that we need to look at with respect to the Extranet Lockout feature.
The intent is that the ADFS administrator will define a maximum number of failed authentication requests that the ADFS proxy will allow in a certain time period. Once these authentication attempts have been used up for that specific user, then the ADFS server will go into <Seinfeld> soup Nazi -- no auth for you!!! </Seinfeld>. The ADFS proxy server will then cease attempting to log the user on. By doing so, it does not hammer on the AD account thereby locking the AD account out. This protects the AD account from losing access to all resources, i.e. it is still functional on the corporate network and can get to file and print resources etc.
One thing to note: the value for ExtranetLockoutThreshold on the ADFS server must be set to a lower value than the AD DS account lockout threshold, else the AD DS account will lock out before the ADFS proxy ceases to attempt authentication, and enabling this on ADFS is pretty pointless!!
This is a global setting on the ADFS server, and the settings apply to all domains that the ADFS server can authenticate. Please plan accordingly.
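A quick sanity check before enabling the feature is to compare the two thresholds. The cmdlets below are real (Get-ADDefaultDomainPasswordPolicy requires the ActiveDirectory module), but the comparison itself is just a sketch:

```powershell
Import-Module ActiveDirectory

# AD DS domain account lockout threshold (0 means no lockout policy is set)
$adThreshold = (Get-ADDefaultDomainPasswordPolicy).LockoutThreshold

# ADFS extranet lockout threshold
$adfsThreshold = (Get-AdfsProperties).ExtranetLockoutThreshold

if ($adThreshold -gt 0 -and $adfsThreshold -ge $adThreshold) {
    Write-Warning "ExtranetLockoutThreshold ($adfsThreshold) should be lower than the AD DS lockout threshold ($adThreshold)."
}
```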
To configure the AD FS extranet lockout, you must set three properties on the AD FS service object. To set the configuration, use Set-ADFSProperties and Get-ADFSProperties to verify.
For example, you can use the following one-line PowerShell command to set the AD FS extranet lockout:
Set-AdfsProperties -EnableExtranetLockout $true -ExtranetLockoutThreshold 15 -ExtranetObservationWindow (New-Timespan -Minutes 30)
(The command is one line, please ensure that it does not word wrap)
You could split it out into multiple commands if desired:
$Timespan = New-TimeSpan -Minutes 30
Set-AdfsProperties -EnableExtranetLockout $True -ExtranetLockoutThreshold 15 -ExtranetObservationWindow $Timespan
Get-AdfsProperties | Format-List *extranet*
(Each command is one line, please ensure that it does not word wrap)
Opening up PowerShell on the ADFS server, and querying for the *Extranet* values we can see the default Extranet Lockout settings. Extranet Lockout is disabled by default.
Where is the default value for the lockout threshold coming from? 2147483647 is the maximum value of an Int32 data type. Run [Int32]::MaxValue in PowerShell to see.
Let’s now configure the ADFS server so that the ADFS proxy will lock out after 4 bad attempts in a 60 minute observation window.
$Timespan = New-TimeSpan -Minutes 60
Set-AdfsProperties -EnableExtranetLockout $True -ExtranetLockoutThreshold 4 -ExtranetObservationWindow $Timespan
Get-AdfsProperties | Fl *extranet*
When I first tried to configure this feature, I ran into this wonderful error:
Set-AdfsProperties : A parameter cannot be found that matches parameter name 'ExtranetLockoutEnabled'.
At line:1 char:20
+ Set-AdfsProperties -ExtranetLockoutEnabled $True
+                    ~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Set-AdfsProperties], ParameterBindingException
    + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.IdentityServer.Management.Commands.SetServicePropertiesCommand
Huh???
As we saw above, there is definitely a property on the ADFS object that is called ExtranetLockoutEnabled – so why was I unable to set it?
This is probably because I have been spoilt with Exchange PowerShell since 2006. The attributes are carefully thought out and after running a get cmdlet we just change it to a set cmdlet and change what we need. For that reason I get frustrated with Windows PowerShell, especially the AD cmdlets. Why do I have to have a separate cmdlet for each tiny task? Anyway I digress…
In this case the developer changed the parameter that is used to set ExtranetLockoutEnabled. To set it we have to use the EnableExtranetLockout parameter. The two names are different:
It’s always the little things that get me……
After waiting a minute for the ADFS proxy to pick up on the change, we can test to make sure this is working!
Remember that AD DS is set to lockout after 10 invalid logons, and AD FS will cease after 4 failed authentication attempts.
Again we browse to the ADFS sign in page in IE11 at https://adfs.tailspintoys.ca/adfs/ls/idpinitiatedsignon.htm
This time we will pick on user-2@tailspintoys.ca, just so that it is easy to distinguish the two scenarios in the event logs.
Again, the account is hammered with 11 bad logon attempts.
This time however, there are only 4 failed audit events on the AD FS server:
Please note: the events from 02:10 to 02:11 were the user-1 logon attempts at the top of this blog post.
Let’s check the status of the User-2 account
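A quick way to do that from PowerShell (a sketch that assumes the ActiveDirectory module; "User-2" is the sAMAccountName used in this lab):

```powershell
# Check whether the AD DS account has been locked out.
# Assumes the ActiveDirectory module; "User-2" is the lab account name.
# Note: badPwdCount is a per-DC value, so query the DC that handled the logons.
Get-ADUser -Identity "User-2" -Properties LockedOut, badPwdCount |
    Select-Object Name, LockedOut, badPwdCount
```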
Even after 11 bad logon attempts via the ADFS proxy, the account is still active – boyashaka!
Just to prove what is in the security event log of the ADFS server, let’s look for audit failure events over the last day for each of these test accounts. To look for this data, PowerShell will be the weapon of choice. Note that Get-EventLog is not used as it is lame when it comes to filtering, so we will use Get-WinEvent, which is way more powerful. Why the difference? Get-EventLog was in the initial PowerShell release and Get-WinEvent was added in PowerShell 2.0…..
The code used below is:
$StartTime = (Get-Date).AddDays(-1)
Get-WinEvent -FilterHashtable @{Logname="Security"; ProviderName="Microsoft-Windows-Security-Auditing"; Data="user-2@tailspintoys.ca"; StartTime=$StartTime} | Measure-Object
Get-WinEvent -FilterHashtable @{Logname="Security"; ProviderName="Microsoft-Windows-Security-Auditing"; Data="user-2@tailspintoys.ca"; StartTime=$StartTime}
The $StartTime variable goes back 24 hours from when it was created, i.e. a day. We then build a FilterHashtable to look for security audit events that contain the given username. In the example above this is the user-2@tailspintoys.ca account. Measure-Object is used to save us having to count….
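If you wanted to narrow the search further, FilterHashtable also accepts an Id key. A hedged example is below: 4625 is the classic failed-logon security audit, but the exact IDs written in your environment depend on how ADFS auditing is configured, so verify before relying on it:

```powershell
$StartTime = (Get-Date).AddDays(-1)
# Add the Id key to restrict to a specific event. 4625 (logon failure) is
# shown as an example; check which IDs your ADFS auditing actually writes.
Get-WinEvent -FilterHashtable @{
    Logname      = "Security"
    ProviderName = "Microsoft-Windows-Security-Auditing"
    Id           = 4625
    Data         = "user-2@tailspintoys.ca"
    StartTime    = $StartTime
} | Measure-Object
```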
First up is user-1. Note that there are 10 failed logon attempts, which corresponds to the AD DS account lockout policy. The timeframe of the 'attack' was 02:10 – 02:11.
For User-2, note that there are only 4 failed logon attempts. This correlates to the AD FS Extranet Lockout protection setting. Also note that since the AD FS lockout setting is lower than the AD DS account lockout policy the AD DS account is not locked out.
In addition to the content and links in the previously published ADFS blog posts there is also the following:
Troubleshooting AD FS
The AD FS 2012 R2 Extranet Lockout feature makes it very easy to provide protection from AD DS account lockout in scenarios where the internal AD account would be locked out due to malicious or fat-fingered end user logon attempts.
One thing to note is that applications that require ADFS to federate the authentication request will not be able to do so whilst this account is in a state of Extranet Lockout. Because of this some organisations may still choose to restrict access to their ADFS proxy via firewall rules and to set “reasonable” AD account lockout policies. We can talk more next time about why locking an AD account out after 3 bad attempts is not so good…..
MEC 2014 session recordings and slides are now available for everyone’s enjoyment!
As always Ross Smith, Scott Schnoll and Brian Day deliver some great content that you must check out! If you are in the throes of deploying Exchange 2013 SP1 right now, then please do look at Brian Day’s session. You will not regret it!
Take some time to go through these 74 awesome sessions.
My MEC 2014 write-up can be found here.