• Exchange and AntiVirus Exclusions – Still A Critical Conversation

    In a previous post we saw the Microsoft requirements for the exclusions that must be added to file system AV on Exchange servers.  In a recent CritSit (basically an uber-urgent support request where the customer is down, or as good as down) I also got to examine some of the other causes of file system AV not being correctly configured for Exchange.

    In the aforementioned post the majority of the issues were caused by the lack of exclusions to a scheduled scan task.  The issue below was not related to that but how different processes are identified and what exclusions get applied to the various processes.

    Please note that this post is not intended to slight the AV vendor’s product in any way whatsoever.  The product was performing as designed; the issue was how the customer’s AV team had configured it.  The underlying intent of this and the other post is to raise field awareness of the types of issues that we see, and to facilitate better and more focussed discussions with the various AV and security teams that we work with on a daily basis.

     

    Process Definitions

    The file system AV product in question has the option to categorise processes into different risk levels.  By default this feature is not enabled, and the customer must explicitly enable it.  The different process levels that you may see are Default, Low Risk and High Risk.  The below is a brief description:

    • Default – This is the only level enabled out of the box.  All processes thus fall into the same level, so the same set of exclusions applies to every process on the server.
    • Low Risk – Not enabled by default.  When this is enabled, High Risk is also enabled.  Processes that the AV administrator adds to this level get the exclusions defined at the Low Risk level.
    • High Risk – Enabled when Low Risk is enabled.  Processes that the AV administrator adds to this level get the exclusions defined at the High Risk level.

    The key concept to note is that the level a process is defined at dictates which set of exclusions will apply.  For example, a process like Trojan.exe can be defined at the High Risk level.  This means that the exclusions applied to what Trojan.exe touches will be those defined at the High Risk level.  By default there will typically be minimal exclusions at the High Risk level.

    What happened to cause the issue to get me onsite in a hurry?

     

    It’s All Gone A Bit Pete Tong

    (Subject should read Pete Tong MBE, and refers to cockney rhyming slang “It’s gone wrong”.)

    The customer’s Exchange team correctly identified that file system AV exclusions were required as part of the design.  The required exclusions were passed to the customer’s AV team.  Consider this the WHAT of this story.  The exclusions are WHAT is required.  HOW they get implemented varies depending upon the file system AV product the customer has implemented.  AV products each have their own best practices and implementation requirements; for details on this you must consult with your AV team and their vendor.  Microsoft cannot provide guidance on HOW a 3rd party vendor’s product should be configured to achieve the required results.

    In this case, the customer started off by defining all of the required exclusions in the default process section.  As noted above, this applies to all processes on the system uniformly.  What happened next was a bit baffling.  For some reason, that was not well understood, they then enabled the low risk process section (and by extension this also enabled high risk).  All of the Exchange processes were then added to the low risk section.  Job done, no?

    <Borat> Not so much </Borat>

     

    Since the Exchange processes were now defined as low risk processes, they picked up the exclusions that were defined at the low risk level.  Note that in the paragraph above there was no mention of the exclusions being copied over from the default process section, and that was the crux of the issue.  The Exchange content was now being scanned by file system AV since it was not excluded at the same level as the defined process.  In this case every read and write to the database was intercepted by file system AV.  The performance on the system was terrible, CPU consumption was through the roof, and since the business was so unsatisfied with Exchange performance I won a free trip to go and fix it.....

     

    Learning Points

    Again, the AV product was working as designed.  Absolutely no issues were identified with it apart from the configuration the customer had applied.  After I noted that not all of the required exclusions were present, I requested that the customer’s AV team, the AV vendor and the Exchange team get on a conference call to thrash this out.  I have to applaud the level of support we got from the AV support person on the call, she was fantastic!  In the space of 60 minutes she clearly and precisely identified the configuration issues, stated what needed to be corrected and then provided multiple other items the customer should address.

     

    What can we take away from this?

    • As Exchange  people we cannot just throw the exclusion list over the wall and expect that it is correctly implemented.  We need to follow up to ensure that what we see looks correct.
    • Do not be shy to ask the vendor for assistance or as part of the design process to ensure that their product is being correctly implemented.  You are paying for support – why not use it?
    • Look at the logs on disk to see what has been detected and repaired.  The AV product may log to clear text files on disk that you are able to view.  This is useful if you are locked out of the local AV console; these text logs can be your friend.  They will show you things like what the AV product is doing, the number of exclusions and how long certain scan operations take.  This allows the Exchange admin to keep the AV team honest.
    • Mount points continue to be an issue, as certain AV vendors need to exclude both the mount point path, such as D:\Mounts, and also the disk GUID\Volume ID.  While wildcards can make this easier to implement, Exchange admins must impress upon their AV colleagues the need to investigate this area.  If the AV team do not know you are using mount points then they will not be aware they have to be excluded!
    • Certain file system AV products allow you to use environment variables such as %ProgramFiles% – however this is of no use when you install a product like Exchange to the D:\ drive…..
    • Add Exchange servers and the FSW server to the correct AV group as part of the OS build prior to installing Exchange.
    • If scanned by file system AV, Microsoft cannot guarantee the integrity of the database.

     

    Please also refer to the previous post for the other learning items also presented there.

     

    Cheers,

    Rhoderick

  • PowerShell Pipeline Perversion

    Every so often I see folks run into issues with scripts/one-liners that they obtained from a blog or crafted themselves.  One common issue is when they think the command is perfect, and then when they go to dump the output to a file, the content is mince**

    Imagine your surprise when you open up the output file expecting pristine data, and it starts with:

    #TYPE Microsoft.PowerShell.Commands.Internal.Format.FormatStartData

     

    As an example, we can use the below script that I saw a customer try last week:

    Get-Mailbox -Database DB01 -Resultsize Unlimited | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -GT 50000} | Sort-Object -Property ItemsInFolder -Descending | Format-List Identity, ItemsInFolder | Export-CSV $PWD\NaughtyMailboxes.csv

    This is meant to get a list of mailboxes in a given database, then look at each folder in turn to see if any of them have a number of items in excess of 50,000.    This code looks to run successfully and produces the following output. 

    Running PowerShell To Generate Output File

    You proclaim “This is excellent – job done!”   However, when you open up the .CSV file in Excel, the results appear to be less than excellent….. 

    Oh Bugger  - This Is Not The Data I'm looking For

    The output below is not really what you wanted to see…

    #TYPE Microsoft.PowerShell.Commands.Internal.Format.FormatStartData
    "ClassId2e4f51ef21dd47e99d3c952918aff9cd","pageHeaderEntry","pageFooterEntry","autosizeInfo","shapeInfo","groupingEntry"
    "033ecb2bc07a4d43b5ef94ed5a35d280",,,,"Microsoft.PowerShell.Commands.Internal.Format.ListViewHeaderInfo",
    "9e210fe47d09416682b841769c78b8a3",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "27c87ef9bbda4f709f6b4002fa4af63c",,,,,
    "4ec4f0187cb04f4cb6973460dfe252df",,,,,
    "cf522b78d86c486691226b40aa69e95c",,,,,

     

    Where did the wheels fall off the bus??

     

    Stuck In The Middle With You

    <courtesy link to Gerry Rafferty and Stealers Wheel>

    Taking a closer look at the PowerShell code, carefully read through it and think about what each cmdlet does.  The heading for this section is a clue….

    Get-Mailbox -Database DB01 -Resultsize Unlimited | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -GT 50000} | Sort-Object -Property ItemsInFolder -Descending | Format-List Identity, ItemsInFolder | Export-CSV $PWD\NaughtyMailboxes.csv

     

    At this point you should be asking: why is there a Format-List in the middle of this?  If so, then you are on the money.

    As discussed previously in this series of posts, PowerShell does not pass raw text down the pipeline.  It passes .NET objects.  Format-List, Format-Table and Format-Wide convert the underlying objects so that they can then be rendered for output.  The format cmdlets, such as Format-List, arrange the data to be displayed but do not display it.  The data is displayed by the output features of Windows PowerShell and by the cmdlets that contain the Out verb (the Out cmdlets), such as Out-Host, Out-File, and Out-Printer.  If you do not use a format cmdlet, Windows PowerShell applies the default format for each object that it displays.

    Whilst this looks OK on the screen, as soon as you pipe this to the next cmdlet that expects .NET objects, bad things happen….

    The Format-List, Format-Table and Format-Wide cmdlets should be the last cmdlets in the pipeline, not in the middle.

    With that in mind, how do we then select particular objects in the middle of the pipeline if Format-List cannot be used?  We use the Select-Object cmdlet instead.  This does not prepare the objects for output; they remain objects, which allows them to be piped to the next cmdlet.
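    The difference is easy to demonstrate with any cmdlet, no Exchange required.  A quick sketch using Get-Process shows that Format-List hands format instructions down the pipeline, while Select-Object preserves the underlying object type:

```powershell
# Piping Format-List output to Get-Member reveals format objects, not processes
Get-Process | Format-List Name, Id | Get-Member
# TypeName: Microsoft.PowerShell.Commands.Internal.Format.FormatStartData (and friends)

# Select-Object keeps real objects that Export-CSV can happily consume
Get-Process | Select-Object Name, Id | Get-Member
# TypeName: Selected.System.Diagnostics.Process
```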

     

    Breakthrough

    <courtesy link to Queen>

    This is why if you remove the Export-CSV cmdlet, the output looks OK on the screen:

    Format-List Output Looks OK On Screen

    My lab does not have humungous mailboxes which is why the above item count was changed to 5, but that is irrelevant for the issue here. 

    And if you pipe to Get-Member to look at the objects at the end of the pipeline, you will notice that once they have passed through Format-List they are no longer the native Exchange objects:

    Format-List Output Looks OK On Screen But Has Been Changed!

    TypeName: Microsoft.PowerShell.Commands.Internal.Format.FormatStartData

     

    Now compare this with replacing the Format-List with Select-Object.  Note that the output object type is still a native Exchange class:

    Select-Object Used Instead Of Format-List - Note the Difference In Object Type

    TypeName: Selected.Microsoft.Exchange.Management.Tasks.MailboxFolderConfiguration

     

    If we change the PowerShell code, replacing the Format-List with Select-Object we get:

    Get-Mailbox -Database DB01 -Resultsize Unlimited | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -GT 50000} | Sort-Object -Property ItemsInFolder -Descending | Select-Object Identity, ItemsInFolder | Export-CSV $PWD\NaughtyMailboxes.csv

    And the output looks like what we need/expect:

    Much Better Output - These Are the Droids We Are Looking For!

     

    Bonus Tip

    Adding -NoTypeInformation to the end of the above command means that you will not see the type information line in the output CSV file.  In this case that line would be:

    #TYPE Selected.Microsoft.Exchange.Management.Tasks.MailboxFolderConfiguration
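    For reference, the complete command with the corrected Select-Object and type information suppressed would be:

```powershell
Get-Mailbox -Database DB01 -Resultsize Unlimited | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -GT 50000} | Sort-Object -Property ItemsInFolder -Descending | Select-Object Identity, ItemsInFolder | Export-CSV $PWD\NaughtyMailboxes.csv -NoTypeInformation
```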

     

    Cheers,

    Rhoderick

     

    ** – This is a Scottish technical term stating the said item does not meet or exceed the functional spec.  There are indeed other more colourful phrases, but I can’t really post them here!

  • Exchange Scripting Agent - The Power Of Script

    Exchange 2010 introduced a very interesting feature – the Scripting Agent.  The intent for this component is to provide extensibility to the base management tools and ensure consistency for the execution of cmdlets in the environment.  The feature is not enabled by default and you must manually enable it if you want to leverage the Scripting Agent.

    If you are looking for a way to  set default options on mailboxes that do not inherit that specific configuration item from the database or server level, then this is for you!

    As TechNet describes: when you enable the Scripting Agent cmdlet extension agent, the agent is called every time a cmdlet is run on a server running Exchange 2010. This includes not only cmdlets run directly by you in the Exchange Management Shell, but also cmdlets run by Exchange services, the Exchange Management Console (EMC), and the Exchange Control Panel (ECP). We strongly recommend that you test your scripts and any changes you make to the configuration file, before you copy your updated configuration file to your Exchange 2010 servers and enable the Scripting Agent cmdlet extension agent.

    To summarise -- Every time an Exchange cmdlet is executed the list of cmdlets and actions contained within the Scripting Agent configuration is checked.  If there are actions defined for the cmdlet in the Scripting Agent configuration then those actions are automagically added to the cmdlet being executed prior to the actual command doing anything.

    This means that the Scripting Agent is a great tool to ensure that certain options are set in the environment.  For example this can be used to:

    • Disable Outlook Anywhere for new users by default
    • Disable POP/IMAP for new users by default
    • Enable Single Item Recovery for new users by default
    • Enable External Calendar Processing  for new users by default

    The last one will be of interest to Blackberry users who run into the issue where they need to allow Exchange 2010 to process external meeting messages.  You can run the below to enable for one mailbox:

    Set-CalendarProcessing -Identity "username" -ProcessExternalMeetingMessages $True

    Or the below to change all the mailboxes on a given server.

    Get-Mailbox -Server "servername" -ResultSize Unlimited | Set-CalendarProcessing -ProcessExternalMeetingMessages $True

    But this is all after the fact.  Some customers have implemented scheduled scripts to go back and re-set such configuration items but that still leaves a period of time when the configuration is not what it should be.  The Scripting Agent can fix you up here!  Additional filtering examples for PowerShell are in this post.
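    As a sketch of that after-the-fact approach, a scheduled remediation script for one of these settings (Single Item Recovery used as the example) could look like this:

```powershell
# Find mailboxes where Single Item Recovery is not yet enabled, and fix them up
Get-Mailbox -ResultSize Unlimited |
    Where-Object {$_.SingleItemRecoveryEnabled -ne $true} |
    Set-Mailbox -SingleItemRecoveryEnabled $true
```

    Note the window between a mailbox being created and the next scheduled run – the Scripting Agent closes exactly that gap.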

    How does this good stuff all work then?

     

    Functionality Breakdown

    The purpose of the Scripting Agent is to insert your own custom values and logic into the Exchange workflow.  This applies to actions taken in the Exchange Management Console, the Exchange Management Shell and the Exchange Control Panel.  The cmdlets underpin actions taken in the GUI, and every time an Exchange cmdlet is called the Scripting Agent cmdlet extension agent is called.  This agent checks to see if there are any additional actions to be added to the cmdlet.

    Note that the Scripting Agent is only for Exchange cmdlets; it will not fire on Get cmdlets, and it does not exist on the Exchange Edge role.

    There is a sample Scripting Agent file on a default Exchange 2010 installation.  This file can be found in the Exchange Installation Folder\Bin\CmdletExtensionAgents folder.  By default this is:

    C:\Program Files\Microsoft\Exchange Server\V14\Bin\CmdletExtensionAgents

    The file is called ScriptingAgentConfig.xml.sample  and to allow Exchange to use it, the file must be renamed to remove the .sample suffix.  For those who had to endure it, it is the same concept as “LMHosts.sam” – but let’s not go down the #PRE and #DOM silly road again….
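    For example, the rename can be done from the Exchange Management Shell; copying rather than renaming preserves the original sample for future reference:

```powershell
cd "C:\Program Files\Microsoft\Exchange Server\V14\Bin\CmdletExtensionAgents"
# Keep the .sample file intact and create the active configuration alongside it
Copy-Item .\ScriptingAgentConfig.xml.sample .\ScriptingAgentConfig.xml
```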

    There are four APIs that are available and are called in the following order:

    1. ProvisionDefaultProperties   This API can be used to set values of properties on objects when they're created. When you set a value, that value is returned to the cmdlet, and the cmdlet sets the value on the property. You can fill in values on properties if the user didn't specify a value, or you can override the value specified by the user. This API respects the values set by higher priority agents. The Scripting Agent cmdlet extension agent won't overwrite the values set by higher priority agents.
    2. UpdateAffectedIConfigurable   This API can be used to set values of properties on objects after all other processing has been completed, but the Validate API hasn't yet been called. This API respects the values set by higher priority agents. The Scripting Agent cmdlet extension agent won't overwrite the values set by higher priority agents.
    3. Validate   This API can be used to validate the values on an object's properties that are about to be set by the cmdlet. This API is called just before a cmdlet writes any data. You can configure validation checks that allow a cmdlet to either succeed or fail. If a cmdlet passes the validation checks in this API, the cmdlet is allowed to write the data. If the cmdlet fails the validation checks, it returns any errors defined in this API.
    4. OnComplete   This API is used after all cmdlet processing is complete. It can be used to perform post-processing tasks, such as writing data to an external database.
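    Putting this together, the configuration file pairs cmdlet names with one or more of these ApiCall sections.  A minimal skeleton (the Feature name is an arbitrary label of your choosing) looks like:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<Configuration version="1.0">
  <Feature Name="ExampleFeature" Cmdlets="New-Mailbox">
   <ApiCall Name="OnComplete">
    if($succeeded) {
        # PowerShell to run after the cmdlet completes goes here
    }
   </ApiCall>
  </Feature>
</Configuration>
```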

     

    Deployment Considerations

    Here are some items to consider before going live with the feature:

    • The  Scripting cmdlet extension agent is disabled by default. You must manually enable it
    • The configuration file must be called ScriptingAgentConfig.xml
    • You are responsible for copying the ScriptingAgentConfig.xml to every server
    • You are responsible for ensuring that the correct version of ScriptingAgentConfig.xml is copied to every server.  Exchange does not replicate or manage this file at all.
    • Test changes to the file in a lab prior to deploying to production
    • Management workstations will also need a copy of the ScriptingAgentConfig.xml file.  Again you must manage the deployment of this file.
    • The Exchange installer expects the Scripting Agent configuration file to be present when installing a new Exchange server or the management tools.  Do not disable the Scripting Agent just to install a server/admin workstation; instead, manually create the necessary directory path and copy the ScriptingAgentConfig.xml to the \Bin\CmdletExtensionAgents folder.
    • By default, the Scripting Agent cmdlet extension agent runs after every other agent, with the exception of the Admin Audit Log agent
    • You will likely experience object not found errors when multiple domain controllers are present in an AD site.  This is because one part of the Scripting Agent script will fire against DC-1 and then the next part against DC-2.  This will result in a failure on DC-2 since the object has not yet replicated to it.  To deal with this we will pin a domain controller.  Note that this is not hardcoding the DC.  Hardcoding is never  a great idea since if that one DC fails then we will run into issues.  See below for more details on this and how to deal with it.
    • Since the file is XML based, Notepad will be able to edit it, but you will probably find it easier to edit using an XML aware tool such as XML Notepad.  There are other editors out there – the weapon of choice is yours!
    • Be aware of the XML sensitivity of the XML file.  Please see Michel’s post here for more details.
    • Carefully consider the cmdlets that you will fire actions on.  For example do not include just  the New-Mailbox cmdlet to take your specific actions.  What about mailbox enabling existing accounts?  They will be missed as the Enable-Mailbox cmdlet was not also added.  This would lead to inconsistent behaviours.

    Michel also has a great end to end solution for enabling archive mailboxes using the Scripting Agent – check that out too.

    Enabling And Deploying Scripting Agent

    Let’s look at an example of enabling the Scripting Agent and a sample configuration file that overcomes some of the common issues with writing to multiple domain controllers.

    In the below screen shot the Scripting Agent is still in its default configuration and is disabled:

    Check Status Of CmdletExtensionAgents

    The sample ScriptingAgentConfig.xml.sample file is present and is dated the 21st July 2009.

    Exchange Scripting Agent - Sample File

     

    Copy the ScriptingAgentConfig.xml to all Exchange servers, and administrator workstations.  Ensure that you have a process to keep the files in lock step else you will get varying results.

    Exchange Scripting Agent - Custom File Deployed
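    There is no built-in mechanism to replicate this file, so a simple copy loop can help keep servers in step.  The sketch below assumes the default installation path and that administrative shares are reachable:

```powershell
# Push the current ScriptingAgentConfig.xml to every non-Edge Exchange server
$path = "C$\Program Files\Microsoft\Exchange Server\V14\Bin\CmdletExtensionAgents"
Get-ExchangeServer | Where-Object {$_.ServerRole -notlike "*Edge*"} | ForEach-Object {
    Copy-Item .\ScriptingAgentConfig.xml "\\$($_.Name)\$path\ScriptingAgentConfig.xml"
}
```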

     

    We can then enable the Scripting Agent using the Enable-CmdletExtensionAgent cmdlet and check that the Scripting Agent’s status is now enabled.

    Enabling Exchange Scripting Agent

    For reference, the above command is:

    Enable-CmdletExtensionAgent "Scripting Agent"
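    To double-check afterwards, the agent’s status and priority can be listed with:

```powershell
Get-CmdletExtensionAgent "Scripting Agent" | Format-Table Name, Enabled, Priority -AutoSize
```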

     

    Now that the Scripting Agent is enabled and the same ScriptingAgentConfig.xml  copied to all machines, we can start to test it out!

     

    Testing The Scripting Agent

    Let’s test out the Scripting Agent.  To do this we will make a mailbox using the Exchange Management Shell and then another using the Exchange Management Console.  The custom configuration file that was deployed will enable Single Item Recovery for all newly created mailboxes.  Please see the end of this post for the contents of the XML.

    First up, creating a new mailbox (SA-Test-1) using Exchange Management Shell:

    Creating Test Mailbox To Verify Scripting Agent - Exchange Management Shell

    Secondly using the Exchange 2010 Management Console to create mailbox SA-Test-1.  For reference only the completion screen is shown here, so that we can see the cmdlet properties that were specified:

    Creating Test Mailbox To Verify Scripting Agent - Exchange Management Console

    If you look at the details of the cmdlet executed in the above screenshot, there is no mention of SingleItemRecovery.  The same is also true when examining the contents of the Exchange Management Console log file.

    Nothing Up Sleeves - Exchange Management Console Cmdlet Audit Log

    As you can see, when creating these mailboxes, there has been absolutely no reference to Single Item Recovery.  But let’s go and check the properties of these newly created mailboxes!

     Verifying That SingleItemRecovery Was Enabled For The Test Mailboxes

    You can see that both accounts have SingleItemRecoveryEnabled set to $True, which means the feature is enabled despite not specifying this in the New-Mailbox cmdlet.
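    For reference, the check shown in the screenshot boils down to something like the below (the mailbox names are from this lab):

```powershell
Get-Mailbox SA-Test-* | Format-Table Name, SingleItemRecoveryEnabled -AutoSize
```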

    Round of applause here!

    For comparison, a user account (User-1) that was created months ago does not have this feature enabled.

    Mailbox Created Prior To Enabling Scripting Agent - SingleItemRecovery Was NOT Enabled Automatically

     

    Dealing With Multiple Domain Controllers

    When multiple domain controllers are present in the same AD site, then some of the commands will fire against DC-1, some against DC-2 and so on.   You will get errors along the lines of:

    The cmdlet extensionagent with the index 5 has thrown an exception in OnComplete. The Exception is: Microsoft.exchange.provisioning.provisioningexception. Scriptingagent exception thrown while invoking scriptlet for OnComplete API. The operation couldn't be performed  because object  'objectname' couldn't be found 

    As an added bonus you will also get errors from  ms.exchange.provisionin.provisioninglayer.oncomplete.

     

    There are a few ways around this:

    • Uninstall all domain controllers but one.  Yes – I’m joking, but it would fix the Scripting Agent issue
    • Hardcode a single domain controller into all cmdlets.  This is bad on multiple levels.  If that one DC goes offline or is decommissioned, all of the Scripting Agent tasks will fail.  The other issue is that for larger enterprises it would not be efficient for every machine to target a single DC, thus ignoring AD site boundaries. 
    • Add a script section that determines the site and the domain controllers in the site, and then chooses one.  This is not ideal as Exchange already does the DC selection process, so the code is a tad superfluous.  Additionally, how do you then deal with your chosen DC being unavailable?  That is yet more code to trap and handle the error.
    • Work out what DC Exchange used automatically, then persist that for the duration of the session.  This allows for local site awareness, and should the DC fail or be decommissioned other DCs will be automatically chosen. 

    I typically use option four, and store the DC that was automatically selected in a variable that can then be used consistently throughout the script.  This would look like:

    $DC = [string]($readOnlyIConfigurable.originatingserver)

    Thus when running the Set-Mailbox cmdlet we will then specify that domain controller using the DomainController parameter:

    Set-Mailbox -Identity $Newmailbox -SingleItemRecoveryEnabled $True -DomainController "$DC.domain.com"

     

     

    Sample Scripting Agent Files

    Since seeing sample files makes it easier to understand this feature, and some folks will be able to use the examples below directly, there are a few included here.  As always, note that any and all sample code follows the terms of use as described here.

    My lab servers are a tad slow, so the Start-Sleep is in there for my purposes; you can remove it or decrease the timeout. 

     

    Example 1

    This example enables SingleItemRecovery and also sets custom default calendar permissions for the Enable-Mailbox and New-Mailbox cmdlets.

    Be sure to change domain.com to match your domain suffix.

     

    <?xml version="1.0" encoding="utf-8" ?>
    <Configuration version="1.0">
      <Feature Name="NewMailbox" Cmdlets="new-Mailbox">
       <ApiCall Name="OnComplete">
        if($succeeded)    {
                    Start-sleep 20
                    $DC = [string]($readOnlyIConfigurable.originatingserver)
                    $newmailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
                    Set-Mailbox -Identity $Newmailbox -SingleItemRecoveryEnabled $True -DomainController "$DC.domain.com"

                    $AccessRights = "Reviewer"
                    $mailbox = Get-Mailbox $newmailbox
                    $calendar = (($mailbox.SamAccountName)+ ":\" + (Get-MailboxFolderStatistics -Identity $mailbox.SamAccountName -FolderScope Calendar | Select-Object -First 1).Name)
                    Set-MailboxFolderPermission -User "Default" -AccessRights $AccessRights -Identity $calendar -DomainController "$DC.domain.com"
                   
        }
       </ApiCall>
      </Feature>
      <Feature Name="EnableMailbox" Cmdlets="enable-Mailbox">
       <ApiCall Name="OnComplete">
        if($succeeded)    {
                    Start-sleep 20
                    $DC = [string]($readOnlyIConfigurable.originatingserver)
                    $newmailbox = $provisioningHandler.UserSpecifiedParameters["Identity"]
                    Set-Mailbox -Identity "$newmailbox" -SingleItemRecoveryEnabled $True -DomainController "$DC.domain.com"

                    $AccessRights = "Reviewer"
                    $mailbox = Get-Mailbox -identity "$newmailbox"
                    $calendar = (($mailbox.SamAccountName)+ ":\" + (Get-MailboxFolderStatistics -Identity $mailbox.SamAccountName -FolderScope Calendar | Select-Object -First 1).Name)
                    Set-MailboxFolderPermission -User "Default" -AccessRights $AccessRights -Identity $calendar -DomainController "$DC.domain.com"
        }
       </ApiCall>
      </Feature>
    </Configuration>

     

    Example 2

    This example disables POP and IMAP access to newly created mailboxes

    Be sure to change domain.com to match your domain suffix.

    <?xml version="1.0" encoding="utf-8" ?>
    <Configuration version="1.0">
      <Feature Name="NewMailbox" Cmdlets="New-Mailbox">
       <ApiCall Name="OnComplete">
        if($succeeded)    {
                    Start-sleep 20
                    $DC = [string]($readOnlyIConfigurable.originatingserver)
                    $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
            Set-CASMailbox -Identity $NewMailbox -ImapEnabled $false -POPEnabled $false -DomainController "$DC.domain.com"
                   
        }
       </ApiCall>
      </Feature>
      <Feature Name="EnableMailbox" Cmdlets="Enable-Mailbox">
       <ApiCall Name="OnComplete">
        if($succeeded)    {
                    Start-sleep 20
                    $DC = [string]($readOnlyIConfigurable.originatingserver)
                    $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Identity"]
            Set-CASMailbox -Identity $NewMailbox -ImapEnabled $false -POPEnabled $false -DomainController "$DC.domain.com"
        }
       </ApiCall>
      </Feature>
    </Configuration>

     

    Example 3

    This example disables Outlook Anywhere for newly created mailboxes.

    Be sure to change domain.com to match your domain suffix.

     

    <?xml version="1.0" encoding="utf-8" ?>
    <Configuration version="1.0">
      <Feature Name="NewMailbox" Cmdlets="New-Mailbox">
       <ApiCall Name="OnComplete">
        if($succeeded)    {
                    Start-sleep 20
                    $DC = [string]($readOnlyIConfigurable.originatingserver)
                    $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Name"]
            Set-CASMailbox -Identity $NewMailbox -MAPIBlockOutlookRpcHttp $True -DomainController "$DC.domain.com"
                   
        }
       </ApiCall>
      </Feature>
      <Feature Name="EnableMailbox" Cmdlets="Enable-Mailbox">
       <ApiCall Name="OnComplete">
        if($succeeded)    {
                    Start-sleep 20
                    $DC = [string]($readOnlyIConfigurable.originatingserver)
                    $NewMailbox = $provisioningHandler.UserSpecifiedParameters["Identity"]
            Set-CASMailbox -Identity $NewMailbox -MAPIBlockOutlookRpcHttp $True -DomainController "$DC.domain.com"
        }
       </ApiCall>
      </Feature>
    </Configuration>

     

     

    Please feel free to leave suggestions in the comments for other great use cases for this feature.

     

    Cheers,

    Rhoderick

  • How To Request Certificate Without Using IIS or Exchange

    The blog post on how to integrate Office 365 with Windows 2012 R2 ADFS raised an interesting question from a reader (Hi Eric!) on how he should request a certificate for the ADFS instance, since there is no longer an IIS dependency.  This means that there is no longer an IIS console to generate a certificate request with.  What to do?

    You could generate a certificate request, complete it and then export it to a .pfx file on an Exchange server.  The exported certificate can then be copied over to the ADFS server[s] and then imported to the local computer certificate store to make it available for ADFS purposes.
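    As a rough sketch of that approach, the export and import could look like the below.  The thumbprint, path and password are placeholders for your environment, so check the exact parameters against your Exchange version:

    ```powershell
    # On the Exchange server: export the completed certificate, private key
    # included, to a password protected .pfx file.
    $password = Read-Host "PFX password" -AsSecureString
    $pfx = Export-ExchangeCertificate -Thumbprint "<thumbprint>" -BinaryEncoded -Password $password
    [System.IO.File]::WriteAllBytes("C:\Certs\adfs.pfx", $pfx.FileData)

    # On the ADFS server: import the .pfx into the local computer store.
    certutil.exe -f -p <password> -importpfx C:\Certs\adfs.pfx
    ```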

    What if you don’t want to, or can’t, do this?  If you want to do this on the ADFS server directly then certreq.exe can help us out here!  This also applies to other servers – the steps here are not just for ADFS.  However, the question raised means that more folks in the field are probably thinking about the same thing, so that forced me to polish off yet another one of those draft blog posts!

     

    This post is using a venerable utility that has been present in Windows for a long time.  In a future post we can then look at the new features in PowerShell for this task.

     

    Examining Certreq

    Certreq.exe is built into the underlying OS.  In the examples below we will use a Windows 2008 R2 SP1 server.  To see the options, execute “certreq.exe /?”.  This is shown in the image below, and the full command line parameters are at the bottom of this post for reference:

     Certreq Command Line Options

    The goal of this exercise is to generate a certificate that will contain multiple Subject Alternative Names (SAN) in addition to the subject name (common name) of the certificate.  If you don’t want a SAN certificate, also called a Unified Communications certificate by various vendors, then simply comment out that line in the process below. 

    We want to end up with a  certificate that has the following Subject name:

    • sts.tailspintoys.ca

     

    Along with the Subject Alternative Names of:

    • legacy.tailspintoys.ca
    • zorg.tailspintoys.ca

     

    Process Overview

    We can break this down into three basic steps:

    1. Generate certificate request
    2. Obtain response from issuing CA
    3. Import response to complete certificate

    The syntax is to use certreq.exe with the –New parameter, specifying the request file that we can take to the issuing CA.  Once the signed CA response has been obtained and copied back to the server, we can then import it using the –Accept parameter to complete the certificate request process.
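    Mapping those three steps onto commands, using the file names that appear later in this post:

    ```cmd
    REM 1. Generate the request from the policy.inf file
    certreq.exe -New policy.inf newcert.req

    REM 2. Submit newcert.req to the issuing CA and download the signed
    REM    response, e.g. certnew.cer

    REM 3. Complete the pending request with the CA response
    certreq.exe -Accept certnew.cer
    ```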

    Let’s go get crazy and request us some certificate!  *

     

    Generate certificate request

    Before we can generate the certificate request we must be absolutely sure that we know the exact names that we want to include.  Once the certificate has been issued by the CA, it cannot be changed.  Some 3rd party CAs will charge a nominal amount to re-issue with a different/additional name, some will charge for a net new certificate. It is always best to do it right – the first time around!

    Once we are locked on the names, then we can create the .inf file that we will feed to certreq.exe – there is a sample below for Windows 2008 and up.  Copy the content between the lines to the server, save it as policy.inf and then open it up in Notepad.

    ========================== Copy all below this line =============================

     

    [Version]

    Signature="$Windows NT$"

    [NewRequest]

    Subject = "CN=sts.tailspintoys.ca" ; Remove to use an empty Subject name.

    ;Because SSL/TLS does not require a Subject name when a SAN extension is included, the certificate Subject name can be empty.

    ;If you are using another protocol, verify the certificate requirements.

    ;EncipherOnly = FALSE ; Only for Windows Server 2003 and Windows XP. Remove for all other client operating system versions.

    Exportable = TRUE ; TRUE = Private key is exportable

    KeyLength = 2048 ; Valid key sizes: 1024, 2048, 4096, 8192, 16384

    KeySpec = 1 ; Key Exchange – Required for encryption

    KeyUsage = 0xA0 ; Digital Signature, Key Encipherment

    MachineKeySet = True

    ProviderName = "Microsoft RSA SChannel Cryptographic Provider"

    RequestType = PKCS10 ; or CMC.

    [EnhancedKeyUsageExtension]

    ; If you are using an enterprise CA the EnhancedKeyUsageExtension section can be omitted

    OID=1.3.6.1.5.5.7.3.1 ; Server Authentication

    OID=1.3.6.1.5.5.7.3.2 ; Client Authentication

    [Extensions]

    ; If your client operating system is Windows Server 2008, Windows Server 2008 R2, Windows Vista, or Windows 7

    ; SANs can be included in the Extensions section by using the following text format. Note 2.5.29.17 is the OID for a SAN extension.

    2.5.29.17 = "{text}"

    _continue_ = "dns=sts.tailspintoys.ca&"

    _continue_ = "dns=legacy.tailspintoys.ca&"

    _continue_ = "dns=zorg.tailspintoys.ca&"

     

    ; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP

    ; SANs can be included in the Extensions section only by adding Base64-encoded text containing the alternative names in ASN.1 format.

    ; Use the provided script MakeSanExt.vbs to generate a SAN extension in this format.

    ; RMILNE – the below line is remmed out else we get an error since there are duplicate sections for OID 2.5.29.17

    ; 2.5.29.17=MCaCEnd3dzAxLmZhYnJpa2FtLmNvbYIQd3d3LmZhYnJpa2FtLmNvbQ

    [RequestAttributes]

    ; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP

    ; and you are using a standalone CA, SANs can be included in the RequestAttributes

    ; section by using the following text format.

    ; SAN="dns=not.server2008r2.com&dns=stillnot.server2008r2.com&dns=meh.2003server.com"

    ; Multiple alternative names must be separated by an ampersand (&).

    CertificateTemplate = WebServer ; Modify for your environment by using the LDAP common name of the template.

    ;Required only for enterprise CAs.

     

    ========================== Copy all above this line =============================

    Please Note: In the above sample, the lines that you will typically modify for Windows 2008 and up are the Subject line and the dns= SAN entries.  Also note that in the SAN lines there are no spaces between the FQDNs, and the ampersand symbol is the separator.  Since we are using Windows 2008 R2 the SAN entries are placed in the [Extensions] section.  If we were running this on a Server 2003 box then we would use the [RequestAttributes] section or encode the SAN names using MakeSanExt.vbs.

    Save this file with a .inf extension.  In this post we will call it policy.inf.  The below shows the file in the C:\Certs folder.  Note the elevated cmd prompt!

    Content Of Policy.Inf File
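    For reference, this is what a trimmed-down policy.inf looks like once the Windows 2003-only sections and explanatory comments are removed – a minimal version for Windows 2008 and up, using the sample tailspintoys.ca names (adjust these for your environment):

    ```ini
    [Version]
    Signature="$Windows NT$"

    [NewRequest]
    Subject = "CN=sts.tailspintoys.ca"
    Exportable = TRUE
    KeyLength = 2048
    KeySpec = 1 ; Key Exchange
    KeyUsage = 0xA0 ; Digital Signature, Key Encipherment
    MachineKeySet = True
    ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
    RequestType = PKCS10

    [EnhancedKeyUsageExtension]
    OID=1.3.6.1.5.5.7.3.1 ; Server Authentication
    OID=1.3.6.1.5.5.7.3.2 ; Client Authentication

    [Extensions]
    2.5.29.17 = "{text}"
    _continue_ = "dns=sts.tailspintoys.ca&"
    _continue_ = "dns=legacy.tailspintoys.ca&"
    _continue_ = "dns=zorg.tailspintoys.ca&"

    [RequestAttributes]
    CertificateTemplate = WebServer ; Required only for enterprise CAs.
    ```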

    Now that we have the required .inf file in place we can then create the certificate request:

    Certreq.exe -New policy.inf newcert.req

     

    This will generate the certificate request, and in the folder there is now a file called newcert.req that we can provide to the issuing CA.

    Certreq.exe Generating New Certificate Request

    The newcert.req contains the public key of the certificate we just created – the private key does not leave the server.  You can see this certificate in the certificate MMC under Pending Enrolment Requests.
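    If you want to confirm the contents of the request before handing it to the CA, certutil.exe (also built into Windows) can decode it:

    ```cmd
    REM Decode the PKCS10 request to verify the Subject and SAN entries
    certutil.exe -dump newcert.req
    ```

    Check that the Subject and the Subject Alternative Name extension list the expected FQDNs before submitting the request.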

    Pending Certificate Enrolment Request In Certificates MMC

    If you look at the properties of the certificate, in the Certificate Enrolment Requests folder, note that the private key is present, the certificate is not trusted and that it does not chain to an issuing CA.

    Properties Of Pending Certificate RequestProperties Of Pending Certificate Request

    And if we review the Details tab, the SAN entries are filled in:

    Properties Of Pending Certificate Request

     

    Obtain response from issuing CA

    In this step the newcert.req file was provided to the public CA.  For external facing ADFS certificates you will need to follow the enrolment process with your chosen CA.  The choice is all yours!

    Once the request process was followed, the response file was copied into the C:\Certs folder on the same server.

     

    Import Response To Complete Certificate

    Once we have obtained the signed response from the issuing CA, copy it to the server.  Then we can mate the pending certificate request with the signed CA response:

    certreq.exe -accept certnew.cer

    Certreq.exe Importing Certificate Response

     

    Please ensure that all of the documentation from your CA provider has been followed.  There might be steps to remove built-in certificates from Windows, modify their purpose, or add brand new intermediate CA certificates.  This varies by vendor, where the certificate was issued from, and over time.  Please follow their instructions for the most up-to-date information!

     

    Potential Issues

     

    Update Intermediate CA Certificates

    If the necessary CA certificates have not been updated as per the CA documentation you may receive the below:

    Certificate Request Processor: A certificate chain could not be built to a trusted root authority. 0x800b010a (-2146762486)
    Please follow the documentation provided to you by the CA to import the necessary certificates, and then re-attempt the import.

    Use The Correct SAN Section Depending On The OS

    If you use the default .inf file then chances are you will experience the lovely error below, and then pull some hair out wondering where the issue lies.

    The entry already exists.  0x800706e0 (WIN32: 1760)  <inf file name> [Extensions] 2.5.29.17 =

    The sample .inf file includes multiple SAN sections and just like Highlander – there can be only one!  In the example provided in this post, note that all of the lines in this section are remmed out.  The issue is with the 2.5.29.17 line, as it is not remmed out in the default sample.  This then conflicts with the previous 2.5.29.17 section.

    ; If your client operating system is Windows Server 2003, Windows Server 2003 R2, or Windows XP

    ; SANs can be included in the Extensions section only by adding Base64-encoded text containing the alternative names in ASN.1 format.

    ; Use the provided script MakeSanExt.vbs to generate a SAN extension in this format.

    ; 2.5.29.17=MCaCEnd3dzAxLmZhYnJpa2FtLmNvbYIQd3d3LmZhYnJpa2FtLmNvbQ

    Note the semi-colon at the start of the 2.5.29.17 line above so that we do not conflict with the initial 2.5.29.17 section.

     

    If you are trying to generate a SAN certificate on Windows 2008 R2, but the SAN fields are disappearing and only the common name entry remains when you provide the certificate request to the CA vendor then please check that you are specifying the SAN names in the right section.

    Windows 2003 servers – place SAN names in the [RequestAttributes] section.  The sample line is commented out above as we are using Server 2008 R2.  Un-comment it, place your SAN names here, and then comment out the 2.5.29.17 section.  In the sample I added the following names to convey that 2008 does not use this section.

    SAN="dns=not.server2008r2.com&dns=stillnot.server2008r2.com&dns=meh.2003server.com"

    Edit this to reflect correct values.  For example:

    SAN="dns=sts.tailspintoys.ca&dns=legacy.tailspintoys.ca&dns=zorg.tailspintoys.ca"

     

    Windows 2008 / 2008 R2 servers – place the SAN names in the [Extensions] section using the 2.5.29.17 field.  Do not place them in the [RequestAttributes] section.  Else quite simply this will no workey workey!

     

    Certreq INF File Reference

    Please refer to the documentation on TechNet.

     

    Certreq Command Line Options For Reference

    The below certreq.exe options are from a Windows 2008 R2 SP1 server:


    Usage:
      CertReq -?
      CertReq [-v] -?
      CertReq [-Command] -?

      CertReq [-Submit] [Options] [RequestFileIn [CertFileOut [CertChainFileOut [FullResponseFileOut]]]]

        Submit a request to a Certification Authority.

      Options:
        -attrib AttributeString
        -binary
        -PolicyServer PolicyServer
        -config ConfigString
        -Anonymous
        -Kerberos
        -ClientCertificate ClientCertId
        -UserName UserName
        -p Password
        -crl
        -rpc
        -AdminForceMachine
        -RenewOnBehalfOf

      CertReq -Retrieve [Options] RequestId [CertFileOut [CertChainFileOut [FullResponseFileOut]]]
        Retrieve a response to a previous request from a Certification Authority.

      Options:
        -binary
        -PolicyServer PolicyServer
        -config ConfigString
        -Anonymous
        -Kerberos
        -ClientCertificate ClientCertId
        -UserName UserName
        -p Password
        -crl
        -rpc
        -AdminForceMachine

      CertReq -New [Options] [PolicyFileIn [RequestFileOut]]
        Create a new request as directed by PolicyFileIn

      Options:
        -attrib AttributeString
        -binary
        -cert CertId
        -PolicyServer PolicyServer
        -config ConfigString
        -Anonymous
        -Kerberos
        -ClientCertificate ClientCertId
        -UserName UserName
        -p Password
        -user
        -machine
        -xchg ExchangeCertFile

      CertReq -Accept [Options] [CertChainFileIn | FullResponseFileIn | CertFileIn]
        Accept and install a response to a previous new request.

      Options:
        -user
        -machine

      CertReq -Policy [Options] [RequestFileIn [PolicyFileIn [RequestFileOut [PKCS10FileOut]]]]
        Construct a cross certification or qualified subordination request
        from an existing CA certificate or from an existing request.

      Options:
        -attrib AttributeString
        -binary
        -cert CertId
        -PolicyServer PolicyServer
        -Anonymous
        -Kerberos
        -ClientCertificate ClientCertId
        -UserName UserName
        -p Password
        -noEKU
        -AlternateSignatureAlgorithm
        -HashAlgorithm HashAlgorithm

      CertReq -Sign [Options] [RequestFileIn [RequestFileOut]]
        Sign a certificate request with an enrollment agent or qualified
        subordination signing certificate.

      Options:
        -binary
        -cert CertId
        -PolicyServer PolicyServer
        -Anonymous
        -Kerberos
        -ClientCertificate ClientCertId
        -UserName UserName
        -p Password
        -crl
        -noEKU
        -HashAlgorithm HashAlgorithm

      CertReq -Enroll [Options] TemplateName
      CertReq -Enroll -cert CertId [Options] Renew [ReuseKeys]
        Enroll for or renew a certificate.

      Options:
        -PolicyServer PolicyServer
        -user
        -machine

     

     

    Cheers,

    Rhoderick

     

    * – I was led to believe that this was correct US grammar :)

  • How To Disable Remote Desktop Printer Mapping

    After doing Exchange Risk Assessment (ExRAP) and Exchange Risk Assessment As A Service for almost four years, one thing continues to irritate my OCD personality!  When I look at the event logs on an Exchange server, the logs should be a sea of blue.  That is, there should be no errors, as an error indicates something is not quite right and should be addressed.

    Opening up the system event log on numerous customers’ servers, I’m pretty much guaranteed to see errors related to mapping printer drivers in the Terminal Services/Remote Desktop session.  As you would expect, this is fluff that I just do not want to see.  How to make this disappear then? 

    Since our venerable friend, Windows Server 2003, is going to exit extended support in a little over a year, I’ve based this post on a Server 2012 R2 box, but the principles still hold true for our trusty friend! 

    Let’s look at manually getting rid of the errors, and then using Group Policy to effectively manage multiple servers.  First up, let’s see the error:

    Opening up the system event log, we are greeted with the below errors:

    Client Printer Redirection Error 1111 on Server 2012 R2

    This is EventID 1111 on a Windows Server 2012 R2 server.  In this case the Exchange server does not have a printer driver for OneNote installed (as expected), and is unable to create a printer mapping to the printer that is installed on the workstation that just RDP’ed to the Exchange server. 

    Client Printer Redirection Error 1111 on Server 2012 R2

     

    Manually Controlling Printer Mapping On Server 2012 R2

    If you look for the Remote Desktop Session Host Configuration tool in Windows Server 2012 R2, you will not find it present.  Either set the desired configuration using the registry directly or the Local Group Policy Object which is located here:

    Computer Configuration –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection

     

    The below is a screenshot of the Local Group Policy Object where we can configure printer redirection:

    Using GPO To Block Client Printer Redirection

    Select Enabled to activate the policy setting and click OK.

     

    This will set the following registry value, fDisableCpm, which we can also set manually. 

    HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services

    fDisableCpm   REG_DWORD   1

    To automate this we can use reg.exe – note that the command is a single line and may wrap.

    REG.exe ADD "HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services" /V fDisableCpm   /t REG_DWORD /D 1 /F

    To check the status of the Registry value we can run:

    REG.exe  QUERY "HKLM\Software\Policies\Microsoft\Windows NT\Terminal Services" /V fDisableCpm  
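    If you prefer PowerShell over reg.exe, the same value can be set and verified with the built-in registry provider – a sketch that mirrors the two reg.exe commands above:

    ```powershell
    # Create the key if needed, set fDisableCpm to 1, then read it back.
    $path = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
    if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
    New-ItemProperty -Path $path -Name fDisableCpm -PropertyType DWord -Value 1 -Force | Out-Null
    (Get-ItemProperty -Path $path).fDisableCpm
    ```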

     

    That’s great for a couple of test boxes, but we do not want to do that on 100,000 servers in the enterprise and we will typically look to Group Policy in such cases.

     

    Controlling Printer Mapping On Server 2012 R2 Using GPO

    In a typical enterprise environment we do not want to change lots of individual servers when it is easy to leverage GPO for this purpose.  Again the configuration is stored in the same location as the Local GPO discussed above.  Open the Group Policy Management Console, create a new GPO or edit an existing one – the choice is yours!  Navigate to:


    Computer Configuration –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection

    Select Enabled to activate the policy setting and click OK.

    Using GPO To Block Client Printer Redirection

     

    Once the GPO has refreshed on the Exchange server, the pesky printer mapping errors should be banished!

     

    Bonus Tip

    There are many settings contained in GPO that can be applied to an Exchange server to tune the Windows installation that it sits upon.  One other area that is typically identified in an ExRAP/ExRaaS is that of event log size and retention.  We can set the maximum size of the event logs and how data is retained under the following GPO location: 


    Computer Configuration –> Policies –> Windows Settings –> Security Settings –> Event Log 

     

    Group Policy Object (GPO) To Control Event Log Configuration

     

    Again we can use local policies, command line tasks and PowerShell to configure this but GPO will typically be the best bet for larger customers.
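    As a quick sketch of the PowerShell route for a one-off server, the classic event logs can be sized with Limit-EventLog – the 100 MB figure below is just an example value, not a recommendation:

    ```powershell
    # Cap the classic Application log at 100 MB and overwrite events
    # as needed once the log is full.
    Limit-EventLog -LogName Application -MaximumSize 100MB -OverflowAction OverwriteAsNeeded

    # Review the resulting configuration for all classic logs.
    Get-EventLog -List
    ```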

     

    Cheers,

    Rhoderick