• Collecting Mailbox Counts per Database Using LDAP

    Recently I have been gathering a lot of mailbox data for a customer running Exchange 2010 that is in the process of upgrading from Exchange 2007.  One of the more frequent requests has been for mailbox numbers, often broken out by database.  It’s key to know the current state of your environment, especially during an upgrade where mailboxes seem to never stop moving.  For my customer this is how we track which databases are full and keep tabs on the current mailbox load balancing procedure.


    Typically you can just run a command similar to ‘Get-Mailbox –ResultSize Unlimited | group database’ to quickly see how the databases look.  However, when you have an environment with tens/hundreds of thousands of mailboxes, the Get-Mailbox command will take a long time to finish.  Sometimes it’s just not fast enough, and I saw this as a great opportunity to see just how fast I could gather a count of mailboxes per database.  After testing a few different methods (including –asjob, start-job, etc) I found the fastest method was LDAP interrogation.


    The idea is pretty straightforward, as every AD user with an Exchange mailbox has a homeMDB attribute pointing to the mailbox’s database.  First, let’s pull back a list of databases and retrieve the distinguishedName of each database, then perform an LDAP query for that DN in the homeMDB attribute of the user accounts.  This is a simple example but could be adapted to different environments.  For example, in my lab I tag ‘SLA-1’ into the ‘extensionAttribute1’ attribute for all my Tier 1 SLA users.  I could modify this script to include ‘extensionAttribute1=SLA-1’ in my LDAP filter to make sure these users are all on DB1, which has more database copies than my other databases.
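
    To make the approach concrete, here is a rough sketch of that homeMDB interrogation.  The full script is in the repository; treat this as illustrative only (it assumes an Exchange Management Shell session for Get-MailboxDatabase, and the output shape is my own choice):

    ```powershell
    # Sketch: count mailboxes per database by querying AD directly via LDAP.
    $Results = foreach ($DB in Get-MailboxDatabase) {
        $Searcher = New-Object System.DirectoryServices.DirectorySearcher
        # Match users whose homeMDB points at this database's distinguishedName
        $Searcher.Filter = "(&(objectClass=user)(homeMDB=$($DB.DistinguishedName)))"
        $Searcher.PageSize = 1000                          # page past the 1000-object limit
        $Searcher.PropertiesToLoad.Add("cn") | Out-Null    # pull back minimal data per hit
        New-Object psobject -Property @{
            DatabaseName = $DB.Name
            UserCount    = $Searcher.FindAll().Count
        }
    }
    $Results | Sort-Object UserCount -Descending
    ```

    Restricting PropertiesToLoad is part of why this is fast: we only ever transfer a trivial amount of data per user instead of full mailbox objects.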

    Disclaimer: By no means do I think this is ‘better’ than Get-Mailbox or other native commands.  In some cases, specifically when pulling a small amount of data from a large set of users, this method is simply faster.


     

    Script Execution

    Enough talk, let’s see the script run in a lab environment.  Between each test, I reboot the DC to flush any cached LDAP results.  Here are the details of my virtualized lab:
    • One Domain Controller with dynamic memory up to 3GB, 2 virtual CPUs.
    • One Exchange 2010 SP3 (RU3) server with 4GB RAM, 2 virtual CPUs.
    • All disks share the same set of RAID 0 disks.
    • ~20,000 mailboxes
    • 20,000 mail-enabled users
    • 20 mailbox databases

    As a baseline, let’s execute a typical ‘Get-Mailbox’ cmdlet.
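
    For reference, the baseline can be timed like this (my own wrapping with Measure-Command; run from the Exchange Management Shell):

    ```powershell
    # Baseline: group all mailboxes by database and time the whole pipeline
    Measure-Command {
        Get-Mailbox -ResultSize Unlimited | Group-Object Database
    } | Select-Object TotalMinutes
    ```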

    [Screenshot: DatabaseCountGetMBX1]
    Here we see the cmdlet finish in 3.9 minutes – not bad for 20,000 mailboxes.  But we can go faster!  Here’s the script in action:

    [Screenshot: DatabaseCountLDAP1]

    [Screenshot: DatabaseCountLDAP2]
    Once it’s complete, we see that it took 16.9 seconds to run.  That’s nearly 14x faster!

    The script can be found on the Script Repository here.

    Stay tuned for more PowerShell/automation scripts and tips…

  • Caching Objects in PowerShell – Part 2

    In my previous post about this subject, I demonstrated how to cache objects using an XML file. Let’s take a look at how to load, manipulate and save the data again.

    Since the Export-Clixml cmdlet retains the data type in the XML file, we can perform actions on the objects in memory that are type-specific without having to cast the objects.  This means if we have an integer value, we can increment the count by calling the ++ operator.  Likewise, if we have a DateTime object, we can call type-specific methods such as .ToUniversalTime().  I think this is very cool and I wish I’d discovered this sooner. 

    Loading the XML back into memory couldn’t be easier:

    $DBCache = Import-Clixml C:\DatabaseCache.xml

    Using the data created in part 1, let’s assume we just created a new mailbox on Database-1.  With the below code, we can easily take the objects in memory and increment the count to show the new updated mailbox number.  Another example would be adding time to the Timestamp attribute before we save.

    ($DBCache | where {$_.DatabaseName -eq "Database-1"}).UserCount++

    foreach ($DBItem in $DBCache) { $DBItem.Timestamp = $DBItem.Timestamp.AddHours(3) }

    Once the objects are modified, simply overwrite the XML file with the updated data:

    $DBCache | Export-Clixml "C:\DatabaseCache.xml"

    I realize this is a very simple example, but the principle applies to any class and any of its methods.

  • Log-Event

    Recently I had a need to write to the event logs using PowerShell.  I knew there was a built-in cmdlet for this action, so I decided to test it out.  After some time using the Write-EventLog cmdlet, I kept getting errors that turned out to stem from constraints of the event log itself.  When using the cmdlet, the following apply:

    - You must specify a source and a log (Application, System, etc.) to write to.  An example of a source would be “Windows Error Reporting” in the Application log.

    - A source can only be associated with one event log.  If you attempt to write an event using the source “Windows Error Reporting” to the System log, you will get the following error: The source 'Windows Error Reporting' is not registered in log 'System'. (It is registered in log 'Application'.)

    Since the Write-EventLog cmdlet makes you specify the log to write to with the mandatory –LogName parameter, this can be problematic because you have to know which log the source is associated with.  From a programmatic perspective, you can’t guarantee all computers everywhere have the source “AutomationLogs” registered, and you also can’t guarantee which event log this is registered to.

    I developed the following function to assist me in writing to the event logs that I plan to implement in a lot of my scripts moving forward.  I wanted to be able to call a command and specify what I want to write and not have to worry about sources, log locations, registering new event logs if needed, and so on.

    The function first checks to see if the source is already registered with an existing log.  If the source exists, the function leverages the static method LogNameFromSourceName from the System.Diagnostics.EventLog class to determine which log to write to.  If the return from this function is null or an empty string, we know the source does not exist and will have to register a new event source to the desired log (as passed in by the caller). 

    This function requires administrative access in an elevated command prompt, because it needs to check the Security log, and this access is also required when registering new sources.

    According to the MSDN documentation regarding creation of event sources – after registering a new event source, it should not be immediately used.  I am not sure exactly how long we should wait but I assumed 5 seconds is ample time for whatever replication/post-processing tasks need to happen on the back end before we use this.

    I have tested this on Server 2012 and 2012 R2, but any machine with PowerShell 3 or later should be able to leverage the function.  The function is below and can also be found on the TechNet Script Gallery here.

     

    function Log-Event {
        param(
            [Parameter(Mandatory=$true)]
            [ValidateNotNullorEmpty()]
            [String]$Message,

            [ValidateNotNullorEmpty()]
            [String]$Source = "ScriptOutput",

            [int32]$EventID = 1,

            $LogName = "Application",

            [System.Diagnostics.EventLogEntryType]$Type = "Information"
        )
        try {
            $SourceLogName = [System.Diagnostics.EventLog]::LogNameFromSourceName($Source, ".")  # "." = local machine
            if (![string]::IsNullOrEmpty($SourceLogName)) {
                Write-EventLog -LogName $SourceLogName -Source $Source -Message $Message -EntryType $Type -EventId $EventID
            }
            else {
                New-EventLog -Source $Source -LogName $LogName
                #After registering a new event source, do not use it right away, according to MSDN:
                #http://msdn.microsoft.com/en-us/library/2awhba7a(v=vs.110).aspx
                Start-Sleep -Seconds 5
                Write-EventLog -LogName $LogName -Source $Source -Message $Message -EntryType $Type -EventId $EventID
            }
        }
        catch {
            Write-Warning "Unexpected error occurred while attempting to write to log."
            Write-Warning "Caller command: $((Get-PSCallStack)[1].Command)"
            Write-Warning "Error: $($_.exception.message)"
        }
    }
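
    Calling it is then a one-liner.  For example (the source name “AutomationLogs” here is just an illustration):

    ```powershell
    # First call registers "AutomationLogs" in the Application log if needed;
    # subsequent calls write to whichever log that source is registered in.
    Log-Event -Message "Mailbox move batch completed." -Source "AutomationLogs" -EventID 100
    ```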

  • Caching Objects in PowerShell – Part 1

    In the first part of this two-part series, I will discuss caching objects to disk in PowerShell and reading the cache back into memory.  I recently came up with this solution at a customer site where I was running a script as a scheduled task and needed some data to persist after the task was complete.  I use this to track different errors and how many times I’ve encountered those errors, as well as caching data from complex (read: expensive) LDAP queries.

    So let’s take a look at some LDAP data we could cache.  Let’s say I have an array of objects in memory representing Exchange databases with current number of mailboxes (I like to query LDAP for this data - see this post).  The objects could be anything – but the concept is straightforward as we will cache the data to disk using XML.  For this example, let’s create some fake data with custom PSObjects.  I chose to use different types (string, int, datetime) on purpose here to show how this really shines compared to using something like Export-Csv/Import-Csv, which treats everything as a string unless you do your own type casting or conversion.

    $DBs = @()
    $DBs += New-Object psobject -Property @{DatabaseName="Database-1"; UserCount = 3800; Timestamp=(Get-Date)}
    $DBs += New-Object psobject -Property @{DatabaseName="Database-2"; UserCount = 2600; Timestamp=(Get-Date)}

    Once the array of custom objects is created, we can simply export it using the built in Export-Clixml cmdlet.

    $DBs | Export-Clixml .\DatabaseCache.xml

    [Screenshot: CreateFakeData]

    This will save all of our data to an XML file and retain associated type information, etc.  Here is what the raw XML data looks like.

    [Screenshot: DatabaseCacheXML]

    To use this data again, simply use the Import-Clixml cmdlet and you will have the exact same array of objects you previously exported.  Stay tuned as my next post will show some examples of how to update the cache.
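
    As a quick sanity check (assuming the cache file created above), the re-imported objects keep their original types:

    ```powershell
    $Cached = Import-Clixml .\DatabaseCache.xml
    $Cached[0].UserCount.GetType().Name   # Int32 - not a string, so ++ works directly
    $Cached[0].Timestamp.GetType().Name   # DateTime - so methods like .AddHours() work
    ```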

  • Estimating/Calculating Execution Time

    Over the years, I’ve noticed that I develop a general routine depending on the needs of my current customer.  When I get to work in the morning I generally check a few items that I know can develop into hot button issues.  As a general rule, if I am going to repeat a task more than a few times, I write a quick script in PowerShell if possible.  Lately I’ve had to search for and modify several tens of thousands of objects in AD.  The majority of my scripts display a progress bar.  The progress bar is nice, but I wanted to improve the progress bar by adding an active countdown timer.

    When I set out to write this, I wanted a function I can just drop into a script if I feel it’s necessary, with minimal code changes.  The function I came up with makes the following assumptions:

    • Each processing task takes relatively the same amount of time to execute
    • You must know how many items you are processing.  I’m not sure mathematically how to solve this without knowing both how many items we have looped through and how many more we have left to process
    • If one or several of the processing tasks takes significantly longer than the other processing tasks (such as a network timeout), the estimated time remaining will be skewed

    After adding the function I created to a script, all I need to add is this code to my foreach loop (where $Cycles is an integer representing the total number of objects to process):

    $i++
    ShowExecutionTime -TotalObjectsToProcess $Cycles
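
    The function itself lives in the repository script.  A minimal sketch of the idea might look like the following - note that the $script:StartTime variable and the reliance on the caller’s $i counter are my illustration, not necessarily how the repository version is written:

    ```powershell
    function ShowExecutionTime {
        param([int]$TotalObjectsToProcess)
        # Record the start time on the first call
        if (-not $script:StartTime) { $script:StartTime = Get-Date }
        $Elapsed   = (Get-Date) - $script:StartTime
        # Average seconds per processed item * items remaining = estimated time left
        $PerItem   = $Elapsed.TotalSeconds / $i
        $Remaining = [TimeSpan]::FromSeconds($PerItem * ($TotalObjectsToProcess - $i))
        Write-Progress -Activity "Processing objects" `
            -Status ("Estimated time remaining: {0:hh\:mm\:ss}" -f $Remaining) `
            -PercentComplete (($i / $TotalObjectsToProcess) * 100)
    }
    ```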

    Here is the script in action:

    [Screenshot: CalculateRemainingExecutionTimeRunning]

    The script in the repository also has a –ShowCalculatedTimes parameter, which records each completion time the function estimates and then displays how many times each specific completion time was guessed.  Below are the results of the script with –ShowCalculatedTimes enabled:

    [Screenshot: CalculateRemainingExecutionTimeResults]

    The sample script containing the function can be found here.