• Log-Event

    Recently I had a need to write to the event logs using PowerShell.  I knew there was a built-in cmdlet for this, so I decided to test it out.  After some time using the Write-EventLog cmdlet, I kept running into errors that turned out to stem from constraints of the event log itself.  When using the cmdlet, the following apply:

    - You must specify a source and a log (Application, System, etc.) to write to.  An example of a source would be “Windows Error Reporting” in the Application log.

    - A source can only be associated with one event log.  If you attempt to write an event using the source “Windows Error Reporting” to the System log, you will get the following error: The source 'Windows Error Reporting' is not registered in log 'System'. (It is registered in log 'Application'.)

    Since the Write-EventLog cmdlet makes you specify the log to write to with the mandatory –LogName parameter, this can be problematic because you have to know which log the source is associated with.  From a programmatic perspective, you can’t guarantee all computers everywhere have the source “AutomationLogs” registered, and you also can’t guarantee which event log this is registered to.

    I developed the following function to assist me in writing to the event logs that I plan to implement in a lot of my scripts moving forward.  I wanted to be able to call a command and specify what I want to write and not have to worry about sources, log locations, registering new event logs if needed, and so on.

    The function first checks to see if the source is already registered with an existing log.  To do this, it leverages the static method LogNameFromSourceName from the System.Diagnostics.EventLog class, which returns the log the source is associated with.  If the return value is null or an empty string, we know the source does not exist, and the function registers a new event source to the desired log (as passed in by the caller).
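    As a quick illustration, the check itself is a one-liner (the "." argument targets the local computer; the output depends on what is registered on your machine):

    ```powershell
    # Returns the log a source is registered to, or an empty string if unregistered
    [System.Diagnostics.EventLog]::LogNameFromSourceName("Windows Error Reporting", ".")
    # e.g. "Application" on a machine where this source exists
    ```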

    This function requires administrative access from an elevated prompt, both because it needs to check the Security log and because registering new sources requires that access.

    According to the MSDN documentation on creating event sources, a newly registered event source should not be used immediately.  The documentation doesn’t say exactly how long to wait, but I assumed 5 seconds is ample time for whatever replication/post-processing tasks need to happen on the back end before we write the first event.

    I have tested this on Server 2012 and 2012 R2, but any machine with PowerShell 3 or later should be able to leverage the function.  The function is below and can also be found on the TechNet Script Gallery here.

     

    function Log-Event {
        param(
            [Parameter(Mandatory=$true)]
            [ValidateNotNullOrEmpty()]
            [String]$Message,

            [ValidateNotNullOrEmpty()]
            [String]$Source = "ScriptOutput",

            [Int32]$EventID = 1,

            [String]$LogName = "Application",

            [System.Diagnostics.EventLogEntryType]$Type = "Information"
        )
        try {
            $SourceLogName = [System.Diagnostics.EventLog]::LogNameFromSourceName($Source, ".")
            if (![string]::IsNullOrEmpty($SourceLogName)) {
                Write-EventLog -LogName $SourceLogName -Source $Source -Message $Message -EntryType $Type -EventId $EventID
            }
            else {
                New-EventLog -Source $Source -LogName $LogName
            # After registering a new event source, do not use it right away, according to MSDN:
            # http://msdn.microsoft.com/en-us/library/2awhba7a(v=vs.110).aspx
            Start-Sleep -Seconds 5
                Write-EventLog -LogName $LogName -Source $Source -Message $Message -EntryType $Type -EventId $EventID
            }
        }
        catch {
        Write-Warning "Unexpected error occurred while attempting to write to log."
            Write-Warning "Caller command: $((Get-PSCallStack)[1].Command)"
            Write-Warning "Error: $($_.exception.message)"
        }
    }
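    With the function loaded, a call can be as simple as the following (the messages, source name, and event ID below are just illustrative examples):

    ```powershell
    # Uses the defaults: source "ScriptOutput", Application log, event ID 1, Information
    Log-Event -Message "Backup script completed successfully."

    # Or specify everything explicitly
    Log-Event -Message "Disk space below threshold." -Source "AutomationLogs" -EventID 100 -Type Warning
    ```

    If the source is unregistered, the first call pays the 5-second registration delay; every call after that writes immediately.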

  • Shutting Down an Exchange Site

    The argument could be made that this script is not something that has a practical use.  One of my customers had a requirement to quickly (and gracefully) shut down an entire site of Exchange servers.  I assume the scenario would be due to some kind of natural disaster or cooling issue in a datacenter.  I am posting this because one of my customers had an actual need for it, and some of the techniques in this script could be applied to other scripts.  I will leave the practicality of this script up to the reader.

    The script identifies servers by site, and optionally by server name as well.  It leverages WinRM to stop services remotely on multiple servers, so WinRM must be enabled and the appropriate firewall rules opened.  The advantage is that this can be executed against any number of servers and should take roughly the same amount of time to complete regardless of how many there are.

    After the script identifies which servers need to be shut down, it dismounts any mounted databases, stops the Cluster service on DAG nodes, and stops the MSExchangeADTopology service (along with its dependent services).  Once all services are stopped, the script initiates a shutdown of each server.
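    A minimal sketch of the service-stop technique, assuming WinRM is already enabled (the server names are placeholders, and a real script would dismount databases and add error handling first):

    ```powershell
    $Servers = "EXCH-01", "EXCH-02", "EXCH-03"   # hypothetical server names

    Invoke-Command -ComputerName $Servers -ScriptBlock {
        # Stopping MSExchangeADTopology with -Force also stops its dependent services
        Stop-Service -Name MSExchangeADTopology -Force
        # Cluster service only exists on DAG members, so suppress the error elsewhere
        Stop-Service -Name ClusSvc -Force -ErrorAction SilentlyContinue
    }
    ```

    Because Invoke-Command fans the work out to all servers at once, total run time stays roughly constant as the server count grows.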

    Here is the script in action:

    [Screenshot: the Stop-ExchangeSite script running against a site]

    As a reminder, all servers will need to be manually powered back on; in addition, all databases that were dismounted will need to be mounted after the servers are brought back online.
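    Once the servers are powered back on, remounting can be done with a one-liner along these lines (a sketch only; a careful recovery would confirm which databases the script dismounted rather than mounting everything blindly):

    ```powershell
    # Mount every database that is currently dismounted
    Get-MailboxDatabase -Status | Where-Object { -not $_.Mounted } | Mount-Database
    ```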

    *Edit* - The script can be found here.

  • Caching Objects in PowerShell – Part 2

    In my previous post about this subject, I demonstrated how to cache objects using an XML file. Let’s take a look at how to load, manipulate and save the data again.

    Since the Export-Clixml cmdlet retains the data type in the XML file, we can perform actions on the objects in memory that are type-specific without having to cast the objects.  This means if we have an integer value, we can increment the count by calling the ++ operator.  Likewise, if we have a DateTime object, we can call type-specific methods such as .ToUniversalTime().  I think this is very cool and I wish I’d discovered this sooner. 

    Loading the XML back into memory couldn’t be easier:

    $DBCache = Import-Clixml C:\DatabaseCache.xml

    Using the data created in part 1, let’s assume we just created a new mailbox on Database-1.  With the code below, we can take the objects in memory and increment the count to reflect the new mailbox total.  Another example would be adding time to the Timestamp attribute before we save.

    ($DBCache | where {$_.DatabaseName –eq “Database-1”}).UserCount++

    foreach ($DBItem in $DBCache) { $DBItem.Timestamp = $DBItem.Timestamp.AddHours(3) }

    Once the objects are modified, simply overwrite the XML file with the updated data:

    $DBCache | Export-Clixml “C:\DatabaseCache.xml”

    I realize this is a very simple example, but the principle applies to any class and any of that class’s methods.

  • Caching Objects in PowerShell – Part 1

    In the first part of this two-part series, I will discuss caching objects to disk in PowerShell and reading the cache back into memory.  I recently came up with this solution at a customer site where I was running a script as a scheduled task and needed some data to persist after the task completed.  I use this to track different errors and how many times I’ve encountered them, as well as to cache data from complex (read: expensive) LDAP queries.

    So let’s take a look at something we could cache.  Let’s say I have an array of objects in memory representing Exchange databases with their current number of mailboxes (I like to query LDAP for this data - see this post).  The objects could be anything – the concept is the same, as we will cache the data to disk using XML.  For this example, let’s create some fake data with custom PSObjects.  I chose different types (string, int, datetime) on purpose to show how this really shines compared to something like Export-Csv/Import-Csv, which treat every value as a string unless you do your own type casting or conversion.

    $DBs = @()
    $DBs += New-Object psobject -Property @{DatabaseName="Database-1"; UserCount = 3800; Timestamp=(Get-Date)}
    $DBs += New-Object psobject -Property @{DatabaseName="Database-2"; UserCount = 2600; Timestamp=(Get-Date)}

    Once the array of custom objects is created, we can simply export it using the built in Export-Clixml cmdlet.

    $DBs | Export-Clixml .\DatabaseCache.xml

    [Screenshot: creating the fake data]

    This will save all of our data to an XML file and retain the associated type information.  Here is what the raw XML data looks like.

    [Screenshot: the raw XML output]

    To use this data again, simply use the Import-Clixml cmdlet and you will have the exact same array of objects you previously exported.  Stay tuned as my next post will show some examples of how to update the cache.
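    To see the difference from CSV concretely, compare the types after a round trip through each format (a small demonstration, assuming the $DBs array from above is still in memory):

    ```powershell
    $DBs | Export-Csv .\DatabaseCache.csv -NoTypeInformation
    $DBs | Export-Clixml .\DatabaseCache.xml

    (Import-Csv .\DatabaseCache.csv)[0].UserCount.GetType().Name     # String
    (Import-Clixml .\DatabaseCache.xml)[0].UserCount.GetType().Name  # Int32
    ```

    The CSV round trip flattens UserCount to a string, while Import-Clixml hands back a real integer ready for ++ and arithmetic.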

  • PowerShell Performance: Write-Host

     

    This month’s post will be a short one – there’s not much code to this topic; it’s just an observation.  When I write scripts, I like the idea of adding some output to track what my script is doing, especially if there’s an issue with the script.  It’s much easier to find bugs or performance issues if I can see the last task that completed successfully – I then know where to start looking for the issue in my code.  However, it is important to understand that as more output is added, your script will slow down.  It may not seem like much, but when you are calling Write-Host multiple times for thousands of objects, overall script performance will be impacted.

     

    Let’s take an example script I wrote and assume the script calls a function in a foreach loop processing thousands of objects.  I’ll also add a Write-Host call each time I enter and exit the function.  That’s a lot of text, but it could be meaningful if there is an issue.  Let’s run the following code:

    function foo {
        Write-Host "Entering function 'foo'"
        # Some code
        Write-Host "Completed function 'foo'"
    }

    $startTime = Get-Date
    Write-Host "Script started at $startTime"
    $Cycles = 10000

    1..$Cycles | % {
        foo
    }
    Write-Host "Done"

    $endTime = Get-Date
    ($endTime - $startTime).totalseconds

    This takes about 10 seconds to execute on my computer, which is significant since virtually the only work being done is the Write-Host calls themselves.  After commenting out the Write-Host commands, the script completes in about 1 second.  In my personal experience, I have seen scripts save over 30 minutes of runtime just by removing Write-Host output.

    My suggestion, and what I do for most of my scripts: as an alternative to Write-Host, use Write-Verbose for output, and when you need to see what your script is doing, call the script with the –Verbose parameter.  Your script will execute faster under normal circumstances, and you will still be able to output diagnostic information when you need it - with no code changes.
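    To illustrate, here is the same function rewritten with Write-Verbose.  The [CmdletBinding()] attribute is what gives the function its built-in -Verbose switch:

    ```powershell
    function foo {
        [CmdletBinding()]
        param()
        Write-Verbose "Entering function 'foo'"
        # Some code
        Write-Verbose "Completed function 'foo'"
    }

    foo            # silent - the verbose stream is suppressed by default
    foo -Verbose   # prints both messages to the verbose stream
    ```

    Under normal runs you pay none of the output cost, and when something breaks, -Verbose turns the diagnostics back on without touching the code.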