Stuff from stuf

Bringing sexy back to management...

  • Not all Orchestration tools are created equal

    I’ve been doing a lot of work with the Release Candidate of System Center Orchestrator, so it’s always interesting to see what other orchestration products are capable of.  I recently read a blog post on Creating Workflow Loops in vCenter Orchestrator and I was struck by just how complicated it is to do relatively simple tasks, with lots of really arcane syntax to work with. 

    It’s probably worth taking a look at how System Center Orchestrator would accomplish a similar task.  First, we start with a new empty runbook.  I’m going to use a text file as an input into the runbook, but we could just as easily prompt for user input, or even better store the list of servers to be patched in a change control record in a service desk system (of course it doesn’t have to be the Microsoft service desk, that’s one of the strengths of the Orchestrator platform).

    We’re going to build a runbook that looks like this.

    Snapshot Final

    First we drag out a standard Read Line activity from the Text File Management category in the palette on the right.  Then we’ll grab a Get VM List activity from the VMware vSphere category.  We can then join the two activities together by hovering over the first activity, clicking the arrow that appears and dragging it to the next activity.  We’ll repeat the process, dragging out a Compare Values activity, and then finally a Take VM Snapshot activity.  We’ve now built the structure of our runbook, and we can go about customising the activities.

    First we’ll specify the text file to read in.  Double click the Read Line activity, and you can enter the properties.  I’ve used an ASCII format text file that just contains a list of VM Names, and the 1-END tells the activity to read from the first line to the end of the file.

    SnapshotReadLine

    The Get VM List activity doesn’t need much customisation; it just needs to be told which vSphere connection to use (defined in the Runbook Designer under Options…VMware vSphere).

    SnapshotGetVMList

    The Compare Values activity allows us to compare two text or numeric values – we are going to use this to match our list of VMs to snapshot from the text file against the list of VMs returned from vSphere.  We’re going to use one of the key features of Orchestrator, which is the concept of published data.  Each activity preceding this one returns data onto the databus, and any activity following can take advantage of that published data.  We’re going to use the Line Text returned from the Read Line activity, and the VM Names returned from the Get VM List activity.  In the Test area, we right click and choose Subscribe…Published Data.  From the drop down at the top we select the Read Line activity, and choose the Line Text option.

    SnapshotReadLinePD

    We then right click in the pattern field and subscribe Published Data again, and choose the Get VM List activity.

    SnapShotGetVMListPD

    We’ll end up with a Compare Values task that looks like this.  (By default a text comparison is performed; if you want to compare numbers, the General tab allows you to select that.)

    SnapshotCompareValue

    We can then configure the Take VM Snapshot activity to customise its behaviour.  Again, we’ll use some published data to identify which VMs to snapshot.  Up to now we’ve been working with VM names, but the snapshot activity actually requires a VM Path parameter – not to worry, the VM Path is returned along with the VM Name as part of the Get VM List published data.

    SnapshotVMSnapshot

    The final piece of the puzzle is to customise our link so that the snapshot task only runs for VMs that match the list in the text file.  To do this, we double click the link object between the Compare Values and Take VM Snapshot activities.  This brings up a dialog that allows us to set the conditions under which we will execute the next step – by default it will proceed should the task succeed, but success in this case simply means that the task ran.  We change it by clicking the text that says “Compare Values” and selecting the Comparison result published data.  We then change the criteria to true by clicking the text that says value, and entering true in the popup.

    SnapShotLinkProperties

    The great thing about this runbook is that it loops by itself – you don’t need to keep track of the loops (though you might still want to maintain the state of the loop so you can restart the runbook should it strike an error) – and without any strange syntax we’ve built a runbook that is easy to understand and debug.  The looping was handled automatically as part of the runbook, and we performed a relatively complex text comparison very quickly.

    Compare this to the vCenter Orchestrator example, and you’ll see that using System Center Orchestrator you’ll be up and running with your runbooks much faster.

  • Disingenuous Cost Comparisons

    I was inspired by Eric Gray’s recent post “Disingenuous cost comparisons” which purports to show all the ways that VMware costs less than Microsoft.  I thought I’d take Eric’s advice and see for myself by trying the VMware cost comparison calculator and see exactly how VMware are trying to reduce the high cost of their hypervisor. 

    I entered the data below as my starting point:

    vmwarelicensingcalc

    This gives the rather interesting claim that vSphere supports 50% more VMs than Hyper-V:

    appsperhost

    That’s a pretty interesting claim, and in the spirit of transparency VMware are good enough to give the reasons they make this assumption.  It’s interesting to see what those claims are, and how they stack up against reality.

    VMwareMemoryMgmt

    From what I know of VMware’s memory overcommitment methods, there are 4 techniques they use:

    1. Transparent Page Sharing (TPS)
    2. Memory Ballooning
    3. Memory Compression
    4. Host Paging/Swapping

    The bottom two methods are only used once the host is under memory pressure, which isn’t a great place to be, really.  If you’re at the point where the host is paging memory out to disk (even SSD), that’s still orders of magnitude slower than real memory, and will have a performance impact.  VMware point this out in their documentation:

    “While ESX uses page sharing and ballooning to allow significant memory overcommitment, usually with
    little or no impact on performance, you should avoid overcommitting memory to the point that it requires
    host-level swapping.” – Page 23

    TPS is the concept of finding shared pages of memory between multiple VMs and collapsing them down to a single copy stored in physical memory, and ballooning allows the host to request that the guest release memory it can spare, reducing the overall memory footprint of a guest.  So of the “multiple methods” of oversubscribing memory, two are really used in day to day production.  What VMware don’t talk about much is that most of the benefit of TPS comes from the sharing of blank memory pages (i.e. memory that is allocated to a VM, but isn’t being used).  There is an incremental benefit from shared OS memory pages as well, but the majority of the benefit is from those blank pages.  TPS is also affected by large memory pages and ASLR technologies in modern versions of the Windows OS, and isn’t an immediate technology – TPS takes time to identify the shared pages, and runs on a periodic basis.

    Hyper-V has Dynamic Memory functionality that allows machines to be allocated the memory that they need, and ballooning to reclaim memory that isn’t in use.  In practice, this has similar benefits to TPS – blank pages are simply not allocated to VMs until they need them, so they can be used elsewhere as a scheduled resource.  And it’s faster than TPS, as it is immediately responsive to VM demand.  So on a direct comparison, TPS may save slightly more memory due to shared OS pages, but ultimately TPS and Dynamic Memory solve the blank memory pages issue in different ways. 

    VMwareDirectDriverModel

    I think in this case it’s best to let the facts speak for themselves.  It’s pretty clear from the sorts of IO numbers in those articles that the indirect driver model (parent partition architecture) doesn’t impose a bottleneck on Hyper-V IO performance.  And it certainly appears that Hyper-V support is a requirement of WHQL.  And it’s not like VMware can claim to have no driver problems.  It also always makes me laugh when VMware claim that their drivers are optimised for virtualisation.  What does that really mean?  From what I can see, optimised for virtualisation means that you’ve got drivers that can deliver massive amounts of IO to your hardware – which is exactly what Windows Server 2008 R2 has.

    VMwareGangScheduler

    Wouldn’t it be embarrassing if a company had invested so much time and money in building a highly optimised gang scheduler, and then someone came along with a general purpose OS scheduler and the independent test results showed that the general purpose OS scheduler performed as well as, or in some cases better than, that gang scheduler (or even that the vSphere results were so bad that they released a hotfix specifically for that issue)?  The simple fact is that regardless of what VMware claim, Hyper-V does not use a general purpose OS scheduler – the claim just shows that they fundamentally don’t understand the Hyper-V architecture.  The only place a general purpose OS scheduler is involved is in the parent partition and the running guests.  The Hyper-V scheduler is not the parent partition scheduler – it is its own scheduler, optimised for virtualisation.  So in terms of performance, Hyper-V more than holds its own against vSphere, and it’s clear that VMware are just throwing this out there and hoping their customers don’t look into it.

    VMwareDRS

    DRS is certainly a useful technology for adding flexibility to virtual environments, and Microsoft ships a similar technology with System Center (called PRO) that performs a similar function.  So if VMware’s cost calculator adds in extra virtual machines to allow for running System Center, surely it should also allow for the extra functionality this provides?  And without wanting to talk about the future, Virtual Machine Manager 2012 offers Dynamic Optimisation as an alternative to PRO if you don’t have System Center Operations Manager – give it a try.

     

    So looking through the claims for a 50% increase in VMs per host we get:

    1. Memory overcommit – very slight advantage to VMware because of shared OS pages
    2. Direct Driver Model – no advantage
    3. High performance gang scheduler – no advantage
    4. DRS – similar functionality in System Center, and really contributes to flexibility, not a higher consolidation ratio.

    Based on all of that, I’m struggling to figure out how VMware can make these claims with a straight face.  And overall, my rating of VMware’s licensing calculator: nice try, but you’re smart guys and I’m sure you can do better.

  • Controlling HP ILO with Opalis

    Back to my long neglected blog…

    Inspired by Adam’s post about building his own Opalis integration pack, I thought I’d give it a go myself.  I’ve seen the sessions at MMS, but never actually sat down to do this.  I thought I’d start with a scenario I was looking at for my demo environment, which is all based on HP equipment: I wanted to ensure that I can bring the demo environment up in an orderly fashion, in a way that allows me to control when hosts start.  The first step in all of this is controlling the power state of the HP servers using ILO.

    HP publish tools to work with the ILO controller from the command line, and also a set of sample scripts that allow you to control the machine using the command line tools.

    To get things going, I downloaded the HP tools and installed them in the default location (C:\program files\HP Lights-Out Configuration Utility).  I created a C:\scripts directory and extracted the sample scripts into this directory.  I took the “Set_host_Power.xml” file and made two copies – one called “poweron.xml” and one called “poweroff.xml”.  The script as shipped from HP turns the power off, so I left the poweroff.xml file alone, and edited the “poweron.xml” file so that the line that read:
    <SET_HOST_POWER HOST_POWER="No" />
    was changed to:
    <SET_HOST_POWER HOST_POWER="Yes" />

    I also copied “get_host_power.xml” and named the new copy “getpower.xml” – not essential though.

    The login username and password are defined in the file, but we will override them on the command line specified in our Opalis objects, so we can leave them at the defaults.

    You can test the command line control by issuing the following command at the command line:

    cpqlocfg -s <ILO IP Address> -u Administrator -p <password> -f <input file from above>
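    If you end up running this often, the command above wraps up naturally into a small PowerShell function.  This is just a sketch: the function name and parameter names are my own invention, and the path assumes the default install location mentioned earlier.

    ```powershell
    # Hypothetical helper wrapping the HP cpqlocfg tool, using only the
    # flags shown above (-s, -u, -p, -f).  Path assumes the default install.
    function Invoke-IloScript {
        param(
            [string]$IloAddress,
            [string]$Username,
            [string]$Password,
            [string]$InputFile   # e.g. C:\scripts\poweron.xml
        )
        $cpqlocfg = "C:\Program Files\HP Lights-Out Configuration Utility\cpqlocfg.exe"
        & $cpqlocfg -s $IloAddress -u $Username -p $Password -f $InputFile
    }

    # Example: power a server on
    # Invoke-IloScript -IloAddress 192.168.1.50 -Username Administrator -Password secret -InputFile C:\scripts\poweron.xml
    ```
    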

    What I’ve done with my integration pack is simply wrap the command line & input files together into a single pack that I can then use to control my host power, and created some published data.

    Adam went through how to use QIK in its basic form, so I’m not going to cover that here.  What I’ll cover is the basic commands I added and the published data I added, along with a couple of limitations I found.

    I’ll go through the most detailed one, which is the GetPower command, as this returns published data.  I added a new command in QIK which runs the cpqlocfg tool.

    01. GetPower

    I then added my arguments as below.  It’s pretty straightforward – we have three parameters that we pass into the tool (ILO IP, Username, Password), and we define the password as encrypted text so it isn’t exposed in the GUI.  I’ve also hard-coded the path to the getpower.xml input file – one of the issues I found in QIK is that if you want to pass filenames or paths containing spaces, you need to put double quotes around them, but this makes the QIK compilation process fail.  I’ve worked around this by using an 8.3 path to the file.

    02. GetPower

    What I also want this to do is return the power state in a form I can use, and that’s where the published data tab comes in.  I’m using the “Extract Group” option when defining published data.  This pulls the matching text out of the output, and makes it available as published data, using standard .NET regular expressions.  My output text looks like HOST_POWER=”ON” or HOST_POWER=”OFF”.  The regular expression I have below matches the = sign, followed by any character, followed by an O and then one or more other characters.  This is specific enough to match my text only (I could also use =.(ON|OFF)), and not anything else.  The brackets around the text tell the regular expression to extract the actual data that matches, so my PowerStatus will pass back data that is either ON or OFF.  Ideally I would have used =”(O\w+), but again double quotes cause compilation issues in QIK that I needed to work around.
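    The same extraction can be sketched in a couple of lines of PowerShell (which uses the same .NET regular expression engine as QIK) – the sample output line here is taken from the cpqlocfg output described above:

    ```powershell
    # Sketch of the "Extract Group" behaviour: the parenthesised group in
    # the pattern captures the piece of text that gets published.
    $output = 'HOST_POWER="ON"'          # sample line from cpqlocfg output

    # =.(ON|OFF) : the = sign, any single character (the quote), then ON or OFF.
    if ($output -match '=.(ON|OFF)') {
        $powerStatus = $Matches[1]       # the captured group: ON or OFF
    }
    $powerStatus
    ```
    
    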

    04. GetPower

    So that’s it, I added a couple of other tasks – one to power the server on, and the other to power the server off – using the same command line & just pointing at the appropriate input files.  I didn’t add any published data to these tasks at this point.

    I also haven’t added any additional error handling in the tasks to extract error conditions, if I require this I’ll extract further information from the XML output.

    The last step was just to compile the dll into an Integration Pack as per the instructions in Adam’s blog.  I included the contents of C:\scripts to simplify matters.  I won’t attach the OIP file to this blog, as I don’t want to distribute HP’s files – you can download them yourselves.  I will attach the dll that I generated with QIK though so you can download and see what I’ve done.

  • Hotfixes for DPM Protected Hyper-V Guests

    One of the things that isn’t made 100% clear in the DPM documentation is what you need to do for Hyper-V VMs that you are protecting at the Hyper-V host layer.  The “Protected Computer Software Requirements” document on Technet tells you that you need to apply certain hotfixes to the Hyper-V host, and you could be forgiven for thinking that was all you needed to do.

    However, remember that when DPM uses the Hyper-V VSS Writer to take a snapshot of a running VM, it also leverages the in-guest VSS writers to ensure that the guest itself is internally consistent, so we are able to have application-consistent backups.

    What’s the implication of this?  If there is a hotfix required in the physical world (for instance a file server running Windows Server 2003 requires hotfixes 940349 & 975759) then you should also have that hotfix applied to your protected VMs, even though you aren’t running a DPM agent in those VMs.  Essentially, treat any machine you are protecting according to the “Protected Computer Software Requirements” document, regardless of whether it is physical or virtual, and whether it is protected at the host or guest level.

  • Server Cluster patching with Opalis – Part 2

    In my last post on this subject I set up the cluster patching framework in Opalis.  In this post I’m going to modify the policy initiation phase to use PowerShell & some text manipulation to make it more dynamic.

    I’d set this up originally to simply read the cluster nodes from a text file.  There are more options available to us though – we could use PowerShell to query the cluster for a list of cluster nodes, or in this case I’m going to use PowerShell to query Virtual Machine Manager to get the list of Hyper-V cluster nodes.

    I’ve got a simple PowerShell script which is as follows:

    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
    $cluster=get-vmhostcluster -name "democluster.contoso.com" -vmmserver "vmm.contoso.com"
    $vmhosts=get-vmhost -vmhostcluster $cluster | select -property Name

    Running this script will return me a variable $vmhosts which contains text as follows:

    @{Name=clusternode1.contoso.com}

    @{Name=clusternode2.contoso.com}

    That’s not quite the format I want.  I could do some further processing in PowerShell to get it right, but I’m lazy, and I also want to show off some of the text manipulation features in Opalis.
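    For the curious, the further PowerShell processing I’m skipping would only be a line or two – a sketch, reusing the $vmhosts variable from the script above (the sample line is taken from the output shown earlier):

    ```powershell
    # The select above returns objects that render as @{Name=...}; this
    # would strip them down to plain hostnames instead.
    $names = $vmhosts | ForEach-Object { $_.Name }

    # Or, as pure string manipulation (the equivalent of what I'll do in
    # Opalis below): drop the 7-character header and the 1-character footer.
    $line = '@{Name=clusternode1.contoso.com}'
    $name = $line.Substring(7, $line.Length - 8)   # clusternode1.contoso.com
    ```
    
    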

    First thing though, I want to execute this in Opalis.  Because I’m using Virtual Machine Manager objects I’ll need to have the VMM 2008 R2 console installed on my Opalis machine.

    Once I’ve got it installed I can then modify my policy.  I’m going to add a “Run .NET script” object.  “Run .NET Script” is pretty powerful and lets us run scripts in C#, JScript, VB.NET or PowerShell.  I just drag and drop my object into my policy, and then double click it to open the properties.

    My policy looks like this:

    overall policy

    When I open the properties, I can simply paste my script in there, and change the language to PowerShell:

    net script

    Now I need some way to get the output from the script.  The great thing about Opalis is that it makes this really easy.  I go to the “Published Data” area and tell the policy which variable I want to return as published data.  In this case my vmhosts variable is the data I want, so I simply add that in.  Now when I need to retrieve the published data from the data bus, vmhosts will appear as available.

    published data

    As I said before the data that comes out of that script isn’t exactly in the format I want – I only want hostnames, not all the other text so I need some way of stripping that out.  Fortunately this is another area that Opalis makes really easy.

    In my “Trigger Policy” task I previously set it up to pass the Computer Name parameter through to the next policy.  I’m going to continue to do this, but manipulate the text that I pass through.

    There is a screen shot below that shows the start of this, but I will expand further.  Opalis will treat information in square brackets as data manipulation functions, so I start with that.  In this case there is a consistent set of data coming out of the script – there is effectively a header (“@{Name=”), the data (“clusternodex.contoso.com”) and a footer (“}”).  I’m going to use a combination of three functions – the Mid function, the Sum function and the Len function.  Mid allows me to retrieve text from the middle of a string, and Len will give me the length of a string.  Sum just allows me to add two numbers together.

    Because my data is in a consistent format the information I want always starts at the 8th character, and the data I need to retrieve is from that point through to the second last character.  The length of text I need to grab is the overall length of the string less 8 characters (the 7 in the header plus the 1 in the footer).  For this I use the Sum function to add –8 to the length of the string.  I could equally use the Diff function to subtract 8.

    I then build my function as:

    [Mid('<vmhosts published data>',8,Sum(Len('<vmhosts published data>'),-8))]

    I insert my published data by right clicking where I want to insert it and choosing “Subscribe…published data”.  I choose the “Run .NET Script” task, and the vmhosts data.

    Trigger Policy

    So now when my trigger policy task runs it will pass the ComputerName parameter as clusternodex.contoso.com, having stripped off the header and footer.

    If you wanted to simply use the Windows Server 2008 R2 Failover Cluster PowerShell cmdlets to get the cluster nodes you could do this:

    Import-Module -Name FailoverClusters
    Get-ClusterNode -cluster democluster | Format-Table -Property Name -HideTableHeaders

    This actually gives nicer output than the VMM cmdlets, but the VMM ones were slightly easier to work with in my case (I can execute them locally on the Opalis server).

    Next time we’ll look at some of the pre-patching checks.

    Edit: Should have made clear, the Opalis service account needs to have permission in VMM to execute this script.  As there is no concept of an "Operator" in VMM the Opalis account will need to be a delegated administrator.  To run the standalone Failover Cluster scripts the Opalis account simply needs to have Read-only permissions on the cluster.

  • Server Cluster patching with Opalis – Part 1

    I’m using this post to document my experiments with Opalis.  One of the things I’ve been thinking about is using Opalis to orchestrate complicated patching scenarios that have traditionally been difficult to do with tools like System Center Configuration Manager alone (or at least would have involved lots of scripting).  For example patching a cluster would normally involve some manual tasks like failing over the active node before patching, then patching the offline nodes and cycling through the cluster.

    In this post I’m going to document the initial setup of the policy (or at least, what I think it’s going to look like at this point).

    The first thing that I’m going to set up is the different sets of policies – I’ve divided them into four parts:

    1. Policy Initiation (more on this later) and looping

    2. Node pre-activity (for instance – in a Hyper-V node putting the node into maintenance mode)

    3. The actual patching

    4. Node post-activity (confirming the node has returned to service successfully)

    I’ll create four empty policies – you can see the policy names in the screenshot below:

    OpalisClusterPatch01

    In that screenshot you can also see the first steps I’ve set up.  I’m using a “Read Line” task to read a text file containing my cluster node names, although in the future I’ll modify this to execute PowerShell against the cluster to get the node names.  This task then passes into a Trigger Policy task which is used to start the next step in the policy chain.

    There is one key configuration to change on the “Trigger Policy” step which I’ve highlighted below.  The “Wait for completion” option will allow the following steps to be executed sequentially for each node in the cluster, and the next node will only start once the previous node has completely finished.

    OpalisClusterPatch05

    The other thing I’m going to do is setup a custom start action in policies 2-4 and define an input parameter for each of “Computer Name” – there may be other parameters to pass in later, but at this point I’ll keep it simple (and I expect I’ll discover the other things I need as I develop this further).

    OpalisClusterPatch06

    Then in policies 1-3 I set up a “Trigger Policy” action as you can see in the first screenshot – this calls the next policy in line and passes the Computer Name as the parameter.  You can see this in the second screenshot – the parameter is defined as Computer Name, and in this instance I’ve used the “Subscribe to published data” option to subscribe to information read in from the text file.  In subsequent policies I’ll pass that parameter between policies.

    That’s the start of my cluster patching policy, stay tuned as I develop this further.

  • Exercising live migration

    One of the scenarios that we had running at TechEd NZ was a continuous live migration between two hosts which I had set up using Virtual Machine Manager & the Powershell components.  Someone asked for it internally, and I thought I’d post it here in case anyone else would find it useful.

    This is useful if you want to do any long running tests of live migration, and record the number of migrations you have done.

    To set up, create C:\temp\counter.txt and edit it so that it has a single line which has the content 0.  It uses this text file to record the number of migrations (it also stores in memory but uses the file in case you have to restart the script for any reason).  Edit the script to replace the following fields:

    vmmhost.yourdomain.com –> FQDN of your VMM Server

    HyperVHost1.yourdomain.com –> FQDN of Hyper-V host 1

    HyperVHost2.yourdomain.com –> FQDN of Hyper-V host 2

    VMName –> Name of the VM you are going to migrate.

    There is also a random delay introduced at the end of the script so the migration is not predictable.

    get-vmmserver -computername "vmmhost.yourdomain.com"
    $vm = get-vm | where { $_.Name -eq "VMName"}
    $host1 = get-vmhost | where {$_.Name -eq "HyperVHost1.yourdomain.com"}
    $host2 = get-vmhost | where {$_.Name -eq "HyperVHost2.yourdomain.com"}

    Do
    {
        # Work out which host the VM is currently on, and target the other one
        if ($vm.VMhost -eq "HyperVHost1.yourdomain.com")
        {$desthost = $host2}
        else
        {$desthost = $host1}

        # Perform the live migration, capturing the job so we can check the result
        move-vm -vm $vm -vmhost $desthost -jobvariable movejob

        # If the migration succeeded, increment the counter in the text file
        if ($movejob.Errorinfo.DetailedCode -eq 0)
        {
            $rawmigrations = get-content -Path C:\temp\counter.txt -TotalCount 1
            $migrations = [int32] $rawmigrations
            $migrations++
            $migrations        # echo the running count to the console
            set-content -Path C:\temp\counter.txt -value $migrations
        }

        # Random delay so the next migration isn't predictable
        $wait = get-random -minimum 60 -maximum 240
        start-sleep -seconds $wait
    }
    while ($true)

     

    To enhance this you could also randomise the guest that is being migrated, and if you have more than a two node cluster you could randomise the destination host.  If I get bored over the next few weeks I’ll update it so that it does these things.
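    For anyone who gets bored before I do, a rough sketch of those enhancements, reusing the VMM connection and cmdlets from the script above (the filtering logic is my own assumption about how you’d pick sensible candidates – adjust for your environment):

    ```powershell
    # Sketch: pick a random running VM and a random destination host each pass.
    # Assumes get-vmmserver has already been run, as in the script above.

    # Pick any VM that is currently placed on a host
    $vms = @(get-vm | where { $_.VMHost -ne $null })
    $vm  = $vms | get-random

    # Any host other than the one the VM is currently on is a candidate
    $candidates = @(get-vmhost | where { $_.Name -ne $vm.VMHost.Name })
    $desthost   = $candidates | get-random

    move-vm -vm $vm -vmhost $desthost
    ```
    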

  • Application Virtualisation – Agent or Agentless Part II?

    In my previous post I talked about some of the advantages of using agent based technology to deliver virtual applications.  There were a couple of things that I forgot to include when I wrote that post.  So continuing on from that list, here are the extras:

    5. Inventory

    If I’ve got my application content separated from my agent, I can now query my agent to find out what applications are present on my machine.  That means using tools like System Center Configuration Manager I can find out which virtual applications are where (just because they’re virtual doesn’t mean you don’t have to pay for them). 

    6. Usage information

    My agent can now collate all the usage information about my virtual applications and report that into a single location, using a single mechanism that is independent of the virtual application itself.  Now I can get accurate information about how my users are using the virtual applications, and use that to optimise my licensing.

     

    Those 6 points demonstrate that to truly be effective with virtualisation, the management component is essential.  And the only way to effectively provide an application virtualisation solution that is enterprise ready is to use an agent based technology.  Anyone selling you an agentless application virtualisation solution is not selling you any cost savings or flexibility, they are selling you complexity and overhead.

  • Application Virtualisation – Agent or Agentless?

    One of the things I sometimes hear from my customers when we’re talking about Application Virtualisation is “why do you need an agent for virtualising applications, especially when some of your competitors don’t?”.  My answer to them is “what do you think an agentless solution does except introduce management overhead?”.  Sure it looks attractive in the short term since all you have to do is distribute an executable to your clients and they can run virtual applications. 

    However I can think of several reasons why an agent based technology is superior.

    1. Management of the virtualisation agent

    Let’s say that I discover a security vulnerability in my virtualisation agent, or I want to add new capability to my application virtualisation client (for instance, in the App-V 4.5 release, we added the capability to stream over HTTP).  If I’m using an agentless technology and want to update my client, I have to go off and rebuild every package that I’ve deployed, and then redistribute that package to each of my clients.  With agent based technology I just deploy an update to my agent software and can leverage that new technology without rebuilding my application packages.

    In addition, if I want to know what version of the virtualisation agent is embedded in all those packages out there, I can’t really do that.

    2. Duplication of resources

    If I have an agent based technology, when I launch a virtual app I’m sharing the same agent resources as any other virtual app (CPU/Memory/Disk etc).  When I’m running agentless, every virtual app that I launch starts its own instance of the virtualisation agent, each adding its own overhead to the process.

    3. Speed

    If I have to extract the agent and execute it every time I launch the application, that adds overhead to my application launch time.

    4. Scriptability

    Because we’ve separated the virtual application content from the virtual application client, now we can do some more interesting things with the client.  Things like prepopulating the cache with certain applications, or removing applications from the machine at the command line.

     

    Overall having the virtual application content separated from the virtual application client allows us to have a more manageable implementation, and this will in turn lead to significantly reduced cost of ongoing maintenance over an agentless environment.  If you can’t manage your virtual apps, you’ve missed half the benefit of having them virtualised in the first place.

  • Save money managing your virtual hosts

    As I said in my post below, I read a lot of blogs.  I’ve been meaning to write a response to a post I read on Vcritical.com for a while, and haven’t got around to it.  Vcritical is written by Eric Gray who is a VMware employee, so clearly he has a biased view, much as I do given my employer :).  However it’s always fun to debate the issues, and we shouldn’t shy away from respectful disagreement.

    Eric wrote a post a while back “Save $14970 on VMware ESX Management” where he said to save money managing your VMware hosts, don’t use Virtual Machine Manager, and then pointed out that the cost of the Server Management Suite Enterprise license is $1,497 per physical host.

    I think Eric is understating the value that the SMSE license brings.  SMSE doesn’t just give you the Operations Manager management license for the host, but also for all the guests running on that host, along with the management agents for Configuration Manager, Data Protection Manager & Virtual Machine Manager.  So we get detailed information about what’s going on at the host level, but also detailed application-level information about what’s going on in all the guests, plus backup, plus inventory, plus patching, plus software distribution, plus desired configuration management, plus self service VM provisioning, plus a whole bunch of other capability.

    For me the deep application information is really important.  Sure, you could take a black box view of the VM and just treat it as a CPU, memory & disk IO consuming thing, but just because it’s virtualised doesn’t mean you don’t want to know what’s going on in that VM.  If that VM is running BizTalk you want to know that the BizTalk services are behaving the way they should; if the VM is running Exchange you want to know that mail is flowing; if it’s running SQL you want to know that the databases aren’t running out of space.  The black box view is great if all you want to do is play musical chairs with your VMs, but if you really want to know that they’re doing what they’re supposed to, you need deep application knowledge.  And that’s where Operations Manager excels.

    And forget what Eric tells you about PRO being too hard to configure – it’s not.

     

    Just to be explicit on my comment policy: comments are moderated on this blog, but I’ll publish every comment that isn’t offensive, spam or defamatory.  It just takes me a while sometimes.

  • Rethinking the guest?

    I love working in the IT industry.  Apart from being involved in an industry that is constantly changing and evolving, there are also lots of smart people out there who are doing & saying stuff that is interesting & challenging.  I read a lot of blogs, and I read a lot of blogs of people who are working with competitors technology, or doing things that aren’t in my area of focus.  I read an interesting post recently on the vinternals.com site called Rethinking the guest.  I was going to comment on the post there, but thought I would blog my response instead.

    Stu who posts there is a VMware guy and has a lot of interesting things to say.  This post touches my area of work (Systems Management) and challenged me to think about how we do stuff, and how our world might evolve – thanks Stu!

    Stu’s theory is that agent based management of guests needs to change, and posted four key areas where things could be done better.  I find myself half agreeing and half disagreeing with him.

    His first point is that managing patching with an agent is probably not efficient, and a sub point was that an enterprise software management system will likely do other stuff like hardware & software inventory, but we should disable hardware inventory because Vcenter captures that.  I think that misses the important point that a lot of these systems that capture inventory then pass that information up to other systems (like your CMDB) and it’s nice to have a consistent place to capture that.  I know that with System Center Configuration Manager I can grab all the information about my Windows inventory (software & hardware, physical or virtual) from a single place, and with partners like Quest I can grab information about my non-Windows environment as well from the same location.  Do we need to complicate our environment by splitting our virtualised hardware inventory from our physical hardware inventory, or from our desktop inventory?  And what if we want to do other things with our inventory system like baselining our desired configurations?
    And agentless patch management is not without its problems.  What about when the machine is turned off or unreachable (maybe the Windows Firewall is switched on)?  What about when we want more control over when things happen?  Agentless patching can be good, but I think we give up a lot of control.

    His second point is that agentless monitoring is also possible with the new Windows eventing subsystem.  Again, that only gives us a subset of the information that we might care about.  If all I care about is what events are being logged in the event log then sure, maybe that’s a potential solution.  But what if I want deeper information?  What if I want to be alerted when my disk space is low?  What if I want deeper information about what an application is doing?  If I really want to understand what my application is doing, merely looking at Windows events simply isn’t enough.  I need to look at more metrics than events expose.  That’s where System Center Operations Manager excels – and then using the PRO functionality inside Virtual Machine Manager we can use that contextual information to make smart decisions about remediating problems.  Which might be live migrating/Vmotioning a machine to another node, or it might be provisioning a new VM to take the extra load because we’ve hit an OS limit that providing extra resource can’t solve.
    And don’t get me started on SNMP.  SNMP is an overcomplicated, insecure (at least until SNMP v3) mess that is great for monitoring simple network devices, but it really shows its age.  And you simply don’t get the same depth of information about Windows devices with SNMP.

    His third point – backup.  All I can say is, good to see VMware catching up on the great backup tools we have available with System Center Data Protection Manager which provides the same functionality but across physical and virtual environments. :)

    I’ll skip VMsafe for now, I don’t know enough about it to comment – but I guess virtualisation will require security models to evolve, and VMsafe looks like one step in the process.

    But his overall point – this will hinder the move to the cloud.  This is where I start to agree.  I think it probably will, and it’s one of the things people will have to consider when they look at their cloud strategy.  So he’s right in that things have to change, but I’m not sure that the alternatives he’s proposed are good ones yet.  It’s going to be interesting to see how the management tools industry does evolve to take into account the cloud.  I’m just glad I’m here to see what happens!

  • Monitoring service levels in Ops Mgr R2

    I’ve just been playing with the new Service Level Dashboard for System Center Operations Manager 2007 R2.  It’s a great improvement on the first version, and it’s pretty straightforward to set up.  I thought I’d document the steps I took to configure it, although if you just follow through the documentation you’ll get there pretty easily.

    First the prerequisites:

    1. Install Operations Manager 2007 R2

    2. Install Windows SharePoint Services 3.0 (You need to install against SQL, not against the internal Windows database)

    3. You’ll need to import the Service Level Dashboard management pack into Ops Mgr

    4. Install the Service Level Dashboard following the wizard – it’s pretty straightforward.

    5. Make the following change to the web.config file for the Service Level Dashboard application that gets created (by default it’s stored in C:\inetpub\wwwroot\wss\VirtualDirectories\51918).  Change the line that reads:
    <identity impersonate="false"> to:
    <identity impersonate="true">

    You’ll need to run iisreset after you’ve done this.  If you don’t, you’ll get a “Cannot complete this action” error when you navigate to the home page.

    6. Make sure to create a firewall exception for the port that you’ve configured for the Service Level Dashboard.
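    If you're setting up more than one dashboard, the web.config change in step 5 is easy to script.  Here's a minimal sketch in Python; the element location is an assumption based on a default web.config layout (an <identity> element under <system.web>), so check your own file before running it:

    ```python
    import xml.etree.ElementTree as ET

    def enable_impersonation(config_path):
        """Set <identity impersonate="true"> in a web.config file.

        Sketch only: assumes <identity> sits under <configuration>/<system.web>,
        as it does in a default web.config.
        """
        tree = ET.parse(config_path)
        identity = tree.getroot().find("./system.web/identity")
        if identity is None:
            raise ValueError("no <identity> element found under <system.web>")
        identity.set("impersonate", "true")
        tree.write(config_path)
    ```

    You'd still need to run iisreset afterwards, of course.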

    Now you should be good to go.  First thing you need to do is define your service level in the Operations Manager console.  Open the console and go to the Authoring pane.  Choose the “Service Level Tracking” option. 

    1. SL definition

    In the right hand pane, click Create. This brings up the Service Level Tracking wizard.  Give it a name.

    2. SL definition

    Now we need to define what we’re interested in tracking for service levels.

    3. SL definition

    The first thing we need to do is select the class that we’re monitoring.  Click the top Select button and choose the class that matches what you’re interested in.  By default it’s scoped to Distributed Application, but you can change this to Group or All.  In this case I’m interested in tracking one of my pre-created distributed applications, so I choose “Distributed Application” from the list.

    4. SL definition

    Now we’ve selected our class, we need to target to a specific object – in this case I’m looking for my distributed application called “OpsMgr Website”.  This is a distributed application I created based on the “Line of business web application template” – it monitors the OpsMgr web console & backend database of the OpsMgr server itself.

    5. SL definition

    Also make sure you choose a management pack to store this new service level in.

    6. SL definition

    Now we need to define what our service level objective actually is.

    7. SL definition

    Click the Add button and we can define what we want our service level objective to be.  I want my app to be available at least 95% of the time, and I can specify which states count as downtime.

    8. SL definition

    Once I’ve done this, I can now create the service level tracking.

    9. SL definition

    Now that I’ve defined what I need to track, I can go to the Service Level Dashboard and set up the web page that lets me view this.  Navigate to the website you set up when installing the Service Level Dashboard (by default it’s http://localhost:51918); the site is blank by default.  Make sure you log on as an Administrator of the site so that you can make changes.

    1. SLD Config

    Go to the Site Actions button, and choose Edit Page.  This will bring up the screen below and, all things going well, you should have your service level that you’ve defined available to select.  Choose the one you want to monitor, and choose the refresh rate and over what period you want to report.  Once you’re done, exit edit mode.

    2. SLD Config

    You should now have a nice view of how you’re tracking against your service levels – you can see that I’m not meeting my 95% objective (that’s what happens when you stop the web console web site for a few hours…)

    3. SLD Config

    As you can see, the new version of the Service Level Dashboard is easy to set up and provides a great view of how you’re tracking.

  • PowerPoint from DPM Events

    For those that want them, here are the PowerPoint presentations from the DPM Unplugged events that ran in Auckland, Wellington & Christchurch last week.  I hope that all who attended enjoyed the events, and got a lot out of them, and thanks to Peter Niven for making it over and working so hard presenting. 

  • OEM Deployment Packs for ConfigMgr

    Like I said before, I don't like to use this blog to simply mirror product announcements (although can I just say - the Windows 7 beta rocks!).  I talk a lot to customers about Operating System Deployment in System Center Configuration Manager.  With the ability to deploy server operating systems as well, a lot of customers are using the ConfigMgr OSD technology to provide a single provisioning location for their Windows clients and servers.  It's a great way to do it, as the technologies are essentially the same.  If you can deploy Windows Vista, you can deploy Windows Server 2008.  If you can deploy Windows XP, you can deploy Windows Server 2003.  However some things are slightly different for servers, as they have some unique hardware requirements that you don't have on workstations (e.g. configuring the hardware RAID).  This is where our partners at Dell, HP & IBM are extending the ConfigMgr platform, providing tools that plug in to the OSD framework and let you configure their server hardware as part of a task sequence.

    Dell released their tool late last month: Dell Deployment Pack for ConfigMgr

    IBM released their tool at the end of last year: IBM Deployment Pack for ConfigMgr

    Now all we need is the HP pack and we've got the whole set!

  • Subselect syntax in Config Mgr

    I'm mostly posting this as a reminder to myself, but someone may find it useful.  This is the subselect syntax for ConfigMgr queries, so you can do a query along the lines of "Show me all the machines that meet criteria X except the machines that meet criteria Y".  My SQL skills are weak, so this is my permanent record of this.

    select SMS_R_System.Name from SMS_R_System
    where SMS_R_System.SystemGroupName = "Domain\\GroupName"
    and SMS_R_System.Name not in
    (select SMS_R_System.Name from SMS_R_System
    inner join SMS_G_System_ADD_REMOVE_PROGRAMS on SMS_G_System_ADD_REMOVE_PROGRAMS.ResourceID = SMS_R_System.ResourceId
    where SMS_G_System_ADD_REMOVE_PROGRAMS.DisplayName = "ApplicationX")

     

    This particular subselect looks for machines that are members of a particular AD System Group called "Domain\GroupName" (note that you need two backslashes in the query to escape the special character) that don't have an application called "ApplicationX" present in Add/Remove Programs.  If you're resourceful you'll be able to modify this to do other stuff as well.
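    If SQL isn't your thing either, the subselect is easier to see as a simple set difference.  A quick illustration in Python (all the machine and application names here are made up):

    ```python
    # The subselect is just a set difference: machines in the AD group,
    # minus the machines that already report "ApplicationX" in
    # Add/Remove Programs.
    group_members = {"PC01", "PC02", "PC03", "PC04"}  # outer select results
    has_application_x = {"PC02", "PC04"}              # inner select results

    needs_install = group_members - has_application_x
    print(sorted(needs_install))  # → ['PC01', 'PC03']
    ```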

     

    Thanks Jaimie!

  • User initiated F12 machine rebuilds with Config Mgr OSD

    One of the things I often get asked by customers is how to make it possible for a user to rebuild a failed machine at their desk, without requiring that the machine be moved to a special collection, or other administrative input.  There are several problems with doing this:

    • Config Mgr & the machine (if the OS is running) remember that the machine has run the OS deployment task sequence before, so will not execute the task sequence again.  The solution to this is to add a mandatory recurring schedule.  However, this means that as long as the machine remains in the collection that the task sequence is targeted at it will try to rebuild itself according to the schedule.
    • If you advertise a slightly different, non-mandatory task sequence at the machine collection you could get around this (the first time at least), but the user will be prompted that there is an optional operating system deployment available for them.  In the standard software distribution case we can suppress this by selecting the "Suppress program notifications" option in the Program, but this option is not available when advertising task sequences, so it is not an option here (as useful as it would be).

    One of my colleagues shared a great way of making this happen by leveraging the way WinPE behaves.  One of the options you get inside the task sequence is to specify which client platforms the task sequence can run on.

    OSD-suppress

     

    This is useful in a number of situations, such as an OS upgrade where you don't want to rebuild machines that are already at the current release: you would say it can run on Windows XP but not Windows Vista.  Another situation is where you are using task sequences to do complicated software installations that need a lot of chained tasks or logic in place, and need to control which OS versions they can run on.  However, where your OS is in a steady state (e.g. everyone running Windows Vista SP1), client platform targeting doesn't help much for operating system deployment.  The trick is to target the task sequence at an operating system you don't have in your environment - in the screenshot above I have selected x86 Windows 2000 Service Pack 4 (you don't still have this, right?).  We can then advertise our task sequence at the collection with our standard machines and make it mandatory & recurring, but because they don't meet the client platform requirement, they will simply ignore the task sequence advertisement.

    The magic comes when you PXE boot and run the task sequence inside WinPE.  WinPE just ignores any client platform requirements (which makes sense: WinPE isn't any of those client platforms, so if the requirement applied the task sequence could never execute inside WinPE, and WinPE has no way of knowing what any existing OS on the machine might be).  WinPE does, however, honour the recurring and mandatory nature of the advertisement, so users can execute an F12 rebuild of their machine at their desk with no administrative input.

     Thanks to Andrew Cobb for the idea!

  • Upcoming New Zealand DPM Events

    I made a decision when I started this blog to only use it for original content and not just repost links to other sites (which explains why I don't post frequently).  However this one is near and dear to my heart.  We have some DPM technical events coming up in a couple of weeks, and I'd really like to see these events full.  DPM is a really exciting data protection & backup technology for Windows workloads, and this event will be great for learning about the technology behind it.  The event details are below; it's a free event, so please sign up and get along.

     

    March 2009 - TechNet UNPLUGGED Update

    Data Protection Manager 2007 Technical Briefing

    This technical briefing focuses on Microsoft Data Protection Manager (DPM) 2007 - a member of the Microsoft System Centre family of management products. If you are an IT Administrator that works with Microsoft technologies, or are simply looking to understand how Microsoft is addressing the demands from our customers and partners for greater manageability and protection of IT infrastructure - this is the technical briefing for you.
    Peter Niven - Microsoft's Regional Specialist on DPM - will be helping attendees understand how DPM can better protect your Microsoft IT investment and elaborate on the wider opportunities to design a more resilient infrastructure. See below for our proposed agenda:

    · Why Microsoft developed Data Protection Manager

    · Brief DPM 2007 overview

    · Specific Workloads (with demos)

    o Exchange

    o SQL

    o SharePoint

    o Virtualised environments

    · What about non Microsoft Workloads?

    · Sizing and Deployment

    · Architecting for Disaster Resilience

    · Service Pack 1 (released December 2008)

    · Roadmap

    Note: This is a Level 300 Event for IT Professionals
    Registration is free. Dates and locations below. Full details here.

    Christchurch - 17 Mar 09: 1- 4pm Register
    Christchurch Convention Centre, 95 Kilmore Street, Christchurch

    Wellington - 18 Mar 09: 1- 4pm Register
    Microsoft Wellington, Level 12, 157 Lambton Quay, Wellington

    Auckland - 19 Mar 09: 1 - 4pm Register
    Microsoft Auckland, Level 5, 22 Viaduct Harbour Ave, Auckland

  • Windows 7 & Error 80072ee2

    I upgraded my home laptop to the beta of Windows 7 this weekend, and when I ran Windows Update I kept getting error 80072ee2 in windowsupdate.log.  It's a good idea to run Windows Update when you've done a fresh install of Windows 7 as it picks up a fair few of the missing device drivers (including, in my case, video).  Error 80072ee2 is one of those errors that seems to be relatively generic and related to network connectivity of some kind.  My fix was relatively simple: I manually downloaded the latest Intel network drivers for my machine and installed them.  Problem solved, Windows Update works again.  If you're getting this error, it's a relatively easy first step to try.

  • Strange virtual app error message in Config Mgr

    I’ve recently been working in a proof of concept environment streaming some virtual applications from System Center Configuration Manager 2007 R2.  The process for adding a virtual application is really simple – there is a nice wizard that you run and point at the xml manifest that gets created when you sequence an application in App-V 4.5.

    However, in this particular case I pointed at my valid manifest file and got this error instead of the nice display of the virtual apps in my sequence.

    "Load Virtual Application Package Failed. Error: Argument 'picture' must be a picture that can be used as a Icon".

    AppvError

    Unfortunately it’s not a very helpful message as there is no ‘picture’ element in my manifest or in my OSD files (the error is also grammatically incorrect - but then half this post probably is as well).  It took a little bit of digging to figure out what was going on.

    My application was made up of three separate “applications” – the main executable, a license executable and a help file.  The only clue I had to what might be going wrong was the reference to an icon.  When you virtualise an application with App-V, the icons are stripped out and stored in a subdirectory <application name> icons.  Taking a look in there I saw four .ico files (one for each executable and a fourth for the document association).  However, the icon file for the help application was 251KB, and all the others were 2KB.  Looking at the properties of the help icon, it’s actually the WinHelp executable: for some reason the sequencer has grabbed the whole application rather than just the icon.

    In this case, I didn’t actually need the Help application (I should have probably removed it during sequencing) so to get around this issue I edited my manifest xml and removed the help application section.  Once I did this, the manifest imported perfectly and I was able to successfully deploy using Configuration Manager.
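    If you hit the same issue, removing an application section can be scripted rather than hand-edited.  This is only a sketch: the element and attribute names (an application element carrying a NAME attribute) are illustrative assumptions, not the documented App-V 4.5 manifest schema, so match them up against your own manifest first:

    ```python
    import xml.etree.ElementTree as ET

    def remove_application(manifest_path, app_name):
        """Remove any element whose NAME attribute matches app_name.

        Hypothetical sketch: the NAME attribute convention is an assumption,
        not the documented App-V manifest schema.
        """
        tree = ET.parse(manifest_path)
        # Walk every element and drop matching children.
        for parent in tree.getroot().iter():
            for child in list(parent):
                if child.get("NAME") == app_name:
                    parent.remove(child)
        tree.write(manifest_path)
    ```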

  • Unplugged MDT Session Deck

    Here is the deck from the "Deploying Windows Vista with MDT" session that I've run at TechNet Unplugged.  It's shamelessly stolen from Michael Niehaus & Tim Mintner's sessions from TechEd, so thanks to them for doing all the hard work.

     

     

  • Hyper-V with Server Core - too hard for VMware to use?

    There was a video posted recently showing the difference between setting up Hyper-V with Windows Server Core and setting up ESX3i.  Mike DiPetrillo posted an interesting comment to the effect that the commands entered were all net new for Windows Server 2008 Core.

     

    I watched the first video and counted one net new command line tool for Windows Server 2008 (let alone Core).  The command sequence is something like:
    netdom (around since Windows NT 4.0)
    shutdown (around since Windows NT 4.0)
    netsh (around since Windows 2000)
    netdom (see above)
    netsh (see above)
    ocsetup (the only net new command)

    The second video shows setting up the ISCSI storage and I counted no net new command line tools there:
    iscsicli (has been available as part of the iSCSI initiator for a while now - it's available for Windows 2000, XP & 2003 - this is just the first time the iSCSI initiator ships in the OS).  It is probably fair to say that it's not the most intuitive command line in the world, but it is well documented.
    diskpart (been around since Windows XP)

    So far from being all net new command lines, we have one net new command line tool, and a bunch that have been around for a while (over ten years in some cases).  Guess you'll need to update that MCSE to the new Windows 2008 certification, Mike? :)

    The great thing about having all this command line stuff available is that it works on both Windows Server Core & Windows Server Full installs, and you can use all this stuff to automate your server builds (or even better, use System Center Configuration Manager to deploy your Windows Server with image based deployment and task sequences).

    My question to Mike is this:  once I have my host up and running, what extra steps do I have to do to do things like copy my ISO files on there, or copy my gold images on there? 

    With Windows I browse to my file share and copy them on.  It's the Windows you know and love.

    What do I have to do to achieve this on ESX3i?  Usually it means getting an SCP tool and copying the files on there; then when I want to provision, it means logging on to the console (command line) and copying my images around.  It's the *nix you might not know or love.

    Either that or I deploy System Center Virtual Machine Manager and use that to manage my Hyper-V & VMware environments, and use that to do all my provisioning. 

  • Virtualisation, Time Sync & Domain Controllers

    Looks like there is an issue with VMware's Update 2 for ESX 3.5 where, when the date reaches 12th August, ESX stops working:

    http://communities.vmware.com/thread/162377?tstart=0 

    The workaround is to manually reset the time back a day or two.  If you're doing this and you've got domain controllers running in the ESX environment (yeah, I know it's unsupported, but I know there are plenty of you out there doing it) make sure you don't have time sync enabled between the host and the domain controller guest.  The Kerberos protocol is very sensitive to time skew: anything more than 5 minutes and AD replication will stop, authentication will stop, a bunch of things will stop.  Then when you fix the time, you can get weird issues with deleted objects reappearing and other strangeness.

    In general, if you are running domain controllers on a virtual environment (be it Hyper-V, ESX or XenServer), always disable time sync between the host and the domain controller.  Let AD take care of time sync itself: by default all domain controllers sync with the PDC emulator, which should in turn sync with an external NTP source.
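    The five minutes mentioned above is Kerberos' default clock-skew tolerance, and it's worth seeing just how tight that window is when a host starts pushing its own (wrong) time into a DC guest.  A trivial illustration in Python:

    ```python
    from datetime import datetime, timedelta

    # Kerberos' default clock-skew tolerance is five minutes.
    MAX_SKEW = timedelta(minutes=5)

    def kerberos_skew_ok(client_time, dc_time):
        """True if two clocks are close enough for Kerberos to keep working."""
        return abs(client_time - dc_time) <= MAX_SKEW

    # Four minutes of drift: still fine.
    print(kerberos_skew_ok(datetime(2008, 8, 12, 10, 0),
                           datetime(2008, 8, 12, 10, 4)))   # → True
    # A host wound back a day (as in the workaround above): authentication breaks.
    print(kerberos_skew_ok(datetime(2008, 8, 12, 10, 0),
                           datetime(2008, 8, 11, 10, 0)))   # → False
    ```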

    (Edit: Corrected Hyperlink)

    (2nd Edit: Read Nick's comment below, there is a lot of useful information in there)

  • Configuration Manager PXE Service Point Errors

    I recently had a look at an issue with the Configuration Manager PXE Service Point running on a test environment of mine.  I couldn't get any of my operating system images to deploy - in fact I couldn't get the PXE boot to complete successfully.  In the smspxe.log file I was getting the following messages:

    No Boot Action for Device (38) found

    ProcessDatabaseReply: No Advertisement found in Db for device

    I found this a little odd, as the machine definitely had a task sequence advertised against it, and my custom WinPE image had been made available for PXE. 

    Digging in the event log, I was getting error messages when WDS started up, saying:

     

    Source: WDSIMGSRV

    Category: WdsImgSrv

    Event ID: 258

    Description: An error occurred while trying to initialize the Windows Deployment Services image server.  Error information 0xC1030104

    A little cryptic, and not exactly obvious what it meant.  Having a dig around, it appears that 0xC1030104 actually means that the WDS server is not configured.  Strange, I thought the Configuration Manager PXE Service Point setup did that?  I checked all the appropriate logs (pxesetup.log, pxemsi.log) and everything seemed in order.  I decided to try initialising the server at the command line to see if that made a difference.  I ran:

    wdsutil /initialize-server /REMINST:"D:\remoteinstall"

    where D:\remoteinstall was the directory that Configuration Manager had configured WDS to use.  Sure enough, as soon as I did that, PXE started working perfectly.

  • Extending System Center Operations Manager Classes & Discovery

    One of the things I've recently been working on is a question asked by a partner: how do I get extra information about my machines into Operations Manager?  If you're not familiar with extending Operations Manager using classes & discoveries it can be quite intimidating, but it's relatively simple.  I figure if I can work it out, most people probably can.  This post will walk through extending the Windows Computer class with an extra attribute populated by an AD query.

    First, download and install the Operations Manager Authoring Console: http://www.microsoft.com/downloads/details.aspx?FamilyID=6c8911c3-c495-4a03-96df-9731c37aa6d7&DisplayLang=en

    It's also handy to have the Operations Manager Authoring Guide: http://www.microsoft.com/downloads/details.aspx?FamilyID=d826b836-59e5-4628-939e-2b852ed79859&DisplayLang=en.  The Authoring Guide covers all the concepts involved, and is especially useful to know about classes and inheritance.

    Start the Authoring Console & create a new Management Pack (File...New or click the icon). Create an Empty Management Pack. The best practice for the identity (see the Authoring Guide - Management Packs & Namespaces) is to use Vendor.ProductName.Version - in this case I'm going to use stufox.windowsserver with no version as this will be generic across all Windows server versions, and it's for demo purposes.

    NewMP1

     

    Name & description is up to you, this is what will show up in the Operations Manager console when you import the management pack.

    NewMP2

    Now you have a new blank Management Pack. The first thing to do is to create our new class, so go to Classes and click New Custom Class. You then need to provide a unique identifier for the Class, I'm going to create a class called stufox.windowsserver.ExtendedComputer which will be used to store the Description information from the Active Directory.

    NewClass1

    Once you click OK you will see the class creation screen. You need to set a few fields in here:

    Base Class: Microsoft.Windows.Computer - this is the class that we're extending to create our new attribute. 

    Name: StuFox Extended Computer (this will appear in the Operations Manager console as the name of the attribute)

    Description: Up to you

    NewClass2

    Change to the Properties tab, and right click in the left hand pane, and choose Add Property. Our property in this case is going to be ADDescription. The defaults are fine, although you can set a Display Name & Description if you want to.

    NewClass3

    Click OK to create the class.

    Now that the class is created you need to create a discovery to populate the class. I'm going to use a VBscript to do this, there is a lot of information on MSDN about this:

    How to create Discovery Data by using a script

    How to use Runtime Scripts for Discovery

    Change to the Health Model section & click Discoveries. Click New & choose Script.

    In the screen that opens, set the following values:

    Element ID: stufox.windowsserver.DiscoverAdDescription

    Display Name: Discover AD Description

    Target: Microsoft.Windows.Computer (important: the default will not be set to this, if you don't target it correctly you won't get any data)

    NewDiscovery1

    Click Next and you will get to set the schedule for the discovery. This discovery doesn't really need to run very frequently - once every 24 hours is fine.

    NewDiscovery2

    Click Next and you will get the script screen. This allows you to paste in a script written in VBScript or Jscript.

    Set the filename to DiscoverDescription.vbs, and paste in the discovery script below.  (Caveat: this script isn't 100% production ready; I've just used it as an example to show how to get the information in there.  There isn't much error handling, and there isn't any allowance for machines that aren't domain members.  Use at your own risk!)  I've highlighted (bold, italic) the two places you need to modify the script to set the class name to whatever your class name is.  Set the timeout to 10 minutes.

    NewDiscovery3

    Click the parameters button, and enter the following three parameters.

    $mpelement$ $Target/Id$ $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$

    NewDiscovery4

    Click OK to create the discovery. Once this has been created, highlight the discovery and choose Properties. Change to the Discovered Types tab and click Add. Highlight your custom class in the list, and choose OK. Right click the newly added class and choose the ADDescription property.

    NewDiscovery5

    Save the management pack to your hard disk somewhere and it is ready to import into Operations Manager.  It will just be a standard XML file that you can open with an XML editor (or notepad) if you like to see what's in there.

So, now that it's imported, how do you tell that it's there? The discovery will be available to view in the Operations Manager Console. Go to the Authoring view, and then choose Object Discoveries. You may have to change the scope of the console to see the discovery - include your new management pack. You can also change the schedule of the discovery to force it to run at a time of your choosing - in fact, because the management pack is unsealed you can edit the discovery script directly if you want to. This can be quite handy if, like me, you haven't necessarily ironed out all the bugs in your script. :)

Once the discovery has run, you can check whether the data made it into the Ops Mgr database - discovered data is stored in a table named MT_<Class Name>; in this case it's MT_ExtendedComputer (ExtendedComputer being the name of our new class).
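For example, a quick, illustrative check from SQL Server Management Studio against the OperationsManager database might look like this (a sketch only - the MT_ table name follows from the class name, and the exact column names for discovered properties can vary, so adjust to what you see in your own database):

```sql
-- Illustrative only: run against the OperationsManager database.
-- MT_ExtendedComputer is generated from the custom class name;
-- if rows appear here, the discovery has populated the class.
SELECT TOP 10 *
FROM MT_ExtendedComputer
```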

You can troubleshoot by looking at the Operations Manager event logs on the Ops Mgr server. If you need to check whether the script has actually made it down to a machine to run, open a command prompt, change to the "Health Service State" directory under the Operations Manager agent's install location (by default C:\Program Files\System Center Operations Manager 2007), and run dir /s discoverdescription.vbs. It will be in one of the "Monitoring Host Temporary Files x" directories.

     

    I assume there will be some clever people out there who will show me better ways to write my script, feel free, I'm always open to good advice.  One of the things I've thought about adding to it is the ability to determine if a machine is in a domain or not, and if not use the description from the local computer instead.
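For what it's worth, one way to make that domain check (a hedged sketch, not part of the management pack above) is to query the WMI Win32_ComputerSystem class, which exposes a PartOfDomain property, and fall back to the computer description held locally in Win32_OperatingSystem when the machine is standalone:

```
' Sketch only: detect domain membership, and fall back to the
' local computer description for workgroup machines.
Option Explicit
Dim objWMI, objCS, objOS, strDescription

Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
For Each objCS In objWMI.ExecQuery("SELECT PartOfDomain FROM Win32_ComputerSystem")
    If objCS.PartOfDomain Then
        ' Domain member: query AD as DiscoverDescription.vbs does
    Else
        ' Workgroup machine: use the local description instead
        For Each objOS In objWMI.ExecQuery("SELECT Description FROM Win32_OperatingSystem")
            strDescription = objOS.Description
        Next
    End If
Next
```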

     

    DiscoverDescription.vbs

'DiscoverDescription.vbs
'
' Discovers the Description field from AD given the DNS name of the computer
'
Option Explicit
On Error Resume Next

Const ADS_SCOPE_SUBTREE = 2

Dim oAPI, oArgs, strDescription
Dim SourceID, ManagedEntityId, TargetComputer, objRootDSE, strDomain
Dim wshnetwork, strCompName
Dim objConnection, objCommand, objRecordSet, objComputer, strComputerAccount
Dim oDiscoveryData, oInst

Set oAPI = CreateObject("MOM.ScriptAPI")

' Arguments:
' 0 - SourceID
' 1 - ManagedEntityID
' 2 - ComputerIdentity
Set oArgs = WScript.Arguments
If oArgs.Count < 3 Then
    Call oAPI.LogScriptEvent("DiscoverDescription.vbs", 9999, 2, "Script was called with fewer than 3 arguments and was not executed")
    WScript.Quit -1
End If

SourceID = oArgs(0)
ManagedEntityId = oArgs(1)
TargetComputer = oArgs(2)

Call oAPI.LogScriptEvent("DiscoverDescription.vbs", 9999, 1, "Source ID: " & SourceID & " ManagedEntityID: " & ManagedEntityId)

Set objRootDSE = GetObject("LDAP://RootDSE")
strDomain = objRootDSE.Get("DefaultNamingContext")
Set wshnetwork = WScript.CreateObject("WScript.Network")
strCompName = wshnetwork.ComputerName
If Err.Number <> 0 Then
    Call oAPI.LogScriptEvent("DiscoverDescription.vbs", 9997, 2, "Error creating objects")
    WScript.Quit -1
End If

Set objConnection = CreateObject("ADODB.Connection")
Set objCommand = CreateObject("ADODB.Command")
objConnection.Provider = "ADsDSOObject"
objConnection.Open "Active Directory Provider"
Set objCommand.ActiveConnection = objConnection
If Err.Number <> 0 Then
    Call oAPI.LogScriptEvent("DiscoverDescription.vbs", 9996, 2, "Error creating ADO objects")
    WScript.Quit -1
End If

objCommand.Properties("Page Size") = 1000
objCommand.Properties("SearchScope") = ADS_SCOPE_SUBTREE
objCommand.CommandText = "SELECT ADsPath FROM 'LDAP://" & strDomain & "' WHERE objectCategory='computer' AND name='" & strCompName & "'"
Set objRecordSet = objCommand.Execute

strDescription = ""
If Not objRecordSet.EOF Then
    objRecordSet.MoveFirst
    Do Until objRecordSet.EOF
        strComputerAccount = objRecordSet.Fields("ADsPath").Value
        Set objComputer = GetObject(strComputerAccount)
        strDescription = Left(objComputer.Description, 255)
        objRecordSet.MoveNext
    Loop
Else
    Call oAPI.LogScriptEvent("DiscoverDescription.vbs", 9998, 2, "Active Directory object could not be found")
End If

' Change stufox.windowsserver.ExtendedComputer in the next lines to your own class name
Set oDiscoveryData = oAPI.CreateDiscoveryData(0, SourceID, ManagedEntityId)
Set oInst = oDiscoveryData.CreateClassInstance("$MPElement[Name='stufox.windowsserver.ExtendedComputer']$")
Call oInst.AddProperty("$MPElement[Name='Windows!Microsoft.Windows.Computer']/PrincipalName$", TargetComputer)
Call oInst.AddProperty("$MPElement[Name='stufox.windowsserver.ExtendedComputer']/ADDescription$", strDescription)
Call oDiscoveryData.AddInstance(oInst)
Call oAPI.Return(oDiscoveryData)

  • Hyper-V: Revert & Apply

I was just talking to one of my colleagues about the difference between Revert and Apply when using the Hyper-V management console. They are almost the same thing; Revert is just a special case of Apply. Apply is used on any snapshot that you want to start using - you right-click the snapshot and choose Apply, which takes the virtual machine back to the state it was in at the time the snapshot was taken. Revert takes you back to the last snapshot that you took or applied - the one shown with a green triangle in the Snapshots pane.

     For more information about Snapshots in Hyper-V I recommend Ben Armstrong's blog post: http://blogs.msdn.com/virtual_pc_guy/archive/2008/01/16/managing-snapshots-with-hyper-v.aspx