• Server Cluster patching with Opalis – Part 1

    I’m using this post to document my experiments with Opalis.  One of the things I’ve been thinking about is using Opalis to orchestrate complicated patching scenarios that have traditionally been difficult to do with tools like System Center Configuration Manager alone (or at least would have involved lots of scripting).  For example, patching a cluster would normally involve manual tasks like failing over the active node before patching, then patching the offline nodes and cycling through the cluster.

    In this post I’m going to document the initial setup of the policy (or at least, what I think it’s going to look like at this point).

    The first thing that I’m going to set up is the different sets of policies – I’ve divided them into four parts:

    1. Policy Initiation (more on this later) and looping

    2. Node pre-activity (for instance, putting a Hyper-V node into maintenance mode)

    3. The actual patching

    4. Node post-activity (confirming the node has returned to service successfully)

    I’ll create four empty policies – you can see the policy names in the screenshot below:

    OpalisClusterPatch01

    In that screenshot you can also see the first steps I’ve set up.  I’m using a “Read Line” task to read a text file containing my cluster node names, although in the future I’ll modify this to execute PowerShell against the cluster to get the node names.  This task then passes into a “Trigger Policy” task, which starts the next step in the policy chain.
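    As a sketch of what that future PowerShell step might look like (this assumes the Failover Clustering PowerShell module from Windows Server 2008 R2 is available, and the cluster name and output path are placeholders):

    ```powershell
    # Sketch only: read the node names from the cluster itself rather than a
    # hand-maintained text file. Assumes the FailoverClusters module is available.
    Import-Module FailoverClusters

    # "MyCluster" is a placeholder for your cluster's name
    Get-ClusterNode -Cluster "MyCluster" |
        ForEach-Object { $_.Name } |
        Set-Content -Path C:\temp\clusternodes.txt
    ```

    The “Read Line” task could then point at that file, so the rest of the policy chain stays unchanged.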

    There is one key configuration change on the “Trigger Policy” step, which I’ve highlighted below.  Ticking the “Wait for completion” option ensures the following steps are executed sequentially for each node in the cluster: the next node only starts once the previous node has completely finished.

    OpalisClusterPatch05

    The other thing I’m going to do is set up a custom start action in policies 2-4 and define a “Computer Name” input parameter for each.  There may be other parameters to pass in later, but at this point I’ll keep it simple (I expect I’ll discover what else I need as I develop this further).

    OpalisClusterPatch06

    Then in policies 1-3 I set up a “Trigger Policy” action, as you can see in the first screenshot.  This calls the next policy in line and passes the computer name as a parameter.  You can see this in the second screenshot: the parameter is defined as Computer Name, and in this instance I’ve used the “Subscribe to published data” option to subscribe to the information read in from the text file.  In subsequent policies I’ll pass that parameter along the chain.

    That’s the start of my cluster patching policy.  Stay tuned as I develop this further.

  • Exercising live migration

    One of the scenarios that we had running at TechEd NZ was a continuous live migration between two hosts, which I had set up using Virtual Machine Manager and its PowerShell cmdlets.  Someone asked for it internally, and I thought I’d post it here in case anyone else finds it useful.

    This is useful if you want to do any long running tests of live migration, and record the number of migrations you have done.

    To set up, create C:\temp\counter.txt and edit it so that it contains a single line with the content 0.  The script uses this text file to record the number of migrations (the count is also kept in memory, but the file means you can restart the script without losing it).  Edit the script to replace the following fields:

    vmmhost.yourdomain.com –> FQDN of your VMM Server

    HyperVHost1.yourdomain.com –> FQDN of Hyper-V host 1

    HyperVHost2.yourdomain.com –> FQDN of Hyper-V host 2

    VMName –> Name of the VM you are going to migrate.

    There is also a random delay at the end of each loop so the migrations are not predictable.

    # Connect to the VMM server and find the VM and both hosts
    get-vmmserver -computername "vmmhost.yourdomain.com"
    $vm = get-vm | where { $_.Name -eq "VMName" }
    $host1 = get-vmhost | where { $_.Name -eq "HyperVHost1.yourdomain.com" }
    $host2 = get-vmhost | where { $_.Name -eq "HyperVHost2.yourdomain.com" }

    do
    {
        # Migrate to whichever host the VM is not currently on
        if ($vm.VMHost.Name -eq "HyperVHost1.yourdomain.com")
            { $desthost = $host2 }
        else
            { $desthost = $host1 }

        # Keep the VM object returned by move-vm so the host check above
        # sees the VM's new location on the next pass
        $vm = move-vm -vm $vm -vmhost $desthost -jobvariable movejob

        # Only count the migration if the move job completed without error
        if ($movejob.ErrorInfo.DetailedCode -eq 0)
        {
            $rawmigrations = get-content -Path C:\temp\counter.txt -TotalCount 1
            $migrations = [int32] $rawmigrations
            $migrations++
            $migrations
            set-content -Path C:\temp\counter.txt -value $migrations
        }

        # Random delay so the next migration is not predictable
        $wait = get-random -minimum 60 -maximum 240
        start-sleep -seconds $wait
    }
    while ($true)


    To enhance this you could also randomise the guest being migrated, and if you have a cluster with more than two nodes you could randomise the destination host.  If I get bored over the next few weeks I’ll update it to do these things.
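    As a rough sketch of those enhancements (untested, using the same VMM cmdlets as the script above; the VM name filter is a placeholder), you could pick the guest and destination host at random on each pass:

    ```powershell
    # Sketch only: randomise both the guest and the destination host each pass.
    # Assumes an existing connection to the VMM server, as in the script above.
    $vms = get-vm | where { $_.Name -like "TestVM*" }   # placeholder filter
    $vm  = $vms | get-random

    # Pick any host other than the one the VM is currently running on
    $desthost = get-vmhost | where { $_.Name -ne $vm.VMHost.Name } | get-random

    move-vm -vm $vm -vmhost $desthost
    ```

    Dropping that into the body of the do loop in place of the fixed $vm/$desthost logic would give you a much less predictable long-running test.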

  • Application Virtualisation – Agent or Agentless Part II?

    In my previous post I talked about some of the advantages of using agent based technology to deliver virtual applications.  There were a couple of things that I forgot to include when I wrote that post.  So continuing on from that list, here are the extras:

    5. Inventory

    If my application content is separated from my agent, I can query the agent to find out what applications are present on the machine.  That means that, using tools like System Center Configuration Manager, I can find out which virtual applications are where (just because they’re virtual doesn’t mean you don’t have to pay for them).

    6. Usage information

    My agent can now collate all the usage information about my virtual applications and report that into a single location, using a single mechanism that is independent of the virtual application itself.  Now I can get accurate information about how my users are using the virtual applications, and use that to optimise my licensing.


    Those six points demonstrate that to be truly effective with virtualisation, the management component is essential.  And the only way to effectively provide an application virtualisation solution that is enterprise-ready is to use an agent based technology.  Anyone selling you an agentless application virtualisation solution is not selling you any cost savings or flexibility, they are selling you complexity and overhead.

  • Application Virtualisation – Agent or Agentless?

    One of the things I sometimes hear from my customers when we’re talking about Application Virtualisation is “why do you need an agent for virtualising applications, especially when some of your competitors don’t?”.  My answer to them is “what do you think an agentless solution does except introduce management overhead?”.  Sure it looks attractive in the short term since all you have to do is distribute an executable to your clients and they can run virtual applications. 

    However I can think of several reasons why an agent based technology is superior.

    1. Management of the virtualisation agent

    Let’s say that I discover a security vulnerability in my virtualisation agent, or I want to add new capability to my application virtualisation client (for instance, in the App-V 4.5 release, we added the capability to stream over HTTP).  If I’m using an agentless technology and want to update my client, I have to go off and rebuild every package that I’ve deployed, and then redistribute that package to each of my clients.  With agent based technology I just deploy an update to my agent software and can leverage that new technology without rebuilding my application packages.

    In addition, with an agentless approach I can’t easily tell what version of the virtualisation agent is embedded in all the packages out there.

    2. Duplication of resources

    If I have an agent based technology, when I launch a virtual app I’m sharing the same agent resources (CPU, memory, disk and so on) as any other virtual apps.  When I’m running agentless, every virtual app that I launch starts its own instance of the virtualisation agent, each adding its own overhead to the process.

    3. Speed

    If I have to extract the agent and execute it every time I launch the application, that adds overhead to my application launch time.

    4. Scriptability

    Because we’ve separated the virtual application content from the virtual application client, now we can do some more interesting things with the client.  Things like prepopulating the cache with certain applications, or removing applications from the machine at the command line.
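    As an illustration, the App-V 4.x client exposes this kind of control through its SFTMIME command line (the package name below is a placeholder, and these commands assume a machine with the App-V client installed):

    ```powershell
    # Sketch only: drive the App-V 4.x client from the command line via SFTMIME.
    # "MyVirtualApp" is a placeholder package name.

    # Pre-load a package fully into the local cache
    sftmime LOAD PACKAGE:"MyVirtualApp"

    # Remove a package from the machine
    sftmime DELETE PACKAGE:"MyVirtualApp"
    ```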


    Overall having the virtual application content separated from the virtual application client allows us to have a more manageable implementation, and this will in turn lead to significantly reduced cost of ongoing maintenance over an agentless environment.  If you can’t manage your virtual apps, you’ve missed half the benefit of having them virtualised in the first place.

  • Save money managing your virtual hosts

    As I said in my post below, I read a lot of blogs.  I’ve been meaning to write a response to a post I read on Vcritical.com for a while, and haven’t got around to it.  Vcritical is written by Eric Gray who is a VMware employee, so clearly he has a biased view, much as I do given my employer :).  However it’s always fun to debate the issues, and we shouldn’t shy away from respectful disagreement.

    Eric wrote a post a while back “Save $14970 on VMware ESX Management” where he said to save money managing your VMware hosts, don’t use Virtual Machine Manager, and then pointed out that the cost of the Server Management Suite Enterprise license is $1,497 per physical host.

    I think Eric is understating the value that the SMSE license brings.  SMSE doesn’t just give you the Operations Manager management license for the host, but also for all the guests running on that host, along with the management agents for Configuration Manager, Data Protection Manager & Virtual Machine Manager.  So now we get not only detailed information about what’s going on at the host level, but also detailed application-level information about what’s going on in all the guests, plus backup, plus inventory, plus patching, plus software distribution, plus desired configuration management, plus self service VM provisioning, plus a whole bunch of other capability.

    For me the deep application information is really important.  Sure, you could take a black box view of the VM and just treat it as a CPU, memory and disk IO consuming thing, but just because it’s virtualised doesn’t mean you don’t want to know what’s going on in that VM.  If that VM is running BizTalk you’d want to know that the BizTalk services are behaving the way they should, if the VM is running Exchange you want to know that mail is flowing, and if it’s running SQL you want to know that the SQL databases aren’t running out of space.  The black box view is great if all you want to do is play musical chairs with your VMs, but if you really want to know that they’re doing what they’re supposed to, you need deep application knowledge.  And that’s where Operations Manager excels.

    And forget that Eric tells you that PRO is too hard to configure – it’s not.


    Just to be explicit on my comment policy: comments are moderated on this blog, but I’ll publish every comment that isn’t offensive, spam or defamatory.  It just takes me a while sometimes.