Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

April, 2011

  • Hyper-V integration components, notes and queries

Virtual machines need resources from the host operating system/hypervisor, and in the Hyper-V world this is done through integration components. The same is true of Virtual PC; however, the Virtual Machine Additions in Virtual PC are totally different from the Hyper-V integration components, so although Virtual PC uses the same VHD format for virtual hard disks as Hyper-V, moving a virtual machine from Virtual PC to Hyper-V means doing a few things:

    1. Uninstall the Virtual Machine Additions.

2. Insert the Hyper-V integration components from the Action menu in the VM console.

    3. "Detect HAL" in msconfig after moving to Hyper-V

a. Open the System Configuration utility (MSConfig.exe): click Start > Run, type msconfig, and then click OK.

    b. Click the Boot tab, and then click Advanced options.

    c. Select the Detect HAL check box, click OK, and then restart the virtual machine.
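If you'd rather script that last step, the same setting can be flipped with bcdedit; a minimal sketch, assuming (as far as I know) that detecthal is the boot option behind that MSConfig check box:

    # Equivalent of ticking "Detect HAL" in MSConfig - run elevated inside the VM
    # (the braces need quoting when called from PowerShell)
    bcdedit /set '{current}' detecthal yes
    # Restart so Windows redetects the hardware abstraction layer
    Restart-Computer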

This will then cause at least one reboot. What has happened to this VM is that all the hardware it was running on has now changed; in the physical world this is the equivalent of me taking my hard disk out of my Dell laptop and sticking it in Simon’s HP laptop – it’s going to get upset, but should sort itself out.  The same sort of thing applies to moving VHDs between Citrix XenServer and Hyper-V, as Xen also uses the VHD format.

Integration components also need to be upgraded, and as I have already mentioned you might need to do this for Service Pack 1 of Windows Server 2008 R2, which adds dynamic memory to Hyper-V.  Question: how do you know which integration components are installed in a given VM?

For people to whom Hyper-V is just another way to get stuff done, my top tip is to download the new Hyper-V Best Practices Analyser (delivered as a KB update), which adds functionality into Server Manager so that when you expand the Hyper-V role it tells you what you need to do:

[Screenshot: Hyper-V BPA results in Server Manager]
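You can also drive the BPA from PowerShell via the BestPractices module; a rough sketch (I'm assuming the Hyper-V model ID here, so confirm it with Get-BpaModel on your own server):

    # Load the Best Practices Analyser cmdlets (Windows Server 2008 R2)
    Import-Module BestPractices
    # List installed models to confirm the Hyper-V model ID (assumed below)
    Get-BpaModel
    # Run the Hyper-V scan and pull back anything that isn't just informational
    Invoke-BpaModel Microsoft/Windows/Hyper-V
    Get-BpaResult Microsoft/Windows/Hyper-V | Where-Object { $_.Severity -ne 'Information' }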

If you have SCVMM in your environment then this command (courtesy of Michael Michael) will get you the answer:

    PS D:\Windows\system32> get-vm | select name, hostname, hasvmadditions, vmaddition | format-list

For the serious scripting-orientated data centre admin, Christian Edwards has a blog post on scripting the answer from the registry key.
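As a rough sketch of that approach, run inside the guest (the key and value names here are from memory, so check Christian's post):

    # Read the integration services version from inside the guest
    # (assumed key and value names - verify against Christian's post)
    Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Virtual Machine\Auto' |
        Select-Object IntegrationServicesVersion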

So integration components need to be treated with the same consideration as any drivers: keep them current or they will affect performance and reliability.

  • Help stamp out SQL Server neglect

I am thinking of starting a new royal society for the prevention of cruelty to SQL Server, as I am seeing my favourite database suffer a lot of abuse out there as part of the blind rush to virtualisation.  A good example is Gillian, a DBA I met at SQLBits who was nearly in tears because she was getting loads of timeouts after her data centre administrator subjected SQL Server to this treatment.  What made her really upset was that this ‘virtualisation expert’ couldn’t see the problem; however, her users certainly could, and this was obviously very real for her.

I don’t know exactly what the root cause of the problem was in that situation, but I routinely have conversations of this kind.  So to protect and care for your SQL Server, treat it the same way you would a pet:

    Understand its dietary needs

SQL Server thrives on healthy IO with plenty of RAM and CPU (in that order).  IO is the one that gets neglected, as it’s not obvious where you declare it in most hypervisors.  It is all too easy to set up a VM that asks for lots of IO and doesn’t get it, even though the actual storage in the physical environment is not stressed, as in Gillian’s example.

    Regular health checks

Looking at a VM from the outside is no indication of what's happening to the services on the inside. You also need to understand your performance before you virtualise. Gillian did this bit fine: she knew her databases were all healthy in her physical environment and understood their performance.  However, when they went virtual, no attempt was made by the data centre admin to provide comparable resources or even test that performance before going live.

There are the obvious free tools like Perfmon in the OS and MAP (the Microsoft Assessment & Planning Toolkit), which you can download.
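For a quick check-up from PowerShell, a minimal sketch like this samples the counters SQL Server cares most about (the threshold in the comment is my rule of thumb, not gospel):

    # Sample the IO, memory and CPU counters that matter to SQL Server
    # (12 samples at 5-second intervals, run on the SQL Server itself)
    $counters = '\LogicalDisk(*)\Avg. Disk sec/Read',
                '\LogicalDisk(*)\Avg. Disk sec/Write',
                '\Memory\Available MBytes',
                '\Processor(_Total)\% Processor Time'
    Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
        ForEach-Object { $_.CounterSamples } |
        Select-Object Path, CookedValue
    # Disk latencies consistently above ~20ms usually mean starved IO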

    Regular Exercise

My favourite approach for checking the health of SQL Server and other services is to exercise them in System Center Operations Manager, using synthetic transactions to test response times and error conditions on a periodic basis from a designated machine.  This lets you check that you are achieving your SLA, and warns you if you aren’t before the help desk even rings.
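You can knock up a crude synthetic transaction yourself outside Operations Manager; a sketch of the idea (the server name and query are placeholders):

    # A crude synthetic transaction: time a known query against a known server
    $conn = New-Object System.Data.SqlClient.SqlConnection(
        'Server=MYSQLSERVER;Database=master;Integrated Security=True')  # placeholder
    $elapsed = Measure-Command {
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = 'SELECT COUNT(*) FROM sys.databases'
        [void]$cmd.ExecuteScalar()
        $conn.Close()
    }
    # Compare against your SLA and raise an alert if it is breached
    "Response time: $($elapsed.TotalMilliseconds) ms"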

    Obedience Training

SQL Server 2008 has policy-based management, which can be used to check that all your servers, databases and objects in those databases conform to rules that you define; you can also evaluate policies against earlier versions of SQL Server.
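Policies can also be evaluated from PowerShell with the Invoke-PolicyEvaluation cmdlet in sqlps; a minimal sketch (the policy file and server name are placeholders):

    # Evaluate an exported policy file against a server, from a sqlps session
    Invoke-PolicyEvaluation -Policy 'C:\Policies\DatabaseAutoClose.xml' `
        -TargetServerName 'MYSQLSERVER' `
        -AdHocPolicyEvaluationMode Check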

    Treats

Show your affection for your database by giving it regular treats such as new service packs and hotfixes.  Don’t forget the same applies to the operating system and the hypervisor, as these can also significantly affect performance.  A good example is the recent Service Pack 1 for Windows Server 2008 R2, which adds dynamic memory.

     Dog Whisperer

The closest thing we have to this is the SQL CAT (Customer Advisory Team), who have excellent white papers on optimising SQL Server on Hyper-V and a new one on SQL Server and the private cloud.  There are also resources on VMware’s site for optimising SQL Server on their hypervisor, so ignorance is no excuse. Of course you can also call in the many performance tuning experts in the UK, and many of them have already come across virtualisation starvation issues.


So don’t give virtualisation or SQL Server a bad name: read the documents, do the work, and listen to your DBAs.

  • Cloud and emerging economies

I just noticed a comment on our UK TechNet blog about how the public cloud could put emerging economies at more of a disadvantage than developed ones.

The comment I saw specifically mentioned broadband as the problem, as cloud services need internet connectivity.  However, while the dependency on connectivity can sometimes be greater than if your services ran in your own data centre, this is not always the case. Think about an e-commerce site running locally: the speed your customers see is only as good as your outbound connectivity. If that service were in the cloud, this problem would be replaced by the speed customers get to the cloud site.  Actually, broadband in many economies is better than in the UK – my friend Senthil in Bangalore gets 20Mbps to my meagre 1.5, so who’s deprived here?  The other factor is GPRS, which is almost universally available and bypasses the need for landlines for phones and data, and its cost will only come down while performance improves.

However, there are other barriers to adopting technology in emerging economies beyond pervasive internet connectivity, for example:

    • Availability of hardware and support.  You might be able to procure hardware locally but how long do you have to wait for support to turn up or spares to arrive?
    • Power in many emerging economies can be unreliable and often worse than mobile reception. 
    • Data security adds more cost in terms of more hardware for high availability.

The cloud takes all of this away, and providers like Microsoft can choose to make their offerings affordable in these emerging markets, whether to stimulate new markets or out of a sense of citizenship.

So I don’t see this as a problem; in fact I think the reverse is true: the cloud will enable emerging markets to compete globally for business, and enable health and education programmes to run more efficiently.

    Discuss..

  • Event Handling for the IT Professional

My diary is solid for the next few weeks, as there seem to be a lot of events on. This is good for me, as the best bit of my job is swapping ideas and stories with like-minded individuals, not to mention the odd lively debate.  I get the impression that many of you enjoy this too, and I often get asked why we don’t do more of it; that would be partly down to costs, not just ours but yours too, as it’s a big ask for you to get away from your desks for a morning or a whole day.

Obviously online events are less expensive to put on and to attend, so you’ll notice a lot more of these. My diary has these in:

• TechDays Online, a series of live meetings on three themes: public cloud, private cloud or IT as a service, and the optimised desktop.
• Also within the TechDays Online series there is a special System Center afternoon on Thursday 21st April, looking at the vNext editions of all the key products in the suite, including Opalis and AVIcode.
• Microsoft Virtual Academy – these aren’t exactly live meetings; they are basic training to bridge the gap between little or no knowledge of a topic and starting to get certified in it.

    The Offline events I know about are:

• TechDays Live, Hammersmith, 23rd–26th May; each day covers a different topic:
  • Optimised desktop
  • Public cloud
  • Private cloud and IT as a service
  • Mission-critical applications for the enterprise (this one is at Microsoft Victoria and is already full)
• SharePoint event with polymorph this Thursday
• SharePoint Best Practices Conference, QEII Conference Centre, Westminster
• DevConnections London, with tracks for the IT professional and the developer, including ScottGu, Paul Randal and Kimberly Tripp (note this is a paid-for event)


It may not be obvious, but these are all UK events; of course anyone around the world is welcome to join the online meetings, and we’ll do our best to use BBC English to help our international guests get the most from them.

  • Thoughts on Dynamic Memory

Dynamic memory has some people worried.  I get it, because even with my monster orange laptop (a Dell Precision M6500) with 16GB RAM and 600GB of storage across 3 SSDs, I can see my memory is better optimised: I can be really aggressive with the memory each virtual machine gets on start-up and let my priority settings handle the excess. At the other end of the scale, my good friend Jeff Woolsey gets it because he was able to provision more virtual machines on less memory for the labs at MMS.

    Why is this possible?

Airlines give you a baggage allowance, but the plane wouldn’t take off if everyone used that allowance; they rely on some people just having carry-on bags. The situation isn’t quite the same with dynamic memory, as demand changes moment by moment while each VM asks for and releases memory. What this means is that you can cram in more virtual machines, because you can start them with the bare minimum and then allow the rules you define to decide who wins when there is pressure for memory. For example, in a VDI scenario you might be able to get 25–50% more VMs running, according to another good friend of mine, Kleefy.

    So what’s to worry about?

Pushing and pulling memory in and out of Windows can make understanding what’s happening to your memory harder.  Possibly, and this is already a reality on other virtualisation platforms, but ask yourself why you need that understanding: to second-guess the way Hyper-V works out what to give each VM, or to tune and understand a particular service?  For the latter, I would point you to the excellent post the SQL Server team have done on dynamic memory, and then apply the same thinking to your application.
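If you do want to watch what dynamic memory is up to, SP1 adds Perfmon counters on the host; a sketch along these lines (the counter set and names are as I remember them, so verify in Perfmon first):

    # Watch dynamic memory from the host (counter names from memory - verify in Perfmon)
    Get-Counter -Counter '\Hyper-V Dynamic Memory VM(*)\Current Pressure',
                         '\Hyper-V Dynamic Memory VM(*)\Physical Memory' |
        ForEach-Object { $_.CounterSamples } |
        Select-Object InstanceName, Path, CookedValue
    # Pressure around 100 means the VM has roughly what it wants;
    # sustained higher values mean it is being starved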

You might have thought there was some sort of smoke and mirrors going on with dynamic memory, and I would point you to another colleague, Hector Linares, who is delighted that even VMware get that dynamic memory works OK.

And of course, if you are worried about it in certain scenarios because you have proved it’s an issue, then turn it off for that virtual machine.

    My final thought about these concerns is that it works for me, and you can only make that statement if you have turned it on and tried it yourself.

  • Using System Center Advisor

In my last post I fired up the new release candidate of System Center Advisor; this is what it looks like after I have left it running across the couple of servers I pointed it at..

Just one thing to note: in my last post I introduced Advisor as “essentially System Center Operations Manager in the cloud”, but that doesn’t mean it is actually a replacement for SCOM – for a start it doesn’t collect real-time data from your servers; currently it’s just once a day. This means you don’t get the alerting and event handling in SCOM that makes so much automation possible.

So it’s worth looking at if you haven’t got SCOM already, but you then have to action the feedback the tool gives you yourself.

  • Bye bye Atlanta, Hello System Center Advisor

One of the many announcements to come out of the recent Microsoft Management Summit was System Center Advisor. Essentially this is Operations Manager (SCOM) in the cloud, so a sort of Intune for servers.

I had a look at the precursor to this, Project Atlanta, back in January, but at the time it was just focused on SQL Server management. In the new Advisor, however, you can also manage Windows Server health, specifically Hyper-V and Active Directory, as well as the general health of the operating system.

    Of course this means my Atlanta screencasts are now obsolete so here’s the replacement screencast on setting up Advisor if you haven’t tried it before..

[Screencast: setting up System Center Advisor]

Here are the things you need to know if screencasts aren’t your thing:

• Obviously you aren’t going to want to connect all your Windows Servers to the internet, so you can set up a gateway in Advisor which does connect to the Advisor cloud service, and then configure the agent on each server to be managed to point at this gateway.
• The gateway sends the data it gathers to the service once a day in the release candidate.
• You can access the service from IE7 or later, or Firefox 3.5 or later, but in both cases you must have Silverlight installed.
• The agent needs .NET 3.5 SP1 or later installed.
• SQL Server 2008 and later can be managed in all editions, including Express, for both 32-bit and 64-bit installations.
• The data collected is private to you and protected to the same standard as other Microsoft online services like Azure, Office 365 and Intune.

This is just a release candidate, so I expect more and more environments will be included, in much the same way as there are numerous management packs for on-premise SCOM today.  It could be well worth looking into, and for now it’s free as well as being very easy to set up.

If that’s too much trouble, then you might want to come to our Private Cloud TechDay on 24th May in London for more on this, or tune into the Managing the Private Cloud TechDays Live session on 19th April with Gordon McKenna (from Inframon) and me.

  • KPI confusion

I saw a discussion the other day having a go at the fact that it is possible to create KPIs in a number of Microsoft tools, and how confusing that can be, so I thought it would be good to explain why there is a choice at all.

Firstly, the term KPI means key performance indicator; I mention this because of the word “key”.  In any business there should be relatively few KPIs: for example, inside Microsoft we have 31, and despite pressure to increase that number it has stayed the same for the last 5 years. They are set once a year and may be readjusted at the half-year point, so there isn’t too much work in creating them, but they need to be available across the business, so they need to be on the company intranet.

Each KPI is typically the responsibility of a manager and their team, and to hit that number each division and department will have its own dashboard and subordinate KPIs.  This hierarchy cascades down until each member of staff can identify their objectives with those KPIs.

Of course, measuring performance is one thing; achieving it is another.  Even if things are going well there will always be the desire to improve, and to understand the underlying factors behind successes and failures.  The same type of tools used to create KPIs can be used for this, and I would call these just PIs, as they aren’t so key any more.

This is one of the reasons that there are several tools to create indicators in the Microsoft BI platform:

SharePoint

At the strategic end of the business, where the ‘true’ KPIs are created, these need to be widely available, and so dashboards, scorecards and tools to create KPIs are embedded into SharePoint Enterprise.  The data for these can come from pretty well any structured source (I have yet to see a BI solution without at least one Excel spreadsheet as a source), and not just from Microsoft products.

    Excel

At the other end of the scale, you can do quite a lot to create things that look like KPIs in Excel, and if you are in a five-man company this would be more than sufficient to keep your finger on the pulse of the business.

    Analysis Services

KPIs have been a feature of cubes since SQL Server 2005, and this is a very powerful feature that is rarely used. The value of putting them here is really good performance, plus the fact that once they are created in a backend data store they can be accessed from any tool, Microsoft or not, that can connect to the cube. For example, you can import these KPIs directly into Dashboard Designer and deploy them to SharePoint, and there are some 35 third-party tools that work with Analysis Services if you don’t or can’t invest in SharePoint.
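To show what “any tool” means in practice, here is a rough sketch of pulling a cube KPI from PowerShell via ADOMD.NET (the connection string, cube and KPI names are placeholders; KPIVALUE, KPIGOAL and KPISTATUS are standard MDX functions):

    # Query a cube KPI with ADOMD.NET (all names below are placeholders)
    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.AnalysisServices.AdomdClient')
    $conn = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection(
        'Data Source=localhost;Catalog=MyWarehouse')
    $conn.Open()
    $cmd = $conn.CreateCommand()
    $cmd.CommandText = 'SELECT { KPIVALUE("Revenue"), KPIGOAL("Revenue"), KPISTATUS("Revenue") } ON COLUMNS FROM [MyCube]'
    $reader = $cmd.ExecuteReader()
    while ($reader.Read()) {
        # Columns come back in the order requested: value, goal, status (-1 to 1)
        "Value: $($reader.GetValue(0))  Goal: $($reader.GetValue(1))  Status: $($reader.GetValue(2))"
    }
    $conn.Close()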

    Reporting Services

Since SQL Server 2008 there have been all sorts of traffic lights and gauges in Reporting Services.  This can be a good option for sharing performance indicators with third parties: perhaps to show parents how a school is performing, or in a business-to-business scenario like suppliers and supermarkets.

However, what makes any one of these a true KPI is the data, not the tool: if the number, trend and status in the traffic light, speed gauge or thermometer is key to your business, then it’s a KPI. My only concern is that you use the right tool to make sure the right people can see the up-to-date status quickly and simply, and that there are backend systems in place to keep it up to date.