Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

May, 2011

  • Direct access to DirectAccess

    My favourite thing about Windows 7 working together with Windows Server 2008 R2 is DirectAccess, not because of how cool the technology is, nor because it's just built in, but because it gets me out of jail on so many occasions. If you aren't aware of it, it's the power of a VPN without all the pain, either for you or for your users. I mention it now as I had to pull down decks from our team SharePoint site for UK TechDays Live while at this event just a few minutes ago. Despite the appalling network speeds at the venue, and everything but http and https (ports 80 & 443) being blocked, I got what I needed.

    Not only that, but some of the Microsoft speakers at TechDays could leave their demos in the office and run them from the event over Remote Desktop Services (albeit from a wired connection on stage to get the necessary speed). It has also just annoyed me – I typed “directaccess” into IE9 and it took me to an internal site rather than executing a search on Bing so I could put some resources into this post – doh!

    This is not the only example where Windows 7 and Windows Server 2008 R2 can be combined to create extra functionality, and these often get missed as they don't fall into a specific discussion on desktops or data centres. To rectify this, Simon and I have decided to do a specific session on this dynamic duo as part of TechDays Live. We also want to try out more collaboration in the Live Meeting to make it more informative, so do register and join us live if you can to participate.

  • Get your own best of the Microsoft Management Summit bits

    Many of us couldn't afford to make it to Vegas for MMS 2011, so last month the UK TechNet team put together a one-day event, Best of MMS, and we managed to get quite a few of the US speakers over for it. However, the day quickly sold out, and I know many of you couldn't get to it because it's now very hard to get a day out of the office, justify travelling expenses and so on. So everything was filmed, and you can now watch the sessions here, as well as download all the decks from the day.

    However, watching someone else playing with all the new stuff, or making sense of a slide deck from an event you didn't go to, is only so exciting; personally I would rather have a go myself, and now we can..

    The System Center engineering team has announced its Community Evaluation Program (CEP). Members of this program are able to evaluate early versions of products with guidance from the product team, sharing experiences and best practices among a community of peers. Multiple programs are now open and accepting applications, including:

    Configuration Manager 2012 Beta 2 - Program starts April 2011

    • Download the Datasheet
    • Apply to Configuration Manager CEP

    Virtual Machine Manager 2012 Beta - Program starts May 2011

    • Download the Datasheet
    • Apply to Virtual Machine Manager CEP

    Forefront Identity Manager 2010 R2 - Program starts June 2011

    • Download the Datasheet
    • Apply to Forefront Identity Manager CEP

    Orchestrator 2012 Beta - Program starts June 2011

    • Download the Datasheet
    • Apply to Orchestrator CEP

    Operations Manager 2012 Beta - Program starts July 2011

    • Download the Datasheet
    • Apply to Operations Manager CEP

    To apply and for more information about the program, please visit Microsoft Connect

    I’ll also be knocking out a few install and configure videos as I find the best way to learn is to explain to someone else.

  • Virtualising SQL Server - a second opinion

    I caught a really good presentation from Chris Kranz (www.wafl.co.uk and @ckranz), Principal Solutions Architect responsible for storage and virtualisation projects at leading systems integrator Kelway (UK) Ltd, at the Leeds VM user group last month. Chris first encountered virtualisation technologies back in 2005 with early releases of VMware ESX, has since gained the highest level of certification with VMware, and continues to expand his knowledge of Microsoft and Citrix virtualisation technologies.

     

    His talk was on virtualising  tier 1 SQL Server applications and he has kindly translated this into a guest post  -  take it away Chris.. 

    There are some real challenges around virtualisation at this stage, but (un)fortunately most of these are due to being over cautious about the technology now available or holding on to some of the key concepts and fears from when virtualisation first burst into the x86 market some years ago.

    Many people have now started the journey into virtualisation, and many have a little experience of the process of virtualising smaller applications. This has been hugely successful and a favoured route for people to follow while they are getting to grips with this new technology. Grab the “low hanging fruit” and virtualise these first, then learn from mistakes and move on and up the service stack. The challenge comes in that many people stall at around 70-80% virtualised; around this time we get to the business critical or performance sensitive applications. Through the initial virtualisation process, performance issues and capacity issues are often encountered and dealt with through the normal learning curve, but this creates a concern and business hesitance for tier-1 applications, specifically for SQL DBAs!

    However an important point to understand is that virtualising tier-1 applications should not be the same as virtualising the rest of the infrastructure! Tier-1 is very important and business critical, so why consolidate this? The platform costs are rarely an issue as these systems pay for themselves through business services and their availability is hard to put proper costs on as they are critical to the business actually functioning. So we need to create new rules for virtualising tier-1 that are different from standard virtualisation.

    • Encapsulation of any workload is going to make it easier to protect, easier to replicate, and easier to upgrade!
    • Dedicate resources as they are required. There is no issue with a 1:1 consolidation ratio of tier-1 applications if you require 100% guarantee of resources. But don’t forget that the hypervisor requires resources!!!
    • Ensure that the sizing is correct, not just CPU and memory, but disk and network also! These are all critical areas and something a tier-1 application will use differently than other virtualisation candidates.
    • Make sure the design is validated and you do your maths! Double-check any calculations supplied and don't make assumptions unless you have a way of validating them later.

    A very important statement to drill home: “Good practices are industry recommendations. Best practices are validated by yourself based on your services”. Don’t believe everything you read and make sure you validate your design yourself. Even this article here, validate my recommendations and others to prove that they work for you. Public Enemy said it best: “Don’t believe the hype”.

    Check with your vendors and resellers about licensing and support. Many software vendors are now fully on board with virtualisation: Microsoft has the Server Virtualisation Validation Program (SVVP), and VMware has an ISV portal specifically designed to encourage ISVs to get VMware approval. Hassle your ISV if they haven't certified yet! CPUs can be restricted in a virtual environment, so you could actually get licence cost savings – but make sure you do your research. There's no point in me covering the specifics of licensing here as the vendors seem to change their minds quite often! A different licensing model may be more effective once you are virtualised than the way you license the systems today.

    Virtualisation efficiency has improved phenomenally over the years. Six years ago, when I first got into virtualisation, the overheads were 20% or greater; today, with better CPU efficiency, hardware offloading and general hypervisor improvements, the overhead should be less than 5% on CPU and memory. IO throughput is large enough to satisfy even the most resource-intensive applications. The advantage of virtualisation is that you have the ability to choose between scale-up and scale-out with fairly easy and simple management and control. Both offer their own arguments and different styles in making your infrastructure agile, although a scale-out approach tends to be favoured to create a more modular infrastructure. Remember this is just one point of view – work out what works best for you and validate!

     

    Scale Up Approach

    • Multiple databases per VM
    • Fewer VMs
    • Greater single point of failure
    • Larger VMs (SMP overheads)
    • OS bottlenecks
    • Less flexibility for load balancing
    • Greater impact on maintenance

    Scale Out Approach

    • Single database per VM
    • Better workload isolation
    • Easier change management
    • Load balancing more effective
    • Faster migrations
    • Greater performance due to multiple bus and IO planes
    • Quicker provisioning

    While you're analysing the infrastructure, consider moving the servers to a scale-out design when virtualised. This will make the adoption quicker, as you can migrate single instances and applications rather than entire database servers. It will also minimise any downtime or impact from configuration issues as you migrate, and the affected services will be greatly reduced. Another nice feature of virtualisation is being able to create virtual appliances. This allows you to tie a database server to an application server and package these up, which is really compelling when you come to consider moving to a shared infrastructure, as you can pick up individual applications and services and move them around much more flexibly.

    While talking scale-up versus scale-out it is important to ask the question: do you really need multiple CPUs? Do you understand the overhead that SMP can cause? While you may never question that SQL requires multiple CPUs, do other VMs that share the same hardware require them too? Whenever you move from 1 CPU to 2 CPUs you add the overhead of CPU scheduling: every CPU thread needs to be scheduled at the same time, and if that number of cores is not available then the CPU is put into a wait state and the transactions pause. Additionally, don't forget the hypervisor – a SQL VM with 8 vCPUs assigned to a dual-socket, quad-core system will be in direct competition with the hypervisor, as the SQL VM will be requesting all CPUs at once. This is important in the planning stages, as you don't want to create this sort of contention. Remember that with Windows 2008, CPU and memory can be hot-added to virtual machines. I have seen application servers run quicker as single-CPU servers because of the CPU scheduling overhead on a larger shared infrastructure. Do your testing and don't give away SMP unless it is proven to give a performance improvement.
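    If you want a quick, rough indication of whether a SQL instance is actually short of CPU before handing it more vCPUs, the signal wait percentage is one crude measure. This is a minimal illustrative sketch rather than a definitive test, so interpret it alongside proper baselining:

    ```sql
    -- Rough CPU pressure check: signal waits are time spent waiting for a scheduler
    -- (i.e. for CPU) rather than for a resource such as a lock or IO.
    SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS decimal(5, 2))
               AS signal_wait_percent
    FROM   sys.dm_os_wait_stats
    WHERE  wait_time_ms > 0;
    ```

    A persistently high figure suggests the VM is queuing for CPU; a low one suggests extra vCPUs are unlikely to help.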

    Remember to read and follow the Microsoft best practices for SQL deployments. These don't change fundamentally just because it's a VM; you should still follow the basic rules defined by Microsoft. Evaluate your load and run your performance planning when the SQL Server is under intensive load – it's no use getting the capacity reports for a 5-day business week if the SQL Server does a huge amount of batch processing and backups over the weekend! While you're sizing the solution, make sure you consider the back-end storage requirements and the connectivity media: 1GbE iSCSI may not be able to deliver if you are serving storage over 4Gb FC today. This can have a huge impact on performance if it's done wrong! Also consider defragmenting your SQL databases (http://support.microsoft.com/kb/943345). This can bring a big performance benefit to your SQL Server, but make sure you consult your storage administrator, as it can have a major (negative) impact on the SAN! Monitor all aspects of the existing implementation, including disk performance and queues – high queues could show that there is an existing bottleneck that may be hiding other potential configuration issues. Make sure that you always have dual paths to the storage and that you are using full multi-pathing software when it is available.
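    Alongside file-level defragmentation, it's also worth checking index fragmentation inside the database before and after the move. Here is a minimal sketch; the 30% threshold is only a common rule of thumb, so validate it against your own workload:

    ```sql
    -- List indexes in the current database with noticeable logical fragmentation
    SELECT  OBJECT_NAME(ips.object_id)           AS table_name,
            i.name                               AS index_name,
            ips.avg_fragmentation_in_percent
    FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN    sys.indexes AS i
            ON  i.object_id = ips.object_id
            AND i.index_id  = ips.index_id
    WHERE   ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC;
    ```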

    When sizing and placing the storage, don't forget that multiple virtual disks on a single datastore or LUN will not provide physical IO separation, and you may require this for separating database files or logs. Make sure the back-end storage is built in such a way that you are using as many spinning disks as possible, or if you are lucky enough to have SSD or Flash, make full use of what is available! If you create separate data files for each of your databases (which can be a very good idea to spread the database load across multiple storage areas), make sure you use equally sized data files: SQL uses a proportional fill algorithm that favours allocations in files with more free space, so if these are all equal in size the allocation across all files should be even. Hopefully it goes without saying that you should pre-size the data files, both for databases and logs. Auto-grow can have a huge impact on both performance and data fragmentation (which will affect all future read performance), so if you haven't sized the data files effectively at the start, grow the database manually to prevent too many file extensions. If you do use auto-grow, set the growth increments large enough to minimise the number of extensions a database will have to make. Make sure that log files are designed the same way; pre-size these for the expected load (typically 10% of the database size). A good rule of thumb is that a separate TempDB data file should be created for each CPU in the system, so size for the number of vCPUs. Testing has shown that above 4 TempDB files the performance improvements start to diminish, so look at a ceiling of 4 (even on 8 or 16 CPU systems).
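    To make the pre-sizing advice concrete, here is a hedged T-SQL sketch for a 4-vCPU VM. The database name, file names, sizes and the T: drive are all made up for illustration, so substitute your own values from your capacity planning:

    ```sql
    -- Pre-size the user database data and log files instead of relying on auto-grow,
    -- and set a sensible fixed growth increment as a safety net.
    ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Data, SIZE = 50GB, FILEGROWTH = 5GB);
    ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_Log,  SIZE = 5GB,  FILEGROWTH = 1GB);

    -- One TempDB data file per vCPU up to a ceiling of 4, all the same size so the
    -- proportional fill algorithm spreads allocations evenly across them.
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 1GB);
    ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 4GB, FILEGROWTH = 1GB);
    ```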

    Monitoring the performance is probably the most important aspect of virtualisation, especially after you have virtualised. It helps you pre-empt any configuration issues and plan for performance growth and requirements. Some key areas to monitor from within PerfMon are the following…

    • CPU
      • Ready
      • Usage
    • Memory
      • Active
      • Swapin
      • Swapout
    • Storage
      • Commands
      • deviceWriteLatency
      • deviceReadLatency
      • kernelWriteLatency
      • kernelReadLatency
    • Network
      • packetsRx
      • packetsTx

    These should give you a good overview of how the system is performing. Also check performance at the hypervisor level and make sure you don't have any contention issues. A traditionally simple way to diagnose CPU contention is to check whether the clock is more skewed than you expect, as the system clock usually relies on CPU cycles.
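    To complement those hypervisor-side counters, it can be useful to compare them with the IO latency SQL Server itself records per file. This is a minimal illustrative sketch of that kind of ad-hoc check:

    ```sql
    -- Average read/write latency per database file as seen by SQL Server,
    -- useful to cross-check against storage and hypervisor level counters.
    SELECT  DB_NAME(vfs.database_id)                              AS database_name,
            mf.physical_name,
            vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)   AS avg_read_latency_ms,
            vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0)  AS avg_write_latency_ms
    FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN    sys.master_files AS mf
            ON  mf.database_id = vfs.database_id
            AND mf.file_id     = vfs.file_id
    ORDER BY avg_read_latency_ms DESC;
    ```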

    Virtualising SQL should not be such a dangerous project, and if you approach it carefully with a clear plan of what you want to achieve and how you want to achieve it, it will be a success! You don’t need expensive infrastructures or lengthy consultancy engagements, with the above information and knowledge you should be able to approach this and succeed.

    The last thing I’ll leave you with is this: validate your own designs and develop your own best practices! My recommendations are only good practices!

  • NULL <> ISBlank()

    While Excel is definitely the most widely used BI end-user tool, SQL skills are often needed to get data from a source and into Excel, be that Excel on its own or with an add-in like PowerPivot. There are many big differences between Excel expressions and SQL, but it's the little ones that floor us sometimes – in this case the difference between the way BLANK works in Excel and NULL in SQL:

    In SQL:

    NULL + 3 = NULL

    NULL + ‘Deep Fat’ = NULL

    However in Excel, if I have a cell with nothing in it, say AF1, such that =ISBLANK(AF1) returns TRUE, then:

    =AF1 + 3 returns 3

    =AF1 & “Deep Fat”  returns “Deep Fat”
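    If you want to try the SQL side yourself, here's a minimal T-SQL sketch of the same expressions, together with the ISNULL wrapper you need to get the Excel-style answer:

    ```sql
    -- NULL propagates through both arithmetic and string concatenation in SQL
    DECLARE @n int         = NULL;
    DECLARE @s varchar(20) = NULL;

    SELECT @n + 3;                        -- NULL, not 3
    SELECT @s + 'Deep Fat';               -- NULL, not 'Deep Fat'

    -- Handle the NULL explicitly to mimic Excel's treatment of a blank cell
    SELECT ISNULL(@n, 0) + 3;             -- 3
    SELECT ISNULL(@s, '') + 'Deep Fat';   -- 'Deep Fat'
    ```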

    This divergence goes back a long way, and to change it now would be like trying to get us to drive on the same side of the road as the French.

    Anyway just FYI as it caught a few of us out the other day but I am not naming any other names!

  • Do What You Love

    “Happiness lies in being privileged to work hard for long hours in doing whatever you think is worth doing.”

    -  Robert A. Heinlein, “To Sail Beyond The Sunset”

    I want to make two comments about that quote: firstly, it pretty much sums up working at Microsoft; secondly, it is on the e-mail signature of one of my colleagues, Dave Wickert (PowerPivot Geek), and illustrates some of the freedom we have. This ethos is partly why I stay at Microsoft, the other part being trust and mutual respect; I am trusted to be smart on my blog and my ideas are respected on merit.

    I mention this because occasionally I am asked if there are any jobs here, and yes there are, all detailed on the Microsoft vacancies portal for the UK. It's how I applied 4 years ago, and although the process was time consuming and tough I can definitely say it was worth it.

    So if you want to do what you love then you could do worse than apply.

     

    PS that’s where the Do What You Love icon on my blog takes you

    Red Badge v2

    PPS my other favourite e-mail signature is from private cloud expert Tim Cerling:

    ..with enough thrust, pigs fly just fine

  • Dear Diary

    Good diary management is career mission critical at Microsoft, and it may well be the same for you; if it is, I have some free stuff for you to try. The Microsoft BI team has released a calendar analytics spreadsheet that shows you how you are spending your time. It works by connecting to Exchange, picking up the appointments for a named e-mail account and then stuffing the data into PowerPivot. The spreadsheet has all sorts of analytics in it and an instructions tab to show you how to get it working. It's also Office 365 aware, so if your e-mail lives there this will still work.

    Here’s what it looks like with my diary for May 2011..

    image

    If this sounds interesting then you need to get the PowerPivot add-in for Excel (you'll need Excel 2010, and there are x86 and x64 versions of the add-in). Then you need to download and install the Calendar Analytics spreadsheet, which will end up on your desktop. Initially it will have dummy data in it; to get it to pick up your diary, follow the instructions on the instructions tab.

    I wonder what this would look like for Dr Who!

  • SQL Server AlwaysOn in a virtual world

    Despite what I posted yesterday about clustering, mirroring is the preferred option for many DBAs in a virtual world, although you can of course combine the two. On the up side, mirroring gives you one (and only one) copy of the database in another location, which you can't really use, and failover is instantaneous; the down side is that the connection needs to be mirror aware (e.g. OLE DB) and you are only protecting one database at a time.

    However, the SQL Server product team has a cunning plan for the next release (codenamed Denali): originally called HADRON and now called AlwaysOn, it provides mirroring-like ease of use but is built on Windows Server Failover Clustering (WSFC). Don't panic – all you need to do in WSFC is turn on the feature and set up a cluster..

    cluster

     

    As you can see from the above, you don't need shared storage, so forget about FCoE, iSCSI, Fibre Channel etc. You then enable AlwaysOn in Configuration Manager by right-clicking on the SQL Server service and selecting its properties, specifically the SQL HADR tab (you'll need to do this on each node in the cluster)..

    hadron

     

    The next step is to create an availability group from SQL Server Management Studio (under Management), select some databases to include in it, and you're done. One catch here is that the databases have to be suitable – the full list of restrictions is here, but essentially they need to be in full recovery mode and backed up. The wizard does all the work for you, and of course there are T-SQL commands to do this programmatically, as well as PowerShell support for control and management, and the wizard will generate a script for you as well.
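    As a flavour of the scripted route, this is roughly what that generated T-SQL boils down to. Treat it as a hedged sketch: the syntax here reflects later builds rather than CTP1, and the server names, endpoints and database are invented for illustration:

    ```sql
    -- Create an availability group for SalesDB with a primary and one replica
    -- (run on the primary; names and endpoint URLs are placeholders).
    CREATE AVAILABILITY GROUP AG_Sales
        FOR DATABASE SalesDB
        REPLICA ON
            N'SQLNODE1' WITH (
                ENDPOINT_URL      = N'TCP://sqlnode1.contoso.com:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE     = AUTOMATIC),
            N'SQLNODE2' WITH (
                ENDPOINT_URL      = N'TCP://sqlnode2.contoso.com:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE     = AUTOMATIC);

    -- Then, on the replica instance, join it to the group
    ALTER AVAILABILITY GROUP AG_Sales JOIN;
    ```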

    Here you can see I have declared a replica server (in ctp1 you can only have one), and I can decide which connections to allow..

    availibility

    The process also sets up endpoints (note the option to encrypt data)..

    endpoints

    You can then synch the data (I created a share called backup for this)..

    data synch 1

    and then when you go back to management studio you can see that you have something like mirroring setup..

    hadr completed

    So, interesting stuff, but you need to be aware that:

    • I have done this in CTP1, which only has a basic implementation of AlwaysOn (so only 2 nodes just now) compared to what will be in the final release. However, I wanted to make you aware that this is coming, as it will drastically change the way high availability is done and make your virtual world a lot easier.
    • Books Online has a big section on this.
    • Before you ask, I don't have details of when the next CTP is coming out; like you, I have to track announcements at the big US shows like WDC, PASS and TechEd.
  • Guest Clustering–What is it and do you need it?

    I am pretty sure the term guest clustering is not a standard industry term, nor is it official Microsoft speak, so what do I mean by it? I mean the business of creating a cluster out of a group of virtual machines – you might call this a virtual cluster, but that might confuse it with clustering physical machines to run virtual machines on them so that no physical machine is a single point of failure.

    Clustering of any kind is about improving availability, and if your physical servers are clustered why would you then cluster the virtual machines running on them?

    The answer is to improve the availability of the application in the virtual machine. A physical cluster will fail over if there's a hardware or physical operating system problem, but it can't respond to a dead service like SQL Server, while a guest cluster will do this, and it also allows you to take nodes offline for maintenance, patching, upgrades etc. while keeping the rest of the cluster running.

    Windows clustering of any kind has traditionally been seen as difficult and for me this all changed in 2008 with the dramatic changes to clustering both in Windows Server 2008 and in SQL Server 2008:

    • Windows Server clustering is supported on any platform capable of running Windows Server itself, as per the Hardware Compatibility List. When you configure clustering in Windows it will run a series of checks to confirm it will work, and you'll need to run this check if you want support, as it's the first thing the Microsoft engineers will ask for. With SQL Server, the resources you assign to an instance in the cluster must be available on each node to support automatic failover (CPU, memory, storage etc.).
    • SQL Server went back to its roots in 2008 as far as clustering was concerned – each node is maintained individually rather than trying to patch the whole cluster, as was the case for SQL Server 2005. In practice you take a node offline, patch or upgrade it, then bring it back online, and repeat for the next node.

    I understand that implementing guest clusters places a lot of restrictions on VMware, like:

    • Only being able to use a 2-node cluster
    • Only using Fibre Channel for shared storage
    • No VMotion
    • No memory overcommit

    Note: I am not a VMware expert, so please check the VMware resources and forums for the definitive word on this, and please correct me if I am wrong.

    I do know that in Hyper-V a guest cluster behaves exactly like a real cluster and there isn't this loss of functionality, so virtual machines can be live migrated and moved between physical servers as required. In other words you have the power of clustering and the added flexibility of virtualisation, and this shouldn't affect your performance, licensing etc. compared with a physical cluster running on the same spec of machine.

    Try it yourself:

  • Microsoft VDI – it exists and it works

    I really enjoy going to technical community events, even the ones that are run by VMware, but you would think from the VMware ones that Microsoft doesn't have a Virtual Desktop Infrastructure (VDI) solution at all. I don't intend to turn this blog into a feature comparison of the two or anything like that; I just want to explain what there is and that it does actually work. I do have a private theory that anything really good from Microsoft is often not widely publicised, and the VDI stack is a good example of this.

    Firstly, what is VDI, how does it differ from good old Remote Desktop Services (née Terminal Services) which has been around for years, and why is there a need for a new approach?

    VDI is the business of providing a pool of virtual machines, or individual ones, running Windows client to your users. In an extreme example you would migrate each user's desktop PC to a virtual machine (VM), run all those VMs on a server and replace the desktop PCs with thin client devices. This can achieve some power savings and save you having to manage those physical machines; however, you still have to manage the OS in each virtual machine, and the applications users want to run, in pretty much the same way as you did before. Those back-end servers have got to be pretty powerful too, so they can run all those VMs, plus they have to be bought and paid for.

    I have already written this post showing that you can support more users per server with Remote Desktop Services (RDS) than you can with VDI, so why bother with VDI, especially when you can virtualise applications with App-V onto the standard RDS desktop, giving each user the apps they need on a group-by-group basis? Some applications aren't easily suited to this approach – trading systems in banks, CAD and other compute-intensive solutions such as development environments – and VDI enables these.

    Given that VDI can result in higher costs than RDS, it is essential that VDI is managed efficiently and that you get the most out of the hardware and licences you are using. Given my views on RDS, you might be forgiven for thinking that Microsoft isn't serious about VDI. A quick review of the recent developments that affect VDI shows that it is:

    • Hyper-V in Windows Server 2008 R2 SP1 has two killer features:
      • Dynamic Memory increases VM density, as you can set a minimal startup memory for each VM and then implement rules about how each of these can be assigned more memory as needed, and which VMs win when overall memory is low.
      • RemoteFX allows the graphics card in the physical server to be virtualised and shared across multiple VMs, for applications that need more graphics power.
    • System Center has extensive VDI support, for example:
      • the provisioning of client VMs from predefined templates in Virtual Machine Manager, together with process automation in Opalis and Service Manager
      • patch management and software inventory in Configuration Manager
      • incident management in Service Manager
    • Remote Desktop Services provides access to the virtual machines in exactly the same way as it does to traditional remote desktops. This means you can provide one portal where users, including remote users, can get the desktop they have been assigned.

    The resources on Remote Desktop Services are already pretty extensive, but to help get you started on VDI I would ask you to join Sarah Mannion and myself next Tuesday (10th May) for our TechDays Online session on VDI. Sarah is the expert on VDI in Microsoft and advises enterprise customers on deployment and best practice, so do try and make it.

  • Extreme Standardisation

    I spent yesterday at the Best of the Microsoft Management Summit in London, and rather than have endless debates about what is or isn't private cloud, the term extreme standardisation was used to explain Microsoft's approach to the modern data centre. Although this term doesn't explicitly refer to virtualisation, that is a good thing:

    • You don't have standardisation just because you have implemented server virtualisation, so there is a different emphasis on what the objective is – not a fluffy (no pun intended) overused marketing term.
    • However, if you think about it, virtualisation is the first step to having some sort of standardisation – you need a standard way of moving services around, and virtualisation gives you that.

    Standardisation can only be truly achieved if it is automated. It also plays well into the non-technical aspects of running IT in a business: if you have an automated process to provision a service, the rules to meet those standards can be written into that process.

    Extreme standardisation manifests itself in other ways as well:

    • templates for services
    • a consistent process for patching
    • decommissioning of services and virtual machines
    • SLAs, which in turn translate into rules to apply when a service is under pressure – what to allow to expand, which services take priority and so on – rather than a goal or an idea without a clear end result
    • how to charge for services

    All of this together is private cloud as per the National Institute of Standards & Technology (NIST) definition, but my point is that this approach articulates it in terms of outcomes.

    If that sounds a bit more real and practical than Private Cloud, you might want to find out how to do some of this stuff, and I would suggest taking a day out of the office to come to TechDays Live for the event on Transforming your Datacentre with Hyper-V and System Center.

  • SQL Azure Update

    The annoying thing about cloud services like SQL Azure is that they keep changing as more features are added.

    The great thing about cloud services like SQL Azure is that they keep changing as more features are added.

    So if I write a post about version x of SQL Server, it has some value while that version is still being used somewhere; with SQL Azure, however, the old version disappears as soon as the service changes. This catches a lot of people out, including Microsofties like me, and my good friend Alan le Marquand, who owns a lot of the content on the excellent new Microsoft Virtual Academy (MVA), is no exception. He posted up the SQL Azure content for internal review and got feedback that it was already out of date, so he asked me to reshoot part of it.

    Essentially the SQL Azure interface has now changed, to integrate it with other Azure services and to bring in the lightweight management tool introduced as Project Houston. If you go into MVA you can watch me setting up the firewall rules..

    As well as this I need to post a quarterly update on SQL Azure to explain what's changed and what's coming next, so I'm adding that to my Outlook to-do list now.