Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

June, 2011

  • The 7P’s of projects

    Proper

    Preparation &

    Planning

    Prevents

    Pathetically

    Poor

    Performance

I mention this because the process of working out how to make changes to your data centre can, and should, take longer than doing the actual work.  For example, if you are going to dive into virtualisation (or do more with it), or upgrade or migrate your database servers, then a lot of testing and checking needs to be done.  It’s even harder if you are planning to change platforms, and harder still if you don’t even know what you’ve got, which is why migrations and updates can be a great opportunity to de-clutter the servers and applications you have.

But where to start? If it were me I would try to work out the size of the problem, and to do that I would use a killer utility, the Microsoft Assessment & Planning Toolkit (MAPT). Essentially this casts a microscope over your infrastructure and comes up with a load of analysis and recommendations.  Obviously it can only do this if you give it high-powered access to your systems, as it needs to interrogate Active Directory, the network, WMI etc. to work.  If you want to connect to non-Microsoft servers such as Linux, MySQL, VMware etc. the toolkit allows you to enter credentials for those as well.
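To give a feel for the sort of interrogation the toolkit does under the covers, here is a minimal PowerShell sketch of pulling basic hardware and operating system details over WMI from a couple of machines. This is not how the toolkit itself is driven (it is a wizard-based tool); the server names and output path are made up purely for illustration.

    # A rough idea of the WMI-style inventory the toolkit performs - not the toolkit itself.
    # The server names and CSV path are illustrative; swap in your own.
    $servers = 'SERVER01', 'SERVER02'

    $inventory = foreach ($server in $servers) {
        $os = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $server
        $cs = Get-WmiObject -Class Win32_ComputerSystem  -ComputerName $server
        New-Object PSObject -Property @{
            Server   = $server
            OS       = $os.Caption
            MemoryGB = [math]::Round($cs.TotalPhysicalMemory / 1GB, 1)
            CPUs     = $cs.NumberOfProcessors
        }
    }

    $inventory | Export-Csv -Path 'C:\temp\inventory.csv' -NoTypeInformation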

I have two uses for it: the first is spotting wild copies of SQL Server, which gets installed all over the place with various third-party applications:

    mapt 6 b

and the second is to get recommendations about which servers could be virtualised and how that would work on a set of physical servers you specify.

    mapt 6 a

     

(BTW the insufficient data is nothing to do with my blog; it’s just that there were some redundant entries in my Active Directory and the toolkit couldn’t find them.)

    There are three other things it does:

1. Logging the performance of the servers you select, enabling you to establish a baseline for planning.

2. Gathering an inventory of the software you have.

3. Providing a reference section linking to solution accelerators to help you implement your project.

The Toolkit is under constant development; at the time of writing it’s up to version 6 beta, and this version also has assessments for migrating databases to SQL Azure and readiness for Office 365.

• SCO Saturday – part 1: What is System Center Orchestrator?

Of all the bits of System Center to come out next year it is Orchestrator (SCO) I am most interested in, as it is one of the enablers for creating a private cloud. It’s a new bit of the System Center suite and is the glue that not only integrates the rest of System Center, but also most of the popular non-Microsoft tools in the systems management space, e.g. HP, BMC, CA, Symantec and VMware. It has been developed from the acquisition of Opalis, and if you were familiar with that, Orchestrator is very similar except that it is now in .NET and has a much simpler installation process.

Very simply, if you have to do the same task more than ten times then it's probably something that should be automated.  For the low-level techie this might mean scripting in PowerShell, and although this might work and you might be able to reach all of the moving parts you need to, it is difficult to change, maintain and debug (there’s a minimal sketch of that scripted approach after the list below). On the other hand, Orchestrator allows you to map and design the process visually with very little coding:

• Each step (object) of the process (a runbook in Orchestrator speak) might be working with a different part of your infrastructure, such as the file system, Active Directory, e-mail etc.  This is where the rich integration in Orchestrator shows up, as it understands what other tools can do and what data they can supply and respond to.  This knowledge about what systems management tools can do is encapsulated into integration packs, which are rather like management packs in System Center Operations Manager (SCOM).
• Each step can get, set and pass data to another step, like a user name, a machine name or an error condition.  This published data, as it is called, passes along a data bus and is eventually stored in a SQL Server database.
• You have flow control to wait for an event, loop, and conditionally branch.
• You then build and deploy this task from your designer to the Orchestrator server.
• Monitors can be set up to initiate a workflow by watching for events in your system.  Note that in System Center Service Manager 2012 runbooks can be directly invoked as part of an incident.
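To put that scripted alternative into context, here is a minimal sketch of the kind of repetitive task you might otherwise hand-code in PowerShell: checking a service across a list of servers and starting it if it has stopped. The server and service names are made up, and the point is that once a script like this grows to touch Active Directory, e-mail and the helpdesk system it becomes exactly the sort of thing that is easier to express as a runbook.

    # Hypothetical repetitive task: make sure a service is running on a set of servers.
    # Server and service names are illustrative only.
    $servers     = 'WEB01', 'WEB02', 'WEB03'
    $serviceName = 'W3SVC'

    foreach ($server in $servers) {
        $service = Get-WmiObject -Class Win32_Service -ComputerName $server `
                                 -Filter "Name='$serviceName'"
        if ($service.State -ne 'Running') {
            # Start the service and note the outcome - a runbook would publish
            # this sort of information onto its data bus for the next step.
            $result = $service.StartService()
            Write-Output "$server : started $serviceName (return code $($result.ReturnValue))"
        }
        else {
            Write-Output "$server : $serviceName already running"
        }
    }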

    Creating the right runbooks will transform your data centre into more of a private cloud, for example:

    • A runbook can respond to a helpdesk incident to provision a new service.
• Doing that endlessly, however, will mean you end up running out of physical servers, so other runbooks can be created to end-of-life servers that are no longer needed.
• Runbooks can respond to events indicating services are idle or are under pressure, and reallocate resources to better balance operational needs.

If all that sounds interesting then the beta of Orchestrator is now available to download. If you do plan to evaluate it, here are a few things you need to know:

• Orchestrator has a server component and a web console to monitor it, and then there is a designer to create your runbooks.  The easiest way to deploy Orchestrator is to put everything, including the designer, on one server and remote desktop into it. The only exception might be to deploy its data store to an existing database server.
• The integration packs I mentioned for connecting to System Center and other systems management tools aren’t included in the beta download (they will be for release), but you can get them by downloading the Opalis 6.3 180-day trial, as these are nearly all compatible with Orchestrator.
• If you have been using Opalis before you can export your policies and then import them into Orchestrator as runbooks.  There may be some legacy objects that won’t work or need to be adapted, which will block a runbook from being deployed, and you’ll have to fix that.
• The service account that Orchestrator uses needs the following privileges (a rough sketch for granting these follows the list):
      • local administrator on the Orchestrator server
      • log on as a service (set in local policy)
• db create privileges on the SQL Server instance where your Orchestrator database (datastore) will reside. Note: this is a beta-only issue
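Here is a rough sketch of setting two of those up; the account name and SQL instance are placeholders, and the “log on as a service” right is normally granted through Local Security Policy or Group Policy rather than scripted.

    # Illustrative only - the service account and instance names are made up.
    $svcAccount  = 'CONTOSO\svc-orchestrator'
    $sqlInstance = 'SQL01'

    # 1. Local administrator on the Orchestrator server
    net localgroup Administrators $svcAccount /add

    # 2. "Log on as a service" is granted in Local Security Policy
    #    (secpol.msc > Local Policies > User Rights Assignment) or via Group Policy.

    # 3. Create the login and grant dbcreator on the SQL Server instance that will
    #    host the datastore (Invoke-Sqlcmd comes with the SQL Server client tools).
    Invoke-Sqlcmd -ServerInstance $sqlInstance `
        -Query "IF SUSER_ID('$svcAccount') IS NULL CREATE LOGIN [$svcAccount] FROM WINDOWS; EXEC sp_addsrvrolemember '$svcAccount', 'dbcreator';"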

I recorded my installation process here if that helps…

Next Saturday I’ll show you how to design a runbook to interact with virtual machines using the integration pack for System Center Virtual Machine Manager.

  • Notes & Queries from TechDays 2011

I was at TechDays Live all last week and although I did a bit of presenting, I was mainly there to listen and keep the week running smoothly.  I scribbled down a few notes during the various sessions I was in, which I thought might be of interest:

• Easyjet have an amazing set of stories around their IT.  For example, their internet site has been down for just one two-hour period over the last four years, and that was down to a switch failure.  In all that time SQL Server itself has never been offline, and they have one DBA.
• The Royal Mail is moving to Hyper-V, and although this and Office 365 might seem like a loss in revenue for their outsourcing partner, CSC, it is actually beneficial for CSC as well.
• I got hassled by a Data Protection Manager fan about the fact that it is amazing yet Microsoft UK never present any content on it.  What can I say - it just works, and hopefully we will have a session on it when all of System Center 2012 launches; in the meantime check out the DPM blog by my good friend Jason Buffington.
• Ridgian, a SharePoint partner, provided a rich geospatial experience for Cherwell District Council. What I found interesting is that Ridgian had cracked the problem of converting Ordnance Survey coordinates (easting and northing based) so that data.gov.uk data could be plotted onto Bing Maps (which uses latitude and longitude).
• EMI, with help from another partner, Adatis, have implemented the new Master Data Services in SQL Server 2008 R2 to help fill in the blanks in their data warehouse.
• The North West Ambulance Service asked Ascribe to implement the Microsoft BI stack to help with resource planning, and now that’s mission-critical for them.  BTW I was on an IT PRO webcast with them a couple of months ago.
• Matt McSpirit’s session comparing VMware with Hyper-V + System Center was my personal favourite, in that he was one of the best presenters and his content was simple yet effective.
• I went to Planky’s excellent Azure Boot Camp…

     

    IMG_0127

     

so that I could understand how the various roles work together; however, Simon and I spent quite a bit of lab time helping the developers get all the prerequisites installed.  Where we couldn’t fix the problem (e.g. delegates on XP can’t run IIS 7) I ended up doing some ad hoc VDI from my uber Dell Precision M6500, which was pretty good considering we were running Azure and remote desktop over RDP for four sessions over the one adapter.  The lab was pretty straightforward otherwise, and despite my lack of C# skills I got through it, so it might be good to do a re-run for IT Professionals interested in the cloud - let me know what you think.

The decks for the week will all be available from the site where you registered, as well as the session recordings, and I’ll update this post with the links.

  • How the Private Cloud affects your IT department

My desktop counterpart Simon May posted a great article on How the public cloud affects your IT department, so here are my corresponding thoughts on the private cloud…

Firstly, the Private Cloud definition Microsoft uses is from the National Institute of Standards & Technology (NIST).  For me this is about taking best practice from running large-scale data centres and making it available, as techniques and software, to those organisations who can’t or won’t move their services to a public cloud provider. This efficiency comes from automation: automation to respond to known events, and automation to handle requests for change from customers.

Automation means less human intervention, giving repeatability and consistency, with a by-product of reduced response times to requests.  However, the human bit of human intervention is the IT Professional, so does that mean that they are so much excess baggage in a modern organisation?

    To answer that I will use the same roles Simon used in his article:

    The helpdesker (a.k.a. Frontline support analyst)

These guys are still needed, especially in dealing with the “how do I?” requests from users, but of course their ability to help individuals will be limited by the service level the business wants from the help desk.  Cutting back the help desk can be short-sighted, as its ability to help the user population can greatly increase that population’s productivity, but this is hard to measure and therefore this might be under threat as well.

In a private cloud world a helpdesk request of “fix it for me, it’s broken” should mean that the helpdesk team will know about some issues before they are raised, because there will be management tools like System Center in place.  The service monitoring tools will tell them that a service is down, while the desktop management suite will pick up any issue with the user’s desktop (be it physical or virtual) before they do.  So less time is spent searching for what’s wrong, leaving more time to concentrate on resolving the issue.

     

    The desktop technician (a.k.a. 2nd level support, a.k.a “Dave in IT”)

If the desktop itself is the problem then in a private cloud style world it is either commoditised or virtualised and can be swapped out.  Even in the world of VDI the desktop is the desktop, and most tech problems are down to drivers, apps and connectivity, so these guys will still be pretty busy. However remote administration, either with EasyAssist in Windows 7 or Remote Desktop, means that problems can be solved remotely. In my world Dave is a guy in Salt Lake City or Ravi in Hyderabad depending on the time of day.  This is a good thing as we have 24-hour cover and we can be green and inclusive in our workforce by giving these people the tools to work at home.

The other key type of request that goes through the service desk is a request for something new, be it an application, a service on a virtual machine or a new starter, and these will all be automated once the request has been through an approval process. This will have an impact on the size of the team, but this should be offset by spending more time on strategic projects like implementing upgrades and designing new services.

    The server huggers

On the surface of it, the private cloud is much more likely to reduce the number of data centre administrators by allowing each one to manage many more servers (be they physical or virtual).

Actually these guys are already in decline as physical servers are virtualised.  However virtualisation alone is NOT private cloud, so this decline could be offset by out-of-control virtual server sprawl.  In any case the routine administration of servers has already been outsourced in many companies and government departments, and the pressure is on the large system integrators (Cap Gemini, CSC, EDS etc.) who now run those data centres to make further savings and pass them on to their customers.

So for my money the only roles at risk are the ones that involve a lot of repetitive process, and let’s face it, humans aren’t too good at that. Of course this type of work has already been eroded by offshoring and outsourcing, so actually the Private Cloud is just evolution, not revolution, here. Also skills in data and user management will continue to be required as before, no matter where the database, mailbox or content management system actually is. The other thing to bear in mind is that the automation the private cloud brings to the data centre will be balanced by increased demand for richer services, compliance, and the raw demand to store more stuff.

    Summary

I think uncertainty is the issue here; uncertainty about the survival of your business, the future of your role in it, and, if you have been doing the same thing for a while, what else is out there.  I can’t advise you on how to fix your business, but the mitigation for all of these issues is education and training.  You don’t need to outrun the lion of redundancy, you just need to be running faster than the other guys he’s chasing, and to do that in a toughening job market you need more skills and the right attitude.

    So my top tip would be to start with the free Microsoft Virtual Academy before going on to some wider or later certification.  If you aren’t in work at all right now then Britain Works would be the place to start.

  • Mixing the public and private cloud

No one except startups will completely embrace the public cloud immediately. This means that a mixed environment of some services on site and some not will persist for some time in many organisations.  For any given service, a decision will be made about where to put it: in the public cloud, to expand business opportunity and change the costing model for it, or kept under tighter control in-house.

The tools and techniques to make this a really simple process are only just emerging, and of course the application itself has to have some changes to make it flexible enough to cope with this. While many applications can be ported to Azure, whether they are .NET, Java or PHP, to make them work properly they will probably need some changes, if only to make them more fault tolerant so they can cope with the longer latency and distributed nature of the cloud.
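As an illustration of what “more fault tolerant” can mean in practice, here is a minimal retry wrapper in PowerShell; the operation, retry count and delay are all arbitrary, and a real application would use whatever retry support its own data access layer provides.

    # A simple retry loop for an operation that may fail transiently over a
    # higher-latency link - purely illustrative, the numbers are arbitrary.
    function Invoke-WithRetry {
        param(
            [scriptblock]$Operation,
            [int]$MaxAttempts  = 3,
            [int]$DelaySeconds = 5
        )

        for ($attempt = 1; $attempt -le $MaxAttempts; $attempt++) {
            try {
                return & $Operation
            }
            catch {
                if ($attempt -eq $MaxAttempts) { throw }
                Write-Warning "Attempt $attempt failed: $($_.Exception.Message). Retrying in $DelaySeconds seconds."
                Start-Sleep -Seconds $DelaySeconds
            }
        }
    }

    # Example: a web request that might time out the first time.
    Invoke-WithRetry -Operation { (New-Object System.Net.WebClient).DownloadString('http://example.com/') }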

But that’s not the world of the IT Professional. Our world is managing infrastructure, and in the Microsoft world System Center is front and centre for this; it is evolving to enable unified management of what is referred to as the hybrid cloud:

• System Center Virtual Machine Manager 2012 will have server application virtualisation, which will allow applications to be packaged up and deployed to Azure.
• Concero, the newest bit of System Center, will give you the elusive single pane of glass showing your public and private clouds.
• The Azure Management Pack for System Center Operations Manager 2007 R2 (CU3 or later) allows you to see how your application is performing in Azure.

There are also the management tools in Azure itself, including the database manager for SQL Azure and Data Sync to keep an on-premises database in sync with its cloud counterpart.

    Simon and I are back on air for TechDays OnLine next Tuesday for a session on Mixing and Moving Services between the Private and Public Cloud and I can think of three reasons why you should tune in…

1. We’ll be able to show you this stuff and field your questions.
2. I think this is the future, and it’s more important than ever in this uncertain world that you are up to date with the cloud as it affects the IT Professional.  A good example of this trend is that Fujitsu have just launched their own Azure-powered Global Platform Service so that customers in Japan can get a local service, which should mitigate any concerns about data privacy.
3. Samantha, our awesome intern, has sorted out a competition to win a Samsung 42” plasma TV; you just need to follow the clues in the sessions.
  • OLTP performance in Denali - Less is more

I get the occasional criticism about SQL Server that all the new stuff is related to BI, and there isn’t anything for the traditional or grumpy DBA (aka Colin Leversutch Roberts).  Listening to Tom Casey (corporate vice president within the Microsoft Business Platform Division) at the SQL Social last week, his point was that the only way to make SQL Server transactions perform faster on a given bit of kit is to take stuff away.  A good example of this in Denali is the new Always On capability which combines Windows Server clustering technologies and database mirroring to give:

• Multiple copies of the mirrored* database, which might be in remote offices; in Denali parlance these are referred to as secondaries.
• Availability groups, meaning that a group of databases can be mirrored as a unit.
    • The secondaries are readable.

    Making these secondaries readable is the key to transaction performance:

• Any of the secondaries can be used as a source for reporting, as it is nearly up to date, freeing up the principal to do more transactions (there’s a small sketch of this after the list).
• The same applies to backup, except that you can only make full backups of a secondary.
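As a sketch of what offloading reporting might look like, this simply points a report query at the instance hosting a readable secondary rather than at the principal; the instance, database and table names are placeholders, and the exact connection options for readable secondaries were still settling down in the Denali CTPs, so treat this purely as an illustration.

    # Illustrative only - instance, database and table names are made up.
    # The report reads from the readable secondary, leaving the principal free
    # to handle the transactional workload.
    Invoke-Sqlcmd -ServerInstance 'SQLSECONDARY01' -Database 'Sales' `
        -Query 'SELECT OrderDate, SUM(TotalDue) AS Revenue
                FROM dbo.SalesOrderHeader
                GROUP BY OrderDate
                ORDER BY Revenue DESC;'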

However there is also one addition in Denali that might speed up transactions: column-based (columnstore) indexes.  As the name suggests, these store columns of data in pages, rather than the traditional approach where rows get stored in pages.  This offers massive performance benefits, and these indexes can be significantly compressed, as the values in any one column are often similar.  So if a transaction needs to do a lookup then these indexes could help; of course you’ll need to test that, and the use of these indexes will just show up as part of the query execution plan, like we have in earlier versions of SQL Server.
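For context, this is roughly what creating one of these indexes looks like; the instance, table and column names are hypothetical, and the syntax may change before Denali releases.

    # Hypothetical fact table - a nonclustered columnstore index in Denali.
    # Invoke-Sqlcmd comes with the SQL Server client tools.
    Invoke-Sqlcmd -ServerInstance 'DENALI01' -Database 'DW' `
        -Query 'CREATE NONCLUSTERED COLUMNSTORE INDEX IX_FactSales_ColumnStore
                ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount, Quantity);'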

BTW if you are in Manchester (15th June) or Leeds (16th June) I’ll be at the SQL Server User Group meetings to go into some of this in more detail.

* In mirroring, the live database, i.e. the one that transactions are applied to first, is called the principal; the offline copy is called the mirror, and in all versions up to Denali there can only be one mirror.

  • Have a word with your accountant

I spent yesterday afternoon at a CIMA members’ meeting to discuss enterprise 2.0, on the back of some commissioned research by Manchester Business School (Heba El-Sayed and Chris Westrup). It occurred to me that a lot of this plays well into the world of the IT professional, not least because both professions are under the same sort of threat: the loss of work to outsourcing and automation.  More importantly, though, accountants see cloud, BI and enterprise 3.0 as all parts of the same thing.  In another presentation the top reasons to move to the cloud were cited as:

• Improved billing
• Speed of implementation
• Reduced IT spend and avoiding the next refresh
• Flexibility & scale
• No more upgrade hell
• Global reach

These accountants don’t seem too worried by security, and cite such examples as ADP, the world’s largest payroll company, which is now all online.  However the most alarming thing about the various presentations was a lack of knowledge about what the cloud is in its various forms.  For them it is essentially software as a service, i.e. a vendor supplying their application from their own servers.  For me this is a classic example of the sort of thing the future IT professional should be doing - advising other parts of the business on new technology and how that translates into value for the business.  You would also quite rightly explain some of the risks and implications, many of which would also apply to outsourcing and hosting:

• Possibly being locked into one vendor - so how easy is it to switch to another or bring the data back in-house?
• What is the availability of the system, and what redress is there in the event of an outage?
• Where is my data stored, e.g. is it within the EU?
• What are the costs, so the accountant can model ROI?

So I would suggest that you initiate these conversations to show your thought leadership and support for the business; these discussions are going on in any case, and you should be involved.  If you need help to articulate your thinking, and evidence to back it up, your starter for ten would be Microsoft’s cloud portal, and for extra points, on 21st June Simon and I will be looking at Governance, Risk and Compliance in the cloud in our next TechDays Live session (BTW these sessions are recorded if you can’t make the date).

  • Volunteering with SharePoint 2010

The voluntary sector is actually quite similar to the rest of business; it’s just that the profits are then used to drive the aims of the not-for-profit or charity rather than benefitting shareholders or partners.  So the challenges of market share and operational efficiency are just as relevant to the voluntary sector, and therefore the way that software is used in these organisations is in itself relevant to anyone.

Global Xchange is one such charity; they provide youth volunteering services as part of VSO and the British Council, both overseas and more recently in the UK.  Persuading people to donate to charities is one thing, but getting them to give up their time is much harder, so the reasons must be compelling, and if your website is your main communication engine, then that needs to be compelling too. So Global Xchange commissioned Rubicon to create a rich social media experience which could be kept fresh without any input from the IT team…

    image 

Another part of the Rubicon brief was to allow potential volunteers to go through a process to get access to the Global Xchange extranet once they had been accepted, so that they could share documents and resources when they were on the ground working on their volunteering project.  They could also share their experiences to encourage the next wave of young people to sign up as well.  I think this is a great job, and so do Global Xchange, as this short video shows…

     

     

So a rather unusual project, and Rubicon took the rather unusual step of building the whole solution around SharePoint 2010, as that gave them the basics of content management and ease of use, and it is relatively easy to set up to be accessible outside the organisation as well as inside.  Rubicon did develop a number of custom web parts to meet the requirements (e.g. being able to create user-defined cascading menus), but the fact that this is possible shows how flexible the platform is.  In fact there is a very healthy ecosystem around SharePoint, with a number of niche partners specialising in BI, records management and, in the case of Inframon, using it as a front end for System Center to show how the data centre is behaving.

A final thing to note is that this is a hosted solution, and this gives this small organisation the flexibility to grow and expand the site to match demand without having to predict and bid for more resources internally.

• SQL Server developer training kit – too useful to be just for developers

One of the few niggles I have about working at Microsoft is the way we try and put you, the reader, into pigeonholes, and an area where this fails for me is when it comes to SQL Server.  My developer friends won’t go near SQL Server unless they have to, and things like LINQ and the Entity Framework in Visual Studio try to bridge these two different worlds.  However there is a lot of good stuff for developers in the later versions of SQL Server, be it new data types, new ways of working with data like FileStream and StreamInsight, or the data-tier application for easier deployments.

In an effort to try and get developers to feel a bit more love for SQL Server, my fellow evangelist in the US, Roger Doherty, produced the Developer Training Kit for SQL Server 2008 R2.  The good thing about this is that he keeps updating it to reflect each release and to add more labs and examples, but he’s also left the SQL Server 2008 stuff in there as well if you aren’t on the latest version.  It’s got short punchy videos and, more importantly, labs for you to work through.

But back to my point about Microsoft audience segmentation: what is a developer?

I think anyone working with SQL Server or in BI is one: unless your only job is to back it up, chances are you are changing things, hopefully for the better, and that’s as good a definition of development as I have got. So if you’re working with SQL Server you’re a developer, and you should have a look at this excellent resource.

On a final note, Roger is in the UK for Expedition Denali on 29th June in Reading, and is also at the BI Event of the Year the following day in London, co-hosted by IMGroup.