Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

December, 2010

  • SQL or No SQL?

    I have a post on NoSQL but as Azure moves on it’s looking a little long in the tooth so I thought it was time for an update.. 

    I am a big fan of Darwin because he was able to question his own beliefs and perceptions by taking a hard look at the evidence around him, and if his survival of the fittest is to be believed then the relational database will be replaced by something better. Mass extinctions generally occur when the environment changes, and in the IT landscape the biggest change is the cloud, which I can see altering the way we interact with data in online transactions.  So if we set aside the pros and cons of a traditional relational database for this work, what is there to replace it that makes it more evolved and better?

    The recent arguments I have seen seem to advocate column based databases, which are superb for querying but less efficient than row based solutions for OLTP.  This isn’t really new; I was using Sybase IQ back in 2004 and it really made Business Objects fly, so I can understand why SAP have acquired Sybase given they now own Business Objects.  So was it ahead of its time, and will it now flourish as the cloud takes off?  Possibly, possibly not, and I don’t really care, because SQL Server is evolving itself to adapt to the demands of the cloud and to handle the many kinds of data we need to keep track of:

    • SQL Server has always been able to store various forms of unstructured data, and this has got better over time: SQL Server 2008 FILESTREAM and Remote BLOB Storage allow massive files to be indexed and retrieved at nearly the speed of native file access using the Win32 API.  Denali (SQL Server vNext) will have FileTables, which take this even further and embody a lot of the principles of the WinFS work of a few years ago.
    • Denali will also have column based storage, which gives massive query improvements like those I saw in Sybase IQ, but without the need to keep a replica of the data in another product. So you can use the same SQL language and decide whether to optimise for OLTP or reporting needs.
    • As hardware evolves and operating systems address these changes, databases will make use of them, be that multiple cores, solid state storage or the availability of huge amounts of RAM.

    I can’t speculate how much of this ends up in future iterations of SQL Azure, but here’s a couple of things to bear in mind:

    • 70% plus of Microsoft development resource is working on cloud services.
    • SQL Azure gets updated every 90 days; it has 5x the storage capacity it did a year ago (and sharding allows you to exceed this if you need to), it now has spatial data types, and Reporting Services is on the way.

    So my assertion is that databases like SQL Server have continually adapted and survived over the last 35 years and will probably continue to do so for the foreseeable future.

    Happy as ever to debate this online or over coffee

  • Active Directory for Demos

    Are you bored of setting up domain controllers for your demos to work with, or not sure of all the prompts to fill in? Then try this on a clean copy of Windows Server 2008 R2..

    Create a text file called NewDC.txt and copy this into it..

    [DCINSTALL]
    InstallDNS=Yes
    ReplicaOrNewDomain=Domain
    NewDomain=Forest
    NewDomainDNSName=Contoso.com
    DomainNetBiosName=CONTOSO
    ForestLevel=3
    DomainLevel=3
    DatabasePath=%systemroot%\NTDS
    LogPath=%systemroot%\LOG
    RebootOnCompletion=Yes
    SYSVOLPath=%systemroot%\SYSVOL
    SafeModeAdminPassword=

    On the last line you’ll want to add your own password after SafeModeAdminPassword=.

    Save the file and then enter this command:

    DCPROMO /unattend:NewDC.txt

    and you’re done, except you might also want to put some accounts in there.  PowerShell is your friend here, and if you need a regular script to do this don’t forget my favourite PowerShell command:

    Set-ExecutionPolicy Unrestricted

    which allows you to run any old script  (usually with a .ps1 extension). Here’s a typical script I use..

    Import-Module ActiveDirectory
    # Create a service account and a demo user (the default CN=Users container is assumed here)
    New-ADUser -SamAccountName SQLService -Name "SQLService" -AccountPassword (ConvertTo-SecureString -AsPlainText "Pa55word" -Force) -Enabled $true -Path 'CN=Users,DC=CONTOSO,DC=COM' -PasswordNeverExpires $true
    New-ADUser -SamAccountName LabUser -Name "Andrew" -AccountPassword (ConvertTo-SecureString -AsPlainText "Pa55word" -Force) -Enabled $true -Path 'CN=Users,DC=CONTOSO,DC=COM' -PasswordNeverExpires $true
    # Add both accounts to the built-in Administrators group (referenced by SAM account name)
    Add-ADGroupMember -Identity Administrators -Members SQLService
    Add-ADGroupMember -Identity Administrators -Members LabUser
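    If you want a quick sanity check afterwards, something along these lines should do it (just a sketch, assuming the account names above):

    Import-Module ActiveDirectory
    # Confirm the two demo accounts exist and are enabled
    Get-ADUser -Filter 'SamAccountName -eq "SQLService" -or SamAccountName -eq "LabUser"' | Select-Object SamAccountName, Enabled
    # Confirm they made it into the Administrators group
    Get-ADGroupMember -Identity Administrators | Select-Object SamAccountName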

    Enjoy

  • Virtualised Domain Controllers

    My standard demo rig has a separate virtual machine (VM) running my domain controller (DC); I then have a bunch of client and server VMs, all of which belong to that domain, and I spin those up for different demo scenarios such as Business Intelligence, System Center etc.  However I have still broken a lot of best practice here, for example my DC VM also provides DHCP and DNS, whereas my desktop expert Simon runs those in another VM, albeit on the same physical server (we can only carry so much gear around with us!).

    Other variations on this theme are:

    • Running Active Directory alongside Hyper-V on the host machine, which is definitely not great for production as per my post here.
    • Creating a demo environment which has all of the services you need in one virtual machine, e.g. Active Directory, SQL Server, SharePoint, Visual Studio and Office.

    I mention this because there is best practice for domain controllers in KB888794, which discusses what you need to be aware of when virtualising them. For example my DC will take longer to boot because it’s running DNS, and Active Directory has to wait for that to come up before it can resolve names.  A lot of it is common sense, but as with all Microsoft KBs, they are created when the support engineers are asked the same questions again and again.

    One final thought..

    What is the future of domain controllers in a world of cloud based services?

  • Merry Christmas and a Happy 2011

    Like most IT professionals I usually avoid reading blogs with too much personal stuff in, so I try and avoid that in my own posts. But it is Christmas Eve so here’s your card ..

     

     

    [image: Christmas card]

    As this is supposed to be a technical blog you might like to know that this is a tawny owl that lives near me, drawn from photographs as he doesn’t like sitting still. I used Derwent Signature pencils on Aquarelle 300 matt watercolour block paper.  You’ve got the scanned copy and my wife has the original to hang.

  • Conversations on the Cloud

    One way to get interesting interviews is to put a group of people with differing views in a room and fire up the cameras.  So when Zane Adam (general manager for Azure) was over for IPExpo we thought it would be interesting to get him to talk to some of the leading lights in the VMware community about his vision for the cloud.  The result is on showcase here..

    We then got everyone’s reactions to this on film, including a few of my friends from the London VMware user group, like Barry Coombs..

    We also wanted to get his vision on how the role of the IT professional will change as more companies move some or all of their services to the cloud, so we made this..

    You might well be thinking that this has nothing to do with you, that your organisation will never have any cloud services and that you can’t see any future in it for you.  Perhaps the idea of cloud services is as ridiculous as a PC in every home seemed 30 years ago, but surely if you are working in IT you expect change, and this is the big bet that all the major players in the industry are agreed on, and agreement in our industry is pretty rare.

    So please watch and learn.

  • How Dense are you?

    One of the drivers for moving to the cloud, be that public or private, is the promised reduction in management overhead and the associated cost savings.  Part of this comes from how effectively infrastructure can be utilised, so partially idle virtual machines can be consolidated alongside each other to get the most out of that infrastructure.  The other factor affecting cost is the cost of the IT team, and it occurs to me that the simple way to measure this is the ratio of IT professionals to the amount of infrastructure being looked after.

    For example, if you are a helpdesk specialist how many desktops/users do you look after, and if you work in the data centre how many servers, physical or virtual, are you responsible for?

    The mission for Azure is one infrastructure specialist per 4,000 servers; however that’s a Microsoft target, and there will also be IT professionals working for customers who will have to do some administration and integration work on those servers. How much extra work is needed is going to depend on what sort of service you have opted for:

    • If all you are doing with the cloud is shifting your existing virtual machines to it, then who patches and maintains those virtual machines? The IT team in the customer business, not the cloud provider. While this means a customer has no more ‘tin’ to look after, there are only small savings in the number of IT professionals employed by the customer, namely the guys who actually work on hardware – a relatively small number of staff.
    • However if you move a service to the cloud then the responsibility for managing the operating system, and possibly the service itself (if it’s off the shelf like SQL Azure, or Office 365 including SharePoint and Exchange), passes to the cloud provider.  The number of IT staff the customer needs will fall, and while the cloud provider needs more staff to manage those services, their expertise, software automation and scale make this economic for the cloud provider.

    However this doesn’t mean the end of the IT professional, but rather a shift from executing tasks which can be automated and scripted to areas such as design work, governance, risk and compliance (GRC) and change management.  The increase in this work as data volumes and system capabilities grow will offset the losses in the routine work many of us have to carry out today, as will the fact that we simply don’t seem to be getting new people interested in working in infrastructure.

    So my question is: how many servers, physical or virtual, are you responsible for?

    BTW Simon has a similar survey for desktops on his blog

  • SCE Sunday part 7– Software Deployment

    Deploying software has traditionally been a major headache for IT Professionals, and System Center Essentials largely takes this away as you can see in the latest of my screencasts..

    However you need to be aware of a few things:

    • It won’t do out of the box operating system deployment.
    • Software can be handled providing it’s a folder, exe or msi file, but you don’t get the full capabilities of its big brother System Center Configuration Manager (SCCM).
    • I experienced an odd problem where the overall status screen said the deployment was still in progress when in fact it had deployed correctly.
    • I used the default clients group in my demo. In reality you’ll need to create specific groups to deploy applications to. At the simplest level this might be to split clients into 32 bit and 64 bit, as many programs such as Office come in both forms.  You might take this further and create groups for HR, accounting, remote workers and so on to deploy the specialist software they need.

    Next week I’ll look at authoring management packs in SCE. In the meantime, if you want to try SCE yourself it’s included in TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • Beta to release upgrade paths for Windows Server 2008 R2 sp1

    I see loads and loads of threads, tweets and forum questions which run along the lines of

    “What is the upgrade path from [insert any Microsoft Product here] RC to RTM?”

    To which the answer is always

    “There isn’t one – you have to uninstall RC and install RTM”.

    First let me translate the acronyms:

    The RC (Release Candidate) is pretty much the final code before RTM (release to manufacturing). So the RC is the final pre-production beta, although in the past there have been RC1 and RC2 releases.  Earlier betas are also known as CTPs (Community Technology Previews), and in between these there can be interim builds.

    The point of all this is to reinforce the partnership Microsoft has with its customers and partners:

    • Microsoft can crowd test the piece of software in the wild and ensure it is of good quality
    • Customers and partners get an early look at what is upcoming and train and plan accordingly.

    However all these betas and CTPs, including the RC, are not intended to be used in production, unless you are on an invitation-only programme like the Technical Adoption Program (TAP).  One of the reasons for this is that while the various betas mimic the installation/uninstallation experience of the released product, the upgrade option obviously won’t be there, as the beta would essentially be upgrading to itself. Occasionally you can get around this by fooling the process, as was widely noted for Windows 7 RC to RTM, but you can’t rely on this, so Microsoft’s advice stands; there isn’t an upgrade path from any beta to RTM.

    This also applies to service packs, which can have their own betas, which brings me to Windows Server 2008 R2 SP1 with its Dynamic Memory feature for Hyper-V (BTW there is also an attendant service pack for System Center Virtual Machine Manager (SCVMM) 2008 R2 to support this, which is also at RC).  Putting this anywhere near production is especially dangerous as it affects the integration components in the guest virtual machines, not just the physical machines running SP1, and so those integration components have to be uninstalled and reinstalled too.  All of this gets even more fun in a cluster, as each node has to be taken down and put back again, with the possibility that the VMs will disappear for a while depending on which integration components they have.

    I realise this advice might be too late, but if you have done it already then you might want to back it out now during the quieter Christmas period (possibly not if you are in the snow clearance business) rather than wait until the service pack actually ships.

  • SCE Sunday part 8 – Authoring

    In this week’s episode of my review of System Center Essentials 2010 (SCE) I want to look at the business of customising SCE to monitor the parts of the infrastructure that you are interested in, which is known as authoring.

    Many IT departments these days provide service level agreements to the business areas they support, and in order to make these meaningful you need an end to end view of the service you are providing.  This might be a web site and application, or something at a lower level like network speed or database access. SCE’s authoring tools allow you to describe these services so that you can monitor them and decide what constitutes an error or a warning. Authoring builds on the management packs in SCE, and the components of these are then exposed for you to customise, much as you can do in System Center Operations Manager (SCOM); as I have mentioned before, SCE actually uses the same management packs. I mention this because you’ll find better documentation on the advanced features of authoring, like modelling distributed applications, in the SCOM books online (like this).

    In my demo video this week I wanted to keep an eye on the reporting services database performance using the provided OLE DB monitoring template..

     

    Things to note from this are:

    • You can monitor any OLE DB connection, e.g. to Oracle, Postgres etc., provided the drivers are installed on the watchers (the servers and clients you nominate in the wizard to do the monitoring); see the example connection string after this list.
    • Nominating watchers mimics the real world performance your users will see, so you can see what they see and not just the database performance on the server.
    • I created a new management pack of my own called SCE Sunday and I’ll put more stuff in there in my next demo. Many supplied management packs are sealed (locked) so you can’t add content to them and so creating your own pack is a good idea.
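    For reference, the OLE DB template just asks for a standard provider connection string; something like the line below (the server and database names are only examples) would point a watcher at a Reporting Services catalog database, and you can optionally add a test query and thresholds for connection and query times.

    Provider=SQLOLEDB;Data Source=MYSQLSERVER;Initial Catalog=ReportServer;Integrated Security=SSPI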

    Next week I’ll look at more authoring features.

    In the meantime, if you want to try SCE yourself it’s included in TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • Microsoft BI and Kerberos

    I was briefing a bunch of partners the other day, and during coffee I went off piste and did a short overview of my presentation “It’s not about the Technology”, which I gave to the BI MSc course at Dundee University.  However sometimes technology does get in the way, especially when it comes to implementation and security, so not planning for these will upset deadlines and cause projects to overrun.

    The most common problem in a typical large BI project is the “double hop” security problem that occurs when Reporting Services is used: a client on PC (A) accesses Reporting Services on server (B), and this one hop is fine as Windows authentication will work; however in many scenarios the data in the report is on another server (C), and Windows authentication will treat this second hop as anonymous, so authentication will fail.

    The standard approach to this is to use Kerberos authentication, which overcomes the problem without the users having to log in again or use aliases to connect to the source data. The headache for BI professionals like us is that this is another skill to learn, and even if we know what to do we are unlikely to be given permission to make the changes to the servers in a production environment.  However the infrastructure guys who can do this don’t know too much about Reporting Services and may have other priorities.
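    To give a flavour of what the infrastructure side involves (the account and server names below are purely illustrative), the work usually comes down to registering service principal names (SPNs) for the service accounts and then allowing delegation:

    # Register an SPN for the Reporting Services service account
    setspn -S HTTP/reports.contoso.com CONTOSO\RSService
    # Register an SPN for the SQL Server service account on the data server
    setspn -S MSSQLSvc/sqldata.contoso.com:1433 CONTOSO\SQLService

    The Reporting Services account then needs to be trusted for (constrained) delegation in Active Directory Users and Computers, and RSWindowsNegotiate enabled in rsreportserver.config so that Kerberos is negotiated rather than falling back to NTLM; the whitepaper mentioned below covers all of this step by step.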

    I can’t help with the priorities, but I do know the key resource you and the infrastructure guys need to make the right changes to get Reporting Services working and to troubleshoot any potential problems, and that is this whitepaper.  The whitepaper works just as well for Reporting Services in SQL Server 2008 R2, and for all the parts of the BI stack where a double hop is needed, not just Reporting Services.

    Hopefully this will help to clarify Kerberos, but if not this post will at least signpost the resources next time I get asked about the “double hop” problem!

  • What is the Private Cloud?

    One of my most read blog posts is Private Cloud?, which suggests that the private cloud is of interest and hopefully what I wrote went over fairly well.  However greater minds than mine actually design all of this, and one of those minds is Zane Adam, who is pretty much in charge of Windows Azure.  My good friend Planky (the Azure evangelist on our team) and I managed to get him on video to share his vision of the public and private cloud..

    Hopefully this all makes sense, so how do you go about having your own private cloud?  As usual with Microsoft you have a choice, in this case 3 choices, all detailed on Microsoft’s private cloud portal:

    1. Get a service provider to do it for you.
    2. Go for an almost off-the-shelf option, with reference hardware configured for use as a private cloud from any one of the six top hardware vendors.  This is the recently launched Hyper-V Cloud offering; the good thing about this is that you can use your existing trusted hardware vendor and it’s all based on commodity hardware.
    3. Alternatively you can build your own, and in that case I would still point you to the private cloud resources, but I would also recommend you have a look at Alan le Marquand’s (another UK based evangelist) blog series on how to build your own private cloud.

    Option 1 is the closest you can get to a public cloud, as the service provider can provide elasticity and scalability by evening out demand across multiple customers.  The other two can only emulate this if you are working in a large enough organisation that individual departments’ combined needs are largely flat over time, and you can then use automation and monitoring to shuffle capacity around to give each department the service it needs when it needs it.

    No doubt this will evolve again over the coming months as one of the benefits of the cloud is its agility, but the problem for me is keeping my posts up to date!

  • Support calls in TechNet Subscriptions

    If you are an IT professional working with Microsoft products you need a TechNet subscription in the same way you need food.  A TechNet subscription shouldn’t be confused with the publicly available TechNet site (including this blog), and it isn’t just a bunch of license keys either.  One of the least known features is the two support calls included in a TechNet Professional subscription, which can easily be worth more than the entire cost of the subscription.

    However one piece of advice: it takes time (possibly a couple of days) to activate them before you can use them, so you need to get yourself set up as soon as you get your subscription and not wait until the excrement hits the air conditioning (to quote Kurt Vonnegut).  To do this you need to phone 0844 800 2400 (in the UK, other numbers are here) to get an Access ID before you can raise a call.

    Just so you know!

  • Search and deploy

    It’s a sad fact of life that search doesn’t always get you the answer you were looking for.  This is mainly because it isn’t always obvious what you are actually looking for, especially when it comes to technical stuff.  My main problem when looking around for stuff is that there is sometimes a lack of getting started guides; I may know something about SQL Server, but I am a complete amateur when it comes to Exchange, for example.

    Searching Microsoft sites like TechNet can be more complicated compared to other software vendors who only have one or two offerings, and the guidance on a lot of legacy Microsoft products is still online as part of the commitment to support them, further adding to the list of hits you get back on a topic. However at least these resources are actually out there and not hidden in the minds of a few who can charge a premium to get you started with a particular piece of technology.

    However I have a couple of pieces of good news:

    There’s a new  simplified set of resources around common virtualisation scenarios:

    Each of these has simple Deploy, Manage and Maintain sections.

    Specifically on Exchange, I noticed this simple deployment guide the other day, which starts with the simple question of where you are now: an upgrade from a particular version of Exchange, or a new build.

    The updated Microsoft Assessment and Planning Toolkit (version 5.5 is now in beta) is probably your best free toolkit for deploying new Microsoft versions in your organisation.  My personal favourite bit of this is its ability to find SQL Server instances even if they are not using the standard port (1433) for connections, as the tool uses WMI to find them.
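    As a rough illustration of the idea (this is not how MAP itself works internally, and SERVER01 is just a placeholder name), you can spot SQL Server instances on a machine by looking for their services over WMI rather than probing ports:

    # List SQL Server services on a remote machine via WMI
    Get-WmiObject -ComputerName SERVER01 -Class Win32_Service -Filter "Name LIKE 'MSSQL%'" | Select-Object Name, DisplayName, State, StartName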