Insufficient data from Andrew Fryer

The place where I page to when my brain is full up of stuff about the Microsoft platform

January, 2011

  • Looking backwards over a decade of Microsoft BI

    It was in early 2000, if memory serves, that I first used OLAP Services in SQL Server 7 and Cognos NovaView 1.0. Ten years later I wondered, as you do at this time of year, what has changed and what the future will bring – and specifically, is there a future for the BI professional with the dawn of BI in the cloud?

    What did/do people like me actually do?

    • All the usual tasks that surround any engineering project, from gathering requirements, to testing, to training users to get accurate results and insight from the system.
    • Extract, Transform & Load (ETL) – a huge area encompassing data quality, integration of data across disparate systems, and encoding business logic.
    • Possibly designing a pack of reports, dashboards or analytics, or providing some user interface to the data. This ranges from using off-the-shelf tools and wizards to custom development (e.g. in Silverlight) to create tailored, rich experiences for users.

    Even without mentioning the cloud, these tasks have changed over the ten years I have worked in BI:

    • DTS was fun to use and a swine to debug, but the power of Integration Services made the initial design more important and allowed for a much more robust and flexible solution for loading data.
    • OLAP Services also got completely overhauled in SQL Server 2005; it got the Kimball bus architecture and a host of powerful new features like many-to-many dimensions, but this also took it further away from the business user, as did the new BI Development Studio, which was clearly aimed at the BI professional.
    • Reporting Services arrived in early 2004 (as an add-on for SQL Server 2000) and has become more and more user friendly while retaining the power and flexibility to be embedded in applications and deliver rich reporting at scale.

    SQL Server 2008 R2 arrived last year and attempted to address the need for the business user (or information worker in Microsoft speak) to design their own analytics. This new tool, PowerPivot, introduced a new column-based in-memory analytical engine (VertiPaq), which is simple and fast at the expense of the power of traditional Analysis Services in areas such as fine-grained security control and the development of really complex business logic. This will be scaled up in SQL Server vNext (aka Denali), but will exist in parallel with Analysis Services.
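
    To give a flavour of the column-based idea behind an engine like VertiPaq – and this is strictly a toy sketch, not the real engine – storing each column contiguously means an aggregate only has to scan the columns it actually uses:

```python
# Toy sketch of column storage, not the actual VertiPaq engine.
# Each column is held as its own contiguous list, so summing sales
# for one region never touches any other column in the table.
table = {
    "region": ["UK", "UK", "FR", "UK"],
    "sales":  [100, 200, 50, 25],
}

# Aggregate over two columns only; any other columns are never read.
uk_sales = sum(s for r, s in zip(table["region"], table["sales"]) if r == "UK")
print(uk_sales)  # 325
```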

    Assume for the moment that all of this and more will be available at some point in SQL Azure/Office 365 or something like it – where Microsoft BI is offered as a platform as a service (note: this is pure conjecture on my part). What will change for the BI professional?

    I don’t see any of the fundamental tasks changing. The promise of the cloud is all good, as I never had to worry about setting up infrastructure on most of my projects; indeed, I was often never allowed near it and had to put in change requests to get accounts set up and access to data. And that raises a good point: where is the data in this new world? If the data is on premise it will have to be moved to the cloud, presumably using some sort of cloud-based Integration Services, and we could be talking about a lot of data.

    A lot of the presentation layer is already built on web services, so moving that to the cloud will make little or no difference; for example, SharePoint is in Office 365, and although PowerPivot isn’t there yet, that can’t be too difficult.

    That leaves the design tools, which are typically rich clients like BI Dev Studio and Excel. These will have to stay on our desktops; it will just be a question of having the right credentials to deploy to the cloud. For example, PowerPivot can already load data from SQL Azure, and BI Dev Studio will shortly allow you to design reports for SQL Azure.

    The point about all of this is that the role won’t change much even if all of these services become available through SQL Azure and Office 365: data will still have to be cleansed and transformed, and users will still need support and guidance on how to turn their data into meaningful insight. That means working with clients and understanding their culture, which in turn means working where they are. This can mean expensive travel and working away from home, but if it could be done anywhere then anyone could do it remotely, which is one reason why off-shoring has had only a limited impact on the BI services industry in the UK.

    The appetite for BI in a tough economy shows no sign of declining, with the possible exception of the public sector; even here some large programmes may well have been scrapped, but administrators at all levels looking for savings will still need BI to assess the impact of any cuts and to prioritise them.

    So I am pretty sure that the next ten years will be interesting and possibly disruptive, but at the end of the next decade people will still need reports and analytics, so there’ll still be a role for us.

  • Making sense of SQL Azure reporting

    In order to understand why you might want to use the new Reporting Services in SQL Azure, you need to understand where it fits, and to do that you need to know a little about how Reporting Services works – and even what it is, if you’re new to SQL Server but have perhaps heard of Azure.

    The traditional on-premise Reporting Services included in SQL Server is a web service which consumes a special XML file with an .rdl (Report Definition Language) extension to render a report from any data source you have connectivity to from that web service. The important bit: the data doesn’t have to be in SQL Server; it could be in Oracle, Teradata, Excel, XML and so on. However, SQL Server is used in two ways to support Reporting Services:

    • The report definition files and associated metadata about permissions, data sources, etc. are stored in a SQL Server database (called ReportServer by default) associated with the service.
    • A second, temporary database (called ReportServerTempDB) is used to store snapshots of reports and to do aggregations, sorts etc. on complicated reports.

    When a user wishes to run a report, the following occurs:

    • The reporting service shows a list of reports the user can choose from in a web portal, which can be the default Report Manager, SharePoint, or your own application.
    • The user selects the report.
    • The report is run, which means it executes a query against the various data sources defined in the report. Optionally this may require the user to enter parameters defined in the report to filter the data first.
    • The report is rendered back to the user, who can then elect to save it in various formats such as Excel.
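
    That last rendering step can also be driven directly through Reporting Services’ URL access interface, where the report path and an rs:Format value are passed on the query string. A small Python sketch of composing such a request (the server and report names here are made up for illustration):

```python
from urllib.parse import quote, urlencode

def report_url(server, report_path, fmt="EXCEL", **params):
    """Compose a Reporting Services URL-access request: report path first,
    then rs:Format to pick the renderer, then any report parameters."""
    query = {"rs:Format": fmt, **params}
    return (f"http://{server}/ReportServer?"
            f"{quote(report_path)}&{urlencode(query, safe=':')}")

# Hypothetical server and report names, purely for illustration.
url = report_url("myserver", "/Sales/MonthlySales", Region="UK")
print(url)
```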

    Applying this to SQL Azure Reporting Services:

    • If your source data is already in SQL Azure, then running a report in SQL Azure makes a lot of sense, as the source data doesn’t have to go anywhere and the only traffic will be down to the end user when they run it. However, if you decide to pull every row out of your 50 GB database in a report and save it to Excel (don’t get me started on saving to Excel!), it will take time no matter how fast the back-end service can render it.
    • If your source data is elsewhere, you need to understand that the queries driving the report will execute where the source data is, and the results then have to be uploaded to Reporting Services, where they are aggregated, calculations are run and the output is rendered. The same is true the other way around – if you are running Reporting Services locally on your own server and you wish to run a report on SQL Azure, the source data will be pulled down to your local server after the query has executed at the source.

    This is probably all stating the obvious, but this behaviour should drive how you decide to use SQL Azure and whether reporting in SQL Azure is right for you. As for actually using it, it is currently in beta, which you can sign up for here.

  • SCE Sunday part 12 - Virtual machine management

    My demo environment for this series on System Center Essentials 2010 (SCE) is all running on one laptop running Hyper-V, so it is very easy to show off the virtual machine management capabilities of SCE that have been inherited from System Center Virtual Machine Manager (SCVMM). However, as I only have one host, I can’t show all of its features:

    • Live migration – the process of moving a running virtual machine (VM) from one server to another which is simply a right click.
    • PRO (Performance and Resource Optimisation) tips, which work just as they do in SCVMM and in System Center Operations Manager (SCOM).  For more on this check the SCVMM blog here.

    What you can see here is the way all the SCVMM integration disappears into SCE, so you just get one view of what’s going on.

    Note: For the PowerShell fans, you also have access to all of the power of SCVMM, although if you are getting that deep into management it might be time to upscale to SCVMM anyway.

    Next week I’ll look at how SCE reports on what is happening in your infrastructure and how you can extend this. In the meantime, if you want to try SCE yourself, it’s included in TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • SCE Sunday part 13 – Reporting

    System Center Essentials 2010 (SCE) relies on SQL Server Reporting Services (SSRS) to show what’s going on with your infrastructure. As I mentioned in part 1, you need to either set this up at install time or point to an existing installation of SSRS. Once you have done that, the reports you get will depend on which management packs you installed and whether you elected to set up SCE to manage virtual machines. I have made a short screencast introducing reporting in SCE here.

    Other things to note, if the screencasts aren’t your thing:

    • You can put your own reports in the SCE reporting folders and they’ll show up in SCE. You can write your own reports or customise what’s provided (make a copy, or you could lose your work if the report is updated by an SCE update) using Report Builder or BI Development Studio, which come with SQL Server.
    • You can subscribe to and schedule reports from the Report Manager portal. The screens that help you do this are smart enough to prompt you to complete any parameters needed by a report.
    • The virtualisation reports inherited from System Center Virtual Machine Manager (SCVMM) exclude the SCE server itself from the utilisation reports, which confused me, as in my demo rig SCE is simply another virtual machine.

    This is the last in the series on SCE, and if you now want to try any of the stuff I have shown you over the last few weeks, it’s included in TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • Project Atlanta part 1 – Introduction

    In the post-Christmas rush to get fit, the well-off will have personal trainers to help them get back into shape. Similarly, in the world of SQL Server there is an abundance of experts to help ensure that SQL Server is as lean and mean as possible; however, not everyone can afford that expertise, or to have it applied to every instance and database they have. There are also tools that provide this sort of advice, but these often need their own infrastructure, and expertise is still needed to interpret the findings. This could be mitigated by moving to SQL Azure, but that doesn’t work for everyone or every scenario, so uncared-for SQL Server databases will be a fact of life for some time. In an attempt to address this, Microsoft has launched Project Atlanta.

    What is it?

    Take the cloud and deploy the back end of System Center Operations Manager, complete with the management packs and all the expertise Microsoft has on SQL Server. Next, create a Silverlight front end so it can be accessed from anywhere with a Live ID. The secret sauce is a page on the site where you can download an installation package and a certificate to deploy on your SQL Server, so that the portal can monitor the health of your databases and instances.

    However, not many IT professionals want to have their databases directly accessible from the internet, so the other clever bit about Project Atlanta is that you only need one Windows server connected to the internet (which may well not have SQL Server on it) for this all to work – this is called the gateway. The servers running SQL Server then have agents on them which talk to the Atlanta service via your gateway, as per this diagram I lifted from the Atlanta documentation:

    Microsoft Codename Atlanta environment


    So if you have heard of Intune for desktop management, this is exactly the same thing for SQL Server. The beta is open to try now and is free. What I don’t have any information on at the moment is when it will go live and what, if any, the charging mechanism will be.

  • Project Atlanta part 2–Installation

    Following on from the last post, I thought I had better fire up Atlanta to see what it could find out about my demo environment. I made this screencast as I installed and configured it:

    It only took a few minutes to get to the stage where I had two servers: one (Oxford-DC, a server with no SQL Server on it) as the gateway, and my BI demo environment (BI 2010) with just the agent on:


    By default, data is then fed into Atlanta from my environment every day (this can be changed), so I need to leave it overnight to see what advice and warnings I get from the service; my next post will cover that.

    The help online is extensive, but here are a few things I noticed to get you started:

    • Both the gateway and agent services write to the application log on each computer. For the gateway, event 1100 means all is well (see the gateway troubleshooting guide), and for the agent success is event 1000 (see the agent troubleshooting guide).
    • The configuration guide shows you how to tweak the registry settings of the agent and gateway to control how often data is uploaded to Atlanta and when. I imagine this will be sharpened up for release.
    • Project Atlanta works on all editions of SQL Server 2008 and later.
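
    As a trivial way of remembering those event IDs, a health check over application-log entries might look like the sketch below. The entry format is invented for illustration; only the IDs themselves (1100 for the gateway, 1000 for the agent) come from the troubleshooting guides mentioned above.

```python
# 'All is well' event IDs per role, per the Atlanta troubleshooting guides.
HEALTHY = {"gateway": 1100, "agent": 1000}

def unhealthy(entries):
    """Return the entries whose event ID is not the healthy ID for that role."""
    return [e for e in entries if e["id"] != HEALTHY[e["role"]]]

# Hypothetical sample entries, not real Windows event-log records.
entries = [
    {"role": "gateway", "id": 1100},  # fine
    {"role": "agent",   "id": 3000},  # needs a look
]
print(unhealthy(entries))  # [{'role': 'agent', 'id': 3000}]
```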

    You can sign up for the beta here, and you should also post any feedback on the Atlanta page on Microsoft Connect.

  • A gentle introduction to Opalis, Automating the Ordinary

    I go to many sites and get offered a lot of coffee; however, it never turns out the same, as there is no standard automated process for making coffee – there are different steps in the process, and of course people have different preferences for milk, sugar and coffee. Most of the coffee is drinkable, so I am OK. However, when it comes to systems management, processes need to be run exactly as per the approved process. Some processes need to be run again and again, and humans aren’t good at doing that and of course are expensive, so if a process isn’t automated it’s expensive and unreliable.

    It’s all about automating processes that need to be performed regularly in a business. The trick is to know where the line is between the time taken to design a process for automation and how much time you will get back from not doing the process manually again and again.

    When it comes to the private cloud, automation is key. While tools like PowerShell and PowerCLI allow you to do low-level automation, this requires considerable skill, is hard to debug and maintain, and there aren’t always hooks into other parts of your infrastructure. Opalis, by contrast, can talk to pretty much anything, across virtually all the known systems management tools (CA, HP, IBM etc.), operating systems and applications.

    As you can see below, Opalis can easily automate virtual machine creation in response to a variety of events or requests.

    By implementing this provisioning in response to critical events in tools like Operations Manager, you can to a certain extent emulate the elasticity of a public cloud, providing additional resources to a service under pressure and standing them down when the spike has passed. I would stress this is not something it does out of the box, and the implied scalability needs a service like load balancing that can make use of additional virtual machines as they come online, but it can be done. Obviously you can’t stretch a service beyond the computing power available in your data centre, and you probably won’t have a lot of extra capacity unless you are ruthlessly managing your services (again with a smart Opalis process) to kill off idle virtual machines when they aren’t needed.

    Opalis is now included in the higher-end licences of the System Center suite (the Enterprise and Datacenter editions). It has deep integration with Service Manager, Operations Manager and Configuration Manager; for more on how to get started with it, check the following:

    • The Opalis portal, where you can download a 180-day trial of the latest (6.3) version and get training and live meetings direct from the product team
    • The product team blog

    The final thing you need to know is that this is the secret sauce that outsourcing companies are using to achieve the cost reductions demanded by their customers, particularly in the UK. I would argue that as an expert in Opalis, designing automation rather than repeatedly carrying out the same tasks day in, day out, you will have a more rewarding and secure job in these uncertain times.

  • IT Process Automation for Microsoft System Center – a guest post by Greg Charman

    I met Greg Charman, one of the ex-Opalis experts who now work for Microsoft, a couple of weeks ago, and I thought it would be good to get his thoughts on how Opalis works with System Center and other similar tools in the systems management space. Take it away, Greg..


    In December 2009 Microsoft acquired Opalis, a specialist provider of IT Process Automation (ITPA) software. The Opalis product is in the process of being fully integrated into the System Center family of datacenter management products.

    IT Process Automation software, formerly known as Run Book Automation (RBA), provides a platform to design and run IT processes. Standardizing the IT processes that underpin IT services means best practices can be deployed across the environment, regardless of the underlying management infrastructure. This is achieved by orchestrating and integrating the existing IT tools.

    Traditional IT tools support the functions of one particular IT silo, sometimes offering automation of tasks within that silo. Unfortunately, IT business processes cross multiple silos, and today those bridges are provided by human beings, introducing delay, error and rekeying of data.
    Opalis allows you to integrate and orchestrate the tools in each silo to support your end-to-end IT business process, rather than have those tools define what your business process will be.

    Microsoft recognizes that companies run heterogeneous data centers. As part of the System Center portfolio, Opalis workflow processes orchestrate System Center products and integrate them with non-Microsoft systems to enable interoperability across the entire datacenter. Opalis provides solutions that address the systems management needs of complex heterogeneous datacenter environments, and has developed productized integrations to management software from Microsoft, IBM, BMC, CA, VMware, EMC, and Symantec. This enables users to automate best practices such as incident triage and remediation, service provisioning and change management processes, and to achieve interoperability across tools.

    The combined offering of Opalis and System Center provides the ability to orchestrate and integrate IT management through workflow and simplifies routine systems management tasks in complex heterogeneous environments by:

    • Defining and orchestrating processes across all System Center products
    • Integrating and orchestrating non-Microsoft tools as part of a complete workflow
    • Engaging with System Center Service Manager to automate the human workflow elements


    With the new capabilities added to System Center in 2010, namely Service Manager and Opalis, plus the rest of the System Center suite, Microsoft can provide the tools to truly achieve the “Infrastructure on Demand” requirements being placed on IT executives.

    Imagine a user has a requirement for a new virtual server which will host a business application.

    First they go to a web front end, select a virtual machine template from the available options, and specify which application must be installed on the machine and how much data storage is required.

    • Opalis picks up this request and, following the appropriate ITIL process, creates a new change request ticket in Service Manager to record the provisioning request.
    • Opalis then queries Virtual Machine Manager to confirm whether sufficient capacity is available to service the request. If insufficient capacity exists, Opalis goes to the blade server infrastructure in the data centre, turns on some spare blades in the blade rack and informs Virtual Machine Manager that it has new physical servers as part of its cluster.
    • Opalis then checks the storage infrastructure, determines that capacity is available and allocates a new storage area to service the request.
    • Opalis then orchestrates Virtual Machine Manager to create a new virtual machine for the request.
    • Opalis then adds this virtual machine to the Operations Manager estate, so the machine is immediately under management.
    • Opalis then orchestrates Configuration Manager to deploy the requested patches, antivirus and business application to the new virtual machine.
    • Opalis orchestrates DPM to back up the new virtual machine.
    • Opalis then populates the CMDB in Service Manager with the details of the new machine and closes the change request.
    • Opalis then updates the web front end to inform the user that their request has been fulfilled and the machine is ready for use.

    A fully automated request for the provisioning of new infrastructure has been achieved, with no human intervention required.
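
    The runbook above can be sketched in plain Python to show its branching; every function name in the step list here is a hypothetical stand-in for an Opalis integration-pack activity, not a real API:

```python
def provision_vm(request, free_capacity_gb=100):
    """Mirror the runbook above: ticket, capacity check, storage, VM,
    monitoring, software deployment, backup, CMDB, then close the request.
    All step names are illustrative stand-ins for Opalis activities."""
    steps = ["create_change_request"]
    if request["storage_gb"] > free_capacity_gb:
        # Insufficient capacity: power on spare blades and tell VMM about them.
        steps += ["power_on_spare_blades", "register_hosts_with_vmm"]
    steps += ["allocate_storage", "create_vm", "add_to_operations_manager",
              "deploy_patches_antivirus_app", "schedule_dpm_backup",
              "update_cmdb", "close_change_request", "notify_user"]
    return steps

# A large request triggers the spare-blade branch; a small one does not.
big = provision_vm({"storage_gb": 250})
small = provision_vm({"storage_gb": 10})
```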

    To illustrate the powerful systems management orchestration capabilities, here are a few examples of typical systems management processes that Opalis + System Center simplifies:

    Incident Management

    Opalis works with event management and monitoring tools to run automated diagnostics, triage and remediation actions, lowering the number of level 1 and level 2 tickets staff have to manage. In this example, Opalis monitors Operations Manager for a critical performance alert on a virtual machine. To triage the cause, it retrieves the host name and checks performance on the host and its virtual machines. If the host is the issue, it initiates Virtual Machine Manager to migrate the VM; once complete, it verifies performance and updates/closes the originating alert. If the VM is the issue, it creates and populates a ticket in Service Manager, initiates VMM to start a standby VM and updates the Service Manager incident with the new VM details.



    In the above workflow, Opalis monitors Operations Manager, runs triage and then takes the appropriate action.
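
    The triage branch reduces to a few lines; as before, the step names are illustrative stand-ins for runbook activities rather than real calls:

```python
def triage_alert(host_is_cause):
    """Host-level problem -> migrate the VM away and close the alert;
    VM-level problem -> raise a ticket and bring up a standby VM."""
    if host_is_cause:
        return ["migrate_vm_to_new_host", "verify_performance", "close_alert"]
    return ["create_service_manager_ticket", "start_standby_vm",
            "update_ticket_with_new_vm_details"]
```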

    Configuration Management

    Opalis works with change management systems to automate requests and enforce best practices. Using Opalis, users can authorize, deploy and test changes such as adding new services, patching systems, or running audits to detect configurations that are out of compliance. In this use case, Opalis coordinates a patching process during the maintenance window. It opens a service desk ticket so all activity is tracked. It then queries VMM for a list of offline VMs running Windows 7 and starts those machines. Opalis then reaches out to Active Directory for a list of computers running Windows 7 and initiates Data Protection Manager to run a backup. Once that is complete, Configuration Manager is initiated to update all machines with the patch. Upon completion, the VMs are returned to their offline state.



    There is more information on Opalis + System Center at the links below, and a technology roadmap fully integrating Opalis into the Microsoft System Center portfolio will be available shortly, clarifying how System Center is becoming an increasingly powerful systems management platform for heterogeneous data centre environments.

    More information:

    • Opalis (information on the acquisition)
    • Opalis portal
    • Microsoft System Center
    • Installing Opalis

  • The future of the Domain Controller – A guest post by John Donnelly

    Andrew asked a really interesting question back in December about the future of domain controllers. I’d like to point out two complementary paths that may converge in the future and work out a possible user story for them.

    The first path is represented by Active Directory Federation Services. ADFS v2 is being used by Microsoft IT to provide identity information to internal, and some external, sites. Using ADFS with third parties means that my identity information is provided directly to the site based on my ability to log into a Microsoft domain; working within the corporate network this is entirely transparent, and I don’t have to create and manage accounts for the dozens of different internal and external services that I use. Should I leave Microsoft at some point, MSIT don’t need to contact all these companies to remove my access, as that access ceases as soon as my account is disabled. Could a future version of Windows allow access to resources based on a standardized secure token and the claims that it contains?

    A second path is that the number of identity providers I use is slowly consolidating. Previously it would be normal to create a new account for each service; now I expect to be able to sign in directly to new services such as Project Emporia using a Windows Live or Facebook account. The more experimental, temporary or infrequently used a service is, the less I trust it to maintain my account. Why wouldn’t I consider employers the same way, rather than authenticating to a Windows AD?

    Imagine a future scenario at Contoso Cycles, looking at staff identity. They continue to have an Active Directory, but ADFS has been deployed, enabling staff to access supplier ordering sites directly, using federated identity based on their corporate identity at the supplier site. They have seasonal demand and take on temporary staff. The IT manager is aware that shops have been creating shared accounts for holiday staff, so rather than raising IT requests for each temporary staff member, a closed Facebook group is created and temporary staff are added to it by the store manager. Contoso IT authenticates Facebook users for domain access, granting log-in permissions based on membership of the Facebook group.
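
    The Contoso scenario boils down to trusting claims from an external identity provider rather than a local account store. A toy illustration of that decision (the issuer and group names here are made up, and real claims-based access control is considerably richer than this):

```python
# Hypothetical issuer identifiers for the Contoso scenario above.
TRUSTED_ISSUERS = {"contoso-adfs", "facebook"}

def allow_logon(token, required_group):
    """Grant access only for a token from a trusted issuer that carries
    a group-membership claim matching the required group."""
    return (token.get("issuer") in TRUSTED_ISSUERS
            and required_group in token.get("groups", []))

# A temporary staff member signing in with a Facebook-issued token.
temp_staff = {"issuer": "facebook", "groups": ["contoso-holiday-staff"]}
print(allow_logon(temp_staff, "contoso-holiday-staff"))  # True
```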


    BTW, John recently joined Microsoft as an architect in the MTC.

  • Project Atlanta part 3–Monitoring

    In this last part of my series on Project Atlanta, the new cloud-based SQL monitoring tool from Microsoft, I set it up to watch my BI server and left it running for a couple of days before making this screencast:

    As you can see, you get three different views of the server:

    • How it is now
    • How it has changed between snapshots – by default a snapshot is sent every day from the Atlanta agent on the SQL Server to the gateway, and onward to the Atlanta service running in the cloud
    • Actions that the service has identified. Each action has three tabs:
      • the action itself
      • the technical background for the action, such as a link to a KB article
      • the context – the HTML detailing the issue

    The actions can also be closed off as you choose to resolve or ignore them.

    It is early days for this kind of tool, and the project team needs you to try it out (the beta is here) and to post any feedback on the Atlanta page on Microsoft Connect.

  • BETT Notes and queries

    I am at BETT, the largest education IT show in Europe for primary and secondary education, attended by many IT professionals who work in schools. As you can imagine, we get a lot of very interesting questions, and Simon and I are there to field them as best we can. I also had help from two ICT administrators who have rolled out Hyper-V and System Center at their schools:


    Dave Coleman (Twynham School) and Alan Richards (West Hatch High School)

    We saw a lot of confusion about virtualisation – not so much the “mine is better than yours” kind, but more a lack of understanding of what to virtualise and why. I got quite a few questions on other kinds of virtualisation, like VDI, but for many schools remote desktop – the business of providing an identical stateless desktop to a large group of people (e.g. the pupils) – is much more appropriate and efficient:

    • The terminal or old PC can easily be swapped out.
    • Pupils can sign in anywhere.
    • The desktop resets to its initial state when a pupil logs out.
    • A few backend servers can support most of a school’s population, leading to better control of power usage; in summer, hot PCs running in hot classrooms only add to the stress on children studying for exams (assuming PCs are swapped for remote desktop devices).

    A variation on a theme I discussed with one school was the catchily titled “Remote Desktop RemoteApp”, which is where the application runs on a remote desktop server and shows as an icon on the desktop, or can be accessed from a portal, including SharePoint. Unlike application virtualisation (App-V), the application runs on the server, so connectivity needs to be maintained while the application is used, but this does mean that you can run a heavy-duty application on a remote desktop device or an old PC.

    What amazed me was how leading-edge many of the schools were: they are already largely virtualised, and they are not only running the latest versions of SharePoint and Exchange but are really using the new features to reduce costs and enhance pupils’ learning experiences (Dave and Alan being great examples of this). You could argue that schools get the licences at a very large discount (which is good, as we are paying for education), but many organisations with Software Assurance have access to the latest products and are not rolling them out. I can also assure you that there are typically only two or three IT professionals per school, so how are they doing this? I am not really sure, so Simon and I have their names and addresses and we plan to interview them to find out how and why.

    What really amazed me was the appetite for Office 365 – many schools have already opted to use Live@edu, a hosted e-mail system designed for schools, and this is essentially the next step for them. I say this because I see a lot of concerns about privacy of data in the cloud, and yet teachers and local authorities don’t see any concerns with these cloud services holding sensitive information about children, provided the data centre is inside the EU and therefore compliant with the data protection laws of the UK and Europe.

  • War on Cost – the Movie

    One of the best events I went to last year was the War on Cost, run by Inframon at the Cabinet War Rooms (where else!). It was like a free mini TechEd on System Center, with deep sessions on the entire line-up from the Microsoft product team and Inframon MVPs, focused on reducing data centre costs. We also got a sneak peek at some of the next releases, and I saw Opalis in action for the first time. The videos are out now, so if you couldn’t make it you can catch the reruns on Microsoft Showcase.

  • SCE Sunday part 9 – Authoring continued

    When I made my last System Center Essentials video on authoring, I realised there was too much to cover in a five minute video, so I wanted to show some more this week. What I don’t want to do is cover too much of the deep dive stuff here, as a lot of the detail on creating your own management packs is covered in System Center Operations Manager (SCOM), and I am guessing a typical SCE administrator will only do a bit of tweaking in here.

    So I wanted to look at monitoring a web service and also to create and monitor service levels in SCE.


    One odd thing I noticed was this: if you want to record a web session to use as a blueprint for testing a web application, you need to close Internet Explorer once you have started the capture and then open Internet Explorer 64-bit (you’ll see a recording pane on the left of the session).


    Next week I’ll show you how to back up SCE 2010 so you can recover from a disaster complete with all your data.  I had to find this out the hard way, and that’s why there was no SCE Sunday post for a couple of weeks!

    In the meantime, if you want to try SCE yourself it’s included in the TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • SCE Sunday Part 10 – Backup

    Over the last few weeks I have managed to set up most of the elements of System Center Essentials 2010 (SCE), and having put some time and effort into this I want to ensure my work is saved, in case I break it or my demo environment breaks.  Being a DBA, and knowing that SCE uses a number of databases to store configuration, update progress, events and so on, I thought this would be fairly straightforward.  However, with power comes responsibility, and in the case of SCE this means security, and specifically certificates: there are certificates that allow the Windows Server Update Services (WSUS) component of SCE to apply updates to managed computers, and there is encryption of the SCE databases.

    The definitive guide on what and how to back up in SCE is here, but I wanted to show a backup in terms of restoring it, and my plan for disaster recovery would be to have a new virtual machine (VM) with a clean install of SCE and SQL Server.



    Things to note from the video:


    • I created a blank virtual hard disk (VHD) in Hyper-V for my backups and attached it to the SCE virtual machine (BTW the VM can’t be running when you do this)
    • I also backed up the reporting services database, along with the certificate used to protect reporting services. This isn’t necessary unless you have your own reports in there, as a clean install of SCE will include all the reports it uses.
    • In the real world you would want to back up the group policy that SCE uses, but also in the real world you would have backups of your domain controller anyway.
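    The database side of the backup above can be scripted rather than clicked through.  This is only a sketch under assumptions: I am assuming the SCE and WSUS databases sit on the local default SQL Server instance and are named SystemCenterEssentials and SUSDB (check the actual names in your own installation), and that E: is the backup VHD attached in the first step.

    ```
    REM back up the SCE configuration database (database name assumed - check yours)
    sqlcmd -S . -E -Q "BACKUP DATABASE SystemCenterEssentials TO DISK = 'E:\Backup\SystemCenterEssentials.bak'"

    REM back up the WSUS database that SCE uses for updates (name assumed)
    sqlcmd -S . -E -Q "BACKUP DATABASE SUSDB TO DISK = 'E:\Backup\SUSDB.bak'"

    REM export the Reporting Services encryption key (rskeymgmt ships with SQL Server)
    rskeymgmt -e -f E:\Backup\rskey.snk -p YourStrongPassword
    ```

    The same commands could be dropped into a scheduled task so that the backup VHD always holds a recent copy.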


    Next week I’ll attach the backup VHD to the VM with the clean install of SCE and SQL Server on it, and restore SCE to the state it was in.

    In the meantime, if you want to try SCE yourself it’s included in the TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • SCE Sunday Part 11 – Restore

    The test of any backup is whether you can get back to where you were, so in this week’s screencast I have taken the data I backed up last week and applied it to a clean installation of SCE…


    Although I ruthlessly followed the TechNet resources on restoring SCE (here), I had to do a couple of other things to bring the system back to health again:

    • I restored the registry key entries I backed up last week
    • the WSUSCertificateRestore utility, which gets SCE to import the Windows Server Update Services (WSUS) key, needs to be run with these switches:

    WSUSCertificateRestore /Restore WSUS /Path [path to the certificate you backed up, with a .pfx extension] /Password [password used to protect the certificate]

    where WSUS indicates you are using group policy

    • When I started up SCE I couldn’t get the Virtual Machine Manager (VMM) part to work properly, in that the host wasn’t showing up, and SCE doesn’t allow you to remove a host from the UI.  Fortunately SCE has the PowerShell snap-in for VMM, and I ran these commands to remove the host:

    #load the snap-in for VMM

    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager

    #connect to the SCE 2010 server (put your server name between the quotes)

    Get-VMMServer -ComputerName ""

    #set a variable for the physical host (put the host name between the quotes)

    $VMHost = Get-VMHost -ComputerName ""

    #remove the host held in the variable above

    Remove-VMHost -VMHost $VMHost -Force -Confirm

    Then I added the host back in from the UI (you might need to uninstall the VMM agent on the host before you do this, but you don’t need to reboot).

    Next week I’ll have a look at the Virtual Machine Manager elements of SCE, as a way of introducing how a small private cloud might work using this tool.

    In the meantime, if you want to try SCE yourself it’s included in the TechNet Subscriptions here, or you can get a time-bombed trial version here.

  • Five New Year’s Resolutions for 2011

    I think a lot of people, IT professionals included, are worried about the year ahead, so here are some likely problems and my new year resolutions:

    Energy prices are spiralling

    This means the cost of moving whole atoms rather than just electrons is more pronounced than ever, so moving data (a stream of electrons) is now much cheaper than moving people (around 7x10^27 atoms each).  This will mean home working and unified communications are no longer a luxury choice for a lifestyle-focused company; they are a necessity to retain and attract talent who are fed up with rising transport costs on overcrowded roads and railways.

    Take a look at Lync Server 2010

    High-speed broadband in the UK lags behind many countries in Europe.

    This could well mean that the adoption of the cloud in the UK will be slower than elsewhere, but does it? On the one hand, your office internet speed might hold you back from relying on a cloud service. However, if you allow home working then your workforce is distributed and they’ll each have their own access to the net, and if your services are in the cloud then there won’t be a choke point on the pipe to the servers in your offices, nor any single point of failure.  One of your most critical systems is probably your e-mail server, so would this be better sat behind a fat pipe where anyone anywhere can get to it?

    Take a look at Office 365

    Recession, stagflation, or business as usual

    The biggest threat facing the British economy is a lack of certainty, which in turn shows up as a lack of confidence to place orders for goods and services and to recruit more staff. Obviously IT investment is under the microscope in both the public and private sector, and the very low day rates for contractors are just one symptom of this.

    I think this will put pressure on IT departments to consider pay-per-use cloud services, whatever their natural resistance to them might be, as predicting growth or shrinkage in servers, licences, storage and so on is all but impossible without a firm baseline to work from.  It might not be appropriate or possible for a business to move its services to the public cloud, but for larger organisations it is possible to get existing assets to work harder by providing cloud-like services on premise, and this is what a private cloud aims to achieve.

    Read up on Hyper-V Cloud

    Internet Security

    Whatever your views on the many leaks and data losses over the last year, there is no doubt that they have heightened awareness of security and privacy on the internet.  So you could stay safe and keep all your data in house, like the way people used to stuff money under the mattress because they didn’t trust banks.  However, it is people who leak information, and in many of the cases reported in 2010 there was no hacking or cyber security breach, just someone with an agenda of their own and a pen drive.  I don’t believe there will ever be a complete fix for this, but I think good audit controls and restricting what data a user can see will lessen the damage to some extent. I would also argue that having this data in the cloud can be more secure, as it forces you to look at the security granted to each user group/role.

    Check out Microsoft’s solution accelerators on governance, risk and compliance

    Everyone tries to get fit after Christmas, but few succeed.

    Personally I find the best way of doing this is not to.  So I play on the Kinect, because I enjoy it, and shovel some sh*t in the garden (from the stables next door) to ensure that this year’s 5 a day are from my garden.

    So my prediction is for a  happy if partially cloudy new year!