June, 2014

  • Right time, right place; breaking down the migration barriers


The following post is from Ian Masters of Vision Solutions.

I work for one of Microsoft's go-to-market partners for migrating customers to Hyper-V and Azure. I want to tell the story of how that came about, the problem we solved and how it all worked.

Let's start at the beginning. I was fortunate enough to be invited to speak at one of the System Center 2012 launch events. The particular one I attended was in Helsinki and I will always be grateful to Riku Reimaa for giving me the opportunity to be a guest speaker. Riku kindly gave me 2 slides and 10 minutes to present what we could do (anyone who knows me will know that stopping me talking after 10 minutes is quite an achievement!). So I got up, did my bit and sat back down - keep in mind the rest of the day was a blank to me, as it was all in Finnish, unsurprisingly! However, during the next break I was accosted by a senior technical specialist from Microsoft who said, "You can help me!"…

Microsoft was heavily engaged with a Cloud OS Launch Partner, a Service Provider who had made a serious commitment to move all of their physical and VMware workloads to Hyper-V. Included in the deal was a significant amount of consultancy from Microsoft Consulting Services, but they had hit a roadblock and were struggling to find a way forward. During the planning and assessment phase the Service Provider had identified a significant number of workloads that could not be taken offline in order to image them. When asked why, the answer was simple: they owned the infrastructure, servers and storage, plus the hypervisor and automation/orchestration layer, but they didn't own the workloads. The servers, the majority of which were VMware virtual machines, belonged to their clients. The Service Provider had strict SLAs in place to keep the lights on and would incur penalties if they took the workloads offline for too long. This left them with two options: do nothing and stay as they were, or try to negotiate with their clients for additional downtime, neither of which was appealing. So Microsoft was looking for a solution that could migrate those workloads without any significant downtime - not an easy challenge to overcome.

At this point it's worth considering the tools Microsoft had in hand and why they couldn't overcome the challenge. I won't go into detail on how the Microsoft solutions work, but suffice to say that most of the solutions on the market - free or otherwise - approach the problem in the same way: take a snapshot of the production server, copy it to the new machine, and then, when you're ready to migrate, take everything offline and perform the final sync. This, plus the manual labour required and the risk posed by the fact that you cannot test cutovers, meant that Microsoft Consulting Services needed an alternative.

This is where we came in, as experts in High Availability and Disaster Recovery - in other words, we kept the lights on. Think of a simple HA solution: a pair of servers "clustered" such that when the production server is unavailable it fails over to the secondary server, and when you fix the issue you fail back to where you started. We approached this migration in a similar way, except that once we moved workloads to a second system they stayed there. We achieved this with our Double-Take Move (DTM) product; let me explain how it worked. Once installed on the production server, be it physical or virtual, DTM makes an initial block-level mirror of the entire server while our byte-level replication starts capturing any new changes being made. This is achieved through our file system filter driver, which is installed on the server. One consideration here is that the initial mirror adds some load to the server, so if it is already maxing out its resources you may run into issues.

The great thing about DTM is that it will never bring the server down; it may start queuing data or simply stop replicating in order to avoid this happening. Good assessment and planning can avoid any issues and ensure success. DTM replicated the data to the new Hyper-V host and created mount points, which were then used to create the virtual machines on Hyper-V. This meant there was no need to do any more than set up the host; DTM auto-provisioned the virtual machines. So to this point in the process there was no need to take the users, applications or servers offline - the entire process was achieved with the lights on. DTM then kept everything in sync until a convenient time to migrate. The cutover could be instigated with the click of a mouse or by scheduling it; in this case it was a scheduled process that took place between 12am and 1am, because this Service Provider had very tight SLAs to meet for their clients. The final migration requires a single reboot of the new Hyper-V virtual machine, and this is where users are taken offline. Typically it is well under 15 minutes per server before users are back online and working again, and in this case a significant amount of automation was built into the process through our System Center integration. The orchestration and automation was such that user acceptance testing was completed automatically. DTM also allowed us to test cutovers: bring the new virtual machine online but not connected to the network, create a private network to do user acceptance testing, and once everyone was happy, reconnect the machines and sync only the latest changes. Note that during the cutover process there are a myriad of options around topics like addressing and resources, and we could make changes as required as part of the final cutover.
Basically, anything you can script in PowerShell can be triggered to happen either pre-cutover or post-cutover. The entire migration is a consistent, repeatable process with almost no downtime and minimal risk.
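The approach described above can be sketched at a conceptual level. The Python below is purely illustrative - it is not how Double-Take Move is implemented, and all class and method names are hypothetical - but it shows the general mirror-plus-journal pattern: the initial full copy runs while a journal captures live writes, so only the small final sync happens inside the cutover window.

```python
# Illustrative sketch of mirror-plus-journal live migration.
# Not the Double-Take Move implementation; all names are hypothetical.

class LiveMirror:
    def __init__(self, source):
        self.source = source   # block id -> data on the production server
        self.target = {}       # replica being built on the new host
        self.journal = []      # changes captured by the "filter driver"

    def write(self, block, data):
        """A production write: applied live and also journalled."""
        self.source[block] = data
        self.journal.append((block, data))

    def initial_mirror(self):
        """Full block-level copy; production stays online throughout."""
        for block, data in list(self.source.items()):
            self.target[block] = data

    def cutover(self):
        """Replay the journal so the target catches up; only this
        final sync (plus one reboot) needs a downtime window."""
        for block, data in self.journal:
            self.target[block] = data
        self.journal.clear()
        return self.target == self.source

# Users keep working while the mirror is built and changes accrue.
m = LiveMirror({0: b"os", 1: b"data"})
m.initial_mirror()
m.write(1, b"data-v2")    # change made with the lights still on
assert m.cutover()        # brief final sync, then the new VM takes over
```

The same structure explains why test cutovers are cheap: the target can be booted on a private network at any point, and only the journal accumulated since then needs replaying afterwards.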

The project I'm referring to in this article is TeleComputing; it has been written up as a Microsoft Case Study, and I'll call out the two key points highlighted by the customer. They believe we reduced the engineering effort by up to two thirds and saved them up to 20,000 hours of downtime on the first 1,000 workloads alone. Think of this in monetary terms and that's a huge cost saving. It also works just as effectively if you are looking to migrate to Azure. There are some slight differences in the approach, mainly that we migrate to a pre-provisioned virtual machine - just a base template of the same OS - as we don't have hypervisor access to do the auto-provisioning. When the new system is rebooted at cutover we apply the system state of your original production machine over the top of the template, which brings across all the service packs, hotfixes, security patches and so on. You still have the option to change addressing and resources. We've also successfully helped K2 to migrate directly from Amazon AWS to Azure; this again has been written up as a case study. If you want to find out more about what we offer on migrating workloads to Hyper-V or Azure, visit our website.

    So as you can see I happened to be in the right place at the right time and we now have many successful migrations completed and continue to work closely with Microsoft to overcome the migration challenges and drive adoption of Hyper-V and Azure.

Have you been in a similar position to Ian, where you've been in the right place at the right time? If so, let us know via @TechNetUK.

  • The IT Pro's guide to Enterprise Mobility

    This article discusses the steps required when embarking on an Enterprise Mobility Management (EMM) Project and is joint-contributed by Susan Smith, IT Pro Technical Evangelist at Microsoft and Andy Turner, Technical EMM Lead at Mitchells & Butlers.

     Image Source: ‘Customers eating and drinking’ from Mitchells & Butlers.

    Mitchells & Butlers – the largest operator of restaurants, pubs and bars in the UK – share their EMM project experiences, highlighting gotchas, best practices and other useful advice on how they implemented System Center Configuration Manager 2012 R2 and Windows Intune to manage over 20,000 devices, all in-house with two System Administrators.

About the Author: Susan recently became an IT Pro Technical Evangelist, focusing on Cloud Technologies. Previously, she was a Windows Intune Technical Solutions Professional, specialising in working closely with Customers and Partners to provide the in-depth technical knowledge to support their Enterprise Mobility Management strategies.

    Some Admins may baulk at the idea of adding 8,500 iOS and Android devices to their existing 12,000 Windows PC management estate, across 1,600 sites. With careful planning, an eye for detail and exceptional technical and collaboration skills, this is certainly achievable with the added bonus (for the boss) of no increase of the two-man headcount.

    Let us enter the world of Enterprise Mobility Management (EMM).

    The increased popularity of Smartphones and Tablet devices in the Enterprise is the main driver for EMM. Enterprises need to stay ahead of the curve for technology trends. Plus the Systems Administrators (SysAdmins) get a range of great new toys to configure.


Market data illustrates the growth in this hardware market and how this is an upward trend. Enterprises are now including non-PC devices as part of their Corporate purchases and also introducing Bring Your Own Device (BYOD) as part of their IT Strategy.

BYOD increases employee productivity, morale and convenience by letting employees use their own devices.

    The task of Device management poses a few challenges (headaches) for the IT Pro in terms of Security and Compliance.


    What is Enterprise Mobility Management?

The elements of EMM are described below. EMM is regarded as a maturity model, moving from left to right, using MDM as a starting point and optimising to reach MCM. The key is to ensure Security and Compliance are upheld along the way, without compromising the user experience.

Mobile Device Management (MDM)
· IT Policies applied and profiles provisioned to mobile devices
· Policy Enforcement
· Cross-platform Support (Windows, iOS, Android)
· Jailbreak and rooted device detection

Mobile Application Management (MAM)
· IT controlled delivery of apps from a corporate app catalog
· App Deployment
· Selective Wipe

Mobile Information Management (MIM)
· IT Policies applied directly to data wherever it flows or resides
· Rights Management
· Data Loss prevention

Mobile Content Management (MCM)
· Secure distribution and mobile access to employee data
· Data at rest encryption
· Multi-factor authentication
· Dynamic Access Control


    Who is responsible for Enterprise Mobility Management?

With the introduction of Exchange ActiveSync, MDM typically became the responsibility of the Exchange team. However, now that there are a number of different platforms and vendors offering detailed management solutions to deal with the increased demands of the business and their users, this support model is changing. Microsoft has declared that success with Enterprise Mobility comes from empowering System Center Configuration Manager Admins.

    Where do you start?

An EMM initiative will drive many business benefits, such as lower IT Support costs, a highly productive mobile workforce, and happy employees using the latest, greatest technologies. EMM also increases collaboration and connectivity. It is tempting to jump in feet first, buy a handful of devices and pilot a small number of EMM solutions. STOP! Before embarking on an EMM project, you need to break it down into smaller, manageable chunks:

    1. Requirements Analysis
    Do not dive straight into the different solutions available - this may overwhelm you and can also cloud your judgment. You need to identify the needs and goals of your business. What are your drivers? What ROI is required? Who are your customers? What are you trying to achieve?

    2. Define Mobility Policies
This requires extensive research and user participation. As with all great projects, customer buy-in from the outset will ensure successful adoption. Determine your company's needs, such as increasing employee morale and providing the ability to work from home and/or across different locations. Determine employees' needs, such as favoured devices and an apps policy. This also requires sponsorship from business leads, covering all bases with both a top-down and bottom-up approach. Then you can start to create security policies, such as a BYOD policy.

    3. Create a Security Strategy
Security plays a large part in EMM and should be thoroughly researched. Buy-in from your Security and Compliance team from the outset will ensure you do not encounter showstoppers when the project is at a more advanced stage. Create water-tight strategies which minimise the possibility of human error. The general rule of thumb is this: if a user (internal or malicious) can find a loophole, they will exploit it. The Security Strategy ensures there are no holes. Process is important too, and strategies such as remote wipe are a must if a device is lost or stolen, to prevent corporate data leakage.

    4. Create awareness and set expectations
    This project will affect most if not all of your users so it is very important to continue the great work done in Step 1 to raise awareness, which in turn will have a positive effect on user acceptance and the success of this project.

    Choosing the right EMM tools

There are many vendors in this space, all offering a similar feature-set. Some have extras to make their product stand out from the crowd. It is easy to fall into the trap of reading all of the Marketing paraphernalia and assuming one solution is perfect for you, disregarding all of the great research you have done. Each business is different and you need to decide which one is right for YOU, now and in the future. Typically a requirements list will be drawn up and a long list of vendors will be selected for a Request for Proposal (RFP) process. A shortlist will then be piloted, measured objectively to ensure a fair trial. The RFP panel agrees on the successful candidate: the one who not only ticks all of the boxes but also goes beyond the call of duty to demonstrate why they should be the EMM tool of choice.

    Windows Intune

Windows Intune is Microsoft's Cloud-based EMM Solution offering MDM, MAM, MIM and MCM. Windows Intune comes in two 'flavours' - Cloud-only and Unified. Cloud-only is Software as a Service (SaaS), where no on-premises infrastructure is required and the administration console is accessed via a browser. Unified is a hybrid model, integrating the Windows Intune Cloud service with your System Center Configuration Manager (ConfigMgr) on-premises infrastructure. Unified gives the organisation the ability to view and manage users' PCs and mobile devices - both corporate-connected and cloud-based - within a single console. An integrated approach allows you to apply policies and offer software to your users without having to create duplicate infrastructures, separate consoles and new processes. The diagram below highlights typical usage scenarios. If you already have a ConfigMgr infrastructure, it makes sense to build on your existing infrastructure and expertise by integrating with Windows Intune to extend the capabilities and platform support. Windows Intune offers an extensive feature-set, with highlights such as Cross-platform Support for Windows, iOS and Android Devices, Selective Wipe, Granular Device Settings, Corporate App Store, and Certificate, VPN, Wi-Fi and Email Provisioning. For a detailed comparison, here are links to the Service Descriptions:

    · Windows Intune Service Description
    · Mobile Device Management Capabilities in Windows Intune
    · Compliance Settings for Mobile Devices in Configuration Manager



    EMM in Practice


Revisiting my earlier claim of two ConfigMgr Admins extending their 12,000 Windows PC management estate to include 8,500 iOS and Android devices, Andy Turner - Mobile Device Management Technical Lead at Mitchells & Butlers - will talk you through their approach to Enterprise Mobility Management.

    Case study: Mitchells and Butlers: Pub and Restaurant Company Boosts Service, Satisfaction with Managed Mobile Platform

About the Author: Andy Turner is the Technical Lead for the EMM project at Mitchells & Butlers. Previously he was the technical lead on Corporate Application Remediation, Application Virtualisation and Enterprise Management projects for the same organisation. He is part of a small team responsible for the day-to-day management of a mixed estate of over 20,000 devices.

Mitchells & Butlers - the largest operator of restaurants, pubs and bars in the UK - wanted to move away from pen and paper systems by deploying mobile devices that run service-enhancing apps to its retail teams at 1,600 establishments. Before doing so, it needed a way to remotely manage the devices. The company subscribed to Windows Intune, connecting it to their ConfigMgr infrastructure, with the goals of improving customer service, increasing site managers' efficiency and reducing costs.

    The ‘How’

Mitchells & Butlers selected Windows Intune because it consolidates management onto their existing System Center Configuration Manager platform. The flexibility of the hybrid model - where cloud features can be updated without ConfigMgr server downtime - kept the business happy.

The implementation was straightforward and speedy: we upgraded from ConfigMgr 2012 to R2 and integrated Windows Intune within 2 days. All of this work was done in-house, which saved time and money. The in-house implementation was performed by myself and a colleague, Ben Mathews. It increased our knowledge of the product, gave us control of our environment, and meant we understood where the points of failure could be and how to mitigate them. We didn't need to carry out any additional training, as we were using the familiar ConfigMgr tools and consoles we were already using on a daily basis.

    Linear Deployment Strategy

Infrastructure Stream
· Upgrade existing SCCM 2012 R1 infrastructure to R2
· Implement firewall change requirements for Windows Intune
· Configure the Windows Intune Connector in SCCM 2012 R2
· Configure the Company Portal (colour scheme, company branding, Service Desk contact details)
· ADFS DirSync of Active Directory users to the Windows Intune Cloud service
· Configure import/sync of users and devices into SCCM 2012 R2; automate collection assignment based on a device type query
· Deploy baseline policies to devices

iOS Stream
· Acquire an Apple Enterprise Developer Agreement (for APNS and app signing purposes)
· App development
· Implement APNS firewall exceptions to allow communication between iOS devices and the Apple Push Notification Service (essential for enrolment and app installation)
· Bottom out security requirements for the baseline policy with internal governance
· App functionality testing
· App push/pull deployment testing, including a fresh install and an upgrade install
· Engage the Service Desk and prepare call scripting
· Make the app available to end users

Android Stream
· App development
· Bottom out security requirements for the baseline policy with internal governance
· App functionality testing
· App push/pull deployment testing, including a fresh install and an upgrade install
· Engage the Service Desk and prepare call scripting
· Make the app available to end users


    Gotchas


We currently outsource our mobile app development to third-party developers, and we worked closely with them to ensure a smooth go-live across all of our outlets. The ConfigMgr and Windows Intune configuration was a straightforward process; however, I'd like to share a few gotchas which may save you time when you embark on a similar endeavour.

· If an app is developed by a third party, it needs to be signed with your own in-house certificates, otherwise it contravenes your Apple agreement. Extra time was required to obtain an Apple Developer certificate to carry out this step. By working closely with our third-party software suppliers we overcame this initial problem.

· An OEM shipped devices directly to our retail outlets with instructions for user enrolment. On a small number of devices the date and time settings were wrong, so Kerberos authentication failed. This necessitated a local site visit to ascertain the source of the problem; the issue was immediately identified, resolved and added to the Service Desk Team's core script to increase first-level call resolution. N.B. if a device is stored in a cupboard for a few weeks untouched, it may revert to an earlier date, and this will cause Kerberos authentication failure (and some head-scratching), especially if the device isn't able to collect its time settings from an internet time server.
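The date-and-time gotcha is easy to reason about: Kerberos rejects authentication when client and server clocks differ by more than the configured tolerance, which defaults to five minutes in Active Directory. A provisioning or diagnostic script could sanity-check the skew before enrolment. The Python sketch below is illustrative only - the 300-second figure is the standard default, but the function and variable names are hypothetical:

```python
# Illustrative pre-enrolment check for Kerberos clock skew.
# 300 seconds is the Active Directory default "Maximum tolerance for
# computer clock synchronization"; everything else is hypothetical.
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(seconds=300)

def clock_skew_ok(device_time, reference_time):
    """True if the device clock is close enough for Kerberos to succeed."""
    return abs(device_time - reference_time) <= MAX_SKEW

now = datetime(2014, 6, 1, 12, 0, tzinfo=timezone.utc)
assert clock_skew_ok(now + timedelta(minutes=2), now)    # within tolerance
# A device left in a cupboard for weeks, unable to reach a time server:
assert not clock_skew_ok(now - timedelta(weeks=3), now)  # Kerberos fails
```

In practice the fix is simply to correct the device clock (or let it reach an internet time server) before enrolment, which is why this check earned a place in the Service Desk call script.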

· Understand the Apple Push Notification Service (APNS) and the ports that need to be opened to facilitate it. Full connectivity to APNS is mandatory for iOS device enrolment and app installation. Working closely with the Network and Security team ensured a swift resolution.

    · Help your Service Desk to help you – detailed training on the range of devices they will be supporting and hands on time with the devices is crucial if they have not had previous exposure. A good understanding of the device enrolment, app installation process, and how to use the app once installed are key.

· Thorough deployment testing is a must; where possible, visit external sites and prove the deployment process in a real end-user environment. Poor communication speeds and other environmental factors (competing Wi-Fi networks etc.) can reveal potential problems that aren't necessarily visible in a test lab environment.

    Best Practice

· Having access to a Mac is invaluable for packaging iOS apps and troubleshooting problems with iOS devices. With Xcode you can interrogate an app's manifest data and extract useful information such as the version and bundle identifier, which helps the app creation and deployment processes run smoothly. Using the Xcode console with an iOS device tethered to the Mac also lets you view real-time logging data from the device - a useful diagnostic tool for troubleshooting app installation and enrolment issues.

· When shipping devices from the manufacturer straight to site, it is essential to create clear, concise, step-by-step instructions including screen captures, together with high-level troubleshooting advice. Remember, your information workers may never have used a device like this before.

· Create internal awareness. Mitchells & Butlers sent bulletins to all outlets to ensure everyone was aware of the progress of the project and what was expected of them, such as end-user enrolment of the device. This worked successfully, with only a small proportion of Service Desk calls regarding the hardware date and time configuration being out of sync.

· Early and continuous engagement with the Service Desk is essential. Create a call script for the Service Desk and make ongoing enhancements as unforeseen issues come to light.

· Provide the full range of sample devices to the Service Desk, so they can walk through issues with callers and ensure swift resolution.

· Engage with the Security and Compliance Management team at the start of the project and sustain engagement throughout. This ensured Security Policies were signed off, because the team was fully involved in the whole process.

    In Conclusion

Mitchells & Butlers are happy and confident with the platform. They did the unthinkable by increasing their hardware and OS platform support by 40% without increasing Admin headcount (and stayed sane throughout the deployment). They have met their goals: removing pen and paper systems, improving customer service, increasing site managers' efficiency, and reducing costs.

In addition, the employees love their new devices and find them a vast improvement on the old system. Their involvement from the start was key to the success of this project.

The rapid release cadence of the Windows Intune feature-set gives Mitchells & Butlers the ability to plan their EMM roadmap. Help from the Microsoft Windows Intune Product Group was invaluable; it was really useful having the insight of this deeply technical team and their speedy turnaround of challenges.

    Links
    · Mitchells and Butlers: Pub and Restaurant Company Boosts Service, Satisfaction with Managed Mobile Platform
    · Windows Intune
    · System Center Configuration Manager
    · Enterprise Mobility Suite

    Technical links
    · Publishing LOB app for iOS devices
    · Well known TCP and UDP ports used by Apple software products (2195/2196 are used by APNS)
    · Firewall and Proxy Server Settings for Windows Intune Client Computers 

Are you thinking of rolling out a large project soon, or perhaps you're already midway through? Let us know what you thought of this article via @TechNetUK.

  • Are you ready to migrate? Windows Server 2003 'End of Life' is coming July 14th 2015

The following post was contributed by Steve Brennan, Microsoft Business Development Manager at QA.

    Has it really been over a decade since Windows Server 2003 hit the market?
    Technology has come a long way in 10 years. Today’s servers run increased workloads for big data, mobile application hosting, social collaboration platforms, streaming video and web hosting. One thing that hasn’t changed on the server front, though, is the requirement for 24/7 performance.

    But what happens to this performance when Microsoft stops supporting your operating system?

    With the end of life support for Windows Server 2003 on July 14th 2015 rapidly approaching, now is the time to start planning your migration process.

    What does end of support for Windows Server 2003 mean?
    End of support for Windows Server 2003 means:

    • No updates - 37 critical updates were released in 2013 for Windows Server 2003/R2 under Extended Support. No updates will be developed or released after end of support.
• No compliance - failure to meet regulatory and industry standards can be devastating, and on an unsupported platform compliance with many of those standards can no longer be achieved.
• No application support - many applications will also cease to be supported once the operating system they run on is unsupported. This includes all Microsoft applications.

    What changes when you migrate to Windows Server 2012?
With the architectural move from 32-bit to 64-bit technology, everything changes in Windows Server 2012.

As you migrate your IT infrastructure to Windows Server 2012 you benefit from reduced cost of ownership through improved resource management, better security, high scalability, improved performance, increased functionality, in-box virtualisation and cloud support, improved manageability and on-going product support.

With the average Windows Server migration taking over 200 days, now is the time to act and start planning.

    What next?
To help you plan your migration, in partnership with Microsoft, QA's industry guru Paul Gregory is delivering a free, live, one-hour webinar, "Migrating from Windows Server 2003 to Windows Server 2012", on 21st July 2014 at 2pm.

During the webinar, illustrated via live interactive product demonstrations, Paul will walk through the process of migrating from Windows Server 2003 to Windows Server 2012. He will discuss what end of support for Windows Server 2003 means, how you can transform your infrastructure with Windows Server 2012 R2, and the key things you need to plan and consider for a successful migration.

If you can't make the live event, don't worry: we will also be recording the live broadcast and making it available on-demand for you to watch and share with your colleagues as soon as the event finishes.

What's stopping you migrating off Windows Server 2003? Let us know via @TechNetUK. For more information on support, check out the Microsoft product support lifecycle.

  • Making an impact with Lync in education

    The following post was contributed by Ben Lee, the team leader for Unified Communications at Waterstons, a business and IT consultancy based in Durham and London.

As an IT Professional (I am very proud of that term), one of the things that always provides me with the most job satisfaction is seeing people benefit from the solutions that I have implemented, particularly if the deployment has been challenging. As a consultant, sometimes I miss out on being able to fully appreciate the real-world impact of my work, as I am not always involved in the entire project delivery. Having said that, I'm fortunate enough to work for a company with long-standing client relationships built on genuine partnership, so the satisfaction ratio for me is high. Unfortunately, however good the client relationship is, it doesn't guarantee that the road to delivery and the golden moment of success is any easier. Technology will always be tricky, but it is my job to minimise the risks and reach a successful delivery (this is where PowerShell helps these days, right?).

That's why the theme of this month's TechNet Flash caught my eye - "Making an impact". It made me recall some recent unexpected feedback I'd heard when catching up with one of our site managers. He mentioned how some users had really taken a Lync deployment to heart and were using it in a way we hadn't initially envisaged…

The project in question involves Durham University Business School. The Business School has worked with Waterstons for many years; we provide a full-time IT manager who coordinates the onsite IT support, manages the infrastructure and handles some aspects of project delivery for them.


Back in March 2012 the Business School embarked on a large-scale redevelopment programme and essentially had to move out of their building, spreading themselves over five sites across Durham city. I was asked to evaluate their options to help reduce the impact of this move on working practices, and it was agreed that their legacy OCS 2007 environment would be upgraded to Lync 2010. At the time the subsequent deployment suffered from several technical issues and we had a hard time driving user adoption. However, in early 2013 the technology design for the Business School's new building required a solution to underpin the School's collaboration and video conferencing needs. Once again it seemed that Lync would fit the bill, and the improvements added in Lync 2013 would better address their needs. It was also possible this time round to revisit some of the technical limitations we had encountered during the 2010 deployment that had caused low usage.

The project was a Microsoft geek's dream, as it included all sorts of cool technologies: Active Directory trusts, identity synchronisation, cross-domain authentication, SIP trunks for dial-in conferencing, multiple Lync pools and Polycom hardware video conferencing units. After a few months of tight deadlines and interesting issues that required some lateral thinking, we had successfully sorted out the technology side of the project and were satisfied that everything was working as it should, which left us with user training and adoption. This time round, having learned from our mistakes, our onsite IT manager embarked on a full programme of user training sessions, backed up by a SharePoint Lync wiki where users can share tips and tricks. Unlike the first deployment, we started to see proper user adoption, with people communicating cross-department and with external partners. So far so good - but roll forward a few months and Lync had really started to become embedded in the Business School users' working practices.

    image
    The Business School were embarking on a new study program in partnership with a large accountancy organisation, which sent students onsite before they returned to continue studying remotely. The Business School suddenly had a requirement to run remote revision sessions while the students were offsite. In the past there had been issues with the technology requirements to support such sessions – but not any more with Lync. After a few practice sessions to prove the concept they were away!

    Using Lync the Business School were able to hold four half-day sessions over two days with at least 50 students per slot. The sessions were configured so that the lecturer locked down the room, muted all the students and shared their screen and video. Students could still interact via chat and the recently added Lync Q&A functionality. The students joined the sessions using their web browser and needed nothing installed beyond the Lync web plug-in. They had all been asked ahead of time to request a USB headset for the audio portion of the calls, but those who forgot were able to dial in to the meeting using the Lync SIP trunk. This is the sort of use case that should work perfectly in theory but is usually fraught with issues where people can't join or have audio problems. We needn't have worried. The whole process apparently worked so well that on the second day there was zero IT involvement at all: the lecturer and program manager set everything up themselves, invited the students and managed the conference from start to finish! To top it all off they recorded the sessions using the built-in Lync recording manager and then made them available for anyone who hadn't been able to attend at the time.

    The study program was so successful that the Business School is planning to run other courses in this way, and will continue to use Lync to deliver the remote training elements of the program!

    So there you have it. I don't know about you, but like I said at the start there is really nothing better than hearing how something you worked hard to deploy has been used in new and interesting ways. It's the sort of job satisfaction you just can't beat, and hopefully something you can relate to. I'm looking forward to hearing where else Lync might make an impact for Durham University Business School.

    We loved hearing how this implementation was put to good use – have you got a similar story? We'd love to hear it; just reach out to us at ukitpro@microsoft.com or via @TechNetUK.

  • Office 365 MCSA – Halfway House

    There are two certification examinations that make up the MCSA Office 365. The background, requirements and details are listed here and are partially shown in the graphic below.

    o365mcsa

    In short, to certify you need to pass 70-346 and 70-347. (The 70 simply identifies the retail examination; there are other codes for Academic and Academy exams, but the content is identical, as is the passing score.)

    One of the problems for Microsoft in producing an Office 365 qualification is that the product encompasses so much and is continually changing and updating. So although the MCITP in Office 365 and the Office 365 for Small Business qualifications are not that old, their content is (in my opinion) no longer relevant today.

    Why am I telling you all this? Well, I wrote a blog post last month about self-study, relying on my past life as an MCT (Microsoft Certified Trainer). As a trainer I advocate all manner of training methods, not least MOC (Microsoft Official Curriculum) and MOAC (Microsoft Official Academic Curriculum) – indeed I have written them before now, for Windows Server 2012 and for Windows 8. Classroom-based training is one of the methods that works very well for thousands of delegates and students every year.

    There are those, however, for whom the course is out of financial range or the time required away from work is too great. If that is the case for you, the reader, then this series of blog posts is definitely for you.

    To recap, we have so far dealt with the theory of making time, finding resources and actually studying for technical exams. Since I hadn't taken any Microsoft exams since March, and had been exceptionally sub-optimal in the beta versions of the Office 365 tests, I decided to 'put my money where my mouth is'. (For the reader: sub-optimal means a score below 700, which is the score required to pass the exam. I don't consider it a fail; especially when teaching young people and apprentices, 'failure' is such a hard concept to grasp, since the modern schools system doesn't really have competition or failure in its curriculum. Wrongly, in my opinion, but that is an entirely different subject for a post all of its own.)

    So I set myself the target of passing the Office 365 MCSA before July this year – and yes, I do like a challenge. Unfortunately I was unable to book both exams within the time limit (I was not about to forgo my holiday at the Glastonbury Festival, or give my tickets to our editor Steven Mullaghan, much to his disappointment). The second exam will be in July sometime.

    I wasn't left with much study time; my role as a Technical Evangelist keeps me on the move, on my toes and rather busy, to say the least. I posted my decision to retake within June on 2nd June, and since then I have been on MVP Roadshows presenting on People-Centric IT, System Center 2012 R2 IT Camps and planning for FY15 (which starts next week), racing round the Donington Park grand prix circuit with the Microsoft Motorbike Club (proof below) and manning the Cloud World Forum stand at London Olympia.

    bike1 (1 of 1)

    In between these great events I have been carrying on with normal family life and preparing for a big year ahead in my role as a Freemason. So I haven't had all that much time to devote to the exam (can you hear the beginnings of an excuse for being sub-optimal?).


    The purpose of my little story above is to explain that the time available significantly alters the methods I use to study. Given time, I might explore every avenue of the product, read books, use it in anger, go to the Microsoft Virtual Academy (MVA) and check out the Jump Starts and other courses. I might even use my status as an MCT to access the MOC courseware library, download the virtual machines and the trainer materials, and run the course for myself.

    Sadly there was not enough time for those methods on this occasion, although I did use the MVA courses designed to support the 346 test and the 347 one – an excellent resource of free technical training from highly skilled and technical Microsoft staff and partners. While we're discussing the MVA, why not sign up for the UK initiative, MVA Hero: a way of choosing your path for study and having a bit of fun at the same time. If that doesn't appeal to you, don't worry – the MVA search engine will find the course you want.

    So, with such limited time, I booked the first exam on 2nd June – which meant I had to take it! A good trick to stop you backing out: I don't have £100 to waste on missed or sub-optimal exams. For those in the know, MCTs receive a 50% discount on exam vouchers – another good reason to become one – and Microsoft employees receive free vouchers, another great reason for applying to work here with such great people and resources, although you do have to report your results and the perceived peer pressure to be 'optimal' is huge, for me at least.

    I was left with about six days to go, and in the middle was a really big weekend event that needed preparation, leaving no chance to study for at least three of those days.

    I took the last-minute cramming approach and spent 14 hours on Sunday going through the MVA course and scouring TechNet articles on everything from removing licences from Office 365 using PowerShell to deploying a redundant AD FS infrastructure. Trust me, there were so many pointers in the course to areas to really work hard on that I was not short of ideas.
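    To give you a flavour of that licence-removal task, here is a minimal sketch using the MSOnline module (the user principal name and the tenant SKU shown are illustrative; your AccountSkuId values will differ):

```powershell
# Requires the MSOnline (Azure AD for Office 365) PowerShell module
Import-Module MSOnline
Connect-MsolService   # prompts for tenant administrator credentials

# See which licences a user currently holds
$user = Get-MsolUser -UserPrincipalName "alice@contoso.onmicrosoft.com"
$user.Licenses | Select-Object AccountSkuId

# Remove one of them (the SKU shown is an example)
Set-MsolUserLicense -UserPrincipalName "alice@contoso.onmicrosoft.com" `
                    -RemoveLicenses "contoso:ENTERPRISEPACK"
```

    Walking through a handful of cmdlets like this against a trial tenant is far more memorable than reading the article alone.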

    I had booked the exam for 0900, about 60 miles from home (there were not many seats available in Prometric test centres in Birmingham, so it's really great news that Pearson VUE have also recently been awarded the contract to provide tests from September 2014).

    I set my alarm for 0400 and woke up at 0355, then set about a last-minute – or rather last three hours of – revision of the key topics, following the advice from my previous post. My technique is to copy the areas to be studied into a OneNote notebook and to create links to all the relevant TechNet articles, MVA courses, other blogs and PDFs.

    Remember, the people who write the exams have to get the content and ideas from somewhere, and once you have taken some exams you will quickly find which Microsoft-approved resources are useful and which are not.

    Final piece of advice: don't get bogged down in too much trivia. Do run through wizards live to see what you can and cannot do at each stage to achieve something. Here is an example (not from my exam, as that would breach the NDA).

    Suppose you are looking at Exchange Online and want to work out how to track or manage malware detections in your email. There are several ways to do it, but not all of them would answer the question, so read the question carefully.

    In this example, the Exchange Online Protection section has a malware filter where settings and rules are created, and a quarantine section where the relevant message would be listed. But if you wanted to track malware received or sent over a period of between 7 and 60 days, you would not use the Exchange management portal; you would use the Office 365 admin portal and choose Reports. See below.

    mal1

    These are taken from the Office 365 portal and clearly show what is asked for but the question may ask specifically for Exchange Online – which may confuse you.

    mal2

    Clicking on malware detections in received email would show the second screen, where you can easily answer the question. None of this is available in the Exchange management portal.

    I stress that this is not a question I have had or seen. It is representative of the tricks and traps that such a complex product, or suite of products, can lead you into.

    The question itself (i.e. in your Exchange Online deployment you want to track malware in received email over the last 60 days and identify the recipient of the greatest quantity of malware) is fairly simple.

    If you had not drilled down through all the available menus and sections you would NEVER come across this section, buried three levels down.

    Oh, and for those of you who rightly noticed, I haven't mentioned PowerShell much: the MSOnline module is HUGE and the MSOL cmdlets appear very regularly in both study and test (but you expected that, didn't you!).

    So you have all been very patient. I took the exam yesterday. See below

    pass346

    All Microsoft exams require a passing score of 700; the maximum score is 1000. The theory is that once you reach 700 there is absolutely no difference between a score of 700 and one of 900, because different candidates see different questions, the exam has a set number of question types, and in each area they are all marked and scored differently.

    I have read the theory and seen the video in which Liberty Munson, Microsoft's principal psychometrician, explains this, and I confess I absolutely do not understand it.

    Suffice to say I was happy with the result and will now hope to give myself more time and try to produce a couple of posts mid study for the 70-347 exam.

    The exam this week was all about setting up and getting working with Office 365: security, connections and so on. The next exam is all about actually working with the products that make up Office 365 (Exchange Online, SharePoint Online and Lync Online, as well as OneDrive).

    This is very definitely a greater challenge for me. Watch this space – I cannot commit to a date as I have an even busier July than I did June (I am off to Seattle for the internal version of TechEd, called TechReady) and I also have to get ready for my next trip to the race track!

    Don’t forget to ping me any questions you may have @edbaker1965 or leave a comment here.

    The post Office 365 MCSA – Halfway House appeared first on Blogg(Ed).

  • How can an IT Pro make a tangible business impact?

    Alan Richards

    The following post was contributed by Alan Richards, Senior Consultant at Foundation SP and SharePoint MVP.

    Business and IT are inextricably linked – there is no denying the fact – and if that's the case then the IT Professional has a big part to play in the success of the business.

    I am employed as a Senior Consultant at Foundation SP, which means I get dropped into a lot of different businesses that are looking to maximise their use of IT to benefit the business as a whole. But what skills do I have that mean I can make a difference to the business?

    Let me first give you an insight into my career. I have worked for 19 years in IT, with 18 of those spent in education and the last year working for an IT consultancy. I have various Microsoft certifications, and for the last 3 years I've been proud to be a Microsoft MVP in the SharePoint discipline. So I think my IT skills are OK – but how is that going to make an impact in a business environment, beyond the obvious one of keeping everything running? Which, let's face it, just costs the business money; it doesn't actually generate income.

    But can IT generate money for a business – or, more pointedly, can it generate tangible, auditable income? To be honest I don't think so. What IT can do is ensure that the people in a business who do generate income have the tools to do so, and that is where an IT Professional needs some additional skills on top of the usual 'IT ones'.

    When I go into businesses I am not being brought in because my skills are any better than those of their in-house IT team; it's because I bring a different skillset, one that to an extent has nothing at all to do with my IT skills. The skillset I bring is more about being able to understand the business's needs and what it wants to achieve in both the short and long term. These skills are more about listening to the people in the business – the directors, but more importantly the people at the 'coal face' – because, let's face it, the directors can shape a business but it's the workers who will generate the income, so if you make life easy for them then the business will thrive.

    So let's assume you're an IT Professional who has the required skills, you have listened to all sections of the business to understand their needs, and you discover a set of technologies that will massively increase the profits for the business – but it's going to bring quite a bit of change. 'Change management' is a phrase that is used a lot, yet it is one of the skills that is often overlooked, and probably one of the most important. Any IT Professional can make changes and introduce new technologies, but the IT Professional who can bring the business along as they make the changes will be the most successful. No one likes change – that's a fact – so as an IT Professional you need to ensure that the change is managed: that users feel happy with the change, that they understand the reasons for it and that they can see the long-term benefits. If you manage all that then not only will the business flourish but you will also benefit from happy users.

    Change management can come in many forms, but at its core is communication. An IT Professional's ability to communicate is core to success. My role when I work with companies is normally to understand the business and recommend ways to improve processes, or new technologies that will improve efficiency and allow users to carry out their duties more easily – in some cases remotely. After the business directors have bought into the ideas, the next and most important job is to ensure the workforce buy in as well; without that buy-in nothing will ever improve. You can introduce as much new technology as you like, but if you don't communicate the change effectively then the users will become unhappy, disenfranchised and ultimately less effective for the business.

    The title of this article asked how an IT Pro can make a tangible impact on a business. For me the core skill, outside the obvious IT skillset, has to be communication, which hopefully the text above illustrates. If you get communication right – whether by email, training or workshop – then the rest, such as change management, will follow.

    Do you agree with Alan that the ability to adapt to change is one of the most prominent skills in our environment? What other impactful skills should be included? Let us know in the comments section or via @technetuk.

  • Office 365 Administrative Roles

    In Office 365 there are a number of roles available with a myriad of differing levels of access and permissions.

    This link on the TechNet site is a huge help, both when deploying and when studying for an exam as I am (on Monday, no less). These roles can be set on the users and groups / settings page in the Office 365 admin center.

    o365admin

    Short and sweet, but worth remembering how to grant administrative privileges.
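    The same role assignments can also be scripted with the MSOnline module; here is a minimal sketch (the role name and user address are illustrative):

```powershell
Import-Module MSOnline
Connect-MsolService   # prompts for tenant administrator credentials

# List the administrative roles available in the tenant
Get-MsolRole | Select-Object Name, Description

# Add a user to one of them
Add-MsolRoleMember -RoleName "Service Support Administrator" `
                   -RoleMemberEmailAddress "bob@contoso.onmicrosoft.com"
```

    Running Get-MsolRole first is a handy way to revise the full list of roles and what each one does.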

    Remember those Examination objectives

    skill1

    skill2

    I will feed back after the exam on Monday (and a full weekend of cramming).

    The post Office 365 Administrative Roles appeared first on Blogg(Ed).

  • The Top 5 Coolest Features of SharePoint 2013 Workflows

    imageThe following post is contributed by Stuart Conway, Director of Development at Metia.

    I have been digging into SharePoint 2013 quite a bit recently and must say, as an application developer, I am really quite impressed with how the platform has matured. Gone are the days where you need access to “special folders” or have to develop custom Web Parts to build in added functionality. When we talk about SharePoint with clients, we talk a lot about out-of-the-box functionality versus custom development – the implication being that developing custom controls and adding them to SharePoint is more time-intensive than developing, say, a user control for an ASP.NET application. This is no longer the case. I am especially impressed with the improvements to the workflow functionality provided by SharePoint 2013.

    A workflow in SharePoint is a job – or a series of jobs or tasks – that performs some action. Workflows are triggered manually or in response to a site or list event, and are most useful in automating existing conceptual or manual work processes. They are very powerful.

    As an example, one of the most common workflows in SharePoint is the approval workflow. It works like this:

    1. Someone makes a change to a content item. 
    2. Once saved or published, the approval workflow is triggered.
    3. The approval workflow automatically notifies a designated approver that there is an item needing approval.
    4. The approver approves the content, and the content is then published live.

    SharePoint Workflow

    These are some of the features of SharePoint 2013 workflows I find to be most exciting:

    #1 - Web services
    That’s right! Now you can call web services right from a workflow. This allows you to integrate disparate systems and centralize where data is managed. For example, you can pull in data from third-party external websites that may not be running SharePoint, and work with the data on your SharePoint intranet site. When you’re finished, it can be posted back to the external sites. You can also update a contact list in SharePoint and have the changes populate your CRM system.
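    To get a feel for what the workflow's Call HTTP Web Service action is doing under the covers, here is the same idea as a PowerShell sketch (the URL and field names are hypothetical examples, not a real endpoint):

```powershell
# Fetch JSON from a (hypothetical) external REST endpoint
$contacts = Invoke-RestMethod -Uri "https://api.example.com/contacts" -Method Get
$contacts | ForEach-Object { $_.DisplayName }

# Work with the data, then post a change back
$updated = @{ DisplayName = "New Name" } | ConvertTo-Json
Invoke-RestMethod -Uri "https://api.example.com/contacts/42" -Method Put `
                  -Body $updated -ContentType "application/json"
```

    The workflow action takes the same ingredients – a URL, an HTTP verb, a request body and headers – and surfaces the JSON response as workflow variables.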

    #2 – Loops
    Looping is very powerful. You can design the workflow to run a set of actions a specific number of times, or until a certain condition is met. With the ability to set variables, this becomes extremely useful. Say I want to delete all documents in a document library that were approved by a certain user. I could set up a loop to open each folder in the library, then another loop to look through each file in the current folder. By examining the document properties of each file, I could determine whether or not to delete it.

    #3 – Stages
    Stages serve two purposes:

    1. Visually organize a related set of actions. 
    2. Control program flow.

    You can organize your actions into discrete “functions” and then call them from other stages. For example, I could develop a workflow that sends an email to all employees and customers, where the employee email contains different data from the customer email. I would set up a workflow containing two stages: SendEmployeeEmail and SendCustomerEmail. Then, using our handy loop on a list of recipients, I could use an “If” statement to check whether the contact is an employee or a customer. If the contact is an employee, I use the special workflow command “Go To” and tell the workflow to Go To SendEmployeeEmail; otherwise I tell it to Go To SendCustomerEmail.
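    The stage-and-Go-To flow described above can be sketched as a plain script (the contact data and stage names here are illustrative, and this is ordinary PowerShell rather than the declarative syntax SharePoint Designer actually uses):

```powershell
# Illustrative contact list; in the real workflow this would be a SharePoint list
$contacts = @(
    @{ Name = "Alice"; Type = "Employee" },
    @{ Name = "Bob";   Type = "Customer" }
)

# Each "stage" becomes a function
function SendEmployeeEmail($contact) { Write-Output "Employee email to $($contact.Name)" }
function SendCustomerEmail($contact) { Write-Output "Customer email to $($contact.Name)" }

# The loop plus If/Go To: dispatch each contact to the right stage
foreach ($contact in $contacts) {
    if ($contact.Type -eq "Employee") { SendEmployeeEmail $contact }
    else                              { SendCustomerEmail $contact }
}
```

    The workflow version is the same shape: a loop over recipients, an If on the contact type, and a Go To into the matching stage.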

    #4 – Pausing (!!)
    Pausing allows a workflow to stop execution for a specific duration. You can pause for a minute, an hour or a day, or pause until a specific date. I have a workflow that finds broken site links and then sends an email. I then programmed it to call a second workflow whose only purpose is to wait a week before calling the first workflow again, therefore generating a weekly email. And just like that we can have simple scheduled tasks in SharePoint.

    #5 - Visual Designer
    There are a number of ways that you can create SharePoint workflows. In past versions of SharePoint, we created text-based workflows by typing or adding from a list of defined conditions and actions. Now, you can develop workflows using Visio by dragging shapes onto the drawing and setting a few properties. There is an added benefit of being able to share these workflow diagrams with people who are visual learners and don’t get much out of reading through pseudocode.

    Before you assume that SharePoint doesn’t support a feature or capability that you need, check out the SharePoint 2013 feature list. For a more detailed look at the new SharePoint platform, check out these links:

  • Power BI for IT Professionals

    Chris headshotThe following post is contributed by Chris Webb a trainer and consultant specialising in Power BI and SQL Server Analysis Services.

    Sooner or later everyone who works in IT will need to build a report. Whether it’s for someone in the business (that guy in finance who wants to see up-to-date sales figures every morning at 9am) or for your own personal use (so you can keep track of the number of cups of coffee you’re drinking each day), anything that can be measured can be reported on.

    Unsurprisingly, there are already thousands of tools out there for building reports and dashboards. What makes Power BI, Microsoft’s new set of tools for this purpose, different? Well, for a start it builds on a tool you already know – the number one tool for data analysis in the world – Excel. A series of new Excel add-ins and new features in Excel 2013 (Power Query, Power Pivot, Power View and Power Map) make it much easier to load data into Excel, slice and dice it, and build striking data visualisations. This means you don’t have to learn a new technology from scratch – instead you get to build on your existing skills while taking advantage of the increased speed and flexibility that these tools give you. There’s also a cloud-based service, Power BI for Office 365, which enables you to publish your reports to the cloud and even ask it questions using natural language querying.

    Why is Power BI important for IT Professionals? Like everyone else nowadays, IT Professionals are drowning in data that is potentially useful but most often is left unanalysed. Log files, third-party application databases, web services, automatically-generated reports that don’t show quite what you need – the list is endless. Power BI will allow you to build reports from this data quickly and easily, with very little specialised knowledge, so you can start getting some value from it. Once you have worked out what the top cause of calls to the helpdesk is, or which employees are close to their mailbox size limit, or why that data load always fails at 9:36pm on a Tuesday night, you can take some action.

    You can find out more about Power BI at http://www.microsoft.com/en-us/powerbi. To get a quick idea of what you can do with Power BI there’s a demo video here that shows how to use it to analyse British road traffic accident data. There’s also a great hour-long presentation you can watch online here that describes how Microsoft’s own internal IT is using Power BI to monitor issues in Azure data centres.

    SQLBitsNewLogo

    An even better way to learn about Power BI is to attend the upcoming SQLBits conference taking place in Telford on July 17th-19th. SQLBits is Europe’s biggest SQL Server and Microsoft Business Intelligence conference: over 1000 people have already registered, and it will feature sessions from world-class experts. There are lots of Power BI-related sessions, including two preconference seminars, and the best part about SQLBits is that it’s free to attend on Saturday July 19th!

    You can view the full conference agenda here and register for the conference here.

    How Power BI savvy are you? Are you 'drowning in data' and planning to enhance your skills? Let us know via @technetuk.

  • 20 Years of TechEd Europe

    image

    The following post is a Q&A interview with somewhat of a TechEd Europe legend. Peter Bryant, a freelance IT Consultant here in the UK, has been to every single TechEd Europe conference since 1994, when Microsoft first invaded the shores of Bournemouth, and he is now the last person standing to have attended every one.

    With early bird tickets for Barcelona 2014 closing in one month, Peter very kindly offered up his time to go back through the years for an exclusive TechEd 20 year anniversary interview with TechNet UK.

    Hi Peter, tell us a little bit about yourself…
    clip_image005
    “I’m a freelance IT Consultant in the UK, offering the benefits of the pain and bitter experience of 30 years in large and small company IT to (largely local) organisations who need the skill set but cannot rationally employ it on a permanent basis.
    Work is quite diverse: project management; supplier management; problem solving for VARs; rationalisation of disparate data into a well-ordered, information-rich database environment with Access and SQL. I’m not a reseller of product; instead I lease out the gap between my ears.”

    So you’ll be marking your 23rd appearance* this year, what is it about the conference you find so appealing?
    “It’s the range of things you can experience – keynotes; sessions, where you can engage with people at the front line of a specific product, as program managers are almost always around. This means you can normally get a conversation with them, and perhaps even continue it after the conference. It gives you the means to deal with issues you’re having here and now; there probably isn’t any other channel where you can deal with such issues in that way. You’ve also got access to a whole spectrum of Microsoft expertise as part of the hands-on labs and, of course, the exhibition space.
    More generally, you’ve the opportunity to get up to date on something new, as well as engage with peers and experts discussing what may be happening in the near future. It’s a real means to get into the Microsoft ecosystem and just pick up anything you feel you’ve missed out on, or to ensure you stay bang up to date.
    It’s a long week, but to try and catch all of this outside the conference environment within a week would be far more difficult, if not impossible.”
    *23rd because in ’06, ’07 and ’08 there were two TechEd Europe events back to back – Dev & IT Pro

    image
    With the 20th Anniversary of TechEd Europe looming, how have you witnessed the conference change over the years?
    “It’s changed in so many ways. Going back to the first TEE conference in Bournemouth in ’94, it was quite an exciting innovation for Microsoft – North America may have happened in ’93, but this was the first conference in Europe. There was a great spirit about it; people were trying things out for the first time. It was a bit of an adventure really.

    There were some notable things done in ‘94. The pavilion at the end of Bournemouth Pier was taken over for all the content that was to be delivered under Non-Disclosure Agreements, there was a proper sense of secrecy walking into it and discovering what was better known then as ‘Chicago’ and would be released as Windows 95. Before you signed the NDA and entered the space, you could only assume that was the topic being discussed! Microsoft UK hired the Sega centre for an evening of games for the UK delegates, playing on what at the time was some pretty cutting edge stuff, but obviously not up to scratch with today’s current gaming platforms.

    TechEd has consistently been an information-trawl event, quite often in great locations. It has a strong history of explaining what Microsoft are aiming for and intend to achieve, so of course you have the ability as an attendee to ensure what you’re doing for the future can align to that. One of the things I used to say to my Director when employed (which usually helped with justifying the expense!) was: “it allows me to make informed decisions that are probably going to be right, rather than make intelligent guesses that may be wrong” – it allows you to be more certain of the environment you are planning for your business or your clients, depending on what you do.

    image

    The changes in the environment have been phenomenal – go back to 1994 and you have Windows 3.11 (which came on 8 floppy disks) and Office 4.2 (available on about 25 floppies, with a further 8 for Access). It was such a tight, close-knit environment back then that it was entirely possible to know pretty much everything there was to know about Windows and Office. Now, with the immense range of Microsoft products, it’s nigh on impossible to keep a handle on what they all are, never mind what they all do. So the ecosystem has certainly evolved out of all recognition from what it was in ’94.”


    How have modern technology and the internet changed the conference since ’94?
    “Well, Bill hadn’t written his email at that point. Even then, Microsoft’s planning for the internet was the proprietary MSN (which was in beta up to the Windows 95 launch in August ’95, when it too went live). The internet, for all intents and purposes, didn’t exist for Microsoft in those days. Now it’s all-pervasive and we’re permanently connected and online; it becomes a big issue when you lose connection, even for a brief period of time. It’s a completely different world!

    Back in ’94 the session PowerPoints were pre-printed into two books about 3’’ thick, and you tore out the pages you wanted for your sessions. We’ve been through phases when there have been applications for iPAQs (and more recently on Windows Phone and iOS) that enabled you to do the same sort of thing. Today you just switch on the computer, get the slides in front of you and take notes as you go. A recent change is streaming the conference keynotes live to the outside world. Although this is great for those who can’t make it, in the eyes of many delegates it’s not necessarily such a good thing, as the presenters are now all suited and booted, as opposed to jeans and a t-shirt.
    I did joke with someone 4-5 years ago, how long before we all do this online in Second Life and no one has to turn up. It could all be done that way but you wouldn’t have the same physical interactions by being in the venue, it wouldn’t be the same. It’s good to get away and be encased in that environment. I think if it went to an internet delivered conference, we’d be poorer for it. The greatest change the internet has made to the conference is the ability to view all sessions on demand. If you’re late for a session, or can’t fit in, you can now go and view it online afterwards, whereas 10 years ago, if you didn’t make it, you didn’t see it and that was the end of it.”

    As online learning is now so prominent, does this mean the value of TechEd is reduced, has it made it more difficult to justify the price of the conference to your boss?
    “As I’ve been my own boss for 10 years it’s less of an argument now, but you still have to justify spending money. It is harder to justify today because of things like the Microsoft Virtual Academy, but there is still huge value in getting your hands on a program manager (metaphorically speaking!) who’s responsible for a feature you’re having problems with or have suggestions for. You might be lucky and find that person through the internet and engage with them that way, but at the conference you’re there face to face; you can explain and expand in a way you just cannot do in a virtual environment, and that’s a key benefit you will only get by being there. There’s also value, to some extent, in just absorbing the tenor of the conference and finding out what’s happening in the exhibition space.
    It can be difficult to get all of that outside of a physical conference environment. You could almost regard it as 4 days of 5-6 sessions of onsite training. If you choose your sessions well, you wouldn’t be able to buy that experience in any classroom around the country for close to the cost.”

    What’s your most memorable TechEd moment and why?
    “I’ve got quite a few, but one in particular that stands out was about a decade ago, when the UN High Commissioner for Refugees (UNHCR) presented at a keynote. They came on stage and presented their challenge: when dealing with disasters such as large earthquakes and floods, they had great difficulty matching up relatives and family members. As people were evacuated, children would be separated from their parents and husbands from their wives. They posed a challenge to the conference, which was then actually taken up by a few delegates. By the closing keynote, an application had been developed during the course of the week and presented to UNHCR, which gave them the ability to use photos, descriptions and names to do loose matching of information and try to put these people back together. It was a very tangible demonstration of what IT could do in the real world, rather than the theoretical conference environment.
    That was a really great achievement; the people who created it did a terrific job, actually changed something and made a difference.

    In 2012 Jeffrey Snover (Mr PowerShell) was presenting part of the keynote, about datacentre automation with OMI, and I felt there were similarities to the HAL created in Windows NT. So I tweeted the suggestion to him that (perhaps) he really should be talking about the DAL (Datacentre Abstraction Layer). I bumped into him the following day, we discussed it further, and he took it up and used it (it now appears in TechNet articles). Shame I didn’t get to discuss rights first – the term replaced “software controlled datacentre”.

    Being able to have some influence like that face to face is rather fun, ultimately it’s not really of real world significance, but when you see it in a document and you know you’re the person who coined the phrase and somebody’s liked it and used it; it’s nice.”

    Have you any favourite keynote moments from over the years?
    “In earlier years the waiting time for keynotes always tended to be a laugh. In ’94 it was Channel 4 Breakfast TV, then about a decade ago we had a team of drummers performing. We were then asked to look below our seats, to discover we each had a drum – it was audience participation time, so there were a few thousand of us performing a drum roll together. It was quite something, and not really a “tech” thing!
    Another favourite was at Nice in ’98 (or ’99), when Andrew King, the conference organiser, was on stage the morning after the party the night before. After informing us of quite how much beer and wine had been drunk, he told how a delegate had asked for guidance on how to adjust the TechEd baseball hat that had been given out (seriously!). Andrew felt obliged to give a full product demonstration on stage, to much amusement.”

    Over the years, what tech announcement has created the biggest buzz?
    “Blimey, that’s a toughie. I suppose for me it’s a bit of an oldie – getting first-time access in ’94 to what became Windows 95. Back then there was a tangible buzz among people seeing 95 for the first time, as it was such a big change from the Program Manager and File Manager of the old Windows 3.11 days. Explorer was described as File Manager on steroids.

    Another ‘big buzz’ was the launch of the Compaq iPAQ in 2000 – the first handheld Windows device. There was a queue around the corner to get one, and if you hadn’t pre-ordered you were unlikely to. It was a pretty damn fine device for its age. I think Microsoft missed a chance to do the same with Windows Phone 7 & 8, but there was the Surface offer.”

    You’ve collected vast amounts of swag over the years, do you have a favourite?
    “Well, I still use the ’98 conference bag for my laptop to this day. Some of the TechNet shirts from the earlier days were great – a really good polo shirt; I’ve actually worn mine out (hint hint!). We were also able to keep the drum from 2004, which was pretty cool, although getting it home on the plane was a challenge (hand baggage rules were easier in those days).”

    And least favourite…?
    “The 2004 bag, a giant orange thing that was universally detested (as far as I can remember – there were many left abandoned around the venue). The string bag from Berlin didn’t go down particularly well either. It was the type of thing I would put my plimsolls in at school as a seven year old.”

    What are you looking forward to most this year, bar the sunny Barcelona weather?
    “But that’s the thing about Barcelona in October: it’s still around 20°C, so I’m walking around in shorts and a t-shirt while the locals are all in about three layers, as they consider it too cold.
    I guess what’s been tangible from TechEd North America is the switch from on-prem to the cloud; there seems to have been a tipping point in the last year for Microsoft. The perception I gained from reports of TechEd North America was that it’s much more loaded towards Azure than on-prem. So it’s going to be interesting to delve into that.

    We got a taster of this a few years ago from Mark Russinovich*, who announced his move to Azure when he presented a session on it in Berlin. This was really quite remarkable at the time to an outsider, as it was quite a switch in his perception of things. He was “all in”. It seemed like his ‘A-ha’ moment. It will certainly be interesting to see what’s next for Azure in October.”
    * That presentation from 2010 is still available here.

    Why should people attend TechEd Europe 2014?

    “Well, if you haven’t already got it from the reasons above… you should attend for the chance to do a lot of things in the same venue, in the same week, that you couldn’t combine anywhere else. If you get it right it can have a major impact on your knowledge, awareness and understanding of the ecosystem. If your job is Microsoft-centric, I would almost ask: why wouldn’t you go?”

    Lastly, have you a key message you can share for first timers?

    “The key message I’d put across to the newbies (it’s a bit like the ending of the third Indiana Jones film) is ‘you should choose wisely’. In every timeslot you probably have 8 or 9 sessions that you think ‘I’m interested in that’. Making a good decision on what you want to see is crucial to your enjoyment of, and the benefits you gain from, the conference. Part of that is not only making the right choice, but also having good back-up choices.
    It’s worthwhile putting some effort into researching the schedule to see what you want to do. Ideally, I think you should have a 1st, 2nd and 3rd preference for each slot, so if you’re 3-4 minutes into a session and you don’t think it’s for you, you can move to your 2nd preference with ease. When you consider the unit cost of a session, it’s not a trivial sum, so you want to make sure you’re in a session you’ll enjoy or gain value from. Generally speaking, I prefer sessions that are full of demonstrations rather than PowerPoint slides – it usually means the speaker is knowledgeable and that it will be a good learning experience.
    It’s also worth ducking a session or two to do a few hands-on labs – maybe many more if there’s a product you really need to get to grips with. And don’t forget you can make some great connections in the exhibition space as well as in the keynote sessions.

    Other than that, all I have to say is good luck choosing your sessions, and see you in Barcelona!”

    You can connect with Peter on Twitter via @pjbryant or on LinkedIn.

    We’d like to thank Peter for his time this month. As we go to press, Peter is cycling from Brussels to Paris for Help for Heroes; if you would like to support his fundraising, you can visit http://www.bmycharity.com/pjbryant2014 for more details.

    We hope you found his wealth of TechEd experience both insightful and interesting. Can you relate to any of the above? Were you lucky enough to be granted access to Bournemouth Pier? Perhaps you too banged the drum pre-keynote a few years back. If so, please share your TechEd stories with us below in the comments section, or reach out to us via @TechNetUK.

    Remember, TechEd Europe is coming this October in Barcelona – have you got a ticket yet? The early-bird $300 discount closes next month!