TechNet UK

Useful tools, tips & resources for IT professionals, including daily news, downloads, how-to info and practical advice from the Microsoft UK TechNet team, partners and MVPs

July, 2013

UK TechNet Flash Newsletter
Featured
  • SharePoint Champions are Vital Roles


      By Geoff Evelyn, SharePoint MVP and owner of SharePointGeoff.com.


      SharePoint has a multitude of diverse roles which relate to:

    • A SharePoint solution being delivered
    • A SharePoint solution being supported
    • A SharePoint solution being managed

    For organizations faced with implementing a SharePoint solution, the range of skills required can leave them bewildered at best. Without clear guidance, or knowledge of the roles required to carry out each of the above, organizations tend to assume that all three can be serviced by one individual. Leaving one individual holding those multiple roles then becomes part of normal operations, or BAU (Business As Usual), and strengthens the perception that one person is providing value in all of them. However, that value is rarely measured, defined, or even qualified in a TOR (Terms of Reference).

    Many organizations struggle to define the actual skills and roles needed to deliver a SharePoint solution, and how SharePoint services should be delivered and managed. This misunderstanding is exacerbated by a recruiting process that fails to ensure the roles sought match the business requirement. Once people are recruited, some organizations then fail to measure and evolve those roles. In fact, some organizations will attempt to shortcut the service delivery of a SharePoint requirement and wrongly assume, for example, that a SharePoint Administrator is a SharePoint Developer is a SharePoint Solutions Architect is a SharePoint support team member. It is not uncommon, therefore, to find individuals carrying out multiple SharePoint roles, announced as the 'SharePoint Guru', the 'SharePoint Superman', or even the 'SharePoint uberDude'.

    It is common enough to see such a multi-role SharePoint uberDude walking the corridors, meeting 'low bows' and 'nods of respect' from their IT peers. However, those running in multiple SharePoint roles will find their training and career options difficult to quantify and difficult to apply. The organization will not be able to truly define a career path covering the multitude of roles the SharePoint uberDude provides. It simply will not understand whether the uberDude is technical or business oriented, and it will not find the time to apply to, or understand the benefits of, the training the uberDude requires.

    Nor is simply using third parties to supply SharePoint resources a sustainable answer for an organization with an evolving SharePoint platform. There are third parties in the SharePoint provision sector who believe they can provide all the relevant skill sets to meet a client requirement. Whilst external help is often a necessity in delivering a SharePoint solution, an organization cannot sustain its SharePoint platform by relying completely on a third party, because it then does not control the key area of SharePoint service delivery for that solution. The third party may (and in some cases will) not have the ability, the inclination or the burning need to refine the skills of the individuals to match the culture of the organization going forward.

    As SharePoint evolves in an organization, so does the need to manage its provision and continually help SharePoint solve business, information and collaborative challenges. This calls for a service-oriented and agile delivery process, requiring that the business and technology models align, which in turn means ensuring that the roles and skills of the people involved align and continue to do so. This is not something that even a SharePoint uberDude can deliver. The key role in making it happen is a SharePoint Champion: a business member, elected by the organization and for the organization.

     

    Why is a SharePoint Champion Required?

    Here's an example. An organization wants SharePoint implemented. They state 'You will have SharePoint to save stuff on'. They ask IT to immediately deploy SharePoint. This is what happens:

    1. SharePoint is downloaded and installed
    2. Someone from IT shows SharePoint
    3. People are told to get on with it by IT
    4. SharePoint is largely unused - lack of confidence and input by information workers
    5. Supporting SharePoint becomes difficult
    6. Lack of confidence amongst information workers turns into non-commitment
    7. SharePoint stops being used
    8. Revolt as other products are identified
    9. The organization's vision is perceived as a failure
    10. IT provision is perceived as a failure
    11. Service delivery is negatively impacted

    The problem is not the statement; rather, it is the lack of any process, service orientation or strategy, all of which should include user adoption of SharePoint. That requires understanding and continual evaluation of the relevant business models, and then mapping capability to technology. Ideally there is a close correspondence and alignment between the business model and the technology model, but in practice this is often not the case. There are a number of reasons for this, but the main one is that inward-facing IT departments do not work closely or effectively enough with the business. Internal implementation and business functional exposure are not aligned.

    And, whilst organizations attempt to close this gap by plugging in the SharePoint uberDude, the reality is that soft skills are required, particularly from a business perspective. Please note that I am not arguing that there is no case for SharePoint Architects, Solution Architects, or SharePoint technical delivery teams. Far from it! They are definitely required. They are outward facing, and connect deeply with the business side of the organization. However, they are not experts in the business in question. Nor are they generally part of the organization's personnel structure, and they do not manage the solution once it is in place. They will not have the luxury of covering all user interactions with SharePoint. They are responsible for defining the solution that meets business capability, through requirements mapped directly to the technology at hand and provided through user requirements. To do that, they use an objective language that allows them to talk with the business about the business, and therefore connects the business with the IT departments.

    So, there is a role which cannot simply be provided by putting in a SharePoint 'guru', or by electing someone from an IT department to manage business policies and the further requirements of a SharePoint solution once implemented. The role needs to be sought from within the business makeup of the company, and needs to be someone who can aid user adoption and at the same time face the IT / SharePoint technical delivery teams.

    A SharePoint Champion is an individual who sees the advantage of using SharePoint and finds new and advanced ways to use it in order to produce better results and help move the organisation forward. They are not administrators and will not be technical experts in SharePoint! They work together in a cross-functional network to support the use of SharePoint, help drive up user adoption throughout the business and create the ‘platform governance’.

    What are the benefits of being a SharePoint Champion?

    1. Visibility to senior management
    2. Opportunity to meet and collaborate with people from other areas of the organisation
    3. Improved business insight into their function, gained from being the focal point for business matters related to their site
    4. Improved training and communication skills, gained from educating their peers
    5. Increased SharePoint knowledge and overall basic strategy and support best practice

     

    What are the roles of a SharePoint Champion?

    SharePoint Champions emerge as SharePoint evolves in the organization, and should be identified early. They generally take on the management of a SharePoint site or group of sites, because this grants them the opportunity to learn more about SharePoint as solutions are implemented and the platform evolves. SharePoint Champions promote use within their business functions, build better governance within their sites, and are the focal point for business matters related to their sites.

    This will drive user adoption and bring the organization closer to achieving the goals that drove its initial investment in SharePoint. Let us take a look at the key roles that a SharePoint Champion covers.

    Discuss, propose and help make decisions

    SharePoint Champions can help define business policies surrounding the usage of SharePoint, which leads to, and helps sustain, platform governance and related decisions. Because of their direct connection with, and knowledge of, business capability, they help shape SharePoint solutions to solve business and information challenges. SharePoint Champions are key to broadening SharePoint experience amongst their peers, and this is the single most helpful effort when it comes to creating business policy decisions: using SharePoint Champions to expose a variety of SharePoint experiences to their peers causes those peers to look more closely at SharePoint from different perspectives, for example to meet their information collaboration requirements.

    Help communicate and train users on key benefits

    SharePoint Champions are able to communicate the basic functionality of SharePoint and act as middlemen between information workers and the IT support team. They communicate the expectations of the SharePoint service provision as part of any SharePoint solution delivery change process, and are able to do so because they have clear lines of communication with their peers and can act as mentors.

    Of course, to help them communicate the right messages in the ways needed, there must be a variety of knowledge sources available to them, covering everything from basic use of SharePoint to the specific solutions put in place on it: training resources such as the SharePoint Productivity Hub and the SharePoint User Adoption Kits, and a centre of knowledge from which to pull the relevant messages using a combination of those resources with images, symbols and stories.

    Be the contact point

    SharePoint Champions act as the middlemen between SharePoint support and business information workers (generally their peers), representing their function / department / business unit for SharePoint. For SharePoint support, Champions provide extra minds and eyes to spot problems and issues, compare them to apparently unrelated situations, and see new opportunities.

    Dedicated and committed to the role

    The key to having people change and adopt SharePoint solutions is directly related to commitment, which comes from the enthusiasm, dedication, and desire of the elected SharePoint Champions, working closely with a dedicated and committed SharePoint delivery team.

    SharePoint Champions are seen as committed to SharePoint because they believe they can provide their peers with the motivation and energy required to give up their current collaborative process and tools in pursuit of the collaborative process provided by a SharePoint solution and its enhanced tools. If there is no commitment, the organization's SharePoint goals will suffer, and the lack of it will be seen as a major constraint on the organization's ability to change.

    Therefore, SharePoint Champions are chosen by their line managers and peers not for their technical knowledge, but for their business acumen combined with the desire to succeed and make a positive difference to the way their business processes can be enhanced using SharePoint solutions. As part of this, SharePoint Champions need dedicated time to fulfil the role properly. For example, organizations have been known to give each of their SharePoint Champions half a day a week to attend meetings, brainstorming sessions, workshops, and the like.

    Attend SharePoint Champion meetings

    The SharePoint Champion represents their area by attending scheduled meetings (for example, fortnightly) which generally do not exceed one hour in length. The purpose and benefits of such meetings are as follows:

    • Sustains SharePoint Champion engagement
    • Allows the sharing of SharePoint updates, business and technical, from each SharePoint Champion's area
    • Provides a sense of community
    • Provides a regular forum
    • Allows the sharing of ideas in enhancing productivity and user adoption in SharePoint
    • Allows the full discussion of business policies
    • Allows the sharing of successes and failures
    • Allows the tracking of actions and results
    • Keeps members up-to-date on SharePoint trends
    • Provides a yardstick for measuring the effectiveness of SharePoint training, communication and governance

     

    What value does an organization extract from having a SharePoint Champion on board?

    Organizations that put SharePoint Champions into place will:

    • Be able to utilize and rely on a bank of individuals who are committed to providing stable and successful SharePoint solutions
    • Be able to define and sustain strategies for user adoption
    • Be able to define and sustain business policies for SharePoint solutions
    • Be able to employ methodologies for achieving success and improved adoption
    • Be able to leverage internal resources and to promote SharePoint initiatives

    User Adoption can be sustained

    Because of their commitment and enthusiasm for SharePoint, SharePoint Champions will already have the skills necessary to communicate the benefits of the SharePoint solution being utilized within their department. They can identify the communication process and levels that best suit them and their peers, and the best kind of training, and they can drive through the top-level business processes that relate to the provision of the SharePoint solution.

    Goal alignment of Business and Technology is easier to achieve

    When a SharePoint solution is implemented, the lifecycle of that product does not stop until the solution is no longer in use. The lifecycle consists of continual iterations driven by changes in the business structure, organizational structure, technical infrastructure, and more. The SharePoint Champion's ability to determine how the solution continues to meet the departmental requirement is therefore vital. This is a business analysis skill and a communication skill, and it also requires a degree of systems analysis.

    This aids the development of any SharePoint solution going forward, since the SharePoint Champion provides vital information concerning the value of any alternatives brought to the table. For example, if a SharePoint solution requires a customized product, the SharePoint Champion can help with the decision, identifying whether a customized solution or an agile approach built from in-built features is the more prudent.

     

    What are the key qualities of a SharePoint Champion?

    As you have seen in the article so far, SharePoint Champions are not technical; they are from the business, elected by the business, for the business. SharePoint Champions need to be continually managed to ensure that their qualities remain in focus and that user adoption by their peers continues. The aim of a SharePoint Champion is not to become a technical guru; it is to evangelise SharePoint solutions so their peers can remain productive. Let us take a look at some of the other key qualities of a SharePoint Champion.

    SharePoint Champions understand business operations and the flow of information through their function

    The goal of any solutions architect is to enable better alignment between the requirements of the business and the IT services within the organization. From an IT perspective, this will enable IT to support more agile business solutions. The solutions architect understands how SharePoint tools can be used to solve business information and collaborative challenges. However, any solution needs to match the aspirations of the relevant users and map into the organization's vision of what SharePoint can provide. The SharePoint Champion is extremely useful in helping make that vision real, and in keeping the solution sustainable and supportable. They enable the business and technology sides to see the return on investment that the solution has delivered, because they help answer the most difficult question of all: "What should this tool do for me and my peers to keep making us more productive?"

    To do this, SharePoint Champions have key knowledge and skills associated with the capabilities and operations of their business, and provide an understanding of the collaboration required to meet those capabilities. This knowledge aids business policy, learning, training and the building of SharePoint solutions. Figure 1 shows the relationship and integration of business capabilities which will be understood by the business. Utilising SharePoint Champions who understand these capabilities and operations will help ensure that whatever solution is put in place supports a change process.

    Figure 1. Capabilities common to most businesses, which SharePoint Champions understand.

    SharePoint Champions come from various facets of the organisation

    When you bring together more than one SharePoint Champion from different parts of the organization, they in turn become more skilled in the workings of the organization at process level. Additionally, problems which SharePoint Champions individually bring to the table become easier to address when members from different parts of the organization are present. In fact, I've found instances where governance comes into play as soon as SharePoint Champions actively discuss requirements with other SharePoint Champions.

    When this takes place, an outcome is the formation of policies concerning how SharePoint should be used.

    SharePoint Champions are elected by the business

    SharePoint sponsors are those who own SharePoint. It is their vision of information management and collaborative challenges being solved by SharePoint which is the goal of service delivery. Therefore, SharePoint Champions, elected by the business and approved by SharePoint sponsors, will support that vision.

    SharePoint Champions ideally have experience using SharePoint

    Currently, various versions of SharePoint are still in operation: SharePoint 2007, SharePoint 2010 and SharePoint 2013.

    A number of Microsoft Office versions are therefore also used with these various forms of SharePoint: Office 2007, Office 2010 and Office 2013.

    Additionally, there are off-premises versions of SharePoint, provided via Office 365.

    The point here is that whilst SharePoint Champions will ideally have experience in SharePoint, that experience needs to relate to the technology provided in the organization where the Champion is based, and to the actual use of SharePoint and its associated tools.

    An example of this is where a person joins a company whose SharePoint version is 2007, having come from a company where the SharePoint version was 2010. Great care must be taken if electing that person as a SharePoint Champion, as they may influence decisions to move to a later version (irrespective of the value in moving to a later version). The value of SharePoint Champions lies not so much in the knowledge of SharePoint that they have as in their experience of using SharePoint to solve information and business challenges.

    Note that whilst having experience using SharePoint is ideal, there are also good reasons for having SharePoint Champions who have knowledge of, or experience in, other content management systems, as they will be able to bring experience and knowledge of user adoption tactics from the perspective of other web-based systems.

    The key is to ensure that career development reinforcing hands-on experience of the SharePoint platform is provided. In an article I wrote concerning training, and in my book, there is a section that really helps SharePoint Champions learn SharePoint in a way that can be mapped to their career development: the Microsoft Office Specialist (MOS) certification. References for both are at the foot of this article.

     

    In Conclusion

    Managing SharePoint Champions is a continual process that stops only when the organization stops using SharePoint. The Champions are not a committee there to make decisions; rather, they confer with each other and bring up points of interest and business issues concerning SharePoint. User adoption of SharePoint is not a given, and those working in SharePoint technologies cannot hope to sustain it without the aid of the business. The successful development and implementation of user adoption in SharePoint requires the intersection of several factors, namely the business and organizational changes, the fit between the desired tasks and the tools provided, and the appeal to the users. SharePoint Champions can be a very useful and skilled resource to help reach those goals because:

    • They provide capabilities to help shape strategies and make decisions for their area regarding SharePoint
    • They are confident and committed to SharePoint productivity and user adoption
    • They are great communicators
    • They have the ability to work well in a diverse team (across functions, geographies)
    • They are open and willing to learn new skills.

     

    References

    For more information concerning the use of a productivity hub and user adoption kits for SharePoint, visit this link:

    http://www.sharepointgeoff.com/sharepoint-2010-productivity-hub-and-user-adoption-kit/

    I wrote a simple Training Guide and discussion article concerning SharePoint, which includes references to Microsoft Certified Professional (MCP) and Microsoft Office Specialist (MOS) accreditation, including training providers, types of training, etc.

    http://www.sharepointgeoff.com/articles-2/training/

    One thing to consider is setting up a centralised site for SharePoint Champions, which should include training materials, information concerning the enterprise tools they have access to, and links to policies concerning governance, acceptable use, statements of operations, etc. More information on this, including more about SharePoint Champions, is in my book "Microsoft SharePoint 2013: Planning for Adoption and Governance".

    http://shop.oreilly.com/product/0790145368263.do

  • The 4 Key Skills You Need to Get Ahead in IT


    By Paul Gregory – Principal Technologist and Trainer at QA – www.qa.com


    I have been training IT professionals for over 15 years. During this time, I have had the privilege to work with some of the world’s leading IT organizations and some of the best IT professionals in the UK.

    Being in this unique position, I am often asked by delegates, who are looking to better their skills in IT, for my opinion on what makes a good IT professional.

    As a trainer, of course, I am likely to say training! But there is more to IT than having all of the latest certifications and course attendance sheets. Here are my 4 golden rules for getting ahead in IT.

     

    1)  The Obvious Lesson – Be On-The-Pulse

    I guess the first lesson is the obvious one. Everyone spouts the same mantra, but it's true: monitor up-and-coming IT trends and technologies, stay ahead of the game, and track the progress and changes of the constantly moving juggernaut that is IT technology.

    2)  The Hardest Lesson – Don’t Follow the Crowd

    For me, this lesson is one of the hardest to follow. As humans we do not like to be different; we like to agree and follow. But in IT, it is important to form your own opinion and try things out for yourself rather than following the IT crowd.

    Let’s look at an example of this type of behaviour. Some of the courses I teach at QA are on Windows 7, and I have found that a number of my delegates leave these courses saying ‘we should have looked at Vista’; in fact, on the last course I taught, 30% of delegates fed this back.
    Many people’s opinions of Vista were formed by reading Internet blogs and listening to techie gossip. However, when people attend Windows 7 training, the features they really like are often the ones introduced in Vista. It is important not to rule technologies or approaches out without testing and evaluating them for yourself and the business you work for. You may find that a product which hasn’t worked for one business can make a real difference to yours.

    3)  A Lesson I Learnt the Hard Way…Manage your IT, Don’t Let It Manage You

    During my first couple of years in IT I learnt a painful lesson. The workplace of the 80s was very different from today's; I was working for Unisys in 3rd line support, as well as managing a set of production Novell servers.

    One day, my boss (a guy called Chris Mullen, bald as a coot but a lovely guy) asked if I had any hardware requirements. Now, me, naive as I was at that age (around 20) and living in the moment, I replied ‘No thank you, boss’. A few weeks went by, six at the most, and the server farm started to run low on disk space, so I trundled round to his desk to highlight that I needed some new hard disks. His face went bright red, and he looked at me sternly; he rubbed his hands over his head, and I am sure that had he had any hair, he would have started pulling it out. So, what was the cause of Chris’ distress? Well, in the 80s, hard disks were pretty pricey: thousands of pounds for hundreds of megabytes. My boss had asked me about hardware when he was forecasting his budgets for the next year. My new revelation had just blown his new budget completely out of the water. This experience meant that I learnt two lessons very quickly, lessons which I still recite today as I find other IT professionals making the same mistakes I did.

    1)  Make sure you manage your IT, do not let the IT manage you

    2)  Make informed decisions that will not negatively impact others

    Both these lessons help the business feel that its IT is being managed in a controlled and planned way. IT then becomes an asset to the business, and so do the individuals who make it happen.

    4)  A Strange Lesson – Be Bi-Lingual, We Don’t All Speak the Same Language

    IT people and business people do not speak the same language. We might sound like we do, and the words we use may sound the same, but the way we interpret each other can be way off. As an IT professional it is important to be able to communicate in terms that the business understands. It is often the IT team’s role to communicate issues and recommend solutions. If we are unable to communicate our business case in terms that the business understands, then we are fighting a losing battle.

    An example of this? Well, I previously ran an IT services company, and when discussing a client’s backup requirements I was told they were ‘all backed up’. On pressing the customer for more information, I found that in reality the customer was only doing a full backup each night. By only backing up once each night, the customer was not ‘all backed up’; they were risking up to 24 hours’ worth of data loss every day. And so we see that the business owner’s interpretation of their IT system was very different from the reality.

    As IT professionals we often have to make sure we question our business stakeholders and explain the implications of our recommendations. Often the reality of decisions only unfolds when it is too late, and the business is then left unhappy.

    My Final Piece of Advice…

    So in summary: core technology skills are, of course, hugely important in IT; without them you are unlikely to be taken seriously and you will not be able to perform your core role effectively.
    However, they are not the only skills required to get to the top. There are some key business lessons and life skills which IT professionals need to master to get ahead. Most organizations now recognize that senior IT people need more than just great technical ability; they also need them to demonstrate business-focused skills.

    It might mean stepping out of your comfort zone, but in my opinion, acquiring business skills will enable you to really get ahead in the world of IT.


    BIO 

    With over 15 years’ experience delivering and authoring Microsoft Windows Server technology courses, Paul is one of the most experienced trainers in the industry.

    A Microsoft Certified Trainer since 1995, Paul has worked both for and with some of the world’s leading IT Services organizations. Paul specialises in developing and delivering training courses on the Windows Operating system. He is a frequent visitor to Microsoft’s Global Headquarters in Seattle, attending early product workshops for some of Microsoft’s new products.

    Paul has also delivered many leading edge training courses at the request of Microsoft (US) – he has been called upon in the past to train numerous Microsoft Partners from all over the world in technologies such as Windows Server 2008 R2, Server Virtualisation, Management tools and Private Cloud.


  • Yammer - the Missing Link


       By Alexander Jethwa from Firebrand Training


    With the recent first anniversary of Microsoft's purchase of Yammer, there have been huge changes and improvements to the Enterprise Social Networking platform, one of its key benefits being team collaboration and its value to business processes.

    • Users have grown by 55 per cent to approximately 8 million registered seats.

    • User activity (as measured by messages, groups and files) has roughly doubled year over year.

    • Paid networks have grown over 200 per cent year over year.

    Most long-standing Office 365 customers have now been switched over to the latest Office 365 service upgrade, in which Yammer integration into Office 365 and SharePoint is gradually being rolled out, and the growth of Yammer is gaining real momentum. All current sign-ups are already on the latest service upgrade.

    Over the last 10 months, Firebrand Training, an international IT and Project Management training company, has followed and embraced the journey of Office 365 and Yammer, seeing a huge increase in global collaboration and communication.

    Initially, Firebrand rolled out Yammer to just its Marketing and Education departments, which were left to use Yammer to see how it felt and whether it would benefit them.

    Within weeks it became Marketing's main communication platform, with ideas being aired openly, great discussions forming, and new plans already being rolled out thanks to the hassle-free work and idea sharing via Yammer, such as their "Free Training For Life" competition and their "2 Year FastPass" for unlimited training. This has led to a reduction in time consumed on brainstorming, development and general meetings: Marketing have seen their projects, from initial idea to final approval, cut by an average of 37% in time, with 100% delivered on target. Ed Jones, Firebrand's Marketing and Content Creation Executive, said: "We used to share ideas and make suggestions via email, and a lot of the time things would just get lost or there would be very slow progress. Using Yammer at Firebrand Training has become a social routine rather than a chore and another thing to do. We can't wait to roll out the Yammer feeds in our SharePoint to really get things streamlined."

    When Yammer was rolled out to the whole international company, some people struggled to adapt and sign up, but gradually the whole company joined and saw the benefits. For example, people in the UK were sharing ideas, news and advice with staff in the Middle East with whom they wouldn't normally talk. Senior management also took a keen liking to Yammer, for instance in keeping the whole company up to date with announcements from WPC 2013.

    With Office 365 and Yammer just at the beginning of their long relationship, and tons of features announced for roll-out over the next 6 months, Yammer is getting easier and easier to use and allows users to stay up to date with apps and support across a range of platforms. With new features coming out all the time for the Yammer/Office 365 combination, this is still just the beginning, but one thing is clear: Yammer is creating a social link within companies, with huge potential to be used in a range of ways to collaborate, communicate and create.

     

    Go be social & collaborate!

     

    BIO

    Alex currently deals with corporate training at Firebrand Training, having worked in IT training since 2010. Having stripped PCs from an early age and being mainly self-taught, Alex completed his first Microsoft certification (MCDST) at the age of 14. Now only 21, in the two years he spent contracting he won contracts with the likes of Nokia, the Houses of Parliament, Sony Music, the London 2012 Olympics and many more. He is currently working towards Office 365 MVP status for 2013.

     Contact Alex via email: alexj@microtechies.co.uk


  • Aligning Skills with Real World Business Benefits

     

      By Steve Smith, SharePoint MVP and owner of Combined Knowledge.


    This article deals with aligning skills with real-world business benefits, and why continued investment in technology skillsets is even more important today than it ever was. We will look at the importance of practical training alongside real-world skills, at aligning it all with qualifications, and at how a company, as well as an individual or team, benefits in both the short and long term from going through this process.

    After many years in the education space, especially around Microsoft products, there is no doubt that the products themselves have evolved into much more complex platforms. The knowledge and skills that we developed in the nineties and early 2000s certainly provided a solid foundation for the core skills needed in today's world.

    But what if you are fairly new to the world of managing Microsoft technologies? What skills am I talking about, and why are they still so important?

    Let’s take SharePoint as the perfect example for this article. It is a product that can provide so much to the business, but done badly it does nothing more than make a bad situation worse. SharePoint is not a standalone product: it relies on many core skills to make it function properly and to get the most out of all the available feature sets. The product needs to authenticate and process data from other systems, so in order to really design, build, manage and troubleshoot SharePoint deployments, a good systems engineer or SharePoint server administrator would ideally need the following basic skills before even starting to deploy the product in a live environment (a scripted sanity check of a few of them is sketched after the list):

    • Knowledge of Active Directory
    • Knowledge of Windows Server 2008 R2 or Server 2012
    • Knowledge of SQL Server and database management
    • Knowledge of Internet information Server (IIS)
    • Knowledge of network configuration and management (TCP/IP, DNS)
    • Knowledge of Authentication methods (Claims, Windows, SAML Tokens, Forms)
    • Knowledge of security methods (Kerberos, SSL, IPSec) 
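
    As a rough illustration, here is a minimal PowerShell sketch of the kind of pre-deployment sanity check these skills feed into. It is purely illustrative: the SQL server name sp-sql is a hypothetical placeholder, and a real checklist would be far longer.

        # Minimal pre-deployment sanity checks; server names are placeholders.
        $sqlServer = 'sp-sql'    # hypothetical SQL Server host

        # 1. Are we joined to an Active Directory domain?
        $cs = Get-WmiObject -Class Win32_ComputerSystem
        "Domain joined: $($cs.PartOfDomain) ($($cs.Domain))"

        # 2. Does DNS resolve the SQL server name?
        try   { [System.Net.Dns]::GetHostAddresses($sqlServer) | Out-Null
                "DNS resolution for ${sqlServer}: OK" }
        catch { "DNS resolution for ${sqlServer}: FAILED" }

        # 3. Can we reach the default SQL Server port (1433)?
        $tcp = New-Object System.Net.Sockets.TcpClient
        try     { $tcp.Connect($sqlServer, 1433); "SQL port 1433: open" }
        catch   { "SQL port 1433: unreachable" }
        finally { $tcp.Close() }

        # 4. Is IIS (W3SVC) installed and running on this box?
        $iis = Get-Service -Name W3SVC -ErrorAction SilentlyContinue
        if ($iis) { "IIS: $($iis.Status)" } else { "IIS: not installed" }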

     

    Microsoft has started to address this cross-skills requirement with new qualifications such as the MCSE for SharePoint. This qualification is aimed at getting people to learn not just the core SharePoint skills but also the core Windows Server skills, ticking some of the boxes in the list above. http://www.microsoft.com/learning/en-us/mcse-sharepoint-certification.aspx


    These skills are not achieved overnight, however; it may take you six months to a year to gain all the knowledge necessary to pass these exams in the real world, and that is a very important aspect of skills benefiting the business. A qualification gained by purely academic methods does not help the business without real-world experience behind it. Very few companies will hire a person who has a qualification from a boot camp but no real-world experience to back it up.

    The right way to progress, for many people, is to concentrate on one area, such as Windows Server management or SharePoint Server, and then develop your skills as you work more with the products. Take a good training course and learn to use those skills in the real world. Note that the MCSE certification is also on a three-year rotation, which means that if you want to progress to the next version of the products you will need to spend time learning and working with them when they become available. This skills upgrade is an obvious advantage to the business, as it helps you to have discussions with the team on both the benefits of upgrading and the technical requirements for doing so.

    Keeping your skills up to date is a very important part of being an IT professional; in an ever-changing economy and technology landscape, people who have maintained their skills are a much more valuable asset to the company. By the same token, it is important for the business to continue investing in the skills development of its IT team, and to understand that a lot has changed in the technology since that SharePoint course someone sat in 2009: a SharePoint administrator from the 2007 product would not be able to correctly deploy and manage a SharePoint 2013 environment without skilling up.

    The methods of skilling up are many, but the most effective way of transferring technical knowledge, in my opinion, remains face-to-face training, be it public classes or custom workshops for your team. For more end-user focused roles, however, we have noticed that our online environment is proving much more popular, as companies shift learning to a more flexible method that suits both company productivity and the ability to take the training from home or an office training room without needing to travel.

    "But we already have people in the company who manage these roles; why should I need to learn it?"

    This is a statement I often hear in the classroom. The key point, though, is that everyone needs to skill up to the current technologies, not just the SharePoint administrator. The SQL DBA, for example, should learn how SharePoint databases are architected and how they need to be managed; and if the SharePoint server administrator understands how to work with SQL Server at least at a basic level, then the two administrators can talk about the design in a way that both understand. It is no use the SharePoint administrator telling the SQL DBA that the SharePoint databases need configuring differently from the other business databases without being able to justify why. If we don't have these cross-platform skills, the real loser is the business: over time an incorrect design becomes not only inefficient but also very costly financially, for example in a disaster recovery scenario or a huge performance drop affecting productivity.

    Here is a good example of a DBA starter course to introduce SQL Server: http://www.microsoft.com/learning/en-us/course.aspx?ID=40364A&Locale=en-us

    There are also lots of community events, usually free, with plenty of information and learning potential, such as the SharePoint user group that I help run (http://www.suguk.org/), which is aimed at all levels of working with the product.

    Having the right skills ensures not only that the product gets deployed correctly in the first place, but also that, in the long term, as more features get deployed and integrated, they too will be deployed with minimal issues. Developing a skills program and mapping out learning objectives and time frames is a very important part of this process.

    When I take on new end-user trainers, for example, my first task is to identify what they know, what they think they know and what they don't know. I then develop a skills program that will enable them to reach the level required to train those particular products. This could involve attending certain classes, working on real-world projects alongside more experienced people, and getting involved with projects within the company. So even people who already have a good background in technology may still need 6 to 9 months of additional learning and skilling up before they even start to train classes.

    Skills development is also about preparing for the future, and a good example of this is cloud technology. A recent study estimated that, at current skill adoption rates, by 2015 there will be up to 7 million jobs waiting for people with the right skills.

    Source for data – The skills gap in cloud technology Microsoft - http://borntolearn.mslearn.net/microsoft_it_academy/b/weblog/archive/2013/04/18/preparing-your-students-for-tomorrow-39-s-it-careers.aspx#fbid=bPZzckgeAUs

     

    "But we will not install the product; that is done by someone else..."

    This is another common scenario: a business will bring in contractors or a Microsoft partner to deploy the products for them. This does not mean the business should not invest in skilling up its own engineers; how will they be able to troubleshoot anything if they do not understand how it all fits together? It's like giving someone a car without any driving lessons. The financial cost of taking days to fix a problem instead of minutes could run to tens of thousands of pounds. The obvious advantage of bringing in Microsoft partners is that they have already gone through the skilling-up process, so you are able to get the solution you are looking for deployed earlier; and whilst they are doing the main implementation, you can be skilling up your own staff to take over the daily running when it is handed over to you.
     

    Here is a quote from Charlie Lee (SharePoint Technical Architect) of Cap Gemini, who explains how engineers with the right skills can easily turn a potential deployment issue into a straightforward workaround, which ultimately means the business benefits in deployment time and gets a setup that is correct in the first place:

    "Whilst implementing a large SharePoint 2010 farm for a public sector organisation in the UK, it became clear rather late in the day that there was a previously unidentified requirement for User Profiles and, more importantly, for User Profile Synchronisation. Luckily I had attended a training course on Advanced SharePoint 2010 Infrastructure from Combined Knowledge which had covered the intricacies and quirks of this particular area. During implementation we came across several issues which I had been prepared for due to the detail covered in the training. What could have taken days to resolve without the appropriate skills took merely a few minutes to identify and resolve."

     

    Consultancy companies and Microsoft partners therefore also have an investment to make in skills development, to benefit their business as well as their customers. They need to ensure not only that they are advising clients correctly, but also that they are learning the products much earlier, so that they are able to deploy them for early adopters or businesses that want to run an early trial. Here is a quote from Matt Groves, Head of Information Worker Solutions at Trinity Expert Systems Ltd, a Microsoft Gold Partner:

    "As the head of practice in the professional services department of a premier Microsoft partner it is essential that we maintain the highest levels of capabilities with the latest technologies, and training from a provider like Combined Knowledge forms a key part of our skills and talent management strategy. Training and certification allows our technologists to worry less about the technology and focus more on delivering the customers’ business outcomes.

    The launch of SharePoint 2013 generated a lot of interest in our client base and having the majority of the team trained during the beta timeframe put us in a position to be able to deliver implementations for clients according to their schedule, not one constrained by the skills of their implementation partner, facilitating a shorter ROI timeframe for all concerned and a higher quality of end product."

     

    Summary

    Investing in skills development is more relevant today than it ever was, especially with today's products being more complex and integrated than they were 10 years ago. The winners from skills development will always be the business, through efficiency and productivity.

    For more information on available Microsoft skills and qualifications, go to the learning website here: http://www.microsoft.com/learning/en-us/default.aspx or feel free to contact me at steve@combined-knowledge.com


    Bio

    Steve Smith (SharePoint Server MVP for the 7th consecutive year) is the owner of Combined Knowledge, a UK company that provides Microsoft SharePoint support and education training, and develops SharePoint adoption and usability tools.

    Steve is well travelled, having spent much of the last 30 years travelling in Europe, the US and South Asia for the companies he owned and worked with, and gained his first Microsoft qualifications on Windows NT 4, IIS 3, Exchange and SQL 6.5.

    Although he has tinkered with computers since his early teenage years, Steve has specialized in Microsoft's infrastructure systems since 1996 and has been involved in many Microsoft beta programs, including Windows 2000, Exchange 2000 and SharePoint 2001/2003/2007/2010 and 2013. The last 13 years, however, have seen the majority of his time spent on SharePoint and on running the Combined Knowledge SharePoint IT Pro and Infrastructure courses in the UK, Europe and Asia Pacific. Steve is a co-founder of the UK SharePoint User Group (www.suguk.org) and lives in South Leicestershire, England, with his wife and 3 children.

    Steve can be contacted at steve@combined-knowledge.com or Twitter: @stevesmithck.


  • Microsoft Server Virtualisation


     By Alan Richards, Senior Consultant at Foundation SP and SharePoint MVP.


     Overview

    All businesses are constantly looking at ways to save money or become more efficient, and one of the largest drains on funds is normally the IT department; a necessary evil, really, in most businesses' eyes. Without IT, businesses wouldn't be able to run their core services, and they definitely wouldn't be able to maintain an online presence, which in this age of online shopping would herald the death knell for any business. There comes a time in the life of every CIO when the board asks 'How can you reduce your budget?' Of course, what they actually want is for you to reduce your budget while still providing exactly the same service, if not more. Virtualisation is one of those technologies that can help reduce not only what you spend on actual hardware but your operating costs as well. For a moment in time you may even become the board's blue-eyed boy.

    The school in this case study had recently invested large amounts of money in its network infrastructure, and at the time of the review was about to embark on a program of server replacement. At the time, server virtualisation was in its infancy, with Microsoft just starting to make inroads into the market, and to be perfectly honest they were sceptical; they could not see any benefit to users in taking services that ran on physical servers and putting them onto one large host to share resources. Surely this would reduce the functionality and responsiveness of the servers, especially the file servers, which in any educational environment are among the most heavily used.

    So, after reading many articles, a bit like this one, they took the decision to run a test environment. The server replacement program was put on hold for a year, apart from 4 servers that could not carry on any longer. A high-specification server was purchased and the 4 servers that required replacement were transferred to the new virtual host. This was run for a year with constant monitoring of all the key elements:

    • Disk usage
    • Network utilization
    • CPU load
    • End user experience

    The results of the year-long test were very pleasing, and the decision to move to a fully virtualised environment was taken.

    They now run a fully virtualised server environment using Windows Server 2008 R2 and Hyper-V, and this eBook will lead you through all the decisions they took, from the initial planning phase all the way through to the day-to-day running of the systems. This eBook is not designed to be a how-to; it is more an aid to help you make the right decisions when moving to server virtualisation.

     

    Running A Trial

    When you first look at starting out on the virtualisation road map, running a trial is advisable. You may be more than happy researching what other people have done and pulling together all of the information available in blogs, white papers and videos, but a trial will tell you about your own network and how your infrastructure can cope with the demands of virtualisation.

    How you run the trial will depend on the information you want to discover and to a certain extent how much money you want to spend on the hardware.

    A simple trial would consist of a single server with enough on-board storage to cope with 3 or 4 virtual servers, and network connectivity to service them. This type of test will allow you to monitor the loads on your network infrastructure and on the servers' network cards, and also allow you to monitor the end-user experience.
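
    For example, on Windows Server 2012, where the Hyper-V PowerShell module is built in (on 2008 R2 you would use Hyper-V Manager instead), standing up a trial virtual server might look like the following sketch. All names, paths and sizes here are illustrative assumptions, including the virtual switch called 'External'.

        # Create and start a trial VM on a Server 2012 Hyper-V host (illustrative values).
        New-VM -Name 'TrialFileServer' `
               -MemoryStartupBytes 4GB `
               -NewVHDPath 'D:\VMs\TrialFileServer.vhdx' `
               -NewVHDSizeBytes 100GB `
               -SwitchName 'External'     # assumes a virtual switch named 'External' exists

        Set-VMProcessor -VMName 'TrialFileServer' -Count 2
        Start-VM -Name 'TrialFileServer'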

    If you have already decided on virtualisation and your trial is simply going to collect data to help in the planning and installation phases, then you could go for a trial setup with a single server and the storage system of your choice. This setup will allow you to collect all the required data, and will also make the transition to a fully virtualised environment a lot easier.

    Whichever method of trial you decide on, the data collected will never go to waste as it can be used to inform your decision making throughout the process of virtualisation.

     

    Planning

    Fail to prepare…Prepare to fail

    When implementing server virtualisation, the above phrase fully applies. The planning phase should not be overlooked or entered into lightly. IT is core to any business, from high street banks to the smallest of primary schools, and if that IT fails or is not fit for purpose then the business could fail.

     

    What Have We Got?

    The first part of the planning phase should be finding out what you have got and how it is being used. Before you can plan how many virtual hosts you need, you need to know which servers you are going to virtualise and the load they are currently under. The load on servers is an important factor in deciding on your virtualisation strategy; for instance, it would not be advisable to place all your memory-hungry servers on the same virtual host.

    Finding out what you have can include the following (a sketch for collecting such data follows the list):

    • A simple list of servers
    • Disk utilisation monitoring
    • Network monitoring tools to map out the network load and utilisation
    • CPU load
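
    As a starting point, Windows' built-in performance counters can capture most of this baseline data. The sketch below uses PowerShell's Get-Counter; the sample interval, sample count and output path are arbitrary choices, and it assumes the folder C:\Baseline already exists.

        # Collect a simple utilisation baseline: every 30 seconds, 120 samples (about an hour).
        $counters = '\Processor(_Total)\% Processor Time',
                    '\Memory\Available MBytes',
                    '\LogicalDisk(_Total)\% Free Space',
                    '\PhysicalDisk(_Total)\Avg. Disk Queue Length',
                    '\Network Interface(*)\Bytes Total/sec'

        Get-Counter -Counter $counters -SampleInterval 30 -MaxSamples 120 |
            Export-Counter -Path 'C:\Baseline\server-baseline.csv' -FileFormat CSV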

    All of these factors will help you answer some key questions which, without the right information, could prove decisive in how your virtualised environment performs. These key questions include:

    • How many virtual hosts do I need?
    • What size of storage should I purchase?
    • What type of storage should I use (iSCSI, Fibre Channel, etc.)?

     

    What To Virtualise

    A major question you will have to ask yourself is which services you are going to virtualise. Theoretically you can virtualise any Windows-based server, and indeed any non-Windows-based server, but whether you should virtualise everything is a question that has produced some very heated debates.

    Let’s take, for example, one of the core services of any Windows-based domain: Active Directory. At its most basic level it provides the logon functionality for users to access computers and their network accounts. If this service fails, your network is fundamentally compromised, so one argument for virtualising it is that you are providing a level of fault tolerance by it not depending on a single physical server. A downside, however, is what happens if your virtual hosts are members of the domain and your virtualised domain controller fails: how do you log in to your host if it requires a domain account?

    Obviously this is taking it to the extreme, and there are ways around this scenario, but the point this argument is trying to make is that you need to carefully consider which services you virtualise.

    As a rule, it is best practice to keep at least one physical domain controller to provide Active Directory functionality; you can then have as many virtualised domain controllers as you like for fault tolerance.
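
    If you want to check where your domain controllers currently run, one quick way (a sketch, assuming the ActiveDirectory PowerShell module from RSAT, available from Server 2008 R2 onwards) is to list them and inspect each machine's hardware model, which Hyper-V guests report as 'Virtual Machine':

        # List domain controllers and flag those running as virtual machines.
        Import-Module ActiveDirectory

        Get-ADDomainController -Filter * | ForEach-Object {
            $model = (Get-WmiObject -Class Win32_ComputerSystem -ComputerName $_.HostName).Model
            New-Object PSObject -Property @{
                Name    = $_.Name
                Site    = $_.Site
                Virtual = ($model -match 'Virtual')   # Hyper-V guests report 'Virtual Machine'
            }
        }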

    Other considerations when deciding what to virtualise include:

    • Physical connections (there is no way to connect external SCSI devices, such as tape drives, to Hyper-V).
    • Security, both physical and logical (is it, for example, a company requirement to have your root certificate authority server locked away somewhere secure?).

     

    Microsoft Assessment & Planning Toolkit

    While there is no substitute for ‘knowing your network’, there is a tool provided by Microsoft that will make life slightly easier for you. The Microsoft Assessment & Planning (MAP) toolkit will look at your current servers and provide you with suggested setups for your virtualised environment.

    I would suggest running the tool and using the suggested solutions as a starting point for your own planning; I wouldn't take the MAP toolkit's suggestions as gospel.

    You can download the tool from the following location:

    http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=7826

     

    Virtualisation Scenario

    Now we have completed our planning step, let's look at a typical small-scale scenario. Imagine you are a business with only 4 servers, so not very big at all, one of which is your Active Directory server; maybe a setup like the one below.

    [Diagram: the initial setup of four physical servers.]

    For this scenario you could quite easily use one server with lots of disk space as your virtualisation host, and end up reducing your servers from 4 to 2, as shown below.

    [Diagram: the consolidated two-server setup, with the virtual servers and their hard drives hosted on a single virtualisation host.]
    While this scenario would work and be relatively low cost to implement, it does fall down in two key areas:

    • Future growth: as your need for more servers increases, it will put more strain on the resources of this single host, and ultimately this may have a detrimental effect on your end-user experience.
    • Redundancy: if your host fails, then all of your virtual servers fail as well, since all the virtual hard drives are stored on that single server. This would not be ideal.

     

    A preferred solution to the above is shown below. This solution gives you redundancy and the ability to grow your virtualisation environment as your needs grow.

    [Diagram: the preferred setup: multiple virtualisation hosts connected to a central storage system.]

     So how is this scenario so much better than our first one?

    • Future growth: while initially this scenario may seem a bit of an overkill, it allows you to grow your environment as your needs change, without the need to purchase more equipment or completely change your setup.
    • Redundancy: because all the virtual hard drives are stored on a central storage system, the failure of one host will not affect the running of your virtual servers. Failover clustering will take care of transferring services between hosts automatically, and your users' experience will not be affected. (A minimal clustering sketch follows this list.)

     

    Virtualisation Technology Decision Factors

    With the planning phase over, the next question to ask yourself is which virtualisation technology you are going to implement. There are a number of players in the market, but the two key ones are:

    • VMware
    • Microsoft Hyper-V

    Both products will fulfil your needs when it comes to virtualising your servers, but there are two key factors you should consider before deciding on your chosen technology.

     

    Cost

    VMware will cost you; that is a given, and be under no illusions: it is not cheap. Hyper-V, on the other hand, is built into versions of Windows Server at no extra licensing cost.

    There is also a free standalone Hyper-V Server, which can be downloaded using the link below. This version is purely for use as a Hyper-V host and does not have the same management GUI as the Hyper-V role in Windows Server.

    http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=3512

     

    Licensing

    Licensing can be expensive, but when undertaking virtualisation you need to consider how you are going to license both your hosts and your virtual servers.

    Your hosts will need a version of Windows Server installed to run Hyper-V, and which version you install can have a dramatic effect on how you license your virtual servers. The table below illustrates a typical large environment and how the Windows version installed on the hosts affects the licensing requirements.

    Number of Hosts | Windows Version Installed on Host | Included VM Licenses | VMs Installed    | Licenses Purchased by Business
    --------------- | --------------------------------- | -------------------- | ---------------- | ------------------------------
    5               | Standard                          | 1                    | 30 (6 per host)  | 30
    5               | Enterprise                        | 4                    | 30 (6 per host)  | 15
    5               | Datacentre                        | Unlimited            | 30 (6 per host)  | 5

     

    As you can see, paying a little extra for the Datacentre edition of Windows Server can save you money by reducing the number of server licenses you have to purchase. This may not fit every implementation, but for large Hyper-V implementations licensing can play a huge part both in choosing a technology provider and in your ongoing costs. With VMware, of course, as well as paying for the actual software there are no ‘included’ Windows licenses, so you will have to license every copy of Windows you install, whether physical or virtual.

     

    Hardware Decision Factors

    So far we have looked at planning, a virtualisation scenario and what you should think about before choosing a technology provider for your implementation. The final step in this process is working out what hardware to purchase, and how much.

    Working out the specification and number of host servers you will purchase should centre on a number of factors:

    • Number of virtual servers
    • Network bandwidth required
    • Memory requirements of virtual servers
    • CPU requirements of virtual servers
    • Storage

    All of the above will have an impact on your virtualisation design and purchasing, so let’s look at each one in more detail.

    Number of Virtual Servers

    As we saw in the virtualisation scenario, the number of virtual servers you plan on hosting can have a large effect on your hardware decisions. If you only ever plan on hosting a small number of servers then you may well get away with just one virtualisation host; but remember that this gives you no redundancy if your host dies, and no room for future growth.

    If you plan on hosting a large number of virtual servers, this will force you down certain paths regarding the number of hosts and how the virtual hard drives are stored.

    Network Bandwidth Required

    When you are designing your host servers, the network resources required by each virtual server will determine the number of network interface cards (NICs) built into the host. Consider that if you host 5 servers on a host with only 1 NIC, all data traffic goes down that one network card, and this may have a detrimental effect on your users’ experience.

    One item that will be discussed later in this book is setting up the management side of virtualisation, which will take up at least one NIC for management traffic across the network.

    Memory Requirements

    As anyone who works with computers knows, memory has a massive effect on how they perform; this is no different for servers, and in some ways it is more important. When designing the memory requirements for your host servers you will need to account for the memory required by the host itself plus all of the virtual servers.

    Let’s look again at the setup from earlier in the book.

    [Diagram: the four-server setup from earlier]

    Let’s assume that each of these servers has 10 GB of memory and we are going to virtualise all of them except the AD server. A simple calculation shows that the virtual servers will need 30 GB of memory between them, and if we give the host the minimum of 4 GB, the host server needs 34 GB in total.
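    To make that sum concrete, here is a throwaway PowerShell sketch; the server names and sizes are assumptions for illustration only, not a sizing tool.

    # Memory per virtual server in GB; the AD server stays physical so it is excluded
    $vmMemoryGb = @{ 'FileServer' = 10; 'WebServer' = 10; 'SQLServer' = 10 }
    $hostOsGb   = 4    # the minimum we allowed for the host operating system
    $totalGb    = ($vmMemoryGb.Values | Measure-Object -Sum).Sum + $hostOsGb
    "The host needs at least $totalGb GB of RAM"    # 34 in this example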

    Sounds simple, and to a certain extent it is; but as we will discuss later in the book, if you are planning your virtualisation environment for redundancy you will need to allow enough memory on your hosts to cope with the failure of 1 or 2 hosts and the failover of their virtual servers to the remaining hosts.

    In Hyper-V this is called failover clustering and is the best way to ensure that your users are not affected if you suffer the failure of one or more virtualisation hosts.

    CPU Requirements

    Designing your CPU requirements for your host servers is done in a very similar way to the memory requirements. You need to consider how much CPU activity and load each of your virtual servers will take plus the load that will be required by the host operating system.

    You also need to give consideration to what will happen in the event of a host failure and the failover of other virtual servers to the remaining hosts.

    Storage

    In a virtualised environment storage plays a major role. In a simple one-server environment this can just be a large amount of hard disk space on the host itself; in larger environments the storage is normally a NAS, SAN or other form of centralised storage.

    Some of the factors you need to consider when looking at your storage design are:

    • Size
    • Redundancy
    • Connectivity
    • Expandability

    Let’s look at each of these factors one by one.

    Size

    The size of your storage will depend entirely on how many virtualised servers you are going to run and what they are going to store. For example, a web server is not likely to take up a lot of hard drive space, whereas a large SQL server will.

    You also need to consider your growth plans: while you can design your storage requirements around your current selection of servers, you also need to allow for those servers growing and for the addition of more.

    Redundancy

    With centralised storage you risk introducing a single point of failure. With Microsoft Hyper-V and failover clustering you are implementing a system that can cope with the failure of virtualisation hosts; if you don’t take similar steps with your storage solution then you are introducing a fail point. If the SAN or NAS fails then access to your virtual servers’ hard drives fails, and therefore your servers will not run.

    There are various ways you can mitigate the failure of your storage system.

    Duplication

    One way to ensure your storage system is not a single point of failure is to have a duplicate system, and this method is by far the most reliable: by setting up a duplicate storage system that is constantly synchronised with the live data you remove the single point of failure. However, it is expensive and requires storage solutions capable of failover, which narrows your field of choice.

    RAID

    Setting up your storage solution for RAID 5 or 10 will mitigate the effects of an individual drive failure. This requires a storage solution capable of RAID (which is probably all of them), but above all your storage solution must support hot-swappable drives, so that you do not need to shut down the storage system to replace a faulty drive.

    Backup

    Finally, ensure your data and your virtual hard drives are backed up. Backing up your VHDs means that if your storage solution fails you only need to recover the VHDs to return your system to a functional state. Most backup software on the market can back up virtual servers, but an obvious choice would be Microsoft System Center Data Protection Manager, which has full Hyper-V support.

    Connectivity

    How you connect to your storage solution is also a key factor; two of the main choices are iSCSI and Fibre Channel. Which you choose can be affected by a number of factors.

    Once you have chosen your connectivity, you need to consider how much traffic will travel between your hosts and the storage system. You also need to consider redundancy again: a single point of failure is introduced if you use a single network cable and switch to connect all your hosts to your SAN or NAS.

    A simple mitigation is to give each host two routes to access the storage system; this removes the connectivity single point of failure.

    Expandability

    As you consider your storage solutions you should also think about the future. When setting up Microsoft Hyper-V you will need to connect your failover cluster to your storage system and the LUNs you set up on it. If you do not think about how your environment will grow, you could lock yourself into a situation where, if your storage system becomes full, you must remove all the current LUNs before you can increase the amount of available storage space.

    So let’s explain that in a bit more detail. When you purchase your storage solution you are buying two distinctly different things: the physical hard drives and the ‘housing’ they go in. The housing is the piece of equipment that manages your hard drives, the iSCSI connections and the partitioning of the hard drives into what are called LUNs, which you can then connect to your hosts.

    What you need to consider when purchasing the ‘housing’ is the sophistication of its management system. Some low-end storage solutions will have the basics, such as RAID arrays, dual controllers and dual iSCSI connections, but what they won’t have is dynamic LUN expansion. This means that once you set the size of a LUN it is fixed, and if you ever want to increase it you will need to back up all the data, delete the LUN and rebuild it at the larger size. Some of the more expensive storage solutions can increase the size of a LUN as you insert more physical hard drives.

    A major factor in this decision is obviously cost; the more features in a storage solution the higher the cost.

     

    Installation

    So you have run your trial, planned your virtualisation setup and decided on the specifications for your hardware; all you need to do now is install the system. That sounds simple, and to a certain extent it is.

    The basic setting up of the hardware and storage system is relatively straightforward. This book is not designed to be a how-to for virtualisation, more a thought-provoker, but the basic steps for setting up your hardware include the following (a PowerShell sketch of the two role-enablement steps follows the list):

    • Connecting your hosts to your network
    • Setting up your storage LUNs
    • Connecting your hosts to your storage solution
    • Installing Windows Server on your hosts
    • Enabling the Hyper-V role
    • Enabling the failover clustering feature
    • Installing your virtual servers
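
    Here is that sketch, a minimal one assuming Windows Server 2012 hosts (run it on each host in turn):

    # Enable the Hyper-V role (this restarts the host), then the failover clustering feature
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools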

    While these are the basic steps for installing a virtualised infrastructure there are, as you have probably guessed, a number of factors to consider.

    Network Connection

    Network connectivity can play a large part in your end users’ experience; if you do not assign enough network bandwidth to both the host server and the virtual machines you can create bottlenecks, which will then affect how quickly data is returned to your users. You also have to consider management of the host and the connection to your storage solution. In an ideal world your host server would have separate connections for each of the networks it has to service. The simple diagram below shows a host server with a series of network connections.

    [Diagram: a host server with four network connections]

    In this scenario the server requires 4 network interfaces; however, this only gives the virtual servers on the host a single connection to the domain LAN. A much better scenario would be to increase the number of network interfaces so that, once Hyper-V is installed, you can assign more interfaces to the domain LAN.
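
    As a minimal sketch of that assignment, assuming Windows Server 2012’s Hyper-V PowerShell module and made-up adapter names (‘NIC3’, ‘NIC4’):

    # Dedicate two physical NICs to external virtual switches used only for VM traffic
    New-VMSwitch -Name 'Domain LAN 1' -NetAdapterName 'NIC3' -AllowManagementOS $false
    New-VMSwitch -Name 'Domain LAN 2' -NetAdapterName 'NIC4' -AllowManagementOS $false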

    Setting Up Your Storage LUNs

    A LUN on a storage unit is a logical space that can be assigned to the virtualisation infrastructure. How you set up and assign your LUNs depends heavily on the manufacturer of your chosen storage solution. When setting up your LUNs you need to consider how much data you are likely to store and how much that data may grow; these two factors will determine how many LUNs you create and the size you assign to each.

    Failover Clustering

    Failover clustering is the technology that allows your virtual servers to ‘fail over’ to another host if their parent host goes down for any reason. It is relatively simple to set up, and running the wizard on a host will guide you through the steps for enabling the technology.

    The first step in setting up failover clustering is to validate the cluster. To do this you will need all your hosts set up, the Hyper-V role installed, and the connection to your storage solution complete and working. Validation carries out checks on the system and simulates failed hosts. While you can create the cluster even if elements of the validation fail, it is worth noting that if you place a support call with Microsoft in the future they will ask to see the original validation report to verify that the system was fully functional when set up.

    It is also worth making sure validation passes just to give you peace of mind that your system is ‘up to the job’.
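
    If you prefer the command line to the wizard, here is a minimal sketch with placeholder host and cluster names:

    # Run the full validation suite, then create the cluster if all checks pass
    Test-Cluster -Node HOST1, HOST2
    New-Cluster -Name HVCLUSTER -Node HOST1, HOST2 -StaticAddress 192.168.1.50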

     

    Management

    So you now have a fully functioning virtualisation infrastructure and all is going well. The management of virtualisation is a relatively simple job: you manage your virtual servers as you would any physical server, with updates and patches installed to a schedule that you, as the Network Manager, have decided upon.

    The management of the actual Hyper-V hosts is also fairly simple, in that, as with any physical server, updates and patches are scheduled per your company or network team policies. The one key difference is that restarting a host server affects a number of virtual servers, so it requires a little more thought. While it is true that with failover clustering installed the virtual servers will fail over once the host restarts, it is a much better plan to move the virtual servers to another host before any updates take place; this way the end users get no interruption of service.

    The main management tool for a Hyper-V environment with failover clustering installed is the failover cluster manager.

     

    From this management suite you can:

    • Create new virtual servers
    • Stop / shut down / restart virtual servers
    • View the health of network connections, storage connections and networks
    • Move / migrate virtual servers

    Virtual servers can be moved between hosts in a number of ways, but if you don’t want your end users’ work interrupted then live migration is the service to use. Live migration moves a virtual server from one host to another without interrupting network access to it, so your end users will not notice any break in connectivity.
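
    A minimal scripted sketch of that, with placeholder VM and host names (on Windows Server 2012 you can also drain a whole host with Suspend-ClusterNode -Drain before patching it):

    # Live-migrate one clustered VM to another node without interrupting its users
    Move-ClusterVirtualMachineRole -Name 'SQL01' -Node 'HOST2' -MigrationType Live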

    The failover cluster manager is a simple application to use and provides all the tools necessary to manage your virtualisation environment effectively; and because it’s part of the Windows management suite, it is very familiar to anyone who works with Windows Server.

     

    Conclusions

    The aim of this eBook article has always been to help you in the decision-making process with regard to virtualisation. You need to decide:

    a)  whether virtualisation is right for your environment

    b)  which technology provider to use

    c)  what storage solution to choose

    d)  the specification of your hardware

     

    While we can’t tell you if virtualisation is right for your environment, we hope this eBook article helps you in making those other decisions.

    There are key points you can take from this book to help you on your virtualisation journey: Microsoft Hyper-V is a powerful virtualisation technology, and as it is built into Windows Server 2008 R2 you don’t have to pay any more for it; and the licensing model Microsoft provides for virtual servers can actually save you money on your licensing costs.

    The free version of Hyper-V Server gives you the option of installing a cut-down version of Windows dedicated to virtualisation, ideal for environments that don’t currently run Windows.

    Combine Microsoft’s Hyper-V and failover clustering technologies and you have a solution that is cost-effective, resilient and extremely powerful for managing your virtualised environment.

    And finally, as they used to say on all good news programmes in the UK: did the School in question actually save any money?

    Well, the answer is yes: the reduction in the number of servers to maintain, cool and generally look after has given them savings of approximately £12,000 a year.

    The School in question was a very forward-thinking one and took on Hyper-V just as it was making inroads into the market; the next step for them is to upgrade their current virtualisation farm to Windows Server 2012 and start taking advantage of all the great new features it brings.

     

     



     

  • Virtualizing Tier 1 SQL Server

    A while back I published a couple of posts on virtualizing SQL Server, and in the light of developments in both the virtualization platforms out there and SQL Server itself I feel the need to do a complete rewrite.

    The traditional approach to implementing high availability (HA) in SQL Server has been to create a cluster, and for this to be truly resilient you need three nodes or more, so that you still have two-node HA while one node is offline for planned maintenance, for example to patch the OS or the SQL Server node itself. What does this mean for virtualization? If you are using Hyper-V it doesn’t really matter: the VMs comprising this cluster (aka a guest cluster) are kept on separate physical nodes of a (physical) cluster, and you can patch the hosts, the guest OS and SQL Server all using Cluster Aware Updating (CAU) in Windows Server 2012. However, it’s not quite so easy in VMware: you’ll have to use VMware Update Manager to patch the hosts and then use CAU to patch the guest OS and SQL Server. Moreover, as far as I know you can only have a two-node guest cluster in vSphere, so while you are patching SQL Server you are down to one node. So what if you have to use VMware and you want more in the way of HA, like you have on Hyper-V?

    One option would be to use Availability Groups in SQL Server 2012 Enterprise edition. This combines the best of mirroring/log shipping with clustering:

    • There’s no shared storage, so I don’t see why you would be limited to a three-node guest Windows cluster.
    • Failover is very quick: as there’s no shared storage, each node has its own copy of the database being protected.
    • Unlike mirroring and log shipping you are protecting a group of databases as though they were one, and you can use the secondaries for reporting and as a source for backups (only full backups, though). Plus you can have multiple secondaries, for example a synchronous secondary in your local data centre with an asynchronous copy at another location; a bit like replication in Hyper-V & VMware, but at the database rather than the VM level. That’s an important point: you should use this technique over VM replication, as all you are synching is the actual SQL Server data.

    Your next consideration is going to be ensuring a predictable level of performance, or your users might be phoning about speed issues as well. Tuning in a physical world has occupied many a mind and there’s tons of advice out there from MVPs, TechNet etc. Things get trickier in a virtual world, as resources are shared. However, if you are running a tier 1 database then best practice would be the following (a Hyper-V-side sketch of the CPU and RAM settings follows this list):

    • CPU – don’t over-commit, and use all the NUMA capabilities in your hypervisor to pass maximum performance through to the database. Bear in mind that for HA you may well want this capacity reserved on other nodes.
    • RAM – can’t be over-committed in Hyper-V, and shouldn’t be over-committed on VMware, as performance suffers.
    • IO – use the latest SR-IOV cards, which can receive and manage virtual network traffic straight from the VMs, if you can. However, you can’t team SR-IOV cards, so you might want to pass multiple SR-IOV NICs through to a VM and then team inside the VM (you can do this in Hyper-V, but I am not sure whether you can on VMware where the guest OS is Windows Server 2012). If not, use NIC teaming at the host level and the appropriate teaming policies for access to the database (VMware advice on this is under networking policies here).
    • Storage – access questions usually revolve around whether to use RDM/pass-through disks so that the database itself is stored directly on a LUN referenced by the VMs. There’s actually very little difference these days, and on both platforms you could use a share if you have a Windows Server 2012 file server running Storage Spaces.
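
    Here is the promised Hyper-V-side sketch of those CPU and RAM settings; the VM name and the numbers are illustrative assumptions only, not recommendations:

    # Fix the memory allocation and reserve the full CPU entitlement for a tier 1 SQL VM
    Set-VM          -Name 'SQL01' -StaticMemory -MemoryStartupBytes 32GB
    Set-VMProcessor -VMName 'SQL01' -Count 8 -Reserve 100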

    The definitive white paper for virtualizing SQL Server 2012 is here. However, the latest best practices guide for running SQL Server on VMware I could find is here, but it’s three years old and so applies to older versions of SQL Server (typically 2005/2008) and Windows Server 2008. Hopefully this will change, as Windows Server 2012 & SQL Server 2012 are now supported, and of course there’s going to be even more new stuff with SQL Server 2014 running on Windows Server 2012 R2. Whatever you decide to do you’ll want your HA design to be supported, and for the definitive word on that check KB956893.

    Finally, if you are a DBA reading this, one way to get to know your data centre admins is to help them with their SQL Server: whether they are using System Center or vSphere, it’s likely that the database underpinning these is SQL Server, and it could probably do with a bit of TLC, plus a general discussion about protecting those databases too, as they are vital components of your data centre.

    Note:

    • My definition of tier 1 isn’t necessarily big; it’s more about the impact of a tier 1 service not being there. If it isn’t there you can’t operate, trade, function etc. Of course, systems like this tend to be heavily used, so predictable performance is important too.
    • I haven’t mentioned VMware Fault Tolerance here because it has so many limitations that it is impractical for all but the smallest databases, and I generally find that if it’s tier 1 it’s generally very big and used by lots of people, so only having one CPU doesn’t really work.
  • Bi Products

    I have just read Paul Gregory’s guest post for the TechNet Flash, and the two things that caught my eye were the point he made about being Bi-Lingual and the one about being Skilled Up.

    I put these two themes together and came up with the title for this post, which essentially means being skilled in more than one technology. As we move into a world where some of the nuts and bolts are automated or outsourced away from us, having a set of skills that can bridge technologies is going to be more valuable. In your own case I think of two ways this works:

    I am pretty good on SQL Server, especially BI, such that I could still have it on my CV with some confidence; however, more recently I have been focusing on Windows Server and System Center, and my SQL background has really helped with this. For example, in my Evaluate This series I used a SQL Server workload to show Hyper-V live migrations, SQL Server running on a Storage Space, and SQL running on Windows Server Core.

    The other way this works is that I also have a pretty good knowledge of VMware (I am a lowly VCP5) alongside my MCSE: Private Cloud. This means I understand enough to speak VMware and articulate the world of Hyper-V to VMware experts.

    This comes into play with integration (you know how to get product X to integrate with solution Y) and with migration (I want to move from vendor 1 to vendor 2). Those kinds of projects are always going on, and they have a number of advantages over other kinds of IT work:

    • You are respected as the expert, and you can’t buy respect; you can only earn it.
    • The work is more challenging, so having the skills isn’t enough; you also need (back to Paul Gregory’s post) to relate to users and other technical teams.
    • The day rates are higher, because the combination of those two skills is rare.

    So while it’s quiet over the summer holidays (unless you are in education IT, in which case you have my complete respect!), start having a look at some new stuff, be it Windows Server 2012 R2 or SQL Server 2014, or have another look at the Microsoft Virtual Academy (MVA).

  • Installing SQL Server in a Virtual World

    I do get occasional feedback that SQL Server is hard to install: all those screens, all those checkboxes, etc. This is simply a reflection of how much there is in SQL Server aside from the database engine. However, if you know what you want and you are doing this regularly, then script it; if you have less experienced staff you want to delegate the task to, then script it; and if you want to reduce patching, then er… script it.

    On my recent VMware course it was obvious that the rest of the delegates, while generally keen on scripting, didn’t do this when deploying SQL Server on VMware, so here’s my advice (which also works on Hyper-V, BTW).

    What you can’t do in a virtual world is simply copy/clone a SQL Server virtual machine, because you’ll end up with two VMs with the same Active Directory SID, and SQL Server doesn’t like the server name to change once it’s installed. So this is how I do it, as I often need to build a quick SQL VM:

    • Set up a VM with your guest OS of choice, e.g. Windows Server 2012 Datacenter edition.
    • Install the prerequisites for SQL Server, for example the .NET Framework 3.5 SP1.
    • Use Image Prepare to partially install SQL Server (a sketch of this step follows this list).
    • SysPrep the machine (windows\system32\sysprep\sysprep.exe) to anonymise it.
    • Create an unattend.xml to be consumed when the VM comes out of SysPrep, and save it to the SysPrep folder above. Typically this answer file will join your new VM to your domain, set up the local admin account, input locale, date/time etc., and this TechNet library article will walk you through that.

    Note: passwords in the answer file are stored in the clear, so plan around that.

    • Save this off as a template. That typically means saving the VHD on a share for later use.
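
    Here is that sketch of the Image Prepare step, using SQL Server setup’s PrepareImage/CompleteImage actions; the feature list, instance names and accounts are illustrative assumptions, not a definitive command line:

    # On the template VM: lay down the SQL Server bits without configuring them
    .\setup.exe /Q /ACTION=PrepareImage /FEATURES=SQLEngine /INSTANCEID=MSSQLSERVER /IACCEPTSQLSERVERLICENSETERMS
    # Later, on a VM built from the template: complete and configure the installation
    .\setup.exe /Q /ACTION=CompleteImage /INSTANCENAME=MSSQLSERVER /INSTANCEID=MSSQLSERVER /SQLSYSADMINACCOUNTS="CONTOSO\DBAs" /IACCEPTSQLSERVERLICENSETERMS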
     

    To use the template

    • Create a new VM using a PowerShell / PowerCLI script. For example, here’s the sort of thing we used earlier in the year at our camps to create a server on Hyper-V:

    # Create the VM shell with no disk, give it 2 vCPUs and dynamic memory,
    # attach the sysprepped template VHD, then start it
    New-VM -Name $VMName -NoVHD -MemoryStartupBytes 1GB -BootDevice IDE -SwitchName $VMSwitch -Path $VMLocation
    Set-VM -Name $VMName -ProcessorCount 2 -DynamicMemory
    Add-VMHardDiskDrive -VMName $VMName -Path $VHDPath

    Start-VM $VMName

    which creates a simple VM with 2 processors and 1 GB of dynamic memory, based on a given VHD, $VHDPath.

    • Rename the machine. Here is a clunky script Simon and I use in our camps to do this:

    # Find the IP address of the new VM and look it up in DNS
    $vmip = Get-VMNetworkAdapter $VMToRename | where SwitchName -eq "CorpNet" | `
      select -ExpandProperty "IPAddresses" | where {$_ -match "^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$"}

    $vmGuestName = [system.net.dns]::GetHostEntry($vmip)
    $vmGuestName = $vmGuestName.HostName

    # Now execute a remote PowerShell command to rename it, then restart
    Invoke-Command -ComputerName $vmGuestName -ScriptBlock {
      Rename-Computer -NewName $args[0] -DomainCredential contoso\administrator } -ArgumentList $NewVMName

    Restart-Computer -ComputerName $vmGuestName -Wait -For PowerShell

    Hopefully you’ll write something better for production.

    Doing this from the installer UI and Server Manager is tedious and prone to mistakes, but there is another reason to do this all from the command line: you should be installing SQL Server onto an installation of Windows Server that has little or no UI. It’s called Server Core and is the default installation option for Windows Server 2012. It cuts patching in half, and there’s no browser to secure, because it’s designed to be managed remotely. New in Windows Server 2012 is the ability to turn the user interface on and off (in 2008 R2 this was an install-time choice), and there’s a new halfway-house installation called MinShell; my post on it is here.
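
    Since the GUI is now just a removable feature, a quick sketch of moving between those installation levels on Windows Server 2012:

    # Remove just the shell for the 'MinShell' halfway house...
    Uninstall-WindowsFeature Server-Gui-Shell -Restart
    # ...or remove the management infrastructure too for full Server Core
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart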

    Any VMware expert is going to read this and laugh, because vCenter has a built-in template capability, so you don’t have to do all the PowerShell hand-cranking to clone a sysprepped VM and then domain-join it. Any Hyper-V expert shouldn’t be doing this either, as System Center is how this is done in production: you can create not just templates of individual VMs, but architect services and set up self-service so users can ask for templated VMs via your service desk or directly from a portal. However, my point here is that under the covers this is the sort of thing you’ll need to do to run lots of SQL Server at scale for tier 1 applications where downtime is critical. Having said that, this sort of thing might be useful for labs and for setting up evaluations of SQL Server 2014 running on Windows Server 2012 R2.

  • The evolution of Rovnix: Private TCP/IP stacks

    We recently discovered a new breed of the bootkit Rovnix that introduces a private TCP/IP stack.  It seems this is becoming a new trend for this type of malware.

    The implementation of the private stack is based on an open-source TCP/IP project and it can be accessed from both kernel and user modes.

    It works like this:

    1. At boot time, Rovnix hooks the following exported APIs in ndis.sys by patching the export table in memory:

       • NdisMRegisterMiniportDriver()  (for NDIS 6.0)
       • NdisMRegisterMiniport()  (for NDIS 5.1)

    2. When the network adapter driver calls NdisMRegisterMiniportDriver()/NdisMRegisterMiniport() to register with NDIS, the hooked function registers Rovnix’s own miniport handler functions.
    3. With its own miniport handler functions, the malware is able to send and receive packets through this private TCP/IP stack (see Figure 1).


    Figure 1: The private TCP/IP stack

    The stack is introduced for stealth purposes:

    • It bypasses the rest of the NDIS library code, so it can evade personal firewall hooks
    • The port used by the private TCP/IP stack cannot be seen by normal means (such as the “nbtstat” command)

    Basically, this means Rovnix has introduced new stealth in its network communication.

    Traditional methods of analysis, for example running network traffic monitoring software, may not be able to see the packets that are sent or received via a private TCP/IP stack.

    However, the compromised machine will contact the domain youtubeflashserver.com. If a network administrator notices traffic sent to this domain, then most likely there are infected machines.

    With our latest signature update, we detect the Rovnix dropper as TrojanDropper:Win32/Rovnix.I. Windows Defender Offline (WDO) also detects the infected volume boot record as Trojan:DOS/Rovnix.F.

    Sample: SHA1: a9fd55b88636f0a66748c205b0a3918aec6a1a20

    Chun Feng
    MMPC


  • Enabling Management of Open Source Software in System Center Using Standards

    This post is a part of the nine-part “What’s New in Windows Server & System Center 2012 R2” series that is featured on Brad Anderson’s In the Cloud blog.  Today’s blog post covers standards-based management of open source software with System Center and how it applies to Brad’s larger topic of “Transforming the Datacenter.”  To read that post and see the other technologies discussed, read today’s post:  “What’s New in 2012 R2:  Enabling Open Source Software.”

    Whether in the public cloud with Windows Azure, the private cloud with Windows Server and System Center, or a hybrid of both, running and managing open source workloads (such as Linux and JEE applications) is a key tenet of Microsoft cloud solutions. In this post, we will review the standards-based management approach used in System Center to manage open source software, take a detailed look at the management implementation in the UNIX/Linux agents for Operations Manager and Configuration Manager, and introduce System Center 2012 R2 improvements to these agents.

    System Center 2012 R2 and Management of Open Source Software

    System Center 2012 R2 is a great solution for management of the heterogeneous private cloud with Windows, Linux and UNIX workloads running side by side. With System Center 2012 R2, the portfolio of heterogeneous management capabilities has been substantially expanded and now encompasses:

    • Inventorying and deploying software to Linux and UNIX with Configuration Manager
    • Monitoring UNIX and Linux computers and services with Operations Manager
    • Monitoring JEE Application Servers on Linux, UNIX, and Windows with Operations Manager
    • Deploying Linux virtual machines and services with Virtual Machine Manager (and Windows Server Hyper-V)
    • Backing up Linux virtual machines with Data Protection Manager

    In enabling the heterogeneous management features of System Center, our focus is on standards-based management. Open standards such as Common Information Model (CIM) and WS-Management play a key role in many of the heterogeneous management capabilities of System Center.

    One of the primary benefits of a standards-based approach is that different implementations of similar technologies can be uniformly presented and managed. For example, a Linux server, AIX server, and Windows server may have very different implementations for identifying and reporting on operating system resources and performance (such as processor inventory and utilization), but by managing each of these servers through a management implementation based on CIM, the administrator or management software does not need to understand the specific architectures, APIs, and all details of each operating system’s conventions and implementations. Rather, a common interface and model is used to uniformly present key performance indicators and inventory. In turn, this allows management software, such as System Center, to tightly integrate management for a variety of platforms, with consistent presentation and experience throughout.


    Figure 1 - A Linux Server Monitored in System Center 2012 R2 - Operations Manager

    Implementing the Standards-Based Approach

    In System Center 2012 R2, we continue our commitment to standards-based management of open source workloads, and have made a significant improvement in this regard by implementing a common CIM server in both the Operations Manager and Configuration Manager agents for UNIX and Linux.

    In the Windows realm, a consistent CIM implementation has been available since the introduction of WMI (as far back as NT 4.0). Likewise, WS-Management (or WS-Man) has been available for Windows in Windows Server 2003 and beyond. However, expanding common management capabilities to a broad array of UNIX and Linux operating systems (and architectures) with these standards required new implementations.

    The UNIX and Linux agents for Operations Manager consist of a CIM Object Manager (i.e. CIM Server), and a set of CIM Providers. The CIM Object Manager is the “server” component that implements the WS-Management communication, authentication, authorization and dispatch of requests to the providers. The providers are the key to the CIM implementation in the agent, defining the CIM classes and properties, interfacing with the kernel APIs to retrieve raw data, formatting the data (e.g. calculating deltas and averages), and servicing the requests dispatched from the CIM Object Manager. From System Center Operations Manager 2007 R2 through System Center 2012 SP1, the CIM Object Manager used in the Operations Manager UNIX and Linux agents is the OpenPegasus server. The providers used to collect and report monitoring data are developed by Microsoft, and open-sourced at CodePlex.com.


    Figure 2 - Software Architecture of the Operations Manager UNIX/Linux Agent

    This CIM/WS-Man standards-based approach also brings benefits to the agent implementation itself. The resulting management agent is lightweight, with a small footprint and low impact to the monitored host. Additionally, such a CIM server and provider implementation is quite portable, allowing it to be consistently implemented across a broad matrix of UNIX and Linux operating system distros, versions, and architectures – while returning monitoring data with a uniform presentation. Lastly, the standards-based approach enables the Operations Manager server to UNIX/Linux agent communication with well-defined protocols (WS-Man over HTTPS) and established interfaces (WinRM).
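
    As an aside, one way to see those protocols in action is to query a monitored Linux host’s SCX classes over WS-Man from a Windows machine; the host name and credentials below are assumptions for illustration, and 1270 is the agent’s default port:

    winrm enumerate http://schemas.microsoft.com/wbem/wscim/1/cim-schema/2/SCX_OperatingSystem?__cimnamespace=root/scx -r:https://linuxhost.contoso.com:1270/wsman -auth:basic -username:monuser -password:***** -skipCACheck -skipCNCheck -encoding:utf-8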

    A very similar agent software architecture is employed in the UNIX and Linux agents for Configuration Manager, first available in the System Center 2012 SP1 product. Like the Operations Manager UNIX and Linux agents, the Configuration Manager UNIX and Linux agents implement a lightweight CIM Object Manager and set of providers. While the Operations Manager agent providers are focused on system monitoring metrics, the Configuration Manager agent providers enable scenarios such as hardware inventory.

    By adopting a standards-based approach to enabling and managing open source software, System Center 2012 R2 is able to deliver consistency in the standards, protocols, and management interfaces that are employed in managing Windows Server workloads and open source software.

    Introducing Open Management Infrastructure in System Center

    With System Center 2012 R2, UNIX/Linux agents for both Configuration Manager and Operations Manager are now based on a fully consistent implementation of Open Management Infrastructure (OMI) as their CIM Object Manager. In the case of the Operations Manager UNIX/Linux agents, OMI is replacing OpenPegasus. Like OpenPegasus, OMI is an open-source, lightweight, and portable CIM Object Manager implementation – though it is certainly lighter in weight and more portable than OpenPegasus.

    An excellent introduction to OMI can be found on the Windows Server Blog, but some of the key features of OMI include:

    • Very small footprint (the package size for Operations Manager UNIX/Linux agents has been reduced by half)
    • Highly portable
    • Simple provider extensibility

    While these are immediately realized benefits in the System Center 2012 R2 UNIX and Linux agents, perhaps the most significant and exciting benefit of OMI lies in the promise of real and broad cross-platform, standards-based management. OMI has been designed not just to be portable between UNIX, Linux and Windows, but also for devices and embedded systems. As an example, both Cisco and Arista are working on WS-Man/CIM implementations for network device management with OMI. Given the possibility of using a single protocol or mechanism to manage network and storage devices, baseboard management controllers, and Windows, UNIX and Linux servers, one can quickly imagine the scenarios this could unlock in the automation-centric cloud world we now live in. OMI’s portability and standards-based implementation open this management opportunity to a potentially incredible array of managed devices and entities, and of management platforms and tools, with streamlined interoperability. Thus, it is easy to see why OMI is a foundational implementation element of the Datacenter Abstraction Layer (DAL) concept.

    Further discussion of some of the management scenarios that OMI enables, and a link to a great demo, can be found on the PowerShell Team Blog.

    Summary

    As we continue to broaden the portfolio of management capabilities for open source software in System Center 2012 R2, we are reaffirming our commitment to open standards-based management, and aligning with exciting new models developing in the cloud era. The availability of OMI, and its adoption into the System Center agents for UNIX and Linux is another step forward in the realm of standards-based management. Now, a CIM and WS-Man based implementation used for management of Windows and Linux/UNIX can extend even broader to devices, embedded systems, and applications. This benefits the System Center user, as we continue to provide consistent experiences regardless of the managed platform, and this benefits the management ecosystem – as additional management providers and management tools can more fully and capably interoperate.

    To see all of the posts in this series, check out the What’s New in Windows Server & System Center 2012 R2 archive.