Why is it so hard to get time off for training? It can be, because there is a cost associated with any training event. However, although SQLRelay offers free training, you may still need some help explaining its value to your boss. Here are some useful reasons you can give when you ask for the time to attend a SQLRelay event, and we hope to see you there!
Here are some signs that your organisation needs to send you to SQLRelay…
Well, lots of conferences can give you that! Let’s look at why SQLRelay isn’t like other conferences…
How SQLRelay can make things easier for people who don’t attend…
To summarise, attending a free training event, given by Microsoft and world-experts, is an excellent investment of your organisation’s time, resources and energy. SQLRelay is coming to a location near you. Come and join us: learn, and get help with your SQL Server issues, for free. We look forward to seeing you there!
SQLRelay is a series of day-long conferences held around the UK by local community organisers. Each event covers a wide range of SQL Server related content delivered by expert speakers from around the world. In its fourth iteration, it’ll be appearing in a city near you during November. For more details consult sqlrelay.co.uk
Help spread the word by getting in touch with us via - Twitter @SQLRelay2013 - Facebook/SQLRelay2013 or via LinkedIn
Event Speaker: Jen Stirrup - Most Valuable Professional (MVP) - SQL Server
Jen is best known for her work in Big Data, Business Intelligence and Data Visualisation. She is joint owner of Copper Blue Consulting, delivering business-critical solutions that add enterprise value in addition to ensuring technical integrity. Jen is a Director-at-Large (Elect) for the Professional Association for SQL Server (PASS), holding the EMEA seat. She is a current holder of the SQL Server Most Valuable Professional (MVP) Award and won the SQLPASSion Award, presented by PASS at Summit 2012, for her work helping the European SQL Server community. Jen has presented at a variety of world-class events including TechEd North America, TechEd Europe, PASS Summit, PASS Business Analytics Conference, SQL Live! 360 and SQLBits, along with SQLSaturday events throughout Europe and the United States.
The iconic kickstand, a better full-HD screen, a lighter form factor and superior sound make Simon May, Microsoft Evangelist, rather obviously fall in love with the new Surface 2 and Surface Pro 2 devices. But are they any good for the IT Pro?
Last week I was lucky enough to be one of the first people to go hands-on with the new Surface 2 and Surface Pro 2 devices from Microsoft. As always, this series is about what they’re like for IT Pros, which I’ll get onto in a moment, but first let me tell you how I use my current Surface devices. Currently I only own Surface RTs – three of them, in fact, two of which are for demo purposes. My main Surface spends most of its time sat by the sofa and is used for casual non-work stuff, but it’s also used heavily for commuting. When I go into London for work I only take my Surface; I don’t need anything else for email, meetings, blogging or my general day-to-day non-technical work. Surface RT is the perfect device for this because it’s light and I don’t need to charge it. I also have an Android tablet sat there, but invariably I prefer the Surface RT.
Let’s start off with the new Surface 2, which runs Windows RT. The very first thing I noticed when I picked the device up was how much lighter it feels than the Surface RT; I’m sure there’s not much of a weight difference, but it’s enough to be noticeable. The next thing I did was try the iconic kickstand. It feels as solid as the Surface RT’s, with that pleasing spring when it gets to the end of its movement, but it can be pulled to move a smidge further and provide a flatter working angle. I moved the kickstand to the second position and was quite surprised by how that affected my ability to type. In the first position, and on the Surface RT, it’s pretty cumbersome to type on screen; with the Surface 2’s second kickstand position it’s easy to type with both hands – almost touch-typing.
My very next move was to power the device up and log in to set it up. Immediately I noticed how much sharper the 1080p screen is compared with the 720p screen on the Surface RT – even the Surface logo looked that little bit smoother. It’s also noticeable on the labels on live tiles, which are just that little bit more readable. Personally I prefer to have more tiles, so I quickly set my Surface 2 to display four rows, and the 1080p screen handles that really nicely too. Within about ten minutes my apps had started to sync down, so I jumped onto Twitter, which looked exactly as you’d expect on a 1080p screen. Wanting to test the screen further, I popped into the Windows Store and installed the 500px app to view some beautiful photography. I have to say the clarity of the screen and the contrast of the colours make it wonderful to look at.
Taking a look at the desktop to use the Microsoft Office apps didn’t disappoint me either. The higher resolution makes Office just that little bit nicer to work with, possibly because it’s more congruous with the display on my Asus Zenbook Prime – things just seem to be the right dimensions.
Everything around the interface feels snappier than my Surface RT, with apps loading just that little bit more quickly. Overall I found the Surface 2 a pretty great improvement over the Surface RT, and I’ll probably be buying my own. Sometimes people say to me that it’s not a great device for IT Pros because it doesn’t run desktop apps; I find, however, that it does almost everything I need for short periods, and does it better than anything else I’ve used. I have easy access to PowerShell and to Remote Desktop, and in fact through Remote Desktop I deliver a couple of apps I need occasionally (like RSAT) using RemoteApp, and they basically feel like native tools.
Another thing I like, which is actually a Windows 8 feature, is the ability to wipe my device. The device I used for this review wasn’t mine, wasn’t going to be mine, and other people needed to use it, so I used the reset ability of Windows 8 to reset the device and remove all my customisations before I handed it back. Very handy for recycling your old Surface RT device, I thought.
Surface Pro 2 for the Professional
Next I took a look at the Surface Pro 2. A colleague had signed into this device first and it was set up with their Microsoft account. The very first thing I did was play a movie trailer from Xbox Video – not so that I could see the screen (it’s 1080p, just like the Surface Pro) but so I could hear the sound. The Surface Pro 2, and indeed the Surface 2, have Dolby audio built in, and wow do they sound good! The sound is excellent and probably the best of any tablet device: lots of tablets only have one speaker (mono), but Surface has two speakers with multiple drivers and sounds superb. I could happily use the Surface Pro 2 as a music device or to watch whole movies on.
I wanted to give the USB 3.0 port a try, so I moved a huge amount of data over from a USB 3.0 memory stick, and transfer speeds averaged about 34 MB/s. Copying from the Surface 2 to the stick managed a similar average, so we can tick the “it just works” box. I also ran some benchmarks on the device and it outperformed my new laptop (Asus Zenbook Prime) in almost every way, from drive speed to 3D graphics performance and various CPU tests. I have to say it was impressive in every respect and obviously a total laptop replacement for an IT Pro – with this you’d only need one device for everything in your life, even a little bit of virtualisation!
In Windows Server you can create two kinds of Virtual Desktop Infrastructure (VDI) collection: personal or pooled. A personal collection is a bit like a company car scheme where everyone chooses their own car. This means there needs to be a car for everyone, even if they are on leave or off sick, and each car must be individually maintained. However, the employees are really happy, as they can pimp their transport to suit their own preferences. Contrast that with a pool of identical cars, where an employee just takes the next one out of the pool, and when it’s brought back it’s refuelled and checked ready for the next user. You don’t need a car for everyone, as there’ll be days when people just come to the office or use public transport to get to their destination. That seems a better solution for the employer, but not so good for the employees. Pooled VDI collections work like pool cars in that they are built from one template, so only one VM has to be maintained, but that means every user gets the same experience, which might not be so popular. However, pooled VDI in Windows Server 2012 has a method for personalising each user’s experience while still only requiring one template VM to be managed, and that’s why I want to use pooled VDI in my demos.
Carrying on from my last post, I right-click on RD Virtualisation Host and select Create Virtual Desktop Collection.
Now I get to specify the collection type.
Having chosen the collection type, I now need to pick a template on which to base the pool.
I found out that you can’t use the new Hyper-V Generation 2 VMs as a VDI template, even in Windows Server 2012 R2 RTM. It does mean you can still use that WIM-to-VHD PowerShell script I have been promoting in earlier posts in this series to create my template directly from the Windows installation media.
Note: you’ll need Windows 8.1 Enterprise for this, which is currently only available on MSDN until 8.1 is generally available in a couple of weeks, when there should be an evaluation edition available.
In fact, for a basic VDI demo the VHD this creates can be used as is; all you need to do is create a new VM from this VHD, configured with the settings each of the VDI VMs will inherit, such as CPU, dynamic memory settings, virtual NICs, which virtual switches they are connected to, and any bandwidth QoS you might want to impose.
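If you want to script the template rather than click through Hyper-V Manager, a rough sketch along these lines should work. Convert-WindowsImage.ps1 is one published version of the WIM-to-VHD script; the media path, VHDX path and memory figures are illustrative assumptions, not the exact values used in this series:

```powershell
# Hedged sketch: build a template VHDX straight from the Windows 8.1 media,
# then wrap a Generation 1 VM around it with the settings the pooled
# desktops will inherit. Paths and sizes are placeholders - adjust to suit.
.\Convert-WindowsImage.ps1 -SourcePath "D:\sources\install.wim" `
    -Edition "Enterprise" -VHDFormat VHDX -VHDPath "C:\VDI\Win81Template.vhdx"

# Create the template VM (Generation 1 - Gen 2 isn't supported for VDI here)
New-VM -Name "Win81x86 Gen1 SysPrep" -MemoryStartupBytes 1GB `
    -VHDPath "C:\VDI\Win81Template.vhdx" -SwitchName "FabricNet"

# Dynamic memory settings the pooled VMs will inherit from the template
Set-VM -Name "Win81x86 Gen1 SysPrep" -DynamicMemory `
    -MemoryMinimumBytes 512MB -MemoryMaximumBytes 2GB
```

The VM name and FabricNet switch match the template used later in this post; everything else is there just to show the shape of the commands.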
Here you can see the settings for my template VM, such as it being connected to my FabricNet virtual switch.
Normally when you build VMs from templates you will want to inject an unattend.xml file into the image to control its settings as it comes out of sysprep (as I have done in earlier posts in this series). This wizard helps you with that, or you can just enter basic settings in the wizard itself, as I have done,
and not bother with an unattend.xml file at all.
Now I can start to configure my collection by giving it a name, setting how many VMs it will contain, and specifying who can access it.
In a production environment you would have several virtualisation hosts to run your collection of VMs, and here you can specify the load each of those hosts will carry.
Having specified which hosts to use, I can now get into the specifics of what storage the VMs will use. I am going for a file share – specifically one of the file shares I created earlier in this series – which will make use of the storage enhancements in R2. Note the option to store the parent disk on a specific disk; this might be a good use for some of the new flash-based devices, as the parent disk will be read a lot but rarely updated.
My final choice is whether to make use of user profile disks. These allow all of a user’s settings and work to be stored in their own virtual hard disk; whenever they log in to a pooled VM, this disk is mounted to give them access to their stuff. This is really useful if all your users only ever use VDI, as you don’t need to worry about roaming profiles and so on. However, if your users sometimes use VDI and sometimes want to work on physical devices such as laptops, then you’ll want to use the usual tools for handling their settings across all of this so they get the same desktop whatever they use – remember, we work for these people, not the other way around!
That’s pretty much it – the desktops will build and your users can log in via the web access server, in my case by going to http://RDWebAccess.contoso.com/RDWeb
To demo the difference in performance of a pooled VDI collection that sits on a storage space with deduplication enabled, I could create another collection on the Normal* shares I created in my post on storage spaces by doing all of this again. Or I could just run one PowerShell command, New-RDVirtualDesktopCollection, with the appropriate switches:
$VHost = "Orange.contoso.com"
$RDBroker = "RDBroker.contoso.com"
$CollectionName = "ITCamp"
#The VDI template is a sysprepped VM whose virtual hard disk, network settings etc. all the pooled VMs will inherit. The VHD runs Windows 8.1, configured and sysprepped with any applications and settings needed by end users.
$VDITemplateVM = get-vm -ComputerName $VHost -Name "Win81x86 Gen1 SysPrep"
New-RDVirtualDesktopCollection -CollectionName "ITCamp" -PooledManaged -StorageType CentralSmbShareStorage -VirtualDesktopAllocation 5 -VirtualDesktopTemplateHostServer $VHost -VirtualDesktopTemplateName $VDITemplateVM.Name -ConnectionBroker $RDBroker -Domain "contoso.com" -Force -MaxUserProfileDiskSizeGB 40 -CentralStoragePath "\\fileserver1\NormalVMs" -VirtualDesktopNamePrefix "ITC" -OU "VDICampUsers" -UserProfileDiskPath "\\fileserver1\NormalProfiles"

My good friend Simon May can then gradually add more and more VMs into the collection with the Add-RDVirtualDesktopToCollection cmdlet to see how much space he can save.
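That growing-the-collection step can be sketched as follows. This is a hedged example, not a tested script: it reuses $RDBroker and the ITCamp collection from the snippet above, and assumes the VM share lives on fileserver1 with the Data Deduplication feature installed there.

```powershell
# Add VMs to the pooled collection in batches, then check deduplication
# savings on the file server after each batch to see the space reclaimed.
1..4 | ForEach-Object {
    Add-RDVirtualDesktopToCollection -CollectionName "ITCamp" `
        -VirtualDesktopCount 5 -ConnectionBroker $RDBroker

    # How much space has dedup saved on the file server so far?
    Invoke-Command -ComputerName fileserver1 {
        Get-DedupStatus | Select-Object Volume, SavedSpace
    }
}
```

Deduplication runs on a schedule, so you may need to wait (or trigger an optimisation job) between batches before the savings figures move.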
The other really clever thing about a pooled VDI setup like this is maintaining it. Clearly you will want to change the template the pooled collection is based on from time to time, for example to add or remove versions of applications and to keep patches up to date. All you have to do is make another template VM with the new applications and latest patches, and then update the collection from the collection management screen, or via the Update-RDVirtualDesktopCollection PowerShell cmdlet, for example:
PS C:\> Update-RDVirtualDesktopCollection -CollectionName "ITCamp" -VirtualDesktopTemplateName $VDITemplateName -VirtualDesktopTemplateHostServer $VHost -ForceLogoffTime 12:00am -DisableVirtualDesktopRollback -VirtualDesktopPasswordAge 31 -ConnectionBroker $RDBroker
where I would have set $VDITemplateName to the modified and sysprepped VM on which to base the updated collection. Note the ForceLogoffTime setting: that’s the time at which users will be thrown out and forced to log on again. If you don’t set it, they’ll only get the new version when they next log out and log in again. However you manage that, if you have used user profile disks in the collection as I have done, their preferences and settings will persist on the updated collection.
So that’s the basics of setting up VDI on a laptop for your evaluations. From here I could go on to add other parts of the Microsoft remote desktop solution, such as:
However, I would be interested to know what you would like me to post next, so please add a comment or, if you are shy, e-mail me.
Earlier this year, we started publishing a new set of metrics on our portal – an evaluation of our protection performance and capabilities. These metrics show, month over month, how we do in three areas of protecting our customers: coverage, quality, and customer experience.
And, since we started to publish the results on this page, I've had many great conversations with customers and partners alike, discussing what the results mean for their organization and their protections. In this post, I want to cover some of the most common taxonomy questions I was asked during those conversations and also discuss the results for September 2013.
First, let's dive into what the terms we use really mean:
This is how we measure threat misses and infections. If we block a threat, we’ve protected our customers as expected, and that’s a win. Misses and infections show up as the red dot and the red bar chart.
Misses are threats we had early-warning (non-blocking) detections on, but by the time we determined them to be threats, the threat had either disappeared or changed into a different file on the computer.
Infections are threats we detected and then had to remediate (instead of a block). We call these active because, according to our telemetry, they appeared to have some active running component when we detected them. On the positive side, our real-time protection detected and worked to remove the active threat. We continue to work on methods to determine the ways in which threats become active, for example, through vulnerability exploits, through another program that drops the malware, or through credential-based attacks so that we can further address these active threats and provide actionable information to customers about how to protect themselves.
Here's why that's important. Many threats, like Conficker, show up as active because the threat uses passwords or exploits that were effective in compromising the system for a very brief moment in time. For example, 85% of Conficker infections on Windows 7 happen through credential-based attacks (read more about this Conficker case in SIRv12). When we detect a Conficker infection that was delivered this way (which happens immediately), we identify it as active because it was written by a system process compromised through a credential-based attack.
Incorrect detections happen when antimalware products incorrectly flag and misclassify a file as malware or unwanted software. The yellow dot and the other bar chart represent incorrect detections. In any given month, only an extremely small number of programs are incorrectly detected. In most months in 2013, for example, only about 1 in a million customers experienced an incorrect detection – the percentage of customers affected was less than 0.0001%.
With these criteria, we measure the performance impact of antimalware on the day-to-day activities a person might perform – such as opening an application, browsing the web, downloading files, and playing games and multimedia. Latency perceptible by a human tends to fall within the 50 to 100 millisecond range, and in most months most activities stay under 100 milliseconds. This is the second graphic on our results page; it shows the customer experience when running the latest version of Windows Defender on the latest version of Windows 8. September’s measurement reflects Windows 8.1.
To sum it up, the two graphics on our results page highlight the findings for coverage, quality, and customer experience (in terms of system performance). The first graphic shows protection coverage and quality for Microsoft's real-time protection products that cover home, small business, and enterprise, which represent approximately 150 million endpoints. The second graphic shows the performance implications when running the latest version of Windows Defender on the latest version of Windows 8. There is a great whitepaper that provides additional insights at this link.
And finally, let's talk about the September 2013 results:
In September, 0.17% of our customers encountered a miss (0.03%) or an infection (0.14%). The infection number was uncharacteristically high because of the resurgence of an old threat we currently call Sefnit: 44% of the active detections for the month were related to the Sefnit family. That’s a very large percentage – in normal months, no single family represents more than 6% of active infections. As we investigated the threat, we noticed that the distributors of Sefnit were using some sneaky techniques to infect computers, including installer programs that install legitimate software but occasionally bundle it with bonus material (Sefnit). Sefnit distributors are also varying the appearance of components, for example sometimes using an obfuscator and sometimes not.
This month, only 0.00025% of customers were impacted by incorrect detections. This was slightly above average, driven by an incorrect detection of a 2009 version of the Microsoft Malicious Software Removal Tool.
We consistently deliver great performance for our customers using Microsoft antimalware products. In September 2013, the results were again within the 50 to 100 millisecond range.
Our goal is to provide great antimalware solutions for our consumer and business customers. I hope this blog demonstrates how committed we are to raising the bar, for ourselves and for others in the industry, in doing so. We’re monitoring our results, performance, and progress closely, prioritising the real threats that might affect our customers and applying lessons learned to make our products even better. Plus, we support antimalware partners in order to build a strong and diverse ecosystem to fight malware – the true adversary.
Holly Stewart, Senior Program Management Lead, MMPC
By Dana Simberkoff, Vice President, Risk Management & Compliance, AvePoint.
For most organisations worldwide, it’s no longer a matter of “if” they will move to the cloud but rather “what” they will put in the cloud. Keeping everything on-premises within the walls of an organisation is unrealistic. The cloud is part of more and more IT business strategies, at least judging by the rapid growth in spending on cloud-related services. According to a recent IDC study, public cloud services spending will reach $98 billion USD in 2016, with a compound annual growth rate five times that of the IT industry overall.
Why? Companies are constantly looking for ways to do more, to collaborate better, to create more product, to keep pushing the revenue needle forward – all while enabling an increasingly global workforce. Cloud computing offers many advantages to technology providers and their customers, allowing companies to invest far less in infrastructure and resources that they must host, manage, administer and maintain internally. Instead, they can invest in the advanced applications they build on an externally hosted, fully redundant environment, accessed at a fraction of the cost – not just saving on what is traditionally capital expenditure on hardware, but more importantly gaining business agility. The business landscape has never been more competitive, and every enterprise is looking for an edge. Judging by the numbers, many believe utilising the cloud to manage enterprise systems and content with repositories such as Microsoft SharePoint Online will help pave their way to victory.
With this great reward, however, comes great risk. Hosting SharePoint through Microsoft Office 365 could reduce cost and improve global access to content. However, for organisations subject to regulatory requirements (and that’s essentially every organisation today regardless of size, vertical, or geography), the move to the cloud isn’t without risk. Enterprises have tremendous concerns about storing business data outside their own walls because it means relinquishing control – control of information, user access, authentication, and data exposure (whether intentional or accidental) of sensitive personally identifiable information, classified information, or otherwise non-compliant content.
So you accidentally let someone take a peek at the wrong data – how much harm can that data breach possibly do? About $5.4 million USD worth, according to a recent study by Ponemon. The study found that the average organisational cost for a single breached record – a document, user ID, email, email address – is $188 USD. Think about the number of emails clogging up your own inbox, and documents in your shared drive right now … it adds up very, very quickly.
Before you call the sales representative selling you a cloud platform and tell her no thanks, consider this: there is a way to gain value from cloud computing while addressing compliance concerns. Many companies are offloading select content or workloads into the cloud and keeping their most regulated content on-premises. You won’t be alone: a report by IDC found that 80 percent of the world’s 2,000 largest companies will still have more than 50 percent of their IT onsite by 2020.
So, what’s your move to start the migration from your old on-premises technology platforms to the cloud? Here’s your four-step playbook:
1) Assess existing sites and content. Identify at-risk content and sensitive data within your “as is” on premises environments – including SharePoint or file share content – that could potentially violate your compliance policy. Perform a risk analysis to understand exposure levels for a defined scope of content, as well as the effectiveness of existing controls to determine the overall sensitivity of an existing SharePoint environment.
2) Report on and classify content. Implement an effective and realistic compliance program that can be enforced, measured and modified as needed. Identify what data your organisation collects, processes and stores (and where it comes from) and decide on applicable/mandatory privacy and security requirements – what, where, why, and how. Provide information classification based on risk exposure to the organisation. Define minimum content and physical security access controls based on risk classification. Assign metadata and restrict access to sensitive content.
3) Design compliance information architecture. This is your chance to expose, access, and manage all content residing in your network and/or the cloud for centralised document management in SharePoint based on your specific business requirements – such as restructuring permissions, and adjusting access, metadata and security settings of content. Strictly regulate user-generated content to prevent the creation or uploading of non-compliant, harmful content.
4) Determine cloud migration approach. Utilise content and site assessment reports and subsequent tagging to develop a best practices approach to migrate select content and workloads to the cloud. You can do so by identifying cloud-appropriate content for migration with customisable filters based on metadata or content types you established in Step 3. Scan, flag and/or block all contents prior to upload to ensure compliance. Detect and make changes to content and/or user permissions and access that violate your policy. Then, just as in any other migration – determine your schedule and project milestones to ensure that the project meets your business needs and keeps your end users focused on what they should be focused on: doing their jobs.
As companies and government agencies move their applications increasingly to a cloud based infrastructure, they must also understand and fully review the associated privacy and security considerations. Privacy is a global issue, and one thing is certain, even if you build software applications to serve a very specific market segment - you cannot ignore privacy as a fundamental issue that your customers will demand. Change can be hard, but this is a positive change. You’ll be utilising a new way of working in the cloud that can vastly improve your business agility, while keeping traditional hardware costs low and safeguarding your sensitive data. In the meantime, keep your feet firmly planted on the ground as your applications move to the sky!
Following the success of MVP Cloud OS Week, held this September at Microsoft’s Victoria office, Cloud OS is going on the road.
We are pleased to announce the next phase of our MVP-led events, the MVP Cloud OS (Infrastructure) Relay, to be held 11th–15th and 25th–29th November in selected cities nationwide.
Join Microsoft and a panel of MVP speakers, to learn about Cloud OS and how you can use the technology suite from Microsoft to transform your business.
Today's business runs on IT. Every department, every function, needs technology to stay productive and help your business compete. And that means a wave of new demands for applications and resources. The datacenter is the hub for everything you offer the business, all the storage, networking and computing capacity. To ride the wave of demand, you need a datacenter that isn't limited by the restrictions of the past. You need to be able to take everything you know and own today and transform those resources into a datacenter that is capable of handling changing needs and unexpected opportunities. With Microsoft and its Cloud OS strategy, you can transform the datacenter.
Where and when are the events?
If you want to go to SQL Relay, see sqlrelay.co.uk
These will be great events – top speakers, top topics, and a little bit more…
Speakers will include
MVP Gordon McKenna, MVP David Allen, MVP Damian Flynn, MVP Raphael Perez, MVP Rob Marshall, MVP David Nudelman, Martyn Coupland, Sam Erskine (author of two cookbooks), John Quirk and myself, MVP Simon Skinner.
One of the great things about virtualisation is that the host operating system running the hypervisor is independent of the operating system in the VMs. For example, VMware ESXi is not the same as the Linux and Windows operating systems in the guest VMs that reside on it. You might be a little confused when you look at Hyper-V in the same way, but actually it’s the same again: you could run Windows Server 2012 Hyper-V and have Windows Server 2003/2008/2008 R2 in the VMs, and contrary to popular belief Linux also works well on Hyper-V and is fully supported for the latest versions. Note: it is technically possible to run much older operating systems on Hyper-V, such as MS-DOS, OS/2 and Windows for Workgroups; it’s just that those aren’t supported, because those operating systems aren’t supported at all, even when run on physical hardware.
The point I want to make here is about the effect upgrading the hypervisor has on the guest operating system in the virtual machines. It can be likened to reinstalling that operating system on new hardware, which in turn means driver support. VMware Tools / Hyper-V Integration Components provide synthetic drivers for such things as CPU, storage, networking and time synchronisation, and also feed back to the hypervisor the state and usage of those resources. So from the perspective of the guest operating system, moving hypervisors means ripping out and replacing these drivers. From the host’s perspective it might mean a change to the metadata and hard disk files that represent the virtual machine.
None of this is difficult, but it does involve some change – albeit less than changing the guest operating system. So why bother upgrading or changing a hypervisor at all?
If I look at what Hyper-V offers in Windows Server 2012 R2 compared with the original version that shipped with Windows Server 2008, everything has got easier and faster, with corresponding improvements in high availability (HA) and the different but equally important world of disaster recovery (DR). Some of this reflects what hardware can do now, such as NUMA in CPUs and SR-IOV on network cards, while other improvements are entirely down to reworking the hypervisor itself to provide access to parallel processing without getting caught up waiting for threads to become available on CPU cores.
So you’d have to have some obscure use case to stop you upgrading from one to the other, as there would be no licence cost involved: Microsoft doesn’t charge for Hyper-V, just for the licensing in the VMs – and I am assuming you are already licensed for those! So in return for a bit of work you get access to all the new stuff in Hyper-V.
Of course, you could also move from Hyper-V in whatever version of Windows Server to VMware, using one of their many licensing options to suit your HA and DR needs, how many VMs per server you have, and so on. In preparing your cost-benefit analysis for this compared with moving to Windows Server 2012 R2, it’s worth bearing in mind that you’ll still need licences for the operating systems in the VMs themselves, whatever hypervisor you choose. Often the best way to do that is to license the host with Windows Server Datacenter edition, which covers you to run as many Windows Server VMs as you want on that host, and also covers running Hyper-V on the host itself. For a few edge cases that analysis might weigh in favour of VMware, or be worth paying for because of some particular feature, like VM fault tolerance, that doesn’t exist in Hyper-V. I say edge cases because I don’t see that happening a lot in the current market.
What I do see is movement from VMware to Hyper-V. I don’t propose to do a feature comparison here (if you are interested, Keith Mayer’s post is as good as it gets). What I want to focus on is three things:
1. Hyper-V’s advances over the last five years have outstripped VMware’s enhancements. For example, the new features from Windows Server 2012 to 2012 R2 all enhance Hyper-V in some way, be that for VDI, storage or DR. That rate of change isn’t evident from vSphere 5.1 to 5.5, where most of the changes simply bring the scalability numbers into line with Windows Server 2012 R2.
2. Windows is Windows. If you know how to manage a Windows server, you can manage Hyper-V. This reduces the staff costs of running server virtualisation, because you don’t need a separate team with different skills: fire up Server Manager, for example, and you can see your physical hosts alongside your virtual machines on one screen. This is actually good for us IT professionals in those teams; we can either acquire broad Windows Server knowledge, including virtualisation, in a smaller team, or transfer our skills and progress our careers in a larger one.
3. Hyper-V is fit for purpose. While the Hyper-V you buy in Windows Server 2012 R2 is not exactly the same as the one running behind Azure, Office 365, Bing and so on, there is a lot of common code. I could rattle off a list of references now running on Hyper-V, such as Royal Mail, Unilever and Aston Martin, but perhaps the best evidence of Hyper-V being ready for business is silence. By this I mean that when anything technical goes wrong these days, forums and social media are alive with it very quickly, and that has not been the case with Hyper-V.
So my assertion is that when you upgrade your hypervisor, you need to consider Hyper-V.
Following on from my previous post, "Windows 8.1: new features of the Windows Update management screen", here is more information about Windows 8.1.
This time, I’d like to introduce the improved restart behaviour when updates are applied in Windows 8.1.
Some updates require the PC to restart in order to complete installation.
Windows 8 introduced staged notifications that let you choose the restart timing flexibly. (#1) Windows 8.1 offers the same staged notifications, and in addition, as the screen below shows, if you receive the notification on the logon screen you can click the power button at the lower right and choose [Update and shut down] or [Update and restart], so you can act as soon as you notice the notification.
In Windows 8, once three days had passed, the first time a user logged on or unlocked the PC a System Critical Notification reading "Your PC will restart in ○ minutes" was displayed, and a forced restart followed within 15 minutes.
In Windows 8.1, once one day has passed without a restart, a notification in the format shown below appears. As you will notice, you can now choose [Restart later]. The grace period before a forced restart has also been extended from 15 minutes to one day, which leaves plenty of time to save your work.
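The change in timings can be summarised in a small illustrative model. This is just a sketch of the behaviour described above, not Windows code: the `RestartPolicy` type and its field names are invented for illustration, and the numbers simply restate the post.

```python
# Illustrative comparison of the forced-restart behaviour described in this
# post. RestartPolicy and its fields are invented for illustration; this is
# not a real Windows API.
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class RestartPolicy:
    version: str
    wait_before_notice: timedelta    # time after install before the restart notice
    forced_restart_grace: timedelta  # time from notice to forced restart
    can_postpone: bool               # is a [Restart later] choice offered?

WINDOWS_8 = RestartPolicy("Windows 8", timedelta(days=3), timedelta(minutes=15), False)
WINDOWS_8_1 = RestartPolicy("Windows 8.1", timedelta(days=1), timedelta(days=1), True)

for policy in (WINDOWS_8, WINDOWS_8_1):
    print(f"{policy.version}: notice after {policy.wait_before_notice}, "
          f"forced restart {policy.forced_restart_grace} later, "
          f"postpone allowed: {policy.can_postpone}")
```

The side-by-side view makes the improvement obvious: the notice arrives sooner, but the grace period before a forced restart grows from 15 minutes to a full day.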
Applying updates may feel like a chore, but keeping your PC up to date is essential for using it safely. When you notice the notification, please install the updates promptly.
#1: For more details on Windows Update in Windows 8, see "Windows 8 Security Feature #1: Windows Update".
Here is the final programme update for each day of Tech.Days Online, running Wednesday November 6 to Friday November 8. Not only are we delighted to confirm that Steve Ballmer will be joining us on the first day, but we also have British Lions rugby legend Will Greenwood confirmed to share the British Lions and Microsoft story, as well as the Deputy CIO of Lotus F1 on board to talk about Office 365. You can still send in your questions for Microsoft CEO Steve Ballmer before Wednesday: just send them to firstname.lastname@example.org and we’ll select the best to ask him during the interview.
All sessions are 30 minutes long, and the technical experts running them, a mix of Microsoft product experts and Microsoft Most Valuable Professionals (MVPs), will be available after each session for further online chat and follow-up on any questions you have.
Remember that there will also be competitions and prizes to be won throughout each day, from T-shirts to an Xbox One, so do switch on, tune in and join us for all the sessions you want to take part in by registering for Tech.Days Online, starting on Wednesday November 6th.
Tech.Days Online – November 6-8 - The Final Programme Update
Wednesday November 6 – Windows Client for IT Pros and Developers
Session Title (all sessions are 30 minutes)
Overview of the day
Windows 8.1 – devices galore! + interview with Will Greenwood on devices and British Lions
MDOP 2013 Overview and Deeper Dive on changes in Application and User Experience Virtualisation
Management in the cloud with Windows Intune Configuration Manager
Steve Ballmer, Microsoft CEO, Live Interview
Device Management – Heterogeneous Device Management
Office 365 – The Evolving Service + Interview with Michael Taylor, Deputy CIO, Lotus F1 team
Building business Apps with Visual Studio DevOps
Windows 8.1 – Workplace Join
Windows 8.1 - VDI
Find out about what you can do with Intel vPro
Windows 8.1 Enterprise
Wrap-up of Day 1 (inc. announcement of today’s Xbox One winner)
Thursday November 7 – Server and Cloud for IT Pros
2012 R2 - Virtualisation
Building Windows Server 2012 R2 Networking with System Center 2012 R2 Virtual Machine Manager
2012 R2 - Storage
Extreme automation - Learn automation or get better at golf!
What’s new in Ops Manager
Cluster in a box
Moving VMs from on-premise to Azure
Automating the Azure Datacentre with PowerShell
Ask the Experts – Your questions answered by today’s expert presenters
Windows Azure Platform
Wrap-up of Day 2 (inc. announcement of today’s Xbox One winner)
Friday November 8 – Visual Studio, Azure, Dev tools for Developers
Asynchronous C# development in Visual Studio 2013
Agile development with Team Foundation Server
Quick and Easy Cloud Back-Ends for Mobile Apps
Using the Nokia Music C# API on Windows Phone 8 / Windows 8
Azure Cloud Services Architecture
From Whiteboard to deployed in 15 minutes
What's new in Visual Studio 2013 for Web Developers
What's new in Visual Studio 2013 for App Developers
What's new in Windows 8.1 for App Development
Wrap-up of Day 3 (inc. announcement of today’s Xbox One winner)
Remember to register for Tech.Days Online from November 6-8 here
By Geoff Evelyn, SharePoint MVP and owner of SharePointGeoff.com
There are several elements to SharePoint, all of which require support: back-end, front-end, integration, business and configuration support. In supporting customers using the SharePoint platform and the solutions on it, there will be instances where what you support is not entirely the same as what the customer thinks you support (though that's another conversation altogether).
So a key success criterion for SharePoint is the success of its support service. The capability of that support service rests on one key element: defining, understanding and setting exactly what is being supported, and setting customer expectations accordingly.
So what does SharePoint support cover? I thought the best way to explain would be to craft a basic table giving a high-level view of the key areas:
Area supported: support provided by

- Advanced usage, such as data integration, third-party components, and internally or externally developed Apps and add-ins: peers, product creator, user representative, user training
- Usage of SharePoint, site management, site administration: SharePoint champion, user training, helpdesk, SharePoint support
- SharePoint configuration management (installation, interfacing to other systems, connection to third-party products, change control): SharePoint support, IT helpdesk
- Environment (operating systems, client operating system, back-end software support): SharePoint support, helpdesk, engineering and platform support
- Implementation and deployment: SharePoint support, helpdesk
SharePoint 'support' could not possibly provide 100% cover for all the areas described in the table above. Support also tends to become more vague as the problem area moves towards user specialisation, from the bottom of the table upwards. One example of specialisation is the use of third-party SharePoint tools or SharePoint Apps, which may require specialised skills from the third party that SharePoint support draws on. Another is SharePoint solutions built internally by bought-in external expertise, which will require an element of internal support once delivered; again, those skills cannot be provided solely by an internally facing SharePoint team. Similarly, a finance specialist writing macros in a spreadsheet and expecting SharePoint support may find that he or she is alone, and yet if that specialist needs to upload the macro-laden document, he or she may need support. At this level it is quite possible that users often fall back on their own resources, the training they have had, or contact with other users who have similar problems.
Hence, one conclusion that can be drawn from the table is that a single method of delivering support is rarely enough. The same can be seen from the individual weaknesses of the different forms of support.
To explain further, I will describe a fictitious scenario: a software house that has developed SharePoint Apps and add-ons and runs a SharePoint support service for the clients using those products. This may give you an understanding of the successes and failures associated with that model.
1: Scenario - Fabrikam SharePoint software house
Fabrikam is a software house that develops SharePoint tools and apps and sells them to customers, providing a support team to look after a number of external clients. With its head office and several branches in the UK, and branches in two other European countries, it also has a large number of users on its own internal SharePoint platform to support, in addition to the customers of its SharePoint tools and apps.
Support for external users is slick and professional. Fabrikam splits its SharePoint customer support service in two. One part is a 'maintenance' operation: support members use an administrative helpdesk system through which customers report product failures, and from there they contact the clients to resolve the issues. These support members also have access to a further 'technical' helpdesk they can contact for assistance with the SharePoint issue. In this way, the support team has an extremely high spot rate; it rarely needs to pass on queries, and to keep the spot rate high it maintains a SharePoint site acting as a 'technical library'. The technical helpdesk also re-contacts key clients, first to see whether the issue can be resolved without SharePoint support getting directly involved, and second to decide whether additional resources are required.
The second part of the customer support service is the 'users helpdesk', dedicated to solving the problems and challenges users have. It has two tiers: first, a general query-and-answer helpdesk, and second, a chargeable premium support helpdesk for customers requiring a fully fledged problem-solving service.
Both parts of the customer service place a strong emphasis on user relationships and are keen to build a rapport with external customers. The service takes queries from customers who are themselves technically competent (usually calling from a client's internal helpdesk), answers those it can immediately, and passes what it cannot answer to a separate, central 'technical support' function, which solves the problem and contacts the customer. Technical support also provides software solutions to the system maintenance helpdesk. Clients and the company's own teams can also use the technical library, which as well as a professionally managed catalogue of technical volumes offers online technical documents.
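The spot-rate mechanism described above can be sketched in a few lines. This is a hypothetical model, not Fabrikam's actual system: representing the technical library as a set of known issues, and the function names, are assumptions made purely for illustration.

```python
# Hypothetical sketch of the helpdesk flow in the scenario: queries covered
# by the technical library are answered on the spot; everything else is
# escalated to the central technical support function.

def route_query(query: str, technical_library: set[str]) -> str:
    """Answer on the spot if the library covers the query, else escalate."""
    if query in technical_library:
        return "answered on the spot"
    return "escalated to technical support"

def spot_rate(queries: list[str], technical_library: set[str]) -> float:
    """Fraction of queries the helpdesk resolves without escalation."""
    answered = sum(1 for q in queries if q in technical_library)
    return answered / len(queries) if queries else 0.0

# Example (invented issues): a well-maintained library keeps the spot rate high.
library = {"app install fails", "site quota exceeded", "workflow stuck"}
incoming = ["app install fails", "site quota exceeded", "custom web part error"]
print(spot_rate(incoming, library))  # 2 of the 3 queries are answered on the spot
```

The design point the scenario makes is visible here: the spot rate is only as good as the library behind it, which is why Fabrikam invests in keeping that library current.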
In contrast, support for internal users is slender. Fabrikam has a 'development' function responsible for coding SharePoint Apps, tools and patches. It answers the occasional user query but does not operate a formal helpdesk. User training is virtually non-existent: users are expected to learn from their office colleagues, and no responsibility is taken for the standard of user ability. Out of the regular interaction between internal users and development, a single, overstretched 'technical support' operative emerges, moving from user to user and solving problems reactively.
The contrast in this scenario between the professionalism of external customer support and the sparsity of internal support is not uncommon. SharePoint support grows out of commercial imperative, under the watchful eye of whoever manages the corporate purse. Internal support can risk a little complacency, for it can always turn to its better equipped and better financed colleagues down the corridor. However, the lack of internal user training and poor internal support reduce the effectiveness of the workforce and steepen the learning curve for new recruits. They also occasionally cause customer support to be diverted from its true purpose in order to help out with internal support.
That the customer support side is successful is not in doubt. With such a broad range of services, plus a technical library to catch the support queries that don't fit the other services, its coverage is as broad as could be expected. They have separated loss-making from profit-making customer support work, and use the one to feed, finance and justify the other.
The separation of helpdesk and technical support causes problems in some implementations; here, however, it is a distinct advantage. Great store is set by the helpdesk's ability to answer queries on the spot. This pleases external customers, who of course want the fastest possible answer, and it minimises the number of staff needed in technical support. To make sure technical support does not get out of touch with customers, jobs rotate between the helpdesk and the resolver function. The main weakness of this focus on a high spot rate is the tendency of some technicians to mark a query as closed when the client may in fact have wanted more. To control this tendency, the helpdesk manager needs to keep a careful watch on answer quality, particularly ensuring that requests for aid are directed and recorded correctly.
Where the systems helpdesk fails is in its inherent inability to manage user expectations for internal support services. To attract custom, the company sales force naturally tends to overstate the capability of SharePoint support to its customers, and this drives demand to a level at which the quality of support becomes unsustainable and unusable for internal staff.
The lesson here is that if support is free, it will be oversubscribed, whether it is needed or not. The chargeable, premium support is in a much better position to control expectations through service level agreements.
On the SharePoint product and Apps maintenance side, the separation of the administrative from the technical causes considerable duplication of work. This confuses some customers, who see no reason for two contacts on the same topic before the relevant support individuals are assigned.
From the company's point of view, there is a trade-off between reducing technical people to logging fault calls from users and having a separate, less-skilled but expensive service just to answer the telephone. To put this into context, in the past companies may have solved this dilemma by issuing its customers with batches of fax header sheets, pre-printed with their account details, for reporting issues or requirements. Nowadays, customers are asked to fill in an online support form which is then routed to the engineers via an administrative helpdesk.
With that approach, no administrative helpdesk is needed: faults come straight into the technical helpdesk, which can then decide whether to call the client for further information before passing the report on to an administrator to allocate the job to the relevant support individual.
4: In Conclusion
Concerning the scenario described in this article: the more products a software house creates, the more diverse and complex the nature of its support becomes.
However, whether the SharePoint solutions are created externally, internally, or a combination of the two, the growing number of products reduces the impetus for internal support staff to have their skills upgraded.
The answer to the question of who supports what therefore depends on the level of support you wish to provide to your customers.
If the product is an App, then most likely you will support nothing other than that App. But where does your support start, and where does it stop? Do you start at the client PC, through the web application, to the site collection, and then to the App? Or just the App? And if so, is that clear to the customer, so that the level of support provided is what they expect? What happens when another product is created that sits closer to the client PC? How does that change the support for the App?
And what if we move from custom development to solution development by a business user? Say, for example, that a solution using a third-party web part component in a SharePoint site is configured by a business user. Where does support start and end for that business user?
And in order to support internal customers, they need to be ranked by importance: some internal customers will be as important as external customers, if not more so. That needs to be identified and wrapped into the professional offering provided. Support for any SharePoint solution must be aligned to SharePoint service delivery, so that the resources required to support released solutions are defined and agreed. The reason for that statement is made clear in the following blog:
Six action points for properly defining what SharePoint support covers are (in order):
1. Identify the customer base - define their priorities.
2. Inventory the products the customers use and map accordingly.
3. Associate products with specific levels of support.
4. Create service level agreements backed by operational statements aligned to the products offered. A service level agreement's aim is to teach, to inspire confidence in the availability of the solution provided, and to outline the details of support. http://www.sharepointgeoff.com/articles-2/sharepoint-service-level-agreement/
5. Associate skill sets with the areas of support identified in the service level agreement.
6. Ensure that all products are part of a roadmap that includes configuration management.
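The first three action points above amount to building a simple mapping from customers to products to support levels. The sketch below is purely illustrative: the customer names, products, priorities and support levels are all invented, and it only shows the kind of structure the action points call for.

```python
# Hypothetical sketch of action points 1-3: identify the customer base and
# their priorities, inventory the products each customer uses, and associate
# each product with a specific level of support. All names are invented.

customers = {                # point 1: customer base, with priorities
    "Finance team": "high",
    "Marketing team": "standard",
}

product_usage = {            # point 2: products each customer uses
    "Finance team": ["Reporting App", "Data integration add-in"],
    "Marketing team": ["Publishing site"],
}

support_levels = {           # point 3: product -> level of support
    "Reporting App": "premium helpdesk",
    "Data integration add-in": "third-party vendor",
    "Publishing site": "SharePoint champion",
}

def support_plan(customer: str) -> dict[str, str]:
    """Map each product a customer uses to its agreed support level."""
    return {product: support_levels[product] for product in product_usage[customer]}

print(support_plan("Finance team"))
```

A structure like this makes the later action points easier too: the service level agreement (point 4) can be generated per customer from the plan, and the skill sets (point 5) follow from the distinct support levels that appear in it.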