At first glance you could be forgiven for thinking that deploying Windows 8 apps to a fleet of enterprise devices is hard, complex or time consuming. The reality is that Windows 8 apps are actually quite easy to deploy once you understand the basic requirements and methods for deployment. The nomenclature has changed a little now that the Windows Store is the source of our apps. Deeplinking is the process of advertising through a company portal that an app is available (or recommended, you could say) for installation by your company; the application package itself remains in the store. Sideloading is the process of taking an application package provided to you by your in-house Line of Business (LoB) developers or a third-party independent software vendor (ISV) and installing it directly onto your devices. Let's take a look at both more carefully.
Requirements for modern UI apps
Before we look too deeply (pun intended) at Deeplinking and Sideloading, let's look at the requirements for successful installation of a Windows 8 app.
With that understood, let's take a look at how we install an app on a device. Typically a user finds the app in the Windows Store and taps Install or Buy, both of which start the app installation, although Buy obviously also completes a purchase transaction with the Windows Store. The key thing, though, is that installing and buying an app are essentially the same process – the user is consenting to the install and, more importantly, to the association of the app with their personal Microsoft account.
Now let's consider the Deeplinking process. Deeplinking can be performed using either System Center Configuration Manager 2012 SP1 or Windows Intune for Windows 8 devices. For Windows RT devices, System Center Configuration Manager 2012 SP1 can be linked up with Windows Intune to support deeplinking. The two products can also be linked to support Windows 8 clients if you want to centralise management too. I've created a series of videos, The Deployment Sessions, that explain how to make the links required and how to do the deployments.
Once you've decided upon your deployment targets and your deployment method it's time to build your deployment. The first thing you'll need to do is designate a device as your reference device, just as you would for any other type of application packaging. In this case, though, you won't need to run a monitoring app to capture what the app is doing. Simply go to the Windows Store and install the app. Now go to a Configuration Manager console and create an application in the Software Library, making sure to select Windows app package (in the Windows Store). You'll then be asked to specify the location, which you do by connecting to your reference computer by name (you'll need to have run winrm quickconfig on the reference machine first). The wizard will return a list of all the apps installed on the device; simply select the app you need, complete the wizard and deploy just like you would any other (MSI or App-V) application. Whilst completing the deployment wizard you'll be able to say whether the app should be available or required; normally a required app will be installed for the user and an available app will just appear in the Configuration Manager Application Catalog. However, with deeplinked apps this isn't the case.
When deeplinking in Configuration Manager 2012 SP1 a required installation will still need user interaction: the store will open for them at the right app, but they will have to click or tap Install. This is because the app is being added to their personal Microsoft account, so they need to consent. Required then becomes a constant reminder to the user to install the app, and arguably this loses its value. Most users today are comfortable with the idea of a store – the device in their pocket almost certainly has one – so self-service should be a key consideration in your deployment plan.
Deeplinking with Windows Intune differs from the above in that you don't need to install the app onto the reference device; you simply need to get the URL for the app from the Windows Store. There are a couple of ways to achieve this, but I commonly email the app to myself using the Share charm. You will also notice that Available is the only option within Windows Intune for a deeplinked app.
The only other thing to mention on deeplinking is that it’s available on platforms other than Windows. Deeplinking works for Windows 8, Windows RT, Windows Phone 8 and also for Android from Google Play and for iOS devices from the Apple App Store.
Let's take a look at the Sideloading process. Sideloading is the business of taking an appx package, which is generated by Visual Studio at build time, and installing that package onto a target device. The appx package is signed by the developer at the time of building the app, usually with a certificate issued by your enterprise CA, although a certificate issued by any trusted CA can be used. This type of deployment is most commonly used for Line of Business (LoB) apps. As with Deeplinking, both Windows Intune and System Center Configuration Manager 2012 SP1 can be used, but PowerShell can be used as well.
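As a sketch of the PowerShell route on a Windows 8 machine with sideloading enabled, installing a signed LoB package might look like this (the package and certificate names here are purely illustrative):

```powershell
# Trust the enterprise CA that signed the package, if the device
# doesn't already trust it (run from an elevated prompt):
Import-Certificate -FilePath .\ContosoCA.cer -CertStoreLocation Cert:\LocalMachine\Root

# Install the LoB appx package for the current user:
Add-AppxPackage -Path .\ContosoLobApp.appx

# Confirm the app is now installed:
Get-AppxPackage -Name *ContosoLobApp*
```

Remember that the device still needs sideloading enabled (domain joined with the Group Policy setting, or activated with a sideloading product key) for Add-AppxPackage to accept a non-Store package.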
The first step to Sideloading is to obtain the appx package and place it on a share that you can access from Configuration Manager or from Windows Intune. The second step is to open the Configuration Manager console and create an application in the Software Library, making sure to select Windows app package (appx file). You'll then be asked to specify the location of the appx file and to provide details about the app. You'll then need to deploy the app to a collection of the users that you want to have access to it. If you want, you can also add the app to any task sequences you use to deploy your operating systems.
If you've chosen to do your deployment to a Windows RT device using Windows Intune and you're using an enterprise CA to sign the appx package, you'll need to provide that certificate to your Windows RT devices, since they cannot join your domain. Windows Intune takes care of this for you. If you've got your Windows Intune account linked to Configuration Manager, you can add the certificate you'll use to sign your apps through the Windows RT tab of your Windows Intune subscription in the Hierarchy Configuration node of the Administration workspace. Once provided, this certificate will be automatically added to your Windows RT devices. You'll also need to provide a sideloading product key, which is available from the Volume Licensing Portal and is entered in the same place; again, Windows Intune will allocate the key and enable sideloading on any enrolled Windows RT devices.
I've created an ongoing series of videos on my blog entitled The Deployment Sessions that will walk you through most of the permutations of deploying Windows 8 apps, using Configuration Manager 2012 SP1 and Windows Intune.
By Robert Marshall - Consultant at SMSMarshall
In this article I will cover key areas of the enhancements to the Alerts feature that come with System Center 2012 Configuration Manager Service Pack 1, primarily the ability to send emails when alerts have been triggered for non-Endpoint Protection alerts. We will accomplish this in guide form, configuring a site server and triggering an alert so that the email notification is sent, which we can then view.
Email notification came into System Center Configuration Manager with the inclusion of Endpoint Protection, which at the time was the sole component that email notifications could be configured for. In Service Pack 1 this has been extended across several areas of the product, so it is no longer just an anti-virus email notification system.
We can now set subscriptions on all the Alerts that are available and target multiple recipients with email notifications as a result. Most of the alerts provide a percentage that governs how low the measurement can go before the alert is triggered. This allows the alert to be fine-tuned and a baseline for notification defined. For further information on alerts visit the documentation library.
Before we can use the email notification feature we need to switch it on and configure it. Open the System Center 2012 Configuration Manager Console to begin.
Navigate to Administration > Overview > Site Configuration > Sites, Select Configure Site Components on the Ribbon and then select Email Notification
After enabling email notification for alerts and filling in the dialog's properties, click Test SMTP Server to perform a quick test. If you encounter a failure here, review your settings, make sure firewalls are not getting in the way (SMTP port 25) and check that the account you have specified (if not using anonymous access) has adequate rights to use the SMTP server.
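If the test fails, a quick way to confirm that the site server can even reach the SMTP host on port 25 is a raw TCP connection test from PowerShell; the server name below is an example, so substitute your own:

```powershell
# Attempt a plain TCP connection to the SMTP server on port 25.
# Replace smtp.contoso.com with your own SMTP server name.
$client = New-Object System.Net.Sockets.TcpClient
try {
    $client.Connect('smtp.contoso.com', 25)
    Write-Host "Port 25 reachable: $($client.Connected)"
} finally {
    $client.Close()
}
```

If this fails, the problem is connectivity (firewall or routing) rather than the account or its rights on the SMTP server.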
We're not able to create alerts for everything happening in ConfigMgr, and other alerts can be generated elsewhere, such as from the migration feature, but we can enable alerts for the following objects documented or discovered so far:
Site server Alerts
Database (drive capacity)
Low sideloading activations (Windows 8)
Site System Alerts
Software Update Point
Client Health Alerts
Client check pass or no results for active clients falls below threshold
Client remediation success falls below the threshold
Client activity falls below threshold
Endpoint Protection Alerts
Malware is detected
The same type of malware is detected on a number of computers
The same type of malware is repeatedly detected within the specified interval on a computer
Multiple types of malware are detected on the same computer within the specified interval
The last two categories of alert are handled differently from the first two: these alerts are created and configured at the collection level. I've not focused on these types of alerts here, but for further information visit the documentation library.
OK, let's proceed to test the email notification feature using the management point alert.
You will find the option to enable alerts in the management point role's properties, found under Administration > Overview > Site Configuration > Servers and Site System Roles. Simply select the site server containing the role, select the management point role itself, select Properties from the ribbon and finally select Generate alert when the management point is not healthy. Once the management point is configured for alerts, the alert itself should show in the alerts view.
Navigate to Monitoring > Overview > Alerts > All Alerts to view the newly configured alert
To get an email notification sent out we'll subscribe to the new alert (highlighted above) in readiness for the alert to be triggered.
Select the new alert and then select the Create subscription button on the ribbon.
It is important to create a standard around the subscription name; for my example I've placed the server name and the role type in the subscription name for easy reference in the console. The Email address field is semicolon delimited and can therefore be loaded with multiple recipients. For further information on configuring a subscription refer here and expand the To subscribe to alerts section.
We now configure the alert with a comment, this comment is included in the email notification and we can use this to provide some further information about the alert to the recipients.
Select the new alert and Select Edit Comments from the ribbon and enter some details:
Note that I have included the management point server name and mentioned that it is a management point failure being monitored; this is useful later on, as the comment is included in the email to the recipients.
Now navigate to Monitoring > Overview > Alerts > Subscriptions
From this view we can see all the available subscriptions configured so far, and in this screenshot we have a solitary subscription created from the previous steps. Any recipients on the delimited email address list will now receive an email notification once the alert has been triggered.
To trigger the alert we can cause the management point to fail simply by stopping the SMS Agent Host service on the site server hosting the role. The SMS_MP_CONTROL_MANAGER component on the site server checks the status of the management point every five minutes; you can monitor MPCONTROL.LOG on the site server to see when this takes place. Obviously you would do this on a non-production management point and not risk inducing a brief production outage. Now let's head back to the alerts node.
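On a test site server, those two steps can be sketched in PowerShell (CcmExec is the service name behind SMS Agent Host; the log path shown is the default install location, so adjust it if ConfigMgr is installed elsewhere):

```powershell
# Stop the SMS Agent Host service (CcmExec) so the management point
# appears unhealthy - only ever on a non-production management point!
Stop-Service -Name CcmExec

# Watch MPCONTROL.LOG for the next periodic health check to record
# the failure (default ConfigMgr install path shown):
Get-Content 'C:\Program Files\Microsoft Configuration Manager\Logs\mpcontrol.log' -Wait -Tail 20
```

Within five minutes or so of the health check running, the alert should flip to Active in the console.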
Navigate to Monitoring > Overview > Alerts > Active Alerts
Alerts are handled with high priority, within moments of the component noticing that the role is unhealthy we see an alert appear in the console:
We can see here that the alert state is Active, which means the management point is most likely still down; we also get the time the alert was created or last modified.
My mailbox received an email almost immediately after:
As you can see in the above screenshot, the alert name isn't being converted from its token form into the name of the alert, and the alert text hasn't expanded the role name (there is a DCR logged for this on Connect), but the comment was passed down properly, so we can now tell from the alert email notification which management point failed, and it all happened in near real time. Of course there could be latency involved here and a delay in the email being sent due to a busy Exchange server, or a very busy site server, but these alerts should be triggered the moment a status message is created and processed on the site server.
To resolve the alert I simply restart the SMS Agent Host service and wait for the five-minute management point periodic health check to take place and for the SMS_MP_CONTROL_MANAGER component to report that the management point is healthy again, at which point the alert will be switched to the Cancelled state.
An alert's state is useful diagnostic information. For some of these alerts the state shouldn't change for several months; for example, the database warning and critical alerts most likely will never be triggered, but if they are, and the issue is resolved, you can see from the alert state that the alert was triggered and then Cancelled. Thereafter the alert will not show as Never Triggered unless it is recreated. It would be a good idea to set subscriptions on the database-related alerts.
To test alerts further, either configure deployments that are destined to fail, then configure the alert and create a subscription, or test using the low client remediation rate alert: exclude some of the clients assigned to your site from automatic remediation, set the alert's success percentage to 100%, then cause client failure by stopping the SMS Agent Host or BITS service and running CCMEVAL from the client's installation folder. The client health check will report back to the site server, which in turn will trigger the alert. You can find further information on how to exclude computers from automatic remediation in the documentation library.
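The client-side steps above can be sketched like this on a test client (the CCM folder is the default client install location; adjust if yours differs):

```powershell
# Simulate a client health problem on a test client by stopping
# a service that the health evaluation checks:
Stop-Service -Name BITS

# Run the client health evaluation manually rather than waiting
# for its scheduled run (default client install path shown):
& "$env:windir\CCM\ccmeval.exe"
```

The evaluation results flow back to the site server as part of normal client health reporting, which is what drives the alert percentage.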
Overall this new feature gives us a little more monitoring capability straight out of the box. I’m looking forward to the growth that will take place in this area of the product over coming releases.
Robert Marshall, an IT professional who specialises in System Center 2012 Configuration Manager, is based in the City of London and works as a Consultant for SMSMarshall. He has been an MVP for 5 years and is a founder of the Windows Management User Group.
Twitter LinkedIn Blog User Group
It's not long now until TechDays Online 2013 is in full swing! If you haven't already registered for this unique three-day online event then register now!
Join Microsoft experts for three free days of interactive learning, online and direct from your browser. Learn all about the latest Microsoft technologies, with even more exciting topics, discussion and interactivity.
For a chance to win a HTC Windows Phone 8, all you need to do is follow @TechNetUK and tweet the following: Tech.Days Online is back!! http://aka.ms/k35pwq RT & follow @TechNetUK for a chance to win a Windows Phone 8 #UKTechDays2013
By registering for Tech.Days Online 2013, you will also be automatically entered into a prize draw to win a Sony VAIO laptop (Terms and Conditions apply)
For those of you who are focused on SQL Server, I thought it would be good to let you know that following on from SQL Bits in May is SQL Relay in June. You will find all the details in the flyer below:
Posterous was bought by Twitter on the 12th March last year and is about to be shut down on the 30th April this year.
If you have a blog on Posterous Spaces, you have until the 30th April to move it. After that date, your content will be gone and it will no longer be possible to get hold of it.
If you only do one thing between now and the 30th April, use the Posterous backup facility to create a zip file containing all your blog posts. You can then use tools at a later date to import that blog.
But maybe this is exactly one of those compelling events that forces you to finally act; to get on and do the thing you've been meaning to do for over 12 months: move your blog to a private website running the WordPress blogging engine.
That's exactly what this click-by-click video shows you, using a WordPress blog on a Windows Azure website – one you can easily scale out and back using the scale slider. Watch the video for exact instructions on how to set up the WordPress website and then import your Posterous blog into it.
More info at http://www.posterousblog.com which has been set up to help with this challenge.
So far in this series we've looked at sideloading and deeplinking apps using System Center Configuration Manager 2012 SP1 and Windows Intune linked together. In this and the next few videos we'll take a look at using Windows Intune only, in isolation from Configuration Manager. This will be the preferred method for companies that want to sideload a LoB application but don't want to deploy Configuration Manager.
This video is split into sections:
To give this a try, sign up for a trial Windows Intune account at windowsintune.com. You might also want to watch the other videos in this series on The Deployment Sessions mini-site, and please Like the YouTube video if you do.
To help change its business model from a traditional web hosting company to a cloud services provider, Outsourcery is taking advantage of the latest improvements in the Windows Server 2012 operating system and Hyper-V virtualization technology. No longer hampered by a four-core-per-virtual-machine limit, Outsourcery can accommodate partners and customers that demand high-performance virtual machines. Data center administrators are running 67 percent more virtual machines per server. Administrators can build an eight-node Windows Server 2012 Hyper-V cluster in just a few hours instead of a week, and they can perform simultaneous live migrations 10 times faster than before. Outsourcery expects to save more than £50,000 (US$78,000) a year in IT costs and to serve more partners and customers more quickly with a minimal quantity of resources—containing costs and growing its business.
Situation In the United Kingdom, Outsourcery provides a broad range of cloud offerings, including hosted software applications, virtualized infrastructure, and unified communications solutions. Outsourcery is a member of the Microsoft Partner Network with four Gold competencies. In 2010, it was named Microsoft Hosting Solutions Partner of the Year, and in 2011, it was a finalist for Microsoft Dynamics CRM Partner of the Year. Outsourcery is also a member of the Presidents Club, an elite group of strategic partners whose sales achievements rank them as the highest in the Microsoft Dynamics global partner network.
Dynamic Data Center Initiative Recently, Outsourcery took a step forward in its strategic plans to expand its existing cloud services to include a larger portfolio of next-generation cloud products. From the perspective of a cloud service provider, public cloud computing incorporates the automated and on-demand delegation of compute, storage, and networking resources to partners and their customers as needed through a shared physical infrastructure maintained by the cloud provider.
To support its plans, Outsourcery upgraded its platform to utilize dynamic data center solutions from Microsoft and, therefore, benefit from the latest advancements in automated cloud management and administration. The company also upgraded its data center in Leicester and opened another data center in London in July 2011. Outsourcery offered new services, such as infrastructure as a service (IaaS), which provides Windows-based virtual machines on demand. It used Microsoft Systems Center data center solutions and HP Converged Infrastructure, which integrates technologies into shared pools of interoperable resources and provides seamless management. The platform included the Windows Server 2008 R2 Datacenter operating system with Hyper-V virtualization technology and new HP ProLiant BL460c G7 Server Blades.
“We have seen a lot of interest in shared cloud services and dedicated cloud infrastructure from small and midsize businesses. We have closed deals from 10 through 20,000 seats of Microsoft communications tools, business applications, and cloud infrastructure solutions running on a dedicated cloud platform,” says Mike Charles, Product Manager for Cloud Infrastructure at Outsourcery. “We began this initiative expecting to see our business increase five-fold in the first two years and we are right on track.”
Growth Challenges However, as demand for Outsourcery cloud services increases, the company must continue to deliver reliable applications and favorable user experiences to maintain growth. As Outsourcery is serving more and more partners, including independent software vendors and value-added resellers, it must provide cloud services geared to their needs. Outsourcery needed to create a cloud computing environment that would enable it to provide agile, responsive services to its partners to build their own businesses. “Our focus is on helping our partners to quickly and efficiently deliver cloud-based solutions to their own customers,” says Dan Germain, Director of Hosting Infrastructure at Outsourcery. “We need to provide partners with self-service tools to help them activate those services quickly, all from the Outsourcery platform.”
As a cloud services provider, Outsourcery runs business applications on a shared hardware platform. To remain competitive, it has to improve management efficiency and control operational costs at its data centers. “As we see greater demand for our services from our partners, we need to enable a greater density on our platform to keep our costs under control. For that reason, we want to be able to run more workloads and accommodate more businesses by using existing resources—without degrading service levels. We were running about 12 virtual machines, limited to four processors each, per host server with up to 32 gigabytes of RAM per virtual machine. For some of the workloads that we run in a multitenant platform with thousands of users, these limitations impacted how we could scale and grow the business.”
Large enterprise partners wanted Outsourcery to offer more flexibility in terms of network segregation. The company is using virtual local area networks (VLANs) to isolate networks of virtual machines for individual partners and their customers on a shared physical network. This increased management overhead as administrators have to renumber partners’ IP addresses to accommodate the physical and topological design of the Outsourcery data center. “As we started scaling with hundreds of partners, changing IP ranges and reconfiguring production switches when we moved virtual machines created additional layers of complexity,” says Germain. “This impeded our ability to quickly serve enterprise partners.”
Other manual data center management tasks reduced efficiency and impacted customer service. It took several hours for administrators to perform large-scale, live migrations of virtual machines because they could only move one virtual machine at a time. Building a new Hyper-V cluster could take up to a week because it required manual processes. To take advantage of its two data centers and offer data replication services, Outsourcery had to use expensive, storage area network (SAN) replication technology, which reduced the marketability of its service. Also, staff often struggled with virtual machines that consumed a disproportionate quantity of resources on the shared network.
“We made great strides with our Dynamic Data Center initiative, launching our facilities last year and taking advantage of virtualization technologies from Microsoft,” says Germain. “Now we wanted to take our data centers to the next level of density, maximizing the value of the resources we have to contain costs, while ensuring superior cloud computing services for our partners and customers. For these reasons, we were eager to find out more about the features and capabilities in the next version of Windows Server. Luckily, in January 2012, Microsoft invited us to join the Rapid Deployment Program [RDP] for the Windows Server 2012 operating system.”
Outsourcery managed to meet its requirements using Windows Server 2012. You can try it for yourself with a trial here.
To find out how they met those requirements, view the case study.
Hyper-V Server is a free operating system specifically designed to run only Hyper-V – essentially a cut-down Server Core installation of a paid-for edition of Windows Server. The cut-down part refers to the fact that only the roles and features needed to run Hyper-V are there. However, Hyper-V itself is in no way cut down; for example, you can create clusters for running highly available virtual machines (up to 64 nodes hosting 8,000 VMs) and each VM can still have up to 64 logical processors, as per the Datacenter edition of Windows Server.
So what’s the catch?
If there is one, it's that if you want to run Windows Server in a VM it needs to be licensed, and the most efficient way to do that once you get to 6-7 VMs per host is to use Windows Server Datacenter edition, as this licenses any number of guest VMs for Windows Server as well as the host. However, if you were going to use Hyper-V to host VDI then your guests need to be licensed for Windows 7/8, so Hyper-V Server is a good candidate. Another example is if you just want to host Linux VMs, which will run really well and are supported (depending on the flavour you are using).
I have made my usual short screencast to show you what it looks like..
Also, you might want to look at the other posts in my Evaluate This series as Hyper-V Server is best managed remotely, and my other screencasts will show you how to do such things as live migrations, VDI, replicate VM’s etc. all of which are possible with Hyper-V Server.
To configure Hyper-V Server for remote access all I did was use the built-in SConfig utility to join it to my domain, as remote management is turned on by default in Windows Server 2012, and I have Group Policy set up to allow remote desktop on all of my servers.
NIC teaming is now viable in Server Core and Hyper-V Server because it's built into the OS, whereas in earlier versions of the server you might not have been able to install the hardware vendor's NIC teaming software without a user interface.
Hyper-V Server, like the Server Core installation option of Windows Server, only needs half the patching of a full installation of Windows Server.
Hyper-V Server 2012 now includes PowerShell out of the box.
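As a quick taste of what that gives you, the Hyper-V PowerShell module lets you manage VMs directly from the console; the VM name, memory size and VHD path below are purely illustrative:

```powershell
# List the virtual machines on this host:
Get-VM

# Create a new VM with a fresh virtual hard disk
# (name, memory size and path are illustrative):
New-VM -Name 'TestVM01' -MemoryStartupBytes 2GB -NewVHDPath 'D:\VHDs\TestVM01.vhdx' -NewVHDSizeBytes 60GB

# Start it up:
Start-VM -Name 'TestVM01'
```

The same cmdlets work remotely, which fits nicely with managing Hyper-V Server from another machine as described below.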
Finally, you can get Hyper-V Server 2012 here to try it yourself and put it into production if needed.
By Vicky Lea
In a previous blog I discussed how the licensing of Windows 8 works at home. As a natural follow on to that we now need to think about how applications are also licensed to run on employee’s own devices, so that is what I am going to cover in this blog.
When we think about Office nowadays we need to consider Office 2013 and Office 365 ProPlus. I am going to start with Office 2013, the on-premises licensing option for the new Office.
Office 2013 is licensed Per Device. This means every device that runs Office 2013 needs a licence to do so, irrespective of whether Office 2013 is installed locally on that device or is being delivered to that device in another manner, such as via RDS or VDI. So for any device on the corporate premises accessing Office 2013 you would need to purchase an Office 2013 licence. However, it could be that you want to access Office 2013 from a home computer, so how do we make sure that home computer is licensed for Office 2013?
Well, there are a number of ways to tackle this:
First of all we could make use of the Office Roaming Use Rights Software Assurance benefit. When you purchase Office 2013 with Software Assurance you receive a number of benefits. One of these is the Office Roaming Use Rights which by definition (from the PUR) allows the primary user of the device licensed with Office SA to:
· remotely access the software running on your servers (e.g., in your datacenter) from a Qualifying Third Party Device,
· run the software in a virtual OSE on a Qualifying Third Party Device, and
· install and use the software on a USB drive on a Qualifying Third Party Device.
· When the primary user is on your or your affiliates’ premises, Roaming Use Rights are not applicable.
· You may not run the software in the physical OSE on the third party device under the Roaming Use Rights.
We can see from the definition then that Roaming Use Rights will allow Office to be delivered to an employee’s computer in a virtual OSE, via RDS or VDI for instance, whilst outside of the corporate premises. However, what happens if we would like Office 2013 to be installed locally on the employee’s device rather than virtualised onto that device?
Well here we could make use instead of the Home Use Program. This is another Software Assurance benefit that you receive when covering Office 2013 with SA. The Product List states:
Under the Home Use Program, customers’ employees, who are users of the licensed qualifying applications, may acquire a single license for the corresponding Home Use Program software, to be installed on one home computer. The license terms for that software permit the primary user of the home computer to install and use another copy on a portable device.
So with the Home Use Program an employee can purchase the Office 2013 Professional Plus media and then install the software on their own computer for use whilst they are an employee of the organisation and Software Assurance has been maintained on the underlying Office 2013 licence.
Another alternative is to license Office 2013 via the Work at Home rights received with some volume licensing agreements. Select Plus and Enterprise Agreement customers receive Work at Home rights for Office 2013. The Work at Home right allows the organisation to acquire a Work at Home licence for use on the employee’s home computer, but this licence must correspond to a licence purchased for the same product that has been deployed on an “at work computer”.
The above options all relate to licensing Office 2013, the on-premises offering of Office, but as I mentioned before there is another way in which to license the new Office. And that is via an Office 365 subscription. Office 365 is Microsoft’s cloud offering of their user productivity products, including amongst other things Office 365 ProPlus, Exchange Online, SharePoint Online and Lync Online. Office 365 is licensed via a USL (User Subscription Licence), meaning that you license each user, on a subscription basis, to access the services provided through Office 365.
Office 365 ProPlus provides the licensed user access to an always-up-to-date Office experience, with the licensed user being able to install Office on up to 5 PCs, as is confirmed in the PUR:
· Each user to whom you assign a User SL may activate the software for local or remote use on up to five concurrent OSEs.
These 5 devices can include home-owned computers as well as corporate ones, which means that you can easily license your users to access Office 365 ProPlus on home-owned devices just via their Office 365 subscription.
The last area I wish to discuss today, and then I will leave you in peace, is the licensing of Office 2013 on a Windows RT device. When you purchase a Windows RT device it comes with a copy of Office Home and Student 2013 RT preinstalled. This suite includes Word RT, Excel RT, PowerPoint RT and OneNote RT. There is one very important factor you need to be aware of with Office Home and Student 2013 RT, and that is the fact that the default usage rights of the product do not allow it to be used for commercial purposes.
This obviously has an impact when you need to use the copy of Office preinstalled on a Windows RT device for commercial purposes, but it is possible to acquire commercial usage rights for Office Home and Student 2013 RT. This can be done in a couple of ways:
Firstly, the commercial usage rights for Office Home and Student 2013 RT can be accessed via Office 2013 or Office 365 ProPlus. When you license a device for Office 2013, or a user for Office 365 ProPlus, the primary user of that device (or the licensed user, in the Office 365 ProPlus case) is provided with commercial use rights for Office Home and Student 2013 RT, which can be applied to their Windows RT device and the copy of Office that comes with it.
Alternatively, it is possible to purchase Office Home and Student 2013 RT Commercial Use Rights. These are purchased per device and will remove the non-commercial usage restriction from the licensed Windows RT device, as detailed in the PUR:
1. You must assign each license to a single device.
2. This license modifies your right to use the software under a separately acquired Office Home & Student 2013 RT license, by waiving the prohibition against commercial use of the software.
I have covered a number of areas here, and just as a reminder, if you want to check out any of the detail referred to in this blog the Product Use Rights and Product List documents are a good place to look!
By Michael Sullivan, SQL Product Manager, Microsoft UK
We all know some pretty bad SQL jokes (the language, that is). Well I do anyway. Like, a DBA walks up to two tables in a bar and says, 'may I join you?' Enough. But imagine that same DBA walking into a restaurant and finding no tables or chairs! Only cloud tags, long arrays of text strings, angry-looking web site logs, a zillion tweets munged together with neighbouring RFID tag streams, and a bunch of unemployed maps with geospatial attitude! His* mission? To chat with all of them, and get them to yield business insights which we mere mortals can consume. Maybe that takes a rocket scientist. Or does it?
Well a Microsoft DBA can approach it as follows.
First, he* thinks parallel. That means he is enlisting his company's new Microsoft SQL Server 2012 Parallel Data Warehouse (PDW) to solve the challenge. No need to try and invite all the data back to your place - even though you have incredibly efficient seating (in-memory column stores, for example) at your disposal. Nope. Leave those weird and wonderful data types where they are, on their comfortable HDFS sofas, and just get their phone numbers for now.
OK, it's time to ask this universe of guests about their favourite 'data nibbles': in parallel. As a Microsoft SQL Server DBA you can do that easily, avoiding getting bogged down in dietary requirements, long queues or complicated seating plans. How? With a new (not so) secret weapon called POLYBASE! Think of PolyBase as an amazing 'data butler' that speaks your language (SQL) and your guests' language too (in this case MapReduce). In no time you have queried all this structured and unstructured data (your complete guest population). In tech-speak: you issued a standard T-SQL query that joins tables from a relational source with tables in a Hadoop cluster, without needing to learn MapReduce queries. And you got compliments back on your near-fluent Hadoopsch accent too.
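To make that less abstract, the pattern looks roughly like the sketch below. This is an illustrative sketch only, not the exact PDW syntax: every object name here (WebClickStream, FactSales, the HDFS path) is hypothetical, and the precise external-table DDL and options vary between PDW releases and later SQL Server versions of PolyBase.

```sql
-- Sketch only: names, locations and options are hypothetical, and the
-- exact CREATE EXTERNAL TABLE options depend on your PolyBase version.

-- Expose a set of files sitting in the Hadoop cluster as an external table.
CREATE EXTERNAL TABLE dbo.WebClickStream
(
    CustomerKey INT,
    UrlVisited  NVARCHAR(400),
    EventTime   DATETIME2
)
WITH
(
    LOCATION = '/logs/clickstream/'  -- path within HDFS; plus data source
                                     -- and file format options as required
);

-- Join relational sales data with the Hadoop-resident click stream using
-- ordinary T-SQL; PolyBase takes care of the MapReduce side for you.
SELECT s.CustomerKey,
       SUM(s.SalesAmount)  AS TotalSales,
       COUNT(c.UrlVisited) AS PageViews
FROM dbo.FactSales AS s
JOIN dbo.WebClickStream AS c
    ON c.CustomerKey = s.CustomerKey
GROUP BY s.CustomerKey;
```

The point of the sketch is the second statement: once the external table exists, the Hadoop data is queried and joined exactly like any other table, with no MapReduce code in sight.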
It's the next day (the day after the party). The Heads of Business Intelligence and Enterprise Applications in your company are very happy. And the CFO too. Using nothing more complicated than Microsoft Excel, she* has everything she needs on her laptop, at the board meeting, to show amazing insights into her customers, invoices, SKUs, cash on hand, days outstanding, the shelf-life of everything that left her factory last week, not to mention deep insight into customer sentiment about a product recall triggered by a quality audit last month. Wow! And all because you are a SQL Server DBA.
Book your ticket to a parallel universe of insights at the upcoming SQL Bits event, or drop us a note at firstname.lastname@example.org to learn more.
* my fictional DBA is a 'he' and my CFO is a 'she'.