IT project work can generally be split into two categories: user-facing work and behind-the-scenes work. With either type, it is important to plan properly everything you need to do along the way.
Both Lewis and Andy are visually impaired engineers who have mastered the use of Windows Server 2012 R2 using voice synthesizers and a Braille display to work more quickly and smartly...
The Internet of Things is a conundrum, with many innovations coming from its emergence. How does SharePoint fit in? Does it fit at all?
With such a vast array of sessions at TechEd Europe 2014, it’s hard to keep up with the latest announcements, so the aim of this article is to do the hard work for you, whether you are at TechEd or not. Stay tuned, as this article will be updated regularly as new announcements are revealed!
Microsoft announces Exchange Support for Azure VM as a Witness Server - 31/10/2014
As of January 1, 2015, Microsoft will support using an Azure IaaS file server VM as a witness server for an on-premises DAG... Find out more here.
Microsoft Band - 30/10/2014
Microsoft Band, the first device powered by Microsoft Health, helps you achieve your wellness goals by tracking your heart rate, steps, calorie burn, and sleep quality... Find out more here.
Azure Site Recovery - 30/10/2014
Announcing Windows Azure Pack integration and PowerShell Support... Find out more here.
Azure Data Factory - 29/10/2014
With the ability to manage and orchestrate the collection, movement and transformation of semi-structured and structured data together, Data Factory provides customers with a central place to manage their processing of web log analytics, click stream analysis, social sentiment, sensor data analysis, geo-location analysis... Find out more here.
Azure Stream Analytics/Azure Event Hubs - 29/10/2014
Azure Stream Analytics will help businesses collect, analyse and gain real-time insights from data produced by IoT-connected devices and applications... Read more here.
Azure Marketplace - 28/10/2014
The Azure Marketplace helps bolster productivity and strengthens application development by being the first marketplace to provision multi-virtual machine clusters... Read more here.
Azure Websites Migration Assistant - 28/10/2014
Now you can easily migrate your existing websites that run on Internet Information Services (IIS) 6 or later to Azure Websites... Find out more here.
Exciting Office Updates - 28/10/2014
Office 365’s latest innovations in security and compliance... To find out more click here.
The Office 365 APIs, including Mail, Calendar, Contacts, and Files, are now generally available for production use... Find out more here.
Enterprise Mobility Announcements - 28/10/2014
Enterprise mobility is at the core of our mobile-first, cloud-first strategy. Today new capabilities were announced that will make it even easier for IT and the mobile workforce to be productive, while keeping company data secure... Find out more here.
Exciting Updates to Microsoft Azure - 28/10/2014
Updates with the aim of enabling simplicity, scale and innovation... To find out about the new updates to Microsoft Azure click here.
Microsoft Azure Batch brings together Microsoft's GreenButton acquisition and Azure to deliver resource management and job scheduling as a service... To find out more and for the preview head here.
Azure Operational Insights will use machine learning to create actionable insights for enterprises, allowing them to make better business decisions... for the preview and to find out more click here.
ExpressRoute enables you to create private connections between Azure datacenters and infrastructure that's on your premises or in a colocation environment... Find out more here.
Azure Automation automates time-consuming tasks across Azure and third-party environments, reducing the risk associated with repetitive manual processes... for the free trial and to find out more click here.
Cluster witness in the cloud – 28/10/2014
Mark Russinovich confirms that the new cloud witness feature will work with any failover cluster scenario, including Microsoft Exchange... Find out more here.
Resources
What do you think of the latest announcements from TechEd Europe? Let us know in the comments section below or via @TechNetUK.
Mykhailo Liubarskyi is a software architect and a lead software developer at SoftServe Inc. With over 9 years in the industry, he has extensive experience within the US and European IT markets. He is responsible for the development of management software products produced by SoftServe and has an M.Sc. in Computer Sciences from the National University in Kharkiv, Ukraine.
With all the interconnected gadgets, services, and applications running worldwide, what once used to be a good script for a futuristic movie is now becoming an inevitable reality. While the phrase “Internet of Things” is on everyone's lips, do we really know what's in the name?
Defining the IoT
No, the Internet of Things is not a simple network of different instruments and sensors connected to each other, as well as to the Internet, via wired or wireless communication channels. It would be an oversimplification to consider it so. The “father” of the concept of the IoT, Rob van Kranenburg, defined it as a single network connecting real-world and virtual objects around us, claiming that all of the analog and digital worlds can be combined into a single interconnected system. It basically redefines the way “things” (anything existing and moving in space/time, whether real or virtual) interact, as well as their properties.
The very idea of the Internet of Things is based on the extensive integration of real and virtual worlds, where communication is carried out between humans and devices, and between devices without human intervention. It presupposes that in the near future "things" would become active participants in business, information and social processes where they would interact and communicate with each other by exchanging information about the environment, responding to and influencing the processes occurring in the outside world, with or without human intervention.
Based on Rob van Kranenburg's four-layer scheme, we can speak of the IoT evolving from separate identifiable smart objects, to smart units (combined into a smart house, for example), to bigger networks like smart cities, to one encompassing the whole planet. Yes, it's that sci-fi.
In other words, the IoT should actually be viewed as a multidimensional network, where small, loosely coupled devices are grouped into larger networks whose communication requires a special common language.
Challenges to Overcome
But before we reach the final Smart Earth stage of the IoT evolution, there are still a few challenges and issues we'll have to solve.
At the same time, the potential benefits of the IoT implementation are enormous.
With the global scale and unprecedented speed of analyzing data, with each second of human life documented and stored (a thought both scary and fascinating), immediate diagnosis, and even prevention, of disease based on the wealth of incoming clinical data could become a common occurrence. Epidemic and chronic diseases, as well as many other threats challenging humanity, could be successfully defeated. The new horizons opening for the world's science, health care, space and Earth exploration defy the boldest imagination. The new world might be closer than anybody could think.
ThyssenKrupp Elevator - Giving cities a lift with the Internet of Things
ThyssenKrupp Elevator have drawn on the potential of the Internet of Things and Microsoft technologies by connecting their elevators to the cloud, gathering data from sensors and systems, and transforming that data into valuable business intelligence.
Gavin Payne is a principal architect for Coeo, a SQL Server professional services company, and a Microsoft Certified Architect and Microsoft Certified Master. His role is to guide and lead organisations through data platform transformation and cloud adoption programmes.
In this article, we’ll see how you can use a free tool from Microsoft to identify the servers in your environment prior to starting data-centre wide IT projects.
Change is the new norm
In the last few years, not only has the pace of change in IT increased but also the scale of change. To borrow a cliché: whether it’s migrating physical servers to virtual servers, on-premises services to the cloud, or upgrading software – the only constant is the fact that everything keeps changing.
When we think about the types of change mentioned above, it's possible that they're going to affect a large number of systems in an organisation. Replacing Windows Server 2003 in some environments means refreshing the majority of their servers, and the same might happen when you consolidate your physical workloads onto a virtual platform.
If that’s a challenge you’re facing, then the first step in any multi-server upgrade project should be defining your scope.
The purpose of a scope
A scope tells you what your project needs to consider and what it can ignore. In the context of a server environment, this might tell us which servers you have to upgrade and which you can leave as is. Essentially, it defines the size of the problem.
How to define a scope
If you’ve got a small enough environment, you might be able to identify all of the in-scope servers for a project from information you already know. If not, you’re going to need to perform a discovery and have something tell you what’s in your environment. Some might be surprised that information like that doesn’t already exist, but the reality is that if it does, it’s typically not trusted to be up to date, so it still needs collecting again.
Discovering your environment with the Microsoft Assessment and Planning Toolkit
Microsoft’s free Assessment and Planning Toolkit, known as MAPS, is a simple but powerful tool that lets you discover all of your servers, their configuration and their workload, among many other areas of detail.
It’s available to download from its TechNet home page here:
I use it to do the following:
It can do far more, but for a mid-size organisation unfamiliar with exactly what it has in its IT environment, how it's configured and how hard it works, this can often be the first time it sees this kind of information in a single report.
Defining your scope
One of the first outputs from the MAPS tool is the name of every server in the environment it scanned – that’s version 1 of your scope.
Next, you need to start thinking about servers you’re planning to retire before the project finishes. Then, you should consider any servers in your data centre that you don’t own or manage – and so on. Your objective is to make your scope as small as you legitimately can. No one wants to do unnecessary work, but at the same time you can’t ignore servers for no reason.
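The scope-narrowing steps above can be sketched in a few lines of Python. This is a hypothetical illustration, not part of the MAPS tool; the server names and exclusion lists are invented:

```python
# Version 1 of the scope is every server the discovery found.
discovered = ["web01", "web02", "sql01", "legacy03", "partner-fw"]

# Each justified exclusion then shrinks the scope.
to_be_retired = {"legacy03"}   # retiring before the project finishes
not_ours = {"partner-fw"}      # in the data centre, but not owned or managed by us

def define_scope(discovered, exclusions):
    """Keep every discovered server that has no justified exclusion."""
    return [s for s in discovered if s not in exclusions]

scope = define_scope(discovered, to_be_retired | not_ours)
print(scope)  # ['web01', 'web02', 'sql01']
```

The point of keeping the exclusions as named sets is that every server removed from scope carries an explicit reason, which matches the rule that you can't ignore servers for no reason.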
Summary
Hopefully this article has either introduced you to the free Microsoft MAPS tool, or if you’re already familiar with it then hopefully you’ve learnt something from me explaining one of the ways I use it.
Peter Egerton is a Microsoft System Center Consultant for Inframon Ltd. He travels the UK designing, implementing, supporting and training on the Microsoft System Center product range. He has been an IT Pro for 14 years and in his spare time he is a Community Leader for the Windows Management User Group (WMUG).
Today’s world is ever moving and ever evolving. Week by week there’s new software, new devices and people naturally want to keep up to date with the latest and greatest. Traditionally, trying to keep your corporate IT in line with this requires an army of staff and deep pockets. The days of a standard issue corporate machine are undoubtedly numbered - and I personally think this is a good thing.
I’m a consultant and I get to visit businesses and organisations all over the UK, and I often ask them if they do BYOD. Some do and some don’t – or at least they think they don’t. The fact is that even if they say they don’t, I can almost guarantee their users do, and often there’s not a lot they can do about it. People use email on their personal devices, swap USB drives between corporate and personal devices, VPN from a personal device – you name it, they will try it if it allows them to get their job done. So what we are looking at there is really BYOD – accessing corporate resources from non-corporate devices.
As someone who spent around 12 years in IT support, I know this can be a nightmare, but I know now that things can be made better: you can regain that control whilst making life easier for your users by allowing them to use whatever device they can get their hands on. For starters, if your bugbear is devices, then why not implement a Choose Your Own Device (CYOD) policy? Give the user a wider choice of devices and operating systems, but from a range that you have chosen. The technology now available can allow you to put a level of management on other operating systems, so you can include Apple Mac and Android in your CYOD range if you so choose.
Let’s look at some of our options:
Windows to Go
Make your corporate desktop mobile and give your users a Windows to Go USB stick so they can use their corporate desktop on their home device. This gives you flexibility, as the user can take it anywhere and use it with multiple devices. It also gives you security – you can encrypt it with BitLocker so your corporate data is protected, and you cannot see the local internal storage in the machine you are using, so there is no ‘cross-contamination’. Also, don’t forget the cost aspect of this one: it’s a cheap solution for your occasional home workers or those who prefer to run their own device and maybe want to keep up with the latest and greatest. Once they’re done with work they can simply unplug it and use their personal desktop again.
For some detailed information on setting up Windows to Go in your environment take a look at this article to get you started.
RemoteApp and Microsoft VDI
If you want to deliver your corporate applications to your users then why not give them the applications they need to use over the wire? Give them a full desktop if you really want to. RemoteApp is a tool that has been around a little while now but seems to have reached prominence due to some recent feature enhancements which make the whole experience that little bit slicker. The basic functionality of delivering corporate applications to a device (corporate or other) with all the processes running in the data centre and no data being stored on the end users devices appeals to many folk I speak to.
Again it’s flexible, as you can use it anywhere with an internet connection; it’s certainly efficient, as you only need to make application or operating system patches, changes or upgrades in one place; and it’s secure. You can run this in your own data centre if you prefer, or Microsoft now offer RemoteApp in the cloud via Azure. As someone who has used this both on-premises and through Azure, I have to say that from a technical perspective the Azure set-up has to be easier.
If you choose Azure RemoteApp then you have a fixed choice of applications, including Microsoft Office. The good thing about this is that it’s all maintained by Microsoft – no patching, updating, hot fixing etc. by you. That’s great, but not for all. If you want to deliver your own RemoteApps or desktops then you can use the RemoteApp Hybrid Deployment like this:
You essentially need to create a corporate ‘gold’ image which has all the applications you might need and upload it to Azure; you can then publish either the full desktop or selected applications from within it. As you can see from the diagram, these can be domain-joined machines which are subject to your group policies, can be fully managed with SCCM like a standard client, and can access the data in your data centre. Now, obviously you’re going to need to maintain this yourself, but if I think back to my early days in IT, trying to support travelling directors who would randomly call in with problems from countries far away, giving them something like this would have been amazing.
If you want to explore this a little further there is an excellent 2 part blog post from Microsoft UK IT Pro Evangelist Ed Baker here.
System Center Configuration Manager (SCCM)
I feel I should make a special mention for Configuration Manager or SCCM as it’s frequently known because it is a product I specialise in and spend a lot of time working with. I realise I’m writing this on a Microsoft blog but I’m still an independent and I have to say it’s an awesome product which can enable flexibility, efficiency and security for your workforce.
I always think you can measure how valuable a piece of software is by taking it away and seeing how you get on without it. I’ve seen this with Configuration Manager, and people soon realise how important it is to their daily work and how much time it saves them. You can deploy software to your users across multiple devices, and they can choose where they want to consume that application with the Application Catalog.
You can obviously manage your standard office-based desktop devices (over multiple operating systems, I might add), but often forgotten is the fact that you can configure Configuration Manager to use internet-based client management as well and manage clients that are rarely in the office. I won’t dwell on all the features of Configuration Manager, as I would be here a while, but I wanted to give a quick nod to the product just in case you haven’t seen it – you should really check it out.
If you want more information on Configuration Manager there is a huge community out there on the web – I’m part of it. If you want to trial Configuration Manager then you can download a 180-day free evaluation here.
I would also suggest checking out Microsoft Virtual Academy as an accompaniment before you implement your trial.
The icing on the cake – Enterprise Mobility Suite
Now I come to the latest offering from Microsoft in the world of Enterprise Client Management – Enterprise Mobility Suite. I class this as the missing link for client management as it completes the feature set which maybe some of the other tools on their own don’t cover. As the name suggests this is a suite of products and includes:
The image above gives you some of the highlights of each product, but these three features combined offer you the flexibility to manage your devices on or off premises, domain-joined or not, and give you the ability to stretch your resources depending on your requirements at the time by leveraging Azure. The efficiency comes in because it’s maintained by someone else, meaning you can get on with the productive stuff whilst someone else worries about the upkeep. Self-service password reset and multi-factor authentication are both efficient and secure, and, from experience, remotely wiping or resetting mobile devices is a really easy and efficient process.
You can easily create an Active Directory in Azure using the portal:
And you can see the simplicity of the portal once it has been created:
In here you can manage your users and groups as you would in your current Active Directory, configure the directory integration between the two, and view the available reports. Another really neat feature is SaaS application discovery. You can set a discovery process running against your on-premises Active Directory and then use the discovered applications to configure single sign-on with your Active Directory account. A nice add-on is that Azure AD Premium grants you rights to use Forefront Identity Manager on-premises, which is an awesome tool for hybrid identity management.
Microsoft Intune allows you to manage all kinds of devices (iOS, Android, Windows Phone, Windows 8.1) from the cloud and apply a level of control and policy to the device that is being used to access your corporate data. In a BYOD or CYOD scenario this is ideal, as it keeps both sides happy – you can manage and deploy applications to devices, and the user can use whatever they want. You can also remotely wipe or reset devices at the click of a button for those occasions when someone leaves their device in a taxi. You can see the list of available options in Intune.
You can also combine Intune with Configuration Manager to create a Unified Device Management scenario and leverage the added functionality of Configuration Manager whilst at the same time creating a single point of contact for all your device management needs. The efficiency of this alone has to be noted; there are numerous examples out there of the simplified management this configuration offers.
The final feature in the Enterprise Mobility Suite is Rights Management. This is where the security of the whole offering becomes enterprise-grade in terms of data protection. Azure RMS allows you to classify data and set specific policies depending on that classification: where it is stored, who is accessing it, when it is accessed, what connection they have and so on. As you might have guessed, this can be used across multiple device types and can be audited, monitored and reported on accordingly. A particularly nice feature is preventing the use of copy/paste and even stopping tools such as the Snipping Tool from taking screenshots of the data.
At a high level Azure RMS policies work like this:
The full process is detailed by Microsoft if you want to know more.
I believe that Enterprise Mobility Suite ties together the various Microsoft offerings and for me creates a really attractive proposition for Enterprise Client Management. Further information can be found here.
In summary, you should see that there are various options out there for you, whether you are on-premises, in the cloud or both, and whether you are a mobile organisation or a static one. Different products work for different businesses, so take a look and see what meets your requirements.
Going back to the question – how do you transform the tech in your workforce to enable flexibility, efficiency and security? I believe that by implementing at least some of the solutions outlined, you will begin the transformation of your workforce whilst enabling flexibility and efficiencies both in front of and behind the desktop, and above all securing that vital data which is at the core of every organisation.
Does this article help you see how you can transform the tech in your workforce, to enable flexibility, efficiency and security? Is there anything you would add? Let us know in the comments section below or via @TechNetUK.
Richard Conway, Elastacloud, Director and Head of Cloud Services at Elastacloud Limited, Co-Founder at UK Windows Azure Users Group. If you're interested in getting started with Azure, join us every Tuesday from 12:30-14:00 (UK timezone) for the Azure Weekly Webinar. It is aimed at the techie who has not yet had any/much exposure to Azure but who just wants a leg-up to get started. This week join us for the first hour's "how-to" session which is immediately followed by a 30 minute talk by our guest speakers from Scaboodle who will be giving a presentation on Building a Cloud Business.
Over at Elastacloud we’ve been using Big Data and machine learning frameworks on Azure for years, and at a community level teaching free courses and bootcamps on HDInsight and the associated machine learning framework Apache Mahout. As such, I’ve been waiting with bated breath for Microsoft to release their own machine learning offering. Let’s deal with some basics …
What is machine learning? Machine learning provides computers with the ability to learn without being explicitly programmed. It focusses on the development of software that can teach itself to grow and change when exposed to new data.
In my short online briefing I covered the idea of training data. In order to create a model and begin to understand the relationship between data points we can apply several types of algorithms to create models. These models can then be applied to new data.
You can see that after we select our dataset, we train our model so that we can then test further data. We can then try the model with new data and/or a portion of our original dataset and evaluate the results. The feedback cycle can continue ad infinitum so that we create the best model available to us.
AzureML allows this process to occur very simply. If you look at the dataset I’ve chosen below you can see that I’m testing the idea that cricket chirps get louder as temperature increases. With AzureML I haven’t needed to write any code in this instance as I have data for cricket chirps in decibels and temperature in centigrade.
You can see that we feed the algorithm – in this case Linear Regression (remember it from secondary school statistics class!) – which seeks to find a linear relationship between the two aforementioned variables. We then train this model, and we can score and evaluate it thereafter. This allows us to feed in additional cricket chirp data based on our training data so that we can determine whether we can accurately predict the temperature given the chirps of crickets. Evaluating our model is easy: if we have the actual data, we can test whether our predicted data and actual data match. Determining whether our model is effective is a simple consequence of averaging the errors between the two sets of data points!
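The train, score and evaluate cycle described above can be sketched without any ML framework at all, using ordinary least squares by hand. The chirp and temperature figures below are invented for illustration, not the dataset used in AzureML:

```python
def train_linear(xs, ys):
    """Fit y = a*x + b by least squares and return (a, b)."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def evaluate(model, xs, ys):
    """Score the model: mean absolute error between predicted and actual."""
    a, b = model
    return sum(abs((a * x + b) - y) for x, y in zip(xs, ys)) / len(xs)

# Training data: chirp loudness (dB) against temperature (degrees C)
chirps = [40.0, 45.0, 50.0, 55.0]
temps = [10.0, 15.0, 20.0, 25.0]

model = train_linear(chirps, temps)
a, b = model
print(round(a * 60.0 + b, 1))        # predict the temperature for a 60 dB chirp
print(evaluate(model, chirps, temps))  # error is 0.0 on this perfectly linear data
```

Feeding held-back data into `evaluate` instead of the training data is exactly the feedback cycle the article describes: the lower the average error on data the model has not seen, the better the model.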
What can machine learning be used for? Any type of problem which has available data can be tackled with AzureML. Google is using machine learning to make self-driving cars, and Microsoft to make the Kinect and Xbox recommendation tools. Netflix is the posterchild of recommender systems and has successfully used machine learning to predict what its users want to watch next. Not an easy feat.
AzureML is a fully featured machine learning host which will allow you to upload or process datasets in Azure, clean that data up and then make predictions using it. Currently it supports a drag-and-drop web interface using HTML5 called MLStudio, but it offers a high level of configuration over the tasks you can enable. It also supports the R programming language, the most popular language among statisticians and data scientists, as well as .NET, exposing some of the features of Infer.NET – a powerful library produced by Microsoft Research.
Any model can also be exposed as a web service, which is a powerful enabler for the many data scientists who don’t have the software skills to take their models into production.
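To see why exposing a model as a web service matters, here is a minimal sketch of what that wrapping looks like, using only the Python standard library. The endpoint shape and the model coefficients are invented for illustration; AzureML generates and hosts its own service URLs for you:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Pretend these coefficients came out of a trained linear model.
SLOPE, INTERCEPT = 1.0, -30.0

def predict(chirp_db):
    """Apply the trained model to one new observation."""
    return SLOPE * chirp_db + INTERCEPT

class PredictHandler(BaseHTTPRequestHandler):
    """Accept a JSON body like {"chirp_db": 60.0} and return a prediction."""
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        result = {"temperature": predict(body["chirp_db"])}
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(result).encode())

def serve(port=8080):
    """Start the prediction service (blocks until interrupted)."""
    HTTPServer(("localhost", port), PredictHandler).serve_forever()
```

Once wrapped like this, any application that can make an HTTP request can consume the model, with no statistics knowledge required on the caller's side; that is the enablement the article is pointing at.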
Jonathan Noble is a PowerShell Microsoft MVP with additional interests in Windows Phone Consumer, Office365 and Forefront Identity Manager. Jonathan has been a full-time IT Pro for over a dozen years and since 1999 he's been working in the IT department at Newcastle University. On the 12th of November, join Jonathan who will be speaking on the topic of 'Building a Desired State Configuration Infrastructure' at Future Decoded which is a free event which will see keynotes from Brian Cox, Sir Nigel Shadbolt, Or Arbel and Michael Taylor, followed by eight individual speaker tracks that discuss different topics. Be sure to register before the day is sold out, and we hope to see you there!
I'm very happy to be presenting at the Future Decoded Tech Day on 12th November at the ExCeL in London. The thing that I find most exciting about it is that my session on "Building a Desired State Configuration Infrastructure" is part of the DevOps track, and that's because I believe the DevOps approach is really important for all of us to embrace going forward. It's unfair of me to make that claim without a bit of explanation, so let me try to quickly start down the track towards convincing you…
In most IT organisations, whether they're the whole company or just a department within a bigger organisation, there tends to be a situation where IT operations and development teams are not totally aligning to work on common goals, even if they aren't completely at odds with each other (which is sometimes the case). Among other things, this leads to the situations where sys admins are thinking that the developers are writing bad code, and the devs are thinking that their code works fine on their machine and the IT operations people are hopeless for not being able to get it running in production. They have silos where systems (whether that's applications or infrastructure) are developed in isolation from each other and then have to be integrated down the line, when it might be harder to change things.
The DevOps approach includes the whole product/service lifecycle, with greater collaboration between those previously isolated teams. This drives enhancements in efficiency, and enables faster release cycles because features can be added and tested more reliably. You don't have to be aiming for 10 releases a day like some organisations - that doesn't suit all situations, but I doubt many people would think the option of greater agility would be a bad thing. All this requires something of a culture change, which does need a degree of buy-in across the IT organisation, but it also hinges on automation. There's no chance of doing continuous delivery without consistency, and your chance of maintaining consistency is very limited without embracing automation; especially if you need to scale rapidly.
While you may or may not be great at scripting at this point, using sets of imperative commands to put your infrastructure into the required state may only work for a snapshot in time. You run your configuration scripts and everything works fine for some minutes/hours/days/months, but the fact that you put a system in a particular state once upon a time is not going to help you if something or someone comes along and changes that. That is why it is so important to use a declarative system to configure your infrastructure, so that it's self-repairing across all the dev/QA/production layers.
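The difference between a one-off imperative script and a declarative, self-repairing system can be sketched as a toy reconciliation loop. This is only an illustration of the idea, not how DSC is implemented, and the resource names are invented:

```python
# The declared desired state: what the system should look like, always.
desired = {"web-service": "running", "debug-logging": "disabled"}

def reconcile(actual, desired):
    """Return the corrections needed to bring actual state back to desired.
    A real agent would apply these on every consistency check, so drift
    introduced by people or processes gets repaired automatically."""
    return {name: want for name, want in desired.items()
            if actual.get(name) != want}

# Someone (or something) has drifted the system since the last check...
actual = {"web-service": "stopped", "debug-logging": "disabled"}
print(reconcile(actual, desired))  # {'web-service': 'running'}
```

An imperative script would have started the service once and never looked again; the declarative approach keeps re-checking, which is what makes the configuration stick across the dev, QA and production layers.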
For Windows Server and a whole lot of other components, PowerShell Desired State Configuration is a great approach to declarative configuration management, whether you like to keep your infrastructure in-house, or if you're moving part way, or all the way, into the Azure cloud. You're going to use it to tell the systems not just the way you want them to be, but the way you want them to stay (until you tell them otherwise). You don't need to be a PowerShell guru to get started, as my presentation will highlight (although you should keep working towards that PowerShell guru badge because it's going to pay off in spades!).
Using DSC, you can use push or pull deployment methods to get your configurations to the nodes you want to manage. There are pros and cons to either approach, but I expect most people will want to aim for setting up a pull server and having the nodes poll it for updated configs. Given that the pull server is going to be a key part of your infrastructure, you want to make sure that it's properly configured, so what's the best way to do that? With DSC, of course! The DSC Resource Kit offers resources to configure this and a load of other things that aren't offered in the box, including some unexpected options like installing Chrome.
You don't have to be going all the way DevOps to gain advantages from managing your infrastructure with DSC, but if you've already implemented a DSC infrastructure, when the rest of your organisation catches up with the need to adopt DevOps practices, you'll already be ahead of the game.
Will you be attending the Future Decoded Tech Day? Who are you most looking forward to seeing? Let us know in the comments section below or via @TechNetUK.
Ed Jones works for Firebrand Training, a Microsoft Gold Learning Partner. He has worked in the IT training and certification industry for the past 3 years. He is a tech enthusiast with experience working with SharePoint, Windows Server and Windows desktop.
Having been part of a team that has launched or upgraded hundreds of courses, it’s not often I get really excited about a new certification. The Specialist track released for Microsoft Azure falls into that elite list of new certs which gets my geek side buzzing.
As businesses continue to transition to the Cloud, demand for skilled and certified professionals continues to outstrip supply. If you’re a Developer or IT pro working with Cloud technology, these Specialist certs are career changers.
In this post we will look at the IT pro focussed Implementing Microsoft Azure Infrastructure Solutions Specialist certification. For those interested in the Dev focussed Developing Microsoft Azure Solutions Specialist certification, head to the Microsoft UK Developers blog for the low down. Without further delay I’m going to pre-empt all the questions now buzzing around your brains:
Q. Who is this certification intended for?
A. This certification will be invaluable for experienced IT professionals responsible for on-premise infrastructure. The course will teach you to migrate some or all of your existing on-premise infrastructure to the Microsoft Azure public cloud.
So whether you choose to adopt a hybrid cloud option, or become fully immersed in the cloud, you will walk away from this course with the skill set required to do so.
Q. Why should I get this certification?
A. As I touched on during the intro, cloud technology is a rapidly expanding sector which infiltrates every facet of modern business. This rapid growth has created a skills gap where demand for cloud-qualified professionals outpaces supply. Only last week an IDC report on cloud skills highlighted that 56% of European IT departments now cannot find qualified staff to effectively support cloud projects.
Couple this with the fact that Microsoft Azure is now the second largest provider of Cloud Infrastructure Services and the value of this certification becomes clear. If you hold a Microsoft Azure Specialist certification, you are one of the most in-demand IT professionals currently in the marketplace. A new job, salary increase or that promotion you were looking for just got a little bit closer.
Microsoft Azure is also growing faster than any other cloud services provider, with 154% YOY growth. So it looks like the demand for Microsoft Certified Cloud professionals is only set to rise.
Q. What will be covered during the curriculum?
A. You will cover the following modules when working through the official curriculum for the Microsoft Azure Infrastructure Solutions Specialist course:
Q. When will this certification be available?
A. You can sit the Implementing Microsoft Azure Infrastructure Solutions exam and attain your certification as of now. You have two options: schedule exam 70-533 with Pearson VUE or Prometric. Though if you’re planning on sitting your exam after January 1, 2015, book it with Pearson VUE, as Prometric will cease delivering Microsoft certification exams on December 31, 2014. Additionally, you can sit the Developing Microsoft Azure Solutions exam.
Training providers across the world have been developing their offerings, while instructors have prepared for the exams required to deliver the course. Training is now available, with Firebrand among the first to market.
Q. What are the Microsoft certification prerequisites?
A. Both Azure Specialist certifications sit outside the traditional MTA, MCSA and MCSD/MCSE tracks; as such, there is no prerequisite certification required to sit the course. However, having spoken with our lead Microsoft Instructor, Mike Brown, and reviewed the curriculum personally, it’s clear an in-depth understanding of virtualization would be hugely beneficial.
Those in possession of the MCSA: Windows Server 2012 certification will therefore stand in good stead for this course. Those without the MCSA should consider taking it first, as it provides a sound introduction to virtualization.
Q. What if I want to prepare for the Microsoft Specialist certs now?
A. Those wishing to gain a solid grounding in Microsoft Azure in preparation for the course should head to the Microsoft Virtual Academy. There are currently 28 Microsoft Azure short courses available, which you can self-study to build a foundation of knowledge.
Again, having spoken with our lead instructor, it sounds like training will be necessary for this cert. Due to the scale and intricacies of Microsoft Azure, the sheer range of knowledge covered in this course and the current lack of external resources, self-study will be an exceptionally tough and potentially unrewarding route.
I hope this FAQ has answered all your questions. If, however, I’ve simply generated more questions I’ve yet to answer, please feel free to ask them. My final piece of advice: it’s never too soon to get started, so give those MVA Azure short courses a quick look right now.
The following article is contributed by Geoff Evelyn, SharePoint MVP and owner of SharePointGeoff.com.
Consider the conundrum of the artist who paints themselves into a picture. The artist is painting a picture of him/herself painting a landscape. While painting it, the artist feels that something is missing. So, in order to feel a crucial part of the painting, the artist paints a picture of an artist painting a picture of an artist painting a landscape. Repeated, this puts an infinite number of artists within the picture, and yet the artist is still outside of it.
Strange start to an article? Read on. The same conundrum applies to the Internet of Things, where, by definition, it is devices, not humans, that send information: things like equipment sensors report their location, connections to other sensors, environment and so on to a myriad of collection centres, which in turn feed that information to others. If you are responsible for managing technologies which utilise sensor feed data, how can you ensure that you know everything about that data? How can you govern its quality? How can you be sure of its integrity? What kind of support, both technical and business, is available for the devices providing the sensor feed and the software whose job it is to gather and record that data in the first place?
So, how does SharePoint, in terms of content management, fit in? Does it even fit? Is SharePoint a crucial aspect in the provision of data provided by sensors? This non-information-worker data is not personal in the sense that a human actually created the content, and yet in many cases it is crucial to the operation of a process, and it still needs to be managed.
The gathering of sensor data across networks and storing it in a central database or data host is not new; we have been doing it ever since the first photographs in the 19th century. In terms of software, user interactions would be logged as part of audit trails, data updates and the like. The explosion of networked sensor connectivity means that more and more direct and indirect interactions with equipment are being monitored and turned into analytics. Examples can be anything from a tennis racket to a traffic cone to a fridge to a cooker to a washing machine to a bicycle to a car and beyond. Additionally, the data provided may be harvested by another device, or combined with data from another device. The point is, how can you be sure of the integrity of the data provided, and how can you ensure that there is adequate support, communication and management of the sensors providing it?
Internet of Things
Internet of Things (known as IOT and referred to as such through the rest of this article) technically defines the connection of devices over networks to provide data. The data could be coming from a mass of resources, both internal and external. The data could be managed and controlled in data centres rather than on-premise (hence the provision of cloud-based data). Because of the various kinds of data being provided, it is possible to combine data from various sources to present business insights for customers. In computing terms, IOT is not new. Sensor technology has been around for years. Like the advent of 'Cloud', it is simply a re-use of technology which has been in use since the 1980s.
For example, back in the 80s (showing my age here), mainframes and mini-computers had software whose purpose was to gather and store data from sensors and then return results - yes, sensor technology was around then. Sensors provided analytic data; for example, information concerning the status of the equipment. Mainframes had thermal sensors that shut the machine down if the room reached a set temperature. Those sensors also sent data that was recorded in a database, and software was provided specifically to extract performance information concerning the operation of those mainframes.
Therefore, some will see IOT as hype; however, that does not mean it is not here to stay and evolve. McKinsey Global, in a recent report, projected a global value of $6.2 trillion (£3.6 trillion) by 2025.
Nowadays, the ubiquity of devices such as Tablets and Smartphones, and their reach to kids in schools, means that what we are seeing is a really good preparation at an early age for IOT. This is the generation that will find as they grow up that everything will be smartly metered and monitored. There will be a billion sensors on the planet all telling them where and when they have parked their cars, for example.
There are so many innovations coming from the emergence of IOT - take the following examples - note I am not a salesman for any of these!
Sensors and Intelligent Edge Devices for Transport for London
Microsoft Azure Intelligent System Service and Microsoft SQL Server have improved the efficiency of Transport for London. The system connects thousands of devices and data streams across a rail network serving millions of people.
Machine to Machine Network in Milton Keynes
A machine to machine network has been provided in Milton Keynes so that the local council can use technology to drive efficiency into its services. For example, dustbins have sensors which can tell when a dustbin is close to full and will alert services when they need to be collected. Car parks will have sensors to identify which parking slots have been filled and how many.
Wireless sensors in Traffic Cones to protect workers
Sensors have been placed inside traffic cones which can measure the distance between each other, record the speed at which cars pass the cones, and even record things like weather conditions. Additionally, according to the story at the link below, cones can detect straying vehicles by sending out a radio frequency signal that triggers a portable site alarm (PSA), located within 50m, to ‘scream’, alerting road crews. The cones also send information which is then updated on a site portal.
Sensor alarm in house connected to other sensors
A product called NEST is basically a smoke alarm system which has the capability to connect to other sensors. This means, for example, that in the event of smoke being detected it could instruct another sensor to switch off the boiler. In addition, it could instruct other sensors to send information to the house owner.
Pay for goods by scanning patterns in veins
Start-up company Quixter has placed terminals in 15 stores and restaurants around Lund University in Sweden that enable 1,800 active users to pay for goods by simply scanning the unique pattern of veins in their hands.
Face recognition in petrol stations
Tesco has placed cameras which capture the faces of those who visit its petrol stations and uses that data to drive targeted advertisements to those customers.
Support impact
The general trend for integrating SharePoint or any technology with external systems, devices or data sources that provide data is to outsource technology development and support.
The impact however, is that if something goes wrong the issue will be exposed very quickly; with an immediacy and scale that is not altogether controllable.
For example, consider data provided by a probe monitoring oil flow and capacity, surfaced to executives via dashboards in SharePoint. The data is also copied through SharePoint to other systems which provide other parties with specific information.
Whilst all of this sounds wonderful, there are several pitfalls which are all related to the available support and the ownership of the issue as follows:
In reality, SharePoint as a platform is naturally not directly affected by IOT. From a software perspective, there is minor impact on how the system operates. However, there is a significant impact on the service delivery of SharePoint solutions. Issues like support, ownership and change management come into play, and these issues are not directly business related.
End to End Support is affected
In another article, I talked about the ability of SharePoint support to actually support the various technologies connected and integrated. From a general perspective, that is deemed easy when what is being supported is known, support is managed by humans, and there is a defined outcome to the use of the relevant technology.
With IOT, end-to-end support is affected. Support is provisioned by third parties that either created the sensors or provide software to access the information produced by them. Some IOT support provision is in fact provided by the devices themselves.
Therefore the knowledge required to support SharePoint and the technologies behind the sensor operation needs to be understood. I have witnessed situations where the assumption is that, because the sensor data comes from a hosted solution, there is no need to understand the level of support required from an implementation perspective. In other words, some assume that building a solution which requires IOT is simply a matter of connecting to the hosted service, dropping in a SharePoint content editor to display some results, and waiting for the data to appear. If it doesn't appear, it's not our problem!
Ownership of the problem
A big indicator of the effect of service delivery on SharePoint is the tension between what the business wants and what IT wants in order to manage and control what the business inevitably gets.
Implementation of any solution needs to have end-to-end ownership and full human communication. Relying on sensor data is not enough.
Integration with 3rd party systems
All software and devices produce analytical data concerning the output being generated, the performance of the tool, and more.
Therefore, it would be safe to say that a third party system integrated with any application would produce log information, gathered from its use. The output of that information would need to be in a form that is understood (as a dashboard, for example).
So, what happens when that software breaks? Who is responsible and how crucial is the provision of the data being produced by the software?
We need to feel like we are part of the picture
To be part of the picture, we need to understand the role that service delivery plays in the application of IOT technologies in our companies. When reading about IOT, people tend to think of everything, which literally means billions of devices and sensors. In reality, service delivery is the key factor, meaning that what we need to focus on is what matters most to the organisation we work in.
A long time ago, when I first got into SharePoint, someone said to me that 'What you can do in SharePoint is only limited by your creativity'. The same can be said for embracing IOT; just looking through some of the areas I have mentioned in this article shows you that. And whilst you may be tempted to apply that creative thinking to immediately reaching utopia (a complete integration of devices and sensors into your workplace, and a sudden productivity increase from the mass of data available), it is vital that you at least identify the vision, what needs to be connected, and why.
Therefore, from a service delivery perspective, and to ensure that a thought process is defined, I think there are key areas which will need focus from both IT and business groups.
In Conclusion
As stated in this article, recent research by McKinsey says that by 2025 around 13 billion devices will have sensors with Internet capability, which allows them to be remotely monitored, automated and controlled. Devices such as traffic cones, electricity meters, door lighting, room lighting, temperature systems and much more will provide insights leading to new and amazing business opportunities, and service delivery information identifying (and even controlling) customer experiences.
IOT is not a fad (though some feel it is over-hyped). The impact of IOT technology may not be felt so much by the current generation, like me as a father of two children, but rather through our children's interpretation of this technology. They will be more readily in tune with it than we are, since it is being taught in schools and they have ready access to things like tablets, smartphones and the like.
The joke about 'I have decided to watch soap operas on TV, so I connected the smart washing machine to the smart TV' may be closer than you think. Imagine, for example, that a sensor on the smart washing machine talks to the TV, advising you in the corner of the programme you are watching how long it will be until the washing is complete, or informing you what the weather outside is like.
In essence, IOT presents delivery impacts through the provisioning of solutions that provide information from a number of data sources, human and machine generated. This means taking functionality already available in existing technologies and then finding ways to merge machine data into it to provide dashboard information. The issue, however, is that further work must be done concerning the platform used to present that data. The objective of SharePoint has always been to allow individuals to manage and centralise content on their websites. The road to that vision for users is not automatic. Neither is the road to the provision of machine-to-machine data combined with human-generated data going to bring immediate utopia.
IOT should be treated as an organisational imperative. This means starting small: examining what you have in your organisation in terms of technologies, and finding ways in which you can further automate and integrate by connecting to devices within the organisation. Utilise the skill-sets available to ensure that you can identify nuggets of information. Once done, find ways of provisioning that information to systems that allow users to access the data through dashboards.
As a keen SharePoint follower, I believe there are a number of other points which you should consider, particularly from the service delivery angle, as both business and IT actions; SharePoint can help. Information concerning your organisation's business policies, rules and guidance on IOT can be provisioned through SharePoint. Consider the following actions:
What are your thoughts about SharePoint and the Internet of Things? Have you dabbled in either or both? Let us know your thoughts below or via @technetuk.
Alan Richards has been working in IT for over 17 years and during that time has been at the forefront of using IT. He has led teams that have been among the first to roll out Windows, Exchange and SharePoint. Alan is currently Senior Consultant with Foundation SP, a SharePoint services and delivery company. He's a Microsoft MVP and a regular blogger and speaker at various events.
Is the cloud the right tool for your business and how do you know? As the IT Pro in your business you are, at some point, going to get asked this question or a variation on it. And it’s a perfectly valid question as the cloud is not always going to be the right choice for a business and in some cases it’s the completely wrong choice.
The cloud is at its core a set of technologies that many IT Pros are already familiar with and in some cases have been using for years in their various on-premises guises. The cloud consists of: Exchange for email, SharePoint for document management, Lync for communication and Azure AD for identity management. These equate directly to the onPremise options: Exchange 2013, SharePoint 2013, Lync 2013 and Windows Server for AD.
SharePoint
There are obviously some technical differences between the onPremise and cloud versions; for example, not all the functionality found in SharePoint 2013 can be found in SharePoint Online. The reason for these differences is that the cloud runs a multi-tenancy model, where multiple businesses host their services on Microsoft servers, and some functionality found in the ever popular onPremise version of SharePoint 2013 would cause too many security issues if it were enabled in the cloud.
SharePoint is a big product in lots of ways: besides being very popular, it can also be big in data terms, and data storage always has a cost associated with it. Whether you use onPremise SharePoint or SharePoint Online, you are going to have to pay for storage, whether that is the physical storage devices or the online storage that comes with Office 365. There is also the maintenance of that storage; with Office 365 it’s not your problem, it’s all down to Microsoft to ensure the uptime and availability of storage.
While on the SharePoint subject, let’s also look at apps and code. Lots of businesses use 3rd party tools or custom code in their onPremise versions of SharePoint to provide additional functionality. When considering if the cloud is the right choice for your business, you need to understand the limitations on 3rd party applications and custom code. Because of the multi-tenancy model of Office 365, Microsoft obviously has to limit what custom code can do, as it may impact other tenants or create security holes.
Another consideration is, of course, security. Microsoft adheres to standard security procedures and conforms to the EU Safe Harbour agreement and the UK Government’s IL2 data security level. But at the end of the day these may not be enough for you. You need to consider the nature of your data and your data security policies: is it mandatory, for example, that no data leaves the confines of the business? What are your data retention policies? What are your policies on backup and restore timescales? These are all valid considerations and you need to take them into account before you choose the cloud.
Managing SharePoint online is also a familiar experience, as with the onPremise version it is all managed from the web browser but with the added support of being able to carry out many of the tasks using PowerShell from your local PC.
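As a sketch of that local-PC management experience, something like the following works with the SharePoint Online Management Shell; the tenant URLs, account name and site details here are placeholders, not a real environment:

```powershell
# Connect to the SharePoint Online admin centre from a local PC.
# 'contoso' is a placeholder tenant name.
Import-Module Microsoft.Online.SharePoint.PowerShell
Connect-SPOService -Url 'https://contoso-admin.sharepoint.com'

# List existing site collections...
Get-SPOSite

# ...and create a new team site with a 1 GB storage quota.
New-SPOSite -Url 'https://contoso.sharepoint.com/sites/projects' `
            -Owner 'admin@contoso.com' `
            -StorageQuota 1024 `
            -Template 'STS#0' `
            -Title 'Projects'
```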
Exchange
You send email you receive email, that’s it right?
Well, if that’s all you do then great, go cloud, but for some businesses there may be more to take into consideration.
As IT Pros, we all know that email is one of the security holes that can allow all manner of problems into our network. For that reason, whatever version of Exchange or whatever other email server you may be using, we all have the most powerful (and sometimes restrictive) antivirus, anti-spam and anti-malware software we can find installed on our mail servers, or take advantage of hosted email security services.
Exchange Online has built-in malware and spam filters, and as with the onPremise version, facilities such as connection filtering are also available. Exchange Online is also preconfigured not to act as an open relay, so you are safe in the knowledge that your server won’t be used for relaying spam.
If you use a hosted mail filtering service then you can still use that as well; Exchange Online provides the ability to create secure connections with hosted mail filtering services so that your server will only accept mail sent from your hosted service. You can also configure Exchange Online to send outbound mail via the same hosted service.
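As a rough sketch of restricting inbound mail to a hosted filtering service, an inbound connector along these lines could be created (the connector name and IP range are placeholders; substitute your provider's published addresses):

```powershell
# Create a partner inbound connector in Exchange Online that only
# accepts mail from the hosted filtering service's IP range.
# '203.0.113.0/24' is a documentation-range placeholder.
New-InboundConnector -Name 'HostedFilterOnly' `
    -ConnectorType Partner `
    -SenderDomains '*' `
    -SenderIPAddresses '203.0.113.0/24' `
    -RestrictDomainsToIPAddresses $true
```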
Of course, Exchange Online also comes with all the usual features, such as mail users, mail contacts and mail flow settings, and you can connect to it from anywhere you can get internet access.
In my opinion, Exchange Online provides you with all the functionality you could ever want from a mail server, with the added advantage that you don’t have to worry about the availability of the services or maintaining the servers with updates and patches; that’s all down to Microsoft.
And as with SharePoint you can either manage Exchange online from the web browser or using PowerShell from the comfort of your own PC.
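For example, a remote PowerShell session to Exchange Online can be established from your local PC like this; the credentials and mailbox identity below are placeholders:

```powershell
# Open a remote PowerShell session to Exchange Online.
$cred = Get-Credential   # tenant admin credentials
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# The familiar Exchange cmdlets are now available locally.
Get-Mailbox -ResultSize 10
Set-Mailbox -Identity 'alan@contoso.com' -IssueWarningQuota 45GB

# Tidy up when finished.
Remove-PSSession $session
```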
Lync
Lync is a communication tool, and the online version provides full functionality for all the classic functions such as IM and PC-to-PC audio and video calls.
If you have Lync onPremise then the move to the cloud will not mean any reduction in features unless you are using it as your business telephone system. Lync online can’t provide telephone functionality so if this is one of the ways in which you use Lync then a move to the cloud may not be for you.
For all other functionality, Lync Online could be the answer. In a previous life I ran Lync onPremise with full telephony functionality, and the management overhead, along with the number of servers required, can be quite daunting. Where I work now we are fully cloud based, and one of the main advantages is Lync Online: we can communicate internally, with the added advantage of allowing clients to federate with us, and being able to set up online meetings with clients means we cut down on expenses and travel time.
For me Lync online is a no brainer when it comes to moving some services to the cloud.
Azure AD
Azure AD is the identity provider for all of the Office 365 products; this is what you are interacting with when you create users in the Office 365 admin portal.
At the moment Azure AD is not a replica of Windows AD, it just can’t be, that would be too much of a security nightmare for Microsoft.
I suppose the biggest consideration when moving to the cloud is how your users are going to log in to their services; we all know how hard it is for users to remember passwords. Creating accounts in Azure AD can be done in two ways: either through the cloud or using Directory Synchronisation.
Creating users in the cloud is done using the browser, either individually or by using a CSV file. However, this will of course mean that your users will have different passwords from their normal network password.
Directory Synchronisation uses a tool aptly called DirSync that, once configured, synchronises your Windows AD user accounts with Azure AD and therefore all your Office 365 services. Best of all, DirSync has the ability to synchronise passwords, so users will be able to log in to Office 365 using their email address and their current network password.
And of course, as with the other Office 365 products, you can manage Azure AD using PowerShell. In fact there are a number of blog posts on the internet that provide collections of scripts for administering not only Azure AD but the whole Office 365 tool set.
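As a sketch of what that looks like with the MSOnline module, the snippet below creates a cloud user, assigns a licence, and bulk-creates users from a CSV. All user details, the CSV path and the licence SKU name are invented placeholders:

```powershell
# Manage Azure AD from a local PC with the MSOnline module.
Import-Module MSOnline
Connect-MsolService   # prompts for tenant admin credentials

# Create a single cloud user and assign a licence.
# 'contoso:ENTERPRISEPACK' is a placeholder SKU; list yours with Get-MsolAccountSku.
New-MsolUser -UserPrincipalName 'jane@contoso.onmicrosoft.com' `
             -DisplayName 'Jane Smith' -UsageLocation 'GB'
Set-MsolUserLicense -UserPrincipalName 'jane@contoso.onmicrosoft.com' `
                    -AddLicenses 'contoso:ENTERPRISEPACK'

# Bulk-create users from a CSV with UserPrincipalName and DisplayName columns.
Import-Csv 'C:\users.csv' | ForEach-Object {
    New-MsolUser -UserPrincipalName $_.UserPrincipalName `
                 -DisplayName $_.DisplayName -UsageLocation 'GB'
}
```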
So, in summary, the decision to go to the cloud is a big one and should be taken after you have considered what services you currently run and whether they can be replicated in Office 365. Do you need all those services? What are your storage requirements; do users constantly upload and view large files which will put a load on your internet bandwidth? What are your security requirements, and does the current Office 365 security feature set meet them? How do you want users to access Office 365?
For me, the decision to move to the cloud should come at the end of an evaluation of your current services: how utilised are they? What is used and what isn’t? If you have custom code or use 3rd party applications, do you still need them, and are there versions compatible with Office 365?
Let us know in the comments section below or via @TechNetUK.
Rick Delgado feels blessed to have had a successful career in the tech industry and has recently taken a step back to pursue his passion of writing. He's started doing freelance writing where he occasionally works with tech companies like Dell Computers. He enjoys writing about new technologies and how it can help us and our planet.
A lesson many organizations have learned is that when new demands arise, changes to existing systems have to follow. As big data continues to create vast new opportunities, businesses are finding that their current technology needs to be updated to keep up with these ever-changing demands. One area in particular that is receiving more focus is storage, where current equipment is being given a second look to better handle the new and varied big data projects many companies are implementing. For years, much of the debate has centered around flash storage vs. hard drive, but many view that as an oversimplified debate that focuses on the extremes. While the majority of businesses and technology leaders will concede that legacy storage systems are in need of an extensive update, the discussion has expanded past the two options that are most often talked about and now includes hybrid storage arrays.
First, it’s important to look at both flash (solid-state drives or SSD) and hard disk drives (HDD) when used for storage. Most of the argument can be boiled down to cost and performance. Hard disk drives have been around for decades and are much cheaper to manufacture when looking at cost per gigabyte. The downside is that they suffer from poorer performance when compared to alternative storage options. With big data a more important aspect for businesses than ever before, realisation has set in for many companies that hard disk drives have difficulty keeping up with big data’s demands. Solid-state drives have much faster performance and can handle the increased workloads more easily, but businesses need to pay a higher price (monetarily speaking) to capture that performance. In short, one side features lower costs but poorer performance, while the other is more expensive but much faster.
With so much emphasis placed on getting faster performance, some companies are choosing to invest in all-flash storage arrays. Instead of using the more traditional HDD systems, they’re choosing to adopt nothing but flash. The benefits of this approach include more than just improved performance through a reduction in read latency; all-flash storage also has lower operating costs than those seen with hard disk drives. When comparing the two, all-flash requires less energy to run, less physical floor space, and generally less energy to keep the hardware cool. There are also plenty of reasons organizations choose to go this route. Perhaps their operations require the sub-millisecond latency that all-flash arrays offer. Maybe the business prioritizes a performance-optimized strategy for storage. Or maybe a company prefers to use a system that has a guaranteed quality of service. Either way, with costs for flash storage dropping, more organizations are finding all-flash more affordable.
But it may not be necessary for businesses to contemplate the extremes on this issue. Another option has been developed which may satisfy the concerns many organizations have, and it comes in the form of hybrid storage arrays. Hybrid storage works much like it sounds, combining flash storage with hard disk drives in an effort to get the benefits of both methods. One way it does this is by making storage more economical for those businesses with budget concerns. In one study from the United Kingdom, three quarters of enterprises said the cost of all-flash arrays prevented them from using it.
By mixing flash with HDD, costs come down, but what does hybrid storage mean for the other major factor, performance? While hybrid storage performance may not match that of all-flash arrays, supporters say the gap isn’t that wide. They contend that its cost-to-performance ratio exceeds that of all-flash storage. Supporters also claim that in some cases hybrid storage performance can almost equal flash, provided the cache is expanded sufficiently. However you look at the technology, hybrid storage arrays appear to provide a balance of cost and performance that meets the demands of budget-conscious businesses where latency wouldn’t be a significant problem.
There is little question that storage systems need to evolve to meet the demands of an increasingly data-driven world. How businesses meet those needs will largely depend on what they can afford and what type of performance is required. Both all-flash arrays and hybrid storage arrays can be excellent choices for handling so much new big data. A careful examination of the pros and cons of both will help organizations prepare for what they need to adopt.
Do you think hybrid storage arrays are the answer to the SSD vs HDD debate? Let us know in the comments section below or via @TechNetUK.