Kevin Remde's IT Pro Weblog
“Monday? It’s Thursday, dummy.”
Right. But it feels like Monday. Today is the dreaded first day back to work after a nice long Christmas and New Year vacation. For many (like me), it’s the most difficult-to-get-out-of-bed morning there is.
So this note to you (in the form of a blog article) is just my way of welcoming you back, and wishing you all a productive return to your normal routines.
I sincerely hope you all had as nice a Christmas break as I did. And you can all un-decorate the house and toss out the tree on Saturday, but for today and tomorrow, let’s kick some serious I.T. butt.
…once we get caught up on e-mail, that is.
Here’s a quick post with some of my photos so far. Trying out a neat photo album feature by just dragging multiple photos into Windows Live Writer.
Included are a couple of shots from the Krewe Meet-n-Greet last night, and random shots of people and food.
Now I’m off to another session…
Welcome to another in our new series of “Modernize your Infrastructure” articles. Today I’m pleased to share with you the details of yet another free and easy-to-use assessment tool from Microsoft. The purpose of this tool is to help you answer the following important question:
“Are my servers and services able to be migrated to Microsoft Azure?”
And that is a fair question; particularly if we see the value but don’t really know where to begin. If, in the process of modernizing my infrastructure, I’m considering moving some (or all) of my servers – whether they’re physical or virtual machines – off of my local hardware and into “the Cloud” as Microsoft Azure-hosted virtual machines extending my datacenter, then it would be good to have a starting-point assessment to help me learn about and consider what might be required; and even better if it was based on my current environment and some initial goals and desires.
And that’s what the Microsoft Azure Virtual Machine Readiness Assessment is all about.
It’s a free and easy-to-install tool that, when run on a supported OS and with the proper credentials, will ask you a number of questions about your environment, your needs, and your end goal, and will produce a lengthy report based on your answers and, importantly, based on what it was able to detect in your infrastructure.
“Can you show it to me?”
Showing you the whole process would be overkill here. But how about I show you some of the highlights.
Requirements and Installation
The download page is where you’ll find a good description of how and where the tool can be run. In basic terms, it will run on Windows Vista / Windows Server 2008 and anything newer. It does have some .NET Framework requirements as well. The instructions are pretty simple:
1. Download and run WAVMRA.EXE on the computer you want to run the assessment from
2. Complete the installation steps
3. Launch the tool
4. Select the technology you want to assess and proceed through the wizard experience
On the workstation you’ve installed the tool on, make sure you run it as an administrative account that has rights to administer Active Directory, SharePoint, or your SQL Servers (whatever it is you’re interested in assessing).
Naturally, the first question you are asked is “What would you like to assess?”
Your answer here will determine some of the remaining questions concerning what kind of connectivity, applications, availability, and performance you’re going to require.
Let’s say that in my example I’m going to want to extend my Active Directory domain into the cloud. Using my single corporate domain, I want to extend authentication to other applications that I want to host on virtual machines in my Microsoft Azure network.
Prior to the remainder of the questionnaire, you are reminded of the requirements for this tool to be able to run successfully:
Answer the Questions
The rest of the process, prior to scanning your environment and generating the final report, is a set of additional questions. In my scenario, I’m asked 13 more. “All questions must be answered as part of completing this assessment.” Here are a few samples:
Note that each question provides additional detail about what’s being asked, and you are often given the option to basically say “I don’t know yet”. Trust me – the report will give you excellent detail on, and pointers to additional information about, all of the options available.
The tool generates a Microsoft Word .docx file that you can save, print, share.. whatever you want to do with it. Inside you’ll find a detailed report on what you’ve chosen, what’s required of you, and links to additional information and further learning around your next steps. The report is organized into three parts: “Ready”, “Set”, and “Move”.
And then shows you “What we checked”, with a quick visual indication of which items are fine, and which ones should probably be looked into further.
And that’s it!
Hopefully you’ll find this a useful first-step into extending your infrastructure into the Microsoft Azure cloud.
Go Forward “To the Cloud!”
In today’s article in the “Why Windows Server 2012 R2” series, I’d like to show off a new feature in Hyper-V; something I like to call the “Replica Replica”.
As many of you know, Microsoft introduced a new, powerful tool for your disaster recovery (DR) tool belt called Hyper-V Replica back in Windows Server 2012 Hyper-V and Hyper-V Server 2012. For those of you who are not yet familiar with it, a Hyper-V Replica is an easily created and up-to-date offline copy of a virtual machine. On some other host – either in your local datacenter or in some remote one – you have a copy of a virtual machine that can be available in case of disaster. If something bad happens to the production machine, you can fail over to the replica virtual machine very quickly.
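For reference, the first replica can also be set up with the Hyper-V PowerShell cmdlets rather than the GUI wizard. Here’s a minimal sketch; the host and VM names are my own examples, and I’m assuming Kerberos authentication over port 80 has been enabled in the replica host’s replication settings:

```powershell
# Enable replication of the VM "DukeN" to the replica host "HostB"
# (names here are examples; adjust to your own environment)
Enable-VMReplication -VMName "DukeN" `
    -ReplicaServerName "HostB.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos

# Kick off the initial copy over the network
Start-VMInitialReplication -VMName "DukeN"
```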
For a most-excellent description of what Hyper-V Replica is and how to set it up in Windows Server 2012 Hyper-V, check out this blog post from the series “31 Days of our Favorite Things” -
Windows Server 2012 and Hyper-V Replica (Part 5 of 31)
“So, what’s new in R2? What’s this ‘Replica Replica’ you talk about?”
We’ve added the ability to create yet another replica. It’s a replica of the replica. It’s an additional offline copy of a virtual machine and its configuration, made available, synchronized, and automatically kept up-to-date on yet another Hyper-V host. Interestingly, the request came from our many hosting providers, and it makes a great deal of sense in their scenario, where they are the ones hosting a replica on behalf of their customers. It only makes sense that they would love to have a backup of the replica they’re hosting.. so why not make it a replica of the replica?
Yeah, I thought so, too.
“How does it work?”
It’s very simple. After you’ve created the first replica, you right-click on the replica machine and select “Extend Replication…”. In my example, I have already set up a replica of my domain controller, and I’m going to extend the replication and put a replica of the replica on my Hyper-V Server named HVSR2-1…
The wizard looks and works very much like setting up the initial replication does. Once you get past the Before You Begin screen…
…you choose or browse to the server you want to put the replica on (the Replica server)…
You pick the type of authentication you want to use (based on what has been enabled in the Replication Settings on the Hyper-V Host settings)…
You pick a replication frequency.
NOTICE that I have two choices here, because I had selected the primary replica as sending changes every 5 minutes. Your choices will depend upon what you selected for the first replica frequency.
You may not know this (yet), but Hyper-V Replica in Server 2012 R2 allows for more than just the 5-minute intervals that were in the original Hyper-V Replica in Server 2012. You can have replication send changes every 30 seconds, 5 minutes, or 15 minutes for the first replica. For the extended replica, you must replicate at an interval that is equally or less frequent than the first replica; the exception being that you cannot replicate to the extended replica at the 30-second interval.
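In code form, those constraints work out to something like this little lookup (intervals in seconds; derived from the rule just described):

```powershell
# Allowed extended-replica intervals for each first-replica interval.
# Rule: extended must be equally or less frequent than the first replica,
# and 30 seconds is never allowed for the extended replica.
$extendedOptions = @{
    30  = @(300, 900)  # first replica: 30 seconds -> extended: 5 or 15 minutes
    300 = @(300, 900)  # first replica: 5 minutes  -> extended: 5 or 15 minutes
    900 = @(900)       # first replica: 15 minutes -> extended: 15 minutes only
}
```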
Here’s a quick chart that shows the extended replication interval options available based on the first replica interval selected:
Getting back to our wizard; now we select how many recovery points we want to maintain of the extended replica…
We select an initial replication method, plus when to launch the initial replication if requested…
Check the summary…
And Finish. We’re done. And the first extended replication is now going over the wire.
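If you prefer scripting to the wizard, my understanding is that the same extension can be done with the Hyper-V cmdlets by running Enable-VMReplication on the Replica server (Host B in my example) against the replica VM, pointing it at the extended-replica host. A sketch, with my example names:

```powershell
# Run on the Replica server, against the replica copy of "DukeN",
# pointing at the extended replica host HVSR2-1 (names are examples)
Enable-VMReplication -VMName "DukeN" `
    -ReplicaServerName "HVSR2-1.contoso.com" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ReplicationFrequencySec 900    # every 15 minutes

Start-VMInitialReplication -VMName "DukeN"
```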
Pretty cool, huh?
“Pretty cool. So now I can failover to either of my two replicas?”
Now, if I right-click on the first replica…
I see that I have similar options to what I had back in Hyper-V 2012. But now I have an additional “Pause Extended Replication” option as well.
Here’s a failover scenario for you…
Let’s say I have a virtual machine “DukeN” running on Host A, with replica on Host B and extended replica on Host C.
Host A goes down. So I right-click on the “DukeN” machine and select Failover…, and DukeN fires up and is now running on Host B.
If I right click the newly running VM and look at the Replication options I have now on the failover machine, it’s pretty interesting…
I can “Reverse Replication”, which means I can now treat this running (but still considered a replica) machine as the primary machine, and begin replication back to what was the primary location. Note: if you do this, it essentially “orphans” the old extended replica. You’ll have to re-extend the replication if you want to.
I can “Remove Recovery Points..”, which cleans up any other recovery points still saved for this replica.
I can “Cancel Failover”, which will shut this replica down and assumes that the original machine is now available and can be started.
I can “Resume Extended Replication”. This one is interesting to me. It assumes that Host C (containing the extended replica) is still available. When selected from Host B, then Host B becomes the main VM and the copy on Host C becomes the first replica. Once a synchronization process is completed, you can then go to the VM on Host C and Extend Replication to another host (Host D?).
Good stuff? Try it out yourself by downloading the evaluations of either Windows Server 2012 R2 or Hyper-V Server 2012 R2. And let me know if you have any comments or questions by posting them in the comments section.
Yesterday in our “Modernizing Your Infrastructure with Hybrid Cloud” series, Matt Hester described how to create a virtual network “in the cloud” in Microsoft Azure in order to support cloud-based Virtual Machines and their ability to communicate with each other and with the outside world. We of course have the ability to connect to our VMs individually using Remote Desktop connections, but if we’re going to treat the location of these cloud based machines as just an extension of our own datacenter, we’re going to want to have a secured connection to them.
That’s what the VPN Gateway is all about.
In this article I’m going to show you step-by-step how to connect your Azure virtual network to your on-premises network. Here are the steps we’ll go through:
And as an added bonus, I might throw in a little something extra.
I hope you’ll think so. But for starters, let’s begin where Matt left off. I have an Azure subscription with a virtual network named AzureNet1, located in the South Central US datacenter region. In my scenario, I want to connect this Azure network to my Fabrikam office (Fabrikam was recently purchased by Contoso). Once the connection is established, I will want to join servers in that office to the contoso.com domain.
Here’s what the AzureNet1 network dashboard tab currently looks like. Note the two virtual machines currently in this network; a domain controller and an application server.
As you can see on the configure tab, I’ve set up two subnet ranges (their purposes are obvious from the names I’ve given them) within the 8-bit-masked 10.x.x.x network (10.0.0.0/8):
Notice that I’ve also defined my DNS server as 10.0.0.4. My domain controller has that address.
Collect Some Information
Before we start adding the site-to-site connection, I need to collect some information so that I can carefully use it to make the correct configurations. As you probably know first-hand, when doing networking configuration it’s easy to make simple little mistakes that cause everything to NOT work, so let’s make quick note of a couple of important items:
Your local network address range refers to the addressing of your local network. By that name, though, it’s a little misleading. “Local” assumes you’re connecting your Azure network to some “Local” office. But in reality it could be some other branch office or even another virtual network somewhere else in the Azure world. So, just think of “local network” as being “the network I’m connecting to my Azure network”. And I’ll keep using “local network” in “quotes” throughout the rest of this article for just that reason.
In our example, my Fabrikam network is 192.168.0.0 with a 16-bit subnet mask. (192.168.0.0/16)
The Gateway Address is the externally accessible IP address of the gateway. In the Fabrikam network, let’s say that I have a VPN device connected to the Internet with an external Internet-exposed address of 22.214.171.124. I will also have a gateway address on the AzureNet1 gateway, but that address will be assigned when I create the gateway for my virtual network. So, in simple terms, the gateway address is the connection point on either end of the VPN connection.
Define the “Local Network”
Before enabling the site-to-site connectivity and creating the gateway, we need to define the “local network” Fabrikam, so that our network knows what addresses it will be routing to over the VPN through the gateways.
To define my “local network” (which I’ll name “Fabrikam”), I clicked on +New in the bottom-left corner of the Azure portal, and selected Network Services –> Virtual Network –> Add Local Network.
I give my local network a name and, optionally, the gateway address (I can add it later if I don’t know it right now).
Then on the next screen I add the address spaces that exist at my “local network” at Fabrikam.
Once created, you’ll see it in the list on the local networks tab.
Back on my AzureNet1 network and on the Configure tab, now I can check the box to enable Site-to-Site Connectivity. Notice that a couple of things change. I now will choose which “local network” I’m going to connect to (Fabrikam), and it also requires (and defines) a “Gateway Subnet” for me.
“Hey Kevin.. What’s that ‘ExpressRoute’ option?”
That’s actually what my friend Keith Mayer is going to cover in tomorrow’s article in the series. I’ll include the link to his article after it’s published.
UPDATE: Here is Keith’s article - Modernizing Your Infrastructure with Hybrid Cloud - Step-by-Step: Cross-Premises Connectivity with Azure ExpressRoute (Part 16)
Anyway, after checking Connect to the local network, clicking Save starts the process of updating the network configuration. After a couple of minutes it completes, and now back on the dashboard tab we see this:
This means the gateway is defined, but not actually created. That’s our next step.
Create the Gateway
At the bottom of the dashboard screen, click on Create Gateway.
Notice when you click it that you are given a choice between a static and dynamic routing VPN gateway.
“What’s the difference?”
Your choice will be based on a number of factors. Often the VPN hardware you are using will limit you to one or the other. A static routing VPN gateway is one that routes traffic based on policy definitions (which is why it’s often referred to as a policy-based VPN). Packets are routed through the gateway based on a defined policy; an “access list”. A dynamic routing VPN gateway, also known as a “route-based VPN”, is a simple forwarding of packets between two networks: if the local network doesn’t contain the destination for a packet, the gateway is assumed to know where to send it, and if the destination is known by the gateway as existing on the other network, it sends the packet securely through the tunnel. For more information about these choices, and about various devices and gateway types that support either static or dynamic VPN gateways, check out this excellent documentation. Even if your device is not on that list, it may still work if your hardware supports the required gateway type.
In my scenario I’m creating a simple tunnel to a device that supports the other end of dynamically routed VPN, so I’ll choose Dynamic Routing. Creating the gateway does take a good amount of time (as much as 15 minutes), so be patient. Eventually our display will go from this:
…and eventually, this:
Notice that we’ve been assigned an official actual external gateway IP address. We’re still not actually connected. (Connected would be GREEN in color.) We haven’t addressed the configuration of the “local network” side of our connection yet. At the bottom of the page you see a Connect button:
But let’s not click that just yet. We still need to…
Configure the Local VPN Device
Other than collecting some information about our Fabrikam network, we’ve only focused on the AzureNet1 side of our VPN tunnel. We still need to create the gateway on our Fabrikam network.
On the AzureNet1 dashboard, notice this hyper-link towards the right-side of the page:
Clicking on Download VPN Device Script brings up a very interesting page that allows us to specify what kind of hardware (or software) we have on the “local network” side of our connection. The beauty of this is that, based on your selection of hardware (or even Windows Server 2012 / 2012 R2 Routing and Remote Access (RRAS) acting as your gateway), it generates a script that can then be used to automatically configure the gateway on the “local network” side.
Once we’ve selected our Vendor, Platform, and Version, and clicked the check mark, we’re immediately sent a text file containing the configuration script for our selected device.
Use this script to configure your device, establish the connection from the local network, and then come back to the Azure network dashboard and click connect. And if you’ve done everything correctly, you should see something happy (and GREEN) like this:
“What kind of hardware do you have on Fabrikam’s network, Kevin?”
I don’t know. I’m not actually using a local network. For this demonstration, I’ve actually connected my AzureNet1 network, which is located in the South Central US datacenter region, to a Fabrikam virtual network that I host in the Central US datacenter region and manage through an entirely different Azure subscription. So.. I’m doing Site-to-Site between two Azure virtual networks. That’s my “something extra” that I promised earlier. Now I’m going to show you what I needed to do to make that connection work.
Connecting two Azure networks via a Site-to-Site VPN requires two things: both gateways must use dynamic routing, and both ends must use the same shared key.
I’ve already shown you where you choose Dynamic Routing when you create the gateway. And other than the shared key, everything else I did for configuring the Fabrikam network was identical to what I configured in AzureNet1, except that my network in Fabrikam is 192.168.0.0/16 – identical to what I defined the “Fabrikam” “local network” to be on this side. IMPORTANT: These have to match. The range and mask have to be correct and consistent on both ends – both the “local network” definition and the actual network (or Azure virtual network, as in my case) – for this to work.
Also in the definition of the “local network” on either side was the specification of the Gateway IP Address. Again, ordinarily, your configuration script is populated with the Azure virtual gateway’s IP address. But in this instance, I need to create the gateway first, and let it fail connecting, just so I can see what the actual assigned gateway IP address on that side of the connection is going to be. Then I can take that address and configure it into the “local network” definition on the other side.
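Incidentally, rather than reading the assigned gateway IP off the dashboard, the service-management Azure PowerShell module can report it directly. A sketch, assuming you’re already connected to the subscription that owns the network:

```powershell
# Show the state and the assigned public IP of the Fabrikam network's gateway
$gw = Get-AzureVNetGateway -VNetName "fabrikam"
$gw.State
$gw.VIPAddress
```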
As for the shared private key.. Notice at the bottom of the AzureNet1 dashboard that there is a Manage Key button:
If I click this, I can see (and copy) the generated long key. I’ll copy it to the clipboard.
This key was created when we created the gateway, and is included for you in the configuration script on behalf of your “local network” device. But…
“We don’t have a local network device!”
Bingo. And we also don’t (as of this writing) have a way to use the Azure portal to set the shared key in the configuration of the virtual network! But we will need to do that to at least one end of the tunnel to make sure they match. (Or both, if we want to just use our own text as the shared key.)
This is where PowerShell comes in.
I’ve installed the Azure PowerShell cmdlets onto my local system, and then in PowerShell I connected to my Azure subscription where the Fabrikam virtual network resides. And now I use the following PowerShell command to set the shared key for the gateway connecting the Azure network Fabrikam to the (from this point of view) “local network” named AzureNet1.
Set-AzureVNetGatewayKey -VNetName fabrikam -LocalNetworkSiteName AzureNet1 -SharedKey 2kDsdqXnxeXrGjI4r4rLltKKT1g9E9gY
(For the Windows PowerShell command-line tools, go to the Azure downloads page, and scroll down to “Windows PowerShell” section. Instructions for setting this up are found there as well.)
That’s how I was able to get the common shared key into the other side of my connection. After this command completes, and soon after clicking Connect in the dashboard, I was happily sending data back and forth.
“But Kevin… Prove to us that you have the connection established! Finish your domain-joining scenario!”
In the AzureNet1 network I have two servers. One is a domain controller, and the other is a member server. All machines here are assigned their DNS server as 10.0.0.4. They reside in the South Central US datacenter region.
On my Fabrikam network (which, you’ll remember, resides in the Central US datacenter region, so not in the same location as the AzureNet1 network and machines) I have one server that I’ve just created:
Importantly, I’ve also created a “DNS Server” designation here and assigned it to the Fabrikam network, with the 10.0.0.4 address. Note the configure tab of the Fabrikam network.
In this way my machines in this Fabrikam network will be assigned 10.0.0.4 as their DNS server, and so will know how to find the DC in the AzureNet1 network. To verify this I can establish a remote desktop connection to my new karContosoDC2 server and look at the status of the network adapter:
Trusting that my VPN is happily and dynamically routing traffic between Fabrikam and AzureNet1, and knowing that my new server in Fabrikam is going to look for DNS at the domain controller in AzureNet1, I attempt to join the domain:
I am asked for domain credentials (a very good sign!)…
I’m in! That’s proof that I have successfully connected these two virtual networks!
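If you’d rather script that domain join than click through the GUI, a one-liner sketch (run in an elevated PowerShell session on the new server; the domain name is from my scenario):

```powershell
# Join the contoso.com domain, prompting for domain credentials,
# and restart the server when the join completes
Add-Computer -DomainName "contoso.com" -Credential (Get-Credential) -Restart
```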
For more information on configuring secure cross-premises connectivity, check out the official documentation here: http://msdn.microsoft.com/en-us/library/azure/dn133798.aspx
Here are some more specific configurations and their documents:
And be sure to keep watching http://aka.ms/ModernCloud for the full series of articles on modernizing your infrastructure.
My friend Dan Stolts and yours truly continue our series on “Modernizing Your Infrastructure with Hybrid Cloud” with an overview on how to plan for a hybrid cloud storage solution using Windows Server 2012 R2 and Microsoft Azure. Tune in for our lively discussion on the many storage options available to you as well as discussions around performance, reliability and security.
Follow the entire series! http://aka.ms/ModernCloud
Shortened URL if you would like to share on Twitter or Facebook, etc. http://aka.ms/TR140822
For each member of the Southwest Missouri Chapter of the AITP (SWMOAITP) who downloads Windows Server 2012 R2 or Hyper-V 2012 R2 from the below links between 11/4/13 and 11/30/13, Microsoft Corporation (“Microsoft”) will donate USD $2 to the Council of Churches of the Ozarks (a 501c3 organization; see http://www.ccozarks.org).
When SWMOAITP members download Windows Server, we’ll point you to instructive videos, hands-on labs, and more available at http://aka.ms/SWMOAITP. For each completed download by registered SWMOAITP members during the promotional period, a USD $2 donation, up to a maximum USD $3,000, will be made to the Council of Churches of the Ozarks. See the official terms and conditions at: http://aka.ms/SWMOAITP.
Terms & Conditions
Offer good only to legal residents of the 50 United States & D.C. aged 18 or older who are registered members of the Southwest Missouri Chapter of the AITP (SWMOAITP). Offer is not valid where prohibited by law.
Must complete full download from below links between November 4, 2013 and November 30, 2013. Offer good only to the first 1,500 registered members who complete downloads of Windows Server 2012, Windows Server 2012 R2 Preview, Hyper-V 2012 or Hyper-V 2012 R2 Preview until the end of the promotional period, whichever comes first. Limit 1 download per member, and up to USD $3,000 for donation on behalf of SWMOAITP to the Council of Churches of the Ozarks. May not be combined with other offers. This offer will be fulfilled in the form of a monetary donation to the Council of Churches of the Ozarks charity within 90 days after the end of the promotional period. Microsoft reserves the right to modify or cancel the terms of this offer at any time. Your download for the purpose of this offer does not create an employment relationship of any kind between you and Microsoft or otherwise entitle you to compensation or remuneration from Microsoft. Due to government ethics and procurement laws, employees of certain government agencies (including but not limited to military and public education institutions) may not be eligible to participate. It is your sole responsibility to review and understand your employer’s policies regarding your eligibility to participate in offers and promotions. Microsoft employees are not eligible to participate. Microsoft disclaims any and all liability or responsibility for violations of laws, or for disputes arising between an employee and their employer related to this offer. Microsoft reserves the right, as determined by Microsoft in its sole discretion, to disqualify any person not complying with these offer Terms and/or acting fraudulently with the intent to avoid offer restrictions or other limitations.
Windows Server 2012 R2 Download
Hyper-V Server 2012 R2 Download
Thanks to Mary Jo Foley for tweeting about this. Mary Hutson is maintaining a very useful list of “top Microsoft Support solutions for the most common issues IT Pros experience when using or deploying Windows 8 or 8.1.” She updates the list every quarter; the most recent being just two days ago (Aug 11, 2014).
* HERE IS THE LIST * <—Click that
Kudos, Mary! This is a great page to bookmark!
An attendee at our IT Camp in Saint Louis a few weeks ago had a problem that is understandable:
“Thanks for training session, I have a question. Tried to RDP one of my VM’s at work and I can’t connect. Possible firewall port issue? I am going to try and connect from home tonight.”
You're already onto the issue. It’s important to remember that the port that you’re using for RDP is not the traditional 3389.
“It’s not? How does that work?”
Let’s step back for a second and consider what you see when you first create a virtual machine in Windows Azure and you get to the screen where “endpoints” are defined. By default, it looks something like this…
…Notice that, even though the operating system is going to have Remote Desktop enabled and will be listening on the traditional port 3389, the external “public port” value that will be redirected to the “private port” 3389 is going to be something different.
Security. We take the extra precaution of randomizing this port so that tools that are scanning for open 3389 ports out there won’t find those machines and then start attempting to log in.
So the answer to your question: Yes, it’s a firewall issue. And I bet it worked from home later that night.
Let’s go one step further here and propose a couple of solutions to this, in case you also run into this problem.
Solution #1: Open up the proper outbound firewall ports
In the properties of your virtual machine, you can find what “public port” was assigned to the VM under the endpoints tab…
So this web server of mine is answering my RDP requests via my ability to connect to its service URL and port 56537. Since I am not restricting outbound ports, this isn’t a problem for me. But knowing what this port is can help you understand what needs to be opened for a particular machine.
“Is there a range of ports that I need to have open outbound?”
The port that will be assigned automatically is going to come from the “ephemeral port range” for dynamic or private ports (as defined by the Internet Assigned Numbers Authority) of 49152 to 65535. So if you simply enable outbound connections through that range, the defaults should work well for you.
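If you manage your workstation’s firewall with the Windows Firewall cmdlets, a rule like this sketch would open that outbound range (the rule name is my own invention; if the restriction is on your corporate edge firewall, you’d make the equivalent change there instead):

```powershell
# Allow outbound TCP to the IANA dynamic/private port range (49152-65535),
# which covers the public ports Azure assigns to VM endpoints by default
New-NetFirewallRule -DisplayName "Allow outbound to Azure VM endpoint ports" `
    -Direction Outbound `
    -Protocol TCP `
    -RemotePort 49152-65535 `
    -Action Allow
```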
Solution #2: Modify the VM End Points
You’ll note on the above picture that there is an “edit” option. You have the ability to edit and assign whatever port you want for the public port value. For example, I could do this…
…and just use port 3389 directly. Of course, this would defeat the purpose for using a random, non-standard port for remote desktop connections. But it could be done.
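The same endpoint edit can be made from PowerShell with the service-management cmdlets. A sketch, where the cloud service and VM names are placeholders for your own (and I’m assuming the default RDP endpoint name of “RemoteDesktop”):

```powershell
# Change the RDP endpoint's public port to 3389 on an Azure VM
# (service and VM names below are placeholders)
Get-AzureVM -ServiceName "myCloudService" -Name "myWebServer" |
    Set-AzureEndpoint -Name "RemoteDesktop" -Protocol tcp `
        -LocalPort 3389 -PublicPort 3389 |
    Update-AzureVM
```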
Solution #3: Use some other remote desktop-esque tool over some other port.
The server you’re running as a VM in Windows Azure is your machine, so there’s no reason you couldn’t install some other tool of choice for doing management or making a remote-desktop-style connection. Understand the application, know what port needs to be enabled on the firewall of the server, and then add that port as an endpoint; either directly mapped with the same public/private port or using some other public port. It is entirely configurable and flexible. And as long as you’ve enabled the public port value as a port you’re allowing outbound from your workplace, you’re golden.
Solution #4: Use a Remote Desktop Gateway
How about, instead of connecting to machines directly, you do something more secure, more manageable, and along the same lines of what you would consider for allowing secured access into your own datacenter’s remote desktop session hosts: configure one server as the gateway for access to the others. In this way you have the added benefit of just one open port; and that port is the SSL port (443). You’re very likely already allowing out port 443 for anyone doing secured browsing (HTTPS://…), so the firewall won’t get in the way.
I hope you found this useful! Don’t hesitate to ask questions in the comments if you’d like me to clarify anything, or share your ideas if you have other solutions I haven’t yet considered.
Still haven’t tried Windows Azure yet? We’ll give you $200-worth of Azure in a one-month free trial.
A couple of months ago I had the privilege to interview Brad Anderson. Brad is a Sr. VP at Microsoft, responsible for the System Center and Windows Server product lines. So…
“So this guy knows what he’s talking about?”
Exactly. As a companion to his blog - In the Cloud – we recorded these three interviews around his nine-part “What’s New in R2” blog series. So for today’s article in our current “Why Windows Server 2012 R2” series, I thought I’d give you another opportunity to hear what Brad has to say. Here are the videos, and I’ll include the links to his blog series posts below as well. Enjoy!
TechNet Radio: (Part 1) - What’s New in 2012 R2 - Empowering People-Centric IT
TechNet Radio: (Part 2) What’s New in 2012 R2 – Transforming the Datacenter
TechNet Radio: (Part 3) What’s New in 2012 R2: Enabling Modern Business Applications
Brad Anderson’s “What’s New in 2012 R2” Series
Websites & Blogs:
Follow @technetradio Become a Fan @ facebook.com/MicrosoftTechNetRadio
Follow @KevinRemde Become a Fan @ facebook.com/KevinRemdeIsFullOfIT
Subscribe to our podcast via iTunes, Stitcher, or RSS
When you’re doing a Live Migration** of a virtual machine between Hyper-V hosts, you want it to go quickly. You may be migrating one, several, or dozens of virtual machines all at once, and the performance of the network and the network paths you choose are going to determine how quickly you can get the job done. Yes, sure, in one sense it doesn’t matter how long it takes if the VMs will continue to run and provide service during the migration. But if I’m doing, say, an automated update of all of the hosts in my cluster, and allowing it to drive the live migrations of machines among hosts, the speed with which those migrations complete will ultimately determine how long it takes to complete the updates of all of those hosts. If I’m really maxing out the capabilities of Hyper-V in Server 2012 R2 or Hyper-V Server 2012 R2, that could mean as many as 8,000 virtual machines moving around and among 64 clustered hypervisor nodes. So, speed is still important.
In the past, memory of a running virtual machine was just sent over the wire (TCP/IP) as it was. Nothing special was done to it. But as hardware costs have improved to support larger and larger scale, and as we’re afforded the ability to run more virtual machines with more and more memory, we certainly want to do everything we can to make that transfer of memory and configuration data go as quickly as possible. So to address this and improve things, we’ve added two new technologies to Hyper-V in Windows Server 2012 R2 and Hyper-V Server 2012 R2: Live Migration with compression, and Live Migration via SMB Direct (RDMA).
Let’s talk about those, shall we?
Live Migration With Compression
Did you know that your hypervisor host isn’t typically suffering much when it comes to processor capacity?
“I didn’t know that.”
It’s true. So, what we’re going to do is borrow some extra CPU cycles while we’re doing a live migration, and actually compress the migration data before it goes over the wire, and decompress it at the destination.
If it sounds just that simple, well, it is. And it’s just a simple choice in the Live Migrations –> Advanced Features settings on your Hyper-V hosts:
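If you’d rather script it than click through the GUI, the same setting can be flipped with the Hyper-V PowerShell module on the host:

```powershell
# Choose compression for live migration traffic on this host
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Verify the current setting
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption
```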
And as if that wasn’t good enough…
Live Migration via SMB Direct (RDMA)
In Windows Server 2012 we introduced a new version of SMB – SMB 3. Among other things, this version of the protocol greatly improves performance; even to the extent that we can trust a basic file share to be the location for live data such as a virtual machine’s hard disks and data disks, or a SQL Server database. (Click here for a good summary of what SMB 3 provides.)
SMB Direct (SMB over Remote Direct Memory Access, or RDMA) is technology that, given hardware (the NICs) supporting it, can establish an efficient memory-to-memory transfer of data. In Server 2012 the main beneficiary of this was faster file services. But in R2 we’re using this to send live migration data between the Hyper-V hosts.
So now instead of just sending the memory and configuration of a VM over the wire using TCP/IP, or compressing it first, we’ll use a direct memory-to-memory channel.
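As a sketch, selecting SMB as the live migration transport is the same host-level setting; this assumes RDMA-capable NICs are in place for SMB Direct to kick in:

```powershell
# Prefer SMB (and thus SMB Direct, when the NICs support RDMA) for live migrations
Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
```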
Can you say “FAST”?
I knew you could.
“But, can you give me an example? Can you show me how they compare?”
The best example I can give you is Jeff Woolsey’s demonstration he did for the TechEd 2013 North America keynote this past June.
Click this link to watch his demo (at 1:56:15) : TechEd 2013 North America Keynote Video – Jeff Woolsey’s Live Migration Demo
And for a more detailed description of Live Migration and the improvements made, check out this page: Virtual Machine Live Migration Overview
Questions? Comments? Make sure you add them to the comments at the bottom of this post! And try it out yourself by downloading the evaluations of either Windows Server 2012 R2 or Hyper-V Server 2012 R2.
**That’s a ‘vMotion’ for those of you who are more familiar with the VMware terminology.
In this episode I welcome Bruno Saille to the show. We discuss the SQL Server Self-Service Kit and how it works with System Center 2012 to help automate SQL Server deployments. Tune in as we discuss how the self-service kit works, which System Center components are required, as well as what plans are in store for the next release.
If you're interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:
Experience Microsoft's latest products with these FREE downloads! Build Your Lab! Download Windows Server 2012 R2, System Center 2012 R2 and Hyper-V Server 2012 R2 and get the best virtualization platform and private cloud management solution on the market. Try it FREE now!
Don't Have a Lab? Build Your Lab in the Cloud with Windows Azure Virtual Machines. Try Windows Azure for free with no cost or obligations, and use any OS, language, database or tool. FREE Trial
Follow the conversation @MS_ITPro Become a Fan @ facebook.com/MicrosoftITPro
Connect with Kevin @KevinRemde Become a Fan @ facebook.com/KevinRemdeisFullofIT Subscribe to our podcasts via iTunes, Stitcher, or RSS
This is pretty cool.
As the title says: System Center 2012 R2 Data Protection Manager is now an application that Microsoft will support when running inside a virtual machine in Microsoft Azure.
I’m sure they won’t mind me sharing this.. but here is the text from an e-mail I received on the subject that spells it out nicely:
We are pleased to announce that System Center Data Protection Manager (DPM) is now supported to run in Azure as an IaaS virtual machine. This announcement allows customers to deploy DPM for protection of supported workloads running in Azure IaaS virtual machines. Customers with a System Center license can now protect workloads in Azure. Read more about it on the DPM blog.
Support for multiple virtual machine sizes
Choose the size of the virtual machine instance that will run DPM, based on number of workloads and the total data size to be protected. Start with just an A2 size virtual machine, and upgrade to a larger size to scale up and protect more workloads.
Support for Microsoft Azure Backup
Protect your data to Microsoft Azure Backup and get longer retention with the flexibility of scaling storage and compute separately. The Microsoft Azure Backup agent works seamlessly with DPM running in an Azure IaaS virtual machine.
Familiar management using the DPM console
With DPM running in an Azure IaaS virtual machine, you get the same experiences and capabilities that you are familiar with.
So, here’s what you should do:
Welcome to another in our series entitled “Modernizing Your Infrastructure with Hybrid Cloud”. As you may be aware, this week the theme is “Management and Automation”. As a part of that theme I’m sharing with you an introduction to Desired State Configuration (DSC); more completely called Windows PowerShell Desired State Configuration.
DSC is a relatively new (less-than-a-year-old) technology, introduced with PowerShell 4.0, that lets IT define what the configuration of a server will be, apply that configuration, and then verify (and remediate) that the configuration is still in place and as desired.
“So, it’s like System Center Configuration Manager?”
No. It’s built-in as a part of Windows, and is configured and implemented using PowerShell. Sound interesting?
Good. In the context of one blog article naturally I won’t be able to go into every detail, but I hope that this article, some simple examples, and some additional resources at the end will get you excited for trying this out. And ultimately that you’ll see the immense value that this will give your IT and, of course, your business.
A Simple Example
For our quick example let’s assume a couple of things. I’ve enabled the Windows PowerShell DSC feature on a server named “Server1”. Server1 is a member server in my domain. I’ll be using an administrative account from another server (called Admin) to apply configuration to Server1.
I open up the PowerShell ISE and enter the following text. Can you tell what it’s doing from what the text says?
“It looks like it’s defining something that’s a ‘Configuration’ and calling it ‘IISWebsite’. And for your server named Server1, it’s laying out what Windows Features should be installed!”
Exactly! And in this PowerShell session, when I execute the configuration, I end up with a .MOF file, which is a definition of how Server1 should have the Web Server and ASP.NET 4.5 features installed and running. All I need to do is run the Start-DscConfiguration PowerShell cmdlet with the proper parameters referring to the .MOF file and pointing to Server1, and DSC configures the features and enforces that they always be there as desired. In fact, even if I or another administrator were to manually remove the ASP.NET 4.5 feature from the server, after a period of time the state would be re-evaluated and the configuration would be fixed!
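The screenshot of the configuration isn’t reproduced here, but a minimal sketch of what such a configuration would look like (server name and features as described above; the output path is a made-up example) is:

```powershell
Configuration IISWebsite
{
    Node "Server1"
    {
        # Ensure the Web Server (IIS) role is installed
        WindowsFeature IIS
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # Ensure ASP.NET 4.5 support is installed
        WindowsFeature AspNet45
        {
            Ensure = "Present"
            Name   = "Web-Asp-Net45"
        }
    }
}

# Executing the configuration generates Server1.mof under the output path...
IISWebsite -OutputPath "C:\DSC\IISWebsite"

# ...which is then applied to (and enforced on) the target node
Start-DscConfiguration -Path "C:\DSC\IISWebsite" -ComputerName Server1 -Wait -Verbose
```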
What if, like those “WindowsFeature” sections, I were to add a “File” section like this:
Basically what I’m saying is, “Here’s the source folder of content that I want you to make sure is always found under this destination.” Ah.. and doesn’t the path look like it might be a web site folder? Yes! This configuration not only enforces that IIS be installed and running, but that the contents of a web application be always there and that the destination code always matches what is coming from the source! Someone could go in there and, say, delete some of the web content, but DSC would fix it automatically!
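The screenshot isn’t shown here, but a “File” section along those lines might look like this sketch (the source path is a made-up placeholder; the destination is the IIS default web root):

```powershell
File WebContent
{
    Ensure          = "Present"
    Type            = "Directory"
    Recurse         = $true
    SourcePath      = "\\Admin\WebSource"     # hypothetical share holding the web content
    DestinationPath = "C:\inetpub\wwwroot"    # the IIS default web root
    MatchSource     = $true                   # keep the destination in sync with the source
}
```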
“Hey Kevin… What’s a .MOF file?”
Yeah.. this was a very quick, very simple example. Let me go through and briefly describe the parts that make up DSC…
The Parts – Configuration
The configuration is what we built in my earlier example. It’s a PowerShell definition that, using “Resources” (defined next), specifies how things should be configured; our “desired state” for the configuration of a target server.
The Parts - Resources
In our example above, you notice that I’m defining what Windows Features are to be installed. I can do this because there is a built-in DSC “Resource” called “WindowsFeature”. From the TechNet Documentation, “Resources are building blocks that you can use to write a Windows PowerShell Desired State Configuration (DSC) script.” Windows comes with a number of these built-in resources that know how to specifically work with, configure, and enforce various aspects of the operating system. Resources for working with the registry, the file system, Windows Features, services… and many more, are included in the list of built-in DSC resources.
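To see what’s available on your own machine, the built-in resources (and any custom ones you’ve installed) can be listed right from PowerShell:

```powershell
# List the DSC resources known to this machine, with the properties each one accepts
Get-DscResource | Select-Object Name, Module, Properties
```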
But it gets even better. These resources are just PowerShell modules. And just as you have the ability to create your own modules to extend PowerShell, you also have the ability to create your own custom resources!
The Parts - The .MOF file
This is the file that contains the configuration to be applied. It’s the result of executing the configuration definition in PowerShell, and is in a standard format as defined by the DMTF.
“Hey Kevin - Why do we even really need a .MOF file? Can’t Microsoft just do what it needs to do directly from PowerShell?”
I’m sure they could. But the beauty of using the .MOF is that because it’s a DMTF standard, it’s formatted in a way that can be applied to different machine types and for various purposes. In fact, at TechEd in Houston earlier this year I saw Jeffrey Snover actually use DSC to create a .MOF that then configured a Linux server running an Apache web server. (Yeah.. we’re “open” like that these days!)
The Parts - How It’s Deployed
The full name, “Windows PowerShell Desired State Configuration” is a hint about how you enable the DSC capability. It is a feature of Windows Server 2012 R2, found here in the Add Roles and Features Wizard:
When you check the box, you’ll notice that it will also install some Web components to your server…
This is because one of the ways DSC configurations are securely pulled is to use IIS.
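If you prefer PowerShell over the wizard, the same feature (and its IIS dependencies) can be enabled with:

```powershell
# Equivalent to checking the box in the Add Roles and Features Wizard
Install-WindowsFeature -Name DSC-Service
```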
The Parts - Push-me-Pull-You?
One important aspect of DSC is that it becomes even more powerful when you can distribute configurations, or maintain consistent configurations among many machines, from a smaller number of source locations. DSC allows either a simple “push” distribution, which is more manual, or a “pull” distribution, where you not only apply a configuration to a machine but also tell it where it should look for its configuration and any changes going forward. Pulling can take place over HTTP (not recommended), HTTPS (recommended), or an SMB file share (okay, because it’s authenticated access).
“Why isn’t HTTP recommended?”
Think about the damage someone could do if they hijacked DNS and then pointed to and automatically applied someone else’s version of a server configuration to your servers. Scary prospect, indeed.
The Parts – The Local Configuration Manager
The Local Configuration Manager is “the Windows PowerShell Desired State Configuration (DSC) engine. It runs on all target nodes, and it is responsible for calling the configuration resources that are included in a DSC configuration script.” So basically when you’ve enabled the DSC feature on a server, this is the service that either takes the pushed configuration, or pulls the configuration, and then applies it as defined in the most recent .MOF file.
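As a hedged sketch of what pointing a node’s Local Configuration Manager at a pull server looked like in the PowerShell 4.0 timeframe (the GUID and pull server URL below are made-up placeholders, not values from this article):

```powershell
Configuration LCMPullMode
{
    Node "Server1"
    {
        LocalConfigurationManager
        {
            RefreshMode               = "Pull"
            ConfigurationID           = "11111111-2222-3333-4444-555555555555"  # placeholder GUID
            DownloadManagerName       = "WebDownloadManager"
            DownloadManagerCustomData = @{ ServerUrl = "https://pullserver:8080/PSDSCPullServer.svc" }
        }
    }
}

# Generate the meta-configuration .MOF and apply it to the node's LCM
LCMPullMode -OutputPath "C:\DSC\LCM"
Set-DscLocalConfigurationManager -Path "C:\DSC\LCM" -ComputerName Server1
```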
For More Information…
Like many of you, I find that I learn best by looking at other people’s examples. And thankfully in the case of PowerShell and DSC there is a really big community already formed and willing to share what they have done with the rest of us. Here are some of the places I recommend you check out and save to your favorites if you’re really going to get serious about using Desired State Configuration:
If you want to try it out in a virtualized lab environment:
And finally, don’t forget to check in frequently at our “Modernizing Your Infrastructure” series landing page, to see all the great articles our team has created and resources we’ve shared.
Keith Mayer and I continue our series on “Modernizing Your Infrastructure with Hybrid Cloud”. In today’s episode we discuss various options for networking. Tune in as we go in depth on what options are available for hybrid cloud networking as we explore network connectivity and address concerns about speed, reliability and security.
Shortened URL if you would like to share on Twitter or Facebook, etc.
Build your very own Hyper-V Server 2012 R2 for FREE and Enter for a chance to win* one of the following fantastic prizes:
You could win a Microsoft Surface Pro or Certification Exam Voucher!
In addition to a chance to win one of the prizes above, EVERY ENTRANT will receive our Hyper-V Server 2012 R2 enterprise-grade bare-metal hypervisor software completely free. This is a fully functional virtualization hypervisor that supports scalability up to 320 logical processors, 4TB physical RAM, live migration and highly-available clustering.
Hyper-V serves as the virtualization foundation for Private Clouds leveraging Windows Server 2012 R2 and System Center 2012 R2.
You can enter the IT Pro “Cloud OS Challenge” Sweepstakes by completing all of the THREE EASY TASKS below to download and build your Private Cloud foundation with Hyper-V Server 2012 R2. Be sure to complete the last task to submit your proof-of-completion for entry into this sweepstakes.
Download the Hyper-V Server 2012 R2 installation bits using the link below.
DO IT: Download Hyper-V Server 2012 R2
Install Hyper-V Server 2012 R2 in your lab environment using the installation steps linked below.
DO IT: Install Hyper-V Server 2012 R2
Complete the steps in this task to submit your proof-of-completion entry into the IT Pro “Cloud OS Challenge” Sweepstakes for a chance to win one of the exciting prizes listed above.
Upon submitting your entry, you will receive a confirmation email within 24-hours.
Now that you’ve installed Hyper-V Server 2012 R2, continue your learning and evaluation with these additional resources.
*NO PURCHASE NECESSARY. Open only to IT Professionals who are legal residents of the 50 U.S. states or D.C., 18+. Sweepstakes ends November 30, 2013. For Official Rules, see http://aka.ms/CloudChallenge201311Rules.
This excellent question was asked by Ralph at our IT Camp in Saint Louis a few weeks ago:
“One of the questions asked by our VP relates to Azure backups protecting from user error rather than hardware failure or disaster recovery. What is the Microsoft guidance on backing up VMs in the cloud?”
How do you protect the data on your servers today? The quick answer to this question is that you need to protect OS and application configuration and business data the same way on your physical and virtual machines, no matter where they reside. A benefit of putting any storage (which includes your virtual machines) in Windows Azure is that it is all kept highly-available and geo-redundantly replicated; and that’s just automatic. But beyond that, you are responsible for any machine or data backups or archiving that you may feel is needed.
“Okay.. but what about Azure storage BLOB snapshots?”
Well.. yes, Windows Azure actually does have the ability to take and maintain BLOB snapshots through the REST APIs. And a few vendors have created solutions to use this as a way to keep point-in-time copies of virtual machine disks, and then restore machines from those snapshots. But using BLOB snapshots for Virtual Machines in Windows Azure is currently not supported by Microsoft.
I repeat: As of October 11, 2013, using BLOB snapshots for VMs in Windows Azure is not supported by Microsoft.
That said, Chris Clayton has a script that you can use to backup and restore Azure VMs using BLOB snapshots. But: “This is a demonstration and should not be used for production scenarios”…”This should not be used to replace your current backup and restore strategy.”
Companies like Cerebrata (Cloud Storage Studio and Azure Management Cmdlets) and ClumsyLeaf (CloudXplorer) and others also have tools and operations for taking and restoring Azure storage BLOB snapshots, but the process of restoring a snapshot currently involves saving a copy of the VM configuration, deleting the VM, deleting the original disks, restoring the snapshots, and then re-restoring the machine configuration. It’s still cumbersome, and prone to error.
And if you don’t do it right, you can end up with a corrupted VM. (Trust me.. I know from experience.)
“Will we have a supported way to do this in the future?”
I don’t know. Personally, I hope so.
In the meantime, treat your machines the same as you would any other machine. Backup their configuration and data according to your policies as required.
“Okay.. so what if I just want to make offline copies of my VMs? Can I do that?”
Absolutely. For the backup, what you’ll want to do is:
And then for the restore:
EXTRA CREDIT: Someone who has more time than I do today – build us two PowerShell scripts for doing this!
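Until someone does, here’s a rough, untested sketch of the backup side using the classic Azure PowerShell cmdlets. The service, VM, container, and blob names are illustrative assumptions, and you’d want the VM stopped first so the disk isn’t changing during the copy:

```powershell
# Stop the VM (but keep its deployment provisioned) for a consistent copy
Stop-AzureVM -ServiceName "MyService" -Name "MyVM" -StayProvisioned

# Copy the VM's OS disk blob into a "backups" container in the same storage account
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "MyVM-OSDisk.vhd" `
    -DestContainer "backups" -DestBlob "MyVM-OSDisk-copy.vhd"

# Bring the VM back once the copy completes
Start-AzureVM -ServiceName "MyService" -Name "MyVM"
```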
Yes, it’s been a few weeks since our last series wrapped up (“VMware or Microsoft?”), so it’s about time we started a brand new series of blog articles.
A fair question. The ‘we’ I’m talking about is the 11 Microsoft US DPE IT Pro Evangelists in these here 48 contiguous United States. The series runs to the end of November (just before Thanksgiving here in the U.S.), and is all about answering in as many useful ways as possible, the magical question: Why?
…and so on.
My friend Dan Stolts is the organizer of the series, and owner of the official landing page: “Why Windows Server 2012 R2”
Keep watching his landing page and the complete list of articles and their anticipated dates of publication.
RECOMMENDED: To follow along with the dozens of examples we’re going to be writing about, we highly recommend that you download and install the following newly-available R2-version evaluation software:
Hyper-V in Windows Server 2012, Hyper-V Server 2012, and Windows 8 allows you to add, remove, or reconfigure some key aspects of a virtual machine, even while it’s running. Others, however, still can’t be changed. And often, whether a change is allowed may also be determined by the capabilities of the operating system running in the affected virtual machine.
For Part 2 of our “20+ Days of Server Virtualization” series, we wanted to give you an overview of what is allowed, and what’s not, with regard to making “hot add” or removal (or configuration) of a virtual machine’s settings. To do this, I’m going to use a picture of the Virtual Machine Settings dialog, and walk right down the list…
Item: Hardware: SCSI Controllers, Network Adapters, and Fibre Channel Adapters
Reason: Modern operating systems still don’t know how to adapt when a new SCSI controller or NIC suddenly shows up. So, just as with a physical machine, plugging those in virtually doesn’t make much sense.
Hot-Add or Change: Nope
Reason: These are configurations that really don’t impact a machine until it’s being started anyway. So being in an OFF state is no big deal here.
Hot-Add or Change: It depends! Have you enabled Dynamic Memory?
Reason: In Windows Server 2008 R2 SP1 Hyper-V we introduced a capability called Dynamic Memory. Originally, Dynamic Memory was the configuration of a minimum and a maximum memory that would be used by a Virtual Machine, and the virtualization host would adjust memory on machines based on resource usage (memory demand) and relative priority settings. Dynamic Memory is a huge boost to virtual machine consolidation ratios (meaning: more VMs on each host), and is really useful in scenarios like Virtual Desktop Infrastructure (VDI).
With Hyper-V in Server 2012 we added the configuration of “Startup RAM” along with the minimum and maximums. When Dynamic Memory is not enabled, your Startup RAM is just the amount of memory that the machine has, and it can’t be adjusted while the machine is running. However, if Dynamic Memory is enabled, you now can set and even adjust the minimum and maximum RAM settings on the fly; as the machine is running.
Hot-Add or Change: No
Reason: I don’t know. It’s just not something you can do. But…
Item: Processor Configuration
Hot-Add or Change: Yes! You can configure and change the Virtual Machine reserve percentage, limit percentage, and relative weight.
Reason: These performance parameters are driven entirely through software, and impact the performance of a virtual machine relative to the other virtual machines on a host. So making changes to these is perfectly acceptable.
Item: IDE Controller
Reason: There are only two IDE Controllers in Hyper-V virtual machines, in keeping with the very common physical PC motherboard configuration. And there can be only those two. So adding and removing them doesn’t make much sense now, does it?
Item: IDE Disks
Reason: I suspect it has something to do with how we implement IDE as a more direct path to hardware than we do for other emulated items like SCSI controllers.
Sidenote: If you’re wondering why a virtual machine in Hyper-V cannot boot off of SCSI disks (and why you should not care), check out Ben Armstrong’s blog post: “Why Hyper-V cannot boot off of SCSI disks (and why you should not care)”
Item: SCSI Controller
Reason: A virtual machine may have as many as 4 virtual SCSI controllers, but adding or removing them is a hardware change that the guest operating system wouldn’t support.
Item: SCSI Disks
Hot-Add or Change: Yes!
Reason: The virtualization of the SCSI controller, and the kind of synthetic device access we provide through the VMBus, allows the addition or removal of disks while the machine is running.
Item: Network Adapter
Reason: The sudden addition or removal of a NIC isn’t supported in the guest VM operating system, so there’s no real reason to virtualize that kind of a change to a running machine in Hyper-V. However…
Item: Network Adapter Configuration
Hot-Add or Change: Yes!
Reason: Changing, for example, the virtual switch to which a virtual NIC is connected is very much the same as unplugging your RJ-45 cable from one device and plugging it into another. And changes such as enabling and configuring bandwidth management, hardware acceleration, or other advanced features are implemented through software, shaping network traffic or performance outside of the machine itself. The guest OS and the virtualized machine don’t know or care about such things.
Item: Virtual COM Ports
Reason: You’re making an emulated connection to some local (or even network connected) hardware that can be seen by the virtual machine as something being plugged-in or unplugged. As long as the guest operating system can adjust to it, you can make this change on-the-fly.
Item: Virtual Diskette (Floppy)
Hot-Add or Change: Yes
Reason: While you have one (and only ever one) virtualized diskette drive available in a Hyper-V virtual machine (which is more than any of us have seen of actual diskette drives in the past 10 years!), you do have the ability to virtually insert or remove these .VFD files (1.44MB! How did we ever manage?) in your virtual diskette drive. (Which is, as you must imagine, 3.5 inches.)
So far we’ve only discussed the “Hardware” aspects in the settings of a Hyper-V virtual machine. Under “Management”, you have the ability to make some additional changes to a running VM.
On a running machine, you can modify the machine’s name, the Integration Services enabled, and the Automatic Start and Stop actions. You cannot, however, make a change to the Smart Paging File location.
And the configuration of the Snapshot file location isn’t determined by whether or not the machine is running, but instead by whether or not there are any existing snapshots. If there are, then you can’t change this location. If not, then go ahead and change it!
With every version of Hyper-V comes more and more flexibility in terms of what can be configured and changed while a virtual machine is running. All changes mentioned above can also be driven (or not – again depending upon the state of the machine) programmatically using tools like PowerShell and products like System Center 2012 SP1 Virtual Machine Manager (VMM). Much of what is able to be modified on a running system is impacted by practicality, by limitations of the implementation of virtualized hardware, and by the capabilities of the modern operating system and its ability to adjust to those sometimes drastic changes “on-the-fly”.
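Several of the hot changes discussed above can be sketched in PowerShell; the VM name, paths, sizes, and switch name below are made-up examples:

```powershell
# Hot-add a new data disk on the SCSI controller of a running VM
New-VHD -Path "D:\VHDs\Data1.vhdx" -SizeBytes 100GB -Dynamic
Add-VMHardDiskDrive -VMName "Web01" -ControllerType SCSI -Path "D:\VHDs\Data1.vhdx"

# Adjust Dynamic Memory limits while the VM is running
# (Dynamic Memory must already be enabled on the VM)
Set-VMMemory -VMName "Web01" -MinimumBytes 512MB -MaximumBytes 4GB

# Move a virtual NIC to a different virtual switch, like re-plugging a cable
Connect-VMNetworkAdapter -VMName "Web01" -SwitchName "ExternalSwitch"
```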
I hope you’ve found this summary useful, and that you’re taking advantage of all of the “20+ Days of Server Virtualization” posts in our series.
It isn’t a brand-new tool, but it was updated to version 1.1 the other day, and definitely worth sharing. The Microsoft Azure (IaaS) Cost Estimator Tool is now available. It’s an installable tool that allows you to “profile [your] existing on-premises infrastructure and estimate cost of running it on Azure.”
“Sweet! So, it installs agents on servers and then..”
Whoa! Lemme stop you right there! No agents. It’s agentless. It does require you to supply administrative credentials that will apply to the machines you’re profiling, which makes sense.
The first time you run it, you’ll see this screen.
As you can see, the description of what it does and how it can be used are clearly spelled out.
In my example, for example (?), I’m running the tool on a PC in my test network. I’ve selected to profile physical machines, such as my domain controller named, surprisingly enough, “DC”. I’ve supplied my credentials…
And clicking Add, plus adding a couple of other machines (whose names might give away their purpose) results in this:
Clicking Next brings me to the page where I can choose a profiling duration, scanning frequency, and a name for the generated report.
I’m going to scan only one time, so my results won’t be based on more accurate, actual traffic or performance of my machines. But it’s good enough for a start.
I click Begin Profiling, and (in my case) after about 10-15 seconds my one-time scan is complete. I click View Report, and after an informational pop-up describing what was done and what options I have to change values, I see this screen:
Notice that I can tweak values and select just some or all of the machines before clicking Get Cost. I’ll just leave the values as determined, select all, and get my cost. Here is the result:
Notice that I can tweak the pricing model, and change the size of the Compute Instance (the type/size of VM) to play with various values. And when I’m done, I can export the results to a .CSV file (for use in Excel), or go back and try it all over again. Pretty nice?
“Very nice! But, what does this tool cost?”
Nothing. Nada. Zilch. Zero-dollar$.
“Sure. And I suppose once I run this tool I’m going to be bombarded with e-mails from Microsoft.”
Nope. Not even a requirement for a Microsoft Account to download, and no information is ever sent from this app back to Microsoft.
Seriously, Microsoft hopes that this will be a good way to get an idea of what your existing machines, whether physical or already virtualized, will cost to run over time as VMs hosted in our Azure Infrastructure Services. It’s all a part of helping you plan for an eventual migration of some of your local resources into Azure, to take advantage of the scale, capacity, security, and cost-benefits of the cloud.
In case you missed the link earlier, here it is again: Microsoft Azure (IaaS) Cost Estimator Tool
Recently we’ve been showing off a capability (currently in preview) called “Windows Azure Backup”, which is a simple file system backup and restore to/from Windows Azure storage.
At our IT Camp in Saint Louis a few weeks back, David asked:
“Can Windows Azure Backup do a bare metal restore in the event of total failure of a physical server?”
Short answer: no.
Longer answer: Not directly, no. But consider this…
You have other tools such as Windows Server Backup and System Center 2012 SP1 Data Protection Manager that can do a full system, system state, or even bare-metal image restore of a backed up machine.
With Windows Server Backup, you could use a two-step process of additionally saving the WSB-created image up to Windows Azure storage using Windows Azure Backup. To restore, you would retrieve the image using WAB and then recover from it.
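The two-step idea above can be sketched with the in-box Windows Server Backup cmdlets plus the Windows Azure Backup agent cmdlets. This is just a hedged sketch: it assumes both the Windows Server Backup feature and the Azure Backup agent are installed and registered, and the target volume is illustrative.

```powershell
# Step 1: Windows Server Backup creates a bare-metal-capable image locally.
$policy = New-WBPolicy
Add-WBBareMetalRecovery -Policy $policy          # include everything needed for BMR
$target = New-WBBackupTarget -VolumePath "D:"    # local volume to hold the image (example)
Add-WBBackupTarget -Policy $policy -Target $target
Start-WBBackup -Policy $policy

# Step 2: the Windows Azure Backup agent protects the location holding the
# WSB image, sending it up to Azure storage under your registered policy.
Start-OBBackup -Policy (Get-OBPolicy)
```

The restore is the same dance in reverse: recover the image file from Azure with the agent, then use Windows Server Backup (or WinRE) to do the bare-metal recovery from it.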
With Data Protection Manager, the functionality to store your backup data in Windows Azure already exists as of System Center 2012 SP1.
“So I can just put my image backup into Azure, right?”
No. DPM only supports Volume, SQL DB, and Hyper-V Guest backups to Azure. So, in the same two-step process we discussed for Windows Server Backup, you could do your bare metal backup to a file share and then use DPM to protect that share to Windows Azure.
In this episode I am honored to welcome back Microsoft Vice President Brad Anderson to the show. We discuss his upcoming monthly live webcast series, “Success with Enterprise Mobility”, which kicks off on Tuesday, December 9th and concludes on March 3rd. Tune in as he gives us a preview of what he and his guests will be discussing, and learn how you can successfully support business productivity for your users through secured and controlled mobility.
This week at VMWorld the VMware faithful are learning all about the latest news and updates from their virtualization vendor. And hopefully at the same time Microsoft is able to reach them with some free custard and some good information to help them understand:
So today, for the latest article in our “VMware or Microsoft?” series, I thought I’d address an area that perhaps a lot of VMware customers don’t know much about. One of the important things that we really want VMware customers to understand is that they may be paying for features or technology or high availability or virtualized storage or virtualized networking that they wouldn’t have to if they went with Microsoft’s version of the “Software Defined Data Center”. And add to this the fact that many enterprises using VMware already also own System Center; well, that means they already own all that they need to do everything that otherwise requires the vCloud Suite and VMware’s Enterprise Plus licensing.
While I don’t have the time to write (and you won’t have patience to read through) an exhaustive list of examples, let me just pick a few key scenarios that you’re either already paying too much for, or perhaps haven’t purchased because you thought the capability was just too expensive. In each example, while I won’t list any retail prices (which are always subject to change), I’ll try and point out what versions or SKUs you would have to obtain (purchase or simply download) to gain the described benefits.
Disclaimer: VMWorld isn’t over yet, and there may be announcements around licensing changes that may make some of these points obsolete. And for your sake, I hope so.
The Hypervisor: FREE
While VMware also has a free hypervisor, theirs is limited in what it can do. And while this week VMware announced that more capabilities will be made available to more of the purchased vSphere levels, Microsoft will never ever have to make any such announcement.
Because the free Hyper-V Server already does everything that Hyper-V installed under Windows Server 2012 does. It’s full-featured. No limits. No compromise. All of the scale is there, for no additional cost. And even though higher versions of vSphere 5.5 now finally support similar scale to Hyper-V, they don’t exceed what Hyper-V already does, and does for free.
Do you see anything on that list that VMware does bigger or better? At the time of this writing (the day after VMWorld’s keynote), in vSphere 5.5 they did increase the LPs to 320, memory to 4TB, and vCPUs to 64, which matches Hyper-V – but not in their free version.
Live Migration (It’s like VMotion): INCLUDED
You don’t need to buy anything just to get ultimate live portability of virtual machines. You can do live moves of running virtual machines (Live Migration), live moves of a machine’s storage (Storage Live Migration), and even a move of the running machine and its storage, all in one operation (“Shared-Nothing” Live Migration); even without the need for a cluster.
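To give you a feel for how simple that is, here’s a minimal sketch of a “shared-nothing” live migration using the in-box Hyper-V PowerShell module. Host names, the VM name, and the destination path are all illustrative.

```powershell
# One-time setup on each host: allow live migrations.
Enable-VMMigration
Set-VMHost -UseAnyNetworkForMigration $true

# Move the running VM *and* its storage to another standalone host --
# no cluster and no shared storage required.
Move-VM -Name "WebSrv01" -DestinationHost "HYPERV02" `
        -IncludeStorage -DestinationStoragePath "D:\VMs\WebSrv01"
```

Drop the `-IncludeStorage` / `-DestinationStoragePath` parameters and you have a plain Live Migration; use `Move-VMStorage` instead and you have a Storage Live Migration.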
I know that’s not something unique to Hyper-V. vMotion has been around for a while. But unless something new is announced this week, you still have to pay something for that capability. And there are some capabilities which, when implemented, actually override and disallow the ability to do a vMotion. (SR-IOV being just one example. Check out this vSphere 5.1 document for their entire list.) NOTE: I’m guessing that the story here gets better with vSphere 5.5, but I don’t know the details at the time of this writing. Please enlighten me in the comments if there is something new here.
With Hyper-V, we have no such limitations.
Also with Hyper-V, you can do as many simultaneous migrations of machines and storage as your hardware will allow, with no artificially imposed limits based on network capacity.
Windows Server (and the free Hyper-V Server) includes the Windows Failover Clustering role, which allows you to create a big cluster of virtualization nodes.
Currently the limit is up to 64 nodes supporting up to 8,000 virtual machines. And you don’t even need System Center to manage or maintain it. You can even do rolling updates of the nodes of your cluster, and the VMs will live-migrate back and forth during the process. That’s just built-in.
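Standing up that cluster is also just in-box PowerShell. A hedged sketch, with illustrative node names and cluster IP:

```powershell
# Validate the candidate nodes first (produces a validation report).
Test-Cluster -Node "HV01","HV02","HV03"

# Create the failover cluster from the validated nodes.
New-Cluster -Name "HVCLUSTER" -Node "HV01","HV02","HV03" -StaticAddress 10.0.0.50

# Turn a clustered disk into a Cluster Shared Volume so every node
# can host VMs on it simultaneously.
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

From there, highly available VMs are just a matter of creating them on (or moving them to) the CSV storage.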
“But what about DRS (Distributed Resource Scheduler) and Distributed Power Management?”
Yep. But in Hyper-V and using System Center 2012 Virtual Machine Manager, we call it DO and PO – for Dynamic Optimization and Power Optimization.
Do you want to create and regularly synchronize an offline copy of a virtual machine that you can fail over to in case of an unexpected outage or disaster? Hyper-V provides that in the box with Hyper-V Replica. And coming in Windows Server 2012 R2 and Hyper-V Server 2012 R2, you’ll have a couple of new capabilities as well.
“But Kevin, VMware includes replication in all editions of vSphere, and in 5.5 they’ve made improvements in RPO and in doing point-in-time recovery with multiple recovery points saved.”
Yep. Just like Hyper-V has had since 2012. They’re doing more here, definitely, which is good. But do they support Test Failovers? Do they support automation through PowerShell without some other purchased tool like SRM? Can they automatically re-IP a server that has failed over to a different IP subnet? Is it easy to “failback”? These are all things that you get for no additional cost with Hyper-V Replica.
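The PowerShell automation and re-IP capabilities I just mentioned look roughly like this with the in-box Hyper-V module. A sketch only: server names, the VM name, and the IP values are all illustrative.

```powershell
# Enable replication of a VM to a replica server (Kerberos over port 80).
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "DR-HOST01" `
                     -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "SQL01"

# On the replica server: inject a different IP address to be used after
# failover -- the automatic "re-IP" for a different subnet.
Set-VMNetworkAdapterFailoverConfiguration -VMName "SQL01" `
    -IPv4Address 192.168.50.20 -IPv4SubnetMask 255.255.255.0 `
    -IPv4DefaultGateway 192.168.50.1

# Non-disruptive test failover: boots a test copy without touching replication.
Start-VMFailover -VMName "SQL01" -AsTest
```

All of that without SRM or any other purchased tool.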
Network Virtualization: INCLUDED
VMware announced NSX at the VMWorld Keynote. This is their solution for network virtualization / Software Defined Networking. The flexibility of defining, isolating, and applying policy to networks of machines that can be programmatically created, and giving the portability to move virtual machines around to different physical networks while the virtual networking and IP addressing of those machines never has to change – that’s all very compelling, yes?
“Yes. And isn’t that what you can already do with Hyper-V Network Virtualization and the Hyper-V Extensible Switch?”
Yes. Microsoft started enabling network virtualization in Windows Server 2012, and managed by System Center 2012 SP1 Virtual Machine Manager. And these capabilities are only getting better and more flexible in the R2 versions of both of those products, and supported by many hardware vendors.
I’d actually like to learn a little more about how NSX is implemented. Is it just a new version of their switch? If so, Microsoft also has the benefit of a virtual switch that is extensible, not just replaceable. Other products such as firewalls, traffic control, and packet filtering can easily be added to the switch; configured at a logical level and then applied uniformly to all switches participating in a logical network.
Can you use NSX and the Cisco Nexus 1000v at the same time? No. But with Microsoft’s extensible switch, you just add the Nexus 1000v extension, and you still have Network Virtualization.
Storage Virtualization: INCLUDED
At the VMWorld keynote, VMware announced the availability of the public beta for vSAN – the VMware Virtual SAN.
This is “a new software-defined storage tier [that] pools compute and direct-attached storage resources and clusters server disks and flash to create resilient shared storage.”
Have you heard of Storage Spaces? Windows Server 2012 (and improved in R2) supports the ability to treat cheap disks as pools of storage. Virtualized. From the pool, you create virtual disks, which can then contain volumes.
If that volume contains a file share, you can use SMB 3 (and even better with RDMA support) to have fast, live-data support (even virtual hard disks of running machines) on that storage.
Supporting that storage, you could have a cluster of file servers that actively share access to that same share, which makes the supported files and filesystem “Continuously Available”; meaning, if a file server goes down – even if it’s the one serving access to a particular file (or a running VM’s hard disk, or SQL Server’s database files) – you’ll never lose connectivity. (See “Scale-Out File Server for Application Data Overview” for more information.)
And I should probably remind you: This is included in Windows Server 2012.
But it gets even better. In Windows Server 2012 R2 we add the ability to automatically support tiered storage in storage pools. If you have local SSDs alongside HDDs, go ahead and put them in the same pool. And Windows Server will automagically move the more active files to the SSDs and the less active files to the HDDs. (Yes, you can also designate that certain files must always have faster performance and should therefore be put on the SSD tier; like your VM’s hard disks.)
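Here’s what a tiered pool looks like with the in-box storage cmdlets in Windows Server 2012 R2. This is a hedged sketch: pool/tier names, sizes, and the pinned file path are illustrative, and it assumes a mix of poolable SSDs and HDDs.

```powershell
# Pool every disk that's eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "*Spaces*" `
                -PhysicalDisks $disks

# Define the two tiers by media type.
$ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

# A mirrored virtual disk spanning both tiers; hot data migrates to SSD automatically.
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredVD" `
                -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
                -ResiliencySettingName Mirror

# Pin a specific file (say, a VM's hard disk) to the SSD tier.
Set-FileStorageTier -FilePath "V:\VMs\SQL01.vhdx" -DesiredStorageTier $ssd
```

That last line is the “must always be fast” designation mentioned above.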
VMware agreed with Microsoft during their VMWorld keynote when they said that automation “is the control plane for the datacenter of the future”, and that what is missing is a common set of management and, importantly, automation tools for working with virtualized machines and applications – even in a hybrid cloud environment. And their solution for this is their vCloud Automation Center.
Microsoft’s answer to this is a combination of PowerShell (which, for no additional cost, can fully manage all of Hyper-V and all of Windows Server, and can even configure and manage Infrastructure-as-a-Service resources in Windows Azure or at other hosting providers) and System Center 2012 SP1, which delivers automation through Virtual Machine Manager, App Controller, and the extremely rich (and cross-vendor) runbook automation driven by Orchestrator.
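As a small taste of that no-extra-cost automation reaching into the cloud, here’s what standing up an Azure IaaS VM looked like with the Windows Azure PowerShell module of that era (Service Management cmdlets). The service name, VM name, credentials, and image filter are all illustrative, and this assumes the module is installed and your subscription is already imported.

```powershell
Import-Module Azure

# Pick a Windows Server 2012 gallery image (first match is good enough for a demo).
$img = (Get-AzureVMImage | Where-Object { $_.Label -like "*Windows Server 2012*" })[0].ImageName
$pw  = "P@ssw0rd!"   # demo-only; use a real secret in practice

# One cmdlet creates the cloud service and the VM.
New-AzureQuickVM -Windows -ServiceName "kevdemo-svc" -Name "DemoVM01" `
                 -ImageName $img -AdminUsername "itpro" -Password $pw `
                 -Location "East US"
```

The same shell, the same scripting skills, on-premises and in Azure.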
Oh.. and did you know that, with these same tools, you can also automate your configuration, deployment, management, monitoring, and reporting against vCenter-based virtualization resources too? Yes, System Center 2012 SP1 can do that, even if you want to stick with vSphere, or use Hyper-V in addition to vSphere for virtualization.
I could go on, but I think this is a good start.
What do you think? Are you paying too much for capabilities that should just be “included”? Have I opened your eyes at least a little bit to the idea that Microsoft has a full-featured, enterprise-ready solution? If you haven’t lately, it’s definitely time to take another look.
Welcome to another installment of our May series of articles – “20 Key Scenarios with Windows Azure Infrastructure Services”.
Today I’m going to describe a scenario, a problem, and then propose a solution.
The Scenario: Single Sign-On support using Active Directory, Windows Azure Active Directory, ADFS (Active Directory Federation Services), and Office 365 and/or Windows Intune.
For those of you who may not be familiar with it, you can set up a federated identity relationship between your local Active Directory and Office 365 authentication. Because Office 365 uses Windows Azure Active Directory, you can establish an ADFS trust between Office 365’s authentication and your company’s Active Directory domain. Your people, simply logging in with their local domain accounts, are then automatically authenticated to Office 365. So you manage one set of user accounts locally, just like you always have, and Office 365 grants access based on the “claim” that the user account is known and valid. Your client (laptop, tablet, or other mobile device) gets the claim from your Active Directory (preferably by accessing an ADFS Proxy in your company’s perimeter network), and then passes that acquired claim up to Office 365.
In short – your users are either already authenticated, or just have to set up the authentication parameters one time for their use of cloud-based services such as Office 365, Windows Intune, or other such services.
For details on setting up Single Sign-On for Office 365, see “Plan for and deploy AD FS for use with single sign-on”
So this is great. No matter where I am, or where my people are in the world, they can use their domain account and local profile and just open up Outlook or access the cloud-based SharePoint or their SkyDrive Pro storage, and they’re authenticated. And even if they’re using a non-domain machine or a mobile device, they’ll use the same company credentials they’re already familiar with to connect to their company e-mail or other resources.
The Problem: I’m outside the office, and the connection to my ADFS Proxy is unavailable. What happens then?
“Yeah.. what happens then?!”
I’ll tell you what happens then. It’s a problem, because your device needs to get to the ADFS (STS) proxy to verify that you are who you say you are, and to give you the claim token that is passed up to Office 365. If it is unavailable, then your users can’t be trusted by their cloud-based resources. Outlook won’t be able to connect to the Office 365 Exchange server. Yeah.. a big problem. That’s why so much documentation (and even the promise of Microsoft support) is devoted to the configuration of a load-balanced farm of servers to keep that proxy service high-performing and highly available.
Granted, it’s an even bigger problem for the people who are sitting in that office. Presumably they can’t access the Internet at all. So assuming that your company, like most others, is becoming more and more dependent upon that Internet connection being live in order to get their work done, you’ve probably already addressed alternatives. And many people nowadays have multiple personal paths to the Internet that would restore some amount of personal access. But that doesn’t fix their problem of not being able to get Outlook to connect.
The Solution: Put a copy of your domain in “the cloud”!
Think about it: If I have a replicated copy of my domain up on a virtual machine running in Windows Azure, then that domain controller can also serve as the trusted location where Office 365 and the ADFS trust can be connected!
“Sounds like an interesting idea. But what if I don’t want a copy of my domain up in the cloud?”
Then another option would be to use Windows Azure virtual machines as your ADFS Proxies. Basically, think of Windows Azure as an alternative to (or an extension of) your perimeter network (DMZ). Of course in this case, if the availability of your home datacenter goes down, you’re still going to have authentication issues.
Here’s a thought: Do both! Have an AD site up in Windows Azure, with a secured/authenticated/encrypted connection back to the corporate network. And then build an externally available, load-balanced set of machines in a separate “perimeter” network in Windows Azure as well. In this way, even if your connection back to your main office and the local AD DCs goes down, you still have AD authentication available “locally” within your Windows Azure subscription.
Here’s a document that describes the process in great detail:
Office 365 Adapter: Deploying Office 365 Single Sign-On using Windows Azure
What do you think? Do you have any other ideas or suggestions? Any concerns? I’d love to hear about them in the comments. Let’s discuss!
And if you’ve missed any of our “20 Key Scenarios with Windows Azure Infrastructure Services” series, please click on this link to find all of the other great articles.
In the context of Windows Azure Infrastructure Services and our IT Camp in Saint Louis a few weeks ago, Lettie asked this question:
“If we had one large storage pool and added individual user folders, do we have the ability to setup file security access to each individual user folder? Is there the ability to limit a user’s folder size? We need a better backup solution for our 800+ remote users.”
In order to answer this one, I have to make an assumption about the specific topic it relates to. So I’ll answer this question in two ways.
If you’re wondering (and I think you are) about whether or not ACLs can be assigned to or sizes restricted for containers within Windows Azure storage accounts, the answer is no.
But another thing to remember is that a network of virtual machines in Windows Azure can be treated as just another subnet in your corporate network. And if your users connect via VPN or Direct Access to your network, they’ll have access to the servers “in the cloud”. Those servers “in the cloud” can be hosting file services, with Storage Spaces storage pools and virtual disks containing user documents. As long as those file servers are domain joined, you can easily add ACLs to those folders.
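To make that concrete, here’s a hedged sketch of what per-user folders with individual ACLs and size limits could look like on such a domain-joined file server. The domain, user names, and paths are examples, and the quota portion assumes the File Server Resource Manager role is installed.

```powershell
$users = "alice", "bob"   # in practice, you'd pull these from Active Directory
foreach ($u in $users) {
    $path = "F:\UserDocs\$u"
    New-Item -Path $path -ItemType Directory -Force | Out-Null

    # Grant this user Modify rights on their own folder (inherited downward).
    $acl  = Get-Acl $path
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule(
        "CONTOSO\$u", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl -Path $path -AclObject $acl

    # Cap the folder at 5 GB with an FSRM hard quota.
    New-FsrmQuota -Path $path -Size 5GB
}
```

From there, the 800+ remote users could sync or back up their documents to their own access-controlled, size-limited folder over the VPN or DirectAccess connection.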
I’m only giving you one of what could likely be dozens of solutions out there. If you’re reading this and have other recommendations for Lettie and her company, please share them in the comments.