Kevin Remde's IT Pro Weblog
It’s the end of the month, and the last of our 30 days of cloudy goodness is here! In today’s blog post, one of the most genuinely certifiable gentlemen I know, Brian Lewis, points you to some resources to help you get on track for cloud readiness. And by you, I mean YOU. You need to have the proven skills to work with these new technologies, and we can help you get there.
Read part 30 of 30 HERE.
I sincerely hope you’ve found our work on this series useful. I know I speak for all four of us (John Weston, Brian Lewis, Matt Hester, and I) when I say: If you’ve enjoyed these blog posts just half as much as we’ve enjoyed doing them… then we’ve enjoyed them twice as much as you.
“Way to sneak an obscure Monty Python reference in there, Kevin.”
Yeah.. so sue me. I’m a huge fan.
And it’s not too late. If you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.
Go forth and be cloudy!
In today’s next-to-last installment of our cloud series, we have another guest author representing another great Microsoft partner: NetIQ. In particular, they discuss a product of theirs called PlateSpin Migrate.
Check out the Part 29 blog post HERE.
And if you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.
Back in part 23 of our series I wrote about Windows Azure, and some of the tools that IT Pros have at their disposal for managing and monitoring it. One of those tools, of course, is PowerShell. And today in part 28, Matt Hester writes about how PowerShell (one of his favorite things) can be used to manage not only Windows Azure, but also Office 365.
Check out his part 28 post HERE.
“Hey Kevin – Is the cloud just for big companies?”
Absolutely not. Certainly some implementations (private clouds, for instance) make more sense in organizations that A) are able to take advantage of delivering IT as a Service for their business units, and B) can justify the kind of datacenter resources to support it. But small businesses can take advantage of cloud services, too.
Today in part 27 of our cloud series, Matt Hester talks about Windows Small Business Server (SBS) 2011 Essentials, and how that solution utilizes both your local resources (an SBS server) as well as cloud-based services for messaging and collaboration.
Check out his excellent post HERE.
Are you concerned that the cloud will eat your job? Perhaps. Every time there is a shift in how things are done based on improvements in technology or new options for efficiency, economics, scale… there is always someone crying foul. Or probably more appropriately, they’re worrying about what it means for their current situation. And that’s quite understandable. If you’re the guy responsible for buggy whip holders when the “Horseless Carriage” catches on, you eventually realize that your role is going to have to change a bit.
In Part 26 of our cloud series, Matt Hester discusses this issue, and does so in the context of one of Microsoft’s software-as-a-service solutions: Office 365.
Check out his post HERE.
Microsoft has some pretty interesting new directions with regard to what “the cloud” can do. For example – how about an Internet-based service that lets you monitor, manage, and deploy software to the workstations you’re responsible for?
“Yeah.. I’d love to see that.”
That’s what we have in the form of Windows Intune. Today in part 25 of our cloud series, John Weston fills you in on all the details. Check out his blog post HERE.
Back in part 2 of this 30-part series, John Weston introduced the topic of Hybrid Cloud; that is, the combining of public, private, and/or traditional IT into a system that works for you. Today in Part 24, he continues the discussion with a look at some examples of using public cloud services that extend from and augment your internal infrastructure.
Check out his blog post HERE.
* Happy Thanksgiving! *
You might recall that in April 2011 I introduced you (or perhaps re-introduced you) to Windows Azure in part 6 of my “Cloudy April” blog series. Windows Azure, in case you’ve forgotten (or have been living under a rock) is Microsoft’s Platform as a Service (PaaS) offering – allowing you to build applications and run them in the cloud – among (many) other things.
“Kevin, why are you (so) into parentheses?”
I don’t (really) know. Forgive me. Anyway… I also blogged in part 25 of that series about the options that you, as IT Professionals, have for managing and monitoring Windows Azure.
Here now, with my permission, is the (updated) content from that post:
Let me ask you something… Are you like many of the IT Pros I talk with about Windows Azure, who think, “Oh.. that’s cool. But it’s for developers. How am I going to manage it?”
“Yeah.. that’s what I’m thinking! It’s like you can read my mind!”
Exactly. I’ve heard it a lot from the IT Pros I’ve talked to, and quite honestly I thought it myself when Windows Azure was first introduced. For a while there, I was also frustrated that Microsoft didn’t have a better answer when it came to automating or otherwise controlling and monitoring your Windows Azure workloads; though I knew that more and better solutions than just watching some stream of logging information were “in the works”. Fortunately, now we’ve got some good solutions for you, and even more on the way. So I thought I’d take a minute to list some of the tools and options that are available, and some that are still to come, for the management of Windows Azure and SQL Azure.
The first thing you’ll want to do is walk through some of the free training guides.
“But Kevin.. that’s for developers.”
No.. not entirely. Yes, sure, you will want to install the platform and the training kit samples, but you won’t have to do any coding. The training kit comes with fully-completed example applications that you can quickly compile and package up for deployment into your trial account. And once you have that, the training walks you through the important steps of configuring storage, loading your application using the Windows Azure Management Portal, and working with the web-based management. Once you’ve got that down, further exercises show you how to use Windows PowerShell to securely manage and control your Windows Azure applications.
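To give you a feel for it, here is a minimal sketch of that kind of PowerShell management. I’m assuming the circa-2011 Windows Azure Service Management CmdLets snap-in; the snap-in name, cmdlet names, subscription ID, and service name below are illustrative and may differ in the version you install:

```powershell
# Sketch only: the snap-in and cmdlet names are from the circa-2011
# Windows Azure Service Management CmdLets and may differ in your version.
Add-PSSnapin WAPPSCmdlets

$subId = "11111111-2222-3333-4444-555555555555"     # placeholder subscription ID
$cert  = (Get-ChildItem cert:\CurrentUser\My)[0]    # pick your management certificate

# Check the status of the Production deployment of a hosted service
Get-HostedService "myservice" -SubscriptionId $subId -Certificate $cert |
    Get-Deployment -Slot Production |
    Select-Object Status, Url, DeploymentId
```

The point is simply that everything the Management Portal does is also reachable from a secure, scriptable command line.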
Also on the subject of PowerShell for Windows Azure, you really should watch Max Adams’ “How Do I” video on TechNet: http://technet.microsoft.com/en-us/ee957677.aspx
Second, you might take a look at the MMC.
“Really? There’s a snap-in for the MMC?”
Yes – the Windows Azure Management Tool. It’s not a Microsoft-supported tool, but it does a lot for you, such as managing your hosted services, monitoring diagnostics on performance and events, managing certificates, configuring storage, etc. It is even extensible, and drives PowerShell to do its work.
Ryan Dunn has also put together a nice 15-minute introductory video on the tool.
And finally, we have the Windows Azure Application Monitoring Management Pack that you can use with System Center Operations Manager. Here is the description from the download page:
Overview

The Windows Azure Monitoring Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure.

Feature Summary

After configuration, the Windows Azure Monitoring Management Pack offers the following functionality:
Discovers Windows Azure applications.
Provides status of each role instance.
Collects and monitors performance information.
Collects and monitors Windows events.
Collects and monitors the .NET Framework trace messages from each role instance.
Grooms performance, event, and the .NET Framework trace data from Windows Azure storage account.
Changes the number of role instances via a task.
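For context, that last item (changing the number of role instances) works because the instance count lives in the service’s configuration file. A minimal ServiceConfiguration.cscfg looks roughly like this; the service, role, and setting names are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="MyAzureService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- The "change the number of role instances" task adjusts this value -->
    <Instances count="3" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString"
               value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

So scaling a Windows Azure application out or back in is, at bottom, a configuration change; which is exactly why a management pack task can do it for you.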
To summarize: Here are the tools mentioned above, plus a few extras, that will help you get started in learning how to manage and monitor Windows Azure and Windows Azure applications:
UPDATE: I forgot to mention the Windows Azure Traffic Manager:
"Windows Azure Traffic Manager enables you to manage and distribute incoming traffic to your Windows Azure hosted services whether they are deployed in the same data center or in different centers across the world."
So that's a great way to automate some additional management functionality based on monitored aspects of traffic and performance. Very nice!
And in addition to what I posted back in April, I also want to point you to the recording of Joey Snow’s excellent talk from TechEd North America 2011, which I’ll go ahead and embed here also:
What are you using or hoping to use to manage your Windows Azure platform and your applications or storage? Are you using any other methods you’d like to share with us? We’d love to hear from you in the comments.
And if you have missed any of the “Cloud on Your Terms” series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.
Back in Part 10 of our “Cloud on Your Terms” series, I discussed the goals of building my own test environment – which I refer to as my “Private Private Cloud”. It was the first in a series of screencasts and blog posts showing the process of building your own test environment using spare hardware, free evaluation software, and basic networking.
In this Part 22 of our series I wanted to share with you my “Part 2” of my PPC series, in which I actually show off the environment that I’ve built. In later posts/screencasts, I will walk you through the process of doing the installations and configuration.
Please enjoy this video:
Don’t forget to go to http://aka.ms/evals if you want to evaluate any of the foundational software to create your own private cloud test environment. And if you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.
Back in part 10 of our cloud series, I described how I configured my own “Private Private Cloud”. One of the important elements of my test lab was to have a foundation that would support Windows Failover Clustering and the Live Migration of virtual machines. To do this in my “test lab” (the spare bedroom that is my downstairs office), I needed to have an inexpensive storage solution that supported it.
Today in part 21, Brian Lewis goes into detail about some additional options you have for building a SAN using inexpensive hardware. I might even have to put this on my Christmas list now.
Enjoy Brian’s part 21 HERE.
And if you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries
Have you ever wondered how Microsoft supports so much scale online for our online services such as Hotmail, Windows Azure, Windows Live, Windows Updates, etc.?
“Yes, I have. You must have a massive datacenter!”
Actually, we have many massive datacenters. “More than 10, but less than 100” is what I am told. Today in part 20 of our Cloud on Your Terms series, John Weston talks about our field trip to see one of those datacenters in San Antonio, TX, and directs you to more information on the team at Microsoft that is responsible for them.
Check out his post HERE.
Have you ever been puzzled by some problem only to find out that it was related to how your network is configured?
“Yes. All the time.”
Of course you have. And when you add the potential complexities of supporting a Private Cloud in your datacenter, let alone extreme virtualization, the problems may get harder and harder to track down.
In today’s Part 19 post, Matt Hester relates a tale or two from his experiences, plus leaves you with some valuable resources. Check out his post HERE.
Today in part 18 of our cloud series, John Weston bemoans the mistakes one might make while suffering the effects of datacenter hypothermia.
I guess Texans don’t work well under 75 degrees Fahrenheit. Read his post HERE.
Today in Part 17, it’s Brian Lewis’s turn to admit some “Doh!” moments and, more importantly, share what he learned from them during our build-out of our test datacenter in San Jose.
So if you don’t think sysprep is all that important, you should read his post HERE.
Disclaimer: This is a re-post of my Oct 4, 2011 blog post. But it fits our series, and it’s my content. So I am giving myself permission.
Okay.. I feel like sharing this because, while it’s pretty stupid, in a geeky sort of way the solution was interesting. Think Chicken & Egg (or “Catch-22”).
As the title of this post suggests, the subject is Windows Failover Clustering. For those of you who are not familiar with it, Windows Failover Clustering is a built-in feature available in Windows Server 2008 R2 Enterprise and Datacenter editions. Along with shared storage (which we implemented using the free iSCSI Software Target from Microsoft), it provides a very easy-to-configure-and-use cluster for serving up highly available services. In our case, those services would be virtual machines running on two clustered virtualization hosts.
As a training platform, but primarily for use as a demonstration platform for our presentations (and certainly more real-world than one laptop alone can demonstrate), our team received budget to acquire several Dell servers. We found a partner (Thank you Groupware Technology!) who was willing to house the servers for us. The idea was that we, the 12 IT Pro Evangelists (ITEs) in the US, would travel to San Jose in groups of 3-4 and do the installation of a solid private cloud platform, using Microsoft’s current set of products (Windows Server 2008 R2 and System Center). This past week I was fortunate enough to be a member of the first wave, along with my good buddies Harold Wong, Chris Henley, and John Weston. The goal was to build it, document it, and then hand it off to the next groups to use our documentation and start from scratch, eventually leaving us with great documentation, and a platform to do demonstrations of Microsoft’s current and future management suites.
We all arrived in San Jose Monday morning, and installed all 5 server operating systems in the afternoon. We installed them again Tuesday morning.
It’s a long story involving how Dell had configured the storage we ordered. We needed to swap some drives between machines and set up RAID and partitioning in a way that was more workable to our goals. I’ll leave that discussion for one of my teammates to blog about.
Anyway, once we had the servers up, I installed and configured the Microsoft iSCSI Software Target on our “VMSTORAGE” server, and configured two other servers as Hyper-V hosts in a host cluster, with Windows Failover Clustering and CSV storage. By the end of the week we had overcome hardware problems, networking problems, and missed BIOS checkmarks (did you know that Hyper-V will install, but you can’t actually use it, if you somehow miss enabling virtualization support on the CPU of one of the host cluster machines? Who’da thunk it?!), and we had 5 physical and a half-dozen virtual servers installed and running, with Live Migration enabled for the VMs in the cluster. Our domain had two domain controllers; one as a clustered, highly-available VM, and the other as a VM that was not clustered, but still living in the CSV volume; C:\ClusterStorage\Volume1 in our case. (That’s a hint, by the way. Do you see the problem yet?)
One of the many hurdles we had to overcome early on was an inadequate network switch for our storage network. 100Mbps wasn’t going to cut it, so until our Gig-Ethernet switch arrived on Friday, Harold used his personal switch that he carries with him. On Friday before we left for the airport, we shut down the servers and let the folks there install the new switch. Harold needed his switch back at home.
But in restarting the servers, here’s the catch: Windows Failover Clustering requires Active Directory. The storage mount-point (C:\ClusterStorage\Volume1) on our cluster nodes requires the Failover Clustering. And remember where I said our domain controllers were?
“Um.. So… Your DCs couldn’t start, because their storage location wasn’t available. And their storage location wasn’t available, because the DCs hadn’t started. And the DCs couldn’t start, because… !!”
Bingo. Exactly. Chicken, meet Egg. It was our, “Oh shoot!” moment. (Not exactly what I said, but you get the idea.)
“So how did you fix it?”
I’ll tell you…
Our KVM was a Belkin unit that supports IP connections and access to the machines through a browser. We configured it to be externally accessible, so I was able to use that to get into the physical servers and try to solve this “missing DCs” puzzle; though to make matters much more difficult, the web interface for that KVM is really, REALLY horrible. The remote pointer didn’t track my mouse directly, there was no ALT key support, and the TAB key didn’t work.. I ended up doing a lot of the work from a command line simply because it was easier than trying to line up and click on things! Perhaps in a future blog post I will give Belkin a piece of my mind regarding this piece-of-“shoot” device…
So, my solution was based on two important facts: the cluster’s shared storage was really just a .VHD file sitting on the storage server, and Windows Server 2008 R2 can natively attach a .VHD file as a disk.
“Ah ha! So on the storage machine, you mounted the .VHD file that was your cluster storage disk, and you copied out the .VHD file from one of the domain controller VMs!”
Yeah.. that’s basically it. Though I did have one problem. The .VHD file was in-use; probably by the iSCSI Software Target service. So when I tried to attach it, the OS wouldn’t let me.
Fortunately I found that by stopping that “Microsoft iSCSI Software Target” service on the storage server (I also stopped the “Cluster Service” on the two Hyper-V cluster nodes), I was able to attach to the .VHD, navigate into it, and copy out the .VHD disk for the needed Domain Controller. (Actually, I also removed the .VHD from its original location. I didn’t want the DC to come alive again when the storage came back online, if the identical DC was already awake and functioning.)
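In case it helps anyone in the same bind, here is roughly what that looked like from the command line. The paths and file names below are hypothetical, but “WinTarget” is the service behind the “Microsoft iSCSI Software Target”, and diskpart’s vdisk commands are how Windows Server 2008 R2 attaches a .VHD:

```
rem Stop the service holding the cluster-storage VHD open
net stop WinTarget

rem Attach the cluster-storage VHD, then copy out the DC's own VHD
rem (paths below are hypothetical)
diskpart
DISKPART> select vdisk file="D:\iSCSIVirtualDisks\ClusterStorage.vhd"
DISKPART> attach vdisk
DISKPART> exit

copy E:\VMs\DC01\DC01.vhd C:\LocalVMs\DC01\
```

Once the copy was safely out (and the original removed), the attached VHD could be detached and the services restarted.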
So after that, it was as simple (?) as creating a new, non-clustered virtual machine on local storage pointing at the copied-out .VHD, starting up that domain controller, and then restarting the iSCSI target and cluster services so the cluster storage could come back online.
Everything came back to life almost immediately; including the Remote Desktop Gateway that we had configured so that we could remotely connect to the machines in a more meaningful, functional way.
So the moral of the story is: When you’re building your own test lab, or even considering where to put your DCs in your production environment, make sure you have at least one DC that comes online without depending upon other services (such as high-availability solutions) that, in turn, require a DC to be functioning.
All-in-all, it was a great week.
Today in part 15 of our November Cloud blog series, my buddy John Weston lays out some of the technologies that will support your efforts in implementing an architecture that can span clouds.
“Span clouds? Huh?”
You’ve heard “private cloud”, “public cloud”, etc.. but a solution or an application or service that has portions of its architecture in your local datacenter and other portions that are external is more of a hybrid approach. Hence the term “Hybrid Cloud”.
Check out John’s blog post HERE.
Consider the following chart that diagrams the delivery of “IT as a Service”. This is what the private cloud is all about. And the way tasks may get done in an automated fashion is going to play a very important role.
In the Microsoft solution, the tool that will allow you to create, test, and perform that automation is called System Center Orchestrator 2012. You may know it by another name…
“Hey Kevin.. isn’t that what Opalis does?”
As I was about to say.. Yes, the current product is called Opalis. And in the new product, coming out as a part of the System Center 2012 wave, the functionality in Opalis is brought into Orchestrator; with some additional functionality included.
First I want to summarize the main areas where Orchestrator shines. And I like to think of it in musical terms such as an orchestra; the hall, the players, their instruments, the music, and the conductor (which would be you):
Process Integration - There’s a tight integration with the rest of System Center – particularly with System Center Service Manager 2012. It also preserves and integrates with your existing investments in other tools and processes; not just Microsoft’s. We can think of this connecting of heterogeneous environments together as the musical instruments and the players in our Orchestra. Something needs to bring them together.
Orchestration – It’s not enough to have all of the players and their instruments in the same room. Now you have to give them something to perform, and get all the different bits working together, in the right order, in the right way. Orchestration is the sheet music.
Automation – Now that we’ve defined the symphony, we let it fly. You are the conductor. The music flows at your command, and in your timing.
“But how is that different than Opalis?”
That’s a fair question. And here’s how I have heard it described… Opalis is a tool built mainly for the IT Professional. It allows you to author, test, and debug runbooks.
“Runbooks?”

Yes. Runbook Automation (RBA) is the ability to define the steps that are to be performed, plus the inputs and outputs, and the order in which they are to happen. (A happens before B, C depends upon B completing successfully, etc. A –> B –> C…) Defining an overall process that involves many and varied steps and dependent inputs and outputs is where Opalis really shines. Consider the following set of steps: a folder is being monitored, and when a new file enters the folder, a task launches to copy that file to another folder, and then to make note of the operation in the event log.
It’s a simple, automated process that involves several steps; each depending upon the previous step.
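Outside of a runbook tool, you could express that same A –> B –> C flow in a few lines of PowerShell. This is just a sketch to make the dependencies concrete; the folder paths and the event log source are made up, and the source would need to be registered first with New-EventLog:

```powershell
# A: watch the drop folder for new files
$watcher = New-Object System.IO.FileSystemWatcher -Property @{
    Path                = "C:\DropFolder"
    EnableRaisingEvents = $true
}

Register-ObjectEvent -InputObject $watcher -EventName Created -Action {
    $file = $Event.SourceEventArgs.FullPath
    Copy-Item $file -Destination "C:\Processed"        # B: copy the new file along
    Write-EventLog -LogName Application -Source "FolderRunbook" `
        -EntryType Information -EventId 1000 `
        -Message "Copied $file"                        # C: note the operation
}
```

The value Opalis (and now Orchestrator) adds on top of a script like this is the authoring, testing, debugging, error handling, and monitoring around those steps.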
And the job of actually launching and running these runbooks in Opalis was primarily the IT Pros’. But in System Center Orchestrator 2012 we take it to the next level and provide benefit for these additional “audiences”:
IT Business Manager – Orchestrator gives business managers and application owners direct visibility into the processes that they are interested in or have oversight for. Through a web console they can gain quick access. They can pull information from the product through the provided web service and plug it into BI or use Excel PowerPivots to work with data as a data feed.
IT Operator – Initiate, monitor, and troubleshoot automated tasks.
Developer – You can build your apps to include support for being driven by and reporting to Orchestrator, in the form of Integration Packs (IPs).
IT Professional – As before, the job of authoring, testing, and debugging the runbooks is your main focus here.
System Center Orchestrator 2012, like the other parts of the System Center 2012 product set, is well integrated as a part of the whole solution, and works on behalf of the whole business and the needs of people based on their roles.
“Is there a beta or RC available for Orchestrator 2012? And when will it be released?”
I don’t know when exactly the release date is, though I’m fairly confident that it will come out along with the other products in the System Center 2012 suite. And yes, there is a Release Candidate of Orchestrator 2012 that can be downloaded from HERE, along with the other prerelease System Center 2012 products.
I hope you have found this contribution to our 30-part series useful. Let me know in the comments. And if you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.
Today in lucky part 13 of our 30 day “Cloud on Your Terms” series, my friend (and the guy who hired me at Microsoft) John Weston introduces you to System Center Operations Manager 2012.
Read it HERE.
Today we have a guest writer! Doug Hazelman of Veeam Software shares some ideas on why it’s important to have common management for your private cloud.
Read their article HERE.
“Seriously, Kevin? A ‘love story’?”
Yes indeed. Are you and your sweetheart better together? So is Hyper-V and SCVMM. And that’s what Brian Lewis is discussing in Part 11 of our 30-part “Cloud on Your Terms” blog series.
Read it HERE. Ahh.. l’Amore!
For those of you who missed any of these and want to catch up, here is a list that I will keep updated with links to the team’s (me, Brian Lewis, John Weston, and Matt Hester) posts.
And don’t forget to go to http://aka.ms/evals if you want to evaluate any of the foundational software to create your own private cloud test environment.
In part 10 of our series today, I wanted to kick off a sort-of-sub-series of my own. During our IT Camps this month (the afternoon portion of our TechNet Events), I’ve been taking some time to show off my personal test environment, and how I set it up. I’m finding that there is a lot of interest in learning and seeing just how easy it is to build a private test environment using cheap hardware and evaluation software. And since I have had some pretty good success in doing just that, I thought I would document the process for you all.
So today I’m going to outline some of the goals in building what I’m calling my “Private Private Cloud”. Here is what I hoped to achieve as my desired outcome:
In my role at Microsoft I have been fortunate in a couple of key ways:
For various presentation / event / launch needs, hardware has come our way to support it. And though I would likely have to return it if I left Microsoft for any reason, I am able to keep it for now. The need for a good test environment is putting this older hardware to good use.
To support my test lab, I am using 3 Lenovo laptop computers. (You could do it with 2, and I’ll outline that in the post that talks specifically about the hardware setup.) These are all connected to my test network using a 1Gbps Linksys Router.
Here is a diagram showing my configuration (click to enlarge):
The two T61Ps at the bottom of my simple diagram are the machines that make up my 2-node failover cluster. Also, to support a large quantity of shared storage, I have a 2TB drive (7200RPM) connected via USB3 to my W520 “storage server”.
Short of having a TechNet Subscription (which I highly recommend – for many reasons – but most relevant to this topic because the evaluation software doesn’t time out), you have everything you need at your disposal FOR FREE for building a private cloud based on Microsoft’s solutions. Check out http://aka.ms/evals for a list of, and links to, the evaluation software you need for building out your test lab. Most importantly, you’ll be starting with Windows Server 2008 R2 SP1.
Additionally, in order to build the shared-storage for supporting Windows Failover Clustering, you’ll need to download the Microsoft iSCSI Software Target. This is what we’ll install on the “server” that we’re designating as our “storage server”.
On all three of my “servers” I have added the Hyper-V role. Originally my test server was just the W510 (top of the diagram), which has two SSDs (Solid State Drives) for a really fast OS and fast virtual machines; so it’s still a good place to build and support faster-to-build, faster-performing demo examples. And I installed the Hyper-V role on the two T61Ps also, as they are the two nodes of my Hyper-V cluster.
“Hey Kevin, what does Hyper-V cost?”
Ah.. Virtualization from Microsoft is simply included with Windows Server. So is Windows Failover Clustering. And so is Live Migration. (How much are you paying VMware for vMotion?)
Speaking of failover clustering.. that also was a goal of this test environment. Until recently the option to build relatively cheap (free?) shared storage for implementing failover clustering was non-existent. But now with the ability to install the free Microsoft iSCSI Software Target, it’s easy to create and become familiar with your own high-availability test bed. So I wanted to take advantage of that here.
“What is CSV?”
CSV stands for Cluster Shared Volumes. This is a technology introduced in Windows Server 2008 R2 that gives clustered Hyper-V nodes shared access to the same volume and its files. Once you have Windows Failover Clustering configured, you can dedicate some storage to be used for CSV, which is then the recommended location for the virtual machine files of your HA (highly available) virtual machines.
Because the live migration (the moving of a running virtual machine from one cluster node to another with zero downtime) requires Failover Clustering, which in turn requires shared storage, it wasn’t easy until recently to implement this in a test-lab-on-a-budget. But now we can.
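As a rough sketch of what that build looks like on Windows Server 2008 R2 (the cluster name, node names, address, and disk name below are placeholders, and I’m glossing over the iSCSI initiator setup):

```powershell
# On each node: add the Hyper-V role and the Failover Clustering feature
Import-Module ServerManager
Add-WindowsFeature Hyper-V, Failover-Clustering -Restart

# Then, from one node, once the shared iSCSI disks are connected:
Import-Module FailoverClusters
New-Cluster -Name "HVCLUSTER" -Node "NODE1","NODE2" -StaticAddress 192.168.1.50

# Enable Cluster Shared Volumes and dedicate a clustered disk to it
(Get-Cluster).EnableSharedVolumes = "Enabled"
Add-ClusterSharedVolume -Name "Cluster Disk 1"
```

From there, any VM whose files live under C:\ClusterStorage can be made highly available and live-migrated between the nodes.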
Key components of a true private cloud are the tools to manage and support it. Virtualization alone does not a Private Cloud make. So to fully implement my own private private cloud, I need to acquire, install, and work with the System Center products; both the current set as well as the ones coming in the System Center 2012 wave. And fortunately, like the Operating System, you can find links to free trials, betas, and release candidates at http://aka.ms/evals.
So there you have both the goals and some of the higher-level components of my own personal “mad house of cloud”. Watch for future posts (and perhaps some screencasts) where I’ll take you through step-by-step in the building of my private private cloud.
If you have missed any of the series posts, check out my summary post for links to all of the articles available so far at http://aka.ms/cloudseries.
“Hey Kevin.. We’re considering adding Hyper-V virtualization to our operations. What hardware will I need to make this happen?”
You’re in luck! Today in part 9 of our 30 day series, Brian Lewis covers just that, and shares some real world examples.
Today my friend Brian Lewis tells you how to get Hyper-V installed. Check it out!
“And does it..”
Yes. It actually looks like that robot Hyper-V mascot.
We’re no fools. We know that if you’re using Virtualization in your business, you're very likely using or have considered using VMware. And we also know that you may be considering Hyper-V for an additional (or alternative) virtualization platform. At least I hope you are.
That said, and if you’re just starting out learning Hyper-V, you may be confused about what terms or technologies equate from VMware into the world of Hyper-V, Windows Server 2008 R2, and the System Center suite. So, for your benefit, here is a quick list of VMware terms/technologies and their equivalent technology in the Microsoft Virtualization world.
Self Service Portal
System Center Data Protection Manager (DPM)
System Center Virtual Machine Manager (SCVMM)
Distributed Resource Scheduler (DRS) → Performance and Resource Optimization (PRO)
Virtual Machine Terminology
VM IDE Boot
Hot Add Disks, Storage, Memory
Distributed Power Management → Core Parking & Dynamic Optimization
SCVMM P2V / V2V
Virtual Machine Servicing Tool (VMST)
VMDK (Virtual Machine Disk) → VHD (Virtual Hard Disk)
Raw Device Mapping
Quick Storage Migration
Expand Disk / Volume
VMware HA (High-Availability)
Cluster Shared Volumes (CSV)
Incidentally, this list was “borrowed” from the recorded content that is available on the Microsoft Virtual Academy; specifically the excellent recordings of Corey Hynes and Symon Perriman’s “Microsoft Virtualization for IT Professionals” training sessions (and even more specifically, Module 2). I highly recommend this training, and any other content that looks useful to you. It’s free, and very well done.