I had a few Partners I was working with tell me that they were having some problems figuring out how to get Azure Connect working. Specifically, they wanted to connect a physical machine or VM on-premise to a VM in Azure and allow the two to communicate by IP or hostname. Turns out this isn’t all that complicated, but the documentation that exists assumes you know your way around Visual Studio and how it interfaces with Azure. For those of us who don’t know VS and don’t have a lot of experience with Azure…but just want to get this working…this is for you. :)
NOTE: This blog post isn’t the place to go deep into all the various connectivity options available in Azure, but essentially you have some hardcore VPN-type capabilities for broader access – similar to how you would set up a remote office, for example. Azure Connect is a client-based mechanism that lets you create groups of computers – physical or virtual, on-premise and in Azure – that can ‘talk’ to each other via IP address and hostname. Before you get too much further, know that Azure Connect is 100% IPv6. So, make sure you have IPv6 running and enabled at least on the endpoints you’ll be working with, otherwise none of this will work properly.
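A quick way to sanity-check IPv6 on an endpoint before you start – just a sketch using built-in Windows commands (interface names and addresses will vary per machine):

```shell
# Show interfaces and confirm IPv6 addresses are present
netsh interface ipv6 show interfaces
netsh interface ipv6 show addresses

# Quick smoke test: ping the IPv6 loopback address
ping -6 ::1
```

If the loopback ping fails, IPv6 is disabled or unbound on that machine and Azure Connect won’t work until you fix that.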
This Video, in about 3 minutes, will help you better understand the Azure VPN/Connectivity Options
Easy VPN – Using Azure Connect to Create a Secure Network Connection between two on-premise machines
If you read some of the instructions, it seems to be pretty straightforward. THIS is probably the best documentation on how to do this that I’ve seen thus far – except it assumes that you know how to finesse a Visual Studio Azure Cloud Project, which many folks trying to do this task – hard core infrastructure folks – don’t have much or any experience with.
So, without re-doing the entire TechNet article I just referred you to – I’ll fill in a few of the blanks. I’ll admit, I’m not a Visual Studio guy either…so I just had to bang my head against it a little and eventually figured out the one check-box I needed to uncheck to make everything work perfectly.
GETTING STARTED:
First thing – you obviously need access to an Azure subscription and you have to enable the VM Preview. I walk through that, as well as how to interface Azure VMs with System Center App Controller, in THIS post. At the time I published this, the most current System Center release is the SP1 Beta. You have a few options – you can set up the 90-day free trial OR, if you have an MSDN subscription, you get access to Azure, which is what I’m using.
Most of the setup happens in the old-school Azure Management Console (not the new preview console) and in Visual Studio. I used Visual Studio 2012 and downloaded the Azure SDK. The Azure SDKs can be found here.
If you’re not sure how to get back to the old-school GUI you simply click on the green “PREVIEW” button in the new console and it will give you the option to go back:
I have an MSDN subscription, so I used the “Ultimate” SKU, but you can use less than that. The SDK installation is pretty straightforward; probably the biggest obstacle I faced was figuring out how to get started with an Azure Cloud Service Project from the “New Project” wizard. The GUI defaults to .NET Framework 4.5…and no Azure stuff shows up in there. You have to pull the drop-down at the top and select .NET Framework 4…Ah, now you see it!
From there, I chose the Visual Basic Worker Role (I tried it with C# as well – it works too). Notice that if you want to rename the worker role (and you probably do) to something more identifiable, you have to click the pencil icon in this GUI to make that change:
Now, once you’re in the project, there are only a few things you have to do before publishing it.
First, Import your Azure Subscription into VS. The process is pretty self-explanatory – just go into your Azure Management Console (the old school one) and copy/paste your subscription ID into VS.
Once you do this – VS will enumerate the VM’s that you have created in Azure. I’ve underlined a few key areas that you have to pay attention to here. First of all highlight your VM in the server explorer and then in the Solution Explorer double click or right click for properties on the WorkerRole that you created for this project. It will bring up what you see in the middle here.
UNCHECK THE DIAGNOSTICS. You don’t need it to create this service, and if you leave it checked you’ll get warnings/errors when the project builds/publishes.
The other thing that you’ll have to do in here is in the properties of the WorkerRole (middle of the screen) you need to click down to the Virtual Network settings. From here, you need to get the activation token from your Azure Management Console.
Here’s where you get the activation token that you’ll paste into that field. When you click the icon, it will give you the code to paste.
If you did that last part right, when you click back to the “Settings” tab in the WorkerRole, you should now see your Token:
Now you can publish your service to Azure. Just go to the BUILD menu and choose PUBLISH for your Azure Project – it will start the process and you’ll eventually see it in Azure.
OK, NOW WHAT?
Go back and follow the directions in the TechNet Guide I referenced as far as how to setup the local endpoints, etc… It’s spot on there. But basically, in the Azure Management Console –> Virtual Network you’ll see the “Install Local Endpoint” icon. You will want to install this on both the on-premise physical or virtual machine as well as the Azure VM.
Once you do, you’ll see them populate in the GUI:
The next step, and this is also well documented in the TechNet article is to create the Group so that everything can communicate with each other.
You click on the “Create Group” icon in the screen above to do this. From here you add the endpoints that Azure sees – in my case ‘labmgmt.virt.lab’ is my domain joined machine running on-premise and knlazurevm is, well, the Azure VM. You’ll want to check the box to allow connections between endpoints in the group and then of course you add in your Azure roles that you created in Visual Studio and published to Azure.
What should happen at this point is that your Azure Connect tray icons light up. If you’re impatient like I am, you can right-click on the icon and choose ‘refresh policy’ and it should come to life.
Now, you should be able to ping FROM on-premise TO Azure using the Role Instance IDs.
You’ll get an IPV6 reply:
Now FROM the Azure VM TO the on-premise physical/VM you should be able to ping by hostname or FQDN:
FROM the on-premise physical/VM TO the Azure VM you can ping by name as well:
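Pulling the three connectivity tests together as a sketch – the hostnames here are the ones from my lab, and the role instance ID is a placeholder, so substitute your own:

```shell
# From on-premise to the Azure worker role, by Role Instance ID
# (replies come back as IPv6 addresses)
ping <RoleInstanceID>

# From the Azure VM to the on-premise machine, by hostname or FQDN
ping labmgmt.virt.lab

# From the on-premise machine to the Azure VM, by name
ping knlazurevm
```

If any of these fail, double-check that both endpoints are in the same Azure Connect group and that the connection-between-endpoints box was checked.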
Now you could create, for example, a SharePoint instance in Azure and connect it, via FQDN, to the SQL Server running on-premise that will back-end it.
CONNECTING AZURE VM’S TO ACTIVE DIRECTORY ON-PREMISE:
Now, if you want to make it real fun – connect your Azure VM’s to your on-premise AD. To do this, you’ll need to install that local endpoint from the Azure Management GUI to one of your domain controllers. Now, you’ll see it show up in the endpoints screen we talked about above. You now need to go in and EDIT your endpoint group and then add in your domain controller so that it can talk to everyone as well.
In the Azure VM, you’ll need to make one change to the IPV6 DNS so that it can see your on-premise DC.
In the Azure VM – do a ping to the FQDN of your DC. Copy that IPV6 address and in the network properties of your NIC change the default IPV6 DNS server to the one that matches your on-premise DC.
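If you’d rather script that DNS change than click through the NIC properties, something like this should do it – a sketch only; the DC name, interface name, and IPv6 address are placeholders for your environment:

```shell
# Find the DC's Azure Connect IPv6 address (DC name is a placeholder)
ping -6 dc01.virt.lab

# Point the NIC's IPv6 DNS at the on-premise DC
# (substitute your interface name and the address from the ping above)
netsh interface ipv6 set dnsservers name="Local Area Connection" source=static address=<DC-IPv6-address> register=primary
```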
Now, you’ll be able to add your Azure VM’s to your on-premise Active Directory!
Here’s a fun screenshot – my Azure VM that’s domain joined, logged in as a domain administrator and using some AD tools:
Something else that’s kinda fun is the ability (with on-premise computers that have Azure Connect installed and are added to the same group) to use on-premise Server Manager to connect to and manage VMs in Azure. In my case, I set up a new group called “AZURE SERVERS” and was able to add ‘knlazurevm’ by hostname. Now I can manage my Server 2012 instances in the ‘cloud’ the same way I do my on-premise ones. Nice!
Good stuff!
Have fun and enjoy!
I wanted to provide a walk-through of what the current set of tools provides in terms of setting up and sharing documents via RMS. For more detailed information on RMS check out the TechEd 2014 session delivered by Enrique Saggese, a Program Manager on the RMS team.
Deploying RMS for Cloud-Friendly and Cloud-Reluctant Organizations
First thing you need to do is go to the Azure RMS Portal at https://portal.aadrm.com/ and download the latest RMS application for your device. If your company is already using RMS, either on-premise or in the cloud with Azure RMS, you will be able to ‘connect’ the RMS client to your existing templates. The RMS client also seamlessly integrates with the Office 2013 suite.
Outlook Integration:
Office Apps (Word, Excel, etc…) integration:
With the RMS client, you can connect to existing templates created by your administrators either on Windows Servers running the RMS feature or Azure RMS.
In my case above, I have an O365 tenant I demo from and I’ve configured the templates using Azure RMS. The first time you open the RMS client you’ll see the option to ‘connect to RMS service…’ in the place where you see my existing templates. Once it’s made the connection from that point on, you’ll see the actual templates available when you use the RMS client.
Now, let’s go to the RMS portal and set up our account and download the client. If your organization is already using Azure Active Directory, you won’t need to set up a new account – the RMS client will simply start working with your existing RMS setup.
If your organization is already configured to work with Azure AD, then you might see a message like this after entering your email address:
In which case, once you click ‘NEXT’ you will be prompted to authenticate with your credentials associated with that email (assuming it’s a corporate login for example) and you’ll see the following screen where you can download the RMS client to your computer:
Now, if you don’t already have an account, you’ll still see a similar screen – you just won’t see the few previous screens telling you that your company is already configured for RMS. Either way, you’ll be able to download the RMS client to your machine and start using the service.
Once the RMS client is installed, you’ll see new context menus when you right-click on items. Let’s create a document in Word and save it on the desktop. The first option is “Share Protected,” which essentially launches the RMS client and allows you to enter email addresses (Live IDs, Gmail, Yahoo, outlook.com, etc… are not accepted at this time) and assign permissions to the recipient.
RMS will protect the document then open Outlook to send the email.
When the recipient receives the email one of a couple things will happen. If their user account is already in Azure AD (let’s say they are an existing O365 customer which would be the most common scenario), then they will be able to open the document in Word without having to set anything else up.
If the email domain of the recipient is not in Azure AD, then per the email, they will be sent to the sign-up page to create an account.
After they sign up, they will receive an email asking them to continue on and complete the sign-up process.
The recipient will then fill in a few pieces of information:
It takes a few seconds to provision the account then the recipient is passed along to the page where they can download the appropriate RMS client for their platform.
Now when the recipient opens the protected document, they are prompted for the credentials they just created for the RMS client:
The recipient now has ‘view’-only access, as granted, using either the RMS client reader or Word 2013.
The Azure Team recently announced a new high performance VPN gateway. Details here:
http://azure.microsoft.com/blog/2014/12/02/azure-virtual-network-gateway-improvements/
The net of it is – you get ~200 Mbps and 30 S2S tunnels versus ~80 Mbps and 10 S2S tunnels on the standard VPN gateway.
One thing to note – you’ll likely need the latest version of Azure PowerShell for this command to work properly.
You can get that here: Azure PowerShell 0.8.12
If you already have Azure PowerShell installed, this will upgrade that installation.
Once that’s installed – the commands are easy.
1) Run 'Add-AzureAccount' to add your Azure credentials for the subscription where the S2S gateway you want to upgrade is configured.
2) Run the following command – obviously substituting the name of your S2S VPN. The process will take a while, so don’t worry if it doesn’t complete immediately. I quit watching mine after about 5 minutes and it still wasn’t done, and I didn’t look again until about 30 minutes later. So somewhere between 5 and 30 minutes. :)
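For reference, the two steps look something like this in an Azure PowerShell console. This is a sketch: the cmdlet and SKU names are from the service-management module of that era, so verify them against Get-Command in your install, and "MyVNet" is a placeholder for your own virtual network name:

```powershell
# 1) Add your Azure credentials for the subscription holding the gateway
Add-AzureAccount

# 2) Resize the gateway for the VNet that holds your S2S tunnel
#    (VNet name below is a placeholder)
Resize-AzureVNetGateway -VNetName "MyVNet" -GatewaySKU HighPerformance
```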
That’s it! Have fun!
For those of you who are already running Windows Server 2012 – don’t forget that you can preview the Azure Online Backup Feature for free for 6 months.
Here’s how you get started:
Open Windows Server Backup and click the “continue” button, which launches a page that has you create an Azure Active Directory account:
From here, set up your account.
Once you get logged in, you’ll see your Azure Online Backup Account being provisioned:
Once that’s done (just takes a few minutes…) click ‘manage’ and get started.
From here you’ll download the agent and install on the server/servers that you want to backup:
You’ll notice that you get a healthy 300GB of storage to use.
Once you download the agent and get it installed, restart Windows Backup and you’ll see some new stuff in there.
Just follow the directions to register your server, generate a passphrase, enter your credentials and you’re ready to go.
From here, you can start doing backups! You can set schedules and you also get an option to back them up at a point in time should you need to.
You also get the ability to throttle the network if you need to do that.
Have fun!
With the recent release of Windows Server 2008 R2 Service Pack 1, those of you running Hyper-V are probably looking for an easy way to start updating your integration components. I’d guess the primary reason is so you can leverage the new dynamic memory feature in your VMs, which for Server 2008 R2 and Windows 7 VMs requires either updated ICs or the full installation of the service pack.
If you have VMs to which you aren’t ready to apply the full SP – or Windows 2003 R2 / Windows 2008 VMs that you want to update – and are looking for a way to shave some serious time off this process, here’s one way to do it.
I ran across Charles Joy’s blog post on this subject. He has everything that you need to know – including the entire workflow that you can download and import directly in to Opalis. You’ll just have to match up the variables and other published data to your environment – but the hard part (the workflow) is done for you.
You can find the blog post HERE. Scroll to the bottom of the page for the zip file link to download the workflow.
I ran through the process in my own environment (link to a short video of my experience). One thing I don’t think Charles mentions anywhere in his video or blog is that the VMs you run this workflow against MUST HAVE WINRM enabled. He does mention that the PowerShell script uses WinRM, but I glossed over that and had that component error out on me on a few test runs until I figured out that the VM I was attempting to manipulate didn’t have WinRM configured.
If you are in the same boat – it’s an easy fix. Simply open a cmd prompt in the VM and type:
winrm quickconfig
Say yes to the prompts and off you go.
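If you’d rather not answer prompts on each VM, the same thing can be scripted – a sketch using the built-in commands:

```shell
# Non-interactive equivalent of answering 'yes' to the quickconfig prompts
winrm quickconfig -q
```

Alternatively, from a PowerShell prompt, `Enable-PSRemoting -Force` does the same WinRM and firewall configuration work (plus enabling PowerShell remoting) in one shot.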
Don't forget to check out the Opalis Survival Guide - everything you need to know to get up and running with Opalis!
Enjoy!
The CentOS Linux distribution is now supported as a guest within Hyper-V. Please see Sandy Gupta’s blog post:
http://blogs.technet.com/b/openness/archive/2011/05/15/expanding-interoperability-to-community-linux.aspx
==============================================
FAQ
Q: What CentOS versions are supported?
A: CentOS 5.2 through 5.6 (32-bit and 64-bit versions) are now supported as Hyper-V guests. Support will cover installation issues as well as configuration issues.
Q: Will you be adding support for additional Linux distributions?
A: We continue to evaluate adding additional Linux distributions to the supported list.
Q: What version of the Linux Integration Services support CentOS?
A: The existing Hyper-V Linux Integration Services for Linux Version 2.1 support CentOS. The following features are included in the Hyper-V Linux Integration Services 2.1 release:
· Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.
· Driver support for synthetic devices: Linux Integration Services supports the synthetic network controller and the synthetic storage controller that were developed specifically for Hyper-V.
· Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
· Timesync: The clock inside the virtual machine will remain synchronized with the clock on the host.
· Integrated Shutdown: Virtual machines running Linux can be gracefully shut down from either Hyper-V Manager or System Center Virtual Machine Manager.
· Heartbeat: Allows the host to detect whether the guest is running and responsive.
· Pluggable Time Source: A pluggable clock source module is included to provide a more accurate time source to the guest.
The Linux Integration Services are available via the Microsoft Download Center here: http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=eee39325-898b-4522-9b4c-f4b5b9b64551
Q: I’m unfamiliar with the different Linux distributions available. Can you tell me more about CentOS?
A: From Wikipedia:
CentOS is a community-supported, mainly free software operating system based on Red Hat Enterprise Linux (RHEL). It exists to provide a free enterprise class computing platform and strives to maintain 100% binary compatibility with its upstream distribution. CentOS stands for Community ENTerprise Operating System.
Red Hat Enterprise Linux is available only through a paid subscription service that provides access to software updates and varying levels of technical support. The product is largely composed of software packages distributed under either an open source or a free software license and the source code for these packages is made public by Red Hat.
CentOS developers use Red Hat's source code to create a final product very similar to Red Hat Enterprise Linux. Red Hat's branding and logos are changed because Red Hat does not allow them to be redistributed.
CentOS is available free of charge. Technical support is primarily provided by the community via official mailing lists, web forums, and chat rooms. The project is not affiliated with Red Hat and thus receives no financial or logistical support from the company; instead, the CentOS Project relies on donations from users and organizational sponsors.
I wanted to build a box to dance a little with RemoteFX. Turns out, the spare I have is running an older CPU that doesn’t have SLAT support which means ‘no bueno’. (In my case an Intel 6600 Quad Core)
Here’s more detail on what you need to run RemoteFX from a hardware perspective:
There are several hardware requirements that must be met when deploying a RemoteFX server:
Also worth noting:
For a list of GPUs that will work with RemoteFX in Windows Server 2008 R2 with SP1, see this blog post (http://go.microsoft.com/fwlink/?LinkID=197416). The list of GPUs will grow and evolve for the final release of Windows Server 2008 R2 with SP1. For a list of recommended GPU drivers, see this blog post (http://go.microsoft.com/fwlink/?LinkID=197417).
And a final important point:
In order to use Live Migration, the source and destination RemoteFX servers must have the same GPU installed in the server.
All this can be found here: http://technet.microsoft.com/en-us/library/ff817602(WS.10).aspx
Jim and Travis put together a great ‘how-to’ video on getting a good workflow going with Service Manager and SCVMM using Opalis. They show you step-by-step how to configure each object and how to configure Service Manager – including creating the management pack using the Service Manager Authoring Tool (you can download this HERE).
http://blogs.technet.com/b/servicemanager/archive/2010/11/16/how-to-automate-vm-provisioning-in-20-minutes-using-service-manager-and-opalis.aspx
I was able to get this up and running in my lab – works great and is an effective demo on how to integrate SCSM, Opalis and SCVMM.
I wanted to take it a step further though. How would you do this if you had a VMware vSphere environment?
Well, the good news is - it's not that hard. :) Here’s a link to a video I recorded that shows the entire process of provisioning using VMware.
1) Go back and watch the video on the Service Manager Blog. In my case, instead of using "VM" in the templates, forms, lists, etc..., I used "VMware". You'll essentially re-create that entire CR process - but you'll do it for VMware.
2) Copy and paste your Opalis workflow (just use the mouse and select all the objects) to a new policy. In a little bit, you'll see which SCVMM objects you'll need to replace with those from the VMware vSphere Integration Pack.
3) You’ll need to make sure that you have the VMware vSphere Integration Pack installed on your Opalis Action Server and you’ll see it in the Opalis Client:
4) You'll also want to go ahead and create the VMs (using Jim and Travis' example - SMCLONE-SMALL, SMCLONE-MEDIUM and SMCLONE-LARGE) on your ESX hosts. For time's sake, I created a blank VMDK so that the cloning process takes a matter of seconds instead of minutes (or more) for fully installed VMs.
For reference – here are the differences between the two workflows. The first is the one Jim and Travis show you in the video. The second shows the objects you can use to do the exact same thing in your VMware environment.
In my case, I have a 4.0 cluster using vCenter.
Now with VMware:
As you can see, you’ll need to replace the three VM related objects with the appropriate ones from the VMware IP.
Here’s what my settings look like for those 3 objects:
The “GET VM LIST” is pulling published data from the bus – same as the GET VM in the SCVMM workflow:
The “CLONE WINDOWS VM” is a little different from the CREATE VM FROM VM – it’s pretty straightforward, but there are a few more things to consider. Yours won’t look like mine – it will be unique to your environment – but here are a couple things to keep in mind:
1) In the “Source VM/Template Path” you’ll need to use published data from “MAP PUBLISHED DATA” to grab the VM Image Name. This ensures that you are in the path for the appropriate template.
2) In the “New Virtual Machine Name” field – you’ll want to use a prefix and (in my case) use an underscore and then pull a unique field from somewhere in the bus to uniquely identify the VM you are creating. In my case, I’m pulling the VM ID from the MONITOR OBJECT.
Finally, for the “ADD VM DISK” object, you’ll want to pull “New Virtual Machine Name” from the bus from the “CLONE WINDOWS VM” object.
The DiskSize comes from the original MONITOR OBJECT and is the value that was defined originally in the Service Manager change request form.
The last step, if you did a copy/paste from your SCVMM workflow and just replaced these 3 objects to work with VMware is to go back through the final 3 objects to make sure that any data that was pulled from the bus is accurate. If you have data being pulled from any of the objects that were deleted or changed names – your workflow isn’t going to work.
Final Result. I stuck with the same naming convention that Jim and Travis used in their demo, so each new VM that’s created starts with DEMOVM_ and then is appended with the CR# in Service Manager. I like using that, as it makes it easy to correlate the change request with the VM. Of course, another option here is to modify the original CR form in Service Manager to include the VM Name (as well as any other parameters that Opalis can modify on a VM, including things like RAM and Network Adapters, etc...). In this case, the only other form option was the size of the VM Disk we were going to add. That could certainly be optional, and you could branch the workflow if the requestor didn't want/need an additional VM Disk attached.
The “ADD VM DISK” worked flawlessly as well.
This is a great example of the power of System Center - regardless of which hypervisor you choose!
Finally, Don't forget to check out the Opalis Survival Guide - everything you need to know to get up and running with Opalis!
Good luck and enjoy!
I installed Win8 RTM last night on my primary work machine, a Samsung 700T tablet.
I did all my customizations, installed Office and such…and then went out to the store to grab all the apps I had already installed previously (Win8 does a really nice job of keeping track of this for you…install a new machine, login with your live credentials and it ‘knows’ what you’ve already downloaded on that or other machines…). I found this Flight Aware app…for those that spend any amount of time in airports, you’ll appreciate this one too. It will at least give you something sorta fun and relevant to do while you’re delayed somewhere.
Check it out:
Fresh on the press!
http://technet.microsoft.com/en-us/windowsserver/hh968267.aspx
Experience the beta release of Windows Server “8” firsthand in these virtual labs. You can test drive new and improved features and functionality, including server management and Windows PowerShell, networking, Hyper-V, and new storage solutions.
It's simple: no complex setup or installation is required to try out Windows Server “8” running in a full-featured virtual lab. You get a downloadable manual and a 90-minute block of time for each module. Before you start, read the minimum system requirements. Select a virtual lab from the list below, and an application will launch the lab.
We are pleased to announce the release of the new Linux 3.1 Integration Services and support for the following Linux distributions as supported guests within Hyper-V:
· Red Hat Enterprise Linux 6.x (x86 & x64)
o Red Hat Enterprise Linux 6.0 (x86 & x64)
o Red Hat Enterprise Linux 6.1 (x86 & x64)
· CentOS Linux 6.x (x86 & x64)
o FYI, at present CentOS has only released 6.0
This announcement is happening today at the Open Source Convention 2011.
The new Linux ISs are available here:
http://www.microsoft.com/download/en/details.aspx?id=26837
The following features are included in the Linux 3.1 Integration Services release:
· Driver support for synthetic devices: Linux Integration Services supports the network controller and the IDE and SCSI storage controller developed for Hyper-V.
· Fastpath Boot Support for Hyper-V: Boot devices take advantage of the block Virtualization Service Client (VSC) to provide enhanced performance.
· Timesync: The clock inside the virtual machine will remain synchronized with the clock on the virtualization server and utilize the pluggable time source device.
· Integrated Shutdown: Virtual machines running Linux can be shut down from either Hyper-V Manager or System Center Virtual Machine Manager using the Shut Down command.
· Symmetric Multi-Processing (SMP) Support: Supported Linux distributions can use up to 4 virtual processors (VP) per virtual machine.
· Heartbeat: Allows the virtualization server to detect whether the guest is running and responsive.
· Key Value Pair Exchange (KVP): Information about the running Linux virtual machine can be obtained by using the Key Value Pair exchange functionality on the virtualization server.
Introduction
I’ve been getting a number of requests for these types of resources. So, I thought I’d aggregate everything I know about in a single spot. If you know of others, please let me know and I’ll include them in this list.
RDS Architecture Summary:
http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=9bc943b7-07c5-4335-9df9-20e77ed5032e
Virtualized RDSH guide (including costing):
http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=8d454921-72d6-45b4-b6ba-ac1c26d337bd
Host Capacity planning :
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=bd24503e-b8b7-4b5b-9a86-af03ac5332c8
Another deployment guide (not the IPD) :
http://www.microsoft.com/downloads/en/details.aspx?displaylang=en&FamilyID=1d95a910-72a5-44ec-96db-6853f6f9dc5b
IDC Business Value of Client Virtualization:
ftp://ftp.hp.com/pub/c-products/servers/vdi/Biz_ClientVirtualization_White_Paper.pdf
VDI Reference Architecture Guides (HP/Dell):
http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA0-2391ENW.pdf
http://h20195.www2.hp.com/V2/getdocument.aspx?docname=4AA2-7731ENW.pdf
http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA2-7731ENW.pdf
Citrix:
XenDesktop 4 on Windows 2008 R2 Hyper-V Scalability Report
http://www.citrix.com/site/resources/dynamic/partnerDocs/XD_4_Hyper-V_Scalability_Final_v1.0.pdf
http://www.citrixandmicrosoft.com/Docs/WhitePapers/DELL_MSFT_CTRX_VRD_RA_vFinal.pdf
http://www.citrix.com/site/resources/dynamic/partnerDocs/XenDesktop-Xeon-HyperV_SolutionBrief_Jun2010.pdf
http://www.microsoft.com/downloads/en/details.aspx?FamilyId=AD3921FB-8224-4681-9064-075FDF042B0C&displaylang=en
10 VDI Resources for Microsoft + Citrix:
http://blogs.technet.com/b/davidzi/archive/2010/08/01/10-vdi-resources-for-microsoft-citrix-implementations.aspx
In a previous post, I described how you can use SCOM to create a custom alert that watches the security group on your DC’s for changes to the “Domain Admins” security group.
I mentioned in that post that I was using this as a backdrop for an Opalis demo that uses that alert to start a workflow that disables the user account, removes it from the domain admins group, populates a ‘notes’ field in AD with information about why the account was disabled, clears the alert in SCOM and finally sends an Exchange email with the details to the administrator.
Folks have asked for more details on the Opalis workflow behind this – so here you go.
You can download the OIS file here and import into Opalis to see what I did.
Let’s get started…
First off, here’s the workflow I use:
Here are the steps:
1) Monitor Alert: We’re watching for any NEW alerts in SCOM that contain the string “DAACESS” in the CustomField2 property (there's more detail on this in the blog post I reference above)
2) Query XML: We need to query the description CONTEXT from our SCOM alert to extract the CN name for the offending user account that was added to the domain admins group
*You can find what you want to query from by clicking the ‘alert context’ tab on the SCOM alert. In this case we want the full CN of the user account so we use “MemberName”.
3) Disable User: Using the result from the XML query, we’re disabling the user account
4) Remove User From Group: Next, we remove the offending account from the domain admins group. In my case, I just setup a variable for the domain admins group – you can pull that via XML as well if you want.
* Here’s my detail for the ‘group’ variable
5) Update User: We can update the “notes” field in the AD account to put some detail around why the account was disabled (there are other options you can pick as well if you want to update other properties – just use the ‘select fields’ to choose)
6) Update Alert: Now, we’re going to go ahead and close the alert in SCOM since we’ve successfully remediated this issue. Alternatively, you could branch here if you had a failure and send an email or alert in some other fashion (or start another workflow)
7) Send Exchange Email: Finally, we’re going to send an email to the administrator with all the details
* You’ll need to have an Outlook profile configured to send Exchange email – on the connect tab, you’ll specify the name of the profile you’ll use. Also, if you want proper formatting (line breaks, etc…) make sure you use ASCII formatting
This sends the following email:
That should get it done. Enjoy!
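If it helps to see the remediation logic (steps 3 through 7) in one place, here's a minimal sketch. None of these helpers are real Opalis or AD APIs – the dictionaries and functions are hypothetical stand-ins that just mirror what each workflow object does:

```python
# Sketch of the remediation flow; 'directory', 'alerts', and 'outbox'
# are stand-ins for AD, SCOM, and Exchange respectively.

DOMAIN_ADMINS = "Domain Admins"  # the 'group' variable from step 4

def remediate(member_cn, directory, alerts, outbox):
    account = directory[member_cn]
    account["enabled"] = False                    # 3) Disable User
    account["groups"].discard(DOMAIN_ADMINS)      # 4) Remove User From Group
    account["notes"] = (                          # 5) Update User
        f"Disabled automatically: unauthorized add to {DOMAIN_ADMINS}")
    alerts[member_cn] = "Closed"                  # 6) Update Alert
    outbox.append(                                # 7) Send Exchange Email
        f"Account {member_cn} was disabled and removed "
        f"from {DOMAIN_ADMINS}.")

cn = "CN=Eve Hacker,CN=Users,DC=contoso,DC=com"
directory = {cn: {"enabled": True, "groups": {DOMAIN_ADMINS}, "notes": ""}}
alerts, outbox = {}, []
remediate(cn, directory, alerts, outbox)
```

In the real workflow each of those lines is a separate Opalis object, which is what lets you branch on failure at any step (as noted in step 6).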
VM Recovery Tool The stand-alone Virtual Machine Recovery Tool works with System Center 2012 – Virtual Machine Manager (VMM) to temporarily remove a host, cluster, virtual machine, or service from VMM when that object is in a failed or persistent warning state due to environmental conditions, third-party applications, or other causes. After resetting the state of the object, once VMM refreshes, the object will appear as healthy in VMM. This tool does not repair a damaged or incorrectly created virtual machine or corrupt .vhd file. It deals with state information stored in the VMM database. For additional details about the tool, see Using the Virtual Machine Recovery Tool for System Center 2012 - Virtual Machine Manager (VMM).
Important This program is not a supported part of the VMM product. This program is offered “as-is”.
Virtual Machine Manager Configuration Analyzer (VMMCA) The VMMCA is your first line of defense in troubleshooting an issue in the System Center 2012 – Virtual Machine Manager environment. VMMCA is a diagnostic tool you can use to evaluate important configuration settings for computers that are either running VMM server roles or are acting as virtual machine hosts. The VMMCA scans the hardware and software configurations of the computers you specify, evaluates them against a set of predefined rules, and then provides you with error messages and warnings for any configurations that are not optimal.
Microsoft Product Support Reporting Tool (MPSRPT) You can use the Virtual Machine Manager MPS Reporting Tool for System Center 2012 to gather detailed system status and configuration information from the VMM management server as well as all managed virtual machine hosts and VMM server roles. The tool offers the ability to select the particular scenarios for which system configuration data will be gathered. The Virtual Machine Manager MPS Reporting Tool for System Center 2012 gathers the following diagnostic data for the VMM management server, the virtual machine hosts and library servers that the VMM management server manages, and physical to virtual conversion (P2V) source computers:
===========
Be sure to read the disclaimers and other notes in the download area. Some of the programs are not a supported part of the VMM product. They are offered “as-is”.
Go get them at http://www.microsoft.com/en-us/download/details.aspx?id=29309.
What can I do with this thing again?
Convert-WindowsImage is the new version of WIM2VHD designed specifically for Windows 8. Completely rewritten in PowerShell, this command-line tool allows you to rapidly create sysprepped VHD and VHDX images from setup media for Windows 7/Server 2008 R2 and Windows 8/Server 2012.
What’s new in this version?
· Bug fixes (for example: bigger text on the UI to make it more readable, -Edition switch now accepts image number as well as image name, etc.)
· New switches to enable kernel debugging in the VHD(X).
· The ability to share this script with your friends, family, partners, pen pals, customers, and pets, even if they don’t work for Microsoft!
Where can I get it?
The public site is here.
System Center MVP Cameron Fuller authored a great blog post on using a common SQL backend database for System Center. I get this question all the time in my conversations with customers/partners and Cameron’s advice is spot-on.
http://blogs.catapultsystems.com/cfuller/archive/2012/05/23/system-center-2012–using-a-common-sql-backend-database-sysctr-scom-scsm.aspx
Accelerated Bootcamp to Upgrade your Skills to MCSA Windows Server 2012
This accelerated four day course will cover new features and functionality in Windows Server 2012. This is not a Product Upgrade course, detailing the considerations for migrating and upgrading your specific environment to Windows Server 2012. Rather, it will update your skills to Windows Server 2012.
This course is also preparation material and maps directly to Exam 70-417:
Upgrading Your Skills to MCSA Windows Server 2012.
Who Should Attend:
This course is intended for Information Technology (IT) professionals who are experienced Windows Server 2008 or Windows Server 2008 R2 server administrators, carrying out day-to-day implementation, management, and administration tasks, who want to update their skills and knowledge to Windows Server 2012.
This course will also be of interest to participants who hold the MCSA: Windows Server 2008 credential, who aspire to update it to the MCSA: Windows Server 2012 credential by taking the upgrade Exam 70-417: Upgrading Your Skills to MCSA Windows Server 2012.
Course Content:
Course Prerequisites:
Before attending this course, students must have:
McLean, VA
August 27 - 30
Click here to Register now!
Chicago, IL
September 4 - 7
New York, NY
September 10 - 13
Dallas (Irving), TX
September 17 - 20
Atlanta, GA
Bellevue, WA
September 24 - 27
Anaheim, CA
October 2 – 5
The Infrastructure Planning and Design team is pleased to announce that the System Center 2012 - Virtual Machine Manager guide is now available for download.
Download the guide now: http://go.microsoft.com/fwlink/?LinkId=245473
This guide outlines the elements that are crucial to an optimized design of Virtual Machine Manager. It leads you through a process of identifying the business and technical requirements for managing virtualization, designing integration with Operations Manager if required, and then determining the number, size, and placement of the VMM servers. This guide helps you to confidently plan for the centralized administration of physical and virtual machines.
Infrastructure Planning and Design streamlines the planning process by:
Tell your peers about IPD guides! Please forward this mail to anyone who wants to learn more about Infrastructure Planning and Design guides.
Join the IPD Beta Program Subscribe to the IPD beta program and we will notify you when new beta guides become available for your review and feedback. These are open beta downloads. If you are not already a member of the IPD Beta Program and would like to join, follow these steps:
Stay tuned for other System Center 2012 guides releasing for beta feedback!
Already a member of the IPD beta program? Go here to get the latest IPD beta downloads: https://connect.microsoft.com/content/content.aspx?ContentID=6556&SiteID=14
Related Resources Check out all the Infrastructure Planning and Design team has to offer! Visit the IPD page on TechNet, http://www.microsoft.com/ipd, for additional information, including our most recent guides.
Just one week after Microsoft Management Summit 2011 (MMS), Microsoft Learning will be hosting an exclusive three-day Jump Start class specially tailored for VMware and Microsoft virtualization technology pros. Registration for “Microsoft Virtualization for VMware Professionals” is open now, and the class will be delivered as a FREE online class on March 29-31, 2011 from 10:00am-4:00pm PDT.
What’s the high-level overview?
This cutting edge course will feature expert instruction and real-world demonstrations of Hyper-V and brand new releases from System Center Virtual Machine Manager 2012 Beta (many of which will be announced just one week earlier at MMS). Register Now!
Every section will be team-taught by two of the most respected authorities on virtualization technologies: Microsoft Technical Evangelist Symon Perriman and leading Hyper-V, VMware, and XEN infrastructure consultant Corey Hynes.
Who is the target audience for this training?
How do I register and learn more about this great training opportunity?
What is a “Jump Start” course? A “Jump Start” course is “team-taught” by two expert instructors in an engaging radio talk show style format. The idea is to deliver readiness training on strategic and emerging technologies that drive awareness at scale before Microsoft Learning develops mainstream Microsoft Official Courses (MOC) that map to certifications. All sessions are professionally recorded and distributed through MS Showcase, Channel 9, Zune Marketplace and iTunes for broader reach.
Please join us for this fantastic event!
http://technet.microsoft.com/en-US/evalcenter/jj659306.aspx?wt.mc_id=TEC_133_1_7
Windows Server 2012 Essentials (formerly Windows Small Business Server Essentials) is a flexible, affordable, and easy-to-use server solution designed and priced for small businesses with up to 25 users and 50 devices that helps them reduce costs and be more productive. Windows Server 2012 Essentials is an ideal first server, and it can also be used as the primary server in a multi-server environment for small businesses.
Windows Server 2012 Essentials enables small businesses to protect, centralize, organize and access their applications and information from almost anywhere using virtually any device.
Need more information? See the product details page .
Hello Partners!
Make sure that you check out the Partner Learning Plans site and register for all the Server 2012 on-demand training for Windows Server 2012.
http://www.microsoftlearningplans.com/
I attended a hypervisor compete session at VMWorld this week and saw this topic come up yet again. Can we please put the ‘footprint war’ to rest already?
VMware’s claim was that since the ESX hypervisor is smaller than Hyper-V (it’s not), that means a smaller attack surface and fewer patches, which equates to more “.9999’s”. Of course, the gentleman presenting the session called out every single Windows/Hyper-V update that causes a server reboot and conveniently neglected to show ANY similar information for ESX. I was glad to see another attendee bring this up at the Q&A. The presenter answered with, “That’s good feedback”. Sure.
Here’s more from VMware’s website:
Microsoft attempted to follow VMware’s lead to reduce the attack surface of its virtualization platform by offering Windows Server Core (a subset of Windows Server 2008) as an alternative parent partition to a full Windows Server 2008 R2 install. However, the disk footprint of Server Core in its virtualization role is still approximately 3.6 gigabytes (GB). Until Microsoft changes its virtualization architecture to remove its dependency on Windows, it will remain large and vulnerable to Windows patches, updates, and security breaches.
Alright, let’s put this thing to rest already…
First of all – there’s this great feature called ‘clustering’. Both Hyper-V and ESX have the ability to move virtual machines to other hosts in the cluster with no downtime (i.e., vMotion or Live Migration). That means hosts can come down for routine maintenance with little or no impact to the virtual workloads.
REALITY: Everyone has to patch. Our friends at VMware are not the exception to this rule. If you run VMware, you know this already. If you don’t – check out this PAGE.
I count 8 critical update packs since July of 2009 for ESX 4.0.
Now, I don't claim to be an ESX guru so I don’t know if the VM shutdown part is completely accurate – but it’s possible that one or more of the updates in these ‘packs’ require the VM’s to restart (perhaps upgrades to the integration components?). If that’s the case though, that’s even worse than I originally thought. Either way, I’m just reading what THEY say.
Add them all up – it’s over 4GB of updates in a year. Now, I don’t know how much of that 4GB is overwriting older stuff or adding new stuff. Does it matter? It’s bigger…a LOT bigger than the 70MB footprint they claim…
Back to VMWorld. At the compete session, I see this slide again.
Seriously? How can you have 4GB of updates for ESX in a year and then say that your disk footprint is 70MB?
Want more proof? Here’s a picture I snapped when doing my own ESX 4.0 install. If it’s hard to read (you can click on it to make it bigger) – here’s what it says:
“ESX REQUIRES AT LEAST 1.25GB. IF THE SERVICE CONSOLE IS INSTALLED ON THE SAME DEVICE AS ESX, AT LEAST 9.5GB IS REQUIRED.”
Here’s a comprehensive list of Hyper-V updates – going back to R1. (I know of one other from a few weeks ago that’s not in this list. It was last updated in May, so there may be others)
The moral of the story is – whatever you are running - Hyper-V or ESX, you are going to have to deal with updates, service packs, patches and the like.
Ah, but you say…VMware is moving to ESXi now – no more ESX. So, that will significantly decrease the number of patches, right?
Doubt it. I count 9 updates to ESXi since July 2009. 1.6GB worth to be exact – all require rebooting the host.
Now, can we move on please?
I finally got a chance to get the SSP 2.0 up and running in my lab. I had installed the RC but never really spent much time with it. One interesting thing to note – if you have installed the RC, you need to use START –> PROGRAMS and dig in to the SSP folder and choose ‘uninstall’. You cannot remove the RC from the Control Panel. This is fixed in the RTM bits.
If you don't know or haven't heard much about the Self-Service Portal 2.0, here's a brief description of its capabilities. It's important to note that this is NOT an upgrade to the Self-Service Portal that ships with VMM 2008 R2.
VMMSSP (also referred to as the self-service portal) is a fully supported, partner-extensible solution built on top of Windows Server 2008 R2, Hyper-V, and System Center VMM. You can use it to pool, allocate, and manage resources to offer infrastructure as a service and to deliver the foundation for a private cloud platform inside your datacenter. VMMSSP includes a pre-built web-based user interface that has sections for both the datacenter managers and the business unit IT consumers, with role-based access control. VMMSSP also includes a dynamic provisioning engine. VMMSSP reduces the time needed to provision infrastructures and their components by offering business unit “on-boarding,” infrastructure request and change management. The VMMSSP package also includes detailed guidance on how to implement VMMSSP inside your environment.
Important: VMMSSP is not an upgrade to the existing VMM 2008 R2 self-service portal. You can choose to deploy and use one or both self-service portals depending on your requirements.
The self-service portal provides the following features that are exposed through a web-based user interface:
System Requirements:
First things first, download all the bits and documentation HERE.
You have a few setup options – to do a new install, grab the SETUPVMMSSP.exe.
Of course, you’ll want to grab the documentation. I highly recommend you at least scan through the Datacenter Administrator guide before proceeding, as it gives you information you’ll need to make sure your SCVMM environment is properly configured for the portal. The actual installation process isn’t all that complicated – but you’ll definitely want to dig into the documentation before getting too far past that. This blog post is a ‘my experience’ post, and yours may vary. :) When in doubt, default to the documentation.
The “Getting Started” guide has a good diagram that shows the SSP architecture:
Another thing you’ll want to make sure you do, if you haven’t already done it, is create host groups in SCVMM. If you have specific hosts that you want to use for SSP placement, you’ll have to pick a host group when approving the infrastructure request. If you have all your hosts in a single group, you’ll probably have some issues. In my case, I did, and I immediately got an error on my first placement job because SCVMM was trying to deploy my VM to one of my ESX hosts. Yeah, that’s not gonna work.
Here’s my setup. I put my cluster in a host group, my stand-alone hosts in their own host group and then my two ESX hosts that are managed by vCenter in their own group.
I choose HYPER-V HA as my host group of choice in my environment. It’s a Hyper-V cluster with plenty of capacity.
I ran into another error during one of my first jobs because SSP thought the servers in my host group were ‘overcommitted’. I solved that by changing the cluster reserve (nodes) setting to “0” in the cluster properties in SCVMM. Since this is a lab with only two hosts, and I really don’t care if something dies (I do, but not from a ‘production’ perspective), zero works for me. You can find more information on this subject here: http://technet.microsoft.com/en-us/library/cc764243(printer).aspx
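To see why a two-host cluster trips the ‘overcommitted’ check so easily: with a cluster reserve of 1, VMM holds back enough capacity to absorb the loss of one host, so half my little cluster was off-limits for placement. Here’s a simplified sketch of that reserve math – this is my mental model, not VMM’s actual placement algorithm, and the memory numbers are made up:

```python
def placement_capacity(host_memory_gb, reserve_nodes):
    """Memory left for new VMs after the cluster reserve is held back.

    Simplified model: keep enough free capacity to absorb the failure
    of 'reserve_nodes' of the largest hosts in the cluster.
    """
    largest_first = sorted(host_memory_gb, reverse=True)
    reserved = sum(largest_first[:reserve_nodes])
    return sum(host_memory_gb) - reserved

# Hypothetical two-host lab cluster, 32GB per host:
print(placement_capacity([32, 32], reserve_nodes=1))  # 32 - half is held back
print(placement_capacity([32, 32], reserve_nodes=0))  # 64 - full capacity usable
```

With the reserve at 1, anything already running on the cluster counts against that halved capacity, which is why dropping the reserve to 0 in a lab makes the error go away.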
From an SCVMM configuration perspective there wasn’t much else to do. I moved on to installing and configuring the SSP. The install is pretty straightforward. The installer will run a pre-req test to make sure IIS, MSMQ, .NET Framework, etc… are all installed. You’ll need a physical host or VM that you can allocate 12GB of RAM and an x64 CPU to as well. In my case, I’m running SSP in a dedicated VM, but after the install was complete I backed the RAM down to 4GB and it’s working fine. There are two options during the install – the server and website components. You can obviously put these on different servers, but in my case, I put them on the same server.
You’ll also need a SQL 2008 instance somewhere for the back end. I have a dedicated SQL 2008 box in my environment that hosts all my DBs for System Center and other applications. I pointed to this with no issue during setup. If you had the RC installed, the uninstaller does not remove the DB, so you’ll have to delete it if you no longer intend to use it.
After all of that gets installed, you simply open the default web page on your server – when you do, this is what you get. In my case, I’ve changed the default logo on top to my own, and I’m sure you’ll want to do the same. That’s easy to do: right-click on the image and you’ll see that it’s named logo.png and lives in the /images directory. Use IIS on your SSP server to explore that directory, grab the logo.png file, make your changes with your favorite photo editing tool, and save it back into the directory. Easy enough…
The next step is to hit the ‘settings’ tab on this page. There are four sections – for the last two, at least initially and certainly just to get up and running, you shouldn’t have to make any changes, but check them out to see what’s in there.
The first section is “Configure Datacenter Resources”. The big one in here is setting the FQDN of your VMM server. You’ll add the networks/VLANs that you want SSP to use, quota costs (don’t worry about this during initial setup – read up on this topic and you can come back and change it), and a few other things that require more reading. :)
The next is “Configure VM templates”. Over on the right, select ‘import templates’, pick the VMM server and library and select the ones you want to be available in SSP.
Believe it or not, that’s pretty much it as far as initial setup goes. SSP is predicated on your existing SCVMM environment being up to snuff with the appropriate templates (which we’ll talk about next), host group configurations, proper networking and VLAN configurations, etc…
As far as VM templates go in VMM, here’s what I’m using. I discovered a couple things in my trial and error process…
1) You need to remove the network adapter from the template. The SSP process creates one and I got an error on one of my first jobs about the number of adapters so a recommendation was made to me to remove what was in the template. I also removed everything else I wasn’t using – the SCSI adapter and the DVD drive.
2) When you create a new template, make sure you choose the option to include the customization properties (screen shot on the right below). If you choose the option to forego customization to the VM, then the SSP job will fail creating a VM from that template. I didn’t do much in the customization – the SSP names the VM so leave that one alone, I set the local admin password, put in my MAK key and selected the x64 version of Server 2008 R2, which is the OS in the VM.
USING THE PORTAL
Now that you have SCVMM, your templates, and the global settings for the SSP configured, it’s time to create and register your business units and create some infrastructure.
Once again, the “Getting Started” guide outlines the procurement process. You definitely need to read that guide, and the Datacenter Administrator guide goes deep into each of these steps – most of which I won’t cover in my post here.
The first thing you have to do is create and register the ‘business unit’. Think of this as the line of business that will consume the resources. I use stuff like “Marketing”, “Engineering”, “IT Lab”, “Production”, etc… Each business unit has its own security and access and can request additional resources or do add/change requests at a later time.
In my scenario, I just use my administrator account for both – it makes it easy to make changes or request resources and then approve them by simply switching back to the ‘requests’ tab instead of logging in and out as different users. Here’s what that tab looks like after you get some activity going. Notice that after someone requests resources, the administrators will see all the new requests at the bottom of this page.
When a new request comes in, the administrator will see hyperlinks and they then need to click on each of those to ‘approve’ the specific request. The requestor may choose or request the incorrect resources or set their quota too high and this is where the administrator can make the appropriate changes before approving or denying the request.
When the admin clicks on “Service: TestService” for example, they will have to assign the host group that the VMs for that infrastructure will deploy to, as well as the VMM library where the templates are stored.
After the request is approved, the requestor and all the users that have been granted access to that resource will now be able to login to the portal and manage the resource.
All the resources can be filtered by the available business units where their infrastructure resides – or shown all on one page, like below. You can see that both of those are in different business units. The managers can create virtual machines here using the templates available, create a service/change request, or create additional infrastructure requests.
The users can also manage the resources that they have deployed using the portal by clicking on the “Virtual Machines” tab:
To connect to a VM, the users can choose the “connect” option on the right-hand pane or connect via RDP, if enabled on the VM.
Now of course, all of what happens on the backend is managed by VMM. So, you can always check the jobs on the VMM server to troubleshoot or see what’s been going on. Here’s the step flow through the SSP creation of the SSPDevLab01 that you saw in the earlier screenshot – it’s a VM I created in my “IT Test Lab” environment.
Finally, you can install the SSP Dashboard into your SharePoint farm using the installer available on the download site.
You can get a nice view into the SSP resources, availability, consumption and utilization as well as take advantage of the charge-back features.
Overall, I’m pretty happy with the tool. It’s easy to get going, works well, and is very responsive.
Get it running and let me know what you think about it!
Couple things first. Check out the TechEd 2012 Session on what’s coming in SC 2012 SP1: http://northamerica.msteched.com/topic/details/2012/VIR201#fbid=ovh6LDyxJpE
Download SC 2012 SP1 CTP2 Here: http://www.microsoft.com/en-us/download/details.aspx?id=30133
A few things to note about CTP2: this release contains updates to all the System Center components. See below for more information and stay tuned for additional posts describing the component updates.
Upgrade: CTP1 cannot be upgraded to CTP2 and CTP2 will not be upgradable to Beta.
Production Use: this release is not intended for production deployments; it’s targeted at giving you all an early preview of some of what’s coming in this SP1 release. And there is a lot that still has to come after CTP2!! We specifically focused on a key set of scenarios documented here for this release.
What’s New: All components now support Windows Server 2012 RC and SQL Server 2012.
Component
What’s New
App Controller
Support for using Service Provider Foundation to create and operate VMs in VMM
· Azure IaaS enhancements: Ability to deploy VMs from an image or disk, start and stop VMs, and add VMs to a service
Data Protection Manager
· Improved backup performance of Windows Server 2012 Hyper-V over CSV 2.0 deployments
· Protect Hyper-V over remote SMB share
· Protect Windows 8 de-duplicated volumes
· VM Live Migration: Uninterrupted data protection
Operations Manager
APM enhancements, including:
· Support for IIS8
· Monitoring of WCF, MVC and .NET NT services
· Azure SDK support
Orchestrator & Service Provider Foundation
· Supports existing System Center and 3rd-Party Integration Packs
· Service Provider Foundation, which provides a rich set of web services that manage VMM:
o Create, change, and operate VMs
o Manage VMM Self-service User Roles
o Manage multiple VMM stamps and aggregate results from multiple stamps
o Integration with App Controller to use hosted capacity
Service Manager
· Ability to apply price sheets to VMM Clouds
· Create VM chargeback reports
· Ability to pivot by Cost Center, VMM Clouds, and Price sheets
Server App-V
· Support for applications that create scheduled tasks during the packaging process
· Ability to create virtual application packages from applications installed natively on a remote server
Virtual Machine Manager
· Improved support for network virtualization
· Ability to convert VHD to VHDX and to use VHDX as a base operating system image
· Support for the Windows Standards-Based Storage Management Service, thin provisioning of logical units, and discovery of SAS storage
· You can now create Add-Ins that extend the VMM console.