Kevin Remde's IT Pro Weblog

  • Where can I use Managed Service Accounts? (So many questions. So little time. Part 32)

    Get the Windows Server 2008 R2 evaluation here.

    Great question!

For those of you who are not familiar with these things called Managed Service Accounts, let’s first talk about the problem that they solve.  But first, let’s set the stage with a couple of assumptions:

    1. You have some domain accounts being used as the identity for some services.
    2. For the sake of good security, you change the passwords for those domain accounts on a regular basis.


    “Um.. Kevin.. Yes to the first one.. but definitely not the second one.”

    Why not?

    “Because then the services won’t start.”

    Bingo.  And even worse, it doesn’t show up as a problem until days or weeks later when for some reason (an update, perhaps?) you have to restart a server.  Suddenly things are broken, and you’re not sure why… until you find that the service that Exchange or IIS was depending on didn’t start.  So unless you’re really good at also going to each and every server and each and every service definition to reset the passwords there, you’re going to have problems.

Managed Service Accounts take the concern of having to set/reset passwords out of your hands.  They are special Active Directory accounts that manage their passwords automatically for you; by default they have 120-character complex passwords that reset themselves every 30 days, and they have no rights to log on locally. 

    Currently (and I say that because I don’t know if this is going to be different in Windows Server 2012) you 1) create the account, and then 2) install the account on a server using PowerShell.
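Those two steps can be sketched with the ActiveDirectory module cmdlets.  (The account name “SvcDemo” here is just a hypothetical example – use whatever naming convention your shop follows.)

```powershell
# Run as a domain administrator on a machine with the ActiveDirectory module
Import-Module ActiveDirectory

# Step 1: create the Managed Service Account in Active Directory
New-ADServiceAccount -Name "SvcDemo"

# Step 2: on the server that will run the service, install the account locally
Install-ADServiceAccount -Identity "SvcDemo"
```

After that, you configure the service to log on as DOMAIN\SvcDemo$ with a blank password, and Windows handles the password from then on.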

    For complete details on Managed Service Accounts, see these pages:

    So, back to Casy’s question: Can you use Managed Service Accounts on Server 2003 or Server 2003 R2?


    Well… I should probably clarify something here.  Managed Service Accounts require the Active Directory schema to be updated to the Server 2008 R2 version, but they don’t strictly require the domain functional level to be raised – meaning that you can use them even if you’re still running domain controllers that are Windows Server 2003 SP2, Windows Server 2003 R2, Windows Server 2008, or Windows Server 2008 SP2.  (You will need to do adprep /forestprep and adprep /domainprep.  See AdPrep for details.)  Plus, the Active Directory Management Gateway Service would have to be installed on those older Domain Controllers to allow them to manage Managed Service Accounts.

    “Okay.. so they can exist in a domain that has older domain controllers.  But can I install them and use them on older servers or workstations?”

    No.  Sorry.  “To use managed service accounts and virtual accounts, the client computer on which the application or service is installed must be running Windows Server 2008 R2 or Windows 7.”  (From the Service Accounts Step-by-Step Guide, “Requirements for using managed service accounts and virtual accounts” section.)

    I hope that clarifies things for you.


    Are you using Managed Service Accounts?  Have they been useful to you?  Please share your thoughts in the comments.

  • File Server Migration (So many questions. So little time. Part 38)

    This question was asked at a recent TechNet Event where we discussed Server Migration tools:

    Get your software evaluations here.

    For those of you not familiar with these, the Server Migration tools are best practices and utilities for moving server roles and data from old servers (Are you still running Windows Server 2003?) to the current Windows Server 2008 R2.  One of the migrations we discussed and demonstrated in our event was the move of File Services.  We installed the tools on the source and destination machines, opened firewall ports, transferred local users and groups, transferred the files and shares (across the network), and then shut down the old server and renamed (and re-addressed) the new server to take on the identity of the original.  In the end we had a machine that looked just like the original, but was actually a Server Core installation of Windows Server 2008 R2.

    The process of transferring the files over the network involves using two PowerShell commands – one on either end of the connection:

    (I’ll leave it to you to figure out which one you run on the source and destination.)
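Spoiler, in case you want to look them up: the two cmdlets are the Windows Server Migration Tools’ Send-SmigServerData and Receive-SmigServerData.  A minimal sketch (the computer name and paths are hypothetical):

```powershell
# On the destination server - run this first; it waits for the source to connect
Receive-SmigServerData

# On the source server - pushes files, shares, and permissions across the network
Send-SmigServerData -ComputerName "NEWFS01" -SourcePath "D:\Shares" `
    -DestinationPath "D:\Shares" -Recurse -Include All -Force
```

Both ends prompt you for the same password, which is used to secure the connection between them.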

    Once these are run on both source and target, they see each other and the tunnel is created.  Then the files are encrypted and sent across the network.  NOTE: This is only supported on a single subnet. 

    “Ah.. so if you can’t cross subnets, then the desire to move files across a WAN connection really can’t be fulfilled.”

    Correct.  And although it might still be useful to throttle or somehow dictate the speed or method of the file transfer, those options don’t exist in the current version of the tools. There may be other tools out there that do that, and certainly there are a wide range of choices in how you move files (robocopy, xcopy, etc.), but the File Service migration in the Server Migration tools is meant to be a simple, straightforward method of duplicating an existing configuration to the new server platform.

    For more information and the steps required to plan for, perform, and verify this kind of migration, make sure you take advantage of the File Services Migration Guide.


    Have you migrated file services from an old to a newer server?  Did you use the Server Migration Tools, or some other method or toolset?  Please share your experiences in the comments!

  • Can DJOIN fix this? (So many questions. So little time. Part 25)

    Amy asks:

    [Image: Amy’s written question about a computer losing its domain association]

    This question was in the context of our discussion on “Offline Domain Join” (djoin.exe).  For those of you not familiar with it, Windows Server 2008 R2 and Windows 7 support the ability to join a machine to a domain, even while there is no network connectivity between the joining machine and a domain controller. 

    “That’s neat, Kevin.  How do I do that?”

    The process involves using the DJOIN.EXE command (from an elevated command prompt) two times.  The first time you run it on a domain controller (or a Windows 7 or Windows Server 2008 R2 machine as a domain administrator) to create a new computer entry in Active Directory:

    djoin /provision /domain <domain to be joined> /machine <name of the destination computer> /savefile blob.txt

    Two things result from this:

    1. The computer entry is created in Active Directory, and
    2. A file containing domain metadata is created.

    Now take that text file and use it on the machine joining the domain by running DJOIN again.. but this time with the /requestODJ parameter:

    djoin /requestODJ /loadfile blob.txt /windowspath %SystemRoot% /localos

    You might also be interested in using DJOIN for modifying offline virtual machines.  This involves using the DJOIN command the second time to change the properties of the operating system right inside the .VHD file by mounting the .VHD file and then pointing to the \WINDOWS directory path of the virtual machine’s installation:

    djoin /requestODJ /loadfile blob.txt /windowspath <path to Windows directory of the offline image>
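On Windows 7 or Windows Server 2008 R2 you can do the mounting with diskpart before running DJOIN.  A rough sketch – the VHD path and the drive letter Windows assigns are hypothetical, so check yours first:

```powershell
# Build a diskpart script that attaches the offline VHD (hypothetical path)
Set-Content -Path mountvhd.txt -Value @"
select vdisk file=C:\VMs\NewVM.vhd
attach vdisk
"@
diskpart /s mountvhd.txt

# Then run the second DJOIN step against the mounted image's Windows directory
# (assuming the mounted volume came up as V:)
djoin /requestODJ /loadfile blob.txt /windowspath V:\Windows
```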

    A great description of Offline Domain Join can be found HERE.  And here is the Offline Domain Join (djoin.exe) Step-by-Step Guide

    So.. back to Amy’s question: Can one use DJOIN to fix that annoying problem of a computer losing its association with the domain to which it was previously joined?

    I would say that, yes, it could be used for fixing that.  The DJOIN command has an optional /reuse parameter which “Specifies the reuse of any existing computer account. The password for the computer account will be reset.” (from the command syntax section of the Step-by-Step Guide)
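For Amy’s scenario, that would mean re-running the provisioning step against the existing computer account with /reuse added (the domain and machine names here are hypothetical):

```powershell
# Re-provision against the EXISTING computer account; its password gets reset
djoin /provision /domain contoso.com /machine LOSTPC01 /savefile blob.txt /reuse
```

Then you run the /requestODJ step on the affected machine as shown earlier.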

    Caution: To do that would require you to keep that blob.txt file around and stored somewhere.  We don’t recommend that, because you should really treat that file as if it were protected credentials. From the Step-by-Step:

    The base64-encoded metadata blob that is created by the provisioning command contains very sensitive data. It should be treated just as securely as a plaintext password. The blob contains the machine account password and other information about the domain, including the domain name, the name of a domain controller, the security ID (SID) of the domain, and so on. If the blob is being transported physically or over the network, care must be taken to transport it securely.


    What do you think?  Does DJOIN have a place in your toolbelt?  Share your experiences with it, or any questions, in the comments.

  • Should I just wait for Windows 8? (So many questions. So little time. Part 28)

    Josh is asking what I’m sure a lot of people are also wondering, given the recent excitement around our next version of Windows..

    [Image: Josh’s written question – should he wait for Windows 8?]

    Wow!  A two-parter!  :)   

    Short answer to your first part: No.  Don’t wait to start the migration. 

    Longer explanations and reasons:

    The many and varied ways that Windows 7 surpasses Windows XP are well documented and easy to find, so I won’t go into them all here.  But understanding those benefits and knowing that you can take advantage of them NOW should be reason enough. 

    But more importantly: if you are prepared for the migration to Windows 7 and are able to make that transition now, the move to Windows 8 will be a breeze.  You’ll be using the same migration tools.  You’ll be able to run Windows 8 on the same hardware.  And although the new Windows 7 taskbar will be a slight learning curve for your users, they’ll still have a Start menu.  The Windows 8 Start Screen might be too much for them to handle all at once.  :-O

    (Actually, once you get the hang of a couple of new UI moves, and even if you only have keyboard and mouse available to you with no touch-screen, you can just think of and treat the Start screen as an improved, colorful, customizable start menu; one that has new full-screen apps that will also run on your next Windows RT tablet.)

    Part 2 of your question deals with licensing.  I won’t go there.

    “Oh c’mon you chicken..”

    Okay.. yeah.. sorry.  But honestly, I’m not familiar with “Implementation Credits”.  Are you referring to Desktop Deployment Planning Service (DDPS) credits?  If so, this question would be best answered by your Microsoft Account Executive, or the local Microsoft licensing experts your company has been working with.

    I hope you’re following my blog, Josh, or this will be a very short conversation.


    What about the rest of you?  Are you deploying Windows 7 or waiting for Windows 8?

  • My Favorite “Feature” (So many questions. So little time. Part 30)

    For my 30th posting in this series, I thought I’d tackle a couple of really tough questions posed by John at our Columbus, Ohio TechNet Event:

    Get the Server 2012 Beta

    Well, John… this of course would require me to enter an opinion here on my blog.  And I don’t think I’m allowed to give my opinion of Windows 8 or Windows Server 2012 just yet.

    “Oh c’mon you coward..”

    Hey now.. be nice.  Alright, since you dared me, I will mention one feature from each that I’m really excited about.

    For Windows 8, my favorite feature right now has got to be having Hyper-V available to me.  On my work laptop, which I use for presentations, I run several demo virtual machines (you saw them the morning of our TechNet Event) using Hyper-V.  It runs great!  And I also take advantage of the new Hyper-V PowerShell cmdlets for quickly resetting my machines to the starting place (snapshot) for my demos.
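That snapshot reset can be a one-liner with the Windows 8 Hyper-V module.  (The VM and snapshot names here are hypothetical examples of how I name mine.)

```powershell
# Revert every demo VM to its starting snapshot, skipping the confirmation prompt
Get-VM -Name "Demo*" | Restore-VMSnapshot -Name "DemoStart" -Confirm:$false
```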

    For Windows Server 2012 (formerly codename Windows Server “8”) the feature-set I’m most excited about are the improvements in Hyper-V and in Storage.  More specifically, to actually configure and witness a running virtual machine move between two virtualization hosts with no cluster, no shared storage.. no shared NOTHIN’… That’s amazing.  And it’s a game-changer. 


    And John also decided to have a little fun with my request for written questions:

    [Image: John’s just-for-fun written questions (“You put your left foot in...”)]

    Sorry we couldn’t fit those in that day, John.  Maybe next time.  :P


    What features are your favorites?  What have you had a chance to try out and really like in Windows 8 or Windows Server 2012?

  • Why use DJOIN? (So many questions. So little time. Part 31)

    I have another question related to the DJOIN command:

    Click ME to download the Private Cloud Evaluation software

    For those of you who don’t know what an “Offline Domain Join” is or what the DJOIN.EXE command does, please refer to my blog post from the other week on the subject.  In a nutshell, Windows Server 2008 R2 and Windows 7 contain a tool (djoin.exe) that allows you to pre-populate AD with a computer account, and then at a later time connect that computer to the domain without the domain actually having to be available at the time. 

    Which leads to this pretty good question that Mark asks: Why not just use the NETDOM JOIN command?

    The answer really has to do with the key benefit that an Offline Domain Join provides: The ability to do it OFFLINE.  For the NETDOM JOIN command to work, your machine has to be able to communicate with a domain controller.  Not so with DJOIN.EXE.


    What do you think? 

  • Free Global Event–24 Hours in a Private Cloud

    24 Hours in a Private Cloud

    Get yourself 5 of those “5 Hour Energy” drinks, set your alarm clock, and join the WORLD for this amazing opportunity to learn all about the Private Cloud solutions from Microsoft.

    Here’s the description from the registration page:

    Every organization has the power to employ cloud technologies in their own way, at their own pace and with their own terms. The use of private cloud technologies help transform how organizations manage infrastructure resources, provision applications and automate services for their business. It also helps them leverage and manage public cloud services that expand their current infrastructure and application capabilities. As an end result, organizations increase IT operational agility, improved business focus and achieve value-add economics that evolves their IT infrastructure into a strategic asset.

    Over 24 hours, you will hear from top industry and technical professionals from around the world to help you better understand the private cloud technology solutions that are available today. You will hear from industry organizations about how they view the public cloud and how the role of the IT Professional will evolve as more and more organizations begin a private cloud transformation. Listen to the number of technical professionals who will be on hand talking about the required components to simplify private cloud creation and management. Talk with them and your peers about the numerous operational efficiencies that come from deploying a private cloud with the reduction of servers and the benefits of provisioning and managing virtual applications across multiple platforms.

    We hope that you will come away from this event with the knowledge and experience to help you in your private cloud infrastructure decisions and be prepared to have thought-leadership based discussions focused on building and managing your organization’s agile and efficient private cloud environment.

    Event Start:   May 10, 2012 8:00AM GMT (that’s 4:00AM Eastern US, 1:00AM Pacific)
    Event End:     May 11, 2012 8:00AM GMT (that’s 4:00AM Eastern US, 1:00AM Pacific)


  • New Event Series: Windows Azure Kick Start

    Windows Azure Kick Start

    Oakwood Systems

    Thanks to our friends at Oakwood Systems, I have a series of free, hands-on “kick start” events to tell you about.  Here’s the text from an e-mail I just received:

    What is the best way to determine if Windows Azure might be a fit for your organization?

    Microsoft is offering your organization the opportunity to learn more about Windows Azure in a hands-on lab environment through our upcoming Kick Start Training Events.  Developed with Microsoft, and delivered by one of Oakwood's certified Azure architects, this training will help you put Azure in context for your own organization.

    Those who attend will learn how to build a web application that runs in Windows Azure, how to sign up for free time in the cloud, and how to build a typical web application using the same ASP.NET tools that are being used today.  We'll review the types of applications that make sense to move to the cloud, and those that do not, and we'll have plenty of interactive time for Q&A.

    This is a full day session, with lunch included, and attendees will need to bring a laptop.  Details of the laptop requirements are on the registration site.  The download site for the session prerequisites is linked there as well; click on the green install button.

    Whether you'll be joining us yourself or not, please feel free to forward this invitation to your development team members.  All are welcome (until we run out of seats, of course).  Click on your preferred city and date to the right to register.  These sessions are being offered to you at no charge, so we hope you will choose to leverage a session to advance your organization's Azure knowledge and planning.

    Best regards,

    Margaret Johnson
    on behalf of Oakwood's Azure Team
    and the Microsoft DPE Team


    Reserve your seat now in the city of your choice:

    These hands-on lab sessions will begin at 8AM and end between 5 and 6PM.  Lunch and beverages are included.

  • Breaking News: It’s Here! Get the Windows 8 Release Preview!

    Windows 8 Release Preview

    What a nice surprise!  Today Microsoft announces the availability of the next pre-release of the new client PC operating system: the Windows 8 Release Preview. 

    More details here:


    What do you think?  Let’s talk about your thoughts on this new milestone in the comments.

  • BIG NEWS: Release of Windows Server 2012 Release Candidate

    Microsoft has just made available the new pre-release of Windows Server 2012!




  • I love these Windows Azure success stories

    “MediaValet Thrives on Microsoft’s Cloud Platform” is the title of this press release.  It’s great to hear about the cases where Windows Azure is being used so effectively, and really taking full advantage of its… well.. advantages.  Things such as global scale and pay-as-you-go.  Great, reliable, redundant storage.  And the opportunity to create some massively parallel compute engines for heavy tasks such as image or video rendering.

  • Are you abandoning 32-bit in Windows 8? (So many questions. So little time. Part 24)

    Yes, I’m back (finally) with another in my series of expanded-answers to questions I have received during TechNet Events and IT Camps I’ve facilitated.  But for these next blog posts I’m going to have a little fun by actually showing you the written question.

    Our question today comes from Timi, who asked me:

    [Image: Timi’s written question about 32-bit support in Windows 8]

    Great question.  Windows 8 – the consumer and professional desktop product – will still be available for 32-bit and 64-bit platforms.  So we’re not yet ready to abandon 32-bit desktops.  It’s not going away.  Certainly there are still a lot of you out there running or supporting people running 32-bit installations, and you’d still like to be able to do an in-place upgrade.

    And the answer to your next question, regarding “WOA” (which stands for Windows on ARM): it is 32-bit.  For those of you who are not familiar with it, Windows on ARM is now officially known as “Windows RT”, and under-the-hood – though you won’t really have to worry about it – it is 32-bit.

    “Huh?  Why won’t I have to worry about it?  Don’t tell me what I should and shouldn’t be worried about.”

    Forgive me.  But really.. Windows RT is the operating system for ARM-based devices.  It comes pre-installed and running on those devices.  It’s not an operating system that you can purchase and install yourself.  So whether it’s 32-bit or 64-bit is really only a concern for the folks who are building ARM-based Windows tablets.


    You’re welcome.

  • Limits on Migration? (So many questions. So little time. Part 26)

    At our TechNet Event in Columbus, OH recently, Andy asked:

    Click me

    There is no limitation, Andy.  I think some of the concerns people have are related to licensing.  Can I really take a live virtual machine and move it freely between nodes in a cluster?  The answer is, yes.  Absolutely.  Assuming you’re already licensed to run that virtual machine, you can migrate between nodes in a cluster as many times as you want.

    CLICK HERE for more details on Hyper-V in Windows Server 2008 R2

    And if you’re comparing licensing options between Windows Server 2008 R2 Hyper-V and VMware vSphere 5, make sure you try out this virtualization cost calculator.

  • Powerful PowerShell Resources (So many questions. So little time. Part 27)

    PowerShell is a shell.  ...OF POWER!!

    Thanks for the request, Howard.  Here are some good ones: 

    These are just a few of the hundreds that are out there.  I’m sure you’ll be able to “Bing” a few others.

  • Hyper-V Test Network with Wireless? (So many questions. So little time. Part 29)

    At our Columbus, Ohio TechNet Event, Karl asked:

    Get the private cloud evaluation here

    Great question, Karl.  And I have seen two ways to do this well.

    Way #1 – Treat your test network as a subnet that requires routing.

    Although this method is more complex, I personally prefer it.  It treats the “Internal” network as a separate subnet, which is more real-world to me.  You can set up your own mini-company inside of the subnet, and have control over DHCP or other subnet-bound broadcast protocols.  Basically for this method you are going to treat your physical Hyper-V host as a router which does NAT (Network Address Translation) for you. 

    The steps are:

    1. Add the Network Policy and Access Services role – specifically Routing and Remote Access – on the physical host machine.  All you need is NAT. 


    2. Create a new Hyper-V Network switch of type “Internal”.  I just name mine “Internal”. 

    3. For the adapter that appears in your Network Connections associated with that new Hyper-V network, set the addressing to something you’ll remember.  The address used here will be used as the “gateway address” for the machines connecting to it to give them Internet access, so common convention would recommend setting it to or  But use whatever you want.  Since it’s unlikely you’ll be using more than 254 addresses for this test subnet, you could go with a 24-bit mask ( for this subnet, too.  If you plan to have a Domain Controller and/or a DNS server in your subnet, you can add that address as the preferred DNS server.



    4. Now in the configuration of Routing and Remote Access,  you’ll set up NAT by right-clicking on NAT, choosing “New Interface…”, selecting the interfaces, and setting their Interface Type correctly.  The important one is the Wireless Network Connection, which you set as Public, and enable NAT on the interface.


    And the other network adapters (including “Internal”) will be defined as Internal. (duh)  Eventually the configuration will look something like this.


    Now, as long as the virtual machines are connected to my “Internal” switch, and have their networking and default gateway set correctly, they’ll be able to get to the Internet.
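Step 3 above can also be done from an elevated prompt instead of the GUI.  A quick sketch – the connection name here is hypothetical, so check Network Connections for the actual name of the adapter Hyper-V created:

```powershell
# Give the host's "Internal" adapter the conventional gateway address
netsh interface ip set address name="Internal" static
```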

    Way #2 – Use a Network Adapter Bridge

    Though not as real-world (my opinion) as using routing, this is definitely the easier way to do it.  In your Network Connections, CTRL+CLICK to select both the wireless and wired adapter.  Then right-click either one and you should see a “Bridge Connections” option.  Pick it.  Now whatever virtual network switch you’ve bridged with the wireless adapter should be able to get out to the outside world.


    Is this useful?  Do you have other tricks that have worked that you want to share with us?  Let us know in the comments.

  • Why .NET 3.5? (So many questions. So little time. Part 39)

    System Center 2012 / Private Cloud Evaluation Download

    Good question, Todd.  And unfortunately I don’t know the details enough to give you a precise answer.  But I will say that no matter what Windows Server 2012 has in it, it isn’t a supported platform for the current RTM (Release to Manufacturing) version of System Center 2012.

    “Why not?”

    ‘cause it’s in beta – that’s why not.  So Server 2012 can’t be a supported platform, and can’t be supported as a managed platform, either.

    “Really?  I can’t manage Windows Server 2012 beta with System Center 2012?”

    I didn’t say you can’t.  Some things will be manageable, just not supported.  And features and functions that are brand-new to Server 2012 (live storage migrations and “Shared Nothing” migrations in Hyper-V come to mind) won’t be manageable at all.

    “So.. that will change when Server 2012 finally comes out, right?”

    No.  Not immediately.  There will have to be an update of System Center 2012 to make that happen.  At the time of the writing of this blog post, no announcements of the timing of the release of Server 2012 or the next update of System Center 2012 have been made.  But it has been announced that an update of System Center 2012 will be required to support Server 2012.  In fact, early test releases of updates to System Center 2012 components are already available in CTP (Community Technology Preview) form on the Microsoft Connect site.

    But back to your question on why an older version of the .NET framework might be required for a released product: It’s because that framework was the current broadly available framework when development of the product was underway.  At some point in every product’s lifecycle, the requirements of the platform need to be decided upon and locked down, never to change.  So in that light it’s easy to see why Microsoft (or any development organization) might come out with a product that requires older tooling than what is more recently available at the time of launch.


    Does that make sense?  Do any of you in the product team want to comment more specifically on any .NET framework differences? 

  • TechNet Radio: Cloud Innovators - Determining your ROI with Office 365

    Recently, as a new installment of our “Cloud Innovators” series on TechNet Radio, I had the pleasure of talking with Shahrouz Malekpour of Ezy Consulting.  His company provides a service and a tool to help businesses determine what their investment will be when choosing Office 365 as their productivity platform-of-choice. 

    Click here to take advantage of Ezy Consulting’s Office 365 Evaluation Assessment Promo.

    Promo Code: TechNetRadio




    Video: WMV | MP4 | WMV (ZIP) | PSP
    Audio: WMA | MP3

    If you're interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:


    Websites & Blogs:

    More Videos:

  • Something cool I just learned: Windows 8, Hyper-V, and Wireless Networking

    This is neat… I was searching the Intertubes (“Googling it on BING”), looking for knowledge on some topics and some answers for future blog posts, and I happened upon one of John Savill’s excellent series of Q&A posts on the Windows IT Pro Magazine site.   One of the articles answers the question “What features are in Server Hyper-V that aren't in Client Hyper-V?”, and contains a handy list of the differences between Hyper-V in Windows Server 2012 and Windows 8.

    One tidbit that I didn’t even notice while using Hyper-V on my current installation of the Windows 8 Consumer Preview was something he mentions right at the end of his article…

    “What is great with Client Hyper-V is that wireless networks can be used and your machine can still be put to sleep and hibernated!”

    “My VMs can use my client’s wireless NIC?!  That is great!”

    Yep.  Look at the device I picked for this new virtual switch I just created…


    Pretty awesome.

  • Breaking News: Microsoft Assessment and Planning Toolkit 7.0 Beta

    Microsoft Solution Accelerators

    I don’t usually like to do this…

    “..but you’re going to do it anyway.”

    Shut up.  And get out of my head.  I usually don’t like to do a lot of copying and pasting of other people’s content (OPC™), but the text from this e-mail I received concerning the beta release of the next version of the Microsoft Assessment and Planning Toolkit (MAP 7.0) is so well written, I can’t improve upon it.

    Here is the skinny on the MAP Toolkit 7.0 beta…

    Accelerate your Migration to the Private Cloud with MAP 7.0 Beta!

    The Solution Accelerators team is pleased to announce the Microsoft Assessment and Planning (MAP) Toolkit 7.0 Beta.

    Get ready for the private cloud with the Microsoft Assessment and Planning (MAP) Toolkit 7.0 Beta. This update adds several new private cloud planning scenarios that help you build for the future with agility and focus while lowering the cost of delivering IT. Download the MAP Toolkit 7.0 Beta and begin your cloud transformation today!

    New capabilities allow you to:

    • Understand your readiness to deploy Windows in your environment with hardware and device readiness assessment
    • Determine Windows Server 2012 Beta readiness
    • Investigate how Windows Server and System Center can manage your heterogeneous environment through VMware migration and Linux server virtualization assessments
    • Size your desktop virtualization needs for both Virtual Desktop Infrastructure (VDI) and session-based virtualization using Remote Desktop Services
    • Ready your information platform for the cloud with the SQL Server 2012 discovery and migration assessment
    • Evaluate your licensing needs with usage tracking for Lync 2010, active users and devices, SQL Server 2012, and Windows Server 2012

    For a comprehensive list of features and benefits, click here.

    Key Features and Benefits

    Determine Windows desktop readiness

    MAP 7.0 Beta assesses the readiness of your IT environment for your Windows desktop deployment. This feature evaluates your existing hardware against the recommended system requirements for Windows. It provides recommendations detailing which machines meet the requirements and which machines may require hardware upgrades. 

    Key benefits include:

    • Assessment report and summary proposal to help you to understand the scope and benefits of a Windows desktop deployment.
    • Inventory of desktop computers, deployed operating systems, and applications.

    Assess Windows Server 2012 Beta readiness

    MAP 7.0 Beta assesses the readiness of your IT infrastructure for a Windows Server 2012 Beta deployment. This feature includes detailed and actionable recommendations indicating the machines that meet Windows Server 2012 Beta system requirements and which may require hardware updates. A comprehensive inventory of servers, operating systems, workloads, devices, and server roles is included to help in your planning efforts.

    Virtualize your Linux servers on Hyper-V

    MAP 7.0 Beta extends its server virtualization scenario to include Linux operating systems. Now, MAP enables you to gather performance data for Linux-based physical and virtual machines and use that information to perform virtualization and private cloud planning analysis for both Windows and Linux-based machines within the Microsoft Private Cloud Fast Track scenario.

    Key features allow you to:

    • Incorporate non-Windows machines into your virtualization planning.
    • View consolidation guidance and validated configurations with preconfigured Microsoft Private Cloud Fast Track infrastructures, including computing power, network, and storage architectures.
    • Get a quick analysis of server consolidation on Microsoft Private Cloud Fast Track infrastructures to help accelerate your planning of physical to virtual (P2V) migration to Microsoft Private Cloud Fast Track.
    • Review recommended guidance and next steps using Microsoft Private Cloud Fast Track.

    Click here to see more features and benefits.


    “Where do you suppose that phrase ‘here’s the skinny on..’ comes from?”

    I don’t know.  But if one of you readers knows the answer, please put it in the comments.  And let us know if you’re trying out the MAP 7.0 beta, too!

  • Can I run Hyper-V on Windows Server 2008 R2 Standard? (So many questions. So little time. Part 33)

    Chris A asked this question at a recent TechNet Event:

    Get started on your Private Cloud.

    Thanks for the question, Chris.  Hyper-V is actually available on all versions of Windows Server 2008 R2, and this will be true of foreseeable future versions of Windows Server as well.  There are a few differences in a couple of areas, however.  Here’s a slide that I’ve presented a couple of times that outlines the key differences.  (click to enlarge)


    Hyper-V Edition comparisons in Windows Server 2008 R2

    Notice, Chris, that you do have the ability to run Hyper-V on Windows Server 2008 R2 Standard Edition, but the main differences are that with Standard Edition you don’t have as many “Use Rights” (only 1 additional VM license is included), and we can’t support you running more than 192 VMs on a single host. 
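    By the way, on any edition of Windows Server 2008 R2 the Hyper-V role can be enabled right from PowerShell using the ServerManager module.  A quick sketch (run from an elevated session; hardware virtualization must be enabled in the BIOS):

    ```powershell
    # Enable the Hyper-V role on Windows Server 2008 R2 (any edition that includes it).
    # The -Restart switch reboots the server to complete the installation.
    Import-Module ServerManager
    Add-WindowsFeature Hyper-V -Restart
    ```

    After the reboot, the Hyper-V Manager console and the hypervisor itself are available.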

    “Is that really all?”

    No.  Very important is the capability to create High Availability using Windows Failover Clustering, and to be able to support Live Migrations of running virtual machines between nodes in a cluster.  Standard Edition doesn’t include failover clustering, and therefore can’t support High Availability or Live Migration. 


    The news isn’t all bad.  You CAN actually do failover clustering, get High Availability, and do Live Migrations using Microsoft Hyper-V Server 2008 R2 w/SP1.
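    And once a failover cluster is configured (whether on Enterprise, Datacenter, or the free Hyper-V Server), a live migration can be kicked off from PowerShell with the FailoverClusters module.  A sketch, where the VM and node names are placeholders for your own environment:

    ```powershell
    # Live-migrate a clustered, running virtual machine to another cluster node.
    # "SQLVM01" and "HVNODE2" are hypothetical names; substitute your own.
    Import-Module FailoverClusters
    Move-ClusterVirtualMachineRole -Name "SQLVM01" -Node "HVNODE2"
    ```

    The VM keeps running throughout the move; clients connected to it shouldn’t notice a thing.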

    “Seriously?  Isn’t that the free version of your hypervisor?”


    Yes, it is.  So, my advice to you is this: if you’re thinking of buying or using a Windows Server 2008 R2 Standard Edition license for supporting virtualization, I’d encourage you to consider using Hyper-V Server 2008 R2 w/SP1 instead.


    Does that make sense?  Should we be giving so much virtualization power for free?  Are you using Hyper-V Server, or do you have any more questions about it?  Enter a comment and let’s start the conversation.

  • Getting the best performance out of Hyper-V (So many questions. So little time. Part 34)

    Bryan W. asked a big question at a TechNet Event a couple of weeks ago:

    Evaluation Software Found Here

    This question brings up a pretty big topic: Hyper-V performance.  And even more fundamentally it’s also a question of VHDs (Virtual Hard Disks) and how they perform based on their type or configuration. 

    The quick answer to your question, Bryan, is YES.  Whenever you can get more spindles working on a problem, you’re likely to get better performance.  When I’m using differencing disks, I like to keep the parent disk on a different physical disk than the child.  The same applies when you’re deciding where to put virtual machine snapshots.  Personally I like to keep my virtual machines’ hard disks alongside the machine configuration files, but if I really wanted to squeeze out the best performance, I’d do things differently.

    “Are there any documents or pages out there that describe good performance practices for Hyper-V?”

    Thankfully, yes.  The document “Performance Tuning Guidelines for Windows Server 2008 R2” has a “Performance Tuning for Virtualization Servers” section, and pages 86-89 of that section provide a great discussion on optimizing storage I/O; such as this gem:

    Physical Disk Topology
    VHDs that I/O-intensive VMs use generally should not be placed on the same physical disks because this can cause the disks to become a bottleneck. If possible, they should also not be placed on the same physical disks that the root partition uses.

    A “Performance Tuning for the Storage Subsystem” section (page 24) also describes, in great detail, the options and their implications when configuring virtual machine storage.

    And if you really want to know how to measure a virtual machine’s performance based on a number of factors (Disk, Memory, Network, and CPU), check out this “Measuring Performance on Hyper-V” article.
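    As a quick supplement to that article, here’s a hedged example of sampling a few of the Hyper-V performance counters it discusses, right from PowerShell on the host (the counter names assume an English-language installation):

    ```powershell
    # Sample hypervisor CPU load and virtual storage throughput:
    # three samples, five seconds apart.
    Get-Counter -SampleInterval 5 -MaxSamples 3 -Counter @(
        '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time',
        '\Hyper-V Virtual Storage Device(*)\Read Bytes/sec',
        '\Hyper-V Virtual Storage Device(*)\Write Bytes/sec'
    )
    ```

    Watching the “% Total Run Time” counter (rather than plain % Processor Time) is the accurate way to see real CPU load on a virtualization host.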


    What about you?  How are you wringing the most out of the performance of your Hyper-V installations and virtual machines?  Share your best practices in the comments.

  • Can you run Exchange 2010 on a Core Server? (So many questions. So little time. Part 35)

    Dean attended our TechNet Event recently, and asked this question:

    Evaluation Software Found Here

    No.  A Server Core installation is really not meant to be a platform for rich applications.  Many Windows applications require components that are not available in the current Windows Server 2008 R2 Server Core installation.  And a Server Core installation is not in the list of operating systems supported by Exchange 2010.

    However, while researching this I did find a very well-done summary of an attempt to make it work.  Johan Delimon gave it a shot, and documented his attempt here.  (Nice work!)  In short – he was sooo close.  But it didn’t work.


    Have any of you tried to run things on, or are currently supporting applications running on Windows Server Core installations?  Share your experience in the comments.

  • Your choice in .VHD files, and what it implies (So many questions. So little time. Part 36)

    Jeanne asked two questions at our TechNet Event a couple of weeks ago:

    Evaluate the Private Cloud

    So.. in question #1 you’re asking about the implications of using one particular .VHD (virtual hard disk) disk type: Fixed, Dynamic, or Differencing.  To answer that question, first let me take a minute to describe what these types are.  I’ll point to some benefits and weaknesses, and then I’ll point you to some documentation on what the performance impacts are.

    First of all, you should understand that when you create a VHD file you have three choices.  Two of these choices concern the difference between the capacity the virtual machine or file system thinks it has and the actual size of the .VHD file on disk, and one has to do with a linked relationship between disks.

    Fixed: With this type, the file system using the disk sees a certain size, and that size matches the actual size of the .VHD file.  So if I create a virtual disk for a virtual machine that believes it has 40GB on its C: drive, the .VHD file is approximately 40GB in size.

    Dynamic: This is in contrast to the Fixed-type disk.  In this case, the operating system or file system sees 40GB in the disk, but the size of the .VHD file starts small and grows dynamically as more information is added to it; potentially growing to the full capacity eventually.

    Differencing: This is a parent/child relationship.  Or a grandparent/parent/child relationship.  Or a great-grandparent/grandparent/parent/child relatio…

    “Get on with it!”

    Sorry.  The idea is that you have a .VHD that is a basis, or starting point of content, for a new child disk.  Let’s say the .VHD that is going to be the parent is a fully installed and sysprep’d copy of an operating system.  So it’s ready for duplication.  Then you create differencing disks that refer to the common parent.  Those new machines based on the parent will all have the contents of the parent, plus whatever changes get written to the child; or to the lowest-most disk in the chain of differencing disks.  IMPORTANT: Once you have based one or more children off of a parent disk, the parent .VHD file must never be modified.  If it is, any child disks based on it become invalid and won’t work. 

    “What’s the benefit, then, of differencing disks?”

    Mainly there are two benefits:

    1. It’s potentially a quick way to create new machines off of a pre-installed, up-to-date operating system installation.  For example, in my 3-part screencast series on the System Center 2012 Unified Installer, I created all 8 of my virtual machines from the same parent disk.
    2. It’s a huge savings in space.  Differencing disks are dynamic disks that start out small and grow with any changes or additions to the system using them, so it’s easier to fit many more virtual machines in limited space.  In my case, the main place I like to put my virtual machines is a 160GB SSD.  I put my parent on a separate drive, and this way I can fit more new machines on that SSD. 
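    That workflow can be sketched with the Hyper-V PowerShell module that ships with Windows Server 2012 (these cmdlets aren’t in the box on 2008 R2, where you’d use the New Virtual Hard Disk wizard or WMI instead).  The paths and names here are hypothetical:

    ```powershell
    # Create several differencing disks that all share one sysprep'd parent.
    # The parent lives on a separate drive and must never be modified again.
    $parent = 'D:\Parents\Win2008R2-sysprepped.vhd'
    1..8 | ForEach-Object {
        New-VHD -Differencing -ParentPath $parent -Path ("C:\VMs\Child{0}.vhd" -f $_)
    }
    ```

    Each child starts out tiny, and a new virtual machine pointed at it boots with the full contents of the parent.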

    “So what are the performance implications of these options, Kevin?” 

    Of course, the more disk operations that are required, the more performance can be degraded.  The .VHD file option with the smallest impact on performance is a Fixed-size disk.  (I mention “.VHD file option” because you do have the ability to run a virtual machine on a “Pass-through” disk – a physical disk or storage system LUN attached directly to the virtual machine to use as its disk.  That is even more efficient than .VHD files in terms of performance; but you lose the big benefits of flexibility and transportability that a virtual machine running off of file system objects gives you.)  Next would be Dynamic disks.  Operations to grow the disk add overhead when they’re required.  The same can be said for Differencing disks, because as I mentioned above, your child disk is essentially a dynamic disk.  Changes that would otherwise have been written to the parent are committed in the child, and that child .VHD file will grow as needed.

    “Where can I go to get more detailed information?  Is there anything like, say, a ‘Windows Server 2008 R2 Hyper-V Virtual Hard Disk Performance Whitepaper’?”

    As a matter of fact, there is exactly something like that:  ‘Windows Server 2008 R2 Hyper-V Virtual Hard Disk Performance Whitepaper’  Enjoy.


    In question #2, you are looking for a small-business equivalent of System Center 2012’s Service Manager component; something along the lines of Microsoft’s System Center Essentials, but with some (or all) of the functionality of Service Manager. 

    I’m afraid that doesn’t exist.  At least not from Microsoft.  What happens in the future I don’t know and can’t speculate on, but for now, I don’t know of a product for small-to-midsized businesses that does what Service Manager does.

    In case you’re interested in System Center Essentials, check out these resources:


    I hope that helps.  Let me know.

  • Why can’t I get Hyper-V working in Windows 8? (So many questions. So little time. Part 37)

    Dan, at a recent TechNet Event, noticed that although he can get Hyper-V working on his laptop when it’s running Windows Server 2008 R2, he couldn’t get it going when running the Windows 8 Consumer Preview.

    Evaluation Software Here

    The culprit is SLAT (Second Level Address Translation), a CPU-based memory indexing technology that greatly increases the performance of virtualization.  Intel’s version of this is what they call Extended Page Tables (EPT), and AMD has Rapid Virtualization Indexing (RVI). 

    Hyper-V running on Windows Server 2008 R2 can benefit from SLAT, but doesn’t strictly require it; unless you’re using RemoteFX.

    “So what about Windows Server 2012 and Windows 8?  Do they require SLAT?”

    Okay.. I need to tread lightly here…

    ** DISCLAIMER: What I am saying next is only based on the current beta and Consumer Preview of those two products; which means that it may or may not be true in the future, actual released products. **

    Currently, as is true in Windows Server 2008 R2, Hyper-V in the beta of Windows Server 2012 (known as Windows Server “8” Beta at the time it was released) doesn’t require SLAT unless you’re using RemoteFX.

    But Hyper-V in the Consumer Preview of Windows 8 does require SLAT.  This requirement has to do with the fact that, as a client operating system, you’re using higher-end graphics than what is typically required (or desired) on a server installation.  I happen to be running the Consumer Preview of Windows 8 as my production (work) operating system, and I have Hyper-V installed.  It worked great for my recent series of TechNet Events while doing the AD and migration demos.
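    If you’re not sure whether your CPU supports SLAT, the Sysinternals Coreinfo utility can tell you.  A sketch (this assumes you’ve downloaded coreinfo.exe and it’s on your PATH):

    ```powershell
    # Coreinfo -v dumps only the virtualization-related CPU features.
    # An asterisk next to EPT (Intel) or NPT (AMD) means SLAT is supported;
    # a dash means it isn't.
    coreinfo -v
    ```

    On a machine already running the Windows 8 Consumer Preview, running `systeminfo` also reports Second Level Address Translation support under its Hyper-V Requirements section.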


    What about you?  Are you as happy to have Hyper-V on a client operating system as I am?  Have you been testing/trying/learning Windows Server 2012?  Share your impressions or experiences in the comments!