• Extend System/Boot Volume on Windows Server 2008/Windows Vista/Windows 7 Beta

     

    Extending volumes and drives, due to lack of space or other reasons, is still a relevant topic for IT professionals who work with Windows servers or clients.

    There are public documents such as KB325590 and the following TechNet article that explain how to extend a volume on your Windows Server 2003 or Windows XP client;

    Extend a simple or spanned volume

    http://technet.microsoft.com/en-us/library/cc776741.aspx

    The pain came when you wanted to extend the system/boot drive. To do that on Windows Server 2003 or Windows XP, you had to boot into Windows PE and use diskpart (instructions are in KB325590).
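    For reference, the KB325590 procedure boiled down to booting into Windows PE and running a diskpart sequence like the following (the volume number below is an example; pick yours from the list volume output):

```
diskpart
list volume
select volume 1
extend
exit
```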

    Now, with Windows Vista and Windows Server 2008, extending the system/boot partition is just as easy as it has always been with non-system partitions.

    Here are the steps;

    1- Start Disk Management Console: (diskmgmt.msc);

    image

    ** As you can see, we have 1.00 GB of unallocated space on Disk0 and would like to add that to the C: drive

    2- Right click on the C: partition and select Extend Volume…

    clip_image004

    3- Press Next on the Extend Volume Wizard page;

    clip_image006

    4- You’ll see the total volume size, the maximum available space for the extension, and the amount of space you’d like to add to the system drive, all in megabytes. By default the wizard selects the maximum amount by which drive C: can be extended, but you have the option to change that number. Press Next when you are ready to extend the partition;

    clip_image008

    5- Confirm this action by pressing the Finish button;

    clip_image010

    6- As you can see in the screen shot below, the system drive has been extended with no need to reboot or take the system offline.

    clip_image012

    Amazing, isn’t it?

  • Hyper-V Guest Clustering Step-by-Step Guide

     

    You might have wondered how to set up a failover cluster inside Hyper-V virtual machines. As you know, the major element of a cluster is the shared storage piece. Inside Hyper-V VMs, iSCSI is the only supported method of providing shared disks over virtual NICs.

     

    Here are detailed steps on how to create highly available resources inside Windows Server 2008 R2 VMs running on Hyper-V;

     

    1. VM Preparation: Provision the following VMs on your Hyper-V server;

    • One Windows Server 2008 R2 DC Virtual Machine
    • At least two Windows Server 2008 R2 Virtual Machines
    • iSCSI target software to provide shared storage to each node
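    On newer Hyper-V hosts that ship the Hyper-V PowerShell module (Windows Server 2012 and later), provisioning these VMs can be sketched as follows; the VM names, paths, and sizes are examples, not part of the original guide:

```powershell
# Create the DC VM and two cluster-node VMs (names, paths, and sizes are examples)
New-VM -Name "DC01" -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\DC01.vhdx" -NewVHDSizeBytes 40GB

"Node1","Node2" | ForEach-Object {
    New-VM -Name $_ -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\$_.vhdx" -NewVHDSizeBytes 40GB
}
```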

     

    Update (04/05/2011): The Microsoft product group just released Microsoft iSCSI Target Software 3.3 for public download;

    http://www.microsoft.com/downloads/en/details.aspx?FamilyID=45105d7f-8c6c-4666-a305-c8189062a0d0

    http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx

     

    If you have access to MSDN or TechNet Plus subscription downloads, you may use Windows Storage Server 2008 and install the Microsoft iSCSI Software Target on it (the evaluation version comes from the MSDN or TechNet Plus subscription downloads; the production version comes from your Windows Storage Server 2008 OEM vendor).

    Note: The default Administrator password for Windows Storage Server 2008 is in the Windows Storage Server 2008 Release Notes (WSS2008_RELNOTES.DOC), which can be found in the “Tools” ISO file. Please see more information about Windows Storage Server 2008 at the end of this post.

    In my demo case, I'm using iSCSI target software that can be installed on Windows Server 2008 R2, so I'm using my WS08 R2 Domain Controller as my storage server as well. This saves me one VM.

    You can use any other iSCSI target solution if it’s available to you. There are a few free iSCSI target products, such as StarWind, that can also be used for this setup.

     

    2. Virtual Network: Create two virtual network switches in the Hyper-V management console. One will serve the cluster heartbeat (Private network switch) and the other will carry cluster communication and iSCSI traffic (Internal virtual switch).

    Note: Optionally, you can add more virtual network switches based on your needs. If you need to access resources outside your Hyper-V server, you’ll need to create an External virtual switch and use that for cluster communication.
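    If your host runs Windows Server 2012 or later, the same two switches can be created from PowerShell with the Hyper-V module (the switch names here are examples):

```powershell
# Private switch for the cluster heartbeat
New-VMSwitch -Name "Heartbeat" -SwitchType Private

# Internal switch for cluster communication and iSCSI traffic
New-VMSwitch -Name "ClusterAndiSCSI" -SwitchType Internal
```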

    3. Configure Shared Storage: Create a Virtual Hard Disk (VHD) on your Hyper-V host machine and attach it to your iSCSI Storage VM Server via the virtual SCSI controller.

    Note: All the iSCSI shared disks will be created inside this VHD, so you need to size the VHD based on the number and size of the shared disks you will need in your cluster.

    In my example, I’ve created a 20GB VHD inside Hyper-V Management console (New Hard Disk…) and attached it to my storage server VM's SCSI controller.
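    On hosts with the Hyper-V PowerShell module (Windows Server 2012 and later), the equivalent of the New Hard Disk… steps would look roughly like this; the VM name and path are examples:

```powershell
# Create a 20 GB dynamically expanding VHD on the host
New-VHD -Path "C:\VMs\SharedStorage.vhd" -SizeBytes 20GB -Dynamic

# Attach it to the storage server VM's virtual SCSI controller
Add-VMHardDiskDrive -VMName "DC01" -ControllerType SCSI -Path "C:\VMs\SharedStorage.vhd"
```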

    clip_image001

    Now, from inside the storage VM, we'll need to access Microsoft iSCSI software target (under Administrative Tools)

    Start creating your shared disks (including Witness disk) under Devices.

    clip_image002

    clip_image003

    clip_image004

    clip_image005

    In the next step you can add the target node(s), or just click Next and add them later. We'll come back and add them later.

    clip_image006

    You can see that our iSCSI disk has been provisioned but isn't being accessed by anyone yet.

    Let's repeat the last step and provision more iSCSI virtual disks for your shared disks on the cluster. (I named them SharedDisk0X)

    clip_image007

    Configure the firewall on both ends for iSCSI communication;

    On the client side, configure the firewall exception for the iSCSI Service; (checking the check box for the Domain profile should be sufficient)

    clip_image008

    On the storage server side, configure firewall exceptions for the iSCSI Service and the default iSCSI target port (TCP 3260);

    clip_image009
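    On systems with the NetSecurity PowerShell module (Windows Server 2012 and later), the same exceptions can be scripted; on 2008 R2 you'd use the Windows Firewall UI as in the screen shots above. The rule group and display names below are assumptions based on the built-in firewall rule groups:

```powershell
# Client side: enable the built-in iSCSI Service rule group
Enable-NetFirewallRule -DisplayGroup "iSCSI Service"

# Storage server side: open the default iSCSI target port (TCP 3260)
New-NetFirewallRule -DisplayName "iSCSI Target (TCP 3260)" -Direction Inbound `
    -Protocol TCP -LocalPort 3260 -Action Allow
```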

    Configure iSCSI Initiator on each node;

    Select iSCSI Initiator from Administrative Tools. If this is the first time the iSCSI initiator is used on that machine, Windows will prompt you to start the iSCSI service.

    clip_image010

    The initiator name (screen shot below) will be used inside iSCSI Target Software to establish the connection.

    clip_image011

    Note: If you're using the Administrator account (with the same password on the domain and locally) for your demo, make sure you log on to each node using the DomainName\Administrator user name. On Windows Server 2008 and above, logging on with just the administrator account name will fall back to the local Administrator account, not the domain one. You can always use the whoami command to check the logged-on account in your session. This can be a gotcha if your local and domain Administrator accounts share the same password!

    clip_image012

    clip_image013

    On the Storage Server, start the iSCSI software target console.

    Under iSCSI Targets, create a new Target;

    clip_image014

    clip_image015

    Add iSCSI hard disks to each Target;

    clip_image016

    clip_image017

    clip_image018

     

    Note: You can also create only one Target (i.e. Name: W2K8_Clu), add virtual disks to it, and then log on from the iSCSI initiator on each node using the same IQN.

     

    clip_image019

    Now, on each cluster node, go back and connect to the iSCSI Software Target on the storage server using the iSCSI Initiator. Quick Connect… should work fine for you.

    clip_image020

    clip_image021

    clip_image022

    clip_image023
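    On Windows Server 2012 or later, Quick Connect… can also be scripted with the iSCSI PowerShell module (the portal address below is an example):

```powershell
# Make sure the initiator service is running and starts automatically
Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic

# Register the target portal, then connect to every discovered target persistently
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.10"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```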

    After this, all those iSCSI disks will show up in each node's Disk Management console (diskmgmt.msc)

    clip_image024

    While in there, we need to right-click each disk and bring it Online, then initialize all the disks and assign them drive letters. This has to be done on all nodes that are going to be joined to the cluster.

    Note: As you know, in a production environment you need a dedicated network for your iSCSI traffic, but for this demo I've used the same virtual network for the iSCSI traffic.
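    With the Storage module (Windows Server 2012 and later), the bring-online/initialize/format pass can be scripted per node; on 2008 R2 you'd do it in Disk Management as described above. This is a sketch: it assumes every currently offline disk is one of the iSCSI shared disks.

```powershell
# Bring each offline shared disk online, initialize it, and create a formatted volume
Get-Disk | Where-Object IsOffline | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Initialize-Disk -Number $_.Number -PartitionStyle MBR
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```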

     

    4. Cluster Setup: Now that all the nodes are joined to the domain and can see all the iSCSI shared disks, we need to install the Failover Clustering feature (from the Server Manager console) on each node. (No reboot required)

    clip_image025
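    On Windows Server 2008 R2 this can also be done from PowerShell with the ServerManager module:

```powershell
Import-Module ServerManager

# Install the Failover Clustering feature on each node (no reboot required)
Add-WindowsFeature Failover-Clustering
```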

    After installing the feature, you’ll get to use one of the best features of failover clustering in Windows Server 2008 and Windows Server 2008 R2: the Cluster Validation Wizard.

    Go to Failover Cluster Manager console from Administrative Tools.

    Launch Validate a Configuration… wizard;

    clip_image026

    clip_image027

    clip_image028

    When the validation completes, you’ll need to review the report and address any possible issues in there.

    The following report looks like a successful one;

    clip_image029

    In my demo, I got three warnings related to not having a default gateway assigned to the Ethernet adapters. (I wouldn't worry about this since my cluster won't need public network access)

    clip_image030

    With the validation report looking good, we can form a cluster with these nodes using the Create a Cluster… option on the Failover Cluster Manager console’s main page;

    clip_image031

    Assign a Name and IP address for the cluster itself

    clip_image032

    clip_image033
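    The validation and cluster-creation steps can also be run from the FailoverClusters PowerShell module that ships with Windows Server 2008 R2; the node names and IP address below are examples:

```powershell
Import-Module FailoverClusters

# Run the full validation suite against the prospective nodes
Test-Cluster -Node Node1,Node2

# Form the cluster with a name and a static IP address
New-Cluster -Name "W2K8Clu" -Node Node1,Node2 -StaticAddress 192.168.1.20
```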

    Congratulations, the cluster has been configured inside your virtual machines.

    clip_image034

    Now you can proceed with creating highly available resources under Services and Applications, such as MSDTC, SQL, and all the others.

    The following screen shot was taken after the MSDTC and SQL resources were installed;

    image

    And you can see the available and in-use storage in the Failover Cluster Manager console;

    image

    So Long…!

     

    More Resources;

    Microsoft iSCSI Software Target

    http://technet.microsoft.com/en-us/library/dd573326(WS.10).aspx

    Configuring Windows Firewall for iSCSI Software Target

    http://technet.microsoft.com/en-us/library/dd573342(WS.10).aspx

    iSCSI Software Target 3.2 FAQ

    http://blogs.technet.com/storageserver/archive/2009/08/14/iscsi-software-target-3-2-faq.aspx

    Known Issues and Updates for Windows Storage Server 2008

    http://technet.microsoft.com/en-us/library/dd904408(WS.10).aspx

    Blog: Windows Storage Server 2008 with the Microsoft iSCSI Software Target 3.2 available to MSDN and TechNet Plus subscribers

    <http://blogs.technet.com/josebda/archive/2009/05/16/windows-storage-server-2008-with-the-microsoft-iscsi-software-target-3-2-available-to-msdn-and-technet-plus-subscribers.aspx>

    Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster

    http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx

    Guide to Creating and Configuring a Server Cluster Under Windows Server 2003

    http://www.microsoft.com/downloads/details.aspx?familyid=96F76ED7-9634-4300-9159-89638F4B4EF7&displaylang=en

  • Virtual Machine RAM and Windows Server 2012 Hyper-V Dynamic Memory

    With the introduction of Microsoft Hyper-V Dynamic Memory (DM), there have been many discussions around DM configuration. Before Windows Server 2012 Hyper-V, the only options available in the user interface were Startup and Maximum RAM. You could, however, use another parameter called Minimum RAM (by default the same value as Startup RAM), but that needed to be configured through WMI.

    Windows Server 2012 Hyper-V makes it easier by including that option in the VM’s Memory settings.

    image
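    In Windows Server 2012, the same three values can also be set from PowerShell (the VM name and sizes below are examples):

```powershell
# Enable Dynamic Memory and set Startup, Minimum, and Maximum RAM for a VM
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 4GB
```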

    When you start a VM with Dynamic Memory enabled, the guest operating system running inside the VM only knows about the RAM that has been allocated to it at that moment, which is normally equal to or around the Startup RAM value. You can verify this by looking at the Installed memory (RAM) value on the System page, or the Installed Physical Memory (RAM) value on the System Summary page when you launch msinfo32.exe

    This number only changes upward until the next reboot: as the VM’s memory pressure increases, Hyper-V allocates more RAM to the VM, increasing the above-mentioned value. If the VM’s memory pressure later drops and Hyper-V balloons part of that RAM out of the VM, the value the guest OS reports as installed RAM won’t decrease. Thus, the user interface and any WMI/PowerShell queries will show the maximum amount of RAM that has been allocated to that VM since the last reboot.

    Here’s a PowerShell sample code that shows the RAM formatted in GB;

    "{0:N0}" -f ((Get-WmiObject -Query "select * from Win32_ComputerSystem").TotalPhysicalMemory/1GB)

     

    Now, why do we use this?

    There are many cases where applications or installer packages look for a minimum amount of RAM as a prerequisite. Some applications might not even launch unless the available-RAM query returns a satisfactory value. If you’ve ever tried to install Microsoft System Center 2012 components in a VM with Dynamic Memory enabled, you’ve probably seen the available-RAM warning message in the prerequisite analyzer wizard. Not fulfilling that requirement won’t stop you from installing any System Center 2012 components, though.

    The Startup RAM value lets you allocate that much RAM to the VM when it starts, and lets the guest OS set its Installed RAM value accordingly. This RAM is locked to the VM while it’s booting, and any extra RAM the VM doesn’t need becomes available to Hyper-V again as soon as the guest OS and the Hyper-V Integration Services inside it are up and running. In most cases the actual physical RAM allocated to the VM can drop down to the Minimum RAM value if there’s no memory pressure. This all depends on how fast your VM boots.

    Of course, you can set the Startup RAM to the same value as the Maximum RAM for multiple VMs, but you’ll need to be careful: if the Maximum RAM is too high and those VMs start at the same time, some of them might not be able to boot if the sum of those Startup RAM values exceeds the total physical RAM available on the Hyper-V server. And yes, you can use the automatic start delay for those VMs to address this concern.

     image

     

    In summary, the Dynamic Memory Startup RAM value is a nifty dial that can assist you with optimizing your private cloud’s memory resources. It’s best to understand the minimum RAM that the guest OS needs to report as available installed RAM, set the Startup RAM value accordingly, and let Hyper-V Dynamic Memory do its magic and distribute RAM efficiently among your VMs. So long…

     

    More Info:

    Scripting dynamic memory, part 5: changing minimum memory

    Dynamic Memory Coming to Hyper-V Part 6…

  • How to determine the appropriate page file size on my server

    I’ve seen the question of how big the page file has to be on a server come up several times. As you may know, the rule of thumb is to make it 1.5 times the physical RAM, but sometimes the system will recommend a different value for your page file size.

    What if you want to verify the page file size yourself and prove it with actual numbers?

    The best method to determine the correct page file size is to observe the following performance counters in perfmon;

       Commit Limit

       Commit Charge

    (available in both perfmon and Task Manager)

    The commit limit includes physical memory plus the page file, so we can easily see how much page file is needed by observing these counters over time.

    Under full application load, increase the page file until the commit limit is 20%-30% greater than the commit charge (or reduce it if your minimum page file size was higher).

    Since the correct measurement depends on the application mix and a number of other factors, you can see why the 1.5x RAM rule is just an estimate: it doesn’t account for what is actually running on the server.
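    You can sample both counters from PowerShell with Get-Counter and work out the headroom yourself. A small sketch (the 20%-30% target is the guideline from above):

```powershell
# Read the current commit charge and commit limit (values are in bytes)
$c = Get-Counter '\Memory\Committed Bytes','\Memory\Commit Limit'
$charge = ($c.CounterSamples | Where-Object {$_.Path -like '*committed bytes'}).CookedValue
$limit  = ($c.CounterSamples | Where-Object {$_.Path -like '*commit limit'}).CookedValue

# Show how much headroom the commit limit currently has over the commit charge
"Headroom: {0:P0} (target under full load: 20%-30%)" -f (($limit - $charge) / $charge)
```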

     

    We also have an article that shows you a different method on Windows Server 2003 x64;

    889654  How to determine the appropriate page file size for 64-bit versions of Windows Server 2003 or Windows XP

    http://support.microsoft.com/kb/889654

     

    Either method should assist you with calculating the correct page file size on your server.

    Until next time.  Cheers!

     

    I just found out that Mark Russinovich has a very deep dive into this topic on his blog;

    http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx

     

  • What’s the story of Pagefile size on Hyper-V Servers?

     

    This question comes up frequently. So, how do we size the pagefile on Hyper-V host servers, and what about virtual machines?

    Here’s all you should know;

    Paging File Configuration within the VM

    • Proper paging file configuration is vital to the way DM performs

    • DM requires that the VM have a PF

    • So how do you size your Paging File?

    • The old way doesn’t apply to DM:

      • Use peak commit charge
      • Size the PF to peak commit charge – physical memory + some buffer

    To understand how we should size the PageFile (PF), let’s consider an example:

    • Current commit: 100 MB

    • Free buffer: 20% (this is a new Dynamic Memory setting)

    • Target commit: 125 MB

    • Paging file: 1 MB

    • Commit limit: 126 MB

    • Available memory: 26 MB (commit limit – current commit)

    • What happens when an app allocates 50 MB of memory?

    • What happens when an app allocates five 10 MB memory chunks?

    Guidance:

    • Minimum PF should be large enough to cover the memory demands of your largest process. In the example above, the single 50 MB allocation will fail with an out-of-virtual-memory error. When the application allocates five 10 MB chunks, the first two allocations will go through while the rest will fail, as there’s no virtual memory (either RAM or PF) left to accommodate them.

    • Maximum PF should be (peak commit charge – maximum physical memory + some buffer)

    Paging File Configuration on the host

    If you have reserved enough RAM for the host’s own operations and your host is only being used for Hyper-V (plus Failover Clustering for highly available VMs), you shouldn’t need a large pagefile.

    With Dynamic Memory, you have to make sure that your host reserve is set properly:

    Key: HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization

    Value name: MemoryReserve

    Value type: DWORD

    Value data: memory to reserve, in MB (decimal)
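    The same registry value can be set from an elevated PowerShell prompt; the 2048 MB reserve below is just an example:

```powershell
# Reserve 2048 MB of RAM for the parent partition (value is decimal MB)
New-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization" `
    -Name MemoryReserve -PropertyType DWord -Value 2048 -Force
```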

     

    With a properly configured host reserve, if you follow the (peak commit charge – physical memory + some buffer) formula on the host, you’ll find that you hardly need a page file at all.
    So a pagefile of min = a few gigabytes and max = 2 × min should be sufficient.

    Also, there are two new performance counter objects available on the Hyper-V host;

    • Hyper-V Dynamic Memory Balancer

      • Provides counters about the memory balancer running in the parent partition
      • Counters are focused on:
        • The memory that is available for the host to use
        • The add and remove operations performed by the memory balancer

    • Hyper-V Dynamic Memory VM

      • Provides counters about memory usage on the guest VMs that are currently running
      • Each VM has its own set of counters
      • Counters are focused on:
        • Memory added and removed from the guest
        • The memory pressure that the guest is experiencing
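    Both counter sets can be read with Get-Counter on the host. The object names match the list above; the specific counter names and instance wildcards below are assumptions:

```powershell
# Host-wide balancer view: memory available for the host to use
Get-Counter '\Hyper-V Dynamic Memory Balancer(*)\Available Memory'

# Per-VM view: the memory pressure each running guest is experiencing
Get-Counter '\Hyper-V Dynamic Memory VM(*)\Current Pressure'
```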

     

    Hope this helps! So long…