For the Microsoft Hyper-V admins out there, you may want to check out the following couple of MSPress Microsoft Hyper-V ebooks:
Optimizing and Troubleshooting Hyper-V Networking
http://shop.oreilly.com/product/0790145382924.do
Optimizing and Troubleshooting Hyper-V Storage
http://shop.oreilly.com/product/0790145383068.do
I was honored to write a few sections and tech-review both books for Mitch. Enjoy!
While publishing “Microsoft Office 365 Administration Inside Out”, we realized we had illustrated a fairly well-understood way to monitor your O365 tenant with System Center 2012 Operations Manager (SCOM) that may produce false positives. Chapter 5 of “Microsoft Office 365 Administration Inside Out” instructs the reader to browse to this web page and download the Office 365 management pack (MP). This MP is merely used as an example of how to use the SCOM Web Application Availability Monitoring wizard to create a management pack of a few synthetic transactions, also known as fake or test transactions.
Download the following MP and import it into your System Center 2012 Operations Manager infrastructure. You will need to edit the MP as described in Chapters 5 and 6 of “Microsoft Office 365 Administration Inside Out”.
3302.Office.xml
It is important to understand that these synthetic transactions only monitor specific entities, and relying on them to determine the state of your O365 tenant can lead to false positives. Let’s use Microsoft Exchange as an example. Most Microsoft Exchange servers store mailboxes in more than one mail database, known as an EDB file. Administrators do this for a variety of reasons, but the point here is that monitoring one mailbox’s availability only reflects the availability of its mail database. In O365, mailboxes may be spread across a number of mail databases. This means we may incorrectly report that email is running while only some mailboxes are available.
Microsoft Office 365 distributes mailboxes across numerous mail databases, abstracting this application layer from users and customers. Monitoring a single O365 mailbox will only validate that the mail database it resides on is available. This raises the question: how do we monitor O365 more accurately?
Office 365 has a component called the Service Health Dashboard (SHD), which is a matrix of the health states of various O365 workloads over the last 7 days. As you can see, O365 has 8 Exchange Online workloads that are exposed in this dashboard and need to be monitored, along with numerous other services and workloads.
O365 provides the following 7 services and 35 workloads:
The O365 Service Health Dashboard (SHD) is the authoritative source for monitoring an O365 tenant. This means we need to query the SHD with SCOM to determine the state of the services and workloads we are interested in alerting on.
The concept behind this entails three different parts:
We have been leveraging PowerShell and have made progress, but we have not successfully logged in due to what we identified as a Java component requirement.
=================================
$r = Invoke-WebRequest https://portal.microsoftonline.com/ServiceStatus/ServiceStatus.aspx -SessionVariable O365
$O365
$form = $r.Forms[0]
$form | Format-List
$form.Fields["cred_userid_inputtext"] = "username@yourdomain.onmicrosoft.com"
$form.Fields["cred_password_inputtext"] = "password"
$r = Invoke-WebRequest -Uri ("https://portal.microsoftonline.com/ServiceStatus/ServiceStatus.aspx" + $form.Action) -WebSession $O365 -Method POST -Body $form.Fields
$r.StatusDescription
==================================
Next, we need to query the “E-Mail and calendar access” section of the .aspx page. From this section of the page, we can retrieve the current state of the workload.
We retrieve the first column because it is the only state we need. O365 keeps 30 days of history for tenant service monitoring, but SCOM can maintain months to years of data.
As of this posting, we have completed the following PowerShell commands to accomplish this. This script is a work in progress because we are at an impasse attempting to move beyond the login page.
$EMailServiceName="E-Mail and calendar access"
$HTML = gc '.\Service health.htm'
$section=$false
$result=""
foreach($line in $HTML)
{
if($line -match "ServiceNames" -and $line.Contains($EMailServiceName)){$Section=$true}
if($section -and $line.Contains("TodayCol") -and $line -match "title=`"([^`"]*)`""){$result=$matches[1]}
if($line -match "SubServiceNames" -and -not $line.Contains($EMailServiceName)){$Section=$false}
}
Write-Host "Service: '$EMailServiceName' Current status is: $result"
Finally, once the value is retrieved, we can write to the Application Event Log like this:
write-eventlog -logname Application -source O365MP -eventID 301 -entrytype Information -message "O365 message" -category 1 -rawdata 10,20
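Building on that, the state string retrieved by the parsing script could also drive the event’s entry type. The mapping below is a hypothetical sketch, not part of any official MP:

```powershell
# Hypothetical mapping of an SHD state string to an event log entry type
$state = "Service degradation"   # value retrieved by the parsing script above
$entryType = switch ($state) {
    "Normal service"       { "Information" }
    "Service degradation"  { "Warning" }
    "Service interruption" { "Error" }
    default                { "Information" }
}
Write-EventLog -LogName Application -Source O365MP -EventId 301 `
    -EntryType $entryType -Category 1 -Message "O365 state: $state"
```

A SCOM rule watching the Application log can then key off the entry type and event ID.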
There are 9 states an O365 service/workload can be in. This table identifies them and how we could reflect them in SCOM:

SHD Service State         SHD Service State Icon   SCOM Alert   SCOM Monitor Status
Normal service            Green                    No           Healthy
Service Restored          Green                    No           Healthy
Service degradation       Yellow                   Yes          Warning
Restoring service         Yellow                   Yes          Warning
Extended recovery         Yellow                   Yes          Warning
Service interruption      Red                      Yes          Error
Additional information    Informational
PIR published             Informational
Investigating             Informational
There are many pages online that guide you through creating a monitor in SCOM. Here are a few we recommend; while they may have been written for SCOM 2007, they are applicable to SCOM 2012.
How to Create a Monitor
How to create a monitor based on a script
Create a Script-Based Unit Monitor in OpsMgr2007 via the GUI
Creating a SCOM monitor for an average value
Once this process is complete and working successfully, select a different workload, retrieve its state, and repeat for each workload you are interested in monitoring.
This posting is provided "AS IS" with no warranties, and confers no rights. Use of included utilities is subject to the terms specified at http://www.microsoft.com/info/cpyright.htm
With the introduction of Microsoft Hyper-V Dynamic Memory (DM), there have been many discussions around DM configuration. Before Windows Server 2012 Hyper-V, the only options available in the user interface were Startup and Maximum RAM. You could, however, use another parameter called Minimum RAM (by default the same value as Startup RAM), but that needed to be configured using WMI.
Windows Server 2012 Hyper-V made it easier by including that option in the VM’s Memory settings.
When you start a VM with Dynamic Memory enabled, the guest operating system running inside the VM only knows about the RAM that has been allocated to it at that moment, which is normally equal to or around the Startup RAM value. You can verify this by looking at the Installed Memory (RAM) value on the System page, or the Installed Physical Memory (RAM) value on the System Summary page when you launch msinfo32.exe.
This number only changes upward until the next reboot: as the VM’s memory pressure increases, Hyper-V allocates more RAM to that VM, increasing the value mentioned above. If, for any reason, the VM’s memory pressure drops and Hyper-V balloons part of that RAM out of the VM, the Installed RAM value seen by the guest OS won’t decrease. Thus, the user interface and any WMI/PowerShell queries will show the maximum amount of RAM that has been allocated to that VM since the last reboot.
Here’s a sample PowerShell command that shows the RAM, formatted in GB:
"{0:N0}" -f ((Get-WmiObject -Query "select * from WIN32_ComputerSystem").TotalPhysicalmemory/1GB)
Now, why do we use this?
There are many cases where applications or installer packages look for a minimum amount of RAM as a prerequisite. Some applications might not even launch unless the available-RAM query returns a satisfactory value. If you’ve ever tried to install Microsoft System Center 2012 components in a VM with Dynamic Memory enabled, you’ve probably seen the available-RAM warning message in the prerequisite analyzer wizard. Not fulfilling that requirement won’t stop you from installing any System Center 2012 components, though.
The Startup RAM value allocates that much RAM to the VM when it starts, and lets the guest OS set the Installed RAM value accordingly. This RAM is locked to the VM while it’s booting; the extra RAM that isn’t needed by the VM becomes available to Hyper-V as soon as the VM’s guest OS and the Hyper-V Integration Services within the guest OS are up and running. In most cases the actual physical RAM allocated to the VM can go down to the Minimum RAM value if there’s no memory pressure. This all depends on how fast your VM boots.
Of course, you can set the Startup RAM to the same value as Maximum RAM for multiple VMs, but you’ll need to be careful: if the Maximum RAM is too high and those VMs start at the same time, some of them might not be able to boot if the sum of the Maximum RAM values exceeds the total physical RAM available on the Hyper-V server. And yes, you can set a startup delay action for those VMs to address this concern.
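On Windows Server 2012, both the startup delay and the Dynamic Memory values can be set from PowerShell; the VM name and sizes below are hypothetical examples:

```powershell
# Stagger this VM's automatic start by two minutes (example VM name)
Set-VM -Name "VM01" -AutomaticStartAction Start -AutomaticStartDelay 120

# Configure Dynamic Memory with example Startup/Minimum/Maximum values
Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true `
    -StartupBytes 2GB -MinimumBytes 512MB -MaximumBytes 8GB
```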
In summary, the Dynamic Memory Startup RAM value is a nifty, useful dial that can assist you in optimizing your private cloud’s memory resources. It’s best to understand the minimum RAM that needs to be reported by the OS as Installed RAM, set the Startup RAM value accordingly, and let Hyper-V Dynamic Memory do its magic and distribute RAM efficiently amongst your VMs. So long…
More Info:
Scripting dynamic memory, part 5: changing minimum memory
Dynamic Memory Coming to Hyper-V Part 6…
There is a lot of excitement at MMS 2013 (#MMS2013) in Las Vegas: many opportunities to learn about Microsoft private and public clouds and the Cloud OS, and to celebrate the many milestones achieved by Windows Server 2012, System Center 2012 SP1, Windows Intune, Windows Azure, and Microsoft cloud solutions in general.
If you didn’t get a chance to attend, the good news is that you can stream the keynotes live on MSDN Channel 9;
http://channel9.msdn.com/Events/MMS/2013
Also, don’t miss the opportunity to go back during the week and watch the recordings of sessions you’d like to know more about.
Windows Server 2012 (aka The Modern Cloud OS) has been out for over 6 months now
With Windows Server 2012, Microsoft delivers a server platform built on our experience of building and operating many of the world's largest cloud-based services and datacenters. Whether you are setting up a single server for your small business or architecting a major new datacenter environment, Windows Server 2012 will help you cloud-optimize your IT so you can fully meet your organization's unique needs. Windows Server 2012 is:
Windows Server 2012 Rapid Deployment Program TCO Study Whitepaper
If you are interested in seeing how Windows Server 2012 has helped reduce total cost of ownership for customers that have already deployed it, please see this whitepaper.
TechNet: Windows Server 2012 Technical Library
This library provides the core content that IT pros need to evaluate, plan, deploy, manage, troubleshoot, and support servers running the Windows Server 2012 operating system.
Independent Study: The Total Economic Impact of Windows Server 2012
This study, conducted by Forrester Consulting, is now available; it uses their Total Economic Impact methodology to explore the potential costs and benefits of Windows Server 2012. Forrester also stated in the study that, “The data collected in this study indicates that deploying Windows Server 2012 has the potential to provide a solid ROI through quantifiable benefits, most notably increased scale, performance and flexibility with Hyper-V, the enablement of software defined networking with Network Virtualization, and improved storage integration & management.”
Overall, the study estimates a 6-month payback and 3-year risk-adjusted estimated ROI of 195% for a composite organization of 14,000 employees, which is based on the characteristics of the companies interviewed.
Simplify Deployment of Windows Server 2012 with MDT 2012 Update 1
Microsoft Deployment Toolkit 2012 Update 1 is now available and expands your deployment capabilities with support for the latest software releases, including Windows 8, Windows Server 2012, and System Center 2012 Configuration Manager SP1 Community Technology Preview.
Assess Windows Server 2012 Readiness with MAP
Microsoft Assessment and Planning (MAP) Toolkit 7.0 is now available for download with detailed and actionable recommendations, indicating the machines that meet Windows Server 2012 system requirements and which may require hardware updates. A comprehensive inventory of servers, operating systems, workloads, devices, and server roles is included to help in planning efforts.
Configuring and Managing Server Core Installations
This collection of topics provides the information needed to install and deploy Server Core servers; install, manage, and uninstall server roles and features; and manage the server locally or remotely. It also includes a quick reference table of common tasks and the commands for accomplishing them locally on a Server Core server.
Secure your Windows Server 2012 and Windows 8 clients with Microsoft Security Compliance Manager
The Security Compliance Manager (SCM) is a free tool from the Microsoft Solution Accelerators team that enables you to quickly configure and manage the computers in your environment and your private cloud using Group Policy and Microsoft System Center Configuration Manager. Microsoft Security Compliance Manager 3.0 is now available for download! SCM 3.0 includes new baselines for Windows Server 2012, Windows 8, Internet Explorer 10, an enhanced setting library, and new functionality to LocalGPO.
What’s New in Windows Server 2012
Find out what's New in Windows Server 2012. This content focuses on changes that will potentially have the greatest impact on your use of this release. I would highly recommend checking out the Hyper-V and Storage new features and improvements.
So, why Hyper-V? Let’s take a look at the competitive advantages of Windows Server 2012 Hyper-V over VMware vSphere 5.1.
If you’ve ever been curious why Hyper-V is one of the leaders in the server virtualization space and how it compares to the latest release of VMware vSphere 5.1, you’ll enjoy reading this whitepaper.
7 ways Windows Server 2012 pays for itself
I was reading the InfoWorld article below and thought you’d also like to see what other people are saying about Windows Server 2012 and how it can move their IT forward with the speed of tomorrow.
How would you like to virtualize most business critical applications on Windows Server 2012 Hyper-V with confidence?
While the Microsoft virtualization product team confirmed that 99% of Class A applications in the world can now be virtualized on Windows Server 2012 Hyper-V, ESG Lab has also conducted an independent evaluation of Hyper-V, and you can find the results here.
The key findings from ESG Labs were:
Windows Server 2012 Storage Whitepaper
Microsoft is introducing several new storage features and capabilities with Microsoft Windows Server® 2012. These innovative features and capabilities extend functionality in profound ways, including the ability to leverage inexpensive storage to create highly available, robust, and high performing storage solutions. These new Microsoft storage capabilities add dynamic functionality on each server and can work together to further enhance functionality at scale in large enterprise environments. This paper outlines these new features and capabilities and how they integrate, co-exist, and complement one another to extend the capabilities of your entire storage infrastructure. Also, wanted to let you know that a new report benchmarking various Storage features in Windows Server 2012 is now available online. The report can be downloaded here.
So, how about some training?
Windows Server 2012 Technical Overview Courses on Microsoft Virtual Academy (MVA)
For you IT admins, these free courses provide lots of value to give you a head start on your journey toward adopting Windows Server 2012. This course is designed to provide you with the key details of Windows Server 2012. The seven modules in this course, through video and whitepapers, provide details of the new capabilities, features, and solutions built into the product. With so many new features to cover, this course is designed to be the introduction to Windows Server 2012.
Windows Server 2012 Test Lab Guides
The Windows Server 2012 Test Lab Guides (TLGs) are a set of documents that describe how to configure and demonstrate the new features and functionality in Windows Server 2012 and Windows 8 in a simplified and standardized test lab environment. There are many different scenarios that you could use to build a test lab in your test/dev environment. If you’re interested in Test Lab Guides for other Microsoft solutions and products, please see here or follow their blog here.
Windows Server 2012 Virtual Labs
Experience Windows Server 2012 firsthand in these virtual labs. You can test drive new and improved features and functionality through deep end-to-end scenarios or specific features, including server management and Windows PowerShell, networking, Hyper-V, and new storage solutions. These are Microsoft Cloud hosted Virtual Labs that you can start from your PC and enjoy the free learning experience without even needing to stand up a lab environment. If you are interested in more virtual labs, please see here. Enjoy!
Free Books
Free Windows Server 2012 (RTM) eBook is Available Now
Mitch Tulloch has updated his very popular free ebook on Windows Server 2012 based on the RTM version of the software. A key feature of this book is the inclusion of sidebars written by members of the Windows Server team, Microsoft Support engineers, Microsoft Consulting Services staff, and others who work at Microsoft. These sidebars provide an insider’s perspective that includes both “under-the-hood” information concerning how features work, and strategies, tips, and best practices from experts who have been working with the platform during product development. If you are into eReaders, you can also download the ePub and Mobi versions for free.
Free ebook: Introducing Windows 8: An Overview for IT Professionals (Final Edition)
Get a head start evaluating Windows 8, guided by a Windows expert who’s worked extensively with the software since the preview releases. Based on final, release-to-manufacturing (RTM) software, this book introduces new features and capabilities, with scenario-based insights demonstrating how to plan for, implement, and maintain Windows 8 in an enterprise environment. Get the high-level information you need to begin preparing your deployment now.
More Resources
Real impact for lean government
http://www.microsoft.com/industry/government/guides/Lean-Gov/default.aspx
Microsoft Private Cloud Solutions
http://www.cloudassessmenttool.com
Did you Know these facts?
http://www.didyouknow2012.com
Making government lean with virtualization and private cloud
http://www.microsoft.com/government/ww/public-services/blog/Pages/post.aspx?postID=255&aID=66
Microsoft Government
http://www.microsoft.com/government/en-us/state/pages/default.aspx
Microsoft Government Cloud Computing
http://www.microsoft.com/industry/government/guides/cloud_computing/default.aspx
Microsoft Government Server Virtualization
http://www.microsoft.com/industry/government/solutions/Server_Virtualization/default.aspx
Try the release candidate of Windows Server 2012, a scalable, dynamic, multitenant-aware, and cloud-optimized infrastructure. This section contains information to help you design, deploy, manage, and troubleshoot technologies in Windows Server 2012.
System Center 2012 Extends Client Management and Security to Mac and Linux
Last week at TechEd North America, general manager Garth Fort discussed two key release milestones that will enable organizations using System Center Configuration Manager to extend client management and security to heterogeneous platforms. Both are available for download today.
Couldn’t make it to Orlando? Get a taste of TechEd from wherever you are: watch the keynotes and many technical sessions on demand. Public recordings from TechEd North America 2012 are available on TechEd Channel 9. Also, if you are interested in watching the Failover Cluster sessions from TechEd Orlando 2012 and TechEd Amsterdam 2012, please see the blog post from the Microsoft Clustering and High-Availability product team here.
This collection contains information on how to design, deploy, manage, and troubleshoot technologies in Windows Server 2012.
This document looks at how Hyper-V on the Windows Server 2012 Release Candidate stacks up against VMware. You’ll be amazed by all the feature enhancements and scalability metrics.
Find out what's new and changed in Windows Server 2012. This content focuses on changes that will potentially have the greatest impact on your use of this release. I would highly recommend checking out the Hyper-V and Storage new features and improvements.
You can now use Windows PowerShell to automate all of the IT tasks around cloud datacenter management, starting from deploying your cloud infrastructure servers, through on-boarding virtual machines onto that infrastructure, and ending with monitoring your datacenter environment and collecting information about how it performs.
Windows Server 2012 First Look (free Course)
This course builds on the original Windows Server "8" First Look course with the addition of 4 new modules designed to go into more detail on the core Windows Server 2012 platform capabilities.
Windows Server 2012 Beta Test Lab Guides
The Windows Server “8” Beta Test Lab Guides (TLGs) are a set of rich documents that describe how to configure and demonstrate the new features and functionality in Windows Server “8” Beta and Windows 8 Consumer Preview in a simplified and standardized test lab environment. While these lab guides were published prior to the release of Windows Server 2012, all of them still apply and you can start testing and evaluating these features and functionalities with Windows Server 2012 RC and Windows 8 Release Preview.
Free Windows Server 2012 eBook Available Now
Download your free copy of the Microsoft Press eBook Introducing Microsoft Windows Server 2012. Written by Mitch Tulloch with the Microsoft Windows Server Team, this book explores enhancements and new features in Windows Server 2012 RC. If you are into eReaders, you can also download the ePub and Mobi versions for free.
Free SQL 2012 eBook Available Now
Download your free copy of the Microsoft Press eBook Introducing Microsoft SQL Server 2012. Written by Stacia Misner and Ross Mistry, this book explores enhancements and new capabilities, including improvements in operation, reporting, and management.
Hyper-V differencing virtual hard disks (VHDs) are an interesting and sometimes very useful subject in the virtualization world.
As a matter of fact, snapshot AVHDs are also differencing virtual hard disks. My favorite scenarios for using snapshots are stateless VDI (Virtual Desktop Infrastructure), training VMs, and short-term test labs. Differencing VHDs provide us with deployment speed and great storage savings.
If you need to know more about creating Differencing disks, please see the following article;
Hyper-V Virtual Machine (VM) Parent-Child Configuration Using Differencing Disks
http://social.technet.microsoft.com/wiki/contents/articles/hyper-v-virtual-machine-vm-parent-child-configuration-using-differencing-disks.aspx
One of my outstanding questions regarding snapshots and diff disks has been walking the chain of children and parents up to the top parent. Of course, you can use the Hyper-V Management Console’s Inspect Disk option to find the parent, but you might be looking for a method to programmatically walk the chain and show the relationship of multiple differencing disks in a tree. This made me look at a few WMI classes and write a few lines of code.
The first step is to find the AVHD (snapshot) or VHD (diff disk) that is the newest one in the directory. A simple dir (the PowerShell alias for Get-ChildItem) and Sort-Object will take care of this.
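For example, assuming the VHDs live in a single folder (the path here is hypothetical), something like this finds the newest one:

```powershell
# Newest .vhd/.avhd in the folder, by last write time (path is an example)
$newest = Get-ChildItem 'D:\Hyper-V\Test\*' -Include *.vhd, *.avhd |
    Sort-Object LastWriteTime -Descending |
    Select-Object -First 1
$newest.FullName
```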
The second step is to inspect the VHD; you can take care of that with the GetVirtualHardDiskInfo method of the Msvm_ImageManagementService WMI class under the root\virtualization namespace:
Get-WmiObject –ComputerName “Localhost” -Namespace root\virtualization -Class Msvm_ImageManagementService
The output of this method shows the parent of the virtual hard disk being inspected, in XML format. You can retrieve the parent virtual disk name and use it as the input for the next round of checking, like a recursive loop.
So, I created the following function to take care of this repetitive task;
Function Inspect-DiffVHD([string]$VHDPath, [string]$Server = ".")
{
    $HyperVNamespace = "root\virtualization"
    $ImageManagementServiceName = "Msvm_ImageManagementService"

    # Get the image management service instance on the target host
    $ImageManagementService = Get-WmiObject -ComputerName $Server `
        -Namespace $HyperVNamespace -Class $ImageManagementServiceName

    # Inspect the virtual hard disk; the Info property contains XML
    $result = $ImageManagementService.GetVirtualHardDiskInfo($VHDPath)
    $VHDInfo = [xml]$result.Info

    # The fifth property holds the parent disk path (empty for a root VHD)
    $VHDInfo.INSTANCE.PROPERTY[4].VALUE
}
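With that in place, a small loop can walk the chain up to the top parent. This is a sketch that assumes Inspect-DiffVHD (above) returns the parent path, or nothing once a root VHD is reached:

```powershell
Function Walk-VHDChain([string]$VHDPath, [string]$Server = ".")
{
    while ($VHDPath)
    {
        Write-Host $VHDPath
        # Inspect-DiffVHD (defined above) returns the parent disk path;
        # an empty result ends the loop at the root VHD
        $VHDPath = Inspect-DiffVHD -VHDPath $VHDPath -Server $Server
    }
}
```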
The attached sample PowerShell script generates the following output when pointed to a folder containing differencing VHDs (.vhd) or snapshots (.avhd). (The sample script assumes the original child/parent relationship hasn’t been altered.)
Sample output;
Enter Path: D:\Hyper-V\Test
D:\hyper-v\test\test_72A34A5A-0B86-4858-9663-CCE3AD9828A2.avhd
D:\Hyper-V\test\test_9330B5ED-248D-442D-9705-F7A320F7AC62.avhd
D:\Hyper-V\test\test_F6BC8CA6-BC70-4D02-9D6D-5E3C7CAB9D6E.avhd
D:\Hyper-V\test\test_84D8C8BC-1723-49DB-92D8-CEC1752ADC3F.avhd
D:\Hyper-V\test\test.vhd
Please note that I didn’t get a chance to test my sample script against a multiple-tree snapshot scenario, but you should be able to handle that with a simple modification.
Hope you find this post helpful. So long…
This question comes up frequently: how do we size the pagefile on Hyper-V host servers, and what about virtual machines?
Here’s all you should know:
Paging File Configuration within the VM
Proper paging file configuration is vital to the way DM performs
DM requires that the VM have a paging file (PF)
So how do you size your Paging File?
The old way, sizing the PF to (peak commit charge – physical memory + some buffer), doesn’t apply to DM.
To understand how we should size the PageFile (PF), let’s consider an example:
Current Commit: 100 MB
Free Buffer: 20% (this is a new Dynamic Memory Setting)
Target Commit: 125 MB
Paging File: 1MB
Commit Limit: 126 MB
Available Memory: 26 MB (Commit Limit – Current Commit)
What happens when an app allocates 50 MB of memory?
What happens when an app allocates five 10 MB memory chunks?
Guidance:
The minimum PF should be large enough to cover the memory demands of your largest process. In the case of the application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there’s no virtual memory (either RAM or PF) left to accommodate them.
The maximum PF should be (peak commit charge – maximum physical memory + some buffer).
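As a quick worked example of that formula (all numbers are hypothetical):

```powershell
# Hypothetical host: 12 GB peak commit charge, 8 GB physical RAM, 25% buffer
$peakCommitGB  = 12
$physicalGB    = 8
$maxPageFileGB = ($peakCommitGB - $physicalGB) * 1.25   # buffer applied to the difference
$maxPageFileGB   # 5 GB in this example
```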
Paging File Configuration on the host
If you have reserved enough RAM for host operations and your host is only being used for Hyper-V (plus Failover Clustering for highly available VMs), you shouldn’t need a large pagefile.
With Dynamic Memory, you have to make sure that your host reserve is set properly:
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization
Value Name: MemoryReserve
Value Type: DWORD
Value Data: (Decimal)Memory to reserve in MB
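The value can be set from PowerShell as well; the 2048 MB reserve below is only an example, not a recommendation:

```powershell
# Reserve 2 GB (2048 MB) for the parent partition (example value)
$path = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization'
New-ItemProperty -Path $path -Name 'MemoryReserve' -PropertyType DWord -Value 2048 -Force
```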
With a properly configured host reserve, if you follow the (peak commit charge – physical memory + some buffer) formula on the host, you’ll find that you barely need a page file. A pagefile size of min = a few gigabytes and max = 2 x min should be sufficient.
Also, there are two new performance counter objects available on the Hyper-V host:
Hyper-V Dynamic Memory Balancer
Hyper-V Dynamic Memory VM
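You can enumerate the counters in these two objects with Get-Counter:

```powershell
# List the Dynamic Memory counter paths exposed on the host
Get-Counter -ListSet 'Hyper-V Dynamic Memory*' |
    ForEach-Object { $_.Paths }
```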
Hope this helps! so long…
In this post, we're going to look at an issue that happened while I was teaching a Hyper-V R2 class a few weeks back.
Students were experimenting with Hyper-V role installation options using the Server Manager console, DISM.exe, ocsetup.exe, and the ServerManager PowerShell module.
After a reboot, we noticed that one of the Windows Server 2008 R2 SP1 Full edition hosts (not Core) didn't have the Hyper-V Management Console listed under Administrative Tools. In addition, there was no Hyper-V role listed under the Server Manager console's Roles!
When we tried to select the Hyper-V role in the Server Manager Add Roles Wizard, the following error message surprised us:
Hyper-V cannot be installed
The processor on this computer is not compatible with Hyper-V. To install this role, the processor must have a supported version of hardware-assisted virtualization, and that feature must be turned on in the BIOS.
We verified that the hardware-assisted virtualization feature and hardware DEP were enabled in the server's BIOS.
Bcdedit.exe showed that hypervisorlaunchtype was set to Auto.
%windir%\logs\CBS.log showed the Hyper-V role as installed and didn't show any errors.
A dism.exe feature query (dism.exe /online /get-features) showed:
Feature Name : Microsoft-Hyper-V
State : Enabled
So, comparing with a working Hyper-V R2 (Full edition) host, we found that the following feature was not installed here:
Microsoft-Hyper-V-Management-Clients
So, it turned out that he'd used the following command to install the Hyper-V role without installing the management console feature.
This feature is normally installed on Hyper-V Server Full edition when using the Server Manager role installation wizard, while it needs to be installed separately if command-line tools such as dism.exe or ocsetup.exe are used.
The command that was used was
Dism.exe /online /enable-feature /featurename:Microsoft-Hyper-V
Note: this command works fine on the Server Core edition of R2, as it doesn't require the management console and there's no Server Manager console either.
Instead, the following command should've been used on Windows Server 2008 R2 Full Edition;
Dism.exe /online /enable-feature /featurename:Microsoft-Hyper-V /featurename:Microsoft-Hyper-V-Management-Clients
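Alternatively, the ServerManager PowerShell module mentioned earlier can install both in one step; on Windows Server 2008 R2, the management console ships in the RSAT-Hyper-V feature (a sketch, assuming these feature names on your build):

```powershell
Import-Module ServerManager
# Hyper-V role plus its management console (RSAT-Hyper-V)
Add-WindowsFeature Hyper-V, RSAT-Hyper-V -Restart
```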
Hope this helps you prevent this mystery or solve it! So long...
Yesterday, while working on a new SCVMM 2008 R2 server setup, we encountered an issue when we tried to add a new two-node Hyper-V R2 host cluster to the VMM console.
Here’s how we were able to reproduce the error;
In the SCVMM 2008 R2 console, select Add a host, choose “Windows Server-based host on an Active Directory domain”, enter the name of the server, and choose Add. You will get the following error:
Hostname.Hosts.domainname.com cannot resolve with DNS.
Ensure there is network communication with the DNS server. If the problem persists, contact your network administrator.
ID: 404 Details: No such host is known
We researched the issue and found the following wiki page, which did not resolve this particular problem (that article addresses a similar error message in disjointed namespaces):
http://social.technet.microsoft.com/wiki/contents/articles/a-few-notes-on-system-center-virtual-machine-manager-2008-and-disjointed-namespaces.aspx
Here’s the TechNet explanation of this error message;
SCVMM 2008 R2 Command-Line Interface Error Messages
http://technet.microsoft.com/en-us/library/dd548296.aspx
404
NSLookup queries against all node names and the failover cluster name proved our DNS was working fine.
The next step was to check the networking, and we found that the default gateway on the SCVMM server wasn't configured properly. Setting it to the correct gateway resolved the issue.
I’ve also seen other networking problems on the SCVMM server result in the same error message.
So, here are few troubleshooting steps that might help resolve this;
1- Domain functional level must be at least Windows Server 2003.
2- DNS entries should exist for the SCVMM server, the systems that will be added as nodes, and the failover cluster names, if any.
3- Check the network configuration on the SCVMM server, such as the default gateway and DNS server IP addresses. Also, if there are multiple NICs on the server, try adding the node while other non-production NICs are disabled.
4- While adding the node to SCVMM, check the “Skip Active Directory name verification” checkbox on the Select Host Servers page where the host name is entered.
So long….
If you are using Hyper-V as your virtualization solution, you might be familiar with the process of expanding and compacting VHDs (virtual hard disks). A few days back, when I was trying to compact one of my VM’s dynamic VHDs (guest OS: Windows Server 2008 R2), I encountered the following error;
[Window Title]
Edit Virtual Hard Disk Wizard
[Error Message]
The server encountered an error trying to edit the virtual disk.
[Content]
'The system failed to compact
'D:\HYPER-V\VM001\Virtual Hard Disks\VM001-Disk1.vhd'. Error Code: The requested operation could not be completed due to a file system limitation.
After researching this issue, it turned out that I had some Volume Shadow Copy (VSS) snapshots inside the guest operating system and needed to remove those first.
Normally I would use the vssadmin.exe delete shadows /all command to do so, but this time I received the following error;
Error: Snapshots were found, but they were outside your allowed context. Try removing them with the backup application which created them.
In this case, I had to use the DiskShadow.exe tool, which is part of Windows Server 2008 (R2), to delete all the existing VSS snapshots. The steps are;
1- Start the VM
2- Logon with a local administrator account
3- Run the following command from an elevated command prompt:
DiskShadow.exe
DISKSHADOW>Delete Shadows All
This successfully removed all the VSS snapshots.
4- Shut down your VM and try compacting the VHD again
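The snapshot cleanup in step 3 can also be scripted, so you don't need the interactive DISKSHADOW prompt. A minimal sketch (the script file path is an assumption); run it from an elevated prompt inside the guest:

```powershell
# Write a one-line DiskShadow script and run it in scripted mode (/s)
Set-Content -Path C:\Temp\delete-shadows.txt -Value 'DELETE SHADOWS ALL'
diskshadow.exe /s C:\Temp\delete-shadows.txt
```
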
More Information;
http://technet.microsoft.com/en-us/library/cc772172(WS.10).aspx
http://technet.microsoft.com/en-us/library/ee221016(WS.10).aspx
http://blogs.technet.com/b/josebda/archive/2007/11/30/diskshadow-the-new-in-box-vss-requester-in-windows-server-2008.aspx
Vssadmin delete shadows
http://technet.microsoft.com/en-us/library/cc788026(WS.10).aspx
Hi, Mark here today. As you might know, with the release of Server 2008 R2, Server Manager offered a Best Practices Analyzer (BPA) for a few roles/features, such as Web Server (IIS).
The following screen shot shows the Best Practices Analyzer for IIS on Windows Server 2008 R2
Today I’m going to talk about Hyper-V R2 BPA that has been released recently through KB977238
After you install this package (no reboot required), the Hyper-V Best Practices Analyzer will be added to the Hyper-V role page in the Server Manager console. You can now select the Scan This Role option, run the BPA on the server, and review the report.
A couple of questions you might ask: how do you run the BPA remotely, or on Windows Server 2008 R2 Core?
You can use Server Manager remotely and connect to another server. If the target server already has the Hyper-V BPA package installed, you can scan that machine from inside the Server Manager console and review the report. This applies to both Full and Core editions of Server 2008 R2, as long as the Remote Server Manager firewall exception is already enabled on the target server.
Note: on Server Core R2 please run SCONFIG.EXE
Now you have another option: using PowerShell to run the BPA and save the results.
For detailed information, please see Get-Help about_BestPractices
Here are the steps that need to be taken;
The output of Get-BPAResult can be exported as HTML or CSV, and the reports can be saved to an alternative path as you wish.
Using this method, you can write your PowerShell Script to run BPA on multiple Hyper-V R2 server (regardless of Core or Full edition) and save the reports on a centralized location.
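As a rough sketch of what such a script might look like (the model ID, UNC report path, and positional parameter usage are assumptions; check Get-BpaModel on your own server for the exact model ID):

```powershell
# Load the Server Manager and Best Practices cmdlets
Import-Module ServerManager
Import-Module BestPractices

# Model ID is an assumption; list installed models with Get-BpaModel
$modelId = 'Microsoft/Windows/Hyper-V'

# Run the scan, then export the results to a central share as CSV
Invoke-BpaModel $modelId
Get-BpaResult $modelId |
    Export-Csv -Path "\\FileServer\BPAReports\$env:COMPUTERNAME-HyperV.csv" -NoTypeInformation
```
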
You may check out the following sample PowerShell script I put together; it runs the Hyper-V R2 BPA and saves the report in CSV format;
Hyper-V_BPA_Sample.zip
The same concept could be expanded for other roles on Windows Server 2008 R2 that already have BPAs available for them.
In order to check the existing BPA for Windows Server 2008 R2 please refer to the following TechNet page;
TechNet Windows Server 2008 R2 Best Practices Analyzers
http://technet.microsoft.com/en-us/library/dd392255(WS.10).aspx
Hope you enjoyed this post. So long….
Hi, Mark here again. I was working on the W2K8 R2 RD Connection Broker today (this is an exotic topic, as people don’t get lots of queries on it) and thought I would share a few notes with you, just in case you run into the same situation and wonder how things should work.
With Windows Server 2008 R2 failover clustering, Connection Broker is officially listed as one of the resources that could be made highly available (HA) within Microsoft Failover Cluster Manager (MSFC).
Now, here are the recommended steps for creating a clustered Connection Broker on Windows Server 2008 R2;
If you are a fan of scripting and PowerShell, you could use the PowerShell Script from the TechNet Scriptcenter that could help with this process;
PowerShell Script: Manage RD Connection Broker Cluster (Create, Add nodes)
http://gallery.technet.microsoft.com/ScriptCenter/en-us/5aca700b-7b70-4d06-9ae3-2cfb4fc027ca
Something you might notice is that if you fail over this highly available resource, it doesn’t show the connection settings on the other node immediately.
The RD Connection Broker uses an embedded database to store the session related information and passes those to RDSHost servers when they query the Connection Broker service.
On Windows Server 2008 R2, this DB is local to each broker node. Thus, each Broker node keeps its own local DB and only 1 Broker node can be active in the cluster at any given time.
When the Broker service fails over and the second node becomes the active node, all the data is repopulated in the local database: the primary Connection Broker (active node) rebuilds user session state by querying session information from the RDSH servers (formerly known as Terminal Servers) and VDI agents. This can cause a minimal amount of downtime (while session information is being rebuilt on the new active node) after the failover.
While there are no official numbers for the downtime after an HA Connection Broker failover, hundreds of RDVHosts and RDSHs should be able to rejoin within a couple of minutes.
And please keep in mind that the Connection Broker isn’t a proxy server, so all existing sessions will continue to work fine during the database rebuild process.
Here’s the TechNet Step-by-Step Guide;
Deploying Remote Desktop Connection Broker with High Availability Step-by-Step Guide
http://technet.microsoft.com/en-us/library/ff686148(WS.10).aspx
You might have wondered about how to setup Failover Cluster inside Hyper-V Virtual Machines. As you know, the major element of a cluster is the shared storage piece. Inside Hyper-V VMs, iSCSI is the only supported method of providing shared disks over virtual NICs.
Here are detailed steps on how to create Highly Available resources inside Hyper-V Windows Server 2008 R2 VM;
1. VM Preparation: Provision the following VMs on your Hyper-V server;
Update (04/05/2011): The Microsoft product group just released Microsoft iSCSI Target Software 3.3 for public download;
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=45105d7f-8c6c-4666-a305-c8189062a0d0
http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx
If you have access to MSDN or TechNet Plus subscription downloads, you may use Windows Server 2008 Storage Server and install Microsoft iSCSI Software Target on it (the evaluation version from your MSDN or TechNet Plus subscription downloads, or the production version from your Windows Server 2008 Storage Server's OEM vendor).
Note: default Administrator password for Windows Server 2008 Storage Server is in the Windows Storage Server 2008 Release Notes (WSS2008_RELNOTES.DOC), which can be found in the “Tools” ISO file. Please see more information about Microsoft Storage Server 2008 at the end of this post.
In my demo case, I'm using iSCSI target software that can be installed on Windows Server 2008 R2. So, I'm using my WS08 R2 Domain Controller as my storage server as well. This saves me one VM.
You can use any other iSCSI target solution if it’s available to you. There are a few free iSCSI target products, such as StarWind, that can also be used for this setup.
2. Virtual Network: create two virtual network switches in Hyper-V management console. One will be serving cluster Heartbeat (Private Network switch) and the other one will be for cluster communication and iSCSI traffic (Internal Virtual Switch) .
Note: Optionally you can add more virtual network switches based on your needs. If you need to access resources outside your Hyper-V server then you’ll need to create an External Virtual Switch and use that for cluster communication.
3. Configure Shared Storage: Create a Virtual Hard Disk (VHD) on your Hyper-V host machine and attach it to your iSCSI Storage VM Server via the virtual SCSI controller.
Note: all the iSCSI shared disks will be created inside this VHD, so you need to size the VHD based on the number of shared disks and their size you will need in your cluster.
In my example, I’ve created a 20GB VHD inside Hyper-V Management console (New Hard Disk…) and attached it to my storage server VM's SCSI controller.
Now, from inside the storage VM, we'll need to access Microsoft iSCSI software target (under Administrative Tools)
Start creating your shared disks (including Witness disk) under Devices.
On the next step you can add the target node(s) or just move next and add it later. We'll come back and add it later.
You can see that our iSCSI disk has been provisioned but isn't being accessed by anyone yet.
Let's repeat the last step and provision more iSCSI virtual disks for your shared disks on the cluster. (I named them SharedDisk0X)
Configure the firewall on both ends for iSCSI communication;
On the client side configure firewall exception for iSCSI Service; (you should be just fine by checking the check box for Domain communications)
On the Storage Server side configure the firewall exception on iSCSI Service and Default iSCSI Target Port;
Configure iSCSI Initiator on each node;
Select iSCSI Initiator from Administrative Tools. If this is the first time using the iSCSI initiator on that machine, Windows will prompt you to start the iSCSI service.
The initiator name (screen shot below) will be used inside iSCSI Target Software to establish the connection.
Note: If you're using the Administrator account (with the same password on domain and local) for your demo, make sure you log on to each node using the DomainName\administrator user name. On Windows Server 2008 and above, using just the administrator account will fall back to the local administrator account, not the domain one. You can always use the whoami command to check the logged-on account in your session. This can be a gotcha if your local and domain Administrator accounts share the same password!
On the Storage Server, start the iSCSI software target console.
Under iSCSI Targets,
Add iSCSI hard disks to each Target;
Note: you can also create only one Target (i.e. Name: W2K8_Clu), add Virtual Disks to it and then logon from the iSCSI initiators from each node using the same iQN name.
Now on each cluster node, go back and try to connect to the iSCSI software Target on the storage server, using iSCSI Initiator. Quick Connect… should work fine for you.
After this, all those iSCSI disks will show up in each node's Disk Management console (diskmgmt.msc)
While in there, we need to right-click each disk and bring it online, then initialize all the disks and assign them drive letters. This has to be done on all nodes that are going to be joined to the cluster.
Note: As you know, in production environment you need to have a dedicated network for your iSCSI traffic but for this demo I've used the same virtual network for the iSCSI traffic.
4. Cluster Setup: Now that all the nodes are joined to the Domain and can see all the iSCSI shared disks, we need to install Failover Clustering Feature (From Server Manager Console) on each node. (No Reboot required)
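If you prefer PowerShell over the Server Manager console, the feature install can be sketched like this; run it in an elevated PowerShell prompt on each node:

```powershell
# Load the Server Manager cmdlets and add the Failover Clustering feature
Import-Module ServerManager
Add-WindowsFeature Failover-Clustering   # no reboot required
```
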
After installing the feature, you’ll want to use the most awesome capability of the new failover clustering in Windows Server 2008 and Windows Server 2008 R2: the Cluster Validation Wizard.
Go to Failover Cluster Manager console from Administrative Tools.
Launch Validate a Configuration… wizard;
When the validation completes, you’ll need to review the report and address any possible issues in there.
The following report looks like a successful one;
In my demo, I got 3 warnings that are related to not having a default gateway assigned to the Ethernet adaptors. (I wouldn't worry about this since my cluster won't need public network access)
With the validation report looking good, we can form a cluster with these nodes using the Create a Cluster… option on the Failover Cluster Manager console’s main page;
Assign a Name and IP address for the cluster itself
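The validation and cluster-creation steps above can also be done from PowerShell with the FailoverClusters module; a sketch, where the node names, cluster name, and IP address are placeholders for your own values:

```powershell
Import-Module FailoverClusters

# Run the same checks as the Validate a Configuration wizard
Test-Cluster -Node Node1, Node2

# Form the cluster with a name and static IP address (placeholders)
New-Cluster -Name VMCluster01 -Node Node1, Node2 -StaticAddress 192.168.1.50
```
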
Congratulations, the cluster has been configured inside your virtual machines.
Now you can proceed with creating highly available resources under Services and applications, such as MSDTC, SQL, and all the others.
The following screen shot was taken after MSDTC or SQL resources were installed;
And you can see the available storage and used ones in the Failover cluster console;
So Long…!
More Resources;
Microsoft iSCSI Software Target
http://technet.microsoft.com/en-us/library/dd573326(WS.10).aspx
Configuring Windows Firewall for iSCSI Software Target
http://technet.microsoft.com/en-us/library/dd573342(WS.10).aspx
iSCSI Software Target 3.2 FAQ
http://blogs.technet.com/storageserver/archive/2009/08/14/iscsi-software-target-3-2-faq.aspx
Known Issues and Updates for Windows Storage Server 2008
http://technet.microsoft.com/en-us/library/dd904408(WS.10).aspx
Blog: Windows Storage Server 2008 with the Microsoft iSCSI Software Target 3.2 available to MSDN and TechNet Plus subscribers
<http://blogs.technet.com/josebda/archive/2009/05/16/windows-storage-server-2008-with-the-microsoft-iscsi-software-target-3-2-available-to-msdn-and-technet-plus-subscribers.aspx>
Failover Cluster Step-by-Step Guide: Configuring a Two-Node File Server Failover Cluster
http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx
Guide to Creating and Configuring a Server Cluster Under Windows Server 2003
http://www.microsoft.com/downloads/details.aspx?familyid=96F76ED7-9634-4300-9159-89638F4B4EF7&displaylang=en
If you have ever used SCVMM (System Center Virtual Machine Manager) 2008 R2, you might have noticed the different CPU types under the Processor properties of each virtual machine or inside hardware profiles. You might wonder what these CPU types are and how they come into play, given that virtual processor scheduling is actually handled by Hyper-V and this option can’t be found anywhere in the Hyper-V management console.
Well, this option is there as a hint for SCVMM (when it doesn’t know anything about the virtual machine itself) to provide a workload description for intelligent placement. For instance, if you create a new VM you can say that you expect it to be the equivalent of a 2-proc 3.33 GHz Xeon with 40% utilization. Placement will make sure that this VM is deployed on a host that can provide enough CPU resources. Once the VM is deployed and running, VMM will use historical performance information about this VM during migrations. Also note that the CPU utilization parameter in the “Customize Ratings” dialog is normalized to the VM processor type.
Here’s another example;
Let’s say you deploy a VM from the Library and in the placement page you customize the expected CPU load and set it to 100%, then depending on the VM’s CPU Type, the placement rating will change.
100% Expected CPU load (in placement page)
If you change the CPU Type few times, you’ll see that placement will be modeled based on the load on that CPU Type. As you increase the speed of the processor, the host rating will go down.
Please note that your rating might be a little bit different based on your actual physical processor on your host machine. The test here was done on a single dual core 2.66GHz processor.
In this case, We have modeled 2 x 3.6GHz HT VM on a host with a single dual core processor at 2.66GHz
Hope this helped you understand this feature! So long….
Hi! If you’ve used Hyper-V, you might have seen a CPU Usage column inside Hyper-V Management console for each Virtual Machine (VM).
A friend of mine thought this was the CPU usage of that VM; however, when looking at the CPU utilization inside the VM, the numbers didn’t match. So, what is this number, and how does the Hyper-V Management console calculate it?
The percentage value displayed in the CPU Usage column for a VM is computed from 3 different sources below;
(1) CPU used when in the virtual machine context.
(2) CPU used by the worker process for the virtual machine (for device emulation and what not). This can be seen in vmwp.exe on the parent partition. (Note: each running VM has its own vmwp.exe process with the VM’s GUID in its startup command line; please see the end of this post for more information.**)
(3) CPU used in the hypervisor switching back and forth and doing other miscellaneous tasks.
The sum of (1), (2), and (3) is the CPU Usage percentage displayed in Hyper-V Manager.
Item (1) shows up in the MMC snap-in. Item (2) shows up in the parent partition’s Task Manager.
Item (3) is tracked in Hyper-V performance counters on the parent partition, but you can’t track it per VM, only system-wide.
Note that it is possible (and quite common) to have virtual machines running the processors hot - but task manager in the parent partition thinks nothing is happening. This is just one side effect of the fact that with Hyper-V the parent partition is really just a special virtual machine. Partitions in Hyper-V are isolated and aren’t aware of other partitions.
There may also be cases where the CPU usage displayed in Windows Task Manager inside the VM guest OS is negligible, and the CPU usage displayed in Windows Task Manager on the host machine is also negligible, yet the CPU Usage displayed in the Hyper-V Manager is significant (25% or more). This can only be due to item (3): CPU usage resulting from hypervisor activity that neither the parent nor the child partitions have any means of capturing in their own Task Manager counters.
Customers should use Performance Monitor (Perfmon) on the Parent Partition to gather accurate Hyper-V Virtual and Logical Processor information on the VMs in Child Partitions as well as the Parent Partition.
Select the Hyper-V Hypervisor… counters - look at the Instances of the VM in question.
So, If you’re planning to monitor your VMs, you need to do this using Hyper-V counters on the Parent Partition otherwise you’ll be missing some vital data.
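For example, the per-VM hypervisor counters can be sampled from an elevated PowerShell prompt on the parent partition (counter set names as they appear on Windows Server 2008 R2):

```powershell
# Per-VM virtual processor usage, as measured by the hypervisor
Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Total Run Time'

# Physical (logical) processor usage across the whole host
Get-Counter '\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time'
```
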
**Here’s how to find VMWP.exe process related to specific running VM; (You probably need to add Command Line column from View menu in Task Manager on the Parent Partition)
This GUID matches VM’s configuration file (.xml and other folders associated with that VM);
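If you'd rather script the lookup than eyeball Task Manager, here is a sketch that maps VM GUIDs to their vmwp.exe worker processes (using the root\virtualization WMI namespace as it exists on Windows Server 2008 R2):

```powershell
# List running VMs with their GUIDs (the Name property holds the GUID)
Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem |
    Where-Object { $_.Caption -eq 'Virtual Machine' } |
    Select-Object ElementName, Name

# List worker processes; the matching GUID appears in each command line
Get-WmiObject Win32_Process -Filter "Name='vmwp.exe'" |
    Select-Object ProcessId, CommandLine
```
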
Hope you find this post helpful. So long!
Hi, today I’d like to write about Hyper-V and Virtual Disks.
As you know, Windows Server 2008 R2 was released with a bunch of enhancements in Hyper-V. One of the improvements is a performance increase for dynamically expanding virtual hard disks (VHDs).
Previously, on Windows Server 2008, we didn’t recommend using dynamically expanding VHDs in production, as their performance wasn’t quite comparable to fixed-size VHDs.
Now that the Hyper-V product group has done such an awesome job of bringing dynamic VHDs on par with fixed-size VHDs, we get the question of whether dynamic VHDs can be used in production.
Both virtual disk types are now supported in production. You may use either virtual hard disks based on your environment.
If you are trying to save space, use dynamically expanding virtual disks (VHDs).
You can always expand the maximum size of Dynamic and Fixed VHDs (Using Edit Disk option in Hyper-V management Console or PowerShell script) if you need to.
The only important note here is to be careful about not oversubscribing the LUNs. This must be monitored closely if dynamic VHDs are being expanded frequently.
What’s New in Windows Server 2008 R2 Hyper-V?
http://www.microsoft.com/windowsserver2008/en/us/hyperv-r2.aspx
Also, please take time to review the following whitepaper;
Microsoft Hyper-V vs. VMware ESX & vSphere Operations & Management Cost Analysis White paper
http://download.microsoft.com/download/1/F/8/1F8BD4EF-31CC-4059-9A65-4A51B3B4BC98/Hyper-V-vs-VMware-ESX-and-vShpere-WP.pdf
So long!
Hello! This morning, I logged into my Hyper-V/SCVMM 2008 server to deploy a new Windows Server 2008 VM to my Hyper-V server, and as soon as I launched the SCVMM 2008 console I kept getting the following error message;
Upon clicking the OK button, the error message popped up again and again. Hm…
I opened Hyper-V management console. No Errors!
Tried to connect to a VM and VMConnect (Virtual Machine’s console) Window gave me the same error code. (0x8009030e)
Looking at the err.exe tool the error translates to;
C:\>err.exe 0x8009030e
SEC_E_NO_CREDENTIALS winerror.h # No credentials are available in the security package
Searching for the same Error and Hyper-V on bing.com I got couple of hits on social.technet.microsoft.com that were pointing to unchecking the check box for “Use default credentials automatically (no prompt)” in Hyper-V Settings…
I unchecked the check box mentioned above and hyper-v started prompting for credentials every time I was trying to connect to a Virtual Machine’s console.
Starting the SCVMM 2008 console pops up the user credentials window. Not much fun to keep getting prompted for your credentials, right?
Did some more digging to see what’s exactly causing this;
Since 0x8009030e points to certificate authentication in other scenarios where this error code appears, I believed Hyper-V was using a certificate to authenticate instead of NTLM or my Kerberos ticket.
Well, if you’ve ever logged on remotely to a Windows Server 2008 machine on a network that requires a smart card certificate in addition to your credentials, you know that if you don’t provide your credentials within a few seconds, Windows Server 2008 will read and use the certificate on your smart card to log you on. Smart, isn’t it?
To make the story short, I found out that I was logged on to my server using the smart card certificate instead of my normal credentials. Logged out and logged back in by providing my domain username/password and everything looks good!
If you see this issue, you’ll need to log out, log back in with your domain credentials and you won’t see the error anymore. Disconnecting from your Remote Desktop Session and reconnecting back by providing your domain username and password won’t fix the issue since you’ve already logged on with your smart card certificate and just reconnecting to that existing session. You’ll need a logoff and then a brand new session to fix this problem. Hope this helps solve this puzzle for you too! So long…
I’m sure the subject of extending volumes and drives due to lack of space or other reasons is still a good subject for IT Professionals who work with Windows Servers or Clients.
There are public documents such as KB325590 or the following TechNet articles that talk about how to extend the volume on your Windows Server 2003 or Windows XP Client;
Extend a simple or spanned volume
http://technet.microsoft.com/en-us/library/cc776741.aspx
The pain came when you wanted to extend the system/boot drive. In order to do that on Windows Server 2003 or Windows XP, you had to boot off Windows PE and use diskpart (instructions in KB325590).
Now, with Windows Vista and Windows Server 2008, extending the system/boot partition is as easy as it used to be for non-system partitions.
Here are the steps;
1- Start Disk Management Console: (diskmgmt.msc);
** As you can see we have 1.00GB unallocated space on Disk0 and would like to add that to the C: drive
2- Right click on the C: partition and select Extend Volume…
3- Press Next on the Extend Volumes Wizard page;
4- You’ll be able to see the Total Volume Size, Maximum available space for extension and the amount of space you’d like to extend the system drive all in megabytes. By default the wizard will select the maximum amount that drive C: can be extended but you have the option to change that number; (please note: the number is in megabytes). Press Next after you are ready to extend the partition;
5- Confirm this action by pressing Finish button;
6- As you can see in the screen shot below, the system drive has been extended with no need to reboot or taking the system offline.
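The same extension can be scripted with diskpart if you'd rather not click through the wizard; a sketch (the drive letter is an assumption), run from an elevated prompt:

```powershell
# Feed a diskpart script over stdin; omitting a size on "extend"
# grows the volume into all adjacent unallocated space
@"
select volume C
extend
"@ | diskpart.exe
```
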
Amazing, isn’t it?
I’ve seen the question of how big the page file has to be on a server several times. As you may know, the rule of thumb is to make it 1.5 times the physical RAM, but sometimes the system will have a different recommended value for your page file size.
What if you would like to be sure about the page file size and prove it with numbers and your calculator?
The best method to determine the correct page file size is to use the following performance counters in perfmon;
commit limit
commit charge
(available both in perfmon and task manager )
Commit limit includes memory and page file, so we can easily see how much page file is needed by observing these counters over time.
Under full application load, you need to increase the page file until the commit limit is 20%-30% greater than the commit charge (or reduce it if your minimum page file size was higher).
Since the correct measurement depends on the application mix and a number of other factors, you can see why the 1.5x RAM rule is just an estimate: it assumes nothing about what is actually running on the server.
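As a sketch of reading those counters and checking the 20%-30% headroom rule from PowerShell (the threshold is the one described above):

```powershell
# Commit charge (Committed Bytes) and commit limit, from the Memory counter set
$charge = (Get-Counter '\Memory\Committed Bytes').CounterSamples[0].CookedValue
$limit  = (Get-Counter '\Memory\Commit Limit').CounterSamples[0].CookedValue

# Headroom = how much larger the limit is than the charge; aim for 0.20-0.30 under full load
$headroom = ($limit - $charge) / $charge
"Commit charge: {0:N0} MB  Commit limit: {1:N0} MB  Headroom: {2:P0}" -f `
    ($charge/1MB), ($limit/1MB), $headroom
```
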
We also have an article that shows you a different method on Windows Server 2003 x64;
889654 How to determine the appropriate page file size for 64-bit versions of Windows Server 2003 or Windows XP
http://support.microsoft.com/kb/889654
Either method should assist you with calculating the correct page file size on your server.
Until next time. Cheers!
I just found that Mark Russinovich has a very deep dive on this topic on his blog;
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
I’m a Microsoft Data Center Specialist with the Microsoft US State and Local Government (SLG) team. My goal is to address challenging issues within SLG customer datacenters and their journey toward Private and Public Cloud adoption. Assisting customers in getting a deeper understanding of managed and consolidated datacenters powered by Windows Server 2012, Windows Server 2012 Hyper-V, Remote Desktop, VDI, and the System Center 2012 Suite, along with Microsoft Identity Management solutions (FIM, UAG, TMG), is my main area of focus. My customers also appreciate the partnership between my team and many Microsoft hardware, software, and services partners to deliver smooth and rich engagements during the different phases of their IT enablement projects.
Before this role, I was a Microsoft Sr. Premier Field Engineer (PFE) and Sr. Support Escalation Engineer for a few years. During that time I assisted many Microsoft enterprise customers with their proactive and reactive matters within their mission-critical environments.