It's all about Microsoft Infrastructure...

Here you can find information about Virtualization, System Center, Unified Messaging, Directory Services, Deployment, MS Certification, and much more...

February 2011

  • WINDOWS SERVER 2008/2008 R2 MAX MEMORY

    ===============================================

    Windows Server 2008 Maximum Memory Limits

    ===============================================

    Windows Server 2008 x86

    · Windows Server 2008 STD Edition supports a maximum of 4 GB of memory

    · Windows Server 2008 EE/DCE support a maximum of 64 GB of memory

    o Hyper-V is an x64-only role and thus isn't available in this SKU

    Windows Server 2008 x64 with or without Hyper-V Role Enabled

    · Windows Server 2008 STD Edition supports a maximum of 32 GB of memory

    · Windows Server 2008 EE/DCE support a maximum of 1 TB of memory

    ===============================================

    Windows Server 2008 R2 Maximum Memory Limits

    ===============================================

    Windows Server 2008 R2 x86

    · There is no such product. Windows Server 2008 R2 is x64 only.

    Windows Server 2008 R2 x64 with Hyper-V Role Enabled

    · Windows Server 2008 R2 STD supports a maximum of 32 GB of memory

    · Windows Server 2008 R2 EE/DCE support a maximum of 1 TB of memory

    Windows Server 2008 R2 x64 without Hyper-V Role Enabled

    · Windows Server 2008 R2 STD supports a maximum of 32 GB of memory

    · Windows Server 2008 R2 EE/DCE support a maximum of 2 TB of memory
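
    For quick scripted sanity checks, the limits above can be captured in a small lookup table. Here is a minimal sketch in Python; the table keys and the fits() helper are purely illustrative and not part of any Microsoft tooling:

        # Physical-memory limits listed above, in GB, keyed by
        # (version, architecture, edition). Illustrative only.
        MAX_MEMORY_GB = {
            ("2008", "x86", "STD"): 4,
            ("2008", "x86", "EE/DCE"): 64,
            ("2008", "x64", "STD"): 32,
            ("2008", "x64", "EE/DCE"): 1024,       # 1 TB
            ("2008 R2", "x64", "STD"): 32,
            ("2008 R2", "x64", "EE/DCE"): 2048,    # 2 TB (1 TB with Hyper-V; see FAQ below)
        }

        def fits(version, arch, edition, installed_gb, hyper_v=False):
            """Return True if the installed memory is within the OS limit."""
            limit = MAX_MEMORY_GB[(version, arch, edition)]
            if hyper_v and version == "2008 R2" and edition == "EE/DCE":
                limit = 1024                       # Hyper-V supported limit is 1 TB
            return installed_gb <= limit

        print(fits("2008 R2", "x64", "EE/DCE", 1536))                 # True  (no Hyper-V)
        print(fits("2008 R2", "x64", "EE/DCE", 1536, hyper_v=True))   # False (above 1 TB)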

     

    ===============================

    FAQ

    ===============================

    Q: Why is the maximum memory different for Windows Server 2008 R2 depending on whether the Hyper-V role is enabled or not?

    A: System availability. Servers with 512 GB, 1 TB or more of memory don't just grow on trees. :) Thus, Hyper-V has a supported limit of 1 TB.

    ------------------------------------------------------------------------------------------------------------

    Q: Will the maximum amount of memory of Windows Server with Hyper-V be raised?

    A: Keep in mind that whether it's the maximum amount of memory supported or the number of processors supported, scalability is an ongoing development activity at Microsoft.

    Considering the limited number of servers that support such large memory footprints and the high cost associated with populating a server with >1 TB, we believe we have an excellent solution with Windows Server 2008 R2 Hyper-V today.

  • Best Practices for Virtual Memory and Page File Settings for a Hyper-V Host

     


    Windows® Internals, Fifth Edition

    http://www.microsoft.com/learning/en/us/Book.aspx?ID=12069&locale=en-us

     

    How to determine the appropriate page file size for 64-bit versions of Windows

    http://support.microsoft.com/?id=889654

     

    Pushing the Limits of Windows: Virtual Memory

    http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx

     

    The Case of the Enormous Page File

    http://blogs.technet.com/b/clinth/archive/2009/09/03/the-case-of-the-enormous-page-file.aspx
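
    As a quick illustration of the sizing guidance discussed in the articles above (in particular the peak commit charge rule of thumb from "Pushing the Limits of Windows: Virtual Memory"), here is a minimal sketch in Python. The numbers and the headroom multiplier are assumptions for illustration, not recommendations:

        # Sketch of the "peak commit charge" rule of thumb: the page file only
        # needs to back the part of the peak commit charge that physical RAM
        # cannot hold, plus some headroom. All figures below are hypothetical.

        def suggested_page_file_gb(physical_ram_gb, peak_commit_gb, headroom=1.25):
            """Suggest a page file size in GB.

            physical_ram_gb -- installed RAM on the Hyper-V host
            peak_commit_gb  -- observed peak commit charge (e.g. the Committed
                               Bytes counter in Performance Monitor over a busy period)
            headroom        -- safety multiplier on the shortfall (assumption)
            """
            shortfall = max(peak_commit_gb - physical_ram_gb, 0)
            # Keep at least a small page file so the OS can still write minidumps.
            return max(round(shortfall * headroom, 1), 1.0)

        # A host with 64 GB RAM and a 20 GB peak commit charge (mostly the parent
        # partition, since VM memory is generally not paged by the host):
        print(suggested_page_file_gb(64, 20))   # -> 1.0 (a minimal page file suffices)
        print(suggested_page_file_gb(16, 24))   # -> 10.0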

  • Hyper-V Cloud Fast Track Architecture Brief


    Private Cloud Concepts

    Resiliency over Redundancy Mindset – This concept moves the high-availability responsibility up the stack from hardware to software. This allows costly physical redundancy within the facilities and hardware to be removed and increases availability by reducing the impact of component and system failures.

    Homogenization and Standardization – by homogenizing and standardizing wherever possible within the environment, greater economies of scale can be achieved. This approach also enables the “drive predictability” principle and reduces cost and complexity across the board.

    Resource Pooling – the pooling of compute, network, and storage that creates the fabric that hosts virtualized workloads.

    Virtualization – the abstraction of hardware components into logical entities. I know readers are of course familiar with server virtualization, but this concept speaks more broadly to the benefits of virtualization across the entire resource pool. This may occur differently with each hardware component (server, network, storage), but the benefits are generally the same, including reduced or no downtime during resource management tasks, enhanced portability, simplified management of resources, and the ability to share resources.

    Fabric Management – a level of abstraction above virtualization that provides orchestrated and intelligent management of the fabric (i.e., datacenters and resource pools). Fabric Management differs from traditional management in that it understands the relationships and interdependencies between the resources.

    Elasticity – enables the perception of infinite capacity by allowing IT services to rapidly scale up and back down based on utilization and consumer demand.

    Partitioning of Shared Resources – While a fully shared infrastructure may provide the greatest optimization of cost and agility, there may be regulatory requirements, business drivers, or issues of multi-tenancy that require various levels of resource partitioning.

    Cost Transparency – provides insight into the real costs of IT services enabling the business to make informed and fair decisions when investing in new IT applications or driving cost-reduction efforts.

    Hyper-V Cloud Fast Track Architecture Overview

    With the principles and concepts defined, we took a holistic approach to the program, thinking first about everything that would be ideal to achieve an integrated private cloud and paring down from there to what now forms the first iteration of the offering. As stated, future versions will address more and more of the desired end state.

    Scale Unit

    A Scale Unit represents a standardized unit of capacity that is added to a Resource Pool. There are two types of Scale Unit: a Compute Scale Unit, which includes servers and network, and a Storage Scale Unit, which includes storage components. Scale Units increase capacity in a predictable, consistent way, allow standardized designs, and enable capacity modeling.
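
    As a tiny illustration of the capacity modeling that standardized Scale Units make possible, here is a back-of-the-envelope sketch in Python; the per-unit VM count and the target are made-up figures, not Fast Track requirements:

        import math

        # Assumed capacity contributed by one Compute Scale Unit
        # (e.g. 8 hosts x 20 VMs each) -- illustrative only.
        vms_per_compute_scale_unit = 160
        target_vms = 1000                 # assumed capacity target

        units_needed = math.ceil(target_vms / vms_per_compute_scale_unit)
        print(units_needed)               # -> 7 Compute Scale Units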

    Server Hardware

    The server hardware itself is more complex than it might seem. First, what's the ideal form factor: rack-mount or blade? While we certainly have data that shows blades have many advantages for virtualized environments, they can also add cost and complexity for smaller deployments (4-12 servers). This is one decision we provided guidance and experience on, but we ultimately left it to the OEM to decide when blades make sense for their markets. Most OEMs have both blade and rack-mount options and will be offering both through this program.

    For CPU, all servers will have a minimum of two-socket, quad-core processors, yielding 8 logical processors (LPs). Of course, many of the servers in the program will have far more than 8 LPs; 12-24 will likely be most common, as that's the current price/performance sweet spot. (As an aside, Hyper-V supports up to 64 LPs.) The reason for the 8 LP minimum is that although the supported ratio of virtual processors to logical processors is 8:1, real-world experience with production server workloads has shown more conservative average ratios. Based on that, we concluded 8 LPs should be the minimum capacity starting point.
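
    As a rough illustration of how the VP:LP ratio bounds host capacity, here is a minimal sketch in Python; the 8:1 figure is the supported maximum mentioned above, while the 4:1 "conservative" ratio is an assumption used purely for illustration:

        def max_virtual_processors(sockets, cores_per_socket, vp_per_lp):
            """Upper bound on virtual processors a host can offer at a given VP:LP ratio."""
            logical_processors = sockets * cores_per_socket   # ignoring hyper-threading
            return logical_processors * vp_per_lp

        # Minimum Fast Track host: 2 sockets x quad-core = 8 LPs
        print(max_virtual_processors(2, 4, vp_per_lp=8))   # 64 VPs at the supported 8:1 limit
        print(max_virtual_processors(2, 4, vp_per_lp=4))   # 32 VPs at a more conservative 4:1
        print(max_virtual_processors(2, 6, vp_per_lp=4))   # 48 VPs on a 2 x 6-core (12 LP) host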

    Storage

    Storage is where, for me anyway, things begin to get really interesting. There are just so many exciting storage options for virtualized environments these days. Of course, it's also a design challenge: which features are the highest priority and worth the investment? We again took a holistic approach and then allowed the partner to inject their special sauce and deep domain expertise. Here's the list of SAN storage features we targeted for common architecture criteria:

    o High Availability

    o Performance Predictability

    o Storage Networking

    o Storage Protocols

    o Data De-duplication

    o Thin Provisioning

    o Volume Cloning

    o Volume Snapshots

    o Storage Tiering

    o Automation

    One of the really cool advantages of this program is that it allows multiple best-of-breed private cloud solutions to emerge, taking advantage of each vendor's strengths. You can only find this in a multi-vendor, multi-participant program.

    On the Hyper-V side, we provided common best practices for Cluster Shared Volume configuration, sizing, and management, and also considered such things as MPIO, security, I/O segregation, and more.

    Network

    Networking presents several challenges for Private Cloud architectures. Again, here we find a myriad of choices from the OEMs and are able to leverage the best qualities of each where it makes sense. However, this is an area where we sometimes find IT happening for IT's sake (i.e., complex, advanced networking implementations because they are possible, not because they are needed to support the architecture). We need to look at the available products and features and only introduce complexity when it's justified, since, as we all know, increased complexity often brings with it increased risk. Some of those items include:

    o Networking Infrastructure (Core, Distribution, and Access Switching)

    o Performance Predictability and Hyper-V R2 Enhancements (VMQ, TCP Checksum Offload, etc.)

    o Hyper-V Host Network Configuration

    o High Availability

    o 802.1q VLAN Trunks

    o NIC Teaming

    NIC Teaming in particular is one of those items that can be tricky to get right, since there are different vendor solutions, each with potentially different features and configuration options. It's therefore an example of a design element that benefits greatly from the Hyper-V Cloud Fast Track program, which takes the guesswork out of NIC Teaming by providing a best-practice configuration tested and validated by both Microsoft and the OEM.

    Private Cloud Management

    Let's face it: cloud computing places a huge dependency on management and operations. Even the most well-designed infrastructure will not achieve the benefits promised by cloud computing without some radical systems management evolution.

    Again leveraging the best-of-breed advantage, a key element of this architecture is that the management solution may be a mix of vendor software. Notice I said may. That's because a vendor who is a big player in the systems management market may have chosen to use their own software for some layers of the management stack, while others may have chosen to use an exclusively Microsoft solution consisting of System Center, Forefront, Data Protection Manager, etc. I will not attempt to cover each possible OEM-specific solution. Rather, I just want to point out that we recognize the need for and benefit of OEMs being able to provide their own elements of the management stack, such as Backup and the Self-Service Portal. Some elements are, of course, essential to the Microsoft virtualization layer itself and are not replaceable, such as System Center Virtual Machine Manager and Operations Manager. Here is a summary of the management stack included:

    o Microsoft SQL Server

    o Microsoft System Center Virtual Machine Manager and Operations Manager

    o Maintenance and Patch Management

    o Backup and Disaster Recovery

    o Automation

    o Tenant / User Self-Service Portal

    o Storage, Network and Server Management

    o Server Out of Band Management Configuration

    The management layer is critical; it is what transforms the datacenter into a dynamic, scalable, and agile resource, enabling massive capex and opex reductions, improved operational efficiency, and increased business agility. Any one of these components by itself is great, but it's the combination of them all that qualifies the offering as a private cloud solution.

     Reference Architecture Whitepaper

  • Hyper-V DYNAMIC MEMORY INFO

    Part 1: Dynamic Memory announcement. This post announces the new Hyper-V Dynamic Memory feature in Hyper-V R2 SP1. It also discusses the explicit requirements that we received from our customers. http://blogs.technet.com/virtualization/archive/2010/03/18/dynamic-memory-coming-to-hyper-v.aspx

    Part 2: Capacity Planning from a Memory Standpoint. This post discusses the difficulties behind the deceptively simple question, "How much memory does this workload require?" It examines the issues our customers face with regard to memory capacity planning and why. http://blogs.technet.com/virtualization/archive/2010/03/25/dynamic-memory-coming-to-hyper-v-part-2.aspx

    Part 3: Page Sharing. A deep dive into the importance of the TLB, large memory pages, how page sharing works, SuperFetch, and more. If you're looking for the reasons why we haven't invested in Page Sharing, this is the post to read. http://blogs.technet.com/virtualization/archive/2010/04/07/dynamic-memory-coming-to-hyper-v-part-3.aspx

    Part 4: Page Sharing Follow-Up. Answers questions about Page Sharing, ASLR, and other factors affecting its efficacy. http://blogs.technet.com/b/virtualization/archive/2010/04/21/dynamic-memory-coming-to-hyper-v-part-4.aspx

    Part 5: Second Level Paging. What it is, why you really want to avoid it in a virtualized environment, and the performance impact it can have. http://blogs.technet.com/b/virtualization/archive/2010/05/20/dynamic-memory-coming-to-hyper-v-part-5.aspx

    Part 6: Hyper-V Dynamic Memory. What it is, what each of the per-virtual machine settings does in depth, and how this all ties together with our customer requirements (a small illustrative sketch follows at the end of this list). http://blogs.technet.com/b/virtualization/archive/2010/07/12/dynamic-memory-coming-to-hyper-v-part-6.aspx

    Hyper-V Dynamic Memory Density. An in-depth test of Hyper-V Dynamic Memory, easily achieving 40% greater density. http://blogs.technet.com/b/virtualization/archive/2010/11/08/hyper-v-dynamic-memory-test-for-vdi-density.aspx
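
    Pulling together the per-VM settings described in Part 6 above, here is a minimal sketch of how I read their interaction: Hyper-V tries to give a VM its current memory demand plus a buffer percentage, never dropping below Startup RAM and never exceeding Maximum RAM. Treat the exact formula and the clamping as assumptions for illustration, not a specification:

        def target_memory_mb(demand_mb, startup_mb, maximum_mb, buffer_percent=20):
            """Illustrative target memory for a VM under Dynamic Memory (assumption)."""
            wanted = demand_mb * (1 + buffer_percent / 100.0)   # demand + buffer headroom
            return int(min(max(wanted, startup_mb), maximum_mb))

        # A VM configured with 512 MB Startup RAM, 4 GB Maximum RAM and a 20% buffer:
        print(target_memory_mb(300,  512, 4096))   # -> 512  (never below Startup RAM)
        print(target_memory_mb(2048, 512, 4096))   # -> 2457 (demand plus 20% buffer)
        print(target_memory_mb(6000, 512, 4096))   # -> 4096 (capped at Maximum RAM)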