February, 2014

  • Windows Dynamic Cache Service Updated

    Good morning AskPerf! This is a quick blog to inform you that you no longer have to contact Microsoft Technical Support to obtain the Dynamic Cache Service for Windows Server 2008 R2. It is now freely available to download from the following link:

    Microsoft Windows Dynamic Cache Service

    image

    Additional Resources

    -Blake

  • Configuring Windows Failover Cluster Networks

    In this blog, I will discuss the overall general practices to be considered when configuring networks in Failover Clusters.

    Avoid single points of failure:

    Identifying single points of failure and configuring redundancy at every point in the network is very critical to maintain high availability. Redundancy can be maintained by using multiple independent networks or by using NIC Teaming. Several ways of achieving this would be:

    · Use multiple physical network adapter cards. Multiple ports of the same multiport card, or a shared backplane, introduce a single point of failure.

    · Connect network adapter cards to different independent switches. Multiple VLANs patched into a single switch introduce a single point of failure.

    · Use NIC teaming for non-redundant networks, such as client connection, intra-cluster communication, CSV, and Live Migration. If the currently active network card fails, communication moves over to the other card in the team.

    · Using different types of network adapters prevents an issue with a single NIC driver from affecting connectivity across all network adapters at the same time.

    · Ensure upstream network resiliency to eliminate a single point of failure between multiple networks.

    · The Failover Clustering network driver detects networks on the system by their logical subnet. Assigning more than one network adapter per subnet (including IPv6 link-local) is not recommended, as the Cluster will use only one of the cards and ignore the others.
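    A quick way to see how the Cluster has grouped adapters by logical subnet is the Get-ClusterNetwork cmdlet (the property list shown is a minimal sketch; run it on any cluster node):

    Get-ClusterNetwork | Format-Table Name, Address, AddressMask, Role

    Each cluster network in the output corresponds to one logical subnet; if two adapters on a node share a subnet, only one of them is used by the Cluster.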

    Network Binding Order:

    The Adapters and Bindings tab lists the connections in the order in which they are accessed by network services. The order of these connections reflects the order in which generic TCP/IP calls/packets are sent on to the wire.

    How to change the binding order of network adapters

    1. Click Start, click Run, type ncpa.cpl, and then click OK. You can see the available connections in the LAN and High-Speed Internet section of the Network Connections window.
    2. On the Advanced menu, click Advanced Settings, and then click the Adapters and Bindings tab.
    3. In the Connections area, select the connection that you want to move higher in the list. Use the arrow buttons to move the connection. As a general rule, the card that talks to the network (domain connectivity, routing to other networks, etc.) should be the first bound card (top of the list).

    Cluster nodes are multi-homed systems.  Network priority affects DNS Client for outbound network connectivity.  Network adapters used for client communication should be at the top in the binding order.  Non-routed networks can be placed at lower priority.  In Windows Server 2012/2012R2, the Cluster Network Driver (NETFT.SYS) adapter is automatically placed at the bottom in the binding order list.

    Cluster Network Roles:

    Cluster networks are automatically created for all logical subnets connected to all nodes in the Cluster.  Each network adapter card connected to a common subnet will be listed in Failover Cluster Manager.  Cluster networks can be configured for different uses.

     

    Name                                           Value   Description
    Disabled for Cluster Communication             0       No cluster communication of any kind is sent over this network
    Enabled for Cluster Communication only         1       Internal cluster communication and CSV traffic can be sent over this network
    Enabled for client and cluster communication   3       Cluster IP Address resources can be created on this network for clients to connect to; internal and CSV traffic can also be sent over it

    Automatic configuration

    The network roles are automatically configured during cluster creation, as follows:

    Networks used for iSCSI communication with iSCSI software initiators are automatically disabled for cluster communication (Do not allow cluster network communication on this network).

    Networks configured without a default gateway are automatically enabled for cluster communication only (Allow cluster network communication on this network).

    Networks configured with a default gateway are automatically enabled for client and cluster communication (Allow cluster network communication on this network, Allow clients to connect through this network).

    Manual configuration

    Though the cluster networks are automatically configured while creating the cluster as described above, they can also be manually configured based on the requirements in the environment.

    To modify the network settings for a Failover Cluster:

    · Open Failover Cluster Manager

    · Expand Networks.

    · Right-click the network that you want to modify settings for, and then click Properties.

    · If needed, change the name of the network.

    · Select one of the following options:

    o Allow cluster network communication on this network.  If you select this option and you want the network to be used by the nodes only (not clients), clear Allow clients to connect through this network. Otherwise, make sure it is selected.

    o Do not allow cluster network communication on this network.  Select this option if you are using a network only for iSCSI (communication with storage) or only for backup. (These are among the most common reasons for selecting this option.)

    Cluster network roles can also be changed using the Get-ClusterNetwork PowerShell cmdlet.

    For example:

    (Get-ClusterNetwork "Cluster Network 1").Role = 3

    This configures “Cluster Network 1” to be enabled for client and cluster communication.

    Configuring Quality of Service Policies in Windows 2012/2012R2:

    To achieve Quality of Service, we can either use multiple network cards or create QoS policies with multiple VLANs.

    Configuring QoS prioritization is recommended on all cluster deployments. Heartbeats and intra-cluster communication are sensitive to latency, and configuring a QoS Priority Flow Control policy helps reduce that latency.

    An example of setting cluster heartbeating and intra-node communication to be the highest priority traffic would be:

    New-NetQosPolicy "Cluster" -Cluster -Priority 6
    New-NetQosPolicy "SMB" -SMB -Priority 5
    New-NetQosPolicy "Live Migration" -LiveMigration -Priority 3

    Note:

    Available priority values are 0 – 6

    Priority Flow Control must be enabled on all the nodes in the cluster and on the physical network switch

    Undefined traffic is priority 0
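    Assuming the Data Center Bridging (DCB) feature is installed and the switch supports it, Priority Flow Control for the cluster priority used in the example above can then be enabled on each node; a minimal sketch:

    Enable-NetQosFlowControl -Priority 6

    The priority value here matches the one assigned to the "Cluster" policy; the same switch-side priority must be enabled on the physical network switch as noted above.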

    Bandwidth Allocation:

    It is recommended to configure a Relative Minimum Bandwidth SMB policy on CSV deployments.

    Example of setting a minimum bandwidth policy of 30% for cluster traffic, 20% for Live Migration, and 50% for SMB traffic of the total bandwidth:

    New-NetQosPolicy "Cluster" -Cluster -MinBandwidthWeightAction 30
    New-NetQosPolicy "Live Migration" -LiveMigration -MinBandwidthWeightAction 20
    New-NetQosPolicy "SMB" -SMB -MinBandwidthWeightAction 50

    Multi-Subnet Clusters:

    Failover Clustering supports having nodes reside in different IP Subnets. Cluster Shared Volumes (CSV) in Windows Server 2012 as well as SQL Server 2012 support multi-subnet Clusters.

    Typically, the general rule has been to have one network per role it provides. Cluster networks should be configured with the following in mind.

    Client connectivity

    Client connectivity is used by the applications running on the cluster nodes to communicate with client systems. This network can be configured with statically assigned IPv4 or IPv6 addresses, or with DHCP-assigned IP addresses. APIPA addresses should not be used, as networks using them are ignored by the Cluster; the Cluster Virtual Network Adapter is on that address scheme. IPv6 stateless address autoconfiguration can be used, but keep in mind that DHCPv6 addresses are not supported for clustered IP address resources. These networks are also typically routable networks with a default gateway.

    CSV Network for Storage I/O Redirection.

    You would want this network if the cluster is used as a Hyper-V Cluster with highly available virtual machines. This network is used for the NTFS metadata updates to a Cluster Shared Volume (CSV) file system. These updates should be lightweight and infrequent unless there are communication-related events on the path to the storage.

    In the case of CSV I/O redirection, latency on this network can slow down storage I/O performance, so Quality of Service is important for this network. If a storage path fails between any node and the storage, all I/O is redirected over the network to a node that still has connectivity so it can commit the data. All I/O is forwarded, via SMB, over the network, which is why network bandwidth is important.

    Client for Microsoft Networks and File and Printer Sharing for Microsoft Networks need to be enabled to support Server Message Block (SMB), which is required for CSV. Configuring this network not to register with DNS is recommended, as CSV does not use name resolution. The CSV network uses NTLM authentication for its connectivity between the nodes.

    CSV communication takes advantage of SMB 3.0 features such as SMB Multichannel and SMB Direct to stream traffic across multiple networks, delivering improved I/O performance for its I/O redirection.

    By default, the cluster automatically chooses the NIC to be used for CSV. For manual configuration, refer to the following article.

    Designating a Preferred Network for Cluster Shared Volumes Communication
    http://technet.microsoft.com/en-us/library/ff182335(WS.10).aspx

    This network should be configured for cluster communications.
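    The article above designates a preferred CSV network by lowering its cluster network metric; a hedged sketch (the network name is an example, and the cluster prefers the network with the lowest Metric value for CSV communication):

    (Get-ClusterNetwork "CSV Network").Metric = 900

    You can check the current automatically assigned metrics with Get-ClusterNetwork | Format-Table Name, Metric, AutoMetric.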

    Live Migration Network

    As with the CSV network, you would want this network if the cluster is used as a Hyper-V Cluster with highly available virtual machines. The Live Migration network is used for live migrating virtual machines between cluster nodes. Configure this network as a cluster-communications-only network. By default, the Cluster automatically chooses the NIC for Live Migration.

    Multiple networks can be selected for live migration depending on the workload and performance requirements. Live Migration takes advantage of the SMB 3.0 feature SMB Direct to allow migrations of virtual machines to be done at a much quicker pace.

    ISCSI Network:

    If you are using iSCSI storage and using the network to reach it, it is recommended that the iSCSI storage fabric have a dedicated and isolated network. This network should be disabled for cluster communications so that it is dedicated to storage-related traffic only.

    This prevents intra-cluster communication as well as CSV traffic from flowing over the same network. During the creation of the cluster, iSCSI traffic is detected and the network is disabled for cluster use. This network should be set to lowest in the binding order.

    As with all storage networks, you should configure multiple cards to allow redundancy with MPIO. Using the Microsoft-provided in-box teaming drivers, network card teaming is now supported with iSCSI in Windows Server 2012.
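    For the MPIO side, once the Multipath I/O feature is installed, iSCSI devices can be claimed automatically on each node; a minimal sketch:

    Enable-MSDSMAutomaticClaim -BusType iSCSI

    This tells the Microsoft DSM to claim all iSCSI-attached devices for MPIO so that the multiple paths configured above are actually used.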

    Heartbeat communication and Intra-Cluster communication

    Heartbeat communication is used for health monitoring between the nodes to detect node failures. Heartbeat packets are lightweight (134 bytes) and sensitive to latency. If the cluster heartbeats are delayed by a saturated NIC, blocked by firewalls, etc., the affected node could be removed from cluster membership.

    Intra-cluster communication updates the cluster database across all the nodes whenever the cluster state changes. Clustering is a distributed, synchronous system, so latency on this network can slow down cluster state changes.

    IPv6 is the preferred protocol for this network, as it is more reliable and faster than IPv4. IPv6 link-local (fe80::) addresses work for this network.

    In Windows Clusters, Heartbeat thresholds are increased as a default for Hyper-V Clusters.

    The default value changes when the first VM is clustered.

     

    Cluster Property        Default   Hyper-V Default
    SameSubnetThreshold     5         10
    CrossSubnetThreshold    5         20

     

    Generally, heartbeat thresholds are modified after cluster creation. If there is a requirement to increase the threshold values, this can be done during production hours and takes effect immediately.
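    For example, to raise the thresholds to the Hyper-V defaults shown in the table above (the change takes effect immediately, no restart required):

    (Get-Cluster).SameSubnetThreshold = 10
    (Get-Cluster).CrossSubnetThreshold = 20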

    Configuring full mesh heartbeat

    The Cluster Virtual Network Driver (NetFT.SYS) builds routes between the nodes based on the Cluster property PlumbAllCrossSubnetRoutes.

    Value Description

    0     Do not attempt to find cross subnet routes if local routes are found

    1     Always attempt to find routes that cross subnets

    2     Disable the cluster service from attempting to discover cross subnet routes after a node successfully joins

    To make a change to this property, you can use the command:

    (Get-Cluster).PlumbAllCrossSubnetRoutes = 1

    References for configuring Networks for Exchange 2013 and SQL 2012 on Failover Clusters.

    Exchange server 2013 Configuring DAG Networks.
    http://technet.microsoft.com/en-us/library/dd298065(v=exchg.150).aspx

    Before Installing Failover Clustering for SQL Server 2012
    http://msdn.microsoft.com/en-us/library/ms189910.aspx

    At TechEd North America 2013, Elden Christensen (Failover Cluster Program Manager) presented a session entitled Failover Cluster Networking Essentials that covers many of these configurations, best practices, etc.

    Failover Cluster Networking Essentials
    http://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/MDC-B337#fbid=ZpvM0cLRvyX

    S. Jayaprakash
    Senior Support Escalation Engineer
    Microsoft India GTSC

  • Network Isolation of Windows Modern Apps – How Apps work with Akamai Internet Caching Servers in Windows 8/8.1

    Good morning AskPerf! Mario Liu here from the Windows 8/8.1 Support Team. Today I’d like to discuss a Windows Modern App connectivity issue I’ve worked on over the past few months.

    We have several Enterprise customers who have deployed Windows 8 and 8.1 in their environments. However, after joining the machines to the domain, we discovered that numerous Windows Modern Apps could not display their contents correctly. For example, if we launched the Weather App, we saw the message “This page failed to load”:

    clip_image001

    On the surface, this looks like a network connectivity issue; however, other network-related programs ran fine. A network trace was captured, but we could not find any dropped packets.

    After further troubleshooting, we found that the same Windows 8/8.1 machines worked if they were outside of their corporate network. This gave us a clue that something inside their environment was causing the issue. At one point, we noticed that if the firewall was disabled, the Modern App ran fine. Obviously, turning the firewall off is not recommended.

    After many additional troubleshooting steps, we discovered that the Akamai Internet Caching Servers had a hand in this issue. The Windows Firewall service blocked the traffic to the Akamai devices, but why would it do this? Why would a Modern App have network connectivity issues with Akamai devices?

    To answer this question, we first need to understand what an Akamai Internet Caching Server is. It is a widely used content management solution worldwide. Its main purpose is to save web contents (mostly static) as a cache on the server, so the next time a client machine tries to reach the same website, it actually reaches the cache on the local Akamai server instead of taking a longer time to reach the real contents on the remote web server. This saves a lot of network traffic and speeds up content delivery. So, the Akamai Internet Caching Server plays a similar role to an Internet proxy server, but from a web content management perspective: it delivers the contents to Windows clients on behalf of the real/live web server.

    OK, so now that we understand Akamai’s role, let’s switch back to the Windows Modern App. In Windows, there is an important feature called Windows Store App Network Isolation. Modern Apps are network isolated depending on the network capabilities the app developer chooses. An enterprise private network is protected and only available to those apps that declare the privateNetworkClientServer capability in the app manifest. There are very few Modern Apps in the Microsoft Store that work on a private network. Even though users can very easily download and install any kind of app, an app that can only talk to the public network cannot talk to the enterprise network.

    In most cases, what is a private/public network is automatically detected, which means the Modern App knows to reach the public internet even when a Windows 8/8.1 machine is running in a corporate (private) environment. But what happens if we bring the Akamai Internet Caching Server into this scene? Instead of letting the Modern App reach the real contents on the remote web server via the public internet, the Modern App is forced to talk to the Akamai device, which is on the corporate (private) network. Remember, very few Modern Apps are approved to work on a private network. If an app without that approval tries to reach the private network, the firewall will block it.

    Here is a way you can check to see if your machine is hitting this problem:

    1. Enable WFP (Windows Filtering Platform) auditing by running the following command via elevated command prompt:

    auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable

    2. Reproduce the issue.

    3. Go to Event Viewer -> Windows Logs -> Security. You should see some events with ID 5152, which means Filtering Platform Packet Drop.

    4. Look at one of these events and you should find information similar to the following. Note: 192.168.0.10 is the Akamai server’s address in this example.

    The Windows Filtering Platform has blocked a packet.

    Application Information:

    Process ID: 2712

    Application Name: \device\harddiskvolume2\windows\system32\wwahost.exe

    Network Information:

    Direction: Outbound

    Source Address: 192.168.0.30

    Source Port: 50571

    Destination Address: 192.168.0.10

    Destination Port: 80

    Protocol: 6

    Filter Information:

    Filter Run-Time ID: 71620

    Layer Name: Connect

    Layer Run-Time ID: 48

    Wwahost.exe is the Modern App host process. As you can see, WFP blocked this traffic with Filter ID 71620 because it tried to reach a private IP address. This filter is called Block Outbound Default Rule. If you want to know how to determine a filter’s name, run netsh.exe wfp capture start from an elevated command prompt while you reproduce the issue. This generates a wfpdiag.cab file, and inside that cab you will find wfpdiag.xml. Open this XML file to correlate the filter name and ID.
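    Put together, the diagnostic steps above look like this from an elevated command prompt (the final command is an assumption-free mirror of step 1 that turns the packet-drop auditing back off so the Security log does not fill up):

    auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable
    netsh.exe wfp capture start
    REM reproduce the issue, then:
    netsh.exe wfp capture stop
    auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:disable /failure:disable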

    So, at this point, we know the cached internet traffic is being blocked because the cache is hosted in the private network. The Akamai internet cache is essentially internet content and needs to be deployed as such.

    There are several solutions, but they can essentially be classified into two types:

    Type 1: Either the Akamai servers must be deployed outside of the private network, or

    Type 2: The Akamai servers must be declared explicitly as proxies that live in the private network

    Microsoft’s recommendation is Type 1, as the enterprise cannot control much of what the Akamai devices do. So, putting the devices just outside of the corporate firewall, on a different address space or outside of the enterprise subnet, would appear to be the better solution. However, if this requires a fundamental change to your infrastructure, a Type 2 solution will also work.

    Below we cover some solutions of how the Akamai Servers can be deployed.

    Solution 1 – Akamai servers on their own IP space.

    Akamai servers are deployed outside of the DMZ and given an address space different from the one provided to the company. In this scenario the ISP might participate, or the company can simply not provision the subnet given to Akamai anywhere else.

    Solution 2 – Akamai servers are outside of AD sites and Subnets.

    Akamai servers are given an IP address space that is not included in the AD Sites and Subnets configuration of Active Directory. Arguably, including an external device that does not belong to the corporation in AD Sites and Subnets is itself a security issue, as it grants access the enterprise does not want to provide. Hence this approach is not only an appropriate deployment of network isolation, but also an appropriate deployment of AD in general and a good tightening of the network.

    Solution 3 – Akamai servers are removed manually from the private network by using the Network isolation Group Policy controls to override the private network.

    The Network Isolation Group Policy controls allow Group Policy admins to add private network subnets. They also allow admins to completely override the private network definitions derived from AD Sites and Subnets, and hence carve out any IP space that shouldn’t be considered part of the enterprise’s private network.

    Let’s assume your corporate IP address space ranges from 192.168.0.1 to 192.168.0.255 and you have two Akamai servers with IP 192.168.0.10 and 192.168.0.11.

    Here is the example to implement this solution:

    1. Open the Group Policy Management snap-in (gpmc.msc) and edit the Default Domain Policy.
    2. From the Group Policy Management Editor, expand Computer Configuration, expand Policies, expand Administrative Templates, expand Network, and click Network Isolation.
    3. In the right pane, double-click Private network ranges for apps.
    4. In the Private network ranges for apps dialog box, click Enabled. In the Private subnets text box, type the private subnets for your intranet, and remove Akamai servers.

    In this example the values are 192.168.0.1-192.168.0.9; 192.168.0.12-192.168.0.255 in the Private subnets text box. Note we have excluded 192.168.0.10 and 192.168.0.11 since they are Akamai’s.

    5. Double-click Subnet definitions are authoritative, and click Enabled.

    Solution 4 – Akamai servers are configured as proxies by using the Network Isolation Group Policy controls to declare them as proxies.

    The Network Isolation GP controls also allow admins to add internet proxies. They can also be used to completely override the proxy discovery mechanisms provided by default with a preferred proxy definition.

    The enterprise admin can easily use the network isolation GP internet proxy controls to add the Akamai servers as internet content proxies.

    Here is the example to implement this solution:

    1. Open the Group Policy Management snap-in (gpmc.msc) and edit the Default Domain Policy.
    2. From the Group Policy Management Editor, expand Computer Configuration, expand Policies, expand Administrative Templates, expand Network, and click Network Isolation.
    3. Double-click Internet proxy servers for apps. Click Enabled, and then in the Domain Proxies text box, type the IP addresses of your Internet proxy servers, separated by semicolons. In this example the values are 192.168.0.10; 192.168.0.11
    4. Double-click Proxy definitions are authoritative. Ensure that the Not Configured default value is selected to add these proxies to other discovered http proxies.

    For more information about the feature of Network Isolation of Windows Modern Apps, please refer to:

    Isolating Windows Store Apps on Your Network

    How to set network capabilities

    -Mario

  • LIVE: Microsoft Virtual Federal Forum

    For the first time, the 2014 Virtual Federal Forum will be streamed LIVE from the Reagan Center in Washington, DC! This online digital experience will be completely hybrid, focused on Real Impact for a Lean and Modern Federal Government, and will showcase innovative and cost-effective solutions to unleash greater capabilities within agencies while helping simplify and modernize processes. The Virtual Federal Forum is designed exclusively for the Federal government community, providing the opportunity to hear from Microsoft executives, thought leaders, and strategic partners. Virtual attendees get bonus material not available to the in-person audience and have the ability to download related session materials, take live polls and surveys, share ideas, and ask questions of experts and executives through Chat, Twitter, and Q&A sessions.

    Date: Tuesday, March 4th
    Time: 8am EST – 2:30pm EST

    Agenda Highlights:

    · Keynote speaker The Honorable Tom Ridge, former Secretary of the U.S. Department of Homeland Security, will be speaking on The Global Mission to Secure Cyberspace and will be available to virtual attendees for live Q&A.

    · Hear from top government agencies in a special customer panel: Veterans Affairs, the U.S. Navy, and the Environmental Protection Agency discuss real-world lessons learned and technology innovations.

    · Learn how to leverage a next generation mobile workforce for a 21st Century government with live demos and best practices from Jane Boulware, VP US Windows.

    · The Senior Director of Microsoft’s Institute for Advanced Technology in Governments talks about “Rethinking cyber defense… lessons learned from Microsoft’s own experience.”

    Other featured speakers include:

    Greg Myers - Vice President: Federal
    Walter Puschner - Vice President: User Experience IT
    Vaughn Noga - Acting Principal Deputy Assistant Administrator for Environmental Information
    Captain Scott Langley - USN, MCSE CEH CISSP, Commander Navy Reserve Forces Command N6/CTO
    Maureen Ellenberger - Veterans Relationship Management, Program Manager, Veteran Affairs
    Dave Aucsmith - Microsoft’s Institute for Advanced Technology in Government

    Register for this event using this unique URL.

    Thank you in advance and we look forward to your participation at the Virtual Federal Forum!

  • What's New in Defrag for Windows Server 2012/2012R2

    Hello everyone, I am Palash Acharyya, Support Escalation Engineer with the Microsoft Platforms Core team. In the past decade, we have come a long way from Windows Server 2003 all the way to Windows Server 2012 R2. There has been a sea change in the Operating System as a whole, and we have added or modified a lot of features. One of these is disk defragmentation, which I am going to talk about today.

    Do I need Defrag?

    To put it short and simple, defragmentation is a housekeeping job done at the file system level to curtail the constant growth of file system fragmentation. We have come a long way from the Windows XP/2003 days, when there was a GUI for defragmentation that showed the fragmentation on a volume. Disk fragmentation is a slow, ongoing phenomenon that occurs when a file is broken up into pieces to fit on a volume. Since files are constantly being written, deleted, resized, moved from one location to another, etc., fragmentation is a natural occurrence. When a file is spread out over several locations, it takes longer for a disk to complete a read or a write IO. So, from a disk IO standpoint, is defrag necessary for better throughput? For example, when Windows Server Backup (or a 3rd-party backup solution that uses VSS) is used, it needs a Minimum Differential Area, or MinDiffArea, to prepare a snapshot. You can query this area using the vssadmin list shadowstorage command (for details, read here). The catch is, there needs to be a chunk of contiguous free space without file fragmentation. The minimum requirement for the MinDiffArea is mentioned in the article quoted above.
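    For reference, querying the shadow copy storage associations mentioned above looks like this from an elevated command prompt (the volumes reported depend entirely on your configuration):

    vssadmin list shadowstorage

    The output lists, per volume, the used, allocated, and maximum shadow copy storage space, which is where the MinDiffArea requirement comes into play.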

    Q. So, do I need to run defrag on my machine?

    A. You can use the Sysinternals tool Contig.exe to check the fragmentation level before deciding to defrag. The tool is available here. Below is an example of the output:

    clip_image001

    There are 13,486 fragments in all, so should I be bothered about it? Well, the answer is NO. Why?

    Here you can clearly observe that I have 96GB of free space on the C: volume, of which the Largest free space block (or largest contiguous free space block) is approximately 54GB. So, my data is not scattered across the entire disk. In other words, my disk is not getting hammered during read/write IO operations, and running defrag here would be useless.
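    To run a check like this yourself, Contig can analyze fragmentation with its -a switch without changing anything on disk; the file path below is just an example:

    contig.exe -a C:\pagefile.sys

    The -a option reports how many fragments the given file occupies, rather than defragmenting it.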

    Q. Again, coming back to the previous question, is defrag at all necessary?

    A. Well, it depends. We can only justify the need for defrag if fragmentation is causing serious performance issues; otherwise it is not worth the cost. We need to understand that file fragmentation is not always, or solely, responsible for poor performance. For example, there could be many files on a volume that are fragmented but are not accessed frequently. The only way to tell whether you need defrag is to measure your workload and see whether fragmentation is causing slower and slower performance over time. If you determine that fragmentation is a problem, then you need to think about how effective it will be to run defrag for an extended period of time, i.e., the overall cost of running it. The word cost figuratively means the amount of effort that goes into running this task from the Operating System’s standpoint. In other words, any improvement that you see comes at the cost of defrag running for a period of time and possibly interrupting production workloads. Regarding the situation where you need to run defrag to unblock backup, the prescriptive guidance is to run defrag if a user encounters this error due to unavailability of contiguous free space. I wouldn’t recommend running defrag on a schedule unless the backups are critical and consistently failing for the same reason.

    A look at Windows Server 2008R2:

    Defragmentation ran in Windows Server 2008/2008 R2 as a once-weekly scheduled task. This is how it looked:

    clip_image003

    The default options:

    clip_image005

    What changed in Server 2012:

    There have been some major enhancements and modifications in the functionality of defrag in Windows server 2012. The additional parameters which have been added are:

    /D     Perform traditional defrag (this is the default).

    /K     Perform slab consolidation on the specified volumes.

    /L     Perform retrim on the specified volumes.

    /O     Perform the proper optimization for each media type.

    The default scheduled task in Windows Server 2008 R2 was defrag.exe –c, which does a defragmentation of all volumes. This was volume specific, meaning the physical aspects of the storage (whether it’s a SCSI disk, a RAID, a thin-provisioned LUN, etc.) are not taken into consideration. This has significantly changed in Windows Server 2012. Here the default scheduled task is defrag.exe –c –h –k, which performs a slab consolidation of all the volumes at normal priority (the default being low). To explain slab consolidation, you need to understand the storage optimization enhancements in Windows Server 2012, which have been explained in this blog.
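    To see where a volume stands before changing anything, defrag can run an analysis-only pass; for example (C: is just a sample volume):

    defrag C: /A
    defrag /C /H /K

    The first command analyzes C: and reports whether defragmentation is recommended; the second is the same invocation the default Windows Server 2012 scheduled task uses (all volumes, normal priority, slab consolidation).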

    So what does Storage Optimizer do?

    The Storage Optimizer in Windows 8/Server 2012 also takes care of maintenance activities like compacting data and compaction of file system allocation, enabling capacity reclamation on thinly provisioned disks. This is platform specific, so if your storage platform supports it, the Storage Optimizer will consolidate lightly used ‘slabs’ of storage and release the freed slabs back to your storage pool for use by other Spaces or LUNs. This activity is done on a periodic basis, i.e., without any user intervention, and completes as a scheduled task provided it is not interrupted by the user. I am not getting into storage spaces and storage pools, as that would further lengthen this topic; you can refer to TechNet’s Storage Spaces overview for details.

    This is how the Storage Optimizer looks:

    clip_image006

    This is how it looks after I click Analyze:

    clip_image007

    For thin-provisioned storage, this is how it looks:

    clip_image008

    The fragmentation percentage shown above is file-level fragmentation, NOT to be confused with storage optimization. In other words, if I click the Optimize option, it will perform the storage optimization appropriate to the media type. In Fig 5, you might observe fragmentation on volumes E: and F: (I manually created file system fragmentation there). If I manually run defrag.exe –d (traditional defrag) in addition to the default –o (perform optimization), they will not conflict with each other, because Storage Optimization and Slab Consolidation do not work at the file system level the way traditional defrag does. These options really show their potential in hybrid storage environments consisting of Storage Spaces, pools, tiered storage, etc. In brief, the default scheduled task for running defrag in Server 2012 and Server 2012 R2 does not do a traditional defrag job (defragmentation at the file system level) the way Windows Server 2008/2008 R2 did. To do a traditional defragmentation of these volumes, run defrag.exe –d, and before you do, verify whether it is actually required.
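    One way to check whether a traditional defrag is warranted at all is to run an analysis pass first (drive E: here is just an example):

    C:\> defrag E: /A /V
    C:\> defrag E: /D

    The /A switch analyzes the volume and reports fragmentation statistics without changing anything (/V adds verbose output); run the /D pass only if the analysis recommends it.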

    Q. So why did we stop the default file system defragmentation or defrag.exe -d?

    A. Simple: it no longer justified the cost and effort to run a traditional file system defragmentation as a weekly scheduled task. On storage solutions holding terabytes of data, a traditional defrag (default file system defragmentation) takes a long time and also affects the server’s overall performance.

    What changed in Server 2012 R2:

    The only addition in Windows Server 2012 R2 is the switch below:

    /G     Optimize the storage tiers on the specified volumes.

    Storage Tiers, a new feature in Windows Server 2012 R2, allow SSD and hard drive storage to be used within the same storage pool. This new switch allows optimization in a tiered layout. To read more about tiered storage and how it is implemented, please refer to these articles:

    Storage Spaces: How to configure Storage Tiers with Windows Server 2012 R2
    http://blogs.technet.com/b/askpfeplat/archive/2013/10/21/storage-spaces-how-to-configure-storage-tiers-with-windows-server-2012-r2.aspx

    What's New in Storage Spaces in Windows Server 2012 R2
    http://technet.microsoft.com/en-us/library/dn387076.aspx

    Summary:

    In brief, we need to keep these things in mind:

    1. The default scheduled task for defrag is as follows:

    Ø Windows Server 2008 R2: defrag.exe –c

    Ø Windows Server 2012: defrag.exe –c –h –k

    Ø Windows Server 2012 R2: defrag.exe –c –h –k –g

    On a client machine it will be defrag.exe –c –h –o; however, if thin-provisioned media is present, defrag will do slab consolidation as well.

    2. The command line –c –h –k (for 2012) and –c –h –k –g (for 2012 R2) in the defrag task will perform storage optimization and slab consolidation on thin-provisioned media as well. Different virtualization platforms may report things differently; for example, Hyper-V shows the Media Type as Thin Provisioned, while VMware shows it as a Hard disk drive. The fragmentation percentage shown in the defrag UI has nothing to do with slab consolidation; it refers to the file fragmentation of the volume. If you want to address file fragmentation, you must run defrag with –d (as mentioned above).

    3. If you are planning to deploy a PowerShell script to achieve the same result, the command is simple:

    PS C:\> Optimize-Volume -DriveLetter <drive letter name> -Defrag -Verbose

    Details of all PowerShell cmdlets can be found here.
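    For reference, the Optimize-Volume cmdlet exposes roughly the same operations as the defrag.exe switches discussed above. Drive E: is just an example, and parameter availability varies by OS version (–TierOptimize, for instance, requires Windows Server 2012 R2):

    PS C:\> Optimize-Volume -DriveLetter E -Analyze -Verbose            # like defrag /A
    PS C:\> Optimize-Volume -DriveLetter E -SlabConsolidate -Verbose    # like defrag /K
    PS C:\> Optimize-Volume -DriveLetter E -ReTrim -Verbose             # like defrag /L
    PS C:\> Optimize-Volume -DriveLetter E -TierOptimize -Verbose       # like defrag /G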

    That’s all for today, till next time.

    Palash Acharyya
    Support Escalation Engineer
    Microsoft Platforms Support

  • Our UK Windows Directory Services Escalation Team is Hiring – Support Escalation Engineers.

    Hi! It’s Linda Taylor here again from the Directory Services Escalation team in the UK. In this post, I want to tell you – We are hiring in the UK!! Would you like to join the UK Escalation Team and work on the most technically challenging and ...read more
  • Adding shortcuts on desktop using Group Policy Preferences in Windows 8 and Windows 8.1

    Hi All! My name is Saurabh Koshta and I am with the Core Team at Microsoft. Currently I work in the client space so supporting all aspects of Windows 8 and Windows 8.1 is my primary role. We very often get calls from customers who are evaluating ...read more
  • RAP as a Service (RaaS) from Microsoft Services Premier Support

    In this post, I’m excited to discuss a new Premier Support offering called Risk Assessment Program (RAP) as a Service (or RaaS for short).

    For those who are not familiar with RAP, it is a Microsoft Services Premier Support offering that helps prevent serious issues from occurring by analyzing the health and risks present in your current environment.

    For example: if you haven’t done a WDRAP (Windows Desktop RAP) and your end-users are suffering slow boot times, slow logon times, slow file copy, hung applications, and applications crashing, it could help! A WDRAP assesses your current environment and recommends changes which improve the Windows user experience.

    Our new RAP as a Service offering helps accelerate the process of diagnosis and reporting, using our RaaS online service.

    Q: So what is Microsoft RAP as a Service (RaaS)?
    A:  RaaS is an evolution of the Risk Assessment Program offering.

    • RaaS is a way of staying healthy, proactively.
    • It’s secure and private.
    • The data is collected remotely.
    • We analyze against best practices established by knowledge obtained from Microsoft IT, and over 20,000 customer assessments.
    • It enables you to view your results immediately.

    You can also take a look at this video describing RAP as a Service:


    Microsoft RAP as a Service


    Q:  What are the benefits of RaaS over a RAP?
    A:  The benefits are:

    • Online delivery with a Microsoft accredited engineer.
    • A modern best-practices toolset that allows you to assess your environment at any time and includes ongoing updates for a full year.
    • You get immediate on-line feedback on your environment.  Just run the straightforward toolset and you’ll garner instant insight into your environment.
    • Easily share results with your IT staff and others in your organization.
    • You can reassess your environment to track remediation and improvement progress.
    • Reduced resource overhead requirements.  There’s no need to take your people away from their other work for multiple days, nor do they need to travel to the location where the work is being performed.
    • Better scheduling flexibility.  Due to the agile structure of the RaaS service offering, turnaround times to get a Microsoft accredited engineer to review your environment are much shorter.
    • Better security.  While both offerings are highly secure, RaaS has the added benefit of including no intermediary steps in the assessment process.
    • RaaS includes remediation planning, which helps you understand what’s required to get your environment optimally healthy.
    • A broader toolset that is continually enhanced.   For example, RaaS for Active Directory includes assessment checks that were previously available as two separate service offerings: an Active Directory RAP and Active Directory Upgrade Assessment Health Check.  These are combined in the Active Directory RaaS. RaaS also includes additional new tests such as support for Windows Server 2012.

    Q:  What technologies can be assessed using RaaS?

    … and others coming soon, such as Hyper-V and more.  Please contact your Microsoft Premier Support Technical Account Manager for further info on availability.

    Q:  I can’t wait until the releases of the other technologies!
    A:  In the meantime, you can still request a RAP for those technologies until these are released with RaaS.

    Q:  Is RaaS currently available for non-Premier Support customers?
    A:  Not at this time. To find out more about Premier Support, please visit Microsoft Services Premier Support  

    Q:  Do I use the RaaS service for my environment before or after going into production?
    A:  Both. We highly recommend you test your environment before going live using RaaS.  We also recommend using RaaS after you go in production, because changes between test and production are inevitable.

    Q:  What are the system requirements for a RaaS?
    A:  The Microsoft Download Center has a detailed description of RAP as a Service (RaaS) Prerequisites.

    Q:  How do I schedule a RaaS?
    A:  Talk to your Microsoft Premier Support Technical Account Manager (TAM) or Application Developer Manager (ADM), and they can schedule the RaaS.

    Q:  Where would I go to sign-in for the RaaS?
    A:  You browse to the Microsoft Premier Proactive Assessment Services site and enter your credentials. The packages will be waiting for you to download and start running.

    Q:  I’m in a secure environment; we cannot access external websites.
    A:  It’s alright! We have a portable version for your needs.

    Q:  Does it take a lot of ramp-up time to get familiar with the toolset?
    A:  No, the package is wizard-driven for ease of use.

    Q:  Do I need to have any down time?
    A:  No, the data collection is non-invasive, so no scheduled downtime is required. Collect the data on your own schedule.

    Q:  OK, I collected the data, now what are my next steps?
    A:  Once data collection is complete, you can submit the data privately and securely to the Microsoft Premier Proactive Assessment Services site for analysis.

    Q:  When do I get to see my results?
    A:  We (the accredited Microsoft engineers) will analyze and annotate the report for your specific environment.  Once you receive the report back, we will set up a conference call to go over the findings with your staff.

    Q:  How long is the report available for us?
    A:  The report is available online for twelve months so you can continue remediating any issues/problems.

    Q:  Can I re-run the RaaS toolset?
    A:  Yes, you get to re-collect the data, submit the data again and get the detailed analysis back for a whole year, as a Premier customer.

    Q:  Can I still have Microsoft Premier Field Engineers come on-site?
    A:  Yes, we still have that option available to assist you! Regular RAPs are still available.

    Thank you and I hope you found this useful and something you can take advantage of.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Global Business Support

  • FREE online event: Virtualizing Your Data Center with Hyper-V and System Center

    Virtualizing Your Data Center with Hyper-V and System Center
    Free online event with live Q&A: http://aka.ms/virtDC
    Wednesday, February 19th from 9am – 5pm PST

    What: Fast-paced live virtual session 
    Cost: Free
    Audience: IT Pro
    Prerequisites: For IT pros new to virtualization or with some experience and looking for best practices.

    If you're new to virtualization, or if you have some experience and want to see the latest R2 features of Windows Server 2012 Hyper-V or Virtual Machine Manager, join us for a day of free online training with live Q&A to get all your questions answered. Learn how to build your infrastructure from the ground up on the Microsoft stack, using System Center to provide powerful management capabilities. Microsoft virtualization experts Symon Perriman and Matt McSpirit (who are also VMware Certified Professionals) demonstrate how you can help your business consolidate workloads and improve server utilization, while reducing costs. Learn the differences between the platforms, and explore how System Center can be used to manage a multi-hypervisor environment, looking at VMware vSphere 5.5 management, monitoring, automation, and migration. Even if you cannot attend the live event, register today anyway and you will get an email once we release the videos for on-demand replay! 

    Topics include:

    • Introduction to Microsoft Virtualization
    • Host Configuration
    • Virtual Machine Clustering and Resiliency
    • Virtual Machine Configuration
    • Virtual Machine Mobility
    • Virtual Machine Replication and Protection
    • Network Virtualization
    • Virtual Machine and Service Templates
    • Private Clouds and User Roles
    • System Center 2012 R2 Data Center
    • Virtualization with the Hybrid Cloud
    • VMware Management, Integration, and Migration

    Register here: http://aka.ms/virtDC

    Also check out the www.MicrosoftVirtualAcademy.com for other free training and live events.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Global Business Support

  • Top Tips from “Tip of the Day”

    The following links are for the top four tips from the 'Tip of the Day' blog during the month of January.

    http://blogs.technet.com/b/tip_of_the_day/archive/2014/01/25/tip-of-the-day-validating-a-single-disk-in-cluster.aspx

    http://blogs.technet.com/b/tip_of_the_day/archive/2014/01/21/tip-of-the-day-registry-enhancements.aspx

    http://blogs.technet.com/b/tip_of_the_day/archive/2014/01/14/some-hidden-dfsr-improvements.aspx

    http://blogs.technet.com/b/tip_of_the_day/archive/2014/01/17/managed-service-accounts.aspx

    NOTE: Tip of the Day is a random daily tip. The idea behind it harkens back to something I started when I was first hired at Microsoft. I told myself, "I want to try to learn something new every day. If I can learn at least one thing today, then I can call the day a success." Tip of the Day is my attempt to share the things I pick up along the way.

    Robert Mitchell
    Senior Support Escalation Engineer
    Microsoft Customer Service & Support

  • How to add a Pass-through disk to a Highly Available Virtual Machine running on a Windows Server 2012 R2 Failover Cluster

    I was recently asked how to add a Pass-through disk to a Highly Available VM running on a Windows Server 2012 R2 Failover Cluster. I tested the steps I was accustomed to for Windows Server 2008 and found the disk to be in a ReadOnly state in the VM. As it turns out, there are differences in how you add the disk and how Cluster refreshes the VM after the disk is added. Below are the steps to do this successfully:

    The first step is to add the disk to Failover Cluster. When you do this, the disk will be placed in Available Storage. Note the disk number associated with your Pass-through disk for later use.

    image

    After adding the disk to Failover Cluster, assign it to the VM role and ensure that the disk is online. If it is offline when you perform the remaining steps, the disk will be ReadOnly in the VM with no way to fix it but to start over.

    image

    image

    Now that you have the Pass-through disk added to the VM role, all that is left is to modify the VM setting by adding the Pass-through disk to a virtual SCSI adapter. Before doing this you will need to Shut Down the VM and leave the Configuration resource online. Also ensure the disk is online.

    image

    Go to the VM Settings from within Failover Cluster administrator by right clicking on the VM and selecting Settings:

    image

    Click on the SCSI controller, select Hard Drive, and then click Add. You can use the existing SCSI controller the operating system VHDX is attached to, or add a second one if you like.

    image

    Click the Physical hard disk radio button, and from the drop-down select the disk that corresponds to the disk you added to the VM role. This is the disk number you noted above. Click OK to complete the change.

    image

    Start the Virtual Machine and you will now have access from within the VM to your Pass-Through disk.

    image

    Here is an abbreviated list of steps if you are familiar with the interfaces and would just like to know what needs to be done so you can go do it.

    1. Add the disk that will be a Pass-through disk to Failover Cluster and assign the disk to the VM role.
    2. Online the disk if it is offline.
    3. Shut down the Virtual Machine, but leave the Virtual Machine Configuration resource online.
    4. Add the Pass-through disk to the VM configuration. Use the existing SCSI controller or add a second.
    5. Start the Virtual Machine.
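    The abbreviated steps above can also be scripted. The following is a rough sketch using the FailoverClusters and Hyper-V PowerShell modules; the VM name ("VM1"), cluster disk resource name, and disk number are all hypothetical, so substitute your own and verify each step (especially the resource names and the disk's online state) in your environment:

    PS C:\> Get-ClusterAvailableDisk | Add-ClusterDisk                            # step 1: add the disk to the cluster (lands in Available Storage)
    PS C:\> Move-ClusterResource -Name "Cluster Disk 2" -Group "VM1"              # step 1: assign the disk to the VM role
    PS C:\> Start-ClusterResource -Name "Cluster Disk 2"                          # step 2: make sure the disk is online
    PS C:\> Stop-VM -Name "VM1"                                                   # step 3: shut down the guest; the VM Configuration resource stays online
    PS C:\> Add-VMHardDiskDrive -VMName "VM1" -ControllerType SCSI -DiskNumber 3  # step 4: attach the pass-through disk to the SCSI controller
    PS C:\> Start-VM -Name "VM1"                                                  # step 5: start the VM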

    Steven Andress
    Sr. Support Escalation Engineer
    Platforms Global Business Support