Supporting Windows

  • Windows 10 is coming!

    Windows10logo

    Hello folks, as I’m sure you already know, Windows 10 will be available tomorrow, July 29th.  With that said, we will be blogging some of the new features that our team will be supporting in this new OS.

    We will also blog about features that some of our other teams support.  Namely, how to manage Windows 10 notifications and upgrade options:

    How to manage Windows 10 notification and upgrade options

    Windows 10 landing page

    See you soon!

    -Blake

  • The New and Improved CheckSUR

    One of the most used and arguably most efficient tools that we utilize when troubleshooting servicing issues prior to Windows 8/Windows Server 2012 is the System Update Readiness tool (also known as CheckSUR). However, as we continue to improve our operating systems, we must continue to improve our troubleshooting tools as well. Thus, I want to introduce the updated CheckSUR:

    Improvements for the System Update Readiness Tool in Windows 7 and Windows Server 2008 R2
    https://support.microsoft.com/en-us/kb/2966583

    In short, CheckSUR previously loaded its payload locally on the machine and ran an executable to attempt to resolve any discrepancies it detected in the package store.

    With these improvements, the utility no longer carries a payload, and it no longer requires the repeated downloads of the CheckSUR package that were previously necessary. The new CheckSUR package stays installed until removed by the user.

    I’m sure you’re wondering: without the payload, how will CheckSUR resolve any issues? After installing this patch and rebooting (which is required), the CheckSUR functionality is now exposed through the DISM command:

    DISM /Online /Cleanup-Image /Scanhealth

    This command should seem familiar if you have used DISM for troubleshooting on any Windows 8 or later operating system. There is, however, no RestoreHealth or CheckHealth with this update.  ScanHealth provides the same functionality that RestoreHealth provides on Windows 8 and later operating systems, and that the CheckSUR tool provided previously.

    Another new feature is that CheckSUR will now also detect corruption on components for Internet Explorer.

    A few extra points to note:

    • The updated CheckSUR runs only on Windows 7 SP1 and Windows Server 2008 R2 SP1
    • CheckSUR can only be run against an online OS
    • CheckSUR can be used proactively by scheduling a task that runs /ScanHealth at a desired time, ensuring the system is periodically checked for corruption
    • The manual steps that could previously be used to run CheckSUR are no longer available with this update
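    As a sketch of the proactive scheduling bullet above (the task name and schedule here are arbitrary examples), a weekly scan could be registered from an elevated prompt:

    ```cmd
    REM Run the ScanHealth check every Sunday at 3 AM under the SYSTEM account
    schtasks /Create /TN "CheckSUR weekly scan" /TR "dism.exe /Online /Cleanup-Image /ScanHealth" /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM
    ```

    Each run logs its findings to CheckSUR.log, so the log can be reviewed on whatever cadence you choose.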

    One of my favorite parts of the update is that the results are still logged to C:\Windows\Logs\CBS\CheckSUR.log, with the same layout and the same information about its findings. I will be creating another article shortly that discusses steps to take when you encounter a CheckSUR.log with errors.
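    If you just want to surface the problem lines from that log, a quick PowerShell sketch works (the match pattern here is an assumption; adjust it to the entries your log actually contains):

    ```powershell
    # Show only the lines in CheckSUR.log that mention errors, corruption, or missing packages
    Select-String -Path 'C:\Windows\Logs\CBS\CheckSUR.log' -Pattern 'error|corrupt|missing' |
        ForEach-Object { $_.Line }
    ```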

    Thank You
    Nicholas Debanm
    Support Escalation Engineer

  • Azure SNAT

    This post was contributed by Pedro Perez. Azure’s network infrastructure is quite different from your usual on-premises network, as there are different layers of software abstraction that work behind the curtains. I would like to talk today about ...read more
  • Building Windows Server Failover Cluster on Azure IAAS VM – Part 2 (Network and Creation)

    Hello, cluster fans. In my previous blog, Part 1, I talked about how to work around the storage block in order to implement Windows Server Failover Cluster on Azure IAAS VM. Now let’s discuss another important part – Networking in Cluster on Azure.

    Before that, you should know some basic concepts of Azure networking. Here are a few Azure terms we need in order to set up the Cluster.

    VIP (Virtual IP address): A public IP address that belongs to the cloud service. It is also the address of the Azure Load Balancer, which determines how network traffic is directed before being routed to the VM.

    DIP (Dynamic IP address): An internal IP assigned by Microsoft Azure DHCP to the VM.

    Internal Load Balancer: It is configured to port-forward or load-balance traffic inside a VNET or cloud service to different VMs.

    Endpoint: It associates a VIP/DIP + port combination on a VM with a port on either the Azure Load Balancer for public-facing traffic or the Internal Load Balancer for traffic inside a VNET (or cloud service).

    You can refer to this blog for more details about those terms for Azure network:

    VIPs, DIPs and PIPs in Microsoft Azure
    http://blogs.msdn.com/b/cloud_solution_architect/archive/2014/11/08/vips-dips-and-pips-in-microsoft-azure.aspx

    OK, enough reading. Storage is ready and we know the basics of Azure networking, so can we start building the Cluster? Yes!

    The first difference you will see is that you need to create the Cluster with one node and then add the other nodes afterward. This is because the Cluster Name Object (CNO) cannot come online, since it cannot acquire a unique IP address from the Azure DHCP service. Instead, the IP address assigned to the CNO is a duplicate of the address of the node that owns the CNO. That IP fails as a duplicate and can never be brought online. This eventually causes the Cluster to lose quorum, because the nodes cannot properly connect to each other. To prevent the Cluster from losing quorum, you start with a one-node Cluster, let the CNO’s IP address fail, and then manually set the IP address.

    Example:

    The CNO DEMOCLUSTER is offline because the IP address it depends on has failed. 10.0.0.4 is the VM’s DIP, which is the address the CNO’s IP duplicated.

    image

    In order to fix this, we need to go into the properties of the IP Address resource and change the address to another address in the same subnet that is not currently in use, for example, 10.0.0.7.

    To change the IP address, right-click the resource, choose Properties, and specify the new 10.0.0.7 address.

    image

    Once the address is changed, right-click the Cluster Name resource and bring it online.

    image

    Now that these two resources are online, you can add more nodes to the Cluster.

    Instead of using Failover Cluster Manager, the preferred method is to use the New-Cluster PowerShell cmdlet and specify a static IP during Cluster creation. This way, you can add all the nodes and use the proper IP address from the start, without the extra steps in Failover Cluster Manager.

    Take the above environment as an example:

    New-Cluster -Name DEMOCLUSTER -Node node1,node2 -StaticAddress 10.0.0.7

    Note: The static IP address that you assign to the CNO is not for network communication; its only purpose is to bring the CNO online to satisfy the dependency. Therefore, you cannot ping that IP, cannot resolve its DNS name, and cannot use the CNO for management, since its IP is an unusable IP.

    Now you’ve successfully created a Cluster. Let’s add a highly available role to it. For demo purposes, I’ll use the File Server role as an example, since this is the most common role and one that a lot of us understand.

    Note: In a production environment, we do not recommend a File Server Cluster in Azure because of cost and performance. Take this example as a proof of concept.

    Unlike a Cluster on-premises, I recommend you pause all other nodes and keep only one node up. This prevents the new File Server role from moving among the nodes, since the file server’s VCO (Virtual Computer Object) will automatically be assigned a duplicate of the IP of the node that owns the VCO. That IP address fails and keeps the VCO from coming online on any node. This is the same scenario as for the CNO we just talked about.

    Screenshots are more intuitive.

    The VCO DEMOFS won’t come online because of the failed status of its IP Address resource. This is expected, because the dynamic IP address duplicates the IP of the owner node.

    image

    After manually editing the IP to a static, unused address (10.0.0.8 in this example), the whole resource group is online.

    image

    But remember, that IP address is the same kind of unusable IP as the CNO’s. You can use it to bring the resource online, but it is not a real IP for network communication. If this is a File Server, none of the VMs except the owner node of this VCO can access the file share.  The way Azure networking works, the traffic is looped back to the node it originated from.

    Show time. We need to utilize the Load Balancer in Azure so this IP address can communicate with other machines and carry the client-server traffic.

    Load Balancer is an Azure resource that can route network traffic to different Azure VMs. Its IP can be a public-facing VIP, or internal only, like a DIP. Each VM needs endpoint(s) so the Load Balancer knows where the traffic should go. In an endpoint, there are two kinds of ports. The first is a regular port, used for normal client-server communications: for example, port 445 for SMB file sharing, port 80 for HTTP, port 1433 for MSSQL, and so on. The other kind is a probe port; the default port number for this is 59999. The probe port’s job is to find out which node actively hosts the VCO in the Cluster: the Load Balancer sends probe pings over TCP port 59999 to every node in the cluster, by default every 10 seconds. When you configure a role in a Cluster on an Azure VM, you need to find out which port(s) the application uses, because you will need to add those port(s) to the endpoint. Then you add the probe port to the same endpoint, and update the parameters of the VCO’s IP address with that probe port. Finally, the Load Balancer will port-forward the traffic to the VM that owns the VCO. All of the above settings had to be completed using PowerShell at the time this blog was written.

    Note: At the time this blog was written and posted, Microsoft supports only one resource group per cluster on Azure, in an Active/Passive model only. This is because the VCO’s IP can only use the cloud service IP address (VIP) or the IP address of the Internal Load Balancer. This limitation is still in effect even though Azure now supports the creation of multiple VIP addresses in a given cloud service.

    Here is the diagram for Internal Load Balancer (ILB) in a Cluster which can explain the above theory better:

    image

    The application in this Cluster is a File Server. That’s why we have port 445, and why the IP for the VCO (10.0.0.8) is the same as the ILB’s. There are three steps to configure this:

    Step 1: Add the ILB to the Azure cloud service.

    Run the following PowerShell commands on your on-premises machine which can manage your Azure subscription.

    # Define variables.

    $ServiceName = "demovm1-3va468p3" # the name of the cloud service that contains the VM nodes. Your cloud service name is unique. Use Azure portal to find out service name or use get-azurevm.

    image

    $ILBName = "DEMOILB" # newly chosen name for the new ILB

    $SubnetName = "Subnet-1" # subnet name that the VMs use in the VNet

    $ILBStaticIP = "10.0.0.8" # static IP address for the ILB in the subnet

    # Add Azure ILB using the above variables.

    Add-AzureInternalLoadBalancer -InternalLoadBalancerName $ILBName -SubnetName $SubnetName -ServiceName $ServiceName -StaticVNetIPAddress $ILBStaticIP

    # Check the settings.

    Get-AzureInternalLoadBalancer –servicename $ServiceName

    image

    Step 2: Configure the load balanced endpoint for each node using ILB.

    Run the following PowerShell commands on your on-premises machine which can manage your Azure subscription.

    # Define variables.

    $VMNodes = "DEMOVM1", “DEMOVM2" # cluster nodes’ names, separated by commas. Your nodes’ names will be different.

    $EndpointName = "SMB" # newly chosen name of the endpoint

    $EndpointPort = "445" # public port to use for the endpoint for SMB file sharing. If the cluster is used for other purpose, i.e., HTTP, the port number needs change to 80.

    # Add the endpoint with port 445 and probe port 59999 to each node. It will take a few minutes to complete. Pay attention to the ProbeIntervalInSeconds parameter; it controls how often the probe port detects which node is active.

    ForEach ($node in $VMNodes)

    {

    Get-AzureVM -ServiceName $ServiceName -Name $node | Add-AzureEndpoint -Name $EndpointName -LBSetName "$EndpointName-LB" -Protocol tcp -LocalPort $EndpointPort -PublicPort $EndpointPort -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 -InternalLoadBalancerName $ILBName -DirectServerReturn $true | Update-AzureVM

    }

    # Check the settings.

    ForEach ($node in $VMNodes)

    {

    Get-AzureVM –ServiceName $ServiceName –Name $node | Get-AzureEndpoint | where-object {$_.name -eq "smb"}

    }

    image

    Step 3: Update the parameters of VCO’s IP address with Probe Port.

    Run the following commands inside one of the cluster nodes if you are using Windows Server 2008 R2 (this uses the legacy cluster.exe tool).

    # Define variables

    $ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

    $IPResourceName = "IP Address 10.0.0.0" # the IP Address resource name (Use get-clusterresource | where-object {$_.resourcetype -eq "IP Address"} or GUI to find the name)

    $ILBIP = "10.0.0.8" # the IP address of the Internal Load Balancer (ILB)

    # Update cluster resource parameters of VCO’s IP address to work with ILB.

    cluster res $IPResourceName /priv enabledhcp=0 overrideaddressmatch=1 address=$ILBIP probeport=59999  subnetmask=255.255.255.255

    Run the following PowerShell commands inside one of the cluster nodes if you are using Windows Server 2012/2012 R2.

    # Define variables

    $ClusterNetworkName = "Cluster Network 1" # the cluster network name (Use Get-ClusterNetwork or GUI to find the name)

    $IPResourceName = "IP Address 10.0.0.0" # the IP Address resource name (Use get-clusterresource | where-object {$_.resourcetype -eq "IP Address"} or GUI to find the name)

    $ILBIP = "10.0.0.8" # the IP address of the Internal Load Balancer (ILB)

    # Update cluster resource parameters of VCO’s IP address to work with ILB

    Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ILBIP";"ProbePort"="59999";"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"OverrideAddressMatch"=1;"EnableDhcp"=0}

    You should see this window:

    image

    Take the IP Address resource offline and bring it online again. Start the clustered role.

    Now you have an Internal Load Balancer working with the VCO’s IP. One last task involves the Windows Firewall: you need to open at least port 59999 on all nodes for probe port detection, or turn the firewall off. Then you should be all set. It may take about 10 seconds to establish the connection to the VCO the first time, or after you fail over the resource group to another node, because of the ProbeIntervalInSeconds value we set up previously.
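    A minimal sketch of that firewall step (the rule name here is an arbitrary example), run on every node:

    ```powershell
    # Allow the Azure Load Balancer probe to reach this node on TCP 59999
    netsh advfirewall firewall add rule name="Cluster probe port 59999" dir=in action=allow protocol=TCP localport=59999
    ```

    netsh works on both Windows Server 2008 R2 and 2012/R2, which is why it is used here instead of the newer firewall cmdlets.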

    In this example, the VCO has an internal IP of 10.0.0.8. If you want to make your VCO public-facing, you can use the cloud service’s IP address (VIP). The steps are similar and even easier, because you can skip Step 1: the VIP is already an Azure Load Balancer. You just need to add the endpoint with a regular port plus the probe port to each VM (Step 2), then update the VCO’s IP in the Cluster (Step 3). Please be aware that your clustered resource group will be exposed to the Internet, since the VCO has a public IP. You may want to protect it by planning enhanced security measures.

    Great! Now you’ve completed all the steps for building a Windows Server Failover Cluster on an Azure IAAS VM. It is a bit of a long journey; however, you’ll find it useful and worthwhile. Please leave me a comment if you have questions.

    Happy Clustering!

    Mario Liu
    Support Escalation Engineer
    CSS Americas | WINDOWS | HIGH AVAILABILITY  

  • Supported IP Protocols for Azure Cloud Services

    As of today, June 2015, the supported IP protocols for Azure cloud services are TCP (protocol number 6) and UDP (protocol number 17) only. All TCP and UDP ports are supported. This applies to both cloud service VIPs as well as instance level public IPs ...read more
  • Windows Server Failover Cluster on Azure IAAS VM – Part 1 (Storage)

    Hello, cluster fans. This is Mario Liu. I’m a Support Escalation Engineer on the Windows High Availability team in Microsoft CSS Americas. I have good news for you: starting in April 2015, Microsoft supports Windows Server Failover Cluster ...read more
  • Walkthrough on Session hint / TSVUrl on Windows Server 2012

    Hello Askperf, my name is Naresh, and today we are going to discuss how to connect to a Windows Server 2012 Remote Desktop collection from thin clients or other clients that are not session-hint aware.

    You might be wondering what “session hints” are, so let us dig right into the need for them. The Connection Broker in Windows 2012/R2 has changed the way clients connect to a group of RDSH/RDVH servers: these groups were earlier called farms, but are now grouped as ‘collections’ in Windows 2012/R2. With Windows 2012, we changed how the GUI looks, how we install different roles, and how these different roles interact with each other. With all this, the flow of remote desktop connections, and how a client connects to the endpoint servers, changed as well.

    Classical way of connecting to RemoteApps – Windows Server 2008 R2

    In Windows 2008 R2 we deployed RemoteApps as:

    1. MSI files
    2. RDP files
    3. Connect through RDWeb

    To explain the connection flow I will walk you through the RDP file content of a RemoteApp in Windows 2008/R2 vs. Windows 2012/R2.

    This is how an RDP file for a RemoteApp looks in a 2008 R2 RDS environment:

    clip_image001

    1. The client reads the full address (of the farm) and the RDGateway properties.
    2. If the client finds the RDGateway, it will authenticate against the gateway and based on the CAP and RAP policy the connection would be passed on.
    3. The Client would then do a DNS query for the full address (of the farm) – assuming this is a DNS Round Robin or the farm name is pointing to a NLB – and would try to connect to the RDSH server. (If there is a dedicated redirector, then one of them will receive this connection.)
    4. The RDSH (or the redirector) server receiving the connection would then contact the connection broker and if there is an existing disconnected session available for this user on an RDSH, the connection broker would send the details of the RDSH server back to the redirector. If there is no disconnected session, the connection broker would determine the best suited server as per the load balancing algorithms and would send the details of that server to the redirector.
    5. Redirector would in turn pass those details to the client and the client would then directly logon to the application on the assigned server. Session established.

    Change in the way we connect in 2012 -Session Hint / TSVUrl

    In a 2012/R2 environment the RDP file looks like this:

    clip_image002

    1. In Windows 2012, the concept of farms has been deprecated and replaced by collections. However, unlike farms, collections do not have an entry in DNS. Therefore the client reads the full address (which points to the connection broker that hosts the RDS deployment and collections) and the RDGateway properties.
    2. If the client finds the RDGateway, it will authenticate against the gateway and based on the CAP and RAP policy the connection would be passed on.
    3. The client would then do a DNS query for the full address, i.e. the connection broker in Windows 2012, and would try to connect to the RD Connection Broker. The term redirector is no longer used in Windows 2012; instead, the connection broker does the redirection. But how?

    What are session hints/TSVUrls?

    clip_image003

    If you look at the above RDP file, I have also highlighted the loadbalanceinfo, which contains the TSVUrl. A TSVUrl, or session hint, indicates which collection in the deployment the client should connect to. So along with the full address and gateway information, the client also reads the loadbalanceinfo and sends it over to the connection broker.

    4. The connection broker then reads the TSVUrl to determine the collection name and then suggests which RD Session host participating in the collection should take the session based on whether there is an already existing session or not.

    5. If there is an existing session available for this user on an RDSH in that collection, the connection broker would send the details of the RDSH server back to the client. If there is no disconnected session, the connection broker would determine the best suited server within the collection as per the load balancing algorithms and would send the details of that server to the client.

    6. The client would then directly logon to the application on that assigned server. Session established.

    DefaultTsvUrl: workaround for incompatible RDClient

    However, what would happen if the RD client does not understand TSVUrls? The client would log on directly to the connection broker, but since the application is not hosted there, the connection would error out.

    We have seen a lot of customers not wanting to move to Windows 2012 Remote Desktop Services because they have clients that might not understand TSVUrls: old thin clients with old RD clients, some non-Windows clients, or some older Windows clients. I would highly recommend upgrading these clients to the latest version, given the many other benefits and features the latest RD client brings along: get in touch with the OEM vendor/manufacturer for the latest RD client for these devices (for old Windows fat clients, either install RDC 8 or later, or upgrade to an operating system that supports RDC 8), making sure they are TSVUrl aware. However, we do understand that some of our customers have genuine reasons to keep these clients, and while planning and implementing an upgrade, one still needs to run the show in the meantime with the non-compatible clients.

    For such cases, we can use the registry value below on the connection broker hosting the deployment.

    Important: This section, method, or task contains steps that tell you how to modify the registry. However, serious problems might occur if you modify the registry incorrectly. Therefore, make sure that you follow these steps carefully. For added protection, back up the registry before you modify it. Then, you can restore the registry if a problem occurs. For more information about how to back up and restore the registry, see the following article in the Microsoft Knowledge Base: 322756 – How to back up and restore the registry in Windows.

    The following tuning recommendation has been helpful in alleviating the problem:

    1. Start Registry Editor (Regedit.exe).

    2. Locate and then click the following key in the registry:

    HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings

    3. On the Edit menu, click Add Value, and then add the following registry value:

    Value name: DefaultTsvUrl
    Data type: REG_SZ
    Value data: tsv://<TSVURL>

    This registry value provides the connection broker with a default loadbalanceinfo in case the client is unable to read the loadbalanceinfo provided in the RemoteApp.

    To find the TSVUrl to set in DefaultTsvUrl, you can look at the following registry value on the connection broker:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\<CollectionName>\Applications\<RemoteApp>\

    RDPFileContents REG_SZ

    You can find the TSVUrl in the RDPFileContents of the collection you would like to set as your default, then configure it as your DefaultTsvUrl. You can then keep the show running while you upgrade to newer, compatible clients.
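    As a sketch of those two steps in PowerShell (the <CollectionName> and <RemoteApp> placeholders must be replaced with your own names, and the loadbalanceinfo parsing is an assumption about the RDP file layout shown in the screenshots above):

    ```powershell
    # Read the published RDP file contents for the collection you want as default
    $appKey = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\<CollectionName>\Applications\<RemoteApp>'
    $rdp = (Get-ItemProperty -Path $appKey).RDPFileContents

    # Extract the tsv:// URL from the loadbalanceinfo line
    $tsvUrl = (($rdp -split "`r?`n") -match '^loadbalanceinfo:s:')[0] -replace '^loadbalanceinfo:s:', ''

    # Publish it as the broker-wide default for clients that cannot read loadbalanceinfo
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\ClusterSettings' -Name 'DefaultTsvUrl' -Value $tsvUrl
    ```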

    NOTE: This is suggested as an alternative/workaround when upgrading the client is not an option. It has the following caveats that one should be aware of:

    1. This value is only read when the client is unable to understand the TSVUrl sent in the RDP file (from the RemoteApp) and thus does not present a TSVUrl to the connection broker.
    2. Whenever such a client connects, DefaultTsvUrl sends it to the one collection specified in the registry value. DefaultTsvUrl can point to a single collection only, so you may want to plan and create a single collection for non-compatible clients that contains all their required apps. There is no provision for defining multiple collections in this registry value, so using incompatible clients across multiple collections is not possible.
    3. If you change that collection, you will have to change the DefaultTsvUrl registry value as well.
    4. This registry value is only a workaround for TSVUrls and will not work if the clients are not compatible with RemoteApps itself. It only helps clients that were able to access RemoteApps in Windows 2008/R2 but cannot access them through collections, as explained in the section "Change in the way we connect in 2012 -Session Hint / TSVUrl".

    -Naresh

  • Invitation to provide feedback through UserVoice

    I am not sure if everyone is aware of UserVoice, so I am here to tell you about it.  UserVoice is where you can provide feedback to the Microsoft Product Groups, who are now monitoring these forums.  Do you have an idea or suggestion on how to make Windows Server 2016 better, or a feature you would like to see added?  Well, speak up and let us know what you are thinking.

    There are multiple forums for providing this feedback.  Below is a listing of the various feature areas.  But first, how to get started with UserVoice and how the Windows Server Product Team will respond.

    How do you get started with UserVoice?

    1. Create a user account. (Enter contact information in case we need to ask more questions.)
    2. Add your voice! (I wish… )
         Or
    3. Cast a vote for an idea you like. You get 10 votes total!

    What ideas will be most considered by the Windows Server Product Team? 

    • Ideas with high votes will be considered heavily.
    • Clear and actionable ideas will be reviewed quickly.  

    Caution: Do not create a single submission with multiple ideas contained in it; we need to understand the priorities. Please make sure they are separate ideas so we can see clear votes on each distinct idea. Otherwise, we will likely close the submission.

    Once the Windows Server Product Team has reviewed the idea, the idea status will change.

    Note: "Under Review" status means that the Windows Server Product Team is reviewing. It does not guarantee any deliverable.

    We will provide notification on all declined ideas.

    Each vote gets released when the idea is closed (either declined or completed).

    Now, as far as the various forums, here you go and let us know what you would like to see:

    Clustering  
    Diagnostics 
    General Feedback 
    Nano Server 
    Networking 
    Remote Management Tools 
    Security and Assurance 
    Storage 
    Virtualization 

    Also, if you are looking to provide feedback on Automation (PowerShell and Scripting), please provide your suggestions using our PowerShell Connect Site

    Remember, these sites are for feature suggestions and ideas only.

    Enjoy,
    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • Use Azure custom routes to enable KMS activation with forced tunneling

    Previously, if customers enabled forced tunneling on their subnets, OS activations prior to Windows Server 2012 R2 would fail as they could not connect to the Azure KMS server from their cloud service VIP. However, thanks to the newly released Azure custom ...read more
  • Microsoft Ignite sessions dealing with what we do in AskCore

    Early in the month of May, Microsoft held its Ignite conference (formerly known as TechEd) in Chicago, Illinois.  This conference was a huge success, with over 23,000 attendees.  There are a lot of new things coming out with Windows 10 and Windows Server 2016 over the next year.  I wanted to provide you some of the sessions that deal specifically with what our Core group supports (Failover Clustering, Storage, Hyper-V, and Deployment).  There were tons more sessions covering all other aspects of Microsoft Azure, applications (such as SQL, Exchange, Office 365), SCVMM, security, etc., but I wanted to pull out these specific sessions since they deal with what we deal with.

    Give them a look and see what the next version of Windows could do for you.  Each full session is approximately 75 minutes in length.  I also pulled out a few "Ignite Studios" productions that are in the 20-minute range.

    To see the full list of all sessions from Microsoft Ignite, please visit our Channel 9 site.

     

    Failover Clustering / Storage
    ==============================

    Stretching Failover Clusters and Using Storage Replica in Windows Server vNext
    https://channel9.msdn.com/Events/Ignite/2015/BRK3487
    In this session we discuss the deployment considerations of taking a Windows Server Failover Cluster and stretching across sites to achieve disaster recovery. This session discusses the networking, storage, and quorum model considerations. This session also discusses new enhancements coming in vNext to enable multi-site clusters.

    Deploying Private Cloud Storage with Dell Servers and Windows Server vNext
    http://channel9.msdn.com/Events/Ignite/2015/BRK3496
    The storage industry is going through strategic tectonic shifts. In this session, we’ll walk through Dell’s participation in the Microsoft Software Defined Storage journey and how cloud scale scenarios are shaping solutions. We will provide technical guidance for building Storage Spaces in Windows Server vNext clusters on the Dell PowerEdge R730xd platform.

    Exploring Storage Replica in Windows Server vNext
    http://channel9.msdn.com/Events/Ignite/2015/BRK3489
    Delivering business continuity involves more than just high availability; it means disaster preparedness. In this session, we discuss the new Storage Replica feature, including scenarios, architecture, requirements, and demos. Along with our new stretch cluster option, it also covers use of Storage Replica in cluster-to-cluster and non-clustered scenarios.

    Upgrading Your Private Cloud to Windows Server 2012 R2 and Beyond!
    http://channel9.msdn.com/Events/Ignite/2015/BRK3484
    We are moving fast, and want to help you to keep on top of the latest technology! This session covers the features and capabilities that will enable you to upgrade to Windows Server 2012 R2 and to Windows Server vNext with the least disruption. Understand cluster role migration, cross version live migration, rolling upgrades, and more.

    Overview of the Microsoft Cloud Platform System
    http://channel9.msdn.com/Events/Ignite/2015/BRK2472
    With the Microsoft Cloud Platform System, we are sharing our cloud design learnings from Azure datacenters, so customers can deploy and operate a cloud solution with Windows Server, Microsoft System Center and the Windows Azure Pack. This solution provides Infrastructure-as-a-Service and Platform-as-a-Service solutions for enterprises and service providers.

    Architectural Deep Dive into the Microsoft Cloud Platform System
    http://channel9.msdn.com/Events/Ignite/2015/BRK3459
    The Microsoft Cloud Platform System has an automated framework that keeps the entire stamp current, from software to firmware to drivers, across Windows Server, Microsoft System Center, Windows Azure Pack, SQL Server, and OEM/IHV components, and prevents disruptions to tenant and management workloads. This session covers the complete architecture for CPS and deployment in your datacenter.

    Platform Vision & Strategy (4 of 7): Storage Overview
    http://channel9.msdn.com/Events/Ignite/2015/BRK2485
    This is the fourth in a series of 7 datacenter platform overview sessions.

    StorSimple: Extending Your Datacenter into Microsoft Azure with Hybrid Cloud Storage
    http://channel9.msdn.com/Events/Ignite/2015/BRK2494
    StorSimple provides a hybrid cloud storage solution with a hybrid storage array in the on-premises datacenter that seamlessly extends storage capabilities to the cloud. This session details the implementation and functionality of the solution and discusses how the solution solves the issue of growing IT costs related to storage growth and management.

    Hyper-V Storage Performance with Storage Quality of Service
    http://channel9.msdn.com/Events/Ignite/2015/BRK3504
    Windows Server vNext allows you to centrally monitor and manage performance for Hyper-V workloads using Scale-Out File Servers. Learn how to monitor storage performance from a customer, Hyper-V, and storage admin’s viewpoint, then author effective policies to deliver the performance your customers need.

    Spaces-Based, Software-Defined Storage: Design and Configuration Best Practices
    http://channel9.msdn.com/Events/Ignite/2015/BRK3463
    Going well beyond a feature walkthrough, this session delves into the nuances and complexities of the spaces-based SDS design. Starting with the hardware selection and continuing up the stack, this session empowers you to successfully design, deploy, and configure a storage solution based completely on Windows Server 2012 R2 and proven best practices. Examples galore!

    Virtualization
    ===============

    Platform Vision & Strategy (2 of 7): Server Virtualization Overview
    http://channel9.msdn.com/Events/Ignite/2015/BRK2466
    Windows Server and Microsoft Azure are ushering in the next generation of computing for modern apps and cloud infrastructure. What are Containers? Nano Server? New in Hyper-V? Azure IaaS? Or how does this fit into Microsoft’s cloud strategy? Get the answers and more! Come learn about new capabilities in Windows Server, Hyper-V and Azure VMs.

    The Hidden Treasures of Windows Server 2012 R2 Hyper-V?
    http://channel9.msdn.com/Events/Ignite/2015/BRK3506
    It's one thing to hear about and see a great demo of a Hyper-V feature. But how do you put them into practice? This session takes you through some of those lesser-known elements of Hyper-V that have made for great demonstrations, introduces you to some of the lesser-known features, and shows you best practices, how to increase serviceability and uptime, and design/usage tips for making the most of your investment in Hyper-V.

    Microsoft's New Windows Server Containers
    http://channel9.msdn.com/Events/Ignite/2015/BRK2493
    In this session, we cover what containers are, what makes them such an exciting technology, how they will work in Windows Server, and how Docker will integrate with them.

    An Insider’s Guide to Desktop Virtualization
    http://channel9.msdn.com/Events/Ignite/2015/BRK3853
    Ready to drink from a fire hose? In this highly energized session, learn about insights, best practices, and hear unfiltered thoughts about Desktop Virtualization, VDI, vendors, and solutions. Discussion topics include: VDwhy, VDCry, VDI Smackdown, building and designing a Microsoft VDI solution, and 3D graphics. Experience the Microsoft and Citrix Virtual Desktop solution with a huge amount of videos and demos. With unique content and insights, this session is fun and packed with great content for everyone interested in Desktop Virtualization—and some nice giveaways. A session you don’t want to miss!

    Shielded Virtual Machines
    =========================

    Harden the Fabric: Protecting Tenant Secrets in Hyper-V
    https://channel9.msdn.com/Events/Ignite/2015/BRK3457
    In today’s environments, hosters need to provide security assurance to their tenants. "Harden the fabric" is a Windows Server and Microsoft System Center vNext scenario which includes enhancements in Hyper-V, Virtual Machine Manager, and a new Guardian Server role that enables shielded VMs: technologies that ensure host resources do not have access to the virtual machine or its data.

    Platform Vision & Strategy (5 of 7): Security and Assurance Overview
    https://channel9.msdn.com/Events/Ignite/2015/BRK2482
    Come learn how Microsoft is addressing persistent threats, insider breach, organized cyber crime and securing the Microsoft Cloud Platform (on-premises and connected services with Azure). This includes scenarios for securing workloads, large enterprise tenants and service providers.

    Shielded VMs and Guarded Fabric Validation Guide for Windows Server 2016
    https://gallery.technet.microsoft.com/Shielded-VMs-and-Guarded-44176db3
    This document provides you an installation and validation guide for Windows Server 2016 Technical Preview (build #10074) and System Center Virtual Machine Manager vNext for Guarded Fabric Hosts and Shielded VMs. This solution is designed to protect tenant virtual machines from compromised fabric administrators.

    Windows 10
    ===========
    Top Features of Windows 10
    http://channel9.msdn.com/Events/Ignite/2015/BRK2339
    In this demo-heavy session, see why you need to start thinking: Windows 10. The answer to every question will be Windows 10, but what are the questions? How do you deliver a more secure standard operating environment? How do you make mobility familiar for all your users? What changes the deployment conversation? What changes the app conversation? How do you “mobilize” Win32 applications? What changes the way you manage device lifecycles? What changes how you buy your devices? There will be prizes, there will be fun and you’ll be ready, set for the rest of your Windows 10 experience at Microsoft Ignite.

    The New User Experience with Windows 10
    http://channel9.msdn.com/Events/Ignite/2015/THR0310
    Are you ready for Windows 10? Well, it was designed and developed based on feedback from millions of people around the world, so we think you probably are! Join us as we show you how Windows 10 combines the familiar things you love with a modern touch. Get a deeper look at the user experience and discover new features. Find out how Windows 10 makes you more productive, celebrates a new generation of apps, and unlocks the power of hardware.

    Upgrading to Windows 10: In Depth
    http://channel9.msdn.com/Events/Ignite/2015/BRK3307
    With Windows 10, we are encouraging everyone, including organizations, to upgrade from their existing OS (Windows 7, Windows 8, or Windows 8.1). This upgrade process is easy and reliable, but how exactly does it work? In this session, we dig deep and explore how the process works to ensure that everything (apps, settings, data, drivers) is preserved.

    Windows 10: Ask the Experts
    http://channel9.msdn.com/Events/Ignite/2015/BRK2320
    We’ve talked a lot about Windows 10 already. In this session, we hold an open Q&A, hosted by the always-entertaining Mark Minasi, where you can ask anything about Windows 10. No questions are off limits. So if you’ve still got questions and are looking for answers, bring them to this session.

    Provisioning Windows 10 Devices with New Tools
    http://channel9.msdn.com/Events/Ignite/2015/BRK3339
    A new feature in Windows 10, runtime provisioning will help to reduce the cost of deploying Windows PCs and devices such as tablets and phones. This new feature will enable IT professionals and system integrators to easily configure a general-purpose device during first boot or runtime, without re-imaging, for the organization's use. In this session, we look at the new tools that enable these scenarios and explore their capabilities and deployment options.

    Overview of Windows 10 for Enterprises
    http://channel9.msdn.com/Events/Ignite/2015/THR0342
    Windows 10 brings a wealth of new features and solutions to the enterprise. In this session, we explain the various security, management, and deployment features of Windows 10 along with showing you some of the new end-user features that will not only make your customers more productive but also delight them.

    Overview of Windows 10 for Enterprises
    http://channel9.msdn.com/Events/Ignite/2015/FND2901
    Windows 10 brings a wealth of new features and solutions to the enterprise. In this session, we explain the various security, management, and deployment features of Windows 10 along with showing you some of the new end-user features that will not only make your customers more productive but also delight them.

    Overview of Windows 10 for Education
    http://channel9.msdn.com/Events/Ignite/2015/BRK2305
    While Windows has always provided great learning outcomes for students and a comprehensive platform for teachers and administrators, there are several reasons why education customers in general should take notice of Windows 10. From the minimal learning curve user experience for mouse and keyboard users, to the familiar usability scaled across Windows 10 devices, teachers and students will be productive and comfortable from the start. In this session we explain how we are simplifying management and deployment, including in-place upgrades from Windows 7 or 8.1 and provisioning off-the-shelf devices without wiping and replacing images. Learn about benefits of the new, unified app store, allowing flexible distribution of apps.

    What's New in Windows 10 Management and the Windows Store
    http://channel9.msdn.com/Events/Ignite/2015/BRK3330
    Windows 10 continues to add new and improved management technologies, to ensure that Windows continues to be the best—and most flexible—operating system to manage. In this session, we talk about all the changes that are coming, including enhancements to built-in mobile device management protocols, new Windows Store and volume purchase program capabilities, sign-on capabilities with organizational IDs (Microsoft Azure Active Directory), sideloading and other app deployment enhancements, and new capabilities being added to other existing management technologies, such as PowerShell, WMI, etc.

    Windows Server 2016
    ===================

    Nano Server
    http://channel9.msdn.com/Events/Ignite/2015/THR0480
    Come hear about an important transformation in Windows Server: the new installation option called Nano Server. Nano Server is a deep rethink of the server architecture. The result is a new, lean cloud fabric host and application development platform that is 20x smaller than Server Core, with a reduced attack surface and fewer reboots!

    Deployment
    ============

    How Microsoft IT Deploys Windows 10
    http://channel9.msdn.com/Events/Ignite/2015/BRK3303
    Learn how Microsoft IT adopted and deployed Windows 10 internally using Enterprise Upgrade as the primary deployment method. This approach reduced the deployment overhead by using System Center Configuration Manager Operating System Deployment (OSD) and upgrade which resulted in significant reductions in helpdesk calls. In addition we share how we are leveraging some of the new Enterprise scenarios to delight users while securing the enterprise. You can realize similar benefits in your enterprise by adopting these best practices as you migrate from Windows 7 and 8.x to 10.

    Expert-Level Windows 10 Deployment
    http://channel9.msdn.com/Events/Ignite/2015/BRK4301
    Join us for a live demo on how to build a Windows deployment solution, based on Microsoft System Center Configuration Manager. In the session we are taking OS Deployment in Microsoft Deployment Toolkit and System Center Configuration Manager to its outer limits. Deployment tips, tricks, and hard core debugging in a single session. You can expect a lot of live demos in this session.

    Windows 10 Deployment: Ask the Experts
    http://channel9.msdn.com/Events/Ignite/2015/BRK3333
    Still have questions about Windows deployment, even after all the other sessions this week? For this session, we gather as many experts as we can find for a roundtable Q&A session, with plenty of “official” and “real-world” answers for everyone, troubleshooting and implementation advice, and probably a fair number of opinions and “it depends” answers as well.

    Preparing Your Infrastructure for Windows 10
    http://channel9.msdn.com/Events/Ignite/2015/BRK3325
    So you want to deploy Windows 10 in your organization? While many organizations will be able to do this with little impact, there are some scenarios and features that can impact existing server, management, and network infrastructures. In this session, we take a look at those impacts so you know what to expect.

    Deploying Windows 10: Back to Basics
    http://channel9.msdn.com/Events/Ignite/2015/BRK2316
    Are you new to Windows deployment, or maybe just rusty? In this session, we review the tools that are available, explain all the acronyms, and explore best practices for deploying Windows 10. During the process, we show all the key tools that we recommend for building and customizing Windows 10 images, deploying Windows 10 images, provisioning new computers, and migrating from older operating systems like Windows 7.

    What's New in Windows 10 Deployment
    http://channel9.msdn.com/Events/Ignite/2015/THR0322
    With the upcoming release of Windows 10, there will be new and updated ways to deploy Windows. In this session, we review new recommendations for upgrading existing devices using a simple in-place upgrade process, provisioning tools for transforming new devices into ones ready for enterprise use, as well as updates to traditional deployment tools and techniques (ADK and beyond). We also talk about application compatibility, hardware requirements, and other common deployment questions.

    What's New in Windows 10 Deployment
    http://channel9.msdn.com/Events/Ignite/2015/BRK3321
    With the upcoming release of Windows 10, there will be new and updated ways to deploy Windows. In this session, we review new recommendations for upgrading existing devices using a simple in-place upgrade process, provisioning tools for transforming new devices into ones ready for enterprise use, as well as updates to traditional deployment tools and techniques (ADK and beyond). We also talk about application compatibility, hardware requirements, and other common deployment questions.

    Deploying Microsoft Surface Pro 3 in the Enterprise
    http://channel9.msdn.com/Events/Ignite/2015/BRK3327
    You have chosen Surface Pro 3 for your organization. Now, get the tips and tricks directly from engineers who built it. This session offers useful information on how you can deploy, manage, and support these devices throughout your org like a jedi master.

    Troubleshooting Windows 10 Deployment: Top 10 Tips and Tricks
    http://channel9.msdn.com/Events/Ignite/2015/BRK3318
    Need help with troubleshooting Windows deployment issues? Johan and Mikael share lessons learned around handling device drivers in the deployment process, common deployment issues and their workarounds, parsing log files, WinPE and PXE troubleshooting, UEFI deployments. As a foundation, Microsoft Deployment Toolkit and Microsoft System Center Configuration Manager will be used. You can expect a lot of live demos, tips, and tricks in this session.

    Preparing for Windows 10 Deployment: Assessment, Compatibility, and Planning
    http://channel9.msdn.com/Events/Ignite/2015/BRK3334
    Before you can deploy Windows 10, you need to make sure your organization is ready. That requires information gathering, compatibility analysis, project management, and piloting – an iterative process. In this session, we talk about tools to help with common concerns around app and hardware compatibility, web compatibility, readiness for upgrades, and more.

    Enjoy,
    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Cloud Group

  • Azure DNS Server Redundancy

    Customers may observe that their PaaS role instances and IaaS virtual machines are only issued one DNS server IP address by DHCP. However, this does not mean that name resolution in Azure has a single point of failure. The Azure DNS infrastructure is ...read more
  • What is the IP address 168.63.129.16?

    The IP address 168.63.129.16 is a virtual public IP address that is used to facilitate a communication channel to internal platform resources for the bring-your-own IP Virtual Network scenario. Because the Azure platform allows customers to define any ...read more
  • Task Scheduler "A task or folder with this name already exists"

    Hello AskPerf! Blake here with a quick blog to discuss an issue I’ve seen more frequently over the past few months. Here is the Scenario:

    When you try and create a new Scheduled Task via the command line (schtasks.exe), the following error appears:

    "WARNING: The task name "PERFTEST" already exists. Do you want to replace it (Y/N>?"

    If you hit Y, then this message will appear:

    "ERROR: Cannot create a file when that file already exists."


    When you try and create the same task via the taskschd.msc snap-in, this message is displayed:

    "An error has occurred for task test.  Error message: A task or folder with this name already exists."


    When you click OK, the following error appears:

    "Transaction support within the specified resource manager is not started or was shut down due to an error"


    After you click OK, the task is not created.

    Internal research, as well as reports out on the Internet, suggests that the Transaction Log is corrupted. To fix this, do the following:

     

    1. Open up an elevated CMD prompt
    2. Type in the following and hit enter: "fsutil resource setautoreset true c:\"
    3. Reboot
    4. After your machine reboots, you should be able to create new Scheduled Tasks
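    The steps above boil down to a single command. As a minimal sketch (a hypothetical wrapper, shown in Python purely for illustration), this builds the `fsutil` invocation for a given drive:

```python
import subprocess  # used only on the affected machine; see comment below

def build_reset_command(drive="c:\\"):
    """Return the fsutil command that tells the transactional resource
    manager on the given drive to reset its transaction log at next mount."""
    return ["fsutil", "resource", "setautoreset", "true", drive]

# On the affected machine, from an elevated prompt, you would run:
#     subprocess.run(build_reset_command(), check=True)
# and then reboot.
```

    The same command can of course be typed directly at the elevated prompt, as in step 2.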

    NOTE: I’ve only seen this on Windows Server 2008 R2 SP1 thus far; I will update this blog post if it is seen on other operating systems down the line.


    -Blake

  • WMIDiag 2.2 is here!

    Hello AskPerf blog readers! Jeff here from the Windows Performance Team once again. I am happy to announce that the new version of WMIDIAG is finally here. It’s now compatible with Windows 8/8.1 as well as Server 2012/2012 R2. As some of you may know, the previous version reported a lot of errors, the majority of which were false positives caused simply by WMI class name changes between OS versions. That has all been cleaned up: when you run the new version, the output should look a lot cleaner, and the errors you do see should be accurate and deserving of attention.

    The WMI Diagnosis Tool is a VBScript-based tool for testing, validating, and analyzing WMI installations and issues. The tool collects data from WMI installations on all Microsoft operating systems, at any or no service pack level.

    WMI Diagnostics 2.2 requires you to have Local Administrator rights as well as Windows Script Host (WSH) enabled.

    To download this tool, please click here.

    After you download WMIDiag.exe, run it and extract the files to a local folder. If you double-click WMIDiag.vbs, it runs under the default Windows-based script host (wscript.exe), which reports its progress through message boxes rather than the console.

    To see its activity in the console instead, run “cscript WMIDiag.vbs” from the command prompt, or change the default script host to the command line by running “cscript //H:CScript”.

    Note: By default, WMIDiag does not check repository consistency; you need to request that manually from the command prompt by running “cscript WMIDiag.vbs checkconsistency”.

    WMIDIAG can be run from Windows Explorer, or from the command line. Each time it runs, the WMI Diagnosis Tool creates the following three files in the %TEMP% directory:

    • .LOG file containing all the WMI Diagnosis Tool activity as well as a WMI report at the end
    • .TXT file containing the WMI Diagnosis Tool report
    • .CSV file containing statistics that can be used to measure trends and issues
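    Since the reports land in %TEMP% with timestamped names, it can be handy to locate the newest set programmatically. A minimal sketch (the helper name is made up; it assumes the default WMIDIAG-* file naming):

```python
import glob
import os

def latest_wmidiag_reports(temp_dir):
    """Map each report extension to the newest matching WMIDIAG-* file."""
    reports = {}
    for ext in (".LOG", ".TXT", ".CSV"):
        matches = glob.glob(os.path.join(temp_dir, "WMIDIAG*" + ext))
        if matches:
            reports[ext] = max(matches, key=os.path.getmtime)
    return reports

# %TEMP% on Windows; falls back to the current directory elsewhere.
reports = latest_wmidiag_reports(os.environ.get("TEMP", "."))
```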

    When the WMI Diagnosis Tool terminates, the ERRORLEVEL environment variable is set to one of the following values:

    0 = SUCCESS

    • WSH has a script execution timeout setup (in machine or system environment)
    • Machine reports suspicious improper shutdowns
    • User Account Control (UAC) status is reported (Vista and above)
    • Local account token filter policy is reported (Vista and above)
    • Unexpected binaries in the WBEM folder
    • The Windows Firewall is enabled
    • Some services installed on the machine are dependent on the WMI service (e.g., "SMS Agent")
    • WMI ADAP has a status different than 'running'
    • Some WMI namespaces require a packet privacy encryption for a successful connection
    • Some WMI permanent subscriptions or timer instructions are configured
    • Some information about registry key configurations for DCOM and/or WMI was reported

    1 = ERROR

    • System32 or WBEM folders are not in the PATH
    • WMI system file(s)\ repository is/are missing
    • WMI repository is inconsistent (XP SP2, 2003 SP1 and above)
    • DCOM is disabled
    • WMI service is disabled
    • The RPCSS and/or the WMI service(s) cannot be started
    • WMI DCOM setup issues
    • Expected default trustee or ACE has been removed from a DCOM or WMI security descriptor
    • The ADAP status is not available
    • One or more WMI connections failed
    • Some GET operations\WMI class MOF representations\WMI qualifier retrieval operations failed
    • Some critical WMI ENUMERATION operations\WMI EXECQUERY\WMI GET operations failed
    • Some WRITE operations in the WMI repository\PUT\DELETE operations failed
    • One of the queries of the event log entries for DCOM, WMI and WMIADAPTER failed
    • Some critical registry key configurations for DCOM and/or WMI were reported

    2 = WARNING

    • System32 or WBEM folders are further in the PATH string than the maximum system length
    • System drive and/or Drive type reporting are skipped
    • DCOM has an incorrect default authentication level (other than 'Connect')
    • DCOM has an incorrect default impersonation level (other than 'Identify')
    • WMI service has an invalid host setup
    • WMI service (SCM configuration) has an invalid registry configuration
    • Some WMI components have a DCOM registration issue
    • WMI COM ProgID cannot be instantiated
    • Some WMI providers have a DCOM registration issue
    • Some dynamic WMI classes have a registration issue
    • Some WMI providers are registered in WMI but their registration lacks a CLSID
    • Some WMI providers have a correct CIM/DCOM registration but the corresponding binary file cannot be found
    • A new ACE or Trustee with a denied access has been modified to a default trustee of a DCOM or WMI security descriptor
    • An invalid ACE has been found for an actual DCOM or WMI security descriptor
    • WMI ADAP never ran on the examined system
    • Some WMI non-critical ENUMERATION operations failed\skipped
    • Some WMI non-critical EXECQUERY operations failed\skipped
    • Some non-critical WMI GET VALUE operations failed
    • Some WMI GET VALUE operations were skipped (because of an issue with the WMI provider)
    • The WRITE operations in the WMI repository were not completed
    • The information collection for the DCOM, WMI and WMIADAPTER event log entries was skipped
    • New event log entries for DCOM, WMI and WMIADAPTER were created during the WMI Diagnosis Tool execution
    • Some non-critical registry key configurations for DCOM and/or WMI were reported

    3 = Command Line Parameter errors

    4 = User Declined (Clicked the Cancel button when getting a consent prompt)

    • WMIDiag is started on an unsupported build or OS version
    • WMIDiag has no Administrative privileges
    • WMIDiag is started in Wow environment (64-bit systems only)
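    If you run WMIDiag from automation, you can branch on these values. A minimal sketch, modeled in Python for illustration (the mapping mirrors the list above; the helper name is made up):

```python
# Hypothetical mapping of the documented WMIDiag ERRORLEVEL values to a
# one-word summary, for use in a deployment or monitoring script.
WMIDIAG_EXIT_CODES = {
    0: "SUCCESS",
    1: "ERROR",
    2: "WARNING",
    3: "PARAMETER_ERROR",  # command line parameter errors
    4: "DECLINED",         # user declined, or unsupported environment
}

def summarize_exit_code(errorlevel):
    """Translate an ERRORLEVEL into a label; unknown values map to UNKNOWN."""
    return WMIDIAG_EXIT_CODES.get(errorlevel, "UNKNOWN")

print(summarize_exit_code(2))  # WARNING
```

    In a batch file, the equivalent check would read %ERRORLEVEL% after the cscript call completes.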

    When you run the WMI Diagnosis Tool from the command line:

    C:\>CSCRIPT WMIDiag.vbs


    The generated report (for example, “%TEMP%\WMIDIAG-V2.2_WIN8.1_CLI.RTM.64_MYPC_2015.05.11_15.02.30-REPORT.TXT”) contains two types of entries:

    • WARNING - Information that is useful if certain actions are executed
    • ERROR - Problems that need to be solved to avoid errors reported by WMI

    WMI DIAG 2.2 FAQ:

    1. Where can I get the WMI Diagnosis Tool?

    The WMI Diagnosis Tool can be downloaded from the Microsoft Download Center at http://www.microsoft.com/en-us/download/details.aspx?id=7684. More information about the WMI Diagnosis Tool usage can be found in the document (WMIDiag.doc) which comes along with the download.

    2. Is the tool supported?

    There is no official support for the WMI Diagnosis Tool.

    3. Can the WMI Diagnosis Tool diagnose a remote computer?

    The WMI Diagnosis Tool is not designed to diagnose remote computers, because WMI remote access itself relies on the WMI infrastructure. Since the aim of the WMI Diagnosis Tool is to diagnose WMI, the tool does not use WMI to perform its core operations, which is why it must be run locally. However, the WMI Diagnosis Tool can be deployed remotely using Group Policy, Systems Management Server (SMS), or Microsoft Operations Manager (MOM) via a Management Pack. Starting with Windows Vista, the WMI Diagnosis Tool can also be executed remotely through WinRM/WinRS, provided you configure and enable these features (WinRM/WinRS are not enabled by default). The Microsoft Sysinternals tool PsExec can also be used.

    4. Does the WMI Diagnosis Tool fix problems it discovers?

    No. The WMI Diagnosis Tool executes in read-only mode. Even though the WMI Diagnosis Tool diagnoses the situation and provides procedures to fix problems, at no time does the tool automatically fix a problem. This is by design, because the correct repair procedure depends on the context, the usage, and the list of applications installed on the computer.

    I hope this new tool helps you identify potential WMI issues in your environment. Don’t forget to read the support document (WMIDiag.doc) included in the WMIDIAG 2.2 download.

    -Jeff

  • Multiple per-device RDS CALs issued to the same device…

    Hello AskPerf! Ishu Sharma here again from the Microsoft Performance team.  Today I will be discussing an issue where multiple per-device Remote Desktop Services CALs are issued to the same device.
    Before we dive into this topic, let's review the following facts about RDS Per Device licensing.

    If an unlicensed client connects to a Remote Desktop Server for the first time, the Remote Desktop Licensing Server issues the client a temporary RDS Client Access License (CAL). After the user has logged into the session, the RDS server instructs the License Server to mark the issued temporary RDS CAL token as being validated. The next time the client connects, an attempt is made to upgrade the validated temporary RDS CAL token to a full RDS CAL token. If no license tokens are available, the temporary RDS CAL token will continue to function for 90 days.
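    The lifecycle above can be sketched as a toy model (all names are hypothetical; this is not the real licensing service logic):

```python
# Toy model of the temporary-CAL lifecycle: a first connection yields a
# temporary CAL; a later connection upgrades it to a full CAL when the
# license server has one available; otherwise the temporary token keeps
# working for up to 90 days.
TEMP_CAL_LIFETIME_DAYS = 90

def connect(client, available_full_cals):
    """Simulate one connection; return (license state, remaining full CALs)."""
    if client.get("cal") is None:
        client["cal"] = "temporary"      # first connection: temporary CAL
    elif client["cal"] == "temporary" and available_full_cals > 0:
        client["cal"] = "full"           # upgrade the validated temp token
        available_full_cals -= 1
    return client["cal"], available_full_cals

client = {}
print(connect(client, 0))  # ('temporary', 0)
print(connect(client, 0))  # ('temporary', 0) - no full CALs available yet
print(connect(client, 5))  # ('full', 4)
```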
    When a client device receives an RDS Device CAL from an RDS Host, it receives it in the form of a digital certificate from a license server. That certificate is saved under the following locations on the licensing server:

    [HKLM\Software\Microsoft\TermServLicensing\Certificates]
    [HKLM\System\CurrentControlSet\services\TermservLicensing\Parameters\Certificates.000]
    [HKLM\System\CurrentControlSet\services\TermservLicensing\Parameters\Certificates.001]

    The digital certificate is an actual certificate copied to the client device. Once a client device connects to an RDS Host, an RDS CAL digital certificate is transferred from the license server to the client device. The license server loses one of its licenses from its inventory, and the client device has the digital certificate that it can present to any RDS Host on future connections.

    Clients store their license under the key:

    [HKEY_LOCAL_MACHINE\Software\Microsoft\MSLicensing]

    The MSLicensing key contains two sub-keys used to store both unique client-specific information and any license certificates obtained from license servers.

    HardwareID
    Store

    HardwareID stores a random 20-byte identifier specific to the client machine, generated automatically by Windows. This ID uniquely identifies the machine to the license server. When a client is allocated an RDS CAL from the license server, this HardwareID is recorded in the licensing database to associate the client with the CAL. This entry is made whether clients are allocated temporary CALs or permanent licenses.


    Store holds the Terminal Services CALs allocated from the license server.  Entries are contained in sub-keys named License00X, where X is a numerical ID beginning with 0.  Each License00X entry contains a separate CAL.

    The License00X entry contains four binary components that comprise a Terminal Services CAL certificate:

    • ClientLicense
    • CompanyName
    • LicenseScope
    • ProductID

    Every time the client device connects to an RDS Host, it presents its RDS CAL certificate to the server. The server checks not only whether the client device has a valid certificate, but also the expiration date of that certificate. If the expiration date of the certificate is within 7 days of the current date, the RDS Host connects to the license server to renew the license for another random period of 52 to 89 days.
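    The renewal check above can be sketched as a minimal model (constants taken from the text; the function name is made up):

```python
import datetime
import random

RENEW_WINDOW_DAYS = 7   # the host renews a CAL expiring within 7 days
RENEW_MIN_DAYS = 52     # renewal period is a random 52 to 89 days
RENEW_MAX_DAYS = 89

def maybe_renew(expires_on, today):
    """Return the certificate's (possibly new) expiration date."""
    if (expires_on - today).days <= RENEW_WINDOW_DAYS:
        days = random.randint(RENEW_MIN_DAYS, RENEW_MAX_DAYS)
        return today + datetime.timedelta(days=days)
    return expires_on

today = datetime.date(2015, 5, 11)
near = maybe_renew(today + datetime.timedelta(days=5), today)   # renewed
far = maybe_renew(today + datetime.timedelta(days=60), today)   # unchanged
```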

    Ideally, each client device should be issued only one RDS CAL. However, there are times when License Server Manager shows multiple per-device CALs issued to the same device.

    Now this is intriguing! Why is the same device consuming multiple RDS CALs? Administrators usually notice this issue when they start running out of Per Device CALs; when they check the list of issued Per Device CALs in RD Licensing Manager, they find that multiple RDS CALs have been issued to the same device.
    To work around this temporarily, you can revoke licenses, but the catch is that you can only revoke 20% of the CALs at one time. This may not help if you have very few CALs left and multiple Per Device CALs are being allocated to multiple machines.
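
    As a rough sketch of that cap (the 20% figure comes from the text; rounding down is an assumption for illustration):

```python
def max_revocable(installed_cals: int) -> int:
    # At most 20% of the installed Per Device CALs can be revoked at one time.
    # Integer division rounds down; the real server's rounding is an assumption here.
    return installed_cals // 5

print(max_revocable(100))  # 20 of 100 CALs can be revoked at once
print(max_revocable(7))    # only 1 of 7
```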

    Below are the possible reasons which can cause this issue:

    1.    If you have built multiple machines using the same image:

    a)    You may have used a Sysprepped image or Citrix-provisioned machines where the HardwareID was captured in the image, so every device built from that image got the same hardware ID. This results in the situation below:

      • If Client 1, which has HWID xxx, logs into the RDS Host, it gets license 1
      • Then Client 2, which also has HWID xxx, logs in; it does not hold license 1, so it is issued a new license, license 2
      • If Client 1 tries to log in again, HWID xxx is now associated with license 2, which Client 1 does not hold, so Client 1 is issued a new license, license 3
      • Now HWID xxx is associated with license 3
      • Every time that HWID logs in, no matter which machine it is, its license is compared to what is in the database for HWID xxx
      • That's where the problem comes in -- machines are constantly getting new licenses, even when they aren't needed.
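
    The churn described in the bullets above can be simulated in a few lines (an illustrative sketch only, not the actual license server logic; all names and data structures are made up):

```python
# Two clones share one HardwareID, so the license server's database keeps a
# single "current license" per HWID. Whenever the connecting client's stored
# license doesn't match that entry, a brand-new CAL is issued.

issued_cals = []   # every CAL the server has ever handed out
lsdb = {}          # license server DB: HWID -> most recently issued CAL

def connect(hwid, client_license):
    """Return the license the client ends up holding after connecting."""
    if client_license is not None and lsdb.get(hwid) == client_license:
        return client_license                     # matches the DB: reuse it
    new_cal = f"license-{len(issued_cals) + 1}"   # otherwise issue a new CAL
    issued_cals.append(new_cal)
    lsdb[hwid] = new_cal
    return new_cal

lic1 = connect("xxx", None)   # Client 1 gets license-1
lic2 = connect("xxx", None)   # Client 2 (same HWID) gets license-2
lic1 = connect("xxx", lic1)   # Client 1 returns: DB says license-2 -> gets license-3
print(issued_cals)            # three CALs burned by what looks like one device
```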

    Resolution: Rectify the image itself and use a Sysprepped image that does not have the MSLicensing key information of the original machine hardcoded into it.

    b)    You use Citrix-provisioned machines, where all machines boot from a pre-defined image and all changes are lost on reboot. Each machine therefore gets a new ClientHWID when it connects, which is lost on the next boot; the next time the machine connects to the RDS Host it presents yet another ClientHWID, so another RDS license is issued. The license server sees each new hardware ID as a different device and issues duplicate licenses.

    Resolution: Use Per User RDS licensing in these scenarios; the license follows the user rather than the device, so the rotating hardware IDs do not affect the number of licenses issued.

    2.    This can also happen if you have a script in place that deletes the MSLicensing key at shutdown.

    Resolution: Remove the script.

    3.    Different machines using the same name:

    If machines are cloned, third-party cloning tools sometimes fail to wipe all the stale information, so the cloned clients, although on different hardware, present the same computer name to the RDS Host.

    Though the hardware IDs are different, if two machines have the same name, Licensing Manager makes it look as if one device is using multiple CALs, when in fact it is not.

    4.    Machine was re-built:

    If a machine that was once issued a CAL is re-built, the fresh installation generates a new hardware ID, so when the machine connects to the Remote Desktop server again it is issued another CAL.

    Assume that a client device successfully authenticates to an RDS Host and is granted a full RDS CAL certificate that was (worst case) randomly selected to expire at the 89-day maximum. When it passes down the certificate, the license server decrements its total RDS CAL count by one, also noting that particular certificate's expiration date. Now, assume that a catastrophic event occurs at the client, causing its local operating system to be reinstalled and its local RDS CAL certificate to be lost. When that client authenticates to an RDS Host again, the RDS Host requests a new RDS CAL certificate from the license server, and the license server (again) decrements its RDS CAL inventory by one. At this point two RDS CALs have been given out to that one client, but the first one will never be renewed because the certificate was lost when the client was rebuilt. After 89 days (the randomly selected duration of the first certificate), the first RDS CAL is returned to the pool by the license server.

    Resolution: The old CAL is freed automatically 52 to 89 days after being issued, or you can simply revoke the old CAL.

    5.     Multiple hardware IDs in the MSLicensing registry key of the client machine:

    This can happen if the license has been corrupted: a new hardware ID is generated automatically for the client during the next RDS Host logon, so you may notice duplicate CALs for that device.

    Resolution: To determine which entry to delete, open PowerShell "As Administrator" on the RDS license server and run the following command: get-wmiobject Win32_TSIssuedLicense | export-csv [outputfile]
    In the output file, find the client that has been issued multiple licenses, then record the hardware ID from the license that is not the most recently issued.
    Then, on the client, open the registry, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing\HardwareID, check that the ClientHWID matches the one you just recorded, and delete the HardwareID subkey.
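
    Once you have the CSV from get-wmiobject (adding export-csv's -NoTypeInformation parameter keeps the header row clean), a short script can flag the clients holding more than one license. A sketch: sIssuedToComputer and sHardwareId are the Win32_TSIssuedLicense property names I would expect here, but verify the column headers in your own export.

```python
import csv
from collections import defaultdict

def find_duplicates(csv_path):
    """Group issued licenses by client machine name and return the machines
    holding more than one CAL, together with the hardware IDs involved."""
    by_machine = defaultdict(list)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            by_machine[row["sIssuedToComputer"]].append(row["sHardwareId"])
    return {name: hwids for name, hwids in by_machine.items() if len(hwids) > 1}
```

    Any machine returned with two or more hardware IDs is a candidate for the cleanup described above.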

    Data collection

    1.    Look at TerminalServices-Licensing event logs.
    2.    Generate a Per Device RDS CAL report to verify whether the issue is caused by multiple hardware IDs issued to the same machine, the same hardware ID issued to different machines, or duplicate machine names with different hardware IDs.

    Script for RDS Per Devices CALs (PowerShell)

    The script shows the Keypack ID, License ID, and name of the client device, along with the hardware ID and expiration date of each CAL, as shown below.

    image

    3. Use the RDS Client License Test tool (TSCTST.EXE), provided with the Windows Server 2003 Resource Kit, on the client machine for which you see multiple CALs to display details about the license token residing on that client. It is a command-line utility that displays the following information by default:

    • Issuer
    • Scope
    • Issued to computer
    • Issued to user
    • License ID
    • Type/Version
    • Valid From
    • Expires On

    By using the /A switch, the following additional information is displayed:

    • Server certificate version
    • Licensed product version
    • Hardware ID
    • Client platform ID
    • Company name

    4. If you are still unable to find the cause, a Microsoft support professional can help you collect an RDS Licensing ETL trace while reproducing the issue. The ETL trace shows which name/HWID was used to request new licenses.

    Quick Workarounds

    1.    If all Per Device CALs are exhausted and you are still working to find the cause of multiple RDS CALs being issued to the same device, you can temporarily change the licensing mode to Per User to allow remote sessions. However, this should not be standard practice, as it is a breach of the Microsoft licensing agreement.

    2.    Regenerate the ClientHWID, rebuild the license server database (KB273566), and reinstall the CAL packs to restore all the CALs.

    The hardware ID can be regenerated by deleting the below keys manually:

    Reg Delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing\HardwareID /f

    Reg Delete HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSLicensing\Store /f

    The next time, you need to take a remote session as an admin to regenerate the hardware ID, as normal users do not have permissions on this registry key. Alternatively, you can use a tool (RegenerateHDWID) to regenerate the hardware IDs on the fly.

    -Ishu

  • 2012 R2 License Server issuing Built-in OverUsed CALs for 2008 R2 Session Host Servers

    Hello AskPerf! My name is Prachi Singh and today I will be talking about a behavior that can occur when users attempt to pull licenses from a 2012 R2 License server via a 2008 R2 Session Host. Under these circumstances, you may see a line item in your 2012 R2 license manager that says “Windows Server 2008 or Windows Server 2008 R2 -Installed TS or RDS Per User CAL”. Under “License Program” you then see “Built-in Overused”.

    clip_image002

    In the case above, the license server is used to issue RDS CALs to users when they connect to both Windows Server 2008 R2 and Windows Server 2012 R2 Session Host Servers. When a user connects to a Windows Server 2012 R2 Session Host, a Windows Server 2012 "per User" RDS CAL is issued.

    However, when a user connects to a Windows Server 2008 R2 RDS Server, a Windows Server 2008 R2 "Built-in OverUsed" RDS CAL category appears and shows the value only for the issued RDS CAL. The "Total" and "Available" values remain 0. Additionally, the issued RDS CAL amount is not deducted from the total Windows Server 2012 RDS CALs.

    What is the "Built-in OverUsed" group and is it ok to have it?

    The "Built-In Overused" group was also used in earlier operating systems when the licensing mode was set to Per User but no Per User CALs were installed on the license server and users still connected to the terminal servers. This was an indication to admins that they must install licenses. After the applicable licenses are installed, this group goes away and the number of licenses issued is synchronized with the installed license group.

    Why are Windows Server 2008 R2 RDS CALs not deducted from the installed Windows Server 2012 RDS CALs?

    By default, a license server attempts to provide the most appropriate RDS CAL for a connection. For example, a license server running Windows Server 2008 R2 tries to issue a Windows Server 2008 R2 RDS CAL for clients connecting to an RD Session Host server running Windows Server 2008 R2, and a Windows Server 2003 TS CAL for clients connecting to a terminal server running Windows Server 2003. If the most appropriate RDS CAL is not available, a license server running Windows Server 2008 R2 issues a Windows Server 2008 R2 RDS CAL, if available, to a client connecting to a terminal server running Windows Server 2003 or Windows Server 2000.
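
    That preference order can be pictured as a tiny matching routine (purely illustrative; the version labels, pool layout, and function name are invented, and the real license server logic is more involved):

```python
# Version-ordered CAL pools on the license server (counts of available CALs).
CAL_ORDER = ["2003", "2008 R2", "2012"]   # oldest -> newest

def pick_cal(pools, session_host_version):
    """Prefer the CAL matching the session host's version; otherwise fall back
    to a newer version with CALs available (newer CALs are valid for older
    hosts, never the reverse)."""
    start = CAL_ORDER.index(session_host_version)
    for version in CAL_ORDER[start:]:
        if pools.get(version, 0) > 0:
            pools[version] -= 1
            return version
    return None   # nothing suitable is installed

pools = {"2008 R2": 0, "2012": 20}
print(pick_cal(pools, "2008 R2"))   # no 2008 R2 CALs -> a 2012 CAL is issued
```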

    Why are the "Built-In Overused" RDS CALs “issued” counted but not the “total” and “remaining” too?

    Starting with the Windows Server 2012 R2 license server, when only Windows Server 2012 RDS CALs are installed and a user logs on to a Windows Server 2008 R2 RDS server, the "Built-in OverUsed" group is displayed and the user gets a 2008 R2 "Built-In Overused" RDS CAL. In this case it is just a reporting mechanism indicating how many users have logged in without an appropriate CAL. It makes visible to admins that 2012 licenses were issued for older terminal servers for which no dedicated (in this case 2008 R2) RDS CALs are installed.

    Since this group is displayed separately, the number of licenses is not deducted directly from the 2012 RDS CAL group. The "Built-In Overused" group displays only the number of licenses issued, with no "Remaining" or "Total", because in the background no 2008 RDS CALs are actually installed. The "Built-in Overused" column represents the number of user connections to Windows Server 2008 R2 servers for which a Per User license was issued.

    Do you need to install additional Windows Server 2008 R2 RDS CALs too, or is this a compatibility behavior?

    Server 2012 RDS requires a Server 2012 RD Licensing server.  A 2012 RD Licensing server will serve 2012/2008 R2/2008/2003 servers, so you may consolidate your RDS CALs onto a Server 2012 RD Licensing server if you would like to.

    RDS CALs are backward compatible, not forward compatible: Windows Server 2012 CALs will work with Server 2008 R2, but not the reverse.

    Windows Server 2012 RDS CALs can be issued to 2003 and 2008/R2 terminal servers. For more detail, see the article below:

    RDS and TS CAL Interoperability Matrix

    clip_image004

    The above screenshot shows 4 users connecting to the 2008 R2 Session Host server and 1 user connecting to 2012 R2. For reporting purposes, the admin has the total number of issued RDS CALs (Built-in OverUsed + 2012 RDS CALs) and should make sure that this total does not exceed the number of installed RDS CALs.

    The RDS CAL reports will contain information about both (Built-in Overused + 2012 RDS CALs).

     

    RD License Server: ******LAB-DC

    Report Date:

    | CAL Version | CAL Type | Installed CALs | CALs in Use | CAL Availability |
    | --- | --- | --- | --- | --- |
    | Windows Server 2008 or Windows Server 2008 R2 | TS or RDS Per User CAL | 0 | 4 | None |
    | Windows Server 2012 | RDS Per User CAL | 20 | 1 | Available |

    Successful Per User License Issuance Detail

    | Issued to User | CAL Version | CAL Type | Expires On |
    | --- | --- | --- | --- |
    | PerfNation.com\User1 | Windows Server 2008 or Windows Server 2008 R2 | TS or RDS Per User CAL | Sunday, May 10, 2015 8:57:24 PM |
    | PerfNation.com\User2 | Windows Server 2008 or Windows Server 2008 R2 | TS or RDS Per User CAL | Sunday, May 10, 2015 9:04:53 PM |
    | PerfNation.com\User3 | Windows Server 2008 or Windows Server 2008 R2 | TS or RDS Per User CAL | Monday, May 11, 2015 1:13:27 PM |
    | PerfNation.com\User4 | Windows Server 2008 or Windows Server 2008 R2 | TS or RDS Per User CAL | Monday, May 11, 2015 1:14:35 PM |
    | PerfNation.com\User6 | Windows Server 2012 | RDS Per User CAL | Thursday, May 14, 2015 1:21:11 PM |

    No Per User License Issuance has failed

    No Per Device License has been issued

    Are the "Built-In Overused" RDS CALs handled like any other CALs, especially regarding license renewal?

    Per User RDS CALs are valid for 60 days but are extended automatically if the user logs on to the RDS server again. If the license a user holds is within seven days of expiring, the RD Session Host server attempts to obtain a new license for that user at each logon. If the server cannot find a license server to renew the license before it expires, or no license is available, the license expires. If the license server has licenses available, it issues one to the user. "Built-in OverUsed" Per User CALs behave exactly like all other "normal" Per User RDS CALs in this respect.

    When a user who was issued a "Built-In Overused" RDS CAL logs on to a Windows Server 2012 R2 RDS server, the built-in overused CAL gets converted to a 2012 RDS CAL. Once converted, the user continues using the 2012 RDS CAL even when connecting to a 2008 R2 RDS server (once "upgraded", the license is never "downgraded").

    clip_image006

    clip_image008

    The report will look something like this:

    CAL Usage Report

    RD License Server: ******LAB-DC

    Report Date: Monday, March 16, 2015 6:17:51 PM

    | CAL Version | CAL Type | Installed CALs | CALs in Use | CAL Availability |
    | --- | --- | --- | --- | --- |
    | Windows Server 2008 or Windows Server 2008 R2 | TS or RDS Per User CAL | 0 | 0 | None |
    | Windows Server 2012 | RDS Per User CAL | 20 | 5 | Available |

    Successful Per User License Issuance Detail

    | Issued to User | CAL Version | CAL Type | Expires On |
    | --- | --- | --- | --- |
    | PerfNation.com\User6 | Windows Server 2012 | RDS Per User CAL | Thursday, May 14, 2015 1:21:11 PM |
    | PerfNation.com\User1 | Windows Server 2012 | RDS Per User CAL | Friday, May 15, 2015 12:27:38 PM |
    | PerfNation.com\User4 | Windows Server 2012 | RDS Per User CAL | Friday, May 15, 2015 12:36:11 PM |
    | PerfNation.com\User2 | Windows Server 2012 | RDS Per User CAL | Friday, May 15, 2015 12:38:37 PM |
    | PerfNation.com\User3 | Windows Server 2012 | RDS Per User CAL | Friday, May 15, 2015 12:40:01 PM |

    No Per User License Issuance has failed

    No Per Device License has been issued

     

    -Prachi

  • Troubleshoot ADFS 2.0 with these new articles

    Hi all, here's a quick public service announcement to highlight some recently published ADFS 2.0 troubleshooting guidance. We get a lot of questions about configuring and troubleshooting ADFS 2.0, so our support and content teams have pitched in to ...read more
  • Migrating User Profile Disks in Remote Desktop Services

    Good morning AskPerf!  This is Sree Krishna and Ramesh from Remote Desktop Services Team. Today we will discuss User Profile Disk migrations.

    As you may know, Microsoft released a new feature to manage user profiles in Remote Desktop Services (RDS) deployments called User Profile Disks. User Profile Disks (UPD) store user and application data on a single virtual disk that is dedicated to one user.

    UPD takes advantage of NTFS attributes to control object permissions. Since every user has their own user profile disk, each disk is created with explicit permissions. In other words, when a user profile disk is created, its ACL (Access Control List) carries the following default permissions:

    a) SYSTEM

    b) Administrator

    c) <User account to which the User profile disk belongs >

    All other user permissions are removed so that the user profile disk cannot be accessed by anyone other than the corresponding user.

    Why would this be an issue?

    Let's consider the below scenario:

    • You have the user profile disks created on Drive A:.
    • For some reason, such as space constraints, a server migration, or a data migration, you are forced to move your User Profile Disks to a different location.

    This is when you should consider the Windows NT permission architecture.

    Scenario 1

    You plan to move the UPD files within the same volume.

    If so, then you have nothing to worry about, as a move within the same volume retains the existing permissions.

    Scenario 2

    On the other hand, you move the files between different volumes (for example, from a volume on Drive A: to a volume on Drive B:). Here the Windows NT permissions architecture is not going to favor you.

    Below is a summary of how Copy and Move behave on the same volume versus different volumes:

     

    |  | Same Volume | Different Volume |
    | --- | --- | --- |
    | Copy | Security attributes are NOT retained | Security attributes are NOT retained |
    | Move | Security attributes are retained | Security attributes are NOT retained |

    The differences and conditions are neatly outlined in the article below:

    How permissions are handled when you copy and move files and folders

    All this being said, when a User Profile Disk loses its default permissions (especially in Scenario 2), you will likely end up with problems in your Remote Desktop Services environment.

    Symptoms you may notice:

    If the User Profile Disk loses the permission for its corresponding user, that user will be logged on with a temporary profile. The problem with this is that none of the user's profile settings will be available, and any changes made in that session will be lost.

    clip_image002

    Additionally, you may see Event ID 1511 recorded in the Event Viewer for every logon attempt.

    Mitigation options:

    HOW TO: Copy a Folder to Another Folder and Retain its Permissions

     

    • To preserve permissions when files and folders are copied or moved, use the Xcopy.exe utility with the /O and /X switches. The object's original permissions are added to the inheritable permissions of the new location.
    • To preserve existing permissions without adding inheritable permissions from the parent folder, use a utility such as Robocopy.exe.

    xcopy c:\old c:\new /O /X /E /H /K

    clip_image004

    where old is the source folder and new is the destination folder.

    /E - Copies folders and subfolders, including empty ones.
    /H - Copies hidden and system files also.
    /K - Copies attributes. Typically, Xcopy resets read-only attributes.
    /O - Copies file ownership and ACL information.
    /X - Copies file audit settings (implies /O).

    Hopefully this post makes you aware of issues that can arise from migrating UPD’s.


    -Sree & Ramesh

  • Icons of unpublished/old remote apps appearing on RDWEB Page in Server 2012/2012 R2

    Hello AskPerf! This is Ishu Sharma from the Microsoft Performance team. Today I am going to discuss a peculiar issue I came across in a couple of cases involving Server 2012 and 2012 R2. In these cases, users were able to see icons for unpublished RemoteApps on the RDWEB page.

    The RemoteApps that are actually published in the collection can be launched. However, when you click the icons for the apps that are not published, you receive an error. Hence it was very clear that these RemoteApps do not actually exist, but their entries are being pulled from somewhere.

    In one case, I checked and confirmed that those extra RemoteApp icons were not published in the collection, which means these icons were being pushed from somewhere else. I used tools like Process Monitor and RDWEB tracing while launching an app to figure out where those unpublished/old RemoteApp icons were coming from. Before that, however, I checked the registry keys below on the Connection Broker, because every time a collection or remote application is published, an entry is created in the following registry locations:

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\CollectionName

    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Terminal Server\CentralPublishedResources\PublishedFarms\CollectionName\Applications

    clip_image001

    I found that there was a stale entry for an old collection on the connection broker which does not exist anymore and those extra remote app icons were published for the old collection. Hence, deleting the registry entry for the old collection from connection broker resolved the issue.

    In one of the cases we could also see the same thing in the RDWEB tracing logs (RemoteApps go missing from RDWeb page when any of the RDSH servers are rebooted). The logs show that RemoteApp icons were being written to the cache from two different collections, but one of the collections (ContosoStandard) no longer exists.

    w3wp.exe Information 0 2014/03/05 13:45:26 [Verbose] 7 Wrote icon CSMAGIC-ContosoRemoteApps-CmsRdsh into cache

    w3wp.exe Information 0 2014/03/05 13:45:26 [Info] 7 :App: MEDITECH WilMed Healthcare added, FileExtension: .rdp

    w3wp.exe Information 0 2014/03/05 13:45:26 [Verbose] 7 Wrote icon DWRCC-ContosoRemoteApps-CmsRdsh into cache

    w3wp.exe Information 0 2014/03/05 13:45:26 [Info] 7 :App: Dameware Remote Control added, FileExtension: .rdp

    w3wp.exe Information 0 2014/03/05 13:45:26 [Verbose] 7 Wrote icon EXCEL-ContosoRemoteApps-CmsRdsh into cache

    w3wp.exe Information 0 2014/03/05 13:45:26 [Info] 7 :App: Microsoft Excel 2010 added, FileExtension: .rdp

    w3wp.exe Information 0 2014/03/05 13:45:26 [Info] 7 :App: PreOP Blank added, FileExtension: .rdp

    w3wp.exe Information 0 2014/03/05 13:45:26 [Verbose] 7 Wrote icon mstsc-ContosoStandard-CmsRdsh into cache

    w3wp.exe Information 0 2014/03/05 13:45:26 [Info] 7 :App: Remote Desktop Connection added, FileExtension: .rdp

    w3wp.exe Information 0 2014/03/05 13:45:26 [Verbose] 7 Wrote icon powershell-ContosoStandard-CmsRdsh into cache

    w3wp.exe Information 0 2014/03/05 13:45:26 [Info] 7 :App: Windows PowerShell added, FileExtension: .rdp

    You may also come across cases where, instead of a stale entry for an old collection, you find a stale entry for just an old RemoteApp that is not supposed to be displayed on the RDWEB page. In such scenarios, verify the registry keys above to check that no extra/stale entries for a non-existent collection or RemoteApp are present.


    -Ishu

  • Uncover the mystery of a bugcheck 0x24 (NTFS_FILE_SYSTEM)

      My name is Nan, I am an Escalation Engineer in Platforms Global Escalation Services in GCR. Today I’d like to share an interesting sample case with respect to filter manager to showcase the powerful debugging extension fltkd.dll (it is included ...read more
  • A Treatise on Group Policy Troubleshooting–now with GPSVC Log Analysis!

    Hi all, David Ani here from Romania. This guide outlines basic steps used to troubleshoot Group Policy application errors using the Group Policy Service Debug logs (gpsvc.log). A basic understanding of the logging discussed here will save time and may ...read more
  • Keeping Azure PowerShell Current

    I have been supporting Azure for 7 years now and one of the constants is the rapid pace of change of the services offered. This means that not only do you have to continually stay abreast of the most recent changes, but you also have to make sure that ...read more
  • Traffic Manager and Azure SLB - Better Together!

    This post was contributed by Pedro Perez Can Traffic Manager coexist with Azure Load Balancer? And how do we keep session affinity with them? Yes they can coexist, and in fact it is a good idea to use both together. That is because Traffic Manager ...read more
  • Remote Desktop Services (RDS) 2012 session deployment scenarios “Quick Start”

    Good morning AskPerf! Jason here to continue our mini-series on RDS Session Deployment.

    Please see RDS 2012 Session Host deployment scenarios for an overview of the different ways to deploy RDS on Windows 2012.

    When would I use Quick Start? Typically you would not. It is similar to a Standard Deployment, which is the current best practice, except that it deploys the RDS components to a single server only. All components (Connection Broker, RDWeb, and RDSH) are installed with no option to modify. This is good for a quick POC, a lab environment, or if you are only going to deploy one server with all the components, for example in a small office. If you want to split the components out to different machines, choose Standard Deployment instead.

    DEPLOYING “QUICK START”:

    1. On the server that will become the Connection Broker, log on with a domain account that is an administrator and start Server Manager. From the Manage menu, select Add Roles and Features.

    clip_image002

    2. Select Remote Desktop Services installation.

    clip_image004

    3. Select Quick Start.

    clip_image006

    4. Select Session-based desktop deployment.

    clip_image008

    5. Add your local server to the Selected list for Specify RD Connection Broker server.

    clip_image010

    6. On the Confirm Selections dialog, check Restart the destination server automatically if required.

    clip_image012

    7. The RDS session deployment will now begin the install to all the servers and components selected. A progress dialog will be shown and the server will reboot.

    clip_image014

    8. After reboot, log in and the progress dialog will be shown again and installation will continue.

    9. After installation is complete, in the Server Manager Dashboard, there will be a Remote Desktop Services role listed in the left navigation pane.

    clip_image016

    10. Selecting Remote Desktop Services will display the Overview of the new deployment. From this page, the next steps would be to add / specify both the license server and RD Gateway if needed.

    clip_image018

    LICENSING:

    There are multiple ways to configure licensing in RDS 2012, and this can be confusing. Group Policy always takes precedence; if licensing is configured there, the settings will NOT show in the Connection Broker console, but they will show in the RD Licensing Diagnoser console. Do not mix methods of setting licensing. For example, do not set licensing both in a GPO and in the Server Manager GUI, as this may result in errors. A more detailed post about licensing methods will follow this post series.

    1. To add a new license server through the GUI, simply click the 'RD Licensing' node. The Add RD Licensing Servers dialog will be displayed. See the next step to add an existing license server.

    clip_image020

    2. To add an existing license server, in Deployment Overview, click on TASKS, and select Edit Deployment Properties.

    clip_image022

    3. In Configure the deployment, select RD Licensing. Best practice is to use Per User mode as it provides better resiliency in case of a license server outage. Enter the license server and select Add.

    clip_image024

    Not covered in this post, the next step after deployment is to configure the QuickStart session collection. Simply right-click QuickStart in Deployment Overview to get started.

    Congratulations! You now have a new 2012 RDS session deployment.

    -Jason