Supporting Windows

  • It turns out that weird things can happen when you mix Windows Server 2003 and Windows Server 2012 R2 domain controllers

    We have been getting quite a few calls lately where Kerberos authentication fails intermittently and users are unable to log on. By itself, that’s a type of call that we’re used to and we help our customers with all the time. Most experienced AD admins ...read more
  • Windows Performance Monitor Overview

    Hello AskPerf!  My name is Matt Graham and I will be writing a high level overview of the capabilities of Windows Performance Monitor.  The intention of this blog post is to introduce new users to this powerful, and often underutilized, tool.  So rather than going through each part of Performance Monitor and explaining it in depth, my aim here is to offer a quick guide to the tool.

    When you first open Performance Monitor (perfmon), you see the following:

    clip_image001

    Let's briefly go through each one and talk about what they do. 

    Performance

    At the very top level, "Performance" gives you an overview of your system's memory usage, network usage, disk usage, etc.  You can right-click "Performance" and connect to another computer to view a remote computer's performance statistics.  (Note: to connect remotely, the Remote Registry service must be running on the target machine and the appropriate firewall exceptions must be in place.)

    Monitoring Tools

    From the Monitoring Tools icon you can right click and launch the Resource Monitor.  Resource Monitor is another powerful tool that can help you see how your system resources are being used.  You also have the ability to launch the System Reliability Monitor.  This utility allows you to see information about software updates and installations.  You can also see critical events that occurred and on what day those events occurred.  Finally, you can see all of the problem reports that have been sent from your computer by clicking on the "View all problem reports" link at the bottom of the window.

    clip_image002

    Performance Monitor

    The Performance Monitor is primarily for viewing real-time statistics.  By default, only one counter is selected: the % Processor Time counter.  However, you can add additional counters by clicking the green plus sign.  This allows you to monitor any counters you wish in real time.

    clip_image003

    While you can watch as many performance counters as you like here, the real power of Performance Monitor lies in its ability to capture performance metrics over an elapsed period of time.  Capturing data over a period of time lets you see trends, and those trends are what is most useful for determining the overall performance of your system.  To capture this data, you can create what are called "Data Collector Sets".
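    The idea behind interval-based collection can be sketched in a few lines (an illustrative Python loop, not how Perfmon itself is implemented; `read_counter` is a hypothetical stand-in for a real counter source):

```python
import time

def collect(read_counter, interval_s, samples):
    """Poll a counter at a fixed interval and keep every reading,
    so trends over time can be examined later."""
    readings = []
    for _ in range(samples):
        readings.append(read_counter())
        time.sleep(interval_s)
    return readings

# Simulated counter: pretend CPU usage ramps up over time.
fake_cpu = iter(range(0, 100, 10))
data = collect(lambda: next(fake_cpu), interval_s=0.01, samples=5)
print(data)  # [0, 10, 20, 30, 40]
```

    A single reading only tells you what the system looked like at one instant; the list of readings is what lets you spot a ramp, a spike, or a leak.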

    Data Collector Sets

    Data Collector Sets are aptly named.  They collect data from your system so that you can view changes in configuration information and performance information over a specified period of time.

    There are basically three types of data collector sets:

    Performance Counter

    Captures data by polling performance counters at a specified time interval

    Event Traces

    Captures data based on events as they occur, rather than at a specified time interval

    System Configuration Information

    Captures configuration information

    Under Data Collector Sets you can see the following:

    clip_image004

    User Defined

    Under User Defined, you can create your own custom Data Collector Set.  These Data Collector Sets can contain counters, traces, and configuration collectors.  You can right-click User Defined and select New -> Data Collector Set to create one.  You can create a Data Collector Set from a template or create your own custom set.  Let's create a custom one:

    1. Name your Data Collector Set and select "Create manually (Advanced)".

    2. On this screen you can choose to create data logs (counter / trace / configuration) or a Performance Counter Alert.  The "Performance Counter Alert" option allows you to create alerts based on certain performance values and thresholds.  For now, we will select "Create data logs" and check all three boxes:

    clip_image005

    3. On the screen below you can set the sample interval (how often you want it to capture the selected data) and the specific counters that you want to capture.

    clip_image006

    4. Once you click Add, you can select counters and then add them to the "Added counters" box.  Note that you have the option of collecting the data as a total or breaking it up, in this case per processor.  Pay attention to which one you select, as the meaning of these counters depends on what is being counted.

    clip_image007

    5. You will then be prompted to add trace providers.  Trace providers simply supply perfmon with information about a specific set of events.  For example, if you wanted to collect event information about the Windows Firewall, you would select the "Microsoft-Windows-Firewall" provider.  You can then edit the provider's properties (if you know what you are doing) and even record registry keys (after you click Next, you can specify which keys to record).

    clip_image008

    6. You can also specify the location where you would like the data to be saved.  By default, the data is saved to %systemdrive%\PerfLogs\Admin\New Data Collector Set.

    7. Finally, you can select a user that you would like to run the data collector set as.  This may be helpful in environments where desktops and servers are locked down for security purposes.

    If you look at the "New Data Collector Set" that we just created, you can see that it contains a performance counter, a trace, and a configuration collector.  You can right click on any of these to modify them as you see fit.

    clip_image009

    Finally, if you look at the remaining items under Data Collector Sets, you can see a bunch of preconfigured collector sets.

    clip_image010

    Reports

    The final part of Performance Monitor is the Reports section.  Here you can view the information that was collected by your data collector sets.  If you have never run your data collector set, then you will not see any information when you click on it.

    clip_image011

    However, once you have run your data collector set, you can click on it and see the reports and information collected:

    clip_image012

     

    clip_image013

    So this is a basic overview of Windows Performance Monitor.  Once you are familiar with its parts, you can dive into learning which counters to use when, and what your sample interval should be for capturing different kinds of data.


  • Deploy Windows to Surface Pro 3 using Microsoft Deployment Toolkit

    Hi, my name is Scott McArthur and I am a Senior Support Escalation Engineer on the Deployment/Devices team. In today’s blog I am going to go over the steps to deploy Windows 8.1 Enterprise X64 Update to a Surface Pro 3. In this example I will be using the following deployment technologies:

    • Microsoft Deployment Toolkit 2013 Server
    • Windows Server 2012 R2 WDS server

    I will be using the Microsoft USB to Ethernet adapter to PXE boot the MDT 2013 Lite Touch Images from the WDS server. If you don’t have the adapter you could utilize a USB hard drive and Media Deployment from MDT (not covered in this blog). There are various ways to deploy Windows to a device so this is just one example.

    Before starting, you need to gather the following:

    • The Surface Pro 3 firmware and driver pack
    • The KB2968599 update. Note: We are going to make the download of this update easier but in the meantime you can grab it from this link.
    • Optional: An existing Surface Pro 3 with the OEM image installed, used to gather the files for pen pairing during OOBE

    Note: In this blog I am using the Surface Pro 3 as the hardware on which to build the reference image. In an environment where the image will only go on Surface Pro 3 devices this is generally not a problem, but if you create a reference image that is going to many different types of systems, we recommend building your reference image in a Generation 1 Hyper-V virtual machine so that the image is “clean” of any drivers; you then use the features of MDT or SCCM to layer the device-specific drivers down during deployment. Since there are so many factors involved, I opted to show the simpler scenario, and you can decide what fits best for your environment and goals.

    Step #1: Extract the contents of the Surface Pro 3 Firmware and Driver pack

    After downloading the Surface Pro 3 firmware and driver pack you will see the following files:

    • Surface Ethernet Adapter.zip
    • Surface Gigabit Ethernet Adapter.zip
    • Surface Pro – July 2014.zip
    • Surface Pro 2 – July 2014.zip
    • Surface Pro 3 – July 2014.zip

    Note: This package is updated on a regular basis, so the filenames may be slightly different, but the overall package organization should be similar.

    Extract the contents of the following files:

    • Surface Pro 3 – July 2014.zip
    • Surface Ethernet Adapter.zip
    • Surface Gigabit Ethernet Adapter.zip

    For the next steps we will assume they were extracted to the following locations:

    • C:\Surface_Pro3_July_2014
    • C:\Surface_Ethernet_Adapter
    • C:\Surface_Gigabit_Ethernet_Adapter
    • C:\KB2968599

    Step #2: Import OS

    In this step we will import the OS. Surface Pro 3 only supports Windows 8.1 X64 Update. This can be Enterprise or Professional.

    • Right click Operating Systems and choose import
    • Browse to your location of your VL Windows 8.1 Enterprise Update X64 ISO
    • Provide directory name such as “Windows 8.1 Enterprise Update X64”
    • Click next and Finish

    Step #3: Add the Surface Pro 3 Firmware and Driver pack drivers to MDT

    In the Microsoft Deployment Toolkit Workbench create the following folder structure under Out-Of-Box Drivers

    image

    Note: The last folder must be called “Surface Pro 3”

    • Right click Out-Of-Box Drivers\WindowsPEX64 folder and choose import drivers. Browse to C:\Surface_Ethernet_Adapter and import the driver
    • Right click Out-Of-Box Drivers\WindowsPEX64 folder and choose import drivers. Browse to C:\Surface_Gigabit_Ethernet_Adapter and import the driver
    • Right click Out-Of-Box Drivers\X64\Surface Pro 3 and choose import drivers. Browse to C:\Surface_Pro3_July_2014

    Step #4: Create Selection Profile for Windows PE drivers

    This step will create a selection profile for Windows PE drivers. This helps ensure only the necessary drivers are imported into the Lite Touch boot image.

    • In the Microsoft Deployment Toolkit workbench navigate to Advanced Configuration\Selection Profiles.
    • Right click and choose new selection profile
    • Name the selection Profile WindowsPEx64
    • Browse to Out-Of-Box Drivers\WindowsPEX64
    • Select WindowsPEX64 folder

    image

    • Next
    • Finish

    Step #5: Assign Selection Profile for Windows PE

    This step will assign the previously created selection profile to Windows PE Lite touch so that only the drivers under WindowsPEx64 are added to the boot image

    • Right click the Deployment share and choose properties
    • Choose Windows PE tab
    • Choose Platform X64
    • Choose Drivers and Patches tab
    • For selection profile choose WindowsPEx64

    image

    Step #6: Import Updates

    In this step we will import the update that enables the Pen button functionality with modern OneNote. In most cases you would probably also add security updates and other updates to your deployment at this point.

    In the Microsoft Deployment Workbench right click packages and choose import and then browse to C:\KB2968599\Windows8.1-KB2968599-x64.msu

    image

    Step #7: Create Task Sequence

    In this step we will create a task sequence to deploy Windows 8.1 Enterprise Update X64

    • In the Microsoft Deployment Workbench right click Task Sequences and choose new
    • Task Sequence ID=BLDWin81ENTUPX64
    • Task Sequence Name=Build Windows 8.1 Enterprise Update X64 reference image
    • Choose Standard Client Task Sequence
    • Choose the Windows 8.1 Enterprise Update X64 reference image OS
    • Choose Do not Specify a product key at this time
    • Fill out Organization and other information
    • Fill out local administrator password
    • Finish

    Step #8: Edit Task Sequence for Drivers

    In this step we will edit the task sequence to modify the driver injection step. There are a number of ways to handle drivers in MDT. The key to preventing driver installation issues is to make sure that the only drivers used during the deployment are the ones designed for the Surface. If your Out-Of-Box Drivers contain drivers for other systems and you do not use one of the options below, then you cannot control which drivers get used during the deployment. This can lead to problems, so we recommend you use selection profiles or one of the other methods to ensure only the drivers designed for the Surface are used during the deployment. For additional reading on this topic, I encourage you to take a look at this blog.

    Option #1: Create a selection profile for Out-Of-Box Drivers\Windows81Update\X64\Surface Pro 3 and then set the Inject Drivers TS step to this selection profile. It is recommended to choose the “Install all drivers from this selection profile” option as well. The disadvantage of this option is that the TS would be specific to the Surface Pro 3. If you configure this option, it will look like this in the task sequence:

    image

    Option #2 (recommended approach): Use the DriverGroup001 variable to set this based on the model of the system. This is more flexible, since it takes the Model information (a WMI value populated from the BIOS) and uses it to decide which folder to use, allowing the task sequence to work for a variety of devices. The folder names have to match EXACTLY the Model exposed by the system (MSINFO32 will show you the model).

    We will use Option #2 for these steps

    In Microsoft Deployment Toolkit workbench right click the task sequence you created earlier and choose properties

    • Choose the Task Sequence tab
    • Browse to the Preinstall phase and look for step called “Inject Drivers”
    • Click the Enable Bitlocker step which is right before the “Inject Drivers” step
    • Click Add, General, Set Task Sequence Variable
    • Set the following:

    Name: Set DriverGroup001 variable to Model
    Task Sequence Variable: DriverGroup001
    Value: Windows81Update\x64\%model%

     

    image

    • Choose the Inject Drivers step that occurs after this step, set the selection profile to Nothing, and choose “Install all drivers from the selection profile”. This is important so that all the firmware updates, and drivers for devices that are not present (for example, the keyboard), are added to the deployment

    image

    • Click apply and save the task sequence
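    The effect of Option #2 can be pictured as simple variable expansion: at deploy time, %model% is replaced with the model the hardware reports, and the resulting path selects the matching driver folder. A hypothetical Python sketch (not MDT's actual code):

```python
def expand_driver_group(template, model):
    """Substitute the deploy-time %model% variable the way the
    DriverGroup001 mechanism does (illustrative only)."""
    return template.replace("%model%", model)

path = expand_driver_group(r"Windows81Update\x64\%model%", "Surface Pro 3")
print(path)  # Windows81Update\x64\Surface Pro 3
```

    This is why the folder names must match the reported model exactly: the expansion is literal, so "Surface Pro 3" and "SurfacePro3" would resolve to different folders.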

    Step #9: Modify the Unattend.xml

    In this step we will modify the Unattend.xml to make sure OOBE is completely automated. There is an additional prompt during OOBE to join a wireless network if the wireless driver is available. The task sequence's Unattend.xml does not contain the entry to automate this, since it is a new setting in Windows and the template in MDT 2013 doesn’t include it.

    • In Microsoft Deployment Toolkit workbench right click the TS and choose properties
    • Choose OS info tab
    • Choose Edit Unattend.xml

    Note: This will take a while the first time, as a catalog is created. If you encounter an error, take a look at KB2524737.

    • Navigate to 7 oobeSystem\Microsoft-Windows-Shell-Setup\OOBE
    • For HideWirelessSetupInOOBE choose True

    image

    Another option to consider modifying at this point is configuring whether or not the Power button shows on the start screen. The OEM image that ships with Surface Pro 3 is configured to show the Power button on the start screen. If you do a new install the default behavior is not to show the power button (by design). For additional information on this behavior and Unattend option to configure this see the following:

    KB2959188: Power/shutdown button may be missing from the Windows 8.1 start screen

    image

    Step #10: Configure Image for Pen Pairing during OOBE (Optional)

    During the first boot of the OEM image that ships with the Surface Pro 3, you are prompted during OOBE to pair the pen. In most cases you will probably want to pair the pen after the deployment is complete, but if you would like to add this step to the deployment, you can use the following instructions.

    Note: The pairing prompt occurs during OOBE, so it will interrupt MDT’s automated deployment; once the pen is paired, you must click Next for it to continue. Ideally this is something an IT person would handle before handing the device over to the user.

    1. Take one of your existing Surface Pro 3 devices that has the OEM image on it and copy the following files to USB flash drive or other location:

    %windir%\system32\oobe\info\default\1033\oobe.xml
    %windir%\system32\oobe\info\default\1033\PenPairing_en-US.png
    %windir%\system32\oobe\info\default\1033\PenError_en-US.png
    %windir%\system32\oobe\info\default\1033\PenSuccess_en-US.png

    2. On the MDT server open Deployment and Imaging Tools Environment cmd prompt

    3. Use the DISM command to mount the image you are deploying

    Dism /mount-wim /wimfile:"d:\deploymentshare\operating systems\<name of image>\sources\install.wim" /index:1 /mountdir:c:\mount

    4. Create the following folder structure in the image

    C:\mount\windows\system32\oobe\info\default\1033

    5. Copy all the files from Step #1 above into this folder

    6. Close any Explorer windows and switch to C:\ to make sure there are no open file handles to the c:\mount folder

    7. Unmount the image and save changes

    Dism /unmount-wim /mountdir:c:\mount /commit

    Step #11: Configure Default Display Resolution

    The default display resolution for the Surface Pro 3 is 2160x1440. To set this automatically you can add the following entry to your customsettings.ini (Right click the Deployment share, properties, rules):

    [Settings]
    Priority=Model, Default

    [Surface Pro 3]
    XResolution=2160
    YResolution=1440

    This takes advantage of MDT's rule processing: MDT knows the model (Surface Pro 3) and, based on these entries, adds the resolution settings to the Unattend.xml for you.
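    The Priority=Model, Default rule can be pictured as an ordered lookup: a setting is taken from the section matching the machine's model first, and from [Default] only if the model section doesn't supply it. A simplified Python sketch of this rule processing (illustrative only, not MDT's implementation):

```python
def resolve_settings(ini, priority, model):
    """Merge INI sections in priority order; the first section to
    supply a key wins (simplified model of MDT rule processing)."""
    resolved = {}
    for section_name in priority:
        section = ini.get(model if section_name == "Model" else section_name, {})
        for key, value in section.items():
            resolved.setdefault(key, value)  # earlier sections win
    return resolved

ini = {
    "Surface Pro 3": {"XResolution": "2160", "YResolution": "1440"},
    "Default": {"XResolution": "1024", "SkipFinalSummary": "YES"},
}
settings = resolve_settings(ini, ["Model", "Default"], "Surface Pro 3")
print(settings)
# {'XResolution': '2160', 'YResolution': '1440', 'SkipFinalSummary': 'YES'}
```

    On a Surface Pro 3 the [Surface Pro 3] section overrides any resolution set in [Default], while settings only present in [Default] still apply.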

    Step #12: Update MDT server and WDS server

    At this point you want to do a full regeneration of the deployment share to create the Lite Touch boot images, ensuring the Surface Ethernet Adapter driver is incorporated into them, and then import these images to your Windows Deployment Services (WDS) server. I would recommend you use a Windows Server 2012 R2 WDS server. For additional information on support for UEFI in WDS, take a look at KB2938884.

    Step #13: PXE boot

    The final step is to PXE boot the Surface Pro 3. To PXE boot do the following:

    • Shut the device down
    • Press and hold volume down button
    • Press the Power button
    • When you see the Surface Logo you can let go
    • You should see a prompt to PXE boot. The Surface Pro 3 supports an On-Screen Keyboard (OSK)
    • Press the Keyboard icon in upper right of screen
    • Press Enter button on OSK
    • Using arrow keys on OSK choose your MDT 2013 Lite Touch image from the WDS server
    • Then follow the prompts during Lite Touch to initiate the deployment

    If you can’t get the Surface Pro 3 to PXE boot check the following:

    • Make sure you are using the Microsoft USB to Ethernet adapter. Third-party adapters are not supported for PXE booting
    • Check and make sure this issue does not apply to your environment
    • 2602043: Invalid Boot File Received Error Message When PXE booting from WDS

    Additional Notes

    Some additional tips:

    • Check out my other blog for some additional tips for the pen: “Deploying Surface Pro 3 Pen and OneNote Tips”
    • If you do not want to see the Deployment Summary at the end of the deployment, you can add the following entries to customsettings.ini:

    [Default]
    ;Skip Final Summary Screen
    SkipFinalSummary=Yes
    ;Control behavior after the deployment is complete (use one of these values)
    FinishAction=Shutdown|Reboot|Restart|Logoff

    Thanks for reading this blog and good luck with your Surface Pro 3 deployments.

    Scott McArthur
    Senior Support Escalation Engineer

  • Deploying Surface Pro 3 Pen and OneNote Tips

    Hi, my name is Scott McArthur and I am Senior Support Escalation Engineer on the Deployment/Devices team. In today’s blog I am going to go over some tips for deploying Surface Pro 3 related to the Pen and OneNote integration.

    Tip #1: Deploying custom image to Surface Pro 3

    You may have noticed that if you deploy a custom image to a Surface Pro 3, the Pen button does not bring up modern OneNote. The image that ships with the Surface Pro 3 contains an additional update that adds this functionality. If you are deploying a custom image, you will need to incorporate that update into your deployment or reference image.

    KB2968599: Quick Note-Taking Experience Feature for Windows 8.1

    At this time we are working on making this update easier to download but in the meantime you can download it from the following direct link:

    http://download.windowsupdate.com/d/msdownload/update/software/crup/2014/06/windows8.1-kb2968599-x64_de4ca043bf6ba84330fd96cb374e801071c4b8aa.msu

    Tip #2: Setting default OneNote for the Pen

    If you use the desktop version of OneNote 2013, you may want it to be the default application when the Pen button is pressed. You can change the default OneNote application in OneNote 2013 using the following steps:

    1. Click File, Options
    2. Choose Advanced
    3. Under "Default OneNote Application", choose the version you want

    image

    NOTE: If you do not have this option in OneNote 2013 make sure you have the following update for OneNote installed:

    KB2881082: July 8, 2014 update for OneNote 2013

    Tip #3: Double click functionality for screenshots

    One of the other nice features is the ability to double click the pen button to send a screenshot to OneNote.

    Magic Tricks with OneNote and Surface Pro 3
    http://blogs.office.com/2014/06/18/magic-tricks-with-onenote-and-surface-pro-3

    In order to support this functionality, the Modern OneNote app must be the latest version available from the store.  So, if this functionality does not work, make sure Modern OneNote app has been updated.

    If you configure the Desktop OneNote as the default OneNote application, it should work by default with the double-click feature.

    Tip #4: Adding Pen pairing to a deployment

    During the first boot of the OEM image that ships with the Surface Pro 3, you are prompted during OOBE to pair the pen. If you are deploying a custom image and want to add this setup screen to your deployment (to be completed by a technician or user), you can use the following steps:

    Requirements:

    • System with the Windows ADK installed
    • The install.wim you are deploying
    • Another Surface Pro 3 system that still has the OEM image that ships from Microsoft

    1. Take one of your existing Surface Pro 3 devices that has the OEM image on it and copy the following files to USB flash drive or other location:

    %windir%\system32\oobe\info\default\1033\oobe.xml
    %windir%\system32\oobe\info\default\1033\PenPairing_en-US.png
    %windir%\system32\oobe\info\default\1033\PenError_en-US.png
    %windir%\system32\oobe\info\default\1033\PenSuccess_en-US.png

    2. Open Deployment and Imaging Tools Environment cmd prompt

    3. Use the DISM command to mount the image you are deploying

    Dism /mount-wim /wimfile:c:\install.wim /index:1 /mountdir:c:\mount

    4. Create the following folder structure in the image

    C:\mount\windows\system32\oobe\info\default\1033

    5. Copy all the files from Step #1 above into this folder

    6. Close any Explorer windows and switch to C:\ to make sure there are no open file handles to the c:\mount folder

    7. Unmount the image and save changes

    Dism /unmount-wim /mountdir:c:\mount /commit

    Tip #5: Troubleshooting the pen

    For additional information on troubleshooting the pen take a look at:

    http://www.microsoft.com/surface/en-us/support/touch-mouse-and-search/troubleshoot-surface-pen#penshows

    Hope this helps with your Surface Pro 3 deployments

    Scott McArthur
    Senior Support Escalation Engineer

  • Windows Azure Pack: Infrastructure as a Service Jump Start

    Free online event with live Q&A with the WAP team: http://aka.ms/WAPIaaS

    Two half-days – Wednesday July 16th & Thursday July 17th– 9am-1pm PST

    IT Pros, you know that enterprises desire the flexibility and affordability of the cloud, and service providers want the ability to support more enterprise customers. Join us for an exploration of Windows Azure Pack's (WAP's) infrastructure services (IaaS), which bring Microsoft Azure technologies to your data center (on your hardware) and build on the power of Windows Server and System Center to deliver an enterprise-class, cost-effective solution for self-service, multitenant cloud infrastructure and application services.

    Join Microsoft’s leading experts as they focus on the infrastructure services from WAP, including self-service and automation of virtual machine roles, virtual networking, clouds, plans, and more. See helpful demos, and hear examples that will help speed up your journey to the cloud. Bring your questions for the live Q&A!

    Register here: http://aka.ms/WAPIaaS

    Course Outline:

    Day One

    • Introduction to the Windows Azure Pack
    • Install and Configure WAP
    • Integrate the Fabric
    • Deliver Self-Service

    Day Two

    • Automate Services
    • Extend Services with Third Parties
    • Create Tenant Experiences
    • Real-World WAP Deployments

    Instructor Team

    Andrew Zeller | Microsoft Senior Technical Program Manager
    Andrew Zeller is a Technical Program Manager at Microsoft, focusing on service delivery and automation with Windows Server, System Center, and the Windows Azure Pack.

    Symon Perriman | Microsoft Senior Technical Evangelist
    As Microsoft Senior Technical Evangelist and worldwide technical lead covering virtualization (Hyper-V), infrastructure (Windows Server), management (System Center), and cloud (Microsoft Azure), Symon Perriman is an internationally recognized industry expert, author, keynote presenter, executive briefing specialist, and technology personality. He started in the technology industry in 2002 and has been at Microsoft for seven years, working with multiple teams, including engineering, evangelism, and technical marketing. Symon holds several patents and more than two dozen industry certifications, including Microsoft Certified Trainer (MCT), MCSE Private Cloud, and VMware Certified Professional (VCP). In 2013, he co-authored Introduction to System Center 2012 R2 for IT Professionals (Microsoft Press) and he has contributed to five other technical books. Symon co-hosts the weekly Edge Show for IT Professionals, and his technologies have been featured in PC Magazine, Reuters News, and The Wall Street Journal. He graduated from Duke University with degrees in Computer Science, Economics, and Film & Digital Studies, and he also serves as the technical lead for several startups and entertainment production companies.​

  • Software-Defined Networking with Windows Server and System Center Jump Start


  • Is Offloaded Data Transfers (ODX) working?

    Offloaded Data Transfers (ODX) is a data transfer strategy that makes advancements in moving files.  Only storage devices that comply with the SPC-4 and SBC-3 specifications work with this feature.  With ODX, copying files from one server to another is much quicker.  The feature is only available when both the source and the destination are running Windows 8 / Windows Server 2012 or later.

    This is how it works at a very high level:

    • A user copies or moves a file by using Windows Explorer, a command line interface, or as part of a virtual machine migration.
    • Windows Server 2012/2012R2 automatically translates this transfer request into an ODX (if supported by the storage array) and it receives a token that represents the data.
    • The token is copied between the source server and destination server.
    • The token is delivered to the storage array.
    • The storage array internally performs the copy or move and provides status information to the user.

    Below is a pictorial representation of what this looks like.  The top box shows the way we are used to seeing it: if you copy a file from one machine to another, the entire file is copied over the network.  In the bottom box, you see that only the token is passed between the machines and the data is transferred on the storage.  This makes copying files tremendously faster, especially if the files are gigabytes in size.

    image

    For more information on Offloaded Data Transfers, please refer to

    Windows Offloaded Data Transfers Overview

    Many Windows installations have additional filter drivers loaded on the storage stack.  These could be antivirus, backup agents, encryption agents, etc.  So you will need to determine whether the installed filter drivers support ODX.  As a quick note, if a filter driver supports ODX but the storage does not (or vice versa), then ODX will not be used.

    Filter manager exposes supported features (SprtFtrs) that tell us whether filter drivers support ODX.  We can use the FLTMC command, as shown below, to list filter drivers and their supported features.  For example:

    X:\> fltmc instances
    Filter      Volume Name    Altitude   Instance Name  Frame  SprtFtrs
    ----------  -------------  ---------  -------------  -----  --------
    FileInfo    C:             45000      FileInfo       0      00000003
    FileInfo    I:             45000      FileInfo       0      00000003
    FileInfo    D:             45000      FileInfo       0      00000003 <-It supports both Offload Read and Write
    FileInfo    K:             45000      FileInfo       0      00000003
    FileInfo    \Device\Mup    45000      FileInfo       0      00000003

    You can also see the Supported Features available for a filter driver in the registry:

    HKLM\system\CurrentControlset\services\<FilterName>

    The SupportedFeatures registry value contains this information. If it is 3, as in the FLTMC output above, the filter driver supports ODX.
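The registry check can be scripted; a minimal sketch, assuming the FileInfo filter from the FLTMC output above:

```powershell
# Sketch: read the SupportedFeatures value for a filter driver (FileInfo here).
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\FileInfo'
$features = (Get-ItemProperty -Path $key -Name SupportedFeatures).SupportedFeatures
# Bit 0 = offload read, bit 1 = offload write; a value of 3 means both.
"SupportedFeatures = $features (ODX supported: $($features -eq 3))"
```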

    Now that we have determined that ODX is supported by the required components, is it actually working?  You can see the ODX commands FSCTL_OFFLOAD_WRITE and FSCTL_OFFLOAD_READ captured in a Process Monitor trace when it is working.  When ODX is working, we see the following in Process Monitor.

    clip_image001

    If a target fails the offload because it does not support ODX or does not recognize the token, it can return STATUS_INVALID_TOKEN and/or STATUS_INVALID_DEVICE_REQUEST as the result.

    Other reasons why it might not work:

    1)    A driver above the storage stack, such as an encryption or file system filter driver, can cause the offload to fail.
    2)    Even though two disks/volumes might each support offload, they might be incompatible with each other. This has to be established by involving the storage vendors.

    It is not a recommendation, but for informational purposes, you can disable ODX functionality in the registry if so desired.  You can do this with a PowerShell command:

    Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 1

    Or, you can edit the registry directly.  A value of 1 means ODX is disabled, while a value of 0 (the default) means it is enabled.  When this change is made, you will need to reboot the system for it to take effect.
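For reference, a sketch of checking and restoring the setting from PowerShell (the value lives under the same FileSystem key used above):

```powershell
# Sketch: 1 = ODX disabled, 0 = ODX enabled (the default).
$fs = 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem'
(Get-ItemProperty -Path $fs).FilterSupportedFeaturesMode

# To re-enable ODX, set the value back to 0, then reboot.
Set-ItemProperty -Path $fs -Name 'FilterSupportedFeaturesMode' -Value 0
```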

    One last thing to mention is that you should always keep current on hotfixes.  This is especially true if you are running Failover Clustering.  Below are the recommended hotfixes you should be running on Clusters, which include fixes for ODX.

    Recommended hotfixes and updates for Windows Server 2012-based failover clusters
    Recommended hotfixes and updates for Windows Server 2012 R2-based failover clusters

    Other references:
    http://technet.microsoft.com/en-in/library/jj200627.aspx
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn265439(v=vs.85).aspx
    http://msdn.microsoft.com/en-us/library/windows/desktop/hh848056(v=vs.85).aspx

    Shasank Prasad
    Senior Support Escalation Engineer
    Microsoft Corporation

  • Bugchecking a Computer on a Usermode Application Crash

    Hello my name is Gurpreet Singh Jutla and I would like to share information on how we can bugcheck a box on any usermode application crash. Set the application as a critical process when the application crash is reproducible. We may sometimes need a complete ...read more
  • Understanding ARM Assembly Part 3

    My name is Marion Cole, and I am a Sr. Escalation Engineer in Microsoft Platforms Serviceability group.  This is Part 3 of my series of articles about ARM assembly.  In part 1 we talked about the processor that is supported.  In part 2 ...read more
  • Debugging a Windows 8.1 Store App Crash Dump (Part 2)

    In Part 1 , we covered the debugging of a Windows Store Application crash dump that contains a Stowed Exceptions Version 1 (SE01) structure.   This post continues on from Part 1, covering the changes introduced in March 2014. These Windows Updates ...read more
  • 2012R2 iSCSI Target Settings for Configuring a Specific Network

    Let's say you have a 2012 R2 iSCSI Target Server with multiple networks configured. We all know that iSCSI traffic should be on a separate network. So how do you go about configuring iSCSI to use a specific network on the target server? In previous versions of Windows this was much easier to find, since it was in the iSCSI Target software.

    1. Start Server Manager

    2. Select File and Storage Services in the right pane

    clip_image002

    3. In the Servers field right click the server name and select iSCSI Target Settings

    clip_image004

    clip_image005

    Now you don't have to worry about iSCSI traffic going over the wrong network in case a client's networks are not configured properly.
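On Windows Server 2012 R2 the same settings can also be managed with the iSCSI Target PowerShell cmdlets; a sketch (the IP address shown is a placeholder, not from the article):

```powershell
# Sketch: show the network portals the iSCSI Target Server listens on.
Get-IscsiTargetServerSetting

# Disable a specific portal so iSCSI traffic stays off that network
# (192.168.1.10 is a placeholder address).
Set-IscsiTargetServerSetting -IP 192.168.1.10 -Enable $false
```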

    Steven Graves
    High Availability Sr. SEE
    Microsoft Premier Support

  • Listener Certificate Configurations in Windows Server 2012 / 2012 R2

    Hello AskPerf!  Kiran Kadaba here to talk about configuring Listener Certificates.

    When we have the Remote Desktop Session Host role installed on a server, or have the server as part of an RDS collection/deployment, it’s quite easy to configure certificates through the Connection Broker UI.

    We have received a high number of inquiries about how to configure certificates when the server is not part of a deployment and is simply being configured for ‘Remote Desktop for Administration’.

    In Windows 2003/2008/2008 R2, we had the ‘Remote Desktop Configuration Manager’ MMC snap-in which allowed us direct access to the RDP Listener. Here we could bind a certificate to the listener and in turn, enforce SSL security for the RDP sessions.

    In Windows 2012, we no longer have this MMC snap-in, nor do we have direct access to the RDP listener. You can follow the steps below to configure certificates on Windows 2012/2012 R2.

    This can be achieved in 2 ways:

    Method 1:  Using WMI

    The configuration data for the RDS Listener is stored in the ‘Win32_TSGeneralSetting’ class in WMI under the ‘Root\CimV2\TerminalServices’ namespace.

    The certificate for the RDS listener is referenced through the ‘Thumbprint’ value of that certificate on a property called ‘SSLCertificateSHA1Hash’.

    This thumbprint value is unique to each certificate. You can find the value using the following steps:

    1. Open the properties dialog for your certificate and select the Details tab

    2. Scroll down to the Thumbprint field and copy the space delimited hex string into something like Notepad

    Here is what the certificate thumbprint will look like in the certificate properties:

    clip_image002 

    Once I copy this into notepad, it will look as follows:

    clip_image004

    After I remove the spaces, it will still contain the invisible ASCII character that will only be visible in the command prompt (shown below):

    clip_image006

    Ensure that this ASCII character is removed before we run the command to import the certificate

    3. Remove all the spaces from the string. (Keep in mind that there may be an ‘invisible’ ASCII character that also gets copied. This is not visible in Notepad; the only way to validate is to paste directly into the command prompt window.)

    4. This is the value you need to set in WMI. It should look something like this: 1ea1fd5b25b8c327be2c4e4852263efdb4d16af4.

    Now that you have the thumbprint value, here's a one-liner you can use to set the value using wmic:

    wmic /namespace:\\root\cimv2\TerminalServices PATH Win32_TSGeneralSetting Set SSLCertificateSHA1Hash="THUMBPRINT"

    clip_image008

    This solution would work on Windows 7 and Windows 8 systems as well.
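If you prefer PowerShell to wmic, the same change can be sketched with the WMI cmdlets (THUMBPRINT is the cleaned-up hash from step 4):

```powershell
# Sketch: bind a certificate to the RDP-Tcp listener via WMI.
$setting = Get-WmiObject -Namespace 'root\cimv2\TerminalServices' `
    -Class Win32_TSGeneralSetting -Filter "TerminalName='RDP-Tcp'"
Set-WmiInstance -Path $setting.__PATH `
    -Arguments @{ SSLCertificateSHA1Hash = 'THUMBPRINT' }
```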

    Note: The certificate you want to use must be imported into the 'Personal' certificate store for the machine account before you run the above commands. Failure to do so will result in an “Invalid Parameter” error.

    Method 2:  Registry edits


    1. Install a server authentication certificate to the ‘Personal’ Certificate Store, using the Computer account.
    2. Create the following registry value containing the certificate’s SHA1 hash to configure this custom certificate to support TLS instead of using the default self-signed certificate.
      HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp
      Value name:  SSLCertificateSHA1Hash
      Value type:  REG_BINARY
      Value data:  <certificate thumbprint>
      The value should be the thumbprint of the certificate separated by comma ‘,’ and no empty spaces. For example, if you were to export that registry key the SSLCertificateSHA1Hash value would look like this:
      “SSLCertificateSHA1Hash"=hex:42,49,e1,6e,0a,f0,a0,2e,63,c4,5c,93,fd,52,ad,09,27,82,1b,01

    3. The Remote Desktop Services service runs under the NETWORK SERVICE account. Therefore, it is necessary to set the ACL of the key file used by RDS (referenced by the certificate named in the SSLCertificateSHA1Hash registry value) to include NETWORK SERVICE with "Read" permissions. To modify the permissions, follow the steps below:
    Open the Certificates snap-in for the local computer:

      1. Click Start, click Run, type mmc, and click OK.
      2. On the File menu, click Add/Remove Snap-in.
      3. In the Add or Remove Snap-ins dialog box, in the Available snap-ins list, click Certificates, and click Add.
      4. In the Certificates snap-in dialog box, click Computer account, and click Next.
      5. In the Select Computer dialog box, click Local computer: (the computer this console is running on), and click Finish.
      6. In the Add or Remove Snap-ins dialog box, click OK.
      7. In the Certificates snap-in, in the console tree, expand Certificates (Local Computer), expand Personal, and navigate to the SSL certificate that you would like to use.
      8. Right-click the certificate, select All Tasks, and select Manage Private Keys.
      9. In the Permissions dialog box, click Add, type NETWORK SERVICE, click OK, select Read under the Allow checkbox, then click OK.
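Step 2 of the registry method can also be scripted; a sketch that converts the thumbprint string from earlier in the post into the REG_BINARY value:

```powershell
# Sketch: write SSLCertificateSHA1Hash as REG_BINARY from a hex thumbprint.
$thumb = '1ea1fd5b25b8c327be2c4e4852263efdb4d16af4'  # example thumbprint
$bytes = [byte[]] (0..($thumb.Length / 2 - 1) |
    ForEach-Object { [Convert]::ToByte($thumb.Substring($_ * 2, 2), 16) })
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' `
    -Name SSLCertificateSHA1Hash -PropertyType Binary -Value $bytes -Force
```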

    -Kiran

  • Understanding ARM Assembly Part 2

    My name is Marion Cole, and I am a Sr. Escalation Engineer in Microsoft Platforms Serviceability group.  This is Part 2 of my series of articles about ARM assembly.  In part 1 we talked about the processor that is supported.  Here we are ...read more
  • ‘Tip of the Day’ Top Tips for February

    The following links are for the top five tips from the 'Tip of the Day' blog during the month of February.

    Tip of the Day: Good Bye VDS, Hello SMAPI

    Tip of the Day: Failover DHCP

    Tip of the Day: Screenshots on Surface

    Tip of the Day: Deduplication and Backups

    Tip of the Day: Optimized Files not Available in Down Level OS

    NOTE: Tip of the Day is a random daily tip about Microsoft products. The idea behind it harkens back to something I started when I was first hired at Microsoft.  I told myself, "I want to try to learn something new every day. If I can learn at least one thing today, then I can call the day a success." Tip of the Day is my attempt to share the things I pick up along the way.

    Robert Mitchell
    Senior Support Escalation Engineer
    Microsoft Customer Service & Support

  • NTFS Misreports Free Space (Part 3)

    It’s been a while since my last post on this topic, and I wanted to take some time to update everyone on a cool new feature in Windows Server 2012 R2 and Windows 8.1.  Today we declare part 1 and part 2 of this blog as obsolete - at least for Windows ...read more
  • Introducing Script Browser - A world of scripts at your fingertips

    To reuse script samples from the Internet, the following steps probably seem familiar to IT Pros: wandering through different script galleries, forums, and blogs; switching back and forth between webpages and the scripting environment; and countless download, copy, and paste operations. All of this gets tedious quickly. Need a simpler way of searching and reusing scripts? Try out the new Script Browser add-in for PowerShell ISE!

    Download Here

    Script Browser for Windows PowerShell ISE is an app developed by Microsoft Customer Services & Support (CSS) with assistance from the PowerShell team and the Garage to save IT Pros from the painful process of searching and reusing scripts. We start from the 9,000+ script samples on the TechNet Script Center. Script Browser allows users to directly search, learn, and download TechNet scripts from within PowerShell ISE – your scripting environment. Starting from this month, Script Browser for PowerShell ISE is available for download. If you are a PowerShell scripter, or are about to be one, Script Browser is a highly recommended add-in for you.

    Nearly 10,000 scripts on TechNet are available at your fingertips. You can search, download, and learn scripts from this ever-growing sample repository.

    · We enabled offline search for downloaded script samples so that you can search and view script samples even when you have no Internet access.

    You will also get the chance to try out another new function bundled with Script Browser: ‘Script Analyzer’. A Microsoft CSS engineer used the PowerShell Abstract Syntax Tree (AST) to check your current script against some pre-defined rules. In this first version, he built 7 pilot PowerShell best-practice checking rules. When you double-click a result, the script code that does not comply with the best-practice rule is highlighted. We hope to get your feedback on this experimental feature.

    It is very essential that an app satisfies users’ requirements. Therefore, feedback is of prime importance. For Script Browser, Microsoft MVPs are one of the key sources where we get constructive feedback. When the Script Browser was demoed at the 2013 MVP Global Summit in November and 2014 Japan MVP Open Day, the MVP community proposed insightful improvements. For instance, MVPs suggested showing a script preview before users can decide to download the complete script package. MVPs also wanted to be able to search for script samples offline. These were great suggestions, and the team immediately added the features to the release. We have collected a pool of great ideas (e.g. MVPs also suggested that the Best Practice rules checking feature in Script Analyzer should be extensible). We are committed to continuously improving the app based on your feedback.

    We have an ambitious roadmap for Script Browser. For example, we plan to add more script repositories to the search scope. We are investigating integration with Bing Code Search. We are also trying to improve the extensibility of Script Analyzer rules. Some features, like script sample sharing and searching within an enterprise, are still in their infancy.

    The Script Browser was released in mid-April and has received thousands of downloads since. Based on your feedback, today we are releasing the 1.1 update to add the most-requested features. The team is committed to making Script Browser and Script Analyzer useful, and your feedback is very important to us.

    Download Script Browser & Script Analyzer 1.1
    (If you have already installed the 1.0 version, you will get an update notification when you launch Windows PowerShell ISE.)

    1. Options to Turn on / Turn off Script Analyzer Rules

    You can either select to turn on or turn off the rules in the Settings window of Script Analyzer.

    image

    You can also suggest a new Script Analyzer rule or vote for others’ suggestions. Our team monitors the forum closely. Based on your suggestions and votes, we will provide the corresponding Script Analyzer rules in future updates. We are also looking into the capability for you to write your own Script Analyzer rules and plug into the Script Analyzer.

    2. Refined Script Analyzer Rules with Detailed Description

    Thanks to your feedback, we refined the Script Analyzer rules that were released in the version 1.0. We also fixed all rule issues that you reported. Each rule comes with a detailed description, good/bad examples, and supporting documents. Here are the 5 refined rules released in this update. We look forward to learning your feedback.

    Invoke-Expression use should be carefully considered

    Invoke-Expression is a powerful command; it’s useful under specific circumstances but can open the door for malicious code being injected. This command should be used judiciously.

    http://blogs.msdn.com/b/powershell/archive/2006/11/23/protecting-against-malicious-code-injection.aspx

    Cmdlet alias use should be avoided

    PowerShell is a wonderfully efficient scripting language, allowing an administrator to accomplish a lot of work with little input or effort. However, we recommend using full cmdlet names instead of aliases when writing scripts that will potentially need to be maintained over time, either by the original author or another PowerShell scripter. Using aliases may cause problems with readability, understandability, and availability.

    Take the following PowerShell command as an example:

    Ls | ? {$_.psiscontainer} | % {"{0}`t{1}" -f $_.name, $_.lastaccesstime}

    The above syntax is not very clear to the novice PowerShell scripter, making it hard to read and understand.

    The same command with the full Cmdlet names is easier to read and understand.

    Get-ChildItem | Where-Object {$_.psiscontainer} | ForEach-Object {"{0}`t{1}" -f $_.name, $_.lastaccesstime}

    Lastly, we cannot guarantee that an alias will exist in all environments.

    For more information, please see the linked Scripting Guy blog on this topic.

    http://blogs.technet.com/b/heyscriptingguy/archive/2012/04/21/when-you-should-use-powershell-aliases.aspx

    Empty catch blocks should be avoided

    Empty catch blocks are considered a poor design decision because, if an error occurs in the try block, the error is simply swallowed and not acted upon. While this does not always lead to undesirable results, it can easily mask failures. Therefore, empty catch blocks should be avoided if possible.

    Take the following code for an example:

    try
    {
            $SomeStuff = Get-SomeNonExistentStuff
    }
    catch
    {
    }

    If we execute this code in PowerShell, no visible error message is presented to alert us to the fact that the call to Get-SomeNonExistentStuff failed.

    A possible solution:

    try
    {
             $SomeStuff = Get-SomeNonExistentStuff
    }
    catch
    {
            "Something happened calling Get-SomeNonExistentStuff"
    }

    For further insights:

    http://blogs.technet.com/b/heyscriptingguy/archive/2010/03/11/hey-scripting-guy-march-11-2010.aspx

    Positional arguments should be avoided

    Readability and clarity should be the goal of any script we expect to maintain over time. When calling a command that takes parameters, where possible consider using Named parameters as opposed to Positional parameters.

    Take the following command, calling an Azure Powershell cmdlet with 3 Positional parameters, for an example:

    Set-AzureAclConfig "10.0.0.0/8" 100 "MySiteConfig" -AddRule -ACL $AclObject -Action Permit

    If the reader of this command is not familiar with the Set-AzureAclConfig cmdlet, they may not know what the first 3 parameters are.

    The same command called using Named parameters is easier to understand:

    Set-AzureAclConfig -RemoteSubnet "10.0.0.0/8" -Order 100 -Description "MySiteConfig" -AddRule -ACL $AclObject -Action Permit

    Additional reading:

    http://blogs.technet.com/b/heyscriptingguy/archive/2012/04/22/the-problem-with-powershell-positional-parameters.aspx

    Advanced Function names should follow standard verb-noun naming convention

    Since PowerShell 2.0, scripters have had the ability to create functions that mimic cmdlet behaviors. Now that we can write functions that behave like cmdlets, we should follow the consistent nature of PowerShell and name our advanced functions using the verb-noun nomenclature.

    Execute the cmdlet below to get the full list of PowerShell approved verbs.

    Get-Verb

    http://technet.microsoft.com/en-us/magazine/hh360993.aspx
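As an illustration, here is a minimal advanced function following the verb-noun convention (the function name and body are hypothetical, not from the rule set):

```powershell
# Sketch: an advanced function using an approved verb ("Get") and a noun.
function Get-FolderSize {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$Path
    )
    # Sum the sizes of all files under the path and emit a simple object.
    $sum = (Get-ChildItem -Path $Path -Recurse -File |
        Measure-Object -Property Length -Sum).Sum
    [PSCustomObject]@{ Path = $Path; SizeBytes = $sum }
}

Get-FolderSize -Path C:\Windows\Temp
```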

    3. Issue Fixes
    • Fixed a locale issue (“Input string was not in a correct format.”) when Script Browser launches on locales that treat double/float as ‘##,####’. We are very grateful to MVP Niklas Akerlund for providing a workaround before we released the fix.
    • Fixed the issues (including the error 1001, and this bug report) when some users install the Script Browser.
    • Fixed the issues in Script Analyzer rules

    We sincerely suggest you give Script Browser a try (click here to download). If you love what you see in Script Browser, please recommend it to your friends and colleagues. If you encounter any problems or have any suggestions for us, please contact us at onescript@microsoft.com. Your precious opinions and comments are more than welcome.

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Global Business Support

  • How to Configure MSDTC to Use a Specific Port in Windows Server 2012/2012R2

    My name is Steven Graves and I am a Senior Support Escalation Engineer on the Windows Core Team.  In this blog, I will discuss how to configure MSDTC to use a specific port on Windows Server 2012/2012R2 as this has slightly changed from the way it is configured in Windows Server 2008 R2 in order to prevent overlapping ports.  As a reference, here is the blog for Windows 2008 R2.

    How to configure the MSDTC service to listen on a specific RPC server port
    http://blogs.msdn.com/b/distributedservices/archive/2012/01/16/how-to-configure-the-msdtc-service-to-listen-on-a-specific-rpc-server-port.aspx

    Scenario

    There is a web server in a perimeter network and a standalone SQL Server (or Clustered SQL Server instance) on a backend production network and a firewall that separates the networks. MSDTC needs to be configured between the web server and backend SQL Server using a specific port in order to limit the ports opened on the firewall between the networks.

    So as an example, we will configure MSDTC to use port 5000.

    There are two things that need to be configured on the frontend web server to restrict the ports that MSDTC will use.

    • Configure the ports DCOM can use
    • Configure the specific port or ports for MSDTC to use

    Steps

    1. On the web server, launch Dcomcnfg.exe from the Run menu.

    2. Expand Component Services, right click My Computer and select Properties

    clip_image002

    3. Select the Default Protocols tab

    clip_image004

    4. Click Properties button

    clip_image006

    5. Click Add

    6. Type in the port range that is above the port MSDTC will use. In this case, I will use ports 5001-6000.

    7. Click OK to return to the My Computer properties window, then click OK again.  Here is the key that is modified in the registry for the ephemeral ports.

    clip_image008

    8. Start Regedt32.exe

    9. Locate HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSDTC

    10. Right click the MSDTC key, select New and DWord (32-bit) Value

    11. Type ServerTcpPort for the key name

    12. Right click ServerTcpPort key and select Modify

    13. Change the radio button to Decimal, type 5000 in the value data, and click OK.  This is how the registry key should look:

    clip_image010

    14. Restart the MSDTC Service (if stand-alone) or take the MSDTC Resource offline/online in Failover Cluster Manager if clustered.
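Steps 8 through 13 create a single registry value, so they can be condensed to one PowerShell command; a sketch, using port 5000 from the example:

```powershell
# Sketch: create the ServerTcpPort DWORD (decimal 5000) under the MSDTC key.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\MSDTC' `
    -Name 'ServerTcpPort' -PropertyType DWord -Value 5000 -Force
```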

    To confirm MSDTC is using the correct port:

    1. Open an Administrative command prompt and run Netstat –ano to get the port and the Process Identifier (PID)
    2. Start Task Manager and select Details tab
    3. Find MSDTC.exe and get the PID
    4. Review the output for the PID to show it is MSDTC

    clip_image012
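The verification steps can also be sketched from an elevated PowerShell prompt:

```powershell
# Sketch: get the MSDTC PID, then look for it on port 5000 in netstat output.
$dtcPid = (Get-Process -Name msdtc).Id
netstat -ano | Select-String ":5000 " | Select-String " $dtcPid"
```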

    Now DTC will be using the port specified in the registry and no other processes will try to use the same port thus preventing an overlap of ports.

    Steven Graves
    Senior Support Escalation Engineer
    Microsoft Core Support

  • Removing .NET Framework 4.5/4.5.1 removes Windows 2012/2012R2 UI and other features

    This is Vimal Shekar and Krishnan Ayyer from the Windows Support team. Today in this blog, we will discuss an issue that we are seeing increasingly reported in support: the effects of removing .NET Framework from a Windows Server 2012/2012 R2 installation.

    Windows Server 2012 includes .NET Framework 4.5 and Windows Server 2012 R2 includes .NET Framework 4.5.1. The .NET Framework provides a comprehensive and consistent programming model to build and run applications (including Roles and Features) that are built for various platforms. Windows Explorer (Graphical Shell), Server Manager, Windows PowerShell, IIS, ASP .NET, Hyper-V, etc, are all dependent on .NET Framework. Since there are multiple OS components dependent on .Net Framework, this feature is installed by default.  Therefore, you do not have to install it separately.

    It is not recommended to uninstall .NET Framework.  However, in some circumstances there may be a requirement to remove/re-install .NET Framework on Windows Server 2012/2012 R2.

    When you uncheck the .NET Framework 4.5 checkbox in the Remove Roles and Features Wizard of Server Manager, Windows checks for any other installed roles/features that depend on it, since they would need to be removed as well.  If there are other roles or features dependent on .NET Framework, they are listed in an additional window.

    For Example:

    image


    If you read through the list, the components that are affected by this removal are listed as follows:

    1. .NET Framework 4.5 Features
    2. RSAT (Remote Server Administration Tools), which includes the Hyper-V Management tools and Hyper-V GUI,
    3. User Interfaces and Infrastructure, which includes Graphical Management Tools and Infrastructure and Server Graphical Shell (Full Shell and Min Shell),
    4. PowerShell, which removes all of PowerShell 4.0 and the ISE

    The list of components may differ depending upon the roles and features installed on the server.  If you were to use DISM.exe commands to remove the .NET feature, you may not even see such a list.  If you were to use PowerShell to remove the .NET feature with the following command, you will not get the list.

    Uninstall-WindowsFeature Net-Framework-45-Features

    If you were to use the Remove-WindowsFeature PowerShell cmdlet, you can add the –WhatIf switch to see the list of features that would also be impacted.

    Remove-WindowsFeature Net-Framework-45-Features –WhatIf

    Unfortunately, we all get in a hurry sometimes, don't read through the list, and click “Remove Features”. If you notice, “Server Graphical Shell” and “Graphical Management Tools and Infrastructure” are part of the features being removed.

    Here is a sample output from running Remove-WindowsFeature Net-Framework-45-Features -WhatIf. Again you will see that removing .Net Framework will effectively also remove the following:

    clip_image005

    The two key features that I wanted to point out are:

    [User Interfaces and Infrastructure] Server Graphical Shell

    [User Interfaces and Infrastructure] User Interfaces and Infrastructure

    As stated earlier, this will leave the server without a graphical shell for user interaction. Only the command prompt will be available post reboot.

    If you get into this situation, run the below commands in the Server Core’s command prompt window to help you recover:

    DISM.exe /online /enable-feature /all /featurename:NetFx4
    DISM.exe /online /enable-feature /all /featurename:MicrosoftWindowsPowerShell

    The above commands will re-install .Net 4.0 and PowerShell on the server. Once PowerShell is installed, you can add the Graphical Shell (Windows Explorer) using the following command:

    Install-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra

    Once the GUI Shell is installed, you will need to restart the server with the following command:

    Restart-Computer

    NOTE:

    Remove-WindowsFeature and Uninstall-WindowsFeature are aliases of each other.  The –WhatIf switch shows what would occur if the command were run, but does not actually execute the command.

    We hope this information was helpful.

    Vimal Shekar
    Escalation Engineer
    Microsoft Support

    Krishnan S Ayyer
    Technical Advisor
    Microsoft Support

  • What’s New in Windows Servicing: Service Stack Improvements: Part 3

    Servicing Stack improvements in KB2821895 for Windows 8, and how they assist the upgrade to 8.1

    My name is Aditya and I am a Sr. Support Escalation Engineer for Microsoft on the Windows Core Team. This blog is a continuation of Servicing Part 1, so to understand this post better it is recommended that you read the previous posts first.  As mentioned previously, this is a 4-part blog series on Windows Servicing.

    What’s New in Windows Servicing: Part 1
    What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2
    What’s New in Windows Servicing: Service Stack Improvements: Part 3

    This feature back ports Windows 8.1 features that reduce the disk footprint of the component store. Any freed space is reserved for system use in upgrading to Windows 8.1.

    In the last blog, we discussed the hard work put in by our Core Deployment Platform (CDP) team to reduce the amount of free disk space required for small-footprint devices. Even with these reductions, an upgrade requires at least 5 GB of free space.

    To further reduce the perceived amount of space required, a Servicing Stack Update (SSU) for Windows 8 has been created that back ports Windows 8.1 Component Store Footprint Reduction features. It also introduces the maintenance task for controlling the footprint reductions and a set of Deep Clean operations. Any space freed by the maintenance task will be reserved for use by the Windows 8.1 upgrade process.

    The below features were targeted for the down-level port:

    1. Delta compression of the Component Store

    2. Deep Clean, uninstall of superseded GDR packages

    These features are used by the maintenance task to scavenge disk space. In addition to back porting these features, the servicing stack update must reserve free space for the upgrade to Windows 8.1 Client. As we do not encourage upgrades of Windows Server 2012, this feature does not reserve space on server SKUs; it is only for client SKUs.

    When we install Windows 8 (32-bit) on a machine and check the size of the WinSxS folder, we should see something like Figure 1:

    image

    When we run Windows Update for the first time on the machine via the Control Panel applet, we should have about 84 updates, which come to about 515 MB, as shown in Figure 2:

    image

    After the machine reboots, the WinSxS folder grows a little in size, by about 2 GB, as shown in Figure 3:

    image

    Given the amount of space taken up after applying Windows Update, we should download and apply the update KB2821895.

    image

    After the update is installed, a maintenance task runs weekly and continues to reclaim disk space until the machine is upgraded to Windows 8.1. It creates a temporary file equal in size to the space saved by delta compression during the footprint reduction. This file is hidden and marked as an OS file so that it is not easily visible.

    The reserve file is located at:
    %windir%\winsxs\reserve.tmp
    clip_image006

    The size of this file is saved to the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\Configuration\[reserve]. This value is used to determine whether the reserve file was created on the machine and then deleted.

    clip_image008
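    To check for the reserve file and its recorded size on a given machine, a PowerShell sketch follows; the exact value name under the Configuration key may vary, so the sketch dumps the whole key rather than guessing it:

    ```powershell
    # -Force is needed because reserve.tmp is hidden and marked as a system file
    Get-Item -Force "$env:windir\winsxs\reserve.tmp" | Select-Object FullName, Length

    # Dump the SideBySide configuration key where the reserve size is recorded
    Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\Configuration"
    ```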

    Note: Only Windows 8 SKUs that are capable of being upgraded through the Windows Store will have space reserved in the temp file.

    During the Windows 8.1 store upgrade process, this file is deleted and the reclaimed disk space becomes free space which should ensure a successful upgrade to Windows 8.1.

    New to WIN8.1 and Windows Server 2012 R2

    Smart Pending Logic

    This feature allows updates that do not require a reboot to install immediately, without being merged with updates that do require a reboot. It also decreases the time it takes to install updates during reboot, since only the updates that require a reboot are installed at that time.

    Currently, when multiple updates are being applied to a system and one or more of the updates requires a reboot, all updates after the “first update that requires reboot” are installed during the reboot process.

    In the current servicing stack design, Windows Servicing Stack passes a flag to the Servicing Infrastructure to pend the installation of a package if:

    • Any package is already pended
    • Pending.xml exists
    • PendingRequired flag is set in the Servicing Infrastructure store

    The limitations with this design are:

    • After the packages are merged together and installation is attempted during reboot, failure caused by any one of those updates causes a failure for all other updates.
    • Teams that design their components for reboot-less updating gain no benefit from that design because of limitations in the stack itself.
    • Because all pended updates are installed during machine reboot, the number of pended updates determines how long the user waits non-interactively while the updates are installed.

    The current design in Windows 8 and Server 2012 looks something like this:

    image

    With this new feature, the Windows Servicing Stack no longer checks whether a reboot is pending and always tries to install the update completely. The operational flow of the new design looks like this:

    image

     

    In Windows 8.1 and Server 2012 R2, updates that don’t require a reboot would be completely installed and only those that require a reboot are pended for installation during reboot. Smart pending logic applies to online servicing operations only.

    Smart Pending exceptions:

    The following types of packages are not going to be smart pended due to performance and reliability reasons:

    • Large packages, such as a service pack or language pack.
    • Special packages that cannot be merged with other packages.
    • Servicing stack updates.

    The diagram below describes these exceptions:

    image

    I hope this blog has helped you understand the changes made to the Windows 8 OS and the new features added in the Windows 8.1 OS, especially the smart pending logic and the work done to make sure we save more drive space.

    In the next blog in the series, we will discuss the automated maintenance tasks that check for system file corruption and file system health, clean up unused drivers, and more. Till then, happy reading…

    Aditya
    Senior Support Escalation Engineer
    Microsoft Platforms Support

  • An Update about the Windows 8.1 Update

    Hi everyone, David here. Today over at the Springboard series blog we announced some important news that applies to anyone who has been trying to roll out the Windows 8.1 update in an enterprise environment. We don’t usually do announcements about ...read more
  • Managing the Store app pin to the Taskbar added in the Windows 8.1 Update

    Warren here, posting with more news regarding the Windows 8.1 Update. Among the many features added by the Windows 8.1 Update is that the Store icon is pinned to the user's taskbar when users first log on after updating their PC with the Windows 8.1 Update ...read more
  • What's New in the Windows 8.1 Update

    Hello AskPerf Readers! Henry Chen here from the Devices & Deployment team. Today, I would like to spend some time highlighting some of the user experience changes in the shell for Windows 8.1 Update.

    Windows 8.1 Update introduces numerous enhancements to the Desktop experience for mouse and keyboard users.

    Start Screen

    For mouse users, right clicking anywhere on the Start Screen now brings up a context menu in place of the command bar. The context menu provides the same options as the command bar, such as resizing tiles, enabling or disabling live tiles, and uninstalling the application.

    For devices with a screen larger than 8.5" that are not Connected Standby capable, power and search controls are now accessible from the Start screen. On other devices, only the search control is available.

    clip_image004

    Pin Apps to Taskbar

    Users can now pin modern apps to the taskbar, with the exception of modern Internet Explorer. Users can disable this behavior by unchecking "Show Windows Store apps on the taskbar" in the Taskbar and Navigation properties.

     clip_image006

    Access taskbar from anywhere

    When you are using a mouse, you can see the taskbar from any screen, including Start or a Windows Store app. Move your mouse pointer below the bottom edge of the screen to show the taskbar and then click an app to open or switch to it.

    clip_image009

    Modern App User Interface (UI)

    Your mouse works more consistently anywhere in Windows. When you move the mouse to the top of the screen, close and minimize buttons appear within Windows Store apps.

    For devices that are touch enabled, users will continue to use the close gesture (from the top edge, tap, hold, and drag to the bottom of the screen for a few seconds).

    clip_image012

    These are just some of the user experience changes available in Windows 8.1 Update. Let us know what you think or if you have any questions regarding these changes.

    Enjoy!

    For more information and to view the complete list of new features, check the following links:

    What’s new in Windows 8.1 Update and Windows RT 8.1 Update?

    Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 Update April, 2014

    -Henry

  • Options for Managing Go to Desktop or Start after Sign in in Windows 8.1

    Hi, David here. Over the past year we’ve gotten a lot of feedback from our customers about the pain of changing from older versions of Windows over to Windows 8 and Windows 8.1. While it’s a great OS with a lot of compelling features, it’s ...read more
  • What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2

    My name is Aditya and I am a Sr. Support Escalation Engineer for Microsoft on the Windows Core Team. This blog is a continuation of the previous Servicing Part 1, so to understand this blog better, it is recommended that you read the previous blog post first. As mentioned in the previous post, this is a four-part blog series on Windows Servicing.

    What’s New in Windows Servicing: Part 1
    What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2

    Before we dive into Single Instancing and Delta Compression, I thought it would be a good idea to talk about why they were introduced and how things worked in previous operating systems. The reason for both Single Instancing and Delta Compression was to reduce the Windows (Windows 8.1 and Windows Server 2012 R2) footprint. Here is how and why:

    Windows Footprint Reduction Features: The disk footprint of Windows directly affects end-users, as it reduces the amount of space available for music, videos, pictures, and all other content. Even as we shift more user content to the cloud, factors such as high-resolution photos and videos, limited and costly bandwidth, and safety/security concerns over cloud storage mean that local storage requirements will remain significant for the next few years.

    The disk footprint of Windows also directly affects our OEM partners. Available storage is one of the most important metrics that an end-user looks at when purchasing a system, and OEMs are pushed to provide higher storage capacity. The current trend is that many OEMs are shifting to SSD storage, due to its small footprint (enabling smaller, sleeker devices), low power consumption, low noise, and improved performance. Unfortunately, SSD storage can cost as much as 10x the price of conventional spindle-based storage, which means that OEMs can only add limited storage to their systems before the cost becomes too great.

    If Windows consumes less of the available disk footprint while still providing a great end-user experience, end-users get more disk space for their content without requiring the OEM to spend more on storage, thus reducing the price of PCs.

    For rollback purposes, previous versions of Windows components are sometimes kept in the WinSxS store after new updates are installed through Windows Update. The MUM servicing feature, introduced in Windows 7 and Windows Server 2008 R2, ensures that the disk space growth due to GDR installations can be reclaimed after a Service Pack (SP) installation by running the Disk Cleanup utility manually.

    Windows strives to constrain servicing footprint growth caused by GDR installations either before or after an SP installation. The feature also focuses on enabling servicing footprint reduction support at the Component Based Servicing technology level, targeting the following scenarios:

    1. Consumers opt in for automatic updates on their Windows 8 devices, and notice that the WinSxS store footprint no longer grows significantly over time.

    2. Consumers notice that the WinSxS store footprint has grown due to update installations over time, and then run Disk Cleanup Utility to reduce the WinSxS store footprint and reclaim disk space on their devices.

    3. OEMs service their golden images in technician labs over time to keep them up-to-date and secured. Before the image is delivered to ODM for deployment at factory floor, they clean up the image by running DISM to scavenge away all the superseded components and recapture the smaller sized image.

    4. Similarly, IT Admins service master images in their image libraries to keep them up-to-date and secured. Before the image is ready for deployment to Client machines, they clean up the image by running DISM to scavenge away all the superseded components and recapture the smaller sized image.

    This feature reduces the disk space used by Windows, with a focus on Windows components. Windows Update routinely installs patches on released Windows machines but does not always remove the previous content that the patches replace and that is no longer in use. The purpose of this feature is to reduce disk footprint growth over time and also to provide a means by which power users can reduce the original disk footprint of Windows.

    This feature reduces disk footprint growth over time by uninstalling and deleting content that can be removed from the system, and by compressing unused content that cannot be removed.

    Reducing the footprint of Windows also improves deployment performance, which benefits consumers, Enterprise, and OEMs.

    1. Single Instancing Catalogs: This feature contributes to the component store footprint reduction by single-instancing catalogs across the catroot store and the Windows Servicing Stack stores.

    Catroot: %windir%\system32\catroot

    Servicing Stack Packages: %windir%\servicing\packages

    Servicing Stack Catalogs: %windir%\winsxs\catalogs

    The redundant catalogs are single-instanced by hard-linking them across the three stores, nullifying the Windows Servicing Stack footprint overhead. To minimize impact on other catalog clients, the changes were scoped to just those catalogs installed by the Servicing Stack.

    For more information on how hard-linking works in the Windows Servicing Stack, refer to this TechNet article:

    Manage the Component Store
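    You can see the single-instancing for yourself with the built-in fsutil tool, which lists every path that shares the same file data. The catalog file name below is a placeholder; substitute any .cat file from the winsxs catalogs store:

    ```cmd
    :: Lists all hard-linked paths for the given catalog file (placeholder name)
    fsutil hardlink list %windir%\winsxs\catalogs\example.cat
    ```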

    2. Delta Compression of Superseded Components: This feature contributes to the component store footprint reduction by significantly reducing the size of files that have been superseded by later updates, yet they remain on the computer in case the user needs to uninstall a recent update. 

    Component: The smallest serviceable unit, including files, registry data, and the metadata that describes how to service that set of files.

    Installed component (winner): The 'winning' version of a component in a component family. This is the payload that is projected into System32 (or whichever location is specified in the component manifest).

    Installed component (superseded): Components that are installed but are older versions than the winning component. The payload exists in the component store but does not get projected to System32. If the winning component is uninstalled, the highest-versioned remaining component becomes the new winner.

    Latent component: Components that are available for installation under the proper circumstances but are not currently installed. The most common form of a latent component is one that belongs to an optional feature that is currently disabled.

    Superseded components are kept in the component store in case a user uninstalls the winning component (by uninstalling an update, for example). End-users infrequently uninstall updates, making superseded components a prime target for reclaiming space. This feature uses a type of compression known as delta compression to dramatically reduce the size of superseded and latent components.

    Delta compression is a technology based on differencing two similar files. One version is used as a baseline, and another version is expressed as the baseline plus deltas.

    Delta compression is performed against the winner component at the time of compression. This means the deltas for a specific component may differ from machine to machine, depending on which winner was available at the time of compression.

    Let me explain this using the following diagram (Figure 1), in which V1, V2, and V3 are all installed components prior to compression. During compression, V1 and V2 are compared against V3, the current winner, to create the necessary deltas.

    clip_image002

    Figure 1

    In the next example, refer to Figure 2 below, V1 and V2 are installed, with V2 being the winner. After compression, a V1 delta is created using V2 as the basis. Subsequently, V3 is installed. After the next compression, a V2 delta is created using V3 as the basis.

    Figure 2

    Decompression or Rehydration: If the winning component is uninstalled, the Windows Servicing Stack decompresses any components that use the uninstalled version as their baseline and makes the next-highest-versioned component the new winner. The uninstalled version is marked for deletion; later, when the Servicing Stack's maintenance task runs, the uninstalled version is deleted and any remaining superseded files are compressed against the new winner. For an example, refer to Figure 3 below.

    Figure 3

    There may be cases where a file needs to be decompressed but its basis file is also compressed. In these cases, the Windows Servicing Stack decompresses the full chain of files necessary to decompress the final winning file.

    Figure 4

    At this point, the big question that comes to mind is: when do we delta compress components? The answer is pretty simple: delta compression of superseded and latent content in the component store happens as part of the Servicing Stack's maintenance task. This process can be triggered either manually or automatically.

    Manual maintenance: Triggered manually using dism.exe.

    Dism /online /cleanup-image /startcomponentcleanup

    Automatic maintenance: Triggered by a scheduled maintenance task when the system is idle.

    Task Scheduler Library  -->  Microsoft  -->  Windows  -->  Servicing

    The automatic case is interruptible and resume-able. It automatically stops when the computer is no longer idle, and resumes when it becomes idle again.
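    You can also inspect or kick off this maintenance from PowerShell. The task path is the one given above; StartComponentCleanup is the task name as it appears in Task Scheduler on Windows 8.1 and Server 2012 R2:

    ```powershell
    # List the servicing maintenance task with its state and last run time
    Get-ScheduledTask -TaskPath "\Microsoft\Windows\Servicing\"

    # Trigger the cleanup on demand instead of waiting for idle maintenance
    Start-ScheduledTask -TaskPath "\Microsoft\Windows\Servicing\" -TaskName "StartComponentCleanup"
    ```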

    For more detailed information, please refer to What’s New in Windows Servicing: Part 1.

    Definitions:

    Delta compression: Compressing a file by capturing a diff of the file against a basis file. Requires the basis file to decompress.

    Backup directory: Directory containing copies of boot-critical files that are used to repair corruption.

    Manifest: Files describing the contents of a component. Windows is essentially defined by component manifests, approximately 15,000 of them (on amd64).

    I hope this blog has helped you understand the efforts put in behind the scenes by the Windows teams to considerably reduce the size of WinSxS in Windows 8.1 and Windows Server 2012 R2.

    In the next blog in the series, we will discuss the Servicing Stack improvements in KB2821895 for Windows 8 and why they will assist your upgrade to 8.1. Till then, happy reading…

    Aditya
    Senior Support Escalation Engineer
    Microsoft Platforms Support

  • Failover Clustering and Active Directory Integration

    My name is Ram Malkani and I am a Support Escalation Engineer on Microsoft’s Windows Core team. I am writing to discuss how Failover Clustering is integrated with Active Directory on Windows Servers.

    Windows Server Failover Clustering has always been tightly integrated with Active Directory. We made considerable changes to how Failover Clustering integrates with AD DS as we progressed through new versions of clustering on Windows Server. Let us see the story so far:

    Windows Server 2003 and previous versions

    We needed a Cluster Service Account (CSA): a domain user whose credentials were used for the Cluster service and the clustered resources. This had its problems, such as changing the password for the account and rotating passwords. Later, we added support for Windows Server 2003 clusters to use Kerberos authentication, which created objects in Active Directory.

    Windows Server 2008 and 2008 R2

    We moved away from the CSA; instead, the cluster started using Active Directory computer objects associated with the Cluster Name resource (the CNO) and Virtual Computer Objects (VCOs) for other network names in the cluster. When a cluster is created, the logged-on user needs permissions to create the computer objects in AD DS, or you can ask the Active Directory administrator to pre-stage the computer object(s) in AD DS. Cluster communication between nodes also uses AD authentication.

    Windows Server 2012

    The same information provided for Windows Server 2008 and 2008 R2 applies; however, we included a feature improvement that allows cluster nodes to come up when AD is unavailable for authentication, so that Cluster Shared Volumes (CSVs) can become available and the VMs (potentially domain controllers) on them can start. This was a major issue, as otherwise at least one domain controller outside the cluster had to be available before the Cluster service could start.

     

    What’s new with Clustering in Windows Server 2012 R2

    We have introduced a new mode for creating a Failover Cluster on Windows Server 2012 R2, known as an Active Directory detached cluster. Using this mode, you no longer need to pre-stage computer objects, nor worry about managing and maintaining them. Cluster administrators no longer need to be wary of accidental deletions of the CNO or the Virtual Computer Objects (VCOs). The CNO and VCO names are instead registered in Domain Name System (DNS).

    This feature provides greater flexibility when creating a Failover Cluster and lets you choose to install clusters with or without AD integration. It also improves the overall resiliency of the cluster by reducing its dependencies on the CNO and VCOs, thereby reducing points of failure.

    Intra-cluster communication continues to use Kerberos for authentication; however, the CNO is authenticated using NTLM. Remember, then, that an AD-detached cluster is not recommended for any cluster role that requires Kerberos authentication.

     

    Installing Active Directory detached Cluster

    First, make sure that the nodes running Windows Server 2012 R2 that you intend to add to the cluster are part of the same domain, then install the Failover Clustering feature on them. This is very similar to conventional cluster installs on Windows Server. You can use Server Manager to complete the installation.

    Server Manager can be used to install the Failover Clustering feature:

    Introducing Server Manager in Windows Server 2012
    http://blogs.technet.com/b/askcore/archive/2012/11/04/introducing-server-manager-in-windows-server-2012.aspx

    We can alternatively use PowerShell (Admin) to install the Failover Clustering feature on the nodes.

    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    An important point to note is that the PowerShell cmdlet 'Add-WindowsFeature' is replaced by 'Install-WindowsFeature' in Windows Server 2012 R2. PowerShell does not install the management tools for the requested feature unless you specify '-IncludeManagementTools' as part of your command.

    image
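    If you are building a multi-node cluster, the same feature install can be pushed to all nodes at once. A sketch using the node names from the cluster example later in this post (PowerShell remoting must be enabled on the nodes):

    ```powershell
    # Install Failover Clustering on both nodes remotely over WinRM
    Invoke-Command -ComputerName My2012R2-N1, My2012R2-N2 -ScriptBlock {
        Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
    }
    ```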

     

    BONUS READ:
    The cluster command line tool (CLUSTER.EXE) has been deprecated, but if you still want to install it, it is available in Server Manager under:
    Remote Server Administration Tools --> Feature Administration Tools --> Failover Clustering Tools --> Failover Cluster Command Interface in the Server Manager

    image

    The PowerShell (Admin) equivalent to install it:

    Install-WindowsFeature -Name RSAT-Clustering-CmdInterface

    Now that we have the Failover Clustering feature installed on our nodes, ensure that all hardware connected to the nodes passes the cluster validation tests. Let us now go on to create our cluster. You cannot create an AD-detached cluster from the Failover Cluster Manager GUI; the only way to create one is by using PowerShell.
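    Validation can be run from PowerShell before creating the cluster; a minimal sketch using the node names from the example that follows:

    ```powershell
    # Run the full cluster validation report against the intended nodes
    Test-Cluster -Node My2012R2-N1, My2012R2-N2
    ```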

    New-Cluster MyCluster -Node My2012R2-N1,My2012R2-N2 -StaticAddress 192.168.1.15 -NoStorage -AdministrativeAccessPoint DNS

    image

    NOTE:
    In my example above, I am using static IP addresses, so one needs to be specified. If you are using DHCP for addresses, exclude the switch "-StaticAddress 192.168.1.15" from the command.


    Once we have executed the command, we have a new cluster named "MyCluster" with two nodes, "My2012R2-N1" and "My2012R2-N2". When you look in Active Directory, there will not be a computer object for the cluster "MyCluster"; however, you will see the access point record in DNS.

    image
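    To double-check the result from PowerShell, a sketch follows: resolve the access point in DNS, then ask the cluster for its administrative access point type, which should report Dns (rather than ActiveDirectoryAndDns) for a detached cluster:

    ```powershell
    # The access point exists only as a DNS record, not as an AD computer object
    Resolve-DnsName -Name MyCluster

    # Should report Dns for an AD-detached cluster
    (Get-Cluster -Name MyCluster).AdministrativeAccessPoint
    ```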

     

    For details on cluster roles that are not recommended or unsupported for AD detached Clusters, please read:

    Deploy an Active Directory-Detached Cluster
    http://technet.microsoft.com/en-us/library/dn265970.aspx

    That’s it! Thank you for your time.

    Ram Malkani
    Support Escalation Engineer
    Windows Core Team