May, 2013

  • Another Who Done It

Hi, my name is Bob Golding and I am an EE in GES. I want to share an interesting problem I recently worked on.  The initial symptom was that the system bugchecked with a Stop 0xA, which means there was an invalid memory reference.  The cause of the crash ...read more
  • New tidbits on Windows 8.1

    Today on the Windows Blog, Antoine Leblond gave some insight on what’s to come with Windows 8.1.  Check it out here:

    Continuing the Windows 8 vision with Windows 8.1

    As an FYI, this update will be free to those running Windows 8.

    -AskPerf Team

  • MMS 2013 Hands On Labs Available

A few months ago we held the annual 2013 Microsoft Management Summit in Las Vegas. As in years past, the event sold out quickly and it was a very busy week. To everyone who attended, our sincere thanks.  As a recap, the blog post below lists the sessions that are now available to view online and that cover topics from the Core Blog site.

    Sessions from MMS 2013 Now Available
    http://blogs.technet.com/b/askcore/archive/2013/04/23/sessions-from-mms-2013-now-available.aspx

As usual, the hands-on labs and instructor-led labs continue to be some of the most popular offerings at MMS. MMS Labs offer folks the opportunity to kick the tires on a wide array of Microsoft technologies and products. As usual, the lines started early. For the fourth year in a row, all of the MMS Labs were 100% virtualized using Windows Server Hyper-V and managed via System Center by our partners at XB Velocity, running on HP servers and storage. Of course, this year we upgraded to the latest version, so everything was running on a Microsoft Cloud powered by Windows Server 2012 Hyper-V and System Center 2012 SP1.

(BTW, Microsoft has blogged about this topic in past years; if you’re interested, the links are here and here.)  Before I jump into the Microsoft Private Cloud, let me provide some context about the labs themselves.

What is an MMS Hands-On Lab?

    One of the reasons the MMS Hands on Labs are so popular is because it’s a firsthand opportunity to evaluate and work with Windows Server and System Center in a variety of scenarios at your own pace. Here’s a picture of some of the lab stations…

    clip_image001

With the hands-on labs, we’ve done all the work to create these scenarios based on your areas of interest. So, what does one of these labs look like on the backend? Let’s be clear, none of these labs is a single VM. That’s easy. Been there, done that. When you sit down and request a specific lab, the cloud infrastructure provisions the lab on highly available infrastructure and deploys a service of anywhere from 4 to 12 virtual machines to your lab station in seconds. There are over 650 different lab stations, and we have to account for all types of deployment scenarios. For example:

    1. In the first scenario, all users sit down at 8 am and provision exactly the same lab. Or,
    2. In the second scenario, all users sit down at 8 am and provision unique, different labs. Or,
3. In the third scenario, all users sit down at 8 am and provision a mix of everything.

The infrastructure then starts each lab in a few seconds. Let’s take a closer look at what some of the labs look like in terms of VM deployment.

    MMS Lab Examples

    Let’s start off with a relatively simple lab. This first lab is a Service Delivery and Automation lab. This lab uses:

    • Four virtual machines
    • 16 virtual processors
    • 15 GB of memory total
    • 280 GB of storage
    • 2 virtual networks

    …and here’s what each virtual machine is running…

    clip_image002

Interested in creating virtualized applications to deploy to your desktops, tablets, or Remote Desktop sessions? This next lab is a Microsoft Application Virtualization (App-V) 5.0 Overview lab. This lab uses:

• Seven virtual machines

    • 14 virtual processors
    • 16 GB of memory total
    • 192 GB of storage
    • 2 virtual networks

    clip_image003

    How about configuring a web farm for multi-tenant applications? Here’s the lab which uses:

    • Six virtual machines
    • 24 virtual processors
    • 16 GB of memory total
    • 190 GB of storage
    • 2 virtual networks

    clip_image004

    Ever wanted to enable secure remote access with RemoteApp, DirectAccess and Dynamic Access Control? Here’s the lab you’re looking for. This lab uses:

• Seven virtual machines

    • 28 virtual processors
    • 18 GB of memory total
    • 190 GB of storage
    • 2 virtual networks

    clip_image005

    Again, these are just a few of the dozens of labs ready for you at the hands on labs.

    MMS 2013 Private Cloud: The Hardware

    BTW, before I get to the specifics, let me point out that this Microsoft/HP Private Cloud Solution is an orderable solution available today...

Compute. Like last year, we used two HP BladeSystem c7000s as the compute for the cloud infrastructure. Each c7000 had 16 nodes, and this year we upgraded to the latest BL460c Generation 8 blades. All 32 blades were then clustered to create a 32 node Hyper-V cluster. Each blade was configured with:

• Two sockets with 8 cores per socket, for 16 cores total. Symmetric Multi-Threading was enabled, giving a total of 32 logical processors per blade.
    • 256 GB of memory per blade with Hyper-V Dynamic Memory enabled
    • 2 local disks 300 GB SAS mirrored for OS Boot per blade
    • HP I/O Accelerator cards (either 768 GB or 1.2 TB) per blade

    Storage. This year we wanted to have a storage backend that could take advantage of the latest storage advancements in Windows Server 2012 (such as Offloaded Data Transfer and SMI-S) so we decided to go with a 3Par StoreServ P10800 storage solution. The storage was configured as a 4 node, scale-out solution using 8 Gb fibre channel and configured with Multi-Path IO and two 16 port FC switches for redundancy. There was a total of 153.6 TB of storage configured with:

    • 64 x 200 GB SSD disks
    • 128 x 600 GB 15k FC disks
• 32 x 2 TB 7,200 RPM SAS disks

As you can see, the 3Par includes SSD, 15k, and 7,200 RPM disks. This is so the 3Par can provide automated storage tiering with HP’s Adaptive Optimization. Storage tiering ensures the most frequently used blocks (the hot blocks) reside in the fastest available tier, whether that’s RAM, SSD, 15k, or 7,200 RPM disks. With storage tiering you can mix and match storage types to find the right balance of capacity and IOPS for you. In short, storage tiering rocks with Hyper-V. From a storage provisioning perspective, both SCVMM and the 3Par storage support standards-based storage management through SMI-S, so the provisioning of the 3Par storage was done through System Center Virtual Machine Manager. Very cool.

Networking. From a networking perspective, the solution used HP Virtual Connect FlexFabric 10 GbE, and everything was teamed using Windows Server 2012 NIC Teaming. Once the network traffic was aggregated via teaming, that capacity was carved up in software.
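As an illustration, a team in Windows Server 2012 can be built entirely from PowerShell. This is only a minimal sketch, not the actual MMS configuration; the team name, adapter names, and load-balancing mode below are assumptions:

# Create a switch-independent team from two 10 GbE ports (names are hypothetical)
New-NetLbfoTeam -Name "LabTeam" -TeamMembers "10GbE-Port1","10GbE-Port2" `
                -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Verify the team and its members
Get-NetLbfoTeam -Name "LabTeam"
Get-NetLbfoTeamMember -Team "LabTeam"

Once the team exists, virtual switches and bandwidth policies can be layered on top of it in software, which is the "carved up in software" part.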

    Time for the Pictures…

    Here’s a picture of the racks powering all of the MMS 2013 Labs. The two racks on the left with the yellow signs are the 3Par storage while the two racks on the right contain all of the compute nodes (32 blades) and management nodes (a two node System Center 2012 SP1 cluster). What you don’t see are the crowds gathered around pointing, snapping pictures, and gazing longingly…

    clip_image006

MMS 2013: Management with System Center. Naturally, the MMS team used System Center to manage all the labs, specifically Operations Manager, Virtual Machine Manager, Orchestrator, Configuration Manager, and Service Manager. System Center 2012 SP1 was completely virtualized on Hyper-V, running on a small two-node cluster of DL360 Generation 8 rackmount servers.

Operations Manager was used to monitor the health and performance of all the Hyper-V labs running Windows and Linux. Yes, I said Linux. Linux runs great on Hyper-V (it has for many years now) and System Center manages Linux very well. To monitor health proactively, we used the ProLiant and BladeSystem Management Packs for System Center Operations Manager. The HP Management Packs expose the native management capabilities through Operations Manager, such as:

    • Monitor, view, and get alerts for HP servers and blade enclosures
    • Directly launch iLO Advanced or SMH for remote management
    • Graphical View of all of the nodes via Operations Manager

    In addition, 3Par has management packs that plug right into System Center, so Operations Manager was used to manage the 3Par storage as well…

    clip_image007

    …having System Center integration with the 3Par storage came in handy when one of the drives died and Operations Manager was able to pinpoint exactly what disk failed and in what chassis…

    clip_image008

    Of course, everything in this Private Cloud solution is fully redundant so we didn’t even notice the disk failure for some time…

    In terms of managing the overall solution, here’s a view of some of the real time monitoring we were displaying and where many folks just sat and watched.

    clip_image009

    Virtual Machine Manager was used to provision and manage the entire virtualized lab delivery infrastructure and monitor and report on all the virtual machines in the system. In addition, HP has written a Virtual Machine Manager plug-in so you can view the HP Fabric from within System Center Virtual Machine Manager. Check this out:

    clip_image010

It should go without saying that to support a lab of this scale, with only a few minutes between the end of one lab and the beginning of the next, automation is a key precept. The Hands on Lab team was positively gushing about PowerShell: “In the past, when we needed to provide additional integration it was a challenge. WMI was there, but the learning curve for WMI is steep and we’re system administrators. With PowerShell built into WS2012, we EASILY created solutions and plugged into Orchestrator. It was a huge time saver.”
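To give a feel for why the team was so enthusiastic, here is a minimal sketch of the kind of provisioning the Windows Server 2012 Hyper-V PowerShell module makes trivial; the VM names, switch, paths, and sizes are hypothetical, and this is not the actual lab automation:

# Provision and start a small batch of lab VMs (all names and paths are hypothetical)
1..4 | ForEach-Object {
    $name = "Lab-VM-$_"
    New-VM -Name $name -MemoryStartupBytes 2GB -SwitchName "LabSwitch" `
           -NewVHDPath "C:\ClusterStorage\Volume1\$name.vhdx" -NewVHDSizeBytes 40GB
    Set-VM -Name $name -DynamicMemory -MemoryMinimumBytes 512MB -MemoryMaximumBytes 4GB
    Start-VM -Name $name
}

Wrapped in an Orchestrator runbook, a few lines like these can stamp out a multi-VM lab on demand.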

    MMS 2013: Pushing the limit…

As you may know, Windows Server 2012 Hyper-V supports up to 64 nodes and 8,000 virtual machines in a cluster. Well, we have a history of pushing the envelope with this gear, and this year was no different. At the very end of the show, the team fired up as many virtual machines as possible to see how high we could go. (These were all lightly loaded, as we didn’t have the time to do much more…) On Friday, the team fired up 8,312 virtual machines (~260 VMs per blade) running on the 32 node cluster. Each blade has 256 GB of memory, and we kept turning on VMs until all the memory was consumed.

    MMS 2013: More data…

    • Over the course of the week, over 48,000 virtual machines were provisioned. This is ~8,000 more than last year. Here’s a quick chart. Please note that Friday is just a half day…

    clip_image011

    • Average CPU Utilization across the entire pool of servers during labs hovered around 15%. Peaks were recorded a few times at ~20%. In short, even with thousands of Hyper-V VMs running on a 32 node cluster, we were barely taxing this well architected and balanced system.
    • While each blade was populated with 256 GB, they weren’t maxed. Each blade can take up to 384 GB.
    • Storage Admins: Disk queues for each of the hosts largely remained at 1.0 (1.0 is nirvana). When 3200 VMs were deployed simultaneously, the disk queue peaked at 1.3. Read that again. Show your storage admins. (No, those aren’t typos.)
• The HP I/O Accelerators used were the 768 GB and 1.2 TB versions. The only reason we used a mix of different sizes is because that’s what we had available.
    • All I/O was configured for HA and redundancy.
      • Network adapters were teamed with Windows Server 2012 NIC Teaming
      • Storage was fibre channel and was configured with Active-Active Windows Server Multi-Path I/O (MPIO). None of it was needed, but it was all configured, tested and working perfectly.
    • During one of the busiest days at MMS 2013 with over 3500 VMs running simultaneously, this configuration wasn’t even breathing hard. It’s truly a sight to behold and a testament to how well this Microsoft/HP Private Cloud Solution delivers.

    From a management perspective, System Center was the heart of the system providing health monitoring, ensuring consistent hardware configuration and providing the automation that makes a lab this complex successful. At its peak, with over 3500 virtual machines running, you simply can’t work at this scale without pervasive automation.

    From a hardware standpoint, the HP BladeSystem and 3Par storage are simply exceptional. Even at peak load running 3500+ virtual machines, we weren’t taxing the system. Not even close. Furthermore, the fact that the HP BladeSystem and 3Par storage integrate with Operations Manager, Configuration Manager and Virtual Machine Manager provides incredible cohesion between systems management and hardware. When a disk unexpectedly died, we were notified and knew exactly where to look. From a performance perspective, the solution provides a comprehensive way to view the entire stack. From System Center we can monitor compute, storage, virtualization and most importantly the workloads running within the VMs. This is probably a good time for a reminder…

    If you’re creating a virtualization or cloud infrastructure, the best platform for Microsoft Dynamics, Microsoft Exchange, Microsoft Lync, Microsoft SharePoint and Microsoft SQL Server is Microsoft Windows Server with Microsoft Hyper-V managed by Microsoft System Center. This is the best tested, best performing, most scalable solution and is supported end to end by Microsoft.

    One More Thing...

    Finally, we’ve been talking about Windows Server and System Center as part of our Microsoft Private Cloud Solution. I’d also like to point out that Windows Server 2012 Hyper-V is the same rock-solid, high performing and scalable hypervisor we use to power Windows Azure too.

    Read that again.

    That’s right. Windows Azure is powered by Windows Server 2012 Hyper-V. See you at TechEd.

    P.S. Hope to see you at the Hands on Lab at TechEd!

    More pictures below…

Here’s a close up of one of the racks. This rack has one of the c7000 chassis with 16 nodes for Hyper-V. It also includes the two clustered management heads used for System Center. At the bottom of the rack are the Uninterruptible Power Supplies.

    clip_image012

    …and here’s the back of one of the racks that held a c7000…

    clip_image013

HP knew there was going to be a lot of interest, so they created full-size cardboard replicas diagramming the hardware in use.

    clip_image014

    …and here’s one more…

    clip_image015

     

    John Marlin
    Senior Support Escalation Engineer
    Microsoft Enterprise Platforms Support

  • Back to the Loopback: Troubleshooting Group Policy loopback processing, Part 2

Welcome back! Kim Nichols here once again with the much-anticipated Part 2 of Circle Back to Loopback. Thanks for all the comments and feedback on Part 1. For those of you joining us a little late in the game, you'll want to check out Part 1: Circle ...read more
  • Surface Pro Firmware and Driver Pack for Enterprise deployments

Today I am going to discuss the Surface Pro Firmware and Driver Pack.  The Surface Pro Firmware and Driver Pack is a collection of drivers and firmware updates that enterprises will need if they want to deploy their own custom image to the Surface Pro using the Microsoft Deployment Toolkit, System Center Configuration Manager, or any other deployment solution.

    The Surface Pro Firmware and Driver Pack can be downloaded using the following link:

    http://go.microsoft.com/fwlink/?LinkID=301483

    This link will always direct you to the latest version of this driver package since we will be releasing this on a regular basis. 

The current package, which was released 5/14/2013, contains the following:

    • May2013SurfacePro.zip
    • Surface Pro - Enterprise Deployment Quick Start Guide.pdf

    The .pdf contains some general guidance around deploying to the Surface Pro using Microsoft Deployment Toolkit and SCCM.  It also explains how firmware updates work. 

    Hope this helps with your deployments!

    Scott McArthur
    Senior Support Escalation Engineer

  • We're back. Did you miss us?

    Hey all, David here. Now that we’ve broken the silence , we here on the DS team felt that we owed you, dear readers, an explanation of some sort. Plus, we wanted to talk about the blog itself, some changes happening for us, and what you should hopefully ...read more
  • Remoting Your Debug Crash Cart With KDNET

    This is Christian Sträßner from the Global Escalation Services team based in Munich, Germany.   Back in January, my colleague Ron Stock posted an interesting article about Kernel Debugging using a serial cable: How to Setup a Debug Crash Cart to ...read more
  • Our Bangalore Team is Hiring - Windows Server Escalation Engineer

Would you like to join the world’s best and most elite debuggers to enable the success of Microsoft solutions?   As a trusted advisor to our top customers you will be working with the most experienced IT professionals and developers in the industry ...read more
  • How to manage Out-of-Box Drivers with the use of Model Specific Driver Groups in Microsoft Deployment Toolkit 2012 Update 1

    Hello.  My name is Bill Spears and I am a Premier Field Engineer in the Windows/Platforms group at Microsoft.  In today’s blog I will discuss the approach that I use to manage Out-Of-Box drivers within the deployment process of MDT (Microsoft Deployment Toolkit 2012 Update 1).
       

If you are used to deploying legacy operating systems such as Windows XP, you may remember Out-of-Box driver management as a confusing task: many folks ended up with very long driver paths in their answer files (OEMPnPDriversPath) and a large folder structure on each machine containing drivers for any and all hardware deployed in that environment, which in turn was a nightmare to manage.  The good news is that managing your Out-of-Box drivers is now much easier and cleaner when deploying Windows 7 via Microsoft Deployment Toolkit 2012 Update 1.  Now you can import all of the Out-of-Box drivers that will be needed into your MDT Deployment Workbench, and those drivers can then be injected offline into your install WIM depending on which hardware you are deploying the image to.
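For context, offline injection is the same operation that DISM exposes directly against a mounted image; MDT drives this for you during Litetouch, so the following is illustration only and the paths are hypothetical:

# Mount the install image, inject every driver found under a folder, then commit
# (paths are hypothetical; MDT performs the equivalent step automatically)
dism.exe /Mount-Wim /WimFile:D:\Images\install.wim /Index:1 /MountDir:C:\Mount
dism.exe /Image:C:\Mount /Add-Driver /Driver:D:\DriverSources\Win7\x64 /Recurse
dism.exe /Unmount-Wim /MountDir:C:\Mount /Commit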
     

Driver injection works as follows: when a machine is booted via the Litetouch media, PNPEnum.exe collects a hardware inventory of the devices in that machine to determine which drivers will need to be injected into the image before the image is installed on the target machine. By default, the Inject Drivers step in the task sequence queries for driver matches in the “All Drivers” Selection Profile.  So by default, you could get away with dumping all of your drivers into the Out-of-Box Drivers node of MDT and have most deployment scenarios install the correct driver. In reality, this is a horrible way to approach driver management.  The reason I say that is that sometimes you will have multiple drivers that state, via their INF files, that they will work for a particular device (PNPID) when in reality that’s not always true.  This could be the result of a poorly written driver, or it could be that even though a driver is a match based on its INF, you may need to force the use of a different version of that driver for a particular hardware model.  As a best practice, it makes more sense to create a folder structure under the Out-of-Box Drivers node to better manage how you add drivers to your MDT Workbench. I prefer to create a subfolder for each operating system, then each architecture type (x86, x64), then each hardware model, as shown in the screenshot below:

    clip_image002

    In the screenshot above, notice the folder structure created under the Out-of-Box Drivers Node. Import the drivers for each specific model into the respective folder.
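Importing can also be scripted with the MDT PowerShell cmdlets, which helps once the folder tree grows; this is a minimal sketch, assuming the MDT 2012 Update 1 module is installed and using hypothetical deployment share and source paths:

# Load the MDT module and map the deployment share (paths are hypothetical)
Import-Module "C:\Program Files\Microsoft Deployment Toolkit\bin\MicrosoftDeploymentToolkit.psd1"
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "D:\DeploymentShare" | Out-Null

# Import one model's drivers into its matching Out-of-Box Drivers folder
Import-MDTDriver -Path "DS001:\Out-of-Box Drivers\Windows 7\x64\HP Notebook 123" `
                 -SourcePath "D:\DriverSources\HP Notebook 123" -Verbose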

This also makes it easier when it comes time to add updated drivers for existing hardware types, or to add a new folder for a new hardware type when you start getting newer model machines.  Having this well-organized folder structure also lets you be very granular about which drivers you make available during your deployment by making use of MDT variables and rules.  Consider the scenario where PNPEnum.exe detects a piece of hardware with a certain PNPID, but where we have multiple drivers in our Out-of-Box Drivers node that claim to be a match for this hardware based on their INF file information.  Without the use of Selection Profiles or DriverGroups, you would not be able to force which driver gets installed; the driver that wins the built-in driver ranking process would end up getting installed. By using DriverGroups, you can force the Inject Drivers step of the task sequence to only look in a specific folder for its choice of drivers, so you are in control of which specific drivers are available to which specific machines.  This is possible by adding model-specific sections to your customsettings.ini file that point to the specific folder containing only drivers for that model, based on the %model% variable that we detect during the Gather phase of Litetouch.

Now that we have our drivers imported into a manageable folder structure under the Out-of-Box Drivers node, it’s time to configure the customsettings.ini to add the custom model sections.  This is outlined in the screenshots below:

    First, to find the model name of your machine, you could use one of the following methods:

    clip_image004

    In the screenshot above, notice I’m running the command: wmic computersystem get model to obtain the model name.  Alternatively, you could obtain this info via MSinfo32 as shown below:

    clip_image006

    The screenshot above is a portion of the output from the MSInfo32 command.  Notice the System Model name.  Next we will configure the customsettings.ini.
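If you prefer the command line, the same model string can also be pulled with PowerShell; a quick one-line sketch:

# Returns the value that MDT's %model% variable will contain during the Gather phase
(Get-WmiObject -Class Win32_ComputerSystem).Model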

    Next, add the following to your customsettings.ini (Rule tab of Deployment Share properties)

    [Settings]
    Priority=Model,Default

    [Default]

    [HP Notebook 123]
    DriverGroup001=Windows 7\x64\%model%
    DriverSelectionProfile=nothing

    [HP Desktop 234]
DriverGroup001=Windows 7\x64\%model%
    DriverSelectionProfile=nothing

    [Dell Laptop 345]
DriverGroup001=Windows 7\x64\%model%
    DriverSelectionProfile=nothing

    [Lenovo Laptop 456]
DriverGroup001=Windows 7\x64\%model%
    DriverSelectionProfile=nothing

Notice we are using the Model variable to jump to the matching section of the rule file, which sets the driver path so the injection step only looks in the subfolder you point it to.  Now you can rest assured that when you run this task sequence on a particular model of machine, you know exactly where it will look for drivers based on the hardware detected in that machine.

If new drivers become available from the OEM for a particular model, you simply replace the drivers in the proper folder of your Out-of-Box Drivers node with the new ones. If you will be deploying this task sequence to a new piece of hardware, you simply create a new folder in your Out-of-Box Drivers node, import the new drivers into that folder, create the necessary section for that model in your customsettings.ini (as shown above), and then you are ready to deploy to the new hardware.

One thing to note about driver injection is that a driver must have an INF and SYS file in order for us to install it this way.  If a driver installs via an EXE and there is no way to extract it to reveal the actual INF and SYS files, then you would be forced to add that driver EXE package as an application and install it as an application.  Note that in that situation you would need to handle the network adapter drivers and mass storage device drivers differently, to ensure that the operating system can communicate with the storage device and the network adapter in order to complete the install; the vendor’s EXE program can then be launched as an application to install the other drivers.

Note that there are multiple approaches to handling Out-of-Box driver management with your deployments; this is just one approach, and it is the approach that works best for me.  Which method you use will depend on what works best for you in your particular situation, as there is not always a one-size-fits-all solution to the design process of building and managing your images. The key benefit I find in using this method is that you know exactly where your drivers are coming from, and once you have the framework set up it is easy to add updated drivers for existing models and easy to add new models to your deployments; most of all, it is very organized.

Another thing to note is that when using this procedure to deploy Lenovo machines, other special considerations may need to be made.  This is due to the fact that Lenovo reports back a model string that frequently changes. This is explained further in the following blog, which also recommends a solution for dealing with this scenario:
    http://blogs.technet.com/b/mniehaus/archive/2009/07/17/querying-the-mdt-database-using-a-custom-model.aspx

    I hope that this information is helpful for your design strategy of how to manage Out-of-Box drivers.

    Bill Spears
    Microsoft Corporation
    Premier Field Engineer

  • AD FS 2.0 Claims Rule Language Part 2

    Hello, Joji Oshima here to dive deeper into the Claims Rule Language for AD FS. A while back I wrote a getting started post on the claims rule language in AD FS 2.0. If you haven't seen it, I would start with that article first as I'm going to build on ...read more
  • Finally a Windows Task Manager Performance tab blog!

    Good morning AskPerf!  How many times have we looked at Windows Task Manager and wondered what the values on the Performance tab meant?  Why do they not add up?  What is the difference between Free and Available Memory, etc., etc., etc.?  In today’s post, we will take a look at these values and explain what each one means.

    Below is a screenshot of the Performance tab from a Windows 2008 R2 Server with 16GB RAM and a 16GB page file:

    clip_image002

     

    Resource Monitor’s Memory tab looked like this:

    clip_image004

    The Performance tab is divided into the following sections:

• CPU Usage – This indicates the percentage of processor cycles that are not idle at the moment. If this graph displays a high percentage continuously and you are not able to find any process chewing it up, it could be due to interrupts/DPCs (use Process Explorer to get more details), or it may mean that the processor is simply overloaded on the system. Depending on the number of CPUs on the system, we can see multiple graphs on the right, one per CPU.
    • CPU Usage History - Indicates how busy the processor has been. The graph only shows values since the time the Task Manager was opened.
    • Memory - Indicates the percentage of the physical memory that is currently being used.
    • Physical Memory Usage History - Indicates how much physical memory is being utilized. It also shows values since Task Manager was opened.
    • Physical Memory (MB) - Indicates the total and available physical memory, as well as the amount of memory used by system cache.
    • Kernel Memory (MB) - Indicates the memory used by the operating system and the drivers running in kernel mode (Paged and Non-paged pool).
• System - Provides totals for the number of handles, threads, and processes currently running. A process is a single executable program. A thread is an object within a process that runs program instructions. A handle is a reference to a resource used by the operating system. A process may have multiple threads, each of which in turn may have multiple handles. (A quick PowerShell way to pull these same totals is sketched after this list.)
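For reference, the System totals can be approximated from PowerShell; a quick sketch (the values shift slightly while it runs, since processes come and go):

# Rough equivalents of the Task Manager "System" totals
$procs = Get-Process
"Processes: {0}" -f $procs.Count
"Threads:   {0}" -f ($procs | ForEach-Object { $_.Threads.Count } | Measure-Object -Sum).Sum
"Handles:   {0}" -f ($procs | Measure-Object -Property Handles -Sum).Sum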

We need to keep in mind that the Memory Usage graph (shown in Windows Vista/2008/7/2008 R2) is the sum of all processes’ private working sets.  On older operating systems (XP/2003), the PF Usage value seen is the total system commit.  This represents the potential page file usage, i.e., how much page file would be used if all the private committed virtual memory in the system had to be paged out to disk.
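If you want to watch the commit numbers directly, the Memory object in Perfmon exposes them as counters; a minimal Get-Counter sketch:

# Total system commit, the commit limit, and the ratio between them
Get-Counter -Counter '\Memory\Committed Bytes', '\Memory\Commit Limit', '\Memory\% Committed Bytes In Use' |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize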

    Now taking a detailed look at the Physical Memory section:

• Total - This counter shows the total amount of RAM that is usable by the operating system. Note that there can be a difference between the installed RAM and the total RAM due to the physical memory shadowing setting in the BIOS, memory mapped to PCI devices, etc. To learn more about the reasons for this difference, click here.
• Cached - This represents the sum of the system working set, standby list, and modified page list. To find the matching counters in Perfmon, load up the following counters under the Memory object: Cache Bytes, Modified Page List Bytes, Standby Cache Core Bytes, Standby Cache Normal Priority Bytes, and Standby Cache Reserve Bytes (see the PowerShell sketch after this list).
• Available - This is the amount of physical memory that is currently available for use by the operating system, the drivers, and the processes. It is equal to the sum of the standby, free, and zero page lists.
    • Free - This is the sum of the free pages and the zero page lists.
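Here is the PowerShell sketch referenced above; it pulls the Perfmon counters that make up the Cached value, plus the free and zero page lists (these counters are available on Windows 7 / Server 2008 R2 era systems and later):

# Counters that add up to Task Manager's "Cached" value, plus the free and zero page lists
Get-Counter -Counter @(
    '\Memory\Cache Bytes',
    '\Memory\Modified Page List Bytes',
    '\Memory\Standby Cache Core Bytes',
    '\Memory\Standby Cache Normal Priority Bytes',
    '\Memory\Standby Cache Reserve Bytes',
    '\Memory\Free & Zero Page List Bytes'
) | Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize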

    Under the Kernel Memory section, we have:

• Paged - This is the paged pool currently in use, shown in MB.
• Nonpaged - This is the nonpaged pool currently allocated, shown in MB.

    For more details, click here.

    Here’s some information about different states of a Page in Memory (Reference: Windows Internal 5th Edition):

    • Active - (also called Valid) The page is part of a working set (either a process working set or the system working set) or it’s not in any working set (for example, nonpaged kernel page) and a valid PTE usually points to it.
• Standby - The page previously belonged to a working set but was removed (or was prefetched directly into the standby list). The page hasn’t been modified since it was last written to disk. The PTE still refers to the physical page but is marked invalid and in transition.
    • Modified - The page previously belonged to a working set but was removed. However, the page was modified while it was in use and its current contents haven’t yet been written to disk or remote storage. The PTE still refers to the physical page but is marked invalid and in transition. It must be written to the backing store before the physical page can be reused.
    • Modified no-write - Same as a modified page, except that the page has been marked so that the memory manager’s modified page writer won’t write it to disk. The cache manager marks pages as modified no-write at the request of file system drivers. For example, NTFS uses this state for pages containing file system metadata so that it can first ensure that transaction log entries are flushed to disk before the pages they are protecting are written to disk.
    • Free - The page is free but has unspecified dirty data in it. These pages can’t be given as a user page to a user process without being initialized with zeros, for security reasons.
    • Zeroed - The page is free and has been initialized with zeros by the zero page thread (or was determined to already contain zeros).
    • ROM - The page represents read-only memory.
• Bad - The page has generated parity or other hardware errors and can’t be used. This state is also used internally by the system for pages that may be transitioning from one state to another or are on internal look-aside lists.

    With that, we have come to the end of this post.  Please feel free to post additional questions below.  Until next time.

    -Digvijay

  • What killed my process?

    Hello, world!

    We're often challenged with a process that exits unexpectedly, but this doesn't always equate to an application "crash".  Occasionally this behavior is caused by cross-process termination, where one process terminates another one.

Discovering the root cause of this behavior used to be just slightly less cumbersome than a barefoot walk to Mordor, but an easy solution called "Silent Process Exit Monitoring" exists in Windows 7/2008 R2 and later operating systems.

The Debugging Tools for Windows include a GUI utility called GFLAGS.EXE that may be used to enable this monitoring with the following quick steps:

    1) Run GFLAGS.EXE and select the Silent Process Exit tab.

    2) Type the name of the process that is exiting unexpectedly.

    3) Hit the TAB key on the keyboard to refresh the GUI.

    4) Check the following boxes:

a. Enable Silent Process Exit Monitoring
    This enables the feature and tracks silent process exits in the application event log.
    (Event ID: 3001)

    b. Enable Notification
    This optionally creates a balloon popup with the same information in the event log.

    c. Ignore Self Exits
    This prevents superfluous logging when the application exits gracefully, such as when File / Exit is selected from a menu.

    5) Click OK to save the change and exit the GFLAGS tool.

    NOTE: The changes will take effect immediately for any new processes launched after the change.  A reboot is NOT required.

    clip_image001

When another process forces termination of the monitored process, the offending process name is listed in a balloon popup (if that option is selected) and in the application event log.

     

    clip_image002

     

    The following is an example of the event log entry.

    Source:        Microsoft-Windows-ProcessExitMonitor
    Event ID:      3001
    Level:         Information
    Description: The process 'calc.exe' was terminated by the process 'I Hate Calculators.exe' with termination code 0.

    Silent Process Exit may also be configured through the registry remotely if the machine is not accessible via the console or a remote desktop session.

    Example:

Windows Registry Editor Version 5.00

; FLG_MONITOR_SILENT_PROCESS_EXIT (0x200) enables the monitoring for calc.exe
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\calc.exe]
"GlobalFlag"=dword:00000200

; Suppress logging when calc.exe exits gracefully on its own
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SilentProcessExit\calc.exe]
"IgnoreSelfExits"=dword:00000001

    Note: Substitute the name of the process you want to monitor for CALC.EXE.
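The same settings can also be scripted; here is a minimal PowerShell sketch that mirrors the .reg file above (calc.exe is just the example target, and configuring a remote machine this way would additionally require PowerShell remoting or remote registry access):

# Enable silent process exit monitoring for one image (mirrors the .reg example above)
$image = 'calc.exe'   # substitute the process you want to monitor
$ifeo  = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\$image"
$spe   = "HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SilentProcessExit\$image"

# Create the per-image keys (New-Item -Force creates intermediate keys as needed)
New-Item -Path $ifeo -Force | Out-Null
New-Item -Path $spe  -Force | Out-Null

# FLG_MONITOR_SILENT_PROCESS_EXIT (0x200) turns the monitoring on
New-ItemProperty -Path $ifeo -Name GlobalFlag -PropertyType DWord -Value 0x200 -Force | Out-Null
# Suppress logging when the process exits on its own
New-ItemProperty -Path $spe -Name IgnoreSelfExits -PropertyType DWord -Value 1 -Force | Out-Null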

    More information on Silent Process Exit Monitoring is available on MSDN.

    Keep this in your bag of tricks for the next time you run into this niche scenario.

    - Aaron Maxwell