What's New in Defrag for Windows Server 2012/2012R2


Hello everyone, I am Palash Acharyya, Support Escalation Engineer with the Microsoft Platforms Core team. Over the past decade we have come a long way from Windows Server 2003 all the way to Windows Server 2012 R2. The operating system has changed enormously in that time, and many features have been added or modified along the way. One of them is disk defragmentation, and that is what I am going to talk about today.

Do I need Defrag?

To put it short and simple, defragmentation is a housekeeping job done at the file system level to curtail the constant growth of file system fragmentation. We have come a long way from the Windows XP/2003 days, when there was a GUI for defragmentation that showed the fragmentation level of a volume. Disk fragmentation is a slow, ongoing phenomenon that occurs when a file is broken up into pieces to fit on a volume. Since files are constantly being written, deleted, resized and moved from one location to another, fragmentation is a natural occurrence. When a file is spread out over several locations, it takes longer for a disk to complete a read or write IO. So, from a disk IO standpoint, is defrag necessary for better throughput? For example, when Windows Server Backup (or even a 3rd-party backup solution that uses VSS) runs, it needs a Minimum Differential Area, or MinDiffArea, to prepare a snapshot. You can query this area using the vssadmin list shadowstorage command (for details, read here). The catch is, there needs to be a chunk of contiguous free space without file fragmentation. The minimum requirement for the MinDiffArea is mentioned in the article quoted above.
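As a quick check, the shadow storage associations mentioned above can be inspected from an elevated command prompt (the volume letter below is just a placeholder):

```shell
:: List all shadow copy storage associations, including the
:: used, allocated and maximum diff-area sizes for each volume.
vssadmin list shadowstorage

:: Or query the diff area association for one specific volume.
vssadmin list shadowstorage /for=C:
```
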

Q. So, do I need to run defrag on my machine?

A. You can use the Sysinternals tool Contig.exe to check the fragmentation level before deciding to defrag. The tool is available here. Below is an example of the output we can get:


There are 13,486 fragments in all, so should I be bothered about it? Well, the answer is NO. Why?

Here you can clearly observe that I have 96 GB of free space on the C: volume, out of which the largest free space block (the largest contiguous free space block) is approximately 54 GB. So my data is not scattered across the entire disk. In other words, my disk is not getting hammered during read/write IO operations, and running defrag here would be useless.
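For reference, a free-space analysis like the one above can be produced with Contig; the switches below are from Contig's built-in help, and the volume letter is a placeholder:

```shell
:: Analyze free space fragmentation on a volume.
::   -f  analyze free space fragmentation
::   -v  verbose output (per-fragment detail)
contig -f -v C:
```
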

Q. Again, coming back to the previous question, is defrag at all necessary?

A. Well, it depends. We can only justify running defrag if fragmentation is causing serious performance issues; otherwise it is not worth the cost. We need to understand that file fragmentation is not always, or solely, responsible for poor performance. For example, there could be many files on a volume that are fragmented but are rarely accessed. The only way to tell whether you need defrag is to measure your workload and see if fragmentation is making performance slower and slower over time. If you determine that fragmentation is a problem, then you need to weigh how effective running defrag for an extended period will be against its overall cost. Cost here means the effort the operating system spends running the task: any improvement you see comes at the price of defrag running for a period of time and potentially interrupting production workloads. For the situation where you need to run defrag to unblock backup, our prescriptive guidance is to run defrag if a user encounters this error due to unavailability of contiguous free space. I wouldn't recommend running defrag on a schedule unless the backups are critical and consistently failing for the same reason.

A look at Windows Server 2008R2:

In Windows Server 2008/2008 R2, defragmentation ran as a weekly scheduled task. This is what it looked like:


The default options:


What changed in Server 2012:

There have been some major enhancements and modifications in the functionality of defrag in Windows server 2012. The additional parameters which have been added are:

/D     Perform traditional defrag (this is the default).

/K     Perform slab consolidation on the specified volumes.

/L     Perform retrim on the specified volumes.

/O     Perform the proper optimization for each media type.

The default scheduled task in Windows Server 2008 R2 was defrag.exe –c, which defragments all volumes. This was purely volume-specific, meaning the physical characteristics of the storage (whether it's a SCSI disk, a RAID set, a thin-provisioned LUN, etc.) were not taken into consideration. This has changed significantly in Windows Server 2012. Here the default scheduled task is defrag.exe –c –h –k, which performs slab consolidation on all volumes at normal priority (the default being low). To understand slab consolidation, you need to understand the storage optimization enhancements in Windows Server 2012, which have been explained in this blog.
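You can confirm what the built-in task actually runs on a given server; the task path below is the standard location for the defrag task:

```shell
:: Show the built-in defrag scheduled task, including the exact
:: command line ("Task To Run") it executes and its schedule.
schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /v /fo list
```
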

So what does Storage Optimizer do?

The Storage Optimizer in Windows 8/Server 2012 also takes care of maintenance activities like compacting data and compacting file system allocations to enable capacity reclamation on thinly provisioned disks. This is platform-specific: if your storage platform supports it, Storage Optimizer will consolidate lightly used 'slabs' of storage and release the freed slabs back to your storage pool for use by other Spaces or LUNs. This activity is done on a periodic basis, i.e., without any user intervention, and the scheduled task runs to completion provided it is not interrupted by the user. I am not getting into storage spaces and storage pools, as that would further lengthen this topic; you can refer to the TechNet Storage Spaces overview for details.

This is what Storage Optimizer looks like:


This is how it looks after I click Analyze:


For thin-provisioned storage, it looks like this:


The fragmentation percentage shown above is file-level fragmentation, NOT to be confused with storage optimization. In other words, if I click the Optimize option, it will perform storage optimization appropriate to the media type. In Fig 5., you might observe fragmentation on volumes E: and F: (I manually created file system fragmentation there). If I manually run defrag.exe –d (traditional defrag) in addition to the default –o (perform optimization), they won't contradict each other, as storage optimization and slab consolidation don't work at the file system level the way traditional defrag does. These options really show their potential in hybrid storage environments consisting of Storage Spaces, pools, tiered storage, etc. In brief, the default scheduled task for running defrag in Server 2012 and Server 2012 R2 does not do a traditional defrag job (defragmentation at the file system level) the way Windows Server 2008/2008 R2 did. To do a traditional defragmentation of these volumes, you need to run defrag.exe –d, and before you do, first verify whether it is required at all.
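Putting that together, a cautious sequence on a volume suspected of heavy file fragmentation might look like this (the drive letter is a placeholder; check the analysis report before committing to the full pass):

```shell
:: Step 1: analyze only - reports the fragmentation level and the
:: largest free space extent without moving any data.
defrag D: /A /V

:: Step 2: only if the report justifies it, run a traditional
:: file-level defrag at normal priority, with verbose output.
defrag D: /D /H /V
```
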

Q. So why did we stop the default file system defragmentation (defrag.exe -d)?

A. Simple: the cost and effort of running a traditional file system defragmentation as a weekly scheduled task no longer justified itself. When we talk about storage solutions holding terabytes of data, a traditional defrag (default file system defragmentation) takes a long time and also affects the server's overall performance.

What changed in Server 2012 R2:

The only addition in Windows Server 2012 R2 is the switch below:

/G     Optimize the storage tiers on the specified volumes.

Storage tiers, a new feature in Windows Server 2012 R2, allow SSD and hard drive storage to be used within the same storage pool. This new switch allows optimization of a tiered layout. To read more about tiered storage and how it is implemented, please refer to these articles:

Storage Spaces: How to configure Storage Tiers with Windows Server 2012 R2

What's New in Storage Spaces in Windows Server 2012 R2


In brief, we need to keep these things in mind:

1. The default scheduled task for defrag is as follows:

• Windows Server 2008 R2: defrag.exe –c

• Windows Server 2012: defrag.exe –c –h –k

• Windows Server 2012 R2: defrag.exe –c –h –k –g

On a client machine it will be defrag.exe –c –h –o; however, if thin-provisioned media is present, defrag will do slab consolidation as well.

2. The command line –c –h –k (for 2012) and –c –h –k –g (for 2012 R2) in the defrag task will perform storage optimization and slab consolidation on thin-provisioned media as well. Different virtualization platforms may report things differently: Hyper-V shows the media type as Thin Provisioned, but VMware shows it as a hard disk drive. The fragmentation percentage shown in the defrag UI has nothing to do with slab consolidation; it refers to the file fragmentation of the volume. If you want to address file fragmentation, you must run defrag with –d (as mentioned before).

3. If you are planning to deploy a PowerShell script to achieve the same, the command is simple:

PS C:\> Optimize-Volume -DriveLetter <drive letter name> -Defrag -Verbose

Details of all PowerShell cmdlets can be found here.
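For reference, the defrag.exe switches discussed above map onto Optimize-Volume parameters roughly as follows (a sketch; run Get-Help Optimize-Volume -Full for the authoritative list):

```shell
:: defrag /D  ->  traditional file-level defragmentation
powershell -Command "Optimize-Volume -DriveLetter C -Defrag -Verbose"

:: defrag /K  ->  slab consolidation on thin-provisioned media
powershell -Command "Optimize-Volume -DriveLetter C -SlabConsolidate -Verbose"

:: defrag /L  ->  retrim (SSDs and thin-provisioned LUNs)
powershell -Command "Optimize-Volume -DriveLetter C -ReTrim -Verbose"
```
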

That’s all for today, till next time.

Palash Acharyya
Support Escalation Engineer
Microsoft Platforms Support

Comments
  • HDDs do not appear in the drive list for scheduled optimization; is that the correct specification?

  • This is all nice and neat... in theory. I'd rather like to see a meaningful input on what actually happens in Windows client http://bit.ly/win8-defrag-ssd-en (after reading it carefully, and not just glancing for 30 seconds).

  • Hello Palash,

    Thank you very much for the explanation. It made me understand the new changes in W2k12 a lot better.

    I just came to the situation that I am seeing some warnings in SCCM related to the fragmentation of my W2K12 Domain Controllers disks. They all need a file defragmentation (-d) since they have file level fragmentation around 37% and 40% in their logical disks.

    I understand the cost of doing this in DCs, specifically in normal work hours. We will need to schedule this during off hours.

    I thought at the beginning that the server maintenance scheduled by default at 3:00 AM weekly should take care of this. After your explanation, correct me if I am wrong, there is no longer any file-level defragmentation done by the programmed scheduled task.

    I was just wondering if you have taken into consideration that at some point, with the new changes, we will have file-level fragmentation. Shouldn't the maintenance task take this into consideration and, when needed, schedule a file-level defragmentation? As I can see, this should be something manual, done when needed.

    Thanks in advance for your clarification.

  • I have to say that I am extremely disappointed by storage tiers in Windows Server 2012 R2. To summarize my primary issue: after setting up a basic 'RAID-1' mirrored data storage pool (just adding 8x 4TB drives to a pool with no striping options) but with a 128GB SSD fast tier and filling the pool with data, performance is basically acceptable (not great though). Then I go about re-writing a few 2+ TB .MP4 files (from a fast client workstation via a 1 GB LAN). Sometimes performance is OK, but often it's just terrible, on the order of 10MB/s throughput.

    I've traced this back to the defragsvc service on the server. Apparently its optimization is getting in the way of my work (since the HDD is the bottleneck). Didn't Microsoft implement IO priority in WSE2012? If so, why isn't this optimization at least running at a lower priority?

    Didn't you guys test this stuff? It's almost unusably slow and my only alternative, rebuilding the pool using two stripes, which will double IO performance but also require extending the pool in sets of 4 drives is going to take ages at this performance level (glacial).

  • How do I disable the optimizer? Disabling it is a best practice according to Microsoft.

  • Interesting: two minor problems I have encountered: the Management Pack in Operations Manager is still only looking at fragmentation. And in servers with DPM 2012 R2 installed the GUI does not allow selecting individual disks for optimisation. Only "select all" is available.

  • How do I disable defrag domain wide? I can schedule in a GPO, but not unschedule a locally scheduled task.

    We use a disk/block based backup. After a full backup, only blocks that have changed are backed up. A defrag can move and thus mark as changed numerous blocks. This causes our block based backup to backup the same old data repeatedly. This is causing our disk based backup units to grow far larger than necessary and they are not cheap.

    This can also be a problem for servers with shared SAN space. A hundred servers on SAN all defragging is going to hurt performance of all servers on that SAN.

    Visiting 400 servers to disable this would be a real pain. And as luck would have it, the OS or an update would reschedule it soon after. I need to know how to permanently disable this forever.

  • What's the advantage of "contig.exe -f -q -v c:" over "defrag /a /v" ?

  • The following fails for my thin-provisioned SAN attached disk:
    defrag L: -a -v (-o)
    ...with the following error message:
    The slab consolidation / trim operation cannot be performed because the volume alignment is invalid.

    This works without issue:
    Optimize-Volume -DriveLetter L -Analyze -Defrag -Verbose

    Any idea what's going on? Why does PowerShell's Optimize-Volume work but defrag.exe fail?

  • It appears the analysis fails with both defrag.exe and Optimize-Volume:
    defrag L: -a -v (-o)
    Optimize-Volume -DriveLetter L -Analyze -Verbose

    ...however, a full defrag has no issue:
    defrag L: -v (-o)
    Optimize-Volume -DriveLetter L -Defrag -Verbose

    What's going on with analysis failing?