I/O Performance impact of running Start-DedupJob with -Priority High


My name is Steven Andress and I am a Support Escalation Engineer with Microsoft’s Platforms Support Team.  This is a short blog post to alert you to a condition you might encounter when running a deduplication job using the Start-DedupJob PowerShell cmdlet. 

Start-DedupJob
http://technet.microsoft.com/en-us/library/hh848442.aspx

Start-DedupJob [-Type] <Type> [[-Volume] <String[]> ] [-AsJob] [-CimSession <CimSession[]> ] [-Full] [-InputOutputThrottleLevel <InputOutputThrottleLevel> ] [-Memory <UInt32> ] [-Preempt] [-Priority <Priority> ] [-ReadOnly] [-StopWhenSystemBusy] [-ThrottleLimit <Int32> ] [-Timestamp <DateTime> ] [-Wait] [ <CommonParameters>]

The Start-DedupJob cmdlet starts a new data deduplication job for one or more volumes. The Priority parameter sets the CPU and I/O priority for the job. The only way to run a deduplication job with High priority is through this cmdlet. When Priority is set to High, I/O from other processes using the volume may be slowed down or even blocked. If the volume is a Cluster Shared Volume (CSV), I/O to the volume from other cluster nodes can be similarly impacted.
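As a sketch of the difference (the volume letter is an example value, not from the original post), the same optimization job can be started at the default priority or at the high priority discussed above:

```powershell
# Start an optimization job at Normal priority; other I/O on the volume
# is throttled far less aggressively than with -Priority High.
Start-DedupJob -Volume "E:" -Type Optimization -Priority Normal

# High priority maximizes dedup throughput, but can slow or block other
# I/O to the volume (and, on a CSV, I/O from other cluster nodes).
Start-DedupJob -Volume "E:" -Type Optimization -Priority High
```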

Workaround:
Do not use "-Priority High" when starting dedup jobs during production hours. If you wish to use this switch, run the job after hours so that productivity is not affected.
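One way to follow this guidance, sketched below, is to queue the high-priority work for off-hours with the built-in dedup scheduling cmdlet, or to let a job yield automatically when the system is busy (the schedule name, volume letter, and start time are example values, not from the original post):

```powershell
# Queue a high-priority optimization job for off-hours using a dedup schedule.
New-DedupSchedule -Name "NightlyOptimize" -Type Optimization `
    -Priority High -Start 23:00 -DurationHours 6

# Alternatively, run at Normal priority and back off when users are active:
Start-DedupJob -Volume "E:" -Type Optimization -Priority Normal -StopWhenSystemBusy
```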

Regards,
Steven Andress
Senior Support Escalation Engineer
Microsoft Corporation

Comment:
  • Not the only one. ddpcli.exe enqueue /opt /vol * /priority high