Search Guys

Everything search: product commentary, search strategies, and general technical guidance

SharePoint 2013 and Search Service PerformanceLevel

Just a quick note…  Set-SPEnterpriseSearchService still has the PerformanceLevel option in SP2013, but apart from the crawler component it doesn’t do anything.  And in practice, changing from Maximum (the default) to PartlyReduced won’t achieve much throttling anyway.  Best to just leave it at Maximum.
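
For reference, a minimal PowerShell sketch of checking and then setting the level from the SharePoint 2013 Management Shell (the valid values are Reduced, PartlyReduced, and Maximum):

    # Load the SharePoint snap-in if it is not already loaded
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Show the current crawler performance level
    Get-SPEnterpriseSearchService | Select-Object PerformanceLevel

    # Leave it at the default, per the note above
    Set-SPEnterpriseSearchService -PerformanceLevel Maximum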

  • I do not agree. I've set the level to Reduced and I see a big difference in CPU utilization (and crawl time of course).

  • Interesting... in SharePoint 2013, the PerformanceLevel only applies to the Crawler component, which does significantly less than it did in 2010.

  • I concur with Remco. In my case, reducing the thread priority had a significant impact on overall CPU usage on my two search servers.

  • Admittedly this is an old thread now, but...

    This impacts both the total number of threads AND the number of threads per host (i.e. per start address being crawled) that the Crawl Component can spawn (e.g. for "Accessing the Network" [gathering content] or for [feeding] "Threads in Plugins"). Also, when PerformanceLevel is "Maximum", the threads run at "Normal" scheduling priority, whereas with Reduced/PartlyReduced they run at "Below Normal" priority... which can definitely have an impact on overall CPU.

    The number of hosts [start addresses] being crawled at one time also has a big impact on the number of threads the crawl component spawns. Per the definition of the "Performance" levels, there is an overall thread limit and a limit on the number of threads per host. In the common case where you crawl just a single host, the number of threads spawned by the crawler is bound by the "threads per host" limit.

    However, when crawling multiple hosts (whether that is 1 content source with 30 start addresses or 30 content sources with 1 start address each), the crawl component starts spawning threads for each host to perform the gathering/feeding work, and [assuming 30 hosts being crawled concurrently] you will [almost certainly] start hitting the "total" threads [per Crawl Component] ceiling... and this is when you start seeing 100% CPU on a crawler and it locking out the server (because the threads are all running at normal priority).

    And if the crawl component cannot spawn as many threads as the workload demands (e.g. when concurrently crawling many hosts at the same time), you are most likely hitting a scenario called [crawl thread] starvation... meaning the crawler needs more threads to handle the work for that many hosts, but cannot allocate them. In this case, you need more crawl components (see the sketch after the comments)...

    *(...I plan to write this up in more detail in a blog)

    I hope this helps,
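
If you do hit crawl-thread starvation as described in the comment above, adding a crawl component in SharePoint 2013 is a search topology change. A minimal sketch, assuming a second search server named "Server2" (the server and variable names here are illustrative):

    # Clone the active search topology
    $ssa    = Get-SPEnterpriseSearchServiceApplication
    $active = Get-SPEnterpriseSearchTopology -SearchApplication $ssa -Active
    $clone  = New-SPEnterpriseSearchTopology -SearchApplication $ssa -Clone -SearchTopology $active

    # Add a crawl component on the second server
    $instance = Get-SPEnterpriseSearchServiceInstance -Identity "Server2"
    New-SPEnterpriseSearchCrawlComponent -SearchTopology $clone -SearchServiceInstance $instance

    # Activate the cloned topology to apply the change
    Set-SPEnterpriseSearchTopology -Identity $clone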
