Thoughts from the EPS Windows Server Performance Team
Good morning AskPerf! I realize it’s been a couple of weeks since we’ve posted. The reason is that we’ve all been a bit busy wrapping up the end of our fiscal year, writing our reviews and, of course, enjoying the Fourth of July holiday. But we’re back, and we’re going to kick things off with a short post on the basics of synchronization mechanisms. Not every IT Admin we work with has a programming background, and when we start delving into the minutiae of a server hang dump analysis we can cause plenty of confusion if we don’t define our terms first. If you’re not terribly familiar with Interrupt Request Levels, you might want to go back and read our earlier post on IRQL. What we’re basically talking about today are synchronization mechanisms used to protect shared memory locations in kernel-mode memory. Without robust synchronization mechanisms, we run the risk of losing data.
Consider this example: you have two threads that both need to increment the same global variable, whose initial value is 1. Since each thread adds 1, the correct final value is 3. If both threads run simultaneously on a multiprocessor system, however, one thread’s update can be lost when the other thread overwrites it. Thread A reads the value, as does Thread B, so both hold the initial value of 1. Thread B updates the variable first, writing back 1 + 1 = 2, and completes its operation. But since there are no synchronization mechanisms at work, Thread A still holds the stale value of 1 that it read earlier. Rather than adding 1 to the updated value of 2, it adds 1 to the old value and writes back 1 + 1 = 2, overwriting Thread B’s work. The final value is 2, not the correct 3. This is what we sometimes refer to as a race condition.
Now, on a uniprocessor system, race conditions can also be caused by one thread pre-empting another. Let’s take another look at our example – but this time on a uniprocessor system. Thread A reads in the value of the variable, and then gets pre-empted by Thread B which is a higher-priority thread. Thread B reads in the value, performs its operation of 1+1 and returns the new value of 2. Now, Thread A continues its work – but because there are no synchronization mechanisms to handle the pre-emption, Thread A uses the original value of 1 that it read prior to being pre-empted and adds 1 to it – returning 2. Thus, the update performed by Thread B is lost.
Now, had we had an elementary synchronization mechanism in place, we would have seen very different results. Thread A and B still have the same work to perform. This time, however, Thread A acquires a lock first. It reads the variable into a register and adds 1. The new value is 2. Thread A writes this value back and then releases the lock, allowing Thread B to start. Thread B then acquires the lock and reads in the value of the variable, which is now 2 as a result of Thread A’s operation, not 1 as in our previous example. Once Thread B performs its operation, writes the value back and releases the lock, the value is 3, which is the correct result. Basically, the lock has ensured that one thread’s read / write operations complete before another thread is allowed to access the shared variable.
Granted, these examples are very simplistic but they do illustrate why synchronization mechanisms are important. There are a number of common synchronization methods available – determining which one to use depends on the operation. The four common methods are:
That’s all for today. In our next post, we’ll cover Interlocked Operations. Until next time …
- CC Hameed