In software engineering, a spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. Since the thread remains active but is not performing a useful task, the use of such a lock is a kind of busy waiting. Once acquired, spinlocks will usually be held until they are explicitly released, although in some implementations they may be automatically released if the thread being waited on (that which holds the lock) blocks, or "goes to sleep".
Because they avoid overhead from operating system process re-scheduling or context switching, spinlocks are efficient if threads are only likely to be blocked for a short period. For this reason, spinlocks are often used inside operating system kernels. However, spinlocks become wasteful if held for longer durations, as they may prevent other threads from running and require re-scheduling. The longer a lock is held by a thread, the greater the risk is that the thread will be interrupted by the OS scheduler while holding the lock. If this happens, other threads will be left "spinning" (repeatedly trying to acquire the lock), while the thread holding the lock is not making progress towards releasing it. The result is an indefinite postponement until the thread holding the lock can finish and release it. This is especially true on a single-processor system, where each waiting thread of the same priority is likely to waste its quantum (allocated time where a thread can run) spinning until the thread that holds the lock is finally finished.
Implementing spin locks correctly is difficult because one must take into account the possibility of simultaneous access to the lock, which could cause race conditions. Generally, such implementation is only possible with special assembly language instructions, such as atomic test-and-set operations, and cannot be easily implemented in high-level programming languages or in languages not supporting truly atomic operations. On architectures without such operations, or if high-level language implementation is required, a non-atomic locking algorithm may be used, e.g. Peterson's algorithm. But note that such an implementation may require more memory than a spinlock, be slower to allow progress after unlocking, and may not be implementable in a high-level language if out-of-order execution is allowed.
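As a rough illustration of the atomic test-and-set approach, a spinlock can be sketched in C11 using the standard atomics; the names spinlock_t, spin_lock and spin_unlock here are illustrative, not taken from any particular library:

```c
#include <stdatomic.h>

/* A minimal test-and-set spinlock sketch (C11). */
typedef struct {
    atomic_flag flag;   /* set = locked, clear = unlocked */
} spinlock_t;

#define SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void spin_lock(spinlock_t *l)
{
    /* atomic_flag_test_and_set atomically sets the flag and returns its
     * previous value; keep spinning while it was already set. */
    while (atomic_flag_test_and_set_explicit(&l->flag, memory_order_acquire))
        ;   /* busy-wait */
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->flag, memory_order_release);
}
```

The acquire/release orderings ensure that memory operations inside the critical section cannot be reordered past the lock boundaries.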
The following example uses x86 assembly language (Intel syntax) to implement a spinlock:

; Intel syntax

locked:                      ; The lock variable. 1 = locked, 0 = unlocked.
     dd      0

spin_lock:
     mov     eax, 1          ; Set the EAX register to 1.

     xchg    eax, [locked]   ; Atomically swap the EAX register with
                             ;  the lock variable.
                             ; This will always store 1 to the lock, leaving
                             ;  the previous value in the EAX register.

     test    eax, eax        ; Test EAX with itself. Among other things, this will
                             ;  set the processor's Zero Flag if EAX is 0.
                             ; If EAX is 0, then the lock was unlocked and
                             ;  we just locked it.
                             ; Otherwise, EAX is 1 and we didn't acquire the lock.

     jnz     spin_lock       ; Jump back to the MOV instruction if the Zero Flag is
                             ;  not set; the lock was previously locked, and so
                             ;  we need to spin until it becomes unlocked.

     ret                     ; The lock has been acquired, return to the calling
                             ;  function.

spin_unlock:
     mov     eax, 0          ; Set the EAX register to 0.

     xchg    eax, [locked]   ; Atomically swap the EAX register with
                             ;  the lock variable.

     ret                     ; The lock has been released.
The simple implementation above works on all CPUs using the x86 architecture. However, a number of performance optimizations are possible:
On later implementations of the x86 architecture, spin_unlock can safely use an unlocked MOV instead of the slower locked XCHG. This is due to subtle memory ordering rules which support this, even though MOV is not a full memory barrier. However, some processors (some Cyrix processors, some revisions of the Intel Pentium Pro (due to bugs), and earlier Pentium and i486 SMP systems) will do the wrong thing and data protected by the lock could be corrupted. On most non-x86 architectures, explicit memory barrier or atomic instructions (as in the example) must be used. On some systems, such as IA-64, there are special "unlock" instructions which provide the needed memory ordering.
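In portable terms, the same idea can be expressed with C11 atomics: the acquire path uses an atomic exchange, while the unlock is only a release store rather than a full-barrier exchange, letting the compiler emit a plain MOV on x86. This is a sketch with illustrative names, not a reference implementation:

```c
#include <stdatomic.h>

/* 'locked' mirrors the lock variable in the assembly example above. */
static atomic_int locked = 0;

static void spin_lock_var(void)
{
    /* The exchange returns the previous value; 1 means it was already held. */
    while (atomic_exchange_explicit(&locked, 1, memory_order_acquire))
        ;   /* busy-wait */
}

static void spin_unlock_var(void)
{
    /* A release store is enough to publish the critical section's writes;
     * no full memory barrier (locked XCHG) is required on release. */
    atomic_store_explicit(&locked, 0, memory_order_release);
}
```

On architectures with weaker memory models, the compiler inserts whatever barrier or special store instruction the release ordering requires.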
To reduce inter-CPU bus traffic, code trying to acquire a lock should loop reading without trying to write anything until it reads a changed value. Because of MESI caching protocols, this causes the cache line for the lock to become "Shared"; then, remarkably, there is no bus traffic while a CPU waits for the lock. This optimization is effective on all CPU architectures that have a cache per CPU, because MESI-style coherence protocols are so widespread.
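This "test and test-and-set" technique can be sketched as follows, again with C11 atomics (illustrative names; a sketch rather than production code):

```c
#include <stdatomic.h>

static atomic_int lock_word = 0;

/* Test and test-and-set: spin with plain loads, so the cache line stays
 * "Shared", and only attempt the atomic write once the lock looks free. */
static void ttas_lock(void)
{
    for (;;) {
        /* Read-only spin: no bus traffic while the line stays cached. */
        while (atomic_load_explicit(&lock_word, memory_order_relaxed))
            ;
        /* The lock appears free; try to claim it atomically. */
        if (!atomic_exchange_explicit(&lock_word, 1, memory_order_acquire))
            return;   /* previous value was 0: we now hold the lock */
    }
}

static void ttas_unlock(void)
{
    atomic_store_explicit(&lock_word, 0, memory_order_release);
}
```

Only the final exchange invalidates other CPUs' copies of the cache line, so contention on the bus is limited to the moments when the lock is actually handed over.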
Execution via OS methods and classes
Within operating-system facilities, spinlocks fall under thread processing, specifically delaying thread processing and timers. It is therefore possible to use a spinlock through high-level OS method calls instead of assembly language, and still avoid races and related problems. Some of these methods and classes (using the FCL, CLR and C# as an example) include:
The SpinWait struct (which internally calls Thread's static Sleep, Yield and SpinWait methods). This way, the thread can tell the system it does not want to be scheduled for a certain wait time:
public static void Sleep(Int32 millisecondsTimeout);
public static void Sleep(TimeSpan timeout);
On hyperthreaded CPUs with two hardware threads in line, a spinning thread can also yield execution to its sibling by calling Thread's SpinWait method:
public static void SpinWait(Int32 iterations);
Both the FCL for Windows and its Win32 cousins provide spin-lock classes and methods. In Win32 these include YieldProcessor, SwitchToThread and Sleep. In the FCL you would use System.Threading.SpinLock, a struct similar to a SpinLock class except that it uses SpinWait to improve execution. The SpinLock structure also offers timeout options and, being a value type, is memory friendly.
There are two caveats when using system-level spinlock methods, instances or classes:
- Do not pass SpinLock instances around: because SpinLock is a value type, it will be continually copied, and the copies lose synchronization.
- Do not mark SpinLock instance fields readonly: the lock's state must change as it is used, so a readonly instance will fail (as defined).
The primary disadvantage of a spinlock is that, while waiting to acquire a lock, it wastes time that might be productively spent elsewhere. There are two ways to avoid this:
- Do not acquire the lock. In many situations it is possible to design data structures that do not require locking, e.g. by using per-thread or per-CPU data and disabling interrupts.
- Switch to a different thread while waiting. This typically involves attaching the current thread to a queue of threads waiting for the lock, followed by switching to another thread that is ready to do some useful work. This scheme also has the advantage of guaranteeing that resource starvation does not occur, as long as all threads eventually relinquish the locks they acquire and scheduling decisions can be made about which thread should progress first. Spinlocks that never entail switching, usable by real-time operating systems, are sometimes called raw spinlocks.
Most operating systems (including Solaris, Mac OS X and FreeBSD) use a hybrid approach called "adaptive mutex". The idea is to use a spinlock when trying to access a resource locked by a currently-running thread, but to sleep if the thread is not currently running. (The latter is always the case on single-processor systems.)
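A true adaptive mutex needs scheduler knowledge to ask whether the lock's owner is currently running. As a rough approximation only, the spin-then-yield pattern below bounds the spinning and then gives up the processor; SPIN_TRIES and all the names here are assumptions for illustration, not any OS's actual implementation:

```c
#include <stdatomic.h>
#include <sched.h>     /* POSIX sched_yield() */

#define SPIN_TRIES 1000   /* arbitrary bound, chosen for illustration */

static void hybrid_lock(atomic_int *l)
{
    int tries = 0;
    while (atomic_exchange_explicit(l, 1, memory_order_acquire)) {
        if (++tries < SPIN_TRIES)
            continue;       /* owner is probably still running: keep spinning */
        sched_yield();      /* owner likely preempted: stop burning the CPU */
        tries = 0;
    }
}

static void hybrid_unlock(atomic_int *l)
{
    atomic_store_explicit(l, 0, memory_order_release);
}
```

Real adaptive mutexes replace the sched_yield() fallback with a proper sleep on a wait queue, waking the thread only when the lock is released.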
- Silberschatz, Abraham; Galvin, Peter B. (1994). Operating System Concepts (4th ed.). Addison-Wesley. pp. 176–179. ISBN 0-201-59292-4.
- Richter, Jeffrey (2012). CLR via C# (4th ed.). Microsoft Press. pp. 774–775. ISBN 9780735667457.
- Jonathan Corbet (9 December 2009). "Spinlock naming resolved". LWN.net. Retrieved 14 May 2013.
- Silberschatz, Abraham; Galvin, Peter B. (1994). Operating System Concepts (4th ed.). Addison-Wesley. p. 198. ISBN 0-201-59292-4.
- pthread_spin_lock documentation from The Open Group Base Specifications Issue 6, IEEE Std 1003.1, 2004 Edition
- Variety of spinlock Implementations from Concurrency Kit
- Article "User-Level Spin Locks - Threads, Processes & IPC" by Gert Boddaert
- Paper "The Performance of Spin Lock Alternatives for Shared-Memory Multiprocessors" by Thomas Anderson
- Paper "Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors" by John M. Mellor-Crummey and Michael L. Scott. This paper received the 2006 Dijkstra Prize in Distributed Computing.
- Spin-Wait Lock by Jeffrey Richter
- Austria C++ SpinLock Class Reference
- Interlocked Variable Access(Windows)