In computer science, load-link and store-conditional (LL/SC) are a pair of instructions used in multithreading to achieve synchronization. Load-link returns the current value of a memory location, while a subsequent store-conditional to the same memory location will store a new value only if no updates have occurred to that location since the load-link. Together, this implements a lock-free atomic read-modify-write operation.
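The semantics can be modeled in ordinary C. The following single-threaded sketch is purely illustrative (the names ll(), sc(), plain_store(), and the reservation flag are inventions of this example, not a real API): load-link sets a reservation, and store-conditional succeeds only if no other store has cleared it in the meantime.

```c
#include <stdbool.h>

/* Single-threaded model of LL/SC semantics (illustrative only). */
static int memory;           /* the shared location */
static bool reserved;        /* set by load-link, cleared by any store */

int ll(void) {               /* load-link: read the value and reserve it */
    reserved = true;
    return memory;
}

bool sc(int new_value) {     /* store-conditional: succeeds only if the
                                reservation is still intact */
    if (!reserved)
        return false;
    memory = new_value;
    reserved = false;
    return true;
}

void plain_store(int v) {    /* any intervening update kills the reservation */
    memory = v;
    reserved = false;
}
```

With this model, an atomic increment is the canonical retry loop: repeat `v = ll();` until `sc(v + 1)` succeeds.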
LL/SC was originally proposed by Jensen, Hagensen, and Broughton for the S-1 AAP multiprocessor at Lawrence Livermore National Laboratory. Load-link is also known as "load-linked", "load and reserve", or "load-locked".
Comparison of LL/SC and compare-and-swap
If any updates have occurred, the store-conditional is guaranteed to fail, even if the value read by the load-link has since been restored. As such, an LL/SC pair is stronger than a read followed by a compare-and-swap (CAS), which will not detect updates if the old value has been restored (see ABA problem).
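The ABA blind spot can be shown concretely. In this hedged sketch (the function name and values are arbitrary), the location changes from A to B and back to A between the initial read and the CAS; the CAS still succeeds because it only compares values, whereas an LL/SC pair would have failed, since updates occurred after the load-link.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Demonstrates the ABA problem with compare-and-swap. */
bool aba_demo(void) {
    atomic_int loc = 42;                  /* value A */
    int expected = atomic_load(&loc);     /* reader observes A */

    atomic_store(&loc, 7);                /* another thread writes B ... */
    atomic_store(&loc, 42);               /* ... then restores A */

    /* CAS cannot tell that the location was ever touched. */
    return atomic_compare_exchange_strong(&loc, &expected, 100);
}
```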
Real implementations of LL/SC do not always succeed even if there are no concurrent updates to the memory location in question. Any exceptional events between the two operations, such as a context switch, another load-link, or even (on many platforms) another load or store operation, will cause the store-conditional to fail spuriously. Older implementations will fail if any update at all is broadcast over the memory bus. Researchers call this weak LL/SC, as it breaks many theoretical LL/SC algorithms. Weakness is relative, and some weak implementations can be used for some algorithms.
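Because the store-conditional may fail spuriously, code built on weak LL/SC always retries in a loop. C11's atomic_compare_exchange_weak is specified to allow spurious failure precisely so that compilers can lower it to a single LL/SC pair on architectures such as ARM and POWER; the following sketch (an assumed helper, not from any particular codebase) shows the standard retry pattern.

```c
#include <stdatomic.h>

/* Atomic fetch-and-add via the weak-CAS retry loop.  On LL/SC
 * machines each iteration typically compiles to one LL/SC pair. */
int fetch_and_add(atomic_int *p, int delta) {
    int old = atomic_load(p);
    while (!atomic_compare_exchange_weak(p, &old, old + delta))
        ;  /* on failure, old is refreshed with the current value */
    return old;  /* the value before the addition */
}
```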
LL/SC is more difficult to emulate than CAS. Additionally, stopping running code between paired LL/SC instructions, such as when single-stepping through code, can prevent forward progress, making debugging tricky.
Implementations
Most platforms provide multiple sets of instructions for different data sizes, e.g., lwarx/stwcx. for word and ldarx/stdcx. for doubleword on the PowerPC.
Some CPUs require the address being accessed exclusively to be configured in write-through mode.
Some CPUs track the load-linked address at a cache-line or other granularity, such that any modification to any portion of the cache line (whether via another core's store-conditional or merely by an ordinary store) is sufficient to cause the store-conditional to fail.
All of these platforms provide weak LL/SC. The PowerPC implementation allows an LL/SC pair to wrap loads and even stores to other cache lines (although this approach is vulnerable to false cache line sharing). This allows it to implement, for example, lock-free reference counting in the face of changing object graphs with arbitrary counter reuse (which otherwise requires double compare-and-swap, DCAS).
The ARM implementation defines platform-dependent blocks, ranging from 8 bytes to 2048 bytes, and an LL/SC attempt in any given block fails if a normal memory access inside the same block occurs between the LL and the SC. The ARM implementation is the strongest and most practical.
LL/SC has two advantages over CAS when designing a load-store architecture: reads and writes are separate instructions, as required by the design philosophy (and pipeline architecture); and both instructions can be performed using only two registers (address and value), fitting naturally into LSA encoding schemes. CAS, on the other hand, requires three registers (address, old value, new value) and a dependency between the value read and the value written. x86, being a CISC architecture, does not have this constraint; though modern chips may well translate a CAS instruction into separate LL/SC micro-operations internally.
Hardware LL/SC implementations typically do not allow nesting of LL/SC pairs. A nested LL/SC mechanism can be used to provide an MCAS primitive (multi-word CAS, where the words can be scattered). In 2013, Trevor Brown, Faith Ellen, and Eric Ruppert implemented in software a multi-address LL/SC extension (which they call LLX/SCX) that relies on automated code generation; they used it to implement one of the best-performing concurrent binary search trees (actually a chromatic tree), slightly beating the JDK's CAS-based skip-list implementation.
- J. H. Anderson and M. Moir. "Universal constructions for multi-object operations". In Proc. 14th Annual ACM Symposium on Principles of Distributed Computing, pages 184–193, 1995. See their Table 1, Figures 1 & 2 and Section 2 in particular.
- James R. Larus; Ravi Rajwar (2007). Transactional Memory. Morgan & Claypool Publishers. p. 55. ISBN 978-1-59829-124-7.
- Keir Fraser (2004). "Practical lock-freedom". Technical Report UCAM-CL-TR-579, p. 20.
- Trevor Brown, Faith Ellen, and Eric Ruppert. "Pragmatic primitives for non-blocking data structures." In Proceedings of the 2013 ACM symposium on Principles of distributed computing, pp. 13-22. ACM, 2013. See also slides
- Trevor Brown, Faith Ellen, and Eric Ruppert. "A general technique for non-blocking trees." In Proceedings of the 19th ACM SIGPLAN symposium on Principles and practice of parallel programming, pp. 329-342. ACM, 2014.
- Eric H. Jensen, Gary W. Hagensen, and Jeffrey M. Broughton. "A New Approach to Exclusive Data Access in Shared Memory Multiprocessors", Technical Report UCRL-97663, Lawrence Livermore National Laboratory, Nov. 1987. https://e-reports-ext.llnl.gov/pdf/212157.pdf
- John D. Bruner, Gary W. Hagensen, Eric H. Jensen, Jay C. Pattin, and Jeffrey M. Broughton. "Cache Coherence on the S-1 AAP", Technical Report UCRL-97646, Lawrence Livermore National Laboratory, November 11, 1987. https://e-reports-ext.llnl.gov/pdf/212414.pdf
- D. Detlefs, P. Martin, M. Moir and Guy L. Steele, Jr. "Lock-free reference counting", 20th Annual ACM Symposium on Principles of Distributed Computing, 2001, pp. 190–199 
- Kirk Reinholtz. "Atomic Reference Counting Pointers", C/C++ Users Journal, December, 2004 
- Sites, R. L. "Alpha AXP architecture". Commun. ACM 36, 2 (Feb. 1993), 33–44. DOI= http://doi.acm.org/10.1145/151220.151226