Causal consistency

Causal consistency is one of the main weak memory consistency models used in concurrent programming to constrain the ordering of memory accesses in distributed implementations of data structures, for example in distributed shared memory and distributed transactions.

Stronger consistency models such as sequential consistency and linearizability have drawbacks: they incur higher latency and space costs, and in some situations they are unattainable altogether. For these reasons, causal consistency was proposed in the 1990s [1] as a weaker model that improves performance and efficiency when defining the semantics of memory accesses in shared-memory models. Under causality, distributed executions are represented as partial orders, based on Lamport's notion of potential causality. [2]


Causal consistency can be defined as a model that captures the causal relationships between operations in the system and guarantees that every process observes causally related operations in a common causal order. In other words, all processes in the system agree on the order of the causally related operations. [1] For instance, if an operation A causes another operation B, then causal consistency ensures that every process observes A before it observes B. The model therefore distinguishes causally related operations from concurrent ones. In more detail, if one write operation influences another write operation, the two writes are causally related: one depends on the other. Operations unrelated by causality are concurrent, or causally independent. In particular, concurrent writes are independent: neither causes or influences the other. [3]

Thus, a system provides causal consistency if the following condition holds: write operations related by potential causality are seen by every process of the system in a common order, while concurrent writes may occur in any order and may be seen in different orders by different processes. [3]

Causal consistency is therefore weaker than sequential consistency, which requires all processes to observe not just causally related writes but all write operations in a common order. [4] However, causal consistency is stronger than PRAM consistency, which requires only that the write operations issued by a single process be observed in that order by every other process. [5] It follows that a sequentially consistent system is also causally consistent, and that causal consistency implies PRAM consistency, but not vice versa.


Here is an example of causal consistency. [6]

Causal relations are respected in the following event sequence:

P1 : W(x)1 W(x)3
P2 : R(x)1 W(x)2
P3 : R(x)1 R(x)3 R(x)2
P4 : R(x)1 R(x)2 R(x)3

Write operation W(x)2 is potentially caused by the earlier write W(x)1: the two writes are causally related because process P2 reads the value written by W(x)1 before issuing W(x)2. Every other process must therefore observe W(x)1 before observing W(x)2. In contrast, the two write operations W(x)2 and W(x)3 are concurrent, since no read operation connects them causally, so processes P3 and P4 may observe them in different orders.
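The distinction between causally related and concurrent writes can be made precise with vector timestamps. The following sketch illustrates this for the run above; the particular timestamp values and function names are illustrative assumptions, not part of the original example:

```python
def happened_before(a, b):
    """True iff vector timestamp a causally precedes b (a -> b)."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def concurrent(a, b):
    """True iff neither timestamp causally precedes the other."""
    return not happened_before(a, b) and not happened_before(b, a)

# Hypothetical vector timestamps for the run above (component 0 = P1, 1 = P2):
w1 = [1, 0]  # W(x)1 at P1
w2 = [1, 1]  # W(x)2 at P2, issued after P2 read the value of W(x)1
w3 = [2, 0]  # W(x)3 at P1, issued without observing W(x)2
```

Here `happened_before(w1, w2)` holds, so W(x)1 and W(x)2 are causally related, while `concurrent(w2, w3)` holds, so W(x)2 and W(x)3 may be observed in either order.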


The causal consistency model ensures four session guarantees, summarized below as defined by Terry et al.: [7]

  • Read Your Writes: a read operation reflects the write operations that the same session previously performed.
  • Monotonic Reads: successive read operations reflect a non-decreasing set of writes; a later read never observes a state older than one already seen.
  • Writes Follow Reads: a write operation is ordered after the writes whose effects were seen by the reads that influenced it.
  • Monotonic Writes: a session's write operations are ordered after its own earlier writes.
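As an illustration, a client session can enforce the first two guarantees by remembering the newest version it has read or written. This is a minimal sketch with hypothetical names; real systems such as Bayou track sets of writes rather than a single scalar version:

```python
class Session:
    """Sketch: version tracking for Read Your Writes and Monotonic Reads."""

    def __init__(self):
        self.max_read = 0     # newest version observed by a read in this session
        self.max_written = 0  # newest version produced by a write in this session

    def record_write(self, version):
        self.max_written = max(self.max_written, version)

    def can_read_from(self, replica_version):
        # Read Your Writes: the replica must include this session's own writes.
        # Monotonic Reads: the replica must not be older than a previous read.
        return replica_version >= max(self.max_read, self.max_written)

    def record_read(self, version):
        self.max_read = max(self.max_read, version)
```

A session that wrote version 3 would refuse to read from a replica still at version 2, and after reading version 5 it would refuse any replica older than that.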


In brief, implementing causal consistency hinges on tracking which write operations have been seen by which processes. In practice, this means building and maintaining a dependency graph over memory accesses that records the causal relationships between operations.

Causal consistency is also a useful model for solving problems in distributed systems that other consistency criteria cannot. For instance, in distributed databases, eventual consistency cannot order operations, whereas causal consistency can. [8] Because causal consistency ensures that every process observes operations in the same causal order, it is strictly stronger than eventual consistency. Causal consistency can moreover be defined for arbitrary abstract data types, such as queues or counters. [9]

Implementing causal consistency involves the following steps: (1) At every site, maintain context metadata, using either vector clocks or dependence graphs, summarising which updates happened-before the site's current state. (2) Piggy-back on every update message metadata summarising which updates happened-before it was sent. (3) On receiving an update, deliver it to the application if its piggy-backed context is contained in the local context. Otherwise the message has arrived too early: buffer it and deliver it only once the local context has caught up, waiting passively in the meantime for the missing updates to arrive. This approach preserves availability under partition. [10]
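The steps above can be sketched with vector clocks. The following is a simplified illustration (the class and method names are hypothetical) that omits persistence, garbage collection, and failure handling:

```python
class CausalSite:
    """Sketch of causal delivery: buffer updates until their context is met."""

    def __init__(self, site_id, n_sites):
        self.site_id = site_id
        # Step 1: context metadata -- a vector clock counting how many
        # updates from each site happened-before the local state.
        self.clock = [0] * n_sites
        self.buffer = []     # updates that arrived too early
        self.delivered = []  # updates applied, in causal order

    def send(self, payload):
        """Step 2: piggy-back the local context on every update message."""
        self.clock[self.site_id] += 1
        return (self.site_id, list(self.clock), payload)

    def _deliverable(self, msg):
        sender, ts, _ = msg
        # Deliverable iff this is the next update from `sender` and every
        # update it depends on from other sites is already delivered.
        return (ts[sender] == self.clock[sender] + 1 and
                all(ts[i] <= self.clock[i]
                    for i in range(len(ts)) if i != sender))

    def receive(self, msg):
        """Step 3: deliver if causally ready, otherwise keep it buffered."""
        self.buffer.append(msg)
        progress = True
        while progress:
            progress = False
            for m in list(self.buffer):
                if self._deliverable(m):
                    sender, ts, payload = m
                    self.clock[sender] = ts[sender]
                    self.delivered.append(payload)
                    self.buffer.remove(m)
                    progress = True
```

In the earlier example, a site that receives W(x)2 before W(x)1 buffers it, because the piggy-backed clock of W(x)2 shows a dependency on an update the site has not yet seen; once W(x)1 arrives, both are delivered in causal order.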

In a fully peer-to-peer system, the size of the metadata is at least proportional to the number of active writers. [11] This cost can be reduced by approximation techniques [12] or by restricting the communication topology. The search for efficient implementations of causal consistency remains a very active research area.

Since time and ordering are so fundamental to our intuition, it is hard to reason about a system that does not enforce causal consistency. Nevertheless, many distributed databases lack this guarantee, even some that provide serialisability. [13] Spanner does guarantee causal consistency, but it also forces strong consistency, thus eschewing availability under partition. More available databases that ensure causal consistency include MongoDB and AntidoteDB.


  1. ^ a b Ahamad, M., Neiger, G., Burns, J. E., Kohli, P., & Hutto, P. W. (1995). Causal memory: Definitions, implementation, and programming. Distributed Computing, 9(1), 37-49.
  2. ^ Lamport, L. (1978). Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7), 558-565.
  3. ^ a b Gogia, R., Chhabra, P., & Kumari, R. (2014). Consistency Models in Distributed Shared Memory Systems. International Journal of Computer Science and Mobile Computing, 196-201.
  4. ^ Lamport, L. (1979). How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE transactions on computers, 100(9), 690-691.
  5. ^ Lipton, R. J., & Sandberg, J. S. (1988). PRAM: A scalable shared memory. Princeton University, Department of Computer Science.
  6. ^ Mosberger, D. (1993). Memory consistency models. ACM SIGOPS Operating Systems Review, 27(1), 18-26.
  7. ^ Terry, D. B., Demers, A. J., Petersen, K., Spreitzer, M. J., Theimer, M. M., & Welch, B. B. (1994, September). Session guarantees for weakly consistent replicated data. In Proceedings of the Third International Conference on Parallel and Distributed Information Systems (pp. 140-149). IEEE.
  8. ^ Elbushra, M. M., & Lindström, J. (2015). Causal Consistent Databases. Open Journal of Databases (OJDB), 2(1), 17-35.
  9. ^ Perrin, M., Mostefaoui, A., & Jard, C. (2016, February). Causal consistency: beyond memory. In Proceedings of the 21st ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (p. 26). ACM.
  10. ^ Carlos Baquero and Nuno Preguiça. Why Logical Clocks Are Easy. Comm. ACM 59(4), pp. 43–47, April 2016.
  11. ^ B. Charron-Bost, Concerning the size of logical clocks in distributed systems. In Information Processing Letters, 39(1) p. 11--16, 1991.
  12. ^ Torres-Rojas, Francisco J. and Ahamad, Mustaque. Plausible clocks: constant size logical clocks for distributed systems. Distributed Computing, 12(4), 1999.
  13. ^ K. Daudjee and K. Salem. Lazy database replication with ordering guarantees. In Int. Conf. on Data Engineering, pp. 424–435, Apr. 2004.