Split-brain (computing)

From Wikipedia, the free encyclopedia

Split-brain is a term in computer jargon, based on an analogy with the medical split-brain syndrome. It indicates data or availability inconsistencies originating from the maintenance of two separate data sets with overlapping scope, either by deliberate design of the network, or because of a failure condition in which servers stop communicating and synchronizing their data with each other. The latter case is also commonly referred to as a network partition.

The jargon term is typically used when the internal and external Domain Name Services (DNS) for a corporate network are not communicating, so that separate DNS name spaces must be administered for external and for internal computers. This requires double administration, and if the name spaces overlap, the same fully qualified domain name (FQDN) may ambiguously occur in both, referring to different IP addresses.[1]
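The FQDN ambiguity described above can be illustrated with a minimal sketch. The zone contents and addresses here are hypothetical examples, not drawn from any real deployment:

```python
# Two DNS views administered separately, with overlapping name spaces.
internal_zone = {"www.example.com": "10.0.0.5"}     # answer given to internal clients
external_zone = {"www.example.com": "203.0.113.7"}  # answer given to external clients

def resolve(fqdn, zone):
    """Look up an FQDN in one DNS view; returns None if the name is absent."""
    return zone.get(fqdn)

# The split-brain symptom: one name, two different answers.
internal_ip = resolve("www.example.com", internal_zone)
external_ip = resolve("www.example.com", external_zone)
```

Depending on which view answers a query, clients connect to different hosts under the same name.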

High-availability clusters usually use a private heartbeat network connection to monitor the health and status of each node in the cluster. For example, split-brain syndrome may occur when all of the private links go down simultaneously while the cluster nodes are still running, each one believing it is the only node still alive. Each node may then serve clients and apply its own "idiosyncratic" updates to its data set, without any coordination with the other nodes.
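A minimal sketch of heartbeat monitoring shows how this failure mode arises. The `Node` class and the timeout value are illustrative assumptions, not any particular cluster manager's implementation:

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds without a heartbeat before a peer is presumed dead

class Node:
    """Tracks the last heartbeat received from each peer over the private link."""

    def __init__(self, name, peers):
        self.name = name
        self.last_seen = {p: time.monotonic() for p in peers}

    def receive_heartbeat(self, peer):
        self.last_seen[peer] = time.monotonic()

    def live_peers(self, now=None):
        now = time.monotonic() if now is None else now
        return [p for p, t in self.last_seen.items() if now - t < HEARTBEAT_TIMEOUT]

    def believes_alone(self, now=None):
        # If every private link drops at once, each surviving node reaches
        # this state simultaneously: the split-brain condition.
        return not self.live_peers(now)
```

When all heartbeats time out at once, every node's `believes_alone` returns true, and each proceeds as if it were the sole survivor.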

Shared storage may then experience data corruption; if the data stores are kept separate, data inconsistencies may arise that require operator intervention and cleanup.

Approaches for dealing with split-brain

Davidson et al.,[2] after surveying several approaches to handling the problem, classify them as either optimistic or pessimistic.

The optimistic approaches simply let the partitioned nodes work as usual; this provides a greater level of availability, at the cost of sacrificing correctness. Once the partition heals, automatic or manual reconciliation may be required to bring the cluster back to a consistent state. One current implementation of this approach is Hazelcast, which performs automatic reconciliation of its key-value store.[3]
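One common reconciliation strategy is last-write-wins, sketched below for two key-value stores that diverged during a partition. This is a generic illustration under the assumption that each value carries a write timestamp; it is not Hazelcast's actual merge policy:

```python
def reconcile_lww(store_a, store_b):
    """Merge two divergent key-value stores after a partition heals.

    Each value is a (payload, timestamp) pair; for keys written on both
    sides, the later write wins. Concurrent writes to the same key are
    where correctness is sacrificed: one side's update is silently lost.
    """
    merged = dict(store_a)
    for key, (value, ts) in store_b.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged
```

The silent loss of the older concurrent write is exactly the correctness cost the optimistic approach accepts in exchange for availability.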

The pessimistic approaches sacrifice availability in exchange for consistency. Once a network partition has been detected, access to the sub-partitions is limited in order to guarantee consistency. A typical approach, described by Coulouris et al.,[4] is quorum consensus: the sub-partition holding a majority of the votes remains available, while the remaining sub-partitions fence themselves off (auto-fencing). One current implementation of this approach is MongoDB replica sets.[5]
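The majority test at the heart of quorum consensus can be sketched in a few lines. This is a generic illustration of the voting rule, not any particular system's election protocol:

```python
def has_quorum(visible_nodes, cluster_size):
    """A sub-partition may keep serving only if it can see a strict
    majority of the configured cluster; otherwise it must fence itself
    off, since another sub-partition could hold the majority instead."""
    return visible_nodes > cluster_size // 2
```

With a 5-node cluster split 3/2, only the 3-node side retains quorum; in a 2/2 even split of a 4-node cluster, neither side does and both fence, which is one reason odd cluster sizes are usually preferred.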

References

  1. ^ Holme, Ruest, Ruest, Kellington. Windows Server 2008 Active Directory, Configuring (2nd ed.). ISBN 978-0-7356-5193-7.
  2. ^ Davidson, Susan; Garcia-Molina, Hector; Skeen, Dale (1985). "Consistency In A Partitioned Network: A Survey". ACM Computing Surveys (CSUR) 17 (3): 341–370. 
  3. ^ "Hazelcast Documentation". Retrieved 12 December 2012. 
  4. ^ Coulouris, George; Dollimore, Jean; Kindberg, Tim (2001). Distributed Systems: Concepts and Design (3rd ed.). Harlow: Addison-Wesley. ISBN 0-201-61918-0.
  5. ^ "MongoDB Replication Fundamentals". Retrieved 12 December 2012.