A cascading failure is a process in a system of interconnected parts in which the failure of one or a few parts triggers the failure of other parts, and so on. Such a failure may happen in many types of systems, including power transmission, computer networking, finance, human body systems, and transportation systems.
Cascading failures may begin when one part of the system fails. When this happens, other parts must compensate for the failed component. That compensation can overload those parts, causing them to fail as well and prompting additional parts to fail one after another.
Cascading failure in power transmission
Cascading failure is common in power grids when one of the elements fails (completely or partially) and shifts its load to nearby elements in the system. Those nearby elements are then pushed beyond their capacity, become overloaded, and shift their load onto other elements in turn. Cascading failure is especially common in high-voltage systems, where a single point of failure (SPF) on a fully loaded or slightly overloaded system results in a sudden spike across all nodes of the system. The surge can push the already overloaded nodes into failure, setting off further overloads and taking down the entire system in a very short time.
This failure process cascades through the elements of the system like a ripple on a pond and continues until substantially all of the elements in the system are compromised and/or the system becomes functionally disconnected from the source of its load. For example, under certain conditions a large power grid can collapse after the failure of a single transformer.
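The load-shifting mechanism described above can be made concrete with a toy simulation (a sketch of the general idea, not a physical power-flow model): every element carries a load and has a fixed capacity, and a failed element's load is split evenly among its still-working neighbors.

```python
def simulate_cascade(loads, capacity, neighbors, initial_failure):
    """Toy cascading-overload model: a failed element's load is
    split evenly among its still-working neighbors; any neighbor
    pushed past its capacity fails in turn."""
    loads = dict(loads)          # work on a copy
    failed = set()
    queue = [initial_failure]
    while queue:
        node = queue.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [n for n in neighbors[node] if n not in failed]
        if not alive:
            continue
        share = loads[node] / len(alive)
        for n in alive:
            loads[n] += share
            if loads[n] > capacity:
                queue.append(n)
    return failed

# Four elements in a ring, each running at 80% of a capacity of 100.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(simulate_cascade({i: 80 for i in range(4)}, 100, neighbors, 0))
# → {0, 1, 2, 3}
```

Because every element runs at 80% of capacity, the 40 extra units shed by the first failure overload both neighbors, and the whole ring collapses; with loads of 50 the same initial failure stops after one element.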
Monitoring the operation of a system in real time, together with judicious disconnection of parts, can help stop a cascade. Another common technique is to establish a safety margin for the system by computer simulation of possible failures, setting safe operating levels below which none of the calculated scenarios is predicted to cause cascading failure, and identifying the parts of the network most likely to cause cascading failures.
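A minimal instance of this simulated-failure idea is "N-1" screening: for every single element, check whether its loss leaves the rest of the system connected. The brute-force sketch below (connectivity only, no power flow) flags lines whose individual loss splits the network:

```python
def connected(nodes, edges):
    """Depth-first check that every node is reachable from any other."""
    if not nodes:
        return True
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n] - seen)
    return seen == set(nodes)

def n_minus_1_critical_lines(nodes, edges):
    """Return the lines whose individual loss disconnects the network."""
    return [e for e in edges
            if not connected(nodes, [x for x in edges if x != e])]

# A triangle with a spur: only the spur line (3, 4) is critical.
nodes = {1, 2, 3, 4}
edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
print(n_minus_1_critical_lines(nodes, edges))  # → [(3, 4)]
```

Real contingency analysis also checks thermal and voltage limits after each simulated outage, but the structure of the computation, simulating each single failure and testing the remaining system, is the same.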
One of the primary problems in preventing electrical grid failures is that the control signal propagates no faster than the power overload itself: because the control signal and the electrical power move at the same speed, it is not possible to isolate the outage by sending a warning ahead of it to disconnect the threatened element.
Cascading failure caused the following power outages:
- Blackout in northeast America in 1965
- Blackout in Southern Brazil in 1999
- Blackout in northeast America in 2003
- Blackout in Italy in 2003
- Blackout in London in 2003
- European Blackout in 2006
- Blackout in northern India in 2012
- Blackout in South Australia in 2016
Cascading failure in computer networks
Cascading failures can also occur in computer networks (such as the Internet) in which network traffic is severely impaired or halted to or between larger sections of the network, caused by failing or disconnected hardware or software. In this context, the cascading failure is known by the term cascade failure. A cascade failure can affect large groups of people and systems.
The cause of a cascade failure is usually the overloading of a single, crucial router or node, which causes the node to go down, even if only briefly. It can also be caused by taking a node down for maintenance or upgrades. In either case, traffic is routed to or through an alternative path, which as a result becomes overloaded, goes down in turn, and so on. The failure also affects systems that depend on the node for regular operation.
The symptoms of a cascade failure include packet loss and high network latency, not just for single systems but for whole sections of a network or the Internet. The high latency and packet loss are caused by nodes that have ceased to operate due to congestion collapse: they remain present in the network but carry little or no useful traffic. As a result, routes through them can still be considered valid without actually providing communication.
If enough routes go down because of a cascade failure, a complete section of the network or internet can become unreachable. Although undesired, this can help speed up the recovery from this failure as connections will time out, and other nodes will give up trying to establish connections to the section(s) that have become cut off, decreasing load on the involved nodes.
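The load shedding described here, clients giving up or waiting before retrying, is commonly made deliberate on the client side with exponential backoff: each failed attempt waits roughly twice as long before retrying, with random jitter so that many clients do not retry in lockstep. A minimal sketch (function name and delay values are illustrative):

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    """Call operation(); on ConnectionError, wait base_delay * 2**attempt
    seconds (with jitter) before retrying, up to max_attempts tries."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up: stop adding load to the failing node
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Demonstration with a fake operation that fails twice, then succeeds.
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConnectionError("node overloaded")
    return "ok"

print(retry_with_backoff(flaky, base_delay=0.001))  # → ok
```

Without backoff, every client retrying immediately multiplies the load on an already-struggling node; with it, the aggregate retry traffic decays instead of compounding the overload.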
A common occurrence during a cascade failure is a walking failure, where sections go down, causing the next section to fail, after which the first section comes back up. This ripple can make several passes through the same sections or connecting nodes before stability is restored.
Cascade failures are a relatively recent development, with the massive increase in traffic and the high interconnectivity between systems and networks. The term was first applied in this context in the late 1990s by a Dutch IT professional and has slowly become a relatively common term for this kind of large-scale failure.
Network failures typically start when a single network node fails. Initially, the traffic that would normally go through the node is stopped, and systems and users get errors about not being able to reach hosts. Usually, the redundant systems of an ISP respond very quickly, choosing another path through a different backbone. This alternative route is longer, with more hops, and consequently passes through more systems that normally do not handle the volume of traffic suddenly offered to them.
This can cause one or more systems along the alternative route to go down, creating similar problems of their own.
Related systems are also affected. For example, DNS resolution might fail, breaking connections between systems that are not even directly involved with the nodes that went down. This, in turn, may cause seemingly unrelated nodes to develop problems, which can set off another cascade failure all on its own.
In December 2012, a partial loss (40%) of GMail service occurred globally, for 18 minutes. This loss of service was caused by a routine update of load balancing software which contained faulty logic—in this case, the error was caused by logic using an inappropriate all instead of the more appropriate some. The cascading error was fixed by fully updating a single node in the network instead of partially updating all nodes at one time.
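This class of quantifier mix-up can be illustrated with a hypothetical health-check (the function names and data below are invented for illustration, not Google's actual code): a load balancer should take a backend out of rotation only when all of its probes fail, not when any single one does.

```python
# Hypothetical load-balancer health-check logic; names are invented.
def backend_is_down_buggy(probe_results):
    # Buggy: declares the backend down if ANY single probe failed --
    # the kind of all-vs-some mix-up described above.
    return any(not ok for ok in probe_results)

def backend_is_down_fixed(probe_results):
    # Intended: the backend is down only if ALL probes failed.
    return all(not ok for ok in probe_results)

probes = [True, True, False]  # one transient probe failure
print(backend_is_down_buggy(probes))  # → True: healthy backend ejected
print(backend_is_down_fixed(probes))  # → False: backend stays in rotation
```

Under the buggy rule, one transient probe failure ejects a healthy backend, pushing its traffic onto the remaining backends, exactly the overload-and-shift pattern that starts a cascade.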
Cascading structural failure
Certain load-bearing structures with discrete structural components can be subject to the "zipper effect", where the failure of a single structural member increases the load on adjacent members. In the Hyatt Regency walkway collapse, a suspended walkway (already overstressed due to an error in construction) failed when a single vertical suspension rod broke, overloading the neighboring rods, which then failed sequentially, like a zipper. A bridge that can fail in this way is called fracture critical, and numerous bridge collapses have been caused by the failure of a single part. Properly designed structures use an adequate factor of safety and/or alternate load paths to prevent this type of mechanical cascade failure.
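The zipper effect comes down to simple arithmetic: n members sharing a total load W each carry W/n, and losing one member raises each survivor's share to W/(n-1). A sketch with illustrative numbers (not the Hyatt Regency figures):

```python
def zipper_cascade(n_members, total_load, member_capacity):
    """Count how many members fail in sequence after one is lost,
    assuming the load always redistributes evenly over survivors."""
    survivors = n_members - 1   # one member fails initially
    failed = 1
    while survivors > 0 and total_load / survivors > member_capacity:
        survivors -= 1
        failed += 1
    return failed

# 10 rods carrying 950 kN, each rated for 100 kN: the survivors are
# overloaded from the very first failure, and the whole set unzips.
print(zipper_cascade(10, 950, 100))  # → 10

# Rods rated for 120 kN instead: the first failure is absorbed.
print(zipper_cascade(10, 950, 120))  # → 1
```

The second case is the factor-of-safety idea in miniature: enough spare capacity that the redistributed load after a single failure still fits within each surviving member's rating.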
Other examples of cascading failures
Biochemical cascades exist in biology, where a small reaction can have system-wide implications. One negative example is the ischemic cascade, in which a small ischemic attack releases toxins that kill far more cells than the initial damage did, resulting in still more toxins being released. Current research aims to find a way to block this cascade in stroke patients and minimize the damage.
In the study of extinction, the disappearance of one species can trigger many other extinctions. Such a species is known as a keystone species.
Yet another example of this effect in a scientific experiment was the implosion in 2001 of several thousand fragile glass photomultiplier tubes used in the Super-Kamiokande experiment, where the shock wave caused by the failure of a single detector appears to have triggered the implosion of the other detectors in a chain reaction.
In finance, the risk of cascading failures of financial institutions is referred to as systemic risk: the failure of one financial institution may cause other financial institutions (its counterparties) to fail, cascading throughout the system. Institutions that are believed to pose systemic risk are deemed either "too big to fail" (TBTF) or "too interconnected to fail" (TICTF), depending on why they appear to pose a threat.
Note, however, that systemic risk is due not to individual institutions per se, but to their interconnections.
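That interconnection point can be made concrete with a toy counterparty cascade (a deliberately simplified sketch, not a calibrated model such as Eisenberg–Noe): each bank has a capital buffer and exposures to other banks; when a bank defaults, each creditor writes off what it is owed, and any creditor whose accumulated losses exceed its capital defaults in turn.

```python
def default_cascade(capital, exposures, initial_default):
    """exposures[a][b]: amount bank a is owed by bank b
    (written off in full if b defaults)."""
    losses = {bank: 0.0 for bank in capital}
    defaulted = set()
    queue = [initial_default]
    while queue:
        bank = queue.pop()
        if bank in defaulted:
            continue
        defaulted.add(bank)
        for creditor, owed in exposures.items():
            loss = owed.get(bank, 0.0)
            if loss and creditor not in defaulted:
                losses[creditor] += loss
                if losses[creditor] > capital[creditor]:
                    queue.append(creditor)
    return defaulted

# Bank C's failure wipes out B, whose failure then wipes out A.
capital = {"A": 10, "B": 5, "C": 5}
exposures = {"A": {"B": 12}, "B": {"C": 8}, "C": {}}
print(sorted(default_cascade(capital, exposures, "C")))  # → ['A', 'B', 'C']
```

No single bank here is large; it is the chain of exposures, each larger than the next holder's capital, that lets one failure propagate through the whole system.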
A related (though distinct) type of cascading failure in finance occurs in the stock market, exemplified by the 2010 Flash Crash.
Interdependent cascading failures
Diverse infrastructures such as water supply, transportation, fuel and power stations are coupled together. Owing to this coupling, interdependent networks are extremely sensitive to random failure, and in particular to targeted attacks: a failure of a small fraction of nodes in one network can produce an iterative cascade of failures across several interdependent networks. Electrical blackouts frequently result from such a cascade between interdependent networks, as the several large-scale blackouts of recent years have dramatically shown.

Blackouts are a striking demonstration of the important role played by the dependencies between networks. For example, the September 28, 2003 blackout in Italy resulted in a widespread failure of the railway network, health care systems, and financial services, and severely affected the communication networks. The partial failure of the communication system in turn further impaired the power grid management system, producing a positive feedback on the power grid. This example shows how interdependence can significantly magnify the damage in an interacting network system.

A framework for studying cascading failures between coupled networks, based on percolation theory, was developed recently. Cascading failures in spatially embedded systems have been shown to lead to extreme vulnerability. For the dynamic process of cascading failures, see the references below. A model for repairing failures so as to avoid cascading failures was developed by Di Muro et al.
Furthermore, it was shown that such systems when embedded in space are extremely vulnerable to localized attacks or failures. Above a critical radius of damage, the failure may spread to the entire system.
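The iterative inter-network cascade can be sketched with a much-simplified mutual-percolation rule (in the spirit of Buldyrev et al., but using "has a surviving neighbor" in place of the full giant-component condition): a node stays functional only while it has a live neighbor in its own network and its dependency partner in the other network is also live.

```python
def mutual_cascade(adj_a, adj_b, partner, initially_failed):
    """Iteratively remove nodes that lose all of their same-network
    neighbors or whose cross-network dependency partner has failed."""
    alive_a = set(adj_a) - initially_failed
    alive_b = set(adj_b) - initially_failed
    changed = True
    while changed:
        changed = False
        for alive, adj, other in ((alive_a, adj_a, alive_b),
                                  (alive_b, adj_b, alive_a)):
            for node in list(alive):
                has_neighbor = any(n in alive for n in adj[node])
                if not (has_neighbor and partner[node] in other):
                    alive.remove(node)
                    changed = True
    return alive_a, alive_b

# Network A is a star around hub "a1"; network B is a chain.
# Each a-node depends on the matching b-node and vice versa.
adj_a = {"a1": ["a2", "a3"], "a2": ["a1"], "a3": ["a1"]}
adj_b = {"b1": ["b2"], "b2": ["b1", "b3"], "b3": ["b2"]}
partner = {"a1": "b1", "a2": "b2", "a3": "b3",
           "b1": "a1", "b2": "a2", "b3": "a3"}

# Failing only the hub of A collapses both networks completely.
print(mutual_cascade(adj_a, adj_b, partner, {"a1"}))  # → (set(), set())
```

Each removal on one side can disable dependency partners on the other side, which can disable further nodes back on the first side; the loop runs until no node changes state, mirroring the iterative cascade described above.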
Model for overload cascading failures
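A widely used model of this kind is the Motter–Lai model (cited in the references): each node's load is its betweenness, its capacity is its initial load times a tolerance factor 1 + α, and after each removal the loads are recomputed and every node whose new load exceeds its capacity is removed, until the system settles. The sketch below uses a crude proxy for betweenness, counting one arbitrary shortest path per node pair rather than all shortest paths:

```python
from collections import deque
from itertools import permutations

def bfs_path(adj, alive, s, t):
    """One shortest path from s to t through live nodes, or None."""
    prev = {s: None}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v in alive and v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def node_loads(adj, alive):
    """Crude betweenness proxy: how many ordered node pairs route
    their (arbitrarily chosen) shortest path through each node."""
    load = {v: 0 for v in alive}
    for s, t in permutations(alive, 2):
        path = bfs_path(adj, alive, s, t)
        if path:
            for v in path[1:-1]:
                load[v] += 1
    return load

def motter_lai_cascade(adj, alpha, removed):
    """Capacity = (1 + alpha) * initial load; keep removing every
    overloaded node until the remaining loads fit their capacities."""
    capacity = {v: (1 + alpha) * l
                for v, l in node_loads(adj, set(adj)).items()}
    alive = set(adj) - {removed}
    while True:
        overloaded = [v for v, l in node_loads(adj, alive).items()
                      if l > capacity[v]]
        if not overloaded:
            return alive
        alive -= set(overloaded)

# Two clusters {0, 1} and {4, 5} joined through two parallel hubs 2 and 3.
adj = {0: [2, 3], 1: [2, 3], 2: [0, 1, 4, 5],
       3: [0, 1, 4, 5], 4: [2, 3], 5: [2, 3]}
print(sorted(motter_lai_cascade(adj, alpha=0.5, removed=2)))  # → [0, 1, 4, 5]
```

In this toy graph the tie-breaking BFS routes all cross-cluster traffic through hub 2, so hub 3 carries no initial load and has zero capacity; removing hub 2 reroutes everything through hub 3, overloads it, and leaves the two clusters cut off but internally stable.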
See also

- Brittle system
- Butterfly effect
- Byzantine failure
- Cascading rollback
- Chain reaction
- Chaos theory
- Cache stampede
- Congestion collapse
- Domino effect
- For Want of a Nail (proverb)
- Interdependent networks
- Kessler Syndrome
- Percolation theory
- Progressive collapse
- Virtuous circle and vicious circle
- Wicked problem
References

- Zhai, Chao. "Modeling and Identification of Worst-Case Cascading Failures in Power Systems" (PDF). Cornell University Library. Retrieved 11 April 2017.
- Daqing, Li; Yinan, Jiang; Rui, Kang; Havlin, Shlomo (2014-06-20). "Spatial correlation analysis of cascading failures: Congestions and Blackouts". Scientific Reports. 4 (1). Bibcode:2014NatSR...4E5381D. doi:10.1038/srep05381. ISSN 2045-2322.
- Hines, Paul D. H.; Dobson, Ian; Rezaei, Pooya (2016). "Cascading Power Outages Propagate Locally in an Influence Graph that is not the Actual Grid Topology". IEEE Transactions on Power Systems: 1–1. arXiv:1508.01775. doi:10.1109/TPWRS.2016.2578259. ISSN 0885-8950.
- Petroski, Henry (1992). To Engineer Is Human: The Role of Failure in Structural Design. Vintage. ISBN 978-0-679-73416-1.
- Huang, Xuqing; Vodenska, Irena; Havlin, Shlomo; Stanley, H. Eugene (2013). "Cascading Failures in Bi-partite Graphs: Model for Systemic Risk Propagation". Scientific Reports. 3. arXiv:1210.4973. Bibcode:2013NatSR...3E1219H. doi:10.1038/srep01219. ISSN 2045-2322. PMC 3564037. PMID 23386974.
- "Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack" (PDF).
- Rinaldi, S.M.; Peerenboom, J.P.; Kelly, T.K. (2001). "Identifying, understanding, and analyzing critical infrastructure interdependencies". IEEE Control Syst. 21: 11–25.
- V. Rosato, Issacharoff, L., Tiriticco, F., Meloni, S., Porcellinis, S.D., & Setola, R. (2008). "Modelling interdependent infrastructures using interacting dynamical models". International Journal of Critical Infrastructures. 4: 63–79. doi:10.1504/IJCIS.2008.016092.
- S. V. Buldyrev, R. Parshani, G. Paul, H. E. Stanley, S. Havlin (2010). "Catastrophic cascade of failures in interdependent networks". Nature. 464 (7291): 1025–8. arXiv:1012.0206. Bibcode:2010Natur.464.1025B. doi:10.1038/nature08932. PMID 20393559.
- Bashan, Amir; Berezin, Yehiel; Buldyrev, Sergey V.; Havlin, Shlomo (2013). "The extreme vulnerability of interdependent spatially embedded networks". Nature Physics. 9: 667–672. arXiv:1206.2062. Bibcode:2013NatPh...9..667B. doi:10.1038/nphys2727. ISSN 1745-2473.
- Zhou, D.; Bashan, A.; Cohen, R.; Berezin, Y.; Shnerb, N.; Havlin, S. (2014). "Simultaneous first- and second-order percolation transitions in interdependent networks". Phys. Rev. E. 90: 012803. arXiv:1211.2330. Bibcode:2014PhRvE..90a2803Z. doi:10.1103/PhysRevE.90.012803.
- Di Muro, M. A.; La Rocca, C. E.; Stanley, H. E.; Havlin, S.; Braunstein, L. A. (2016-03-09). "Recovery of Interdependent Networks". Scientific Reports. 6 (1). arXiv:1512.02555. Bibcode:2016NatSR...622834D. doi:10.1038/srep22834. ISSN 2045-2322.
- Berezin, Yehiel; Bashan, Amir; Danziger, Michael M.; Li, Daqing; Havlin, Shlomo (2015-03-11). "Localized attacks on spatially embedded networks with dependencies". Scientific Reports. 5 (1). Bibcode:2015NatSR...5E8934B. doi:10.1038/srep08934. ISSN 2045-2322.
- Motter, A. E.; Lai, Y. C. (2002). "Cascade-based attacks on complex networks". Phys. Rev. E. 66: 065102.
- Zhao, J.; Li, D.; Sanhedrai, H.; Cohen, R.; Havlin, S. (2016). "Spatio-temporal propagation of cascading overload failures in spatially embedded networks". Nature Communications. 7: 10094. Bibcode:2016NatCo...710094Z. doi:10.1038/ncomms10094.
- Toshiyuki Miyazaki (1 March 2005). "Comparison of defense strategies for cascade breakdown on SF networks with degree correlations" (PDF). Archived from the original (PDF) on 2009-02-20.
- Russ Cooper (1 June 2005). "(In)Secure Shell?". RedmondMag.com. Archived from the original on 2007-09-28. Retrieved 2007-09-08.
- US Department of Homeland Security (5 February 2007). "Cascade Net (simulation program)". Center for Homeland Defense and Security. Archived from the original on 2008-12-28. Retrieved 2007-09-08.
External links

- Space Weather: Blackout — Massive Power Grid Failure
- Cascading failure demo applet (Monash University's Virtual Lab)
- A. E. Motter and Y.-C. Lai, Cascade-based attacks on complex networks, Physical Review E (Rapid Communications) 66, 065102 (2002).
- P. Crucitti, V. Latora and M. Marchiori, Model for cascading failures in complex networks, Physical Review E (Rapid Communications) 69, 045104 (2004).
- Protection Strategies for Cascading Grid Failures — A Shortcut Approach
- I. Dobson, B. A. Carreras, and D. E. Newman, preprint A loading-dependent model of probabilistic cascading failure, Probability in the Engineering and Informational Sciences, vol. 19, no. 1, January 2005, pp. 15–32.
- Nova: Crash of Flight 111. On September 2, 1998, Swissair Flight 111, flying from New York to Geneva, slammed into the Atlantic Ocean off the coast of Nova Scotia with 229 people aboard. Originally believed to be a terrorist act. After a $39 million investigation, an insurance settlement of $1.5 billion, and more than four years, investigators unraveled the puzzle: cascading failure. What is the legacy of Swissair 111? "We have a window into the internal structure of design, checks and balances, protection, and safety." -David Evans, Editor-in-Chief of Air Safety Week.
- PhysicsWeb story: Accident grounds neutrino lab
- The Structure and Dynamics of Large Scale Organizational Networks (Dan Braha, New England Complex Systems Institute)
- From Single Network to Network of Networks http://havlin.biu.ac.il/Pdf/Bremen070715a.pdf