A system accident is an "unanticipated interaction of multiple failures" in a complex system. This complexity can be technological or organizational, and frequently involves both. A system accident can be easy to see in hindsight but difficult to foresee, because there are simply too many possible action pathways to seriously consider all of them. These accidents often resemble Rube Goldberg devices in the way that small errors of judgment, flaws in technology, and insignificant damage combine to form an emergent disaster.
As another way to describe system accidents, William Langewiesche wrote, "the control and operation of some of the riskiest technologies require organizations so complex that serious failures are virtually guaranteed to occur."
Safety features themselves can sometimes be the added complexity which leads to a system accident. J. Daniel Beckham writes, "It is ironic how often tightly coupled devices designed to provide safety are themselves the causes of disasters. Studies of the early warning systems set up to signal missile attacks on North America found that the failure of the safety devices themselves caused the most serious danger: false indicators of an attack that could have easily triggered a retaliation. Accidents at both Chernobyl and Three Mile Island were set off by failed safety systems."
In 2012 Charles Perrow wrote, "A normal accident [system accident] is where everyone tries very hard to play safe, but unexpected interaction of two or more failures (because of interactive complexity), causes a cascade of failures (because of tight coupling)." Charles Perrow uses the term normal accident to emphasize that, given the current level of technology, such accidents are highly likely over a number of years or decades.
There is an aspect of an animal devouring its own tail, in that more formality and effort to get it exactly right can actually make the situation worse. For example, the more organizational rigmarole that is involved in adjusting to changing conditions, the more employees will delay reporting the changing conditions. The more emphasis on formality, the less likely employees and managers will engage in real communication. And new rules can make the situation worse, both by adding another layer of complexity and by telling employees, yet again, that they are not to think but merely to follow the rules.
Scott Sagan
Scott Sagan has multiple publications discussing the reliability of complex systems, especially regarding nuclear weapons. The Limits of Safety (1993) provided an extensive review of close calls during the Cold War that could have resulted in a nuclear war by accident.
Possible system accidents
Apollo 13 space flight, 1970
Apollo 13 Review Board:
" [Introduction] . . . It was found that the accident was not the result of a chance malfunction in a statistical sense, but rather resulted from an unusual combination of mistakes, coupled with a somewhat deficient and unforgiving design [Emphasis added]. . .
"g. In reviewing these procedures before the flight, officials of NASA, ER, and Beech did not recognize the possibility of damage due to overheating. Many of these officials were not aware of the extended heater operation. In any event, adequate thermostatic switches might have been expected to protect the tank."
Three Mile Island, 1979
"It resembled other accidents in nuclear plants and in other high risk, complex and highly interdependent operator-machine systems; none of the accidents were caused by management or operator ineptness or by poor government regulation, though these characteristics existed and should have been expected. I maintained that the accident was normal, because in complex systems there are bound to be multiple faults that cannot be avoided by planning and that operators cannot immediately comprehend."
ValuJet (AirTran) 592, Everglades, 1996
Langewiesche points out that in "the huge MD-80 maintenance manual . . . By diligently pursuing his options, the mechanic could have found his way to a different part of the manual and learned that . . . [oxygen generators] must be disposed of in accordance with local regulatory compliances and using authorized procedures."
- That is, most safety procedures as written are "correct" in a sense, but neither helpful nor informative.
Step 2. The unmarked cardboard boxes, stored for weeks on a parts rack, were taken over to SabreTech's shipping and receiving department and left on the floor in an area assigned to ValuJet property.
Step 3. Continental Airlines, a potential SabreTech customer, was planning an inspection of the facility, so a SabreTech shipping clerk was instructed to clean up the work place. He decided to send the oxygen generators to ValuJet's headquarters in Atlanta and labelled the boxes "aircraft parts". He had shipped ValuJet material to Atlanta before without formal approval. Furthermore, he misunderstood the green tags to indicate "unserviceable" or "out of service" and jumped to the conclusion that the generators were empty.
Step 4. The shipping clerk made up a load for the forward cargo hold of the five boxes plus two large main tires and a smaller nose tire. He instructed a co-worker to prepare a shipping ticket stating "oxygen canisters - empty". The co-worker wrote, "Oxy Canisters" followed by "Empty" in quotation marks. The tires were also listed.
Step 5. A day or two later the boxes were delivered to the ValuJet ramp agent for acceptance on Flight 592. The shipping ticket listing tires and oxygen canisters should have caught his attention but didn't. The canisters were then loaded against federal regulations, as ValuJet was not registered to transport hazardous materials. It is possible that, in the ramp agent's mind, the possibility of SabreTech workers sending him hazardous cargo was inconceivable.
2008 financial institution near-meltdown
In a 2014 monograph, economist Alan Blinder stated that complicated financial instruments made it hard for potential investors to judge whether the price was reasonable. In a section entitled "Lesson # 6: Excessive complexity is not just anti-competitive, it’s dangerous," he further stated, "But the greater hazard may come from opacity. When investors don’t understand the risks that inhere in the securities they buy (examples: the mezzanine tranche of a CDO-squared; a CDS on a synthetic CDO,...), big mistakes can be made--especially if rating agencies tell you they are triple-A, to wit, safe enough for grandma. When the crash comes, losses may therefore be much larger than investors dreamed imaginable. Markets may dry up as no one knows what these securities are really worth. Panic may set in. Thus complexity per se is a source of risk."
Possible future applications of concept
Five-fold increase in airplane safety since the 1980s, but flight systems sometimes switch to unexpected "modes" on their own
In an article entitled "The Human Factor", William Langewiesche discusses the 2009 crash of Air France Flight 447 over the mid-Atlantic. He points out that, since the 1980s when the transition to automated cockpit systems began, safety has improved fivefold. Langewiesche writes, "In the privacy of the cockpit and beyond public view, pilots have been relegated to mundane roles as system managers." He quotes engineer Earl Wiener, who takes the humorous statement attributed to the Duchess of Windsor that one can never be too rich or too thin, and adds "or too careful about what you put into a digital flight-guidance system." Wiener says that the effect of automation is typically to reduce the workload when it is light, but to increase it when it is heavy.
Boeing engineer Delmar Fadden said that once capabilities are added to flight management systems, they become impossibly expensive to remove because of certification requirements. But if unused, they may in a sense lurk in the depths unseen.
Langewiesche cites industrial engineer Nadine Sarter who writes about "automation surprises," often related to system modes the pilot does not fully understand or that the system switches to on its own. In fact, one of the more common questions asked in cockpits today is, "What’s it doing now?" In response to this, Langewiesche again points to the fivefold increase in safety and writes, "No one can rationally advocate a return to the glamour of the past."
Healthier interplay between theory and practice in which safety rules are sometimes changed?
From the article "A New Accident Model for Engineering Safer Systems," by Nancy Leveson, in Safety Science, April 2004:
"However, instructions and written procedures are almost never followed exactly as operators strive to become more efficient and productive and to deal with time pressures. . . . . even in such highly constrained and high-risk environments as nuclear power plants, modification of instructions is repeatedly found and the violation of rules appears to be quite rational, given the actual workload and timing constraints under which the operators must do their job. In these situations, a basic conflict exists between error as seen as a deviation from the normative procedure and error as seen as a deviation from the rational and normally used effective procedure (Rasmussen and Pejtersen, 1994)."
- Perrow, Charles (1984 & 1999). Normal Accidents: Living with High-Risk Technologies, With a New Afterword and a Postscript on the Y2K Problem, Basic Books, 1984, Princeton University Press, 1999, page 70.
- In full, Langewiesche's quote is, "Charles Perrow's thinking is more difficult for pilots like me to accept. Perrow came unintentionally to his theory about normal accidents after studying the failings of large organizations. His point is not that some technologies are riskier than others, which is obvious, but that the control and operation of some of the riskiest technologies require organizations so complex that serious failures are virtually guaranteed to occur [Emphasis added]. Those failures will occasionally combine in unforeseeable ways, and if they induce further failures in an operating environment of tightly interrelated processes, the failures will spin out of control, defeating all interventions." From The Lessons of Valujet 592, The Atlantic, March 1998, in the section entitled A "Normal Accident" about two-thirds of the way into the article.
- The Crash of ValuJet 592: Implications for Health Care, J. Daniel Beckham, January 1999. DOC file: http://www.beckhamco.com/41articlescategory/054_crashofvalujet592.doc Mr. Beckham runs a health care consulting company, and this article is included on the company website.
- Getting to Catastrophe: Concentrations, Complexity and Coupling, Charles Perrow, The Montréal Review, December 2012.
- Reason, James (1990-10-26). Human Error. Cambridge University Press. ISBN 0-521-31419-4.
- Langewiesche, William (March 1998). The Lessons of Valujet 592, The Atlantic. See especially the last three paragraphs of this long article: “ . . . Understanding why might keep us from making the system even more complex, and therefore perhaps more dangerous, too.”
- Sagan, Scott D. (1993). The Limits of Safety: Organizations, Accidents, and Nuclear Weapons. Princeton U. Pr. ISBN 0-691-02101-5.
- Report of Apollo 13 Review Board ("Cortright Report"), Chair Edgar M. Cortright, Chapter 5: Findings, Determinations, and Recommendations.
- Perrow, C. (1982). "The President's Commission and the Normal Accident," in Sils, D., Wolf, C. and Shelanski, V. (Eds), Accident at Three Mile Island: The Human Dimensions, Boulder, Colorado: Westview Press, pp. 173–184. Abstract: https://inis.iaea.org/search/search.aspx?orig_q=RN:13677929
- Stimpson, Brian (Oct. 1998). Operating Highly Complex and Hazardous Technological Systems Without Mistakes: The Wrong Lessons from ValuJet 592, Manitoba Professional Engineer. This article provides counter-examples of complex organizations which have good safety records such as U.S. Nimitz-class aircraft carriers and the Diablo Canyon nuclear plant in California.
- See also Normal Accidents: Living with High-Risk Technologies, Charles Perrow, revised 1999 edition, pages 383 & 592.
- What Did We Learn from the Financial Crisis, the Great Recession, and the Pathetic Recovery? (PDF file), Alan S. Blinder, Princeton University, Griswold Center for Economic Policy Studies, Working Paper No. 243, November 2014.
- The Human Factor, Vanity Fair, William Langewiesche, September 17, 2014. " . . . pilots have been relegated to mundane roles as system managers, . . . Since the 1980s, when the shift began, the safety record has improved fivefold, to the current one fatal accident for every five million departures. No one can rationally advocate a return to the glamour of the past."
- A New Accident Model for Engineering Safer Systems, Nancy Leveson, Safety Science, Vol. 42, No. 4, April 2004. Paper based on research partially supported by National Science Foundation and NASA. " . . In fact, a common way for workers to apply pressure to management without actually going out on strike is to 'work to rule,' which can lead to a breakdown in productivity and even chaos. . "
- Cooper, Alan (2004-03-05). The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How To Restore The Sanity. Indianapolis: Sams - Pearson Education. ISBN 0-672-31649-8.
- Gross, Michael Joseph (May 29, 2015). Life and Death at Cirque du Soleil, This Vanity Fair article states: " . . . A system accident is one that requires many things to go wrong in a cascade. Change any element of the cascade and the accident may well not occur, but every element shares the blame. . . "
- Helmreich, Robert L. (1994). "Anatomy of a system accident: The crash of Avianca Flight 052". International Journal of Aviation Psychology. 4 (3): 265–284. doi:10.1207/s15327108ijap0403_4. PMID 11539174.
- Hopkins, Andrew (June 2001). "Was Three Mile Island A Normal Accident?" (PDF). Journal of Contingencies and Crisis Management. 9 (2): 65–72. doi:10.1111/1468-5973.00155. Archived from the original (PDF) on August 29, 2007. Retrieved 2008-03-06.
- Beyond Engineering: A New Way of Thinking About Technology, Todd La Porte, Karlene Roberts, and Gene Rochlin, Oxford University Press, 1997. This book provides counter-examples of complex systems which have good safety records.
- Pidgeon, Nick (Sept. 22, 2011). "In retrospect: Normal accidents," Nature.
- Perrow, Charles (May 29, 2000). "Organizationally Induced Catastrophes" (PDF). Institute for the Study of Society and Environment. University Corporation for Atmospheric Research. Retrieved February 6, 2009.
- Roush, Wade Edmund. Catastrophe and Control: How Technological Disasters Enhance Democracy, Ph.D. Dissertation, Massachusetts Institute of Technology, 1994, page 15. ' . . Normal Accidents is essential reading today for industrial managers, organizational sociologists, historians of technology, and interested lay people alike, because it shows that a major strategy engineers have used in this century to keep hazardous technologies under control -- multiple layers of "fail-safe" backup devices -- often adds a dangerous level of unpredictability to the system as a whole. . '
- "Test shows oxygen canisters sparking intense fire". CNN.com. 1996-11-19. Retrieved 2008-03-06.
- Wallace, Brendan (2009-03-05). Beyond Human Error. Florida: CRC Press. ISBN 978-0-8493-2718-6.