The term use error has been introduced to replace the commonly used terms human error and user error. The new term, which has been adopted by international standards organizations for medical devices (see "Use errors in health care" below for references), suggests that accidents should be attributed to the circumstances, rather than to the human beings who happened to be there.
The need for the terminological change
The term "use error" was first used in May 1995 in an MD+DI guest editorial, “The Issue Is ‘Use,’ Not ‘User,’ Error,” by William Hyman. Traditionally, human errors are considered a special aspect of human factors and are accordingly attributed to the human operator, or user. This approach assumes that the system design is perfect and that the only source of use errors is the human operator. For example, the U.S. Department of Defense (DoD) HFACS classifies use errors as attributable to the human operator, disregarding improper design and configuration settings, which often result in missing alarms or in inappropriate alerting.
The change of term was needed because of a common malpractice of the stakeholders (the responsible organizations, the authorities, journalists) in cases of accidents: instead of investing in fixing the error-prone design, management attributed the error to the users. The need for the change has been pointed out by accident investigators:
- Early in 1983, Erik Hollnagel pointed out that the term human error refers to the outcome, not to the cause: a user action is typically classified as an error only if its results are painful.
- In the story “Leap of Faith” in his book “Set Phasers on Stun”, Steve Casey suggested that the 1990 accident of Indian Airlines Flight 605 near Bangalore could have been avoided had the investigators of the 1988 Air France Flight 296 accident at the Mulhouse-Habsheim airport considered the circumstances (an exceptional situation) rather than the pilots (human error).
- In his book “Managing the Risks of Organizational Accidents” (organizational models of accidents), James Reason explained and demonstrated that the circumstances of accidents could often have been controlled by the responsible organization, not by the operators.
- In his book “The Field Guide to Understanding Human Error”, Sidney Dekker argued that blaming the operators according to “The Old View” results in defensive behavior by operators, which hampers the efforts to learn from near-misses and from accidents.
- In a study by Harel and Weiss, the authors suggested that the Zeelim accident during an Israeli military exercise in 1992 could have been prevented had the Israeli forces focused on learning from the accident of 1990, rather than on punishing the field officers involved in the exercise.
Use errors vs. force majeure
A mishap is typically considered either a use error or a force majeure:
- A use error is a mishap in which a human operator is involved. Typically, such mishaps are attributed to the failure of the human operator.
- A force majeure is a mishap that does not involve a human being in the chain of events preceding it.
Use errors in health care
In 1998, Cook, Woods and Miller presented to a workgroup on patient safety the concept of hindsight bias, exemplified by celebrated accidents in medicine. The workgroup pointed at the tendency to attribute accidents in health care to isolated human failures. They provide references to early research on the effect that knowledge of the outcome, which was unavailable beforehand, has on later judgement about the processes that led up to that outcome. They explain that in looking back, we tend to oversimplify the situation that the actual practitioners faced, and they conclude that focusing on hindsight knowledge prevents our understanding of the richer story: the circumstances of the human error.
International standards for medical devices define a use error as:
- an act or omission of an act that results in a different medical device response than intended by the manufacturer or expected by the user.
ISO standards about medical devices and procedures provide examples of use errors, attributing them to human factors such as slips, lapses and mistakes. In practice, this means that they are attributed to the user, implying the user’s accountability. The U.S. Food and Drug Administration glossary of medical devices provides the following explanation of this term:
- "Safe and effective use of a medical device means that users do not make errors that lead to injury and they achieve the desired medical treatment. If safe and effective use is not achieved, use error has occurred. Why and how use error occurs is a human factors concern."
With this interpretation by ISO and the FDA, the term ‘use error’ is actually synonymous with ‘user error’. Another approach, which distinguishes ‘use errors’ from ‘user errors’, is taken by IEC 62366. Its Annex A includes an explanation justifying the new term:
- "This International Standard uses the concept of use error. This term was chosen over the more commonly used term of “human error” because not all errors associated with the use of medical devices are the result of oversight or carelessness on the part of the user of the medical device. Much more commonly, use errors are the direct result of poor user interface design."
This explanation complies with “The New View”, which Sidney Dekker suggested as an alternative to “The Old View”. This interpretation favors investigations intended to understand the situation rather than to blame the operators.
In a 2011 report draft on health IT usability, the U.S. National Institute of Standards and Technology (NIST) defines "use error" in healthcare IT this way: “Use error is a term used very specifically to refer to user interface designs that will engender users to make errors of commission or omission. It is true that users do make errors, but many errors are due not to user error per se but due to designs that are flawed, e.g., poorly written messaging, misuse of color-coding conventions, omission of information, etc.".
Sources of use errors
Task-oriented systems engineering considers two sources of user difficulty:
- User errors
- User inability to handle system failures.
Example of user error
An example of an accident due to a user error is the ecological disaster of 1967 caused by the Torrey Canyon supertanker. The accident resulted from a combination of several exceptional events, which left the supertanker heading directly toward the rocks. At that point, the captain failed to change course because the steering control lever had inadvertently been set to the Control position, which disconnected the rudder from the wheel at the helm.
Examples of user failure to handle system failure
An operational definition of use errors
The ad-hoc definition implies that a use error is the consequence of a user command. This complies with the reactive approach to safety (hazard prevention), which may end in a fatalistic attitude, implying that use errors cannot be avoided. By contrast, the proactive approach enables the prevention of such mishaps by considering their circumstances, regardless of the results (ergonomics). A proactive definition proposed by Harel and Weiss is:
- "A user command is a use error if the results do not comply with the designer’s intention."
The proactive definition is not operational, as intentions are beyond the scope of common engineering practices. To enable the detection of unexpected events, the definition is rephrased using engineering terms, such as design requirements and guidelines. An operational definition of a use error proposed by Harel and Weiss is:
- "A user command is a use error if it is not in the scope of predefined user commands appropriate to the operational scenario."
This definition complies with the STAMP model proposed by Nancy Leveson. According to this model, normal use is defined by constraints on the system operation, and accidents may be attributed to deviations from these constraints.
This definition is operational, because:
- the predefined commands are known,
- the operational scenarios can be formalized, and
- the user commands can be assigned to operational procedures or constraints associated with the operating scenario.
For example, the use error in the Torrey Canyon oil spill may be described by:
- The predefined commands, including setting the steering control to any of the Manual, Automatic or Control positions
- Formalizing the Navigation and the Maintenance operational scenarios
- Assigning the Control position to the Maintenance scenario, but not to the Navigation scenario.
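The three steps above amount to a simple membership check: a command is a use error when it lies outside the set of commands predefined for the active scenario. The sketch below is illustrative only; the command and scenario names (`set_control`, `Navigation`, and so on) are invented to model the Torrey Canyon description and do not come from any published formalization.

```python
# Illustrative sketch of the operational definition: a user command is a
# use error if it is not among the predefined commands appropriate to the
# current operational scenario. All names here are hypothetical.

# Predefined commands, assigned per operational scenario.
SCENARIO_COMMANDS = {
    "Navigation": {"set_manual", "set_automatic"},
    "Maintenance": {"set_manual", "set_automatic", "set_control"},
}

def is_use_error(command: str, scenario: str) -> bool:
    """Return True if the command is outside the scenario's predefined set."""
    return command not in SCENARIO_COMMANDS.get(scenario, set())

# The Torrey Canyon case: selecting the Control position while navigating
# is out of scope for the Navigation scenario, hence a use error.
print(is_use_error("set_control", "Navigation"))   # True
print(is_use_error("set_control", "Maintenance"))  # False
```

Note that an unknown scenario defaults to an empty command set, so every command issued in an unformalized scenario is flagged, matching the definition's requirement that the scenario itself be predefined.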
Classifying use errors
The URM model characterizes use errors in terms of the user’s failure to manage a system deficiency. Six categories of use errors are described in a URM document:
- Expected faults with risky results;
- Expected faults with unexpected results;
- Expected user errors in identifying risky situations;
- User Errors in handling expected faults;
- Expected errors in function selection;
- Unexpected faults, due to operating in exceptional states.
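As a data-structure sketch, the six categories above could be encoded as an enumeration for use in mishap-report tagging; the identifier names below are paraphrases of the list, not taken from the URM document.

```python
from enum import Enum, auto

class UseErrorCategory(Enum):
    """Illustrative encoding of the six URM use-error categories."""
    EXPECTED_FAULT_WITH_RISKY_RESULTS = auto()
    EXPECTED_FAULT_WITH_UNEXPECTED_RESULTS = auto()
    EXPECTED_ERROR_IDENTIFYING_RISKY_SITUATION = auto()
    ERROR_HANDLING_EXPECTED_FAULT = auto()
    EXPECTED_ERROR_IN_FUNCTION_SELECTION = auto()
    UNEXPECTED_FAULT_IN_EXCEPTIONAL_STATE = auto()

print(len(UseErrorCategory))  # 6
```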
Critics
Erik Hollnagel argues that going from an 'old' view to a 'new' view is not enough: one should go all the way to a 'no' view. This means that the notion of error, whether user error or use error, might be destructive rather than constructive. Instead, he proposes to focus on the performance variability of everyday actions, on the basis that this performance variability is both useful and necessary. In most cases the result is that things go right, and in a few cases that things go wrong, but the reason is the same. Hollnagel expanded on this in his writings about the efficiency–thoroughness trade-off principle of resilience engineering and the Resilient Health Care Net.
References
- Department of Defense Human Factors Analysis and Classification System: A mishap investigation and data analysis tool
- Weiler and Harel: Managing the Risks of Use Errors: The ITS Warning Systems Case Study
- Dekker: The Re-Invention of Human Error
- Erik Hollnagel home page
- Hollnagel: Why "Human Error" is a Meaningless Concept
- Steve Casey: Set Phasers on Stun
- Sidney Dekker: The Field Guide to Understanding Human Error
- Harel & Weiss: Mitigating the Risks of Unexpected Events by Systems Engineering
- Dekker, 2007: The Field Guide to Understanding Human Error
- Cook RI, Woods DD, Miller C, 1998: A Tale of Two Stories: Contrasting Views of Patient Safety
- FDA: Medical Devices Glossary
- NISTIR 7804: Technical Evaluation, Testing and Validation of the Usability of Electronic Health Records, Draft, Sept. 2011, p. 10
- Zonnenshain & Harel: Task-oriented Systems Engineering, INCOSE 2009 Conference, Singapore
- Steve Casey: A Memento of Your Service, in Set Phasers on Stun, 1998
- Nancy Leveson home page
- Hollnagel: Understanding Accidents: From Root Causes to Performance Variability
- Hollnagel: The ETTO Principle: Efficiency-Thoroughness Trade-Off
- Hollnagel, Paries, Woods, Wreathall (editors): Resilience Engineering in Practice
- The Resilient Health Care Net
- IEC 62366:2007 - Medical devices -- Application of usability engineering to medical devices