Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information produced without automation, even if that information is correct. The problem has come under increasing scrutiny as decision making in such critical contexts as intensive care units, nuclear power plants, and aircraft cockpits has increasingly involved computerized system monitors and decision aids. Errors of automation bias tend to occur when decision-making depends on computers or other automated aids and the human element is largely confined to monitoring the tasks underway. Examples of such situations involve not only urgent matters such as flying on automatic pilot but also mundane matters such as the use of spell-checking programs.
The tendency toward overreliance on automated aids is known as "automation misuse".
Errors of commission and omission
Automation bias can take the form of omission errors, which occur when automated devices fail to detect or indicate problems, or commission errors, which occur when users follow an automated directive without taking into account other sources of information.
Errors of omission have been shown to result from cognitive vigilance decrements, while errors of commission result from a combination of a failure to take information into account and an excessive faith in the reliability of automated aids. Errors of commission occur for three reasons: (1) overt redirection of attention away from the automated aid; (2) diminished attention to the aid; (3) active discounting of information that counters the aid's recommendations. Omission errors take place when the human decision-maker fails to notice an automation failure, for example when a spell-check program misses a misspelled word or offers a false correction.
Training that focused on the reduction of automation bias and related problems has been shown to lower the rate of commission errors, but not of omission errors.
The presence of automatic aids, as one source puts it, "diminishes the likelihood that decision makers will either make the cognitive effort to seek other diagnostic information or process all available information in cognitively complex ways." It also renders users more likely to conclude their assessment of a situation too hastily after being prompted by an automatic aid to take a specific course of action.
Factors
According to one source, there are three main factors that lead to automation bias: first, the human tendency to choose the least cognitively demanding approach to decision-making, which is called the cognitive miser hypothesis; second, the tendency of humans to view automated aids as having an analytical ability superior to their own; and third, the tendency of humans to reduce their own effort when sharing tasks, whether with another person or with an automated aid.
Other factors leading to an over-reliance on automation, and thus to automation bias, include inexperience in a task (though inexperienced users tend to benefit most from automated decision support systems), lack of confidence in one's own abilities, a lack of readily available alternative information, or a desire to save time and effort on complex tasks or under high workloads. It has been shown that people who have greater confidence in their own decision-making abilities tend to be less reliant on external automated support, while those with more trust in decision support systems (DSS) are more dependent upon them.
Screen design
One study, published in the Journal of the American Medical Informatics Association, found that the position and prominence of advice on a screen can affect the likelihood of automation bias: prominently displayed advice, correct or not, is more likely to be followed. Another study, however, seemed to discount the importance of this factor. According to a further study, a greater amount of on-screen detail can make users less "conservative" and thus increase the likelihood of automation bias. One study showed that making individuals accountable for their performance or the accuracy of their decisions reduced automation bias.
Awareness of process
One study also found that when users are made aware of the reasoning process employed by a decision support system, they are likely to adjust their reliance accordingly, thus reducing automation bias.
Team vs. individual
The performance of jobs by crews instead of individuals acting alone does not necessarily eliminate automation bias. One study has shown that when automated devices failed to detect system irregularities, teams were no more successful than solo performers at responding to those irregularities.
Automation failure and "learned carelessness"
It has been shown that automation failure is followed by a drop in operator trust, which in turn is succeeded by a slow recovery of trust. The decline in trust after an initial automation failure has been described as the first-failure effect. By the same token, if automated aids prove to be highly reliable over time, the result is likely to be a heightened level of automation bias. This is called "learned carelessness."
Provision of system confidence information
In cases where system confidence information is provided to users, that information itself can become a factor in automation bias.
External pressures
Studies have shown that the more external pressures are exerted on an individual's cognitive capacity, the more he or she may rely on external support.
Definitional problems
Although automation bias has been the subject of many studies, there continue to be complaints that it remains ill-defined and that reporting of incidents involving automation bias is unsystematic.
A review of various automation bias studies categorized the types of tasks in which automated aids were used as well as the functions those aids served. Tasks were categorized as monitoring, diagnosis, or treatment tasks. Types of automated assistance were listed as alerting automation, which tracks important changes and alerts the user; decision-support automation, which may provide a diagnosis or recommendation; and implementation automation, in which the automated aid performs a specified task. Generalizing the effects of automation bias across these categories may undermine the development of specific and effective solutions.
Automation-induced complacency
The concept of automation bias is viewed as overlapping with automation-induced complacency, also known more simply as automation complacency. Like automation bias, it is a consequence of the misuse of automation and involves problems of attention. While automation bias involves a tendency to trust decision-support systems, automation complacency involves insufficient attention to and monitoring of automation output, usually because that output is viewed as reliable. "Although the concepts of complacency and automation bias have been discussed separately as if they were independent," writes one expert, "they share several commonalities, suggesting they reflect different aspects of the same kind of automation misuse." It has been proposed, indeed, that the concepts of complacency and automation bias be combined into a single "integrative concept" because these two concepts "might represent different manifestations of overlapping automation-induced phenomena" and because "automation-induced complacency and automation bias represent closely linked theoretical concepts that show considerable overlap with respect to the underlying processes."
Automation complacency has been defined as "poorer detection of system malfunctions under automation compared with under manual control." NASA's Aviation Safety Reporting System (ASRS) defines complacency as "self-satisfaction that may result in non-vigilance based on an unjustified assumption of satisfactory system state." Several studies have indicated that it occurs most often when operators are engaged in both manual and automated tasks at the same time. This complacency can be sharply reduced when automation reliability varies over time instead of remaining constant, but is not reduced by experience and practice. Both expert and inexpert participants can exhibit automation bias as well as automation complacency. Neither of these problems can be easily overcome by training.
The term "automation complacency" was first used in connection with aviation accidents or incidents in which pilots, air-traffic controllers, or other workers failed to check systems sufficiently, assuming that everything was fine when, in reality, an accident was about to occur. Operator complacency, whether or not automation-related, has long been recognized as a leading factor in air accidents.
To some degree, user complacency offsets the benefits of automation, and when an automated system's reliability level falls below a certain point, automation is no longer a net asset. One 2007 study suggested that this occurs when reliability drops below approximately 70%. Other studies have found that automation with a reliability level below 70% can still be of use to persons with access to the raw information sources, which can be combined with the automation output to improve performance.
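The trade-off described above can be sketched with a toy simulation. This model is not from the cited studies; the detection model, the reliability values, and the manual baseline rate used here are illustrative assumptions only. It contrasts an operator who simply follows the aid (detecting malfunctions only at the aid's reliability rate) with one who also cross-checks the raw information, catching a malfunction if either source flags it.

```python
import random

def detection_rate(reliability, trials=100_000, seed=0):
    """Toy model: an operator who follows the automated aid detects a
    malfunction exactly when the aid flags it (rate = aid reliability)."""
    rng = random.Random(seed)
    hits = sum(rng.random() < reliability for _ in range(trials))
    return hits / trials

def combined_rate(reliability, manual_rate, trials=100_000, seed=0):
    """Toy model: an operator who also consults raw information catches a
    malfunction if either the aid or the independent manual check finds it."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        aid_flags = rng.random() < reliability
        manual_finds = rng.random() < manual_rate
        hits += aid_flags or manual_finds
    return hits / trials

# Under these assumptions, a 60%-reliable aid alone underperforms a
# hypothetical 70% manual baseline, but combining the aid with an
# independent manual cross-check exceeds that baseline.
```

The point of the sketch is only qualitative: below some reliability threshold, blind reliance on the aid is worse than manual performance, while treating the aid's output as one input among several can still yield a net gain.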
Sectors
Aviation
At first, discussion of automation bias focused largely on aviation. Automated aids have played an increasing role in cockpits, taking a growing role in the control of such flight tasks as determining the most fuel-efficient routes, navigating, and detecting and diagnosing system malfunctions. The use of these aids, however, can lead to less attentive and less vigilant information seeking and processing on the part of human beings. In some cases, human beings may place more confidence in the misinformation provided by flight computers than in their own skills.
An important factor in aviation-related automation bias is the degree to which pilots perceive themselves as responsible for the tasks being carried out by automated aids. One study of pilots showed that the presence of a second crewmember in the cockpit did not affect automation bias. A 1994 study compared the impact of low and high levels of automation (LOA) on pilot performance, and concluded that pilots working with a high LOA spent less time reflecting independently on flight decisions.
In another study, all of the pilots given false automated alerts that instructed them to shut off an engine did so, even though those same pilots insisted in an interview that they would not respond to such an alert by shutting down an engine, and would instead have reduced the power to idle. One 1998 study found that pilots with approximately 440 hours of flight experience detected more automation failures than did nonpilots, although both groups showed complacency effects. A 2001 study of pilots using a cockpit automation system, the Engine-indicating and crew-alerting system (EICAS), showed evidence of complacency. The pilots detected fewer engine malfunctions when using the system than when performing the task manually.
In a 2005 study, experienced air-traffic controllers used high-fidelity simulation of an ATC (Free Flight) scenario that involved the detection of conflicts among "self-separating" aircraft. They had access to an automated device that identified potential conflicts several minutes ahead of time. When the device failed near the end of the simulation process, considerably fewer controllers detected the conflict than when the situation was handled manually. Other studies have produced similar findings.
Two studies of automation bias in aviation discovered a higher rate of commission errors than omission errors, while another aviation study found 55% omission rates and 0% commission rates. Automation-related omission errors are especially common during the cruise phase of flight. When a China Airlines flight lost power in one engine, the autopilot attempted to correct for the problem by lowering the left wing, an action that hid the problem from the crew. When the autopilot was disengaged, the airplane rolled to the right and descended steeply, causing extensive damage. The 1983 shooting down of a Korean Airlines 747 over Soviet airspace occurred because the Korean crew "relied on automation that had been inappropriately set up, and they never checked their progress manually."
Health care
Clinical decision support systems (CDSS) are designed to aid clinical decision-making. They have the potential to effect a great improvement in this regard and to result in improved patient outcomes. Yet while CDSS, when used properly, bring about an overall improvement in performance, they also cause errors that may go unrecognized owing to automation bias. One danger is that incorrect advice from these systems may cause users to change a correct decision they have made on their own. Given the highly serious nature of some of the potential consequences of automation bias in the health-care field, it is especially important to be aware of this problem when it occurs in clinical settings.
Sometimes automation bias in clinical settings is a major problem that renders CDSS, on balance, counterproductive; sometimes it is a minor problem, with the benefits outweighing the damage done. One study found more automation bias among older users, but it was noted that this could be a result not of age but of experience. Studies suggest, indeed, that familiarity with CDSS often leads to desensitization and habituation effects. Although automation bias occurs more often among persons who are inexperienced in a given task, inexperienced users also exhibit the greatest performance improvement when they use CDSS. In one study, the use of CDSS improved clinicians' answers by 21 percentage points, from 29% to 50%, with 7% of correct non-CDSS answers being changed incorrectly.
A 2005 study found that when primary-care physicians used electronic sources such as PubMed, Medline, and Google, there was a "small to medium" increase in correct answers, while in an equally small percentage of instances the physicians were misled by their use of those sources, and changed correct to incorrect answers.
Studies in 2004 and 2008 that involved the effect of automated aids on diagnosis of breast cancer found clear evidence of automation bias involving omission errors. Cancers diagnosed in 46% of cases without automated aids were discovered in only 21% of cases with automated aids that failed to identify the cancer.
Military
Automation bias can be a crucial factor in the use of intelligent decision support systems for military command-and-control operations. One 2004 study found that automation bias effects have contributed to a number of fatal military decisions, including friendly-fire killings during the Iraq War. Researchers have sought to determine the proper LOA for decision support systems in this field.
Correcting bias
Automation bias can be mitigated through the design of automated systems, for example by reducing the prominence of the display, decreasing the detail or complexity of the information displayed, or couching automated assistance as supportive information rather than as directives or commands. Training on an automated system that includes the introduction of deliberate errors has been shown to be significantly more effective at reducing automation bias than simply informing users that errors can occur. However, excessive checking and questioning of automated assistance can increase time pressure and task complexity, thus reducing the benefits of automated assistance, so the design of an automated decision support system should balance positive and negative effects rather than attempt to eliminate negative effects.
References
- Cummings, Mary (2004). "Automation Bias in Intelligent Time Critical Decision Support Systems" (PDF). AIAA 1st Intelligent Systems Technical Conference. ISBN 978-1-62410-080-2. doi:10.2514/6.2004-6313.
- Skitka, Linda. "Automation". University of Illinois. University of Illinois at Chicago. Retrieved 16 January 2017.
- Mosier, Kathleen; Skitka, Linda; Heers, Susan; Burdick, Mark (1998). "Automation Bias: Decision Making and Performance in High-Tech Cockpits". The International Journal of Aviation Psychology. 8 (1): 47–63. PMID 11540946. doi:10.1207/s15327108ijap0801_3.
- Mosier, Kathleen; Dunbar, Melisa; McDonnell, Lori; Skitka, Linda; Burdick, Mark; Rosenblatt, Bonnie (October 1, 1998). "Automation Bias and Errors: Are Teams Better than Individuals?". Proceedings of the Human Factors and Ergonomics Society Annual Meeting: 201–205. Retrieved 17 January 2017.
- Parasuraman, Raja; Manzey, Dietrich (June 2010). "Complacency and Bias in Human Use of Automation: An Attentional Integration". The Journal of the Human Factors and Ergonomics Society. 52 (3): 381–410. doi:10.1177/0018720810376055. Retrieved 17 January 2017.
- Goddard, K.; Roudsari, A.; Wyatt, J. C. (2012). "Automation bias: a systematic review of frequency, effect mediators, and mitigators". Journal of the American Medical Informatics Association. 19 (1): 121–127. PMID 21685142. doi:10.1136/amiajnl-2011-000089.
- Alberdi, Eugenio; Strigini, Lorenzo; Povyakalo, Andrey A.; Ayton, Peter (2009). "Why Are People's Decisions Sometimes Worse with Computer Support?". Computer Safety, Reliability, and Security. Lecture Notes in Computer Science. 5775. Springer Berlin Heidelberg. pp. 18–31. ISBN 978-3-642-04467-0. doi:10.1007/978-3-642-04468-7_3.
- Goddard, Kate; Roudsari, Abdul; Wyatt, Jeremy C. (2014). "Automation bias: Empirical results assessing influencing factors". International Journal of Medical Informatics. 83 (5): 368–375. PMID 24581700. doi:10.1016/j.ijmedinf.2014.01.001.
- Lyell, David; Coiera, Enrico (August 2016). "Automation bias and verification complexity: a systematic review". Journal of the American Medical Informatics Association. 24 (2): 424–431. doi:10.1093/jamia/ocw105.
- Mosier, Kathleen; Skitka, Linda; Dunbar, Melisa; McDonnell, Lori (November 13, 2009). "Aircrews and Automation Bias: The Advantages of Teamwork?". The International Journal of Aviation Psychology. 11 (1): 1–14. Retrieved 17 January 2017.
- Bahner, J. Elin; Hüper, Anke-Dorothea; Manzey, Dietrich (2008). "Misuse of automated decision aids: Complacency, automation bias and the impact of training experience". International Journal of Human-Computer Studies. 66 (9): 688–699. doi:10.1016/j.ijhcs.2008.06.001.
- Goddard, K; Roudsari, A; Wyatt, J. C. (2011). "Automation bias - a hidden issue for clinical decision support system use". International Perspectives in Health Informatics. Studies in Health Technology and Informatics. 164. pp. 17–22. ISBN 978-1-60750-708-6. PMID 21335682.