From Wikipedia, the free encyclopedia

Neuroergonomics is the application of neuroscience to ergonomics. Traditional ergonomic studies rely predominantly on psychological explanations to address human factors issues such as work performance, operational safety, and workplace-related risks (e.g., repetitive stress injuries). Neuroergonomics, in contrast, addresses the biological substrates of ergonomic concerns, with an emphasis on the role of the human nervous system.


Neuroergonomics has two major aims: to use existing/emerging knowledge of human performance and brain function to design systems for safer and more efficient operation, and to advance this understanding of the relationship between brain function and performance in real-world tasks.

To meet these goals, neuroergonomics combines two disciplines—neuroscience, the study of brain function, and human factors, the study of how to match technology with the capabilities and limitations of people so they can work effectively and safely. The goal of merging these two fields is to use discoveries about human brain and physiological functioning both to inform the design of technologies in the workplace and home, and to provide new training methods that enhance performance, expand capabilities, and optimize the fit between people and technology.

Research in the area of neuroergonomics has blossomed in recent years with the emergence of noninvasive techniques for monitoring human brain function that can be used to study various aspects of human behavior in relation to technology and work, including mental workload, visual attention, working memory, motor control, human-automation interaction, and adaptive automation. Consequently, this interdisciplinary field is concerned with investigations of the neural bases of human perception, cognition, and performance in relation to systems and technologies in the real world—for example, in the use of computers and various other machines at home or in the workplace, and in operating vehicles such as aircraft, cars, trains, and ships.


Functional neuroimaging

A central goal of neuroergonomics is to study how brain function relates to task and work performance. To do this, noninvasive neuroimaging methods are typically used to record markers of brain activity: either directly, through electrical and magnetic measures such as electroencephalography (EEG) and magnetoencephalography (MEG), or indirectly, through metabolic and neurovascular measures such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), functional near-infrared spectroscopy (fNIRS), and transcranial Doppler (TCD) sonography. Neuroergonomic studies are typically more application-oriented than basic cognitive neuroscience studies and often require a balance between controlled environments and naturalistic settings. Studies using larger, room-scale neuroimaging setups such as PET, MEG, and fMRI offer increased spatial and temporal resolution at the expense of greater restrictions on participants' actions. With more mobile techniques such as fNIRS and EEG, research may be conducted in more realistic settings, including participation in the actual work being investigated (e.g., driving). These techniques have the advantage of being more affordable and versatile, but they record from fewer areas and cannot image neural activity in deeper brain regions. Together, controlled laboratory experiments and the translation of findings to realistic contexts represent the spectrum of neuroimaging in neuroergonomics.


Neurostimulation methods may be used alone, or in conjunction with neuroimaging approaches, to probe the involvement of cortical regions in task performance. Techniques such as transcranial magnetic stimulation (TMS) and transcranial direct-current stimulation (tDCS) can be used to temporarily alter the excitability of cortical regions. Stimulating a cortical region (particularly with TMS) can disrupt or enhance that region's function, permitting researchers to test specific hypotheses about human performance.

Some studies have shown the promise of using TMS and tDCS to improve cognitive skills during tasks. While initially used to treat neurological disorders such as Parkinson's disease and dementia, the scope of TMS is expanding. In TMS, electricity is passed through a magnetic coil positioned near the person's scalp. Results from one study suggest that noninvasive brain stimulation can extend sustained vigilance performance by about 20 minutes.[1]


Psychophysiological measures are physiological measures (heart rate, skin conductance, blood pressure, etc.) that change as part of psychological processes. Although these are not direct neural measures, neuroergonomics also promotes the use of physiological correlates as dependent measures when they can serve as an index of neural activities such as attentional, motor, or affective processes. These measures can be used in conjunction with neuroimaging measures, or as a substitute when the acquisition of neuroimaging measures is too costly, dangerous, or otherwise impractical. Psychophysiology is a distinct field from neuroergonomics, but the principles and objectives of the two can be considered complementary.


Mental workload assessment

Using fMRI, mental workload can be quantified as an increase in cerebral blood flow in regions of the prefrontal cortex (PFC); many fMRI studies show increased PFC activation during working memory tasks. Equally important to measuring mental workload is evaluating operator vigilance, or attentiveness. Using TCD to monitor blood-flow velocity in intracranial arteries, it was shown that a decrease in blood flow was associated with a decrease in vigilance and a depletion of cognitive resources.[2]
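The vigilance decrement described above can be illustrated as a trend analysis over a stream of physiological samples. The following sketch is purely illustrative (the function names, threshold, and velocity values are invented, not taken from the cited TCD study): it fits a least-squares slope to evenly spaced blood-flow-velocity samples and flags a sustained decline.

```python
# Illustrative sketch: treat a sustained negative trend in cerebral
# blood-flow velocity (e.g., TCD samples) as a marker of declining vigilance.
# All values and the threshold are hypothetical.

def trend_slope(samples):
    """Ordinary least-squares slope of evenly spaced samples."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den

def vigilance_declining(velocities, threshold=-0.05):
    """Flag a vigilance decrement when velocity falls faster than threshold."""
    return trend_slope(velocities) < threshold

steady = [60.0, 60.1, 59.9, 60.0, 60.2]      # stable flow: alert operator
fatigued = [60.0, 59.0, 58.2, 57.1, 56.3]    # declining flow: decrement
```

In practice such a trend estimate would be computed over a sliding window and combined with other indices before any conclusion about operator state is drawn.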

Adaptive automation

Adaptive automation, a neuroergonomic concept, refers to a human–machine system that uses real-time assessment of the operator's workload to make changes that enhance performance. For adaptive automation to work, the system must employ an accurate operator-state classifier for this real-time assessment. Operator-state classifiers such as discriminant analysis and artificial neural networks show accuracies of 70% to 85% in real time. An important part of properly implementing adaptive automation is determining how large a workload must be before intervention is warranted. Implementing neuroergonomic adaptive automation would require the development of nonintrusive sensors, as well as techniques to track eye movement. Current research into assessing a person's mental state includes using facial electromyography to detect confusion.[3]
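A minimal sketch of the operator-state classification idea follows. It uses a toy nearest-centroid classifier, a simplified relative of the discriminant-analysis classifiers mentioned above; the band-power feature values are invented for illustration and do not come from real recordings.

```python
# Toy operator-state classifier: nearest-centroid classification of EEG
# band-power feature vectors into "low" vs. "high" workload states.
# Feature values are illustrative, not real data.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labelled):
    """labelled: dict mapping state name -> list of feature vectors."""
    return {state: centroid(vs) for state, vs in labelled.items()}

def classify(model, features):
    """Assign the state whose centroid is nearest to the feature vector."""
    return min(model, key=lambda s: sq_dist(model[s], features))

# Features: (theta power, alpha power) — increased theta and suppressed
# alpha are commonly associated with higher mental workload.
training = {
    "low":  [(2.0, 6.0), (2.2, 5.8), (1.9, 6.1)],
    "high": [(5.0, 3.0), (5.3, 2.8), (4.8, 3.2)],
}
model = train(training)
```

Real systems would use many more features and validated classifiers, but the structure, namely training on labelled operator states and classifying each new feature window, is the same.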

Experiments show that a human–robot team performs better at controlling air and ground vehicles than either the human or the robot (i.e., the automatic target recognition system) alone. Compared with 100% human control and static automation, participants using adaptive automation showed higher trust and self-confidence, as well as lower perceived workload.[4]

The biggest challenge in adaptive automation is enabling the machine to reason accurately about how to respond to changes in operator state and restore peak performance. The machine must determine to what extent it should make changes, which depends on the complexity of the system and on factors such as how easily the sensed parameter can be quantified, how many parameters in the machine's system can be changed, and how well those parameters can be coordinated.
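One simple form of this decision logic is a threshold rule with hysteresis: automation is raised when estimated workload stays above an upper bound and lowered only after it falls below a lower bound, which prevents rapid toggling when workload hovers near a single threshold. The sketch below is hypothetical; the thresholds and workload values are invented for illustration.

```python
# Sketch of adaptive-automation triggering logic with hysteresis.
# Thresholds and workload estimates are illustrative.

def adapt(level, workload, hi=0.7, lo=0.4):
    """Return the next automation level (0 = manual, 1 = automated)."""
    if workload > hi:
        return 1          # workload too high: hand the task to automation
    if workload < lo:
        return 0          # workload low again: return control to the operator
    return level          # inside the deadband: keep the current level

level = 0
trace = []
for w in [0.3, 0.5, 0.8, 0.6, 0.5, 0.3]:
    level = adapt(level, w)
    trace.append(level)
# trace -> [0, 0, 1, 1, 1, 0]
```

Note how the level stays at 1 for workloads of 0.6 and 0.5: without the deadband between `lo` and `hi`, control would bounce back and forth around a single threshold.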

Brain–computer interfaces

A developing area of research, brain–computer interfaces (BCIs) strive to use brain signals to operate external devices without any motor input from the person. BCIs hold promise for patients with limited motor capabilities, such as those with amyotrophic lateral sclerosis. When the user engages in a specific mental activity, it generates a distinctive pattern of brain electrical activity that is processed and translated into a signal for the external device. BCIs using signals from EEGs and event-related potentials (ERPs) have been used to operate voice synthesizers and move robotic arms. BCI research began in the 1970s at the University of California, Los Angeles, and its current focus is on neuroprosthetic applications. BCIs can be substantially improved by incorporating high-level control, context, the environment, and virtual reality into their design.[5]

Stroke rehabilitation

As of 2011, there has been an effort to apply a rehabilitation robot connected to a non-invasive brain–computer interface to promote brain plasticity and motor learning following a stroke. Half of stroke survivors experience unilateral paralysis or weakness, and approximately 30–60% of them do not regain function. Typical post-stroke treatment involves constraint-induced movement therapy and robotic therapy, which work to restore motor activity by forcing the movement of the weakened limbs. Such active therapy cannot be used by patients who have complete loss of control or paralysis and no residual motor ability to work with.

With a focus on these underserved patients, a BCI was created that used electrical brain signals detected by an EEG to control an upper-limb rehabilitative robot. The user is instructed to imagine the motor activity while the EEG picks up the associated brain signals. The BCI uses a linear transformation algorithm to convert EEG spectral features into commands for the robot. An experiment with 24 subjects compared a non-BCI group, which used sensorimotor rhythms to control the robot, against a BCI group, which used the BCI-robot system. The brain-plasticity analysis showed a decrease in beta-wave activity, associated with a change in movement, in the subjects of the BCI group. The results also showed that the BCI group outperformed the non-BCI group on every measure of motor learning.[6]

Virtual reality

Virtual reality could allow for testing how human operators would work in dangerous environments without actually putting them in harm's way. For example, it would allow the testing of how fatigue or a new technology would affect a driver or a pilot in their specific environment, without the possibility of injury. Being able to evaluate the effects of some new workplace technology in virtual reality, before real life implementation, could save money and lives. Bringing virtual reality technology to the point where it can accurately mimic real life is difficult, but its potential is vast.[7]

Healthcare training

Healthcare training programs have adopted virtual reality simulation (VRS) as a training tool for nursing students. This computer-based, three-dimensional simulation tool allows nursing students to practice various nursing skills repeatedly in a risk-free environment. A nursing program at a major Midwestern state university agreed to use a VRS module for teaching the insertion of an intravenous (IV) catheter and to evaluate the program's effectiveness. The VRS consisted of a computer program and a haptic arm device, which worked together to simulate the feel of vascular access. On the computer screen, the user selects the equipment for the procedure in the correct order, then palpates the veins of the haptic arm and virtually inserts the IV catheter. The program provides immediate feedback by notifying users when they miss a step and must restart the procedure.

The evaluation pointed to the VRS as an "excellent learning tool" for increasing a student's knowledge of the procedure. All eight of the participating nursing faculty agreed, and said they would recommend that students work with the VRS before performing IV catheter insertion on real patients.

This tool allows educators to expose students to an extensive range of real-life patient conditions and nursing experiences. The central advantage of the VRS program is the availability of a variety of case scenarios, which allow students to increase their awareness of differences in patient responses to IV catheter insertion. From the standpoint of the student, the virtual reality simulation helps bridge the gap between nursing theory and practice.[8]

Applications for neurocognitive disabilities

Neuroergonomic assessments have great potential for evaluating psychomotor performance in individuals with a neurocognitive disability or following a stroke or surgery. They would provide a standardized method for measuring change in neurocognitive function during rehabilitation, allowing rehabilitation efforts to be goal-oriented. Such tests could also be applied to measure change following operative procedures such as neurosurgery, carotid endarterectomy, and coronary artery bypass grafting.[9]

Driving safety

One of the main application domains of neuroergonomics is driving safety, especially for older drivers with cognitive impairments. Driving requires the integration of multiple cognitive processes, which can be studied separately if the right kinds of tools are used. The types of tools used to evaluate cognition during driving include driving simulators, instrumented vehicles, and part task simulators.[10]

The Crossmodal Research Laboratory at Oxford is developing a system of warning signals to capture the attention of a distracted driver, in an effort to make driving safer for everyone. The research has found that auditory icons, such as a car horn, are better warning signals than pure tones. In addition, spatial auditory cues redirect the driver's attention better than non-spatialized cues, and cues that integrate multiple senses, such as an audiotactile signal, capture attention better than unisensory cues.[11] Others have evaluated different types of in-vehicle notifications (e.g., auditory icons, speech commands) designed for task management in autonomous trucks for their relevance to separable neural mechanisms; this serves as an effective way to clarify often-conflicting findings drawn from behavioral results alone.[12]


  1. ^ McKinley, R. A., Bridges, N., Walters, C. M., & Nelson, J. (2012). "Modulating the brain at work using noninvasive transcranial stimulation". NeuroImage, 59(1), 129–137.
  2. ^ Parasuraman, R. (2008). "Putting the brain to work: Neuroergonomics past, present, and future". Human Factors, 50(3), 468–474.
  3. ^ Durso, F. T. (2012). "Detecting Confusion Using Facial Electromyography". Human Factors, 54(1), 60–69.
  4. ^ de Visser, E., & Parasuraman, R. (2011). "Adaptive aiding of human-robot teaming: Effects of imperfect automation on performance, trust, and workload". Journal of Cognitive Engineering and Decision Making, 5(2), 209–231.
  5. ^ Allison, B., Leeb, R., Brunner, C., Müller-Putz, G., Bauernfeind, G., Kelly, J., & Neuper, C. (2012). "Toward smarter BCIs: extending BCIs through hybridization and intelligent control". Journal of Neural Engineering, 9(1).
  6. ^ Babalola, K. O. (2011). Brain–computer interfaces for inducing brain plasticity and motor learning: implications for brain-injury rehabilitation (doctoral dissertation). Atlanta, GA: Georgia Institute of Technology.
  7. ^ Parasuraman, R., & Rizzo, M. (2007). Neuroergonomics: The Brain at Work. Oxford; New York: Oxford University Press.
  8. ^ Jenson, C., & Forsyth, D. (2012). "Virtual Reality Simulation: Using Three-dimensional Technology to Teach Nursing Students". Computers, Informatics, Nursing, 30(6), 312–318.
  9. ^ Henry J., M., & David J., M. (n.d.). "Neurocognitive disability, stroke, and surgery: A role for neuroergonomics?". Journal of Psychosomatic Research, 63, 613–615.
  10. ^ Lees, M. N., Cosman, J. D., Lee, J. D., Fricke, N., & Rizzo, M. (2010). "Translating cognitive neuroscience to the driver's operational environment: A neuroergonomic approach". American Journal of Psychology, 123(4), 391–411.
  11. ^ Spence, C. (2012). "Drive safely with neuroergonomics". Psychologist, 25(9), 664–667.
  12. ^ Glatz, C., Krupenia, S. S., Bülthoff, H. H., & Chuang, L. L. (2018, April). "Use the Right Sound for the Right Job: Verbal Commands and Auditory Icons for a Task-Management System Favor Different Information Processes in the Brain". Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 472, 1–10.

Further reading

  • Causse, M., Dehais, F., Péran, P., Sabatini, U., & Pastor, J. (2012). "The effects of emotion on pilot decision-making: A neuroergonomic approach to aviation safety". Transportation Research Part C: Emerging Technologies.
  • Parasuraman, R. (2003). "Neuroergonomics: Research and practice". Theoretical Issues in Ergonomics Science, 4, 5–20.
