1. White TL, Hancock PA. Specifying advantages of multi-modal cueing: Quantifying improvements with augmented tactile information. Applied Ergonomics 2020; 88:103146. [PMID: 32421638] [DOI: 10.1016/j.apergo.2020.103146]
Abstract
This work examines how tactile cues, encoded with azimuth and distance information, compare with visual and speech cues in their effects on performance and mental workload in a target detection task. Two experiments are reported using a simulated environment in which targets were presented at varying azimuth and distance locations. In the first experiment, participants engaged targets both while stationary and while in motion using tactile, visual, or speech cues; a no-cueing control was included. In the second, multi-modal experiment, participants completed the same task using cue pairings. Performance metrics consisted of hits, misses due to non-detection, misses due to inaccurate engagement, false alarms, response time, and navigation errors; subjective ratings of mental workload were also collected. Results demonstrate the superiority of tactile cues as a means of communicating target location information, either as a single modality or when paired with the two other cue types.
Affiliation(s)
- Timothy L White
- Combat Capabilities Development Command Army Research Laboratory, Adelphi, MD, USA.
- P A Hancock
- Department of Psychology and Institute for Simulation and Training, University of Central Florida, Orlando, FL, USA
2. Ko SM, Lee K, Kim D, Ji YG. Vibrotactile perception assessment for a haptic interface on an antigravity suit. Applied Ergonomics 2017; 58:198-207. [PMID: 27633214] [DOI: 10.1016/j.apergo.2016.06.013]
Abstract
Haptic technology is used in various fields to transmit information to the user with or without visual and auditory cues. This study aimed to provide preliminary data for use in developing a haptic interface for an antigravity (anti-G) suit. With the structural characteristics of the anti-G suit in mind, we determined five areas on the body (lower back, outer thighs, inner thighs, outer calves, and inner calves) on which to install ten bar-type eccentric rotating mass (ERM) motors as vibration actuators. To determine the design factors of the haptic anti-G suit, we conducted three experiments to find the absolute threshold, moderate intensity, and subjective assessments of vibrotactile stimuli. Twenty-six fighter pilots participated in the experiments, which were conducted in a fixed-base flight simulator. Based on our results, we recommend 1) absolute thresholds of ∼11.98-15.84 Hz and 102.01-104.06 dB and 2) moderate intensities of 74.36 Hz and 126.98 dB for the lower back and 58.65 Hz and 122.37 dB for either side of the thighs and calves, and we report 3) subjective assessments of the vibrotactile stimuli (displeasure, ease of perception, and level of comfort). These results will be useful for the design of a haptic anti-G suit.
Affiliation(s)
- Sang Min Ko
- Department of Information and Industrial Engineering, Yonsei University, 50 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Republic of Korea
- Kwangil Lee
- Jin Air Co., Ltd., 453 Gonghang-dearo, Gangseo-gu, Seoul, 07570, Republic of Korea
- Daeho Kim
- Republic of Korea Air Force Safety Center, P.O. Box 8, Yeouidaebang-ro 22-gil 77, Dongjak-gu, Seoul, 07056, Republic of Korea
- Yong Gu Ji
- Department of Information and Industrial Engineering, Yonsei University, 50 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Republic of Korea.
3. He J, Choi W, McCarley JS, Chaparro BS, Wang C. Texting while driving using Google Glass™: Promising but not distraction-free. Accident Analysis and Prevention 2015; 81:218-229. [PMID: 26024837] [DOI: 10.1016/j.aap.2015.03.033]
Abstract
Texting while driving is risky but common. This study evaluated how texting with a head-mounted display, Google Glass, affects driving performance. Experienced drivers performed a classic car-following task while using three different interfaces to text: fully manual interaction with a head-down smartphone, vocal interaction with a smartphone, and vocal interaction with Google Glass. Fully manual interaction produced worse driving performance than either of the other interaction methods, leading to more lane excursions, more variable vehicle control, and higher workload. Compared with texting vocally via smartphone, texting with Google Glass produced fewer lane excursions, more braking responses, and lower workload. All forms of texting impaired driving performance relative to undistracted driving. These results imply that using Google Glass for texting impairs driving, but its head-mounted display configuration and speech recognition technology may be safer than texting with a smartphone.
Affiliation(s)
- Jibo He
- Department of Psychology, Wichita State University, Wichita, KS, USA.
- William Choi
- Department of Psychology, Wichita State University, Wichita, KS, USA
- Jason S McCarley
- Department of Psychology, Flinders University, Adelaide, South Australia, Australia
- Chun Wang
- Department of Psychology, University of Minnesota, Minneapolis, MN, USA
4. Towers J, Burgess-Limerick R, Riek S. Concurrent 3-D sonifications enable the head-up monitoring of two interrelated aircraft navigation instruments. Human Factors 2014; 56:1414-1427. [PMID: 25509822] [DOI: 10.1177/0018720814536443]
Abstract
OBJECTIVE: The aim of this study was to enable the head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications.
BACKGROUND: Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research.
METHOD: A flight simulator's head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Both readouts were separately represented by a colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value's deviation from the norm. Each sound pair's position in the listening space indicated the left/right parameter of its instrument's readout. Participants' accuracy in navigating a predetermined flight plan was evaluated while performing a head-up task involving the detection of visual flares in the out-of-cockpit scene.
RESULTS: The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful.
CONCLUSION: A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display.
APPLICATION: Pilots operating aircraft, such as helicopters and unmanned aerial vehicles, may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft, visual tasks.
5. Rauter G, Sigrist R, Koch C, Crivelli F, van Raai M, Riener R, Wolf P. Transfer of complex skill learning from virtual to real rowing. PLoS One 2013; 8:e82145. [PMID: 24376518] [PMCID: PMC3869668] [DOI: 10.1371/journal.pone.0082145]
Abstract
Simulators are commonly used to train complex tasks. In particular, simulators are applied to train dangerous tasks, to save costs, and to investigate the impact of different factors on task performance. In most cases, however, the transfer of simulator training to the real task has not been investigated. Without proof of successful skill transfer, simulators might not be helpful at all, or might even be counterproductive, for learning the real task. In this paper, the transfer of complex technical skills trained on a scull rowing simulator to sculling on water was investigated. We assumed that if a simulator provides high-fidelity rendering of the interactions with the environment, even without augmented feedback, training on such a realistic simulator would allow skill gains similar to those from training in the real environment, and that these learned skills would transfer to the real environment. Two groups of four recreational rowers participated: one group trained on water, the other on the simulator. Within two weeks, both groups performed four training sessions with the same licensed rowing trainer. The development in performance was assessed by quantitative biomechanical performance measures and by a qualitative video evaluation by an independent, blinded trainer. In general, both groups improved their performance on water. The biomechanical measures used seem to allow only limited insight into the rowers' development, whereas the independent trainer could also rate the rowers' overall impression. The simulator's quality and naturalism were confirmed by the participants in a questionnaire. In conclusion, realistic simulator training fostered skill gains to a similar extent as training in the real environment and enabled skill transfer to the real environment. In combination with augmented feedback, simulator training may foster motor learning to an even greater extent, which is the subject of future work.
Affiliation(s)
- Georg Rauter
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Roland Sigrist
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Claudio Koch
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Francesco Crivelli
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Mark van Raai
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Robert Riener
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
- Peter Wolf
- Sensory-Motor Systems (SMS) Lab, Institute of Robotics and Intelligent Systems (IRIS), ETH Zurich, Zurich, Switzerland
- Medical Faculty, University of Zurich, Zurich, Switzerland
6. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychonomic Bulletin & Review 2013; 20:21-53. [PMID: 23132605] [DOI: 10.3758/s13423-012-0333-8]
Abstract
It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, how to provide augmented feedback most effectively remains controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible to investigate more complex, realistic motor tasks as well, and to implement not only visual but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities, and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.
7. Wiggins MW. The role of cue utilisation and adaptive interface design in the management of skilled performance in operations control. Theoretical Issues in Ergonomics Science 2012. [DOI: 10.1080/1463922x.2012.724725]
8. Ho C, Reed N, Spence C. Multisensory in-car warning signals for collision avoidance. Human Factors 2007; 49:1107-1114. [PMID: 18074709] [DOI: 10.1518/001872007x249965]
Abstract
OBJECTIVE: A driving simulator study was conducted to assess the relative utility of unimodal auditory, unimodal vibrotactile, and combined audiotactile (i.e., multisensory) in-car warning signals to alert and inform drivers of likely front-to-rear-end collision events in a situation modeled on real-world driving.
BACKGROUND: The implementation of nonvisual in-car warning signals may have important safety implications in lessening any visual overload during driving. Multisensory integration can provide synergistic facilitation effects.
METHOD: The participants drove along a rural road in a car-following scenario in either the presence or absence of a radio program in the background. The brake light signals of the lead vehicle were also unpredictably either enabled or disabled on a trial-by-trial basis.
RESULTS: The results showed that the participants initiated their braking responses significantly more rapidly following the presentation of audiotactile warning signals than following the presentation of either unimodal auditory or unimodal vibrotactile warning signals.
CONCLUSION: Multisensory warning signals offer a particularly effective means of capturing driver attention in demanding situations such as driving.
APPLICATION: The potential value of such multisensory in-car warning signals is explained with reference to recent cognitive neuroscience research.
Affiliation(s)
- Cristy Ho
- Department of Experimental Psychology, University of Oxford, United Kingdom.
9. Gunn DV, Warm JS, Nelson WT, Bolia RS, Schumsky DA, Corcoran KJ. Target acquisition with UAVs: vigilance displays and advanced cuing interfaces. Human Factors 2005; 47:488-497. [PMID: 16435691] [DOI: 10.1518/001872005774859971]
Abstract
Vigilance and threat detection are critical human factors considerations in the control of unmanned aerial vehicles (UAVs). Utilizing a vigilance task in which threat detections (critical signals) led observers to perform a subsequent manual target acquisition task, this study provides information that might have important implications for both of these considerations in the design of future UAV systems. A sensory display format resulted in more threat detections, fewer false alarms, and faster target acquisition times and imposed a lighter workload than did a cognitive display format. Additionally, advanced visual, spatial-audio, and haptic cuing interfaces enhanced acquisition performance over no cuing in the target acquisition phase of the task, and they did so to a similar degree. Thus, in terms of potential applications, this research suggests that a sensory format may be the best display format for threat detection by future UAV operators, that advanced cuing interfaces may prove useful in future UAV systems, and that these interfaces are functionally interchangeable.